Feed aggregator

The Honest Manual Writer Heuristic

DevelopSense Blog - Tue, 05/31/2016 - 00:09
Want a quick idea for a burst of activity that will reveal both bugs and opportunities for further exploration? Play “Honest Manual Writer”. Here’s how it works: imagine you’re the world’s most organized, most thorough, and—above all—most honest documentation writer. Your client has assigned you to write a user manual, including both reference and […]
Categories: Blogs

Keep your applications fit with cloud load testing

HP LoadRunner and Performance Center Blog - Mon, 05/30/2016 - 20:55


Summer is coming. Just as you think about getting fit, your applications need to get into shape as well. A cloud load testing solution can help them improve their performance.

 Continue reading to find out some of the advantages of cloud load testing, and how StormRunner Load can help your organization ensure top performance.

Categories: Companies

Static Analysis for C++

Software Testing Magazine - Mon, 05/30/2016 - 17:11
Static analysis tools have the potential to significantly improve programmer productivity as well as the safety, reliability and efficiency of the code developers write. Modern static analysis has moved well beyond the mental model people often have based on “lint”: just finding simple typos. Static analysis can find subtle, complex bugs early, identify opportunities to improve performance, and encourage consistent style and appropriate usage of libraries and APIs. This talk looks at how static analysis tools can be used to meet all these different goals. It presents specific examples from our experience working with sophisticated analysis tools on large, commercial codebases. The talk also presents a specific implementation of a modern static analysis toolkit for C++. This toolkit is being used in a number of different contexts: to provide tool-based enforcement of new coding guidelines and rules, to migrate people to modern C++ coding idioms and to find important security and reliability defects.
Categories: Communities


Testing TV - Mon, 05/30/2016 - 17:05
is the online service that runs Blizzard’s games. As such, it is a large scale distributed system with many interacting parts and dependencies on various services and data. While developing servers, I needed a way to isolate and test functionality that I was working on. This talk covers my experience designing for testability […]
Categories: Blogs

German Testing Day, Frankfurt, Germany, June 13-14, 2016

Software Testing Magazine - Mon, 05/30/2016 - 10:30
The German Testing Day is a conference dedicated to software testing that takes place in Frankfurt am Main. Most of the talks are in German, but there are also some presentations in English. There will be a night session the evening before the main conference day and a tutorial day on the last day. In the agenda of the German Testing Day you can find topics like “Test Design Patterns in Practice – an Experience Report”, “Quality Analysts as the driving force towards continuous deployments”, “Use Model-based Test Techniques to Mistake-Proof Your Agile Process”, “Automated Tests in the Cloud”, “Case studies in Solving Constraints for Agile and Continuous Delivery using Service Virtualization”, “TestOps – More effective and efficient tests in large-scale projects through technical autonomy in test environments” or “State-Driven Testing – State Diagrams, Test Cases, Graph Theory”. Web site: Location for German Testing Day conference: Kap Europa, (Messe Frankfurt GmbH), Osloer Strasse 5, 60327 Frankfurt am Main, Germany
Categories: Communities

Test Automation Day, Rotterdam, Netherlands, June 23 2016

Software Testing Magazine - Mon, 05/30/2016 - 10:00
Test Automation Day is a one-day, multi-track software testing conference organized for software testers and IT professionals in Rotterdam, the Netherlands. It features talks and workshops by international speakers focused on Test Automation Innovation. Don’t miss the opportunity to interact with world class speakers about innovative automated testing methods, technologies, strategies and tools. In the agenda of the Test Automation Day conference you can find topics like “Model-based testing of probabilistic programs”, “Test Automation Smells – Automatically uncovering quality defects”, “Test automation, handcuffs or enforcement?”, “Agile Functional Test Automation”, “What makes a test automation engineer so special?”, “Automation, Now, Then, Where”, “Intelligent Mistakes in Test Automation”, “Automated integration testing with Arquillian”, “Implementing Test Automation, A simple strategy?”, “Effective Risk Analysis in Performance Testing: the Dutch Railways approach” or “The ROI of (Acceptance) Test Driven Development”. Web site: Location for Test Automation Day conference: World Trade Center, Beursplein 37, 3011 AA Rotterdam, Netherlands
Categories: Communities

Flaky Tests at Google and How We Mitigate Them

Google Testing Blog - Sat, 05/28/2016 - 02:34
by John Micco

At Google, we run a very large corpus of tests continuously to validate our code submissions. Everyone from developers to project managers relies on the results of these tests to make decisions about whether the system is ready for deployment or whether code changes are OK to submit. Productivity for developers at Google relies on the ability of the tests to find real problems with the code being changed or developed in a timely and reliable fashion.

Tests are run before submission (pre-submit testing) which gates submission and verifies that changes are acceptable, and again after submission (post-submit testing) to decide whether the project is ready to be released. In both cases, all of the tests for a particular project must report a passing result before submitting code or releasing a project.

Unfortunately, across our entire corpus of tests, we see a continual rate of about 1.5% of all test runs reporting a "flaky" result. We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code. There are many root causes for why tests return flaky results, including concurrency, relying on non-deterministic or undefined behaviors, flaky third party code, infrastructure problems, etc. We have invested a lot of effort in removing flakiness from tests, but overall the insertion rate is about the same as the fix rate, meaning we are stuck with a certain rate of tests that provide value, but occasionally produce a flaky result. Almost 16% of our tests have some level of flakiness associated with them! This is a staggering number; it means that more than 1 in 7 of the tests written by our world-class engineers occasionally fail in a way not caused by changes to the code or tests.
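That definition can be made concrete with a small sketch (illustrative only; the functions below are made up for this example, not Google's tests or tooling):

```python
import random


def run_test():
    """Stand-in for a test with a hidden race: it fails roughly 10% of
    the time even though the code under test never changes.
    (Hypothetical, not one of Google's tests.)"""
    return random.random() > 0.10  # True = pass, False = fail


def is_flaky(test, runs=500):
    """Flaky per the definition above: the same test, on the same code,
    exhibits both a passing and a failing result."""
    outcomes = {test() for _ in range(runs)}
    return outcomes == {True, False}
```

Running `is_flaky(run_test)` will almost always report True, while a deterministic test reports False; the 1.5% figure above is the analogous rate measured across all real test runs.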

When doing post-submit testing, our Continuous Integration (CI) system identifies when a passing test transitions to failing, so that we can investigate the code submission that caused the failure. What we find in practice is that about 84% of the transitions we observe from pass to fail involve a flaky test! This causes extra repetitive work to determine whether a new failure is a flaky result or a legitimate failure. It is quite common to ignore legitimate failures in flaky tests due to the high number of false positives. At the very least, build monitors typically wait for additional CI cycles to re-run the test to determine whether or not it has been broken by a submission, adding to the delay in identifying real problems and increasing the pool of changes that could have contributed.

In addition to the cost of build monitoring, consider that the average project contains 1000 or so individual tests. To release a project, we require that all these tests pass with the latest code changes. If 1.5% of test results are flaky, 15 tests will likely fail, requiring expensive investigation by a build cop or developer. In some cases, developers dismiss a failing result as flaky only to later realize that it was a legitimate failure caused by the code. It is human nature to ignore alarms when there is a history of false signals coming from a system. For example, see this article about airline pilots ignoring an alarm on 737s. The same phenomenon occurs with pre-submit testing. The same 15 or so failing tests block submission and introduce costly delays into the core development process. Ignoring legitimate failures at this stage results in the submission of broken code.

We have several mitigation strategies for flaky tests during pre-submit testing, including the ability to re-run only failing tests, and an option to re-run tests automatically when they fail. We even have a way to denote a test as flaky - causing it to report a failure only if it fails 3 times in a row. This reduces false positives, but encourages developers to ignore flakiness in their own tests unless those tests start failing 3 times in a row, which is hardly a perfect solution.
Imagine a 15 minute integration test marked as flaky that is broken by my code submission. The breakage will not be discovered until 3 executions of the test complete, or 45 minutes, after which it will need to be determined if the test is broken (and needs to be fixed) or if the test just flaked three times in a row.
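The "3 times in a row" policy can be sketched as follows (a toy illustration of the idea, not Google's actual infrastructure):

```python
def run_with_flaky_policy(test, max_attempts=3):
    """Sketch of the flaky-marking policy described above: a test is
    reported as failing only if it fails max_attempts times in a row."""
    for _ in range(max_attempts):
        if test():
            return "PASS"   # a single pass hides any earlier failures
    return "FAIL"           # failed every attempt in a row
```

Note the trade-off described above: a test that fails twice and then passes is reported as PASS, so the two failures are silently absorbed, which is exactly why this encourages ignoring flakiness.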

Other mitigation strategies include:
  • A tool that monitors the flakiness of tests and if the flakiness is too high, it automatically quarantines the test. Quarantining removes the test from the critical path and files a bug for developers to reduce the flakiness. This prevents it from becoming a problem for developers, but could easily mask a real race condition or some other bug in the code being tested.
  • Another tool detects changes in the flakiness level of tests and works to identify the change that caused the test to change the level of flakiness.

In summary, test flakiness is an important problem, and Google is continuing to invest in detecting, mitigating, tracking, and fixing test flakiness throughout our code base. For example:
  • We have a new team dedicated to providing accurate and timely information about test flakiness to help developers and build monitors so that they know whether they are being harmed by test flakiness.
  • As we analyze the data from flaky test executions, we are seeing promising correlations with features that should enable us to identify a flaky result accurately without re-running the test.

By continually advancing the state of the art for teams at Google, we aim to remove the friction caused by test flakiness from the core developer workflows.

Categories: Blogs

How to gain the best from LoadRunner’s support of HTTP/2

HP LoadRunner and Performance Center Blog - Fri, 05/27/2016 - 18:18


LoadRunner load testing software now supports HTTP/2 (available beginning with version 12.53).
In this blog we will explain what HTTP/2 is and exactly how LoadRunner works with it…

Keep reading to learn more.

Categories: Companies

Using heat maps to obtain actionable application-user insights

Based on studies from Google, Bing, Walmart, Amazon and others, we know that users change their behavior because of even the slightest degradation in performance. Amazon claims they see a 1% drop in conversions for every 100ms of added latency. But what about other factors such as first-time vs revisiting users, users entering your app through Landing Page A vs […]

The post Using heat maps to obtain actionable application-user insights appeared first on about:performance.

Categories: Companies

Recap: Dave Haeffner’s Practical Tips and Tricks for Selenium Test Automation (Webinar)

Sauce Labs - Thu, 05/26/2016 - 14:00

Thanks to everyone who signed up for our recent webinar, “Practical Tips and Tricks for Selenium Test Automation”, featuring Selenium project contributor Dave Haeffner.

In this presentation, Dave reviews the best and most useful tips & tricks from his weekly Selenium tip newsletter, Elemental Selenium. If you have unanswered Selenium questions, or want to learn how to use Selenium like a pro, check out the recording and slide deck as Dave covers topics such as:

  • Headless test execution
  • Testing HTTP status codes
  • Blacklisting third-party content
  • Load testing
  • Broken image checking
  • Testing “forgot password”
  • Working with A/B testing
  • File downloads
  • Additional debugging output
  • Visual testing & cross-browser testing

And more!
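As a taste of what such tips look like in practice, here is a hedged sketch of the "file downloads" check (my own illustration in the spirit of the list above, not Dave's code): since the WebDriver API has no download hook, a test can poll the browser's configured download directory until the finished file appears.

```python
import os
import time


def wait_for_download(directory, suffix=".pdf", timeout=10, poll=0.25):
    """Poll a browser download directory until a finished file with the
    expected suffix appears. Chrome writes in-progress downloads with a
    .crdownload extension, so they do not match the suffix until done."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        done = [name for name in os.listdir(directory)
                if name.endswith(suffix)]
        if done:
            return done[0]
        time.sleep(poll)
    raise TimeoutError(f"no {suffix} file appeared in {directory!r}")
```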

Want to learn more about Selenium? Download Dave’s Selenium Bootcamp for a comprehensive getting started guide. 

Access the recording HERE and view the slides below: 

Practical Tips & Tricks for Selenium Test Automation from Sauce Labs

Categories: Companies

Test automation as an orchard

Dorothy Graham Blog - Thu, 05/26/2016 - 13:46
At StarEast in May 2016, I was kindly invited to give a lightning keynote, which I did on this analogy. Hope you find it interesting and useful!


Automation is SO easy.
Let me rephrase that - automation often seems to be very easy. When you see your first demo, or run your first automated test, it’s like magic - wow, that’s good, wish I could type that fast.
But good automation is very different to that first test.
If you go into the garden and see a lovely juicy fruit hanging on a low branch, and you reach out and pick it, you think, "Wow, that was easy - isn’t it good, lovely and tasty".
But good test automation is more like building an orchard to grow enough fruit to feed a small town.
Where do you start?
First you need to know what kind of fruit you want to grow - apples? oranges? (oranges would not be a good choice for the UK). You need to consider what kind of soil you have, what kind of climate, and also what the market will be - you don’t want to grow fruit that no one wants to buy or eat.
In automation, first you need to know what kind of tests you want to automate, and why. You need to consider the company culture, other tools, what the context is, and what will bring lasting value to your business.
Growing pains?
Then you need to grow your trees. Fortunately automation can grow a lot quicker than trees, but it still takes time - it’s not instant.
While the trees are growing, you need to prune them and prune them hard especially in the first few years. Maybe you don’t allow them to fruit at all for the first 3 years - this way you are building a strong infrastructure for the trees so that they will be stronger and healthier and will produce much more fruit later on. You may also want to train them to grow into the structure that you want from the trees when they are mature.
In automation, you need to prune your tests - don’t just let them grow and grow and get all straggly. You need to make sure that each test has earned its place in your test suite, otherwise get rid of it. This way you will build a strong infrastructure of worthwhile tests that will make your automation stronger and healthier over the years, and it will bring good benefits to your organisation. You need to structure your automation (a good testware architecture) so that it will give lasting benefits.
Feeding, pests and diseases
Over time, you need to fertilise the ground, so that the trees have the nourishment they need to grow to be strong and healthy.
In automation, you need to nourish the people who are working on the automation, so that they will continue to improve and build stronger and healthier automation. They need to keep learning, experimenting, and be encouraged to make mistakes - in order to learn from them.
You need to deal with pests - bugs - that might attack your trees and damage your fruit.
Is this anything to do with automation? Are there bugs in automated scripts? In testing tools? Of course there are, and you need to deal with them - be prepared to look for them and eradicate them.
What about diseases? What if one of your trees gets infected with some kind of blight, or suddenly stops producing good fruit? You may need to chop down that infected tree and burn it, because if you don’t, this blight might spread to your whole orchard.
Does automation get sick? Actually, a lot of automation efforts seem to decay over time - they take more and more effort to maintain. Technical debt builds up, and often the automation dies. If you want your automation to live and produce good results, you might need to take drastic action and re-factor the architecture if it is causing problems. Because if you don’t, your whole automation may die.
Picking and packing
What about picking the fruit? I have seen machines that shake the trees so the fruit can be scooped up - that might be ok if you are making cider or applesauce, but I wouldn’t want fruit picked in that way to be in my fruit bowl on the table. Manual effort is still needed. The machines can help but not do everything (and someone is driving the machines).
Test execution tools don’t do testing, they just run stuff. The tools can help and can very usefully do some things, but there are tests that should not be automated and should be run manually. The tools don’t replace testers, they support them.
We need to pack the fruit so it will survive the journey to market, perhaps building a structure to hold the fruit so it can be transported without damage.
Automation needs to survive too - it needs to survive more than one release of the application, more than one version of the tool, and may need to run on new platforms. The structure of the automation, the testware architecture, is what determines whether or not the automated tests survive these changes well.
Marketing, selling, roles and expectations
It is important to do marketing and selling for our fruit - if no one buys it, we will have a glut of rotting fruit on our hands.
Automation needs to be marketed and sold as well - we need to make sure that our managers and stakeholders are aware of the value that automation brings, so that they want to keep buying it and supporting it over time.
By the way, the people who are good at marketing and selling are probably not the same people who are good at picking or packing or pruning - different roles are needed. Of course the same is true for automation - different roles are needed: tester, automator, automation architect, champion (who sells the benefits to stakeholders and managers).
Finally, it is important to set realistic expectations. If your local supermarket buyers have heard that eating your fruit will enable them to leap tall buildings at a single bound, you will have a very easy sell for the first shipment of fruit, but when they find out that it doesn’t meet those expectations, even if the fruit is very good, it may be seen as worthless.
Setting realistic expectations for automation is critical for long-term success and for gaining long-term support; otherwise if the expectations aren’t met, the automation may be seen as worthless, even if it is actually providing useful benefits.
Summary
So if you are growing your own automation, remember these things:
  • it takes time to do it well
  • prepare the ground
  • choose the right tests to grow
  • be prepared to prune / re-factor
  • deal with pests and diseases (see previous point)
  • make sure you have a good structure so the automation will survive change
  • different roles are needed
  • sell and market the automation and set realistic expectations
  • you can achieve great results

I hope that all of your automation efforts are very fruitful!

Categories: Blogs

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 05/26/2016 - 06:40

This month's Lean Coffee was hosted by Cambridge Consultants. Here are some brief, aggregated comments on topics covered by the group I was in.

What is your biggest problem right now? How are you addressing it?
  • A common answer was managing multi-site test teams (in-house and/or off-shore)
  • Issues: sharing information, context, emergent specialisations in the teams, communication
  • Weinberg says all problems are people problems
  • ... but the core people problem is communication
  • Examples: Chinese whispers, lack of information flow, expertise silos, lack of visual cues (e.g. in IM or email)
  • Exacerbated by time zone and cultural differences; the difficulty of sitting down together, ...
  • Trying to set up communities of practice (e.g. Spotify Guilds) to help communication, iron out issues
  • Team splits tend to be imposed by management
  • But note that most of the problems can exist in a colocated team too

  • Another issue was adoption of Agile
  • Issues: lack of desire to undo silos, too many parallel projects, too little breaking down of tasks, insufficient catering for uncertainty, resources maxed out
  • People often expect Agile approaches to "speed things up" immediately
  • On the way to this Lean Coffee I was listening to Lisa Crispin on Test Talks: "you’re going to slow down for quite a long time, but you’re going to build a platform ... that, in the future, will enable you to go faster"

How do you get developers to be open about bugs?
  • Some developers know about bugs in the codebase but aren't sharing that information. 
  • Example: code reviewer doesn't flag up side-effects of a change in another developer's code
  • Example: developers get bored of working in an area so move on to something else, leaving unfinished functionality
  • Example: requirements are poorly defined and there's no appetite to clarify them so code has ambiguous aims
  • Example: code is built incrementally over time with no common design motivation and becomes shaky
  • Is there a checklist for code review that both sides can see?
  • Does bug triage include a risk assessment?
  • Do we know why the developers aren't motivated to share the information?
  • Talking to developers, asking to be shown code and talked through algorithms can help
  • Watching commits go through; looking at the speed of peer review can suggest places where effort was low

Testers should code; coders should test
  • Discussion was largely about testers in production code
  • Writing production code (even under guidance in non-critical areas) gives insight into the production
  • ... but perhaps it takes testers away from core skills; those where they add value to the team?
  • ... but perhaps testers need to be wary of not simply reinforcing skills/biases we already have?
  • Coders do test! Even static code review is testing
  • Why is coding special? Why shouldn't testers do UX, BA, Marketing, architecting, documentation, ...
  • Testing is doing other people's jobs
  • ... or is it?
  • These kinds of discussion seem to be predicated on the idea that manual testing is devalued
  • Some discussion about whether test code can get worse when developers work on it
  • ... some say that they have never seen that happen
  • ... some say that developers have been seen to deliberately over-complicate such code in order to make it an interesting coding task
  • ... some have seen developers add very poor test data to frameworks 
  • ... but surely the same is true of some testers?
  • We should consider automation as a tool, rather than an all (writing product code) or nothing (manual tester). Use it when it makes sense to, e.g. to generate test data
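The last bullet, automation as a tool for generating test data, might look like this (a minimal sketch; the record fields are invented for illustration):

```python
import random
import string


def user_records(n, seed=0):
    """Generate n varied but repeatable user records for tests.
    Seeding the generator keeps any test failures reproducible."""
    rng = random.Random(seed)
    domains = ["example.com", "example.org"]
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        yield {
            "id": i,
            "email": f"{name}@{rng.choice(domains)}",
            "age": rng.randint(18, 90),
        }
```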

Ways to convince others that testing is adding value
  • Difference between being seen as personally valuable against the test team adding value
  • Overheard: "Testing is necessary waste"
  • Find issues that your stakeholders care about
  • ... these needn't be in the product, they can be e.g. holes in requirements
  • ... but the stakeholders need to see what the impact of proceeding without addressing the issues could be
  • Be humble and efficient and professional and consistent and show respect to your colleagues and the project
  • Make your reporting really solid - what we did (and didn't); what we found; what the value of that work was (and why)
  • ... even when you find no issues

Categories: Blogs

GSoC Project Intro: Improving Job Creation/Configuration

About me
My name is Samat Davletshin and I am from HSE University in Moscow, Russia. I interned at Intel and Yandex, and cofounded a startup project where I personally developed the front-end and back-end of the website. I am excited to participate in GSoC with Jenkins this summer as a chance to make a positive change for thousands of users as well as to learn from great mentors.
Abstract
Although powerful, Jenkins’ new job creation and configuration process may be non-obvious and time-consuming. This can be improved by making the UI more intuitive, concise, and functional. I plan to achieve this by creating a simpler new job creation, configuration...
Categories: Open Source

Introducing Blue Ocean: a new user experience for Jenkins

In recent years developers have become rapidly attracted to tools that are not only functional but are designed to fit into their workflow seamlessly and are a joy to use. This shift represents a higher standard of design and user experience that Jenkins needs to rise to meet. We are excited to share and invite the community to join us on a project we’ve been thinking about over the last few months called Blue Ocean. Blue Ocean is a project that rethinks the user experience of Jenkins, modelling and presenting the process of software delivery by surfacing information that’s important to development teams with as few clicks as...
Categories: Open Source

DB2 Single-row FETCH still wasting big money on Mainframe

Why are we still giving away thousands of dollars to IBM? Is it because we are too lazy to rewrite our code to leverage Multi-row FETCH and Multi-row INSERT vs using Single-row FETCH? Let me show you why it is so important to leverage these very neat features, which were introduced with DB2 for z/OS […]
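The point translates across database stacks: per-row calls pay a fixed overhead each time, while batched calls amortize it. As a rough analogy in Python DB-API terms (illustrative only; the post concerns DB2 for z/OS embedded SQL, where Multi-row FETCH is a native statement feature):

```python
import sqlite3


def fetch_in_batches(cursor, batch_size=1000):
    """Yield every row, pulling them in batches via fetchmany() instead
    of one fetchone() call per row. The batching amortizes per-call
    overhead much as DB2's Multi-row FETCH amortizes per-row API
    crossings; executemany() plays the analogous role for
    Multi-row INSERT."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            return
        yield from rows
```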

The post DB2 Single-row FETCH still wasting big money on Mainframe appeared first on about:performance.

Categories: Companies

SonarLint 2.0 Is Now Available

Sonar - Wed, 05/25/2016 - 15:25

SonarLint is a fairly recent product that we released for the first time a few months ago for Eclipse, IntelliJ and Visual Studio. We have recently released version 2.0, which brings the ability to connect SonarLint to a SonarQube server and was greatly anticipated by the community. I think the addition of this new feature is a good chance to recap SonarLint’s features. But before I do this, let me remind you of SonarLint’s mission: to help developers spot as many coding issues as possible in their IDE, while they code. It has to be instant, integrated into the IDE, and valuable.

Since SonarLint 1.0, you can install the product from the marketplace for all 3 IDEs we currently support: the Eclipse Marketplace, the JetBrains Plugin Repository or the Visual Studio Gallery. Et voilà… You can continue your coding as usual and you will start seeing SonarLint issues reported as you type. If you open a file, it will get decorated immediately with issues.

You also benefit from a nice panel containing a list of issues that have been detected. Each issue comes with a short message and, if that is not enough, you can open a more detailed description of the problem, with code snippets and references to well-known coding standards.

As I am sure you have guessed already, all of this does not require any configuration. And this is actually the reason why version 2.0 was so anticipated: people who have defined their quality profile in SonarQube want to be able to use the same profile in SonarLint. This is the main feature provided by SonarLint 2.0.

In order to have SonarLint use the same quality profile as SonarQube you have to bind your project in your IDE to the remote project in SonarQube. This is done in two steps:

  • Configure a connection to your SonarQube server (URL + credentials)
  • Bind your project with the remote one

Et voilà… again… SonarLint will fetch configuration from the SonarQube server and use it when inspecting code.

That’s it for today!

Categories: Open Source

The Sauce Journey – Courage, Transparency, Trust

Sauce Labs - Wed, 05/25/2016 - 15:00

In my last blog post, I described the first step on our journey from Engineering to DevOps, which was the formation of project-focused SCRUM teams. SCRUM brings many opportunities for improving the development process, but it’s wise to keep in mind the old saying “SCRUM doesn’t fix problems, it points them out.” This means that the very first thing to emerge from SCRUM is transparency, because it requires us to examine how our teams and processes actually function on a day-to-day basis, and through ceremonies like stand ups and retrospectives, we are asked to clarify our goals and purposes to our colleagues, our customers, and ourselves.

The essence of SCRUM ceremonies is communication, and communication leads to transparency. In standups, making a statement about what you plan to do that day, and what is blocking you, provides a simple way to bring transparency about your work to your team. But it also forces you to be introspective, to be clear to yourself about what you are doing, what the blockers and issues are, and what it is that you can accomplish. More than anything else, standups are opportunities for reflective communication that surfaces problems at the same time that it seeks to resolve them and move the entire team closer to their goal. The same can be said of retrospectives, but here the emphasis shifts from internal to external communication. From the internal perspective, retrospectives are useful for documenting issues and how they were met, and then using that information for iterative improvement. But they are much more important for communicating to customers that we understand where our challenges are, and that we have ways to deal with them.

The ultimate outcome of transparency is the development of trust. A recent New York Times article described the efforts of Google to find the formula to create the perfect team, and what they found was that, above all, a sense of psychological safety among team members was critical to the overall team success. Team members had to feel that they could bring up issues or ideas without being steamrolled by other team members, or subject to harsh criticism for admitting to mistakes. SCRUM ceremonies provide the opportunity for communication, but for it to be effective communication, you have to trust that your colleagues will really listen to what you say, and they have to trust that you are being honest and open in saying it. This is why one of our core values at Sauce is “It’s okay to be wrong, but not stay wrong.” This is how we try to foster a sense of psychological safety, and trust, among our team members. At the same time, we need to build trust with our customers, which requires that we be open and honest with them, even when it comes to making painful admissions.

This was all put to the test during the Great Holiday Outage of 2015. In the midst of a highly stressful situation, our team was able to root cause the issues and develop a solution because we encouraged open communication, rather than trying to find someone to whom we could assign blame. Without trust and open communication, teams fall apart under stress, because everyone is ultimately forced to look out for themselves. A history of open communication is also essential to maintaining the trust of customers in these situations. Because of these outages, we were threatened with the loss of several major renewal contracts. What saved them was publishing a retrospective that made it clear to our customers that we understood what happened, but, more importantly, that we had taken steps to make sure it wouldn’t happen again.

Communication, transparency, trust – these are the qualities that SCRUM brings to the development process that help create the environment for successful teams. There are other qualities, like individual empowerment and improved collaboration, that I will write about in future posts, but these are the qualities that are most important in taking the first step in the evolution from Engineering to DevOps. They are the foundations upon which everything else is built, because ultimately they are about discovering who we are, as an organization and as individuals, and using that knowledge to build our internal and external relationships.

Joe Alfaro is VP of Engineering at Sauce Labs. This is the third post in a series dedicated to chronicling our journey to transform Sauce Labs from Engineering to DevOps. Read the first post here.

Categories: Companies

ALM Tools: What Do You REALLY Need?

The Seapine View - Wed, 05/25/2016 - 13:30

Application lifecycle management (ALM) systems built from several low-cost tools can end up costing you more than a single-vendor, end-to-end solution.

Let’s say you want to know if it’ll rain later today, and you decide to get a weather app for your phone. If you’re like most people, you’ll gravitate to the cheapest app. A 99-cent app looks better than one that costs two bucks, right?

But what if you need to know the forecast for the week, and the 99-cent app only tells you today's weather? Now it's worthless, and you have to pony up for the $2 app anyway—which does deliver the extended forecasts you need.

Essentially, you’ve just paid three bucks for that $2 app.

Many companies go through a similar cycle when they buy ALM tools. They find a low-cost tool that meets their immediate needs, and they think they’re good to go. Maybe it has a weak workflow, no triggering capabilities, no traceability, can’t merge defects, etc.—but they don’t need all that right now.

They figure when they do need more functionality, they’ll cross that bridge when they come to it. In the meantime, they saved money today, and that’s what counts.

When Saving Money Costs Money

The problem with this approach is that it can end up costing more money than it saves. What if your growth explodes overnight? Or you win a project that requires more stringent auditing and compliance? Suddenly, that low-cost tool isn’t up to the task, and you have to scramble to find a solution.

At this point, most companies start adding on other tools to fill in the gaps. It still seems cost-effective, because you’re only paying for what you need when you need it.

The downside to this “à la carte” approach is that it creates a patchwork ALM system with multiple points of failure, multiple versions of the same information, and no one to help when the system breaks.

What’s more, your home-built system still doesn’t meet your needs. Oh, it kind of does what you want, but it’s a real pain in the neck to get information from one tool to the other.

The longer you continue down this path, the harder it is for you to find tools that integrate with all the others you’ve already stuck together.

In the end, you wind up paying more for all the disparate tools you had to cobble together than you would have paid for a single, flexible tool that covers your application lifecycle from end to end.

But here’s the kicker. That patchwork system starts to cost you even more in lost productivity, development and testing delays, failed audits, missed opportunities, and other business problems.

Why Choose a Single-Vendor Solution?

Instead of building an ALM system out of low-priced tools from different vendors, consider the advantages of investing in a single-vendor solution:

  • A Single Source of Truth: The biggest advantage of a single-vendor solution is that you always have the most current information in one place. With a tool that manages and links all development artifacts, you know the test case linked to a requirement is always the current test case, the status of a fixed defect is always up to date, and your reports are always accurate.
  • Seamless Integration: Because every part of the solution comes from the same vendor, it’s painless to move information from the requirements management tool, for example, to the test case management tool. Everything just works, with no duct tape required.
  • End-to-end Traceability: With all your information managed and linked within a seamlessly integrated suite of tools, end-to-end traceability becomes a snap. With a few clicks, you can trace from requirements through test cases, test results, resolutions, and source code.
  • Help When You Need It: Unlike a system made up of tools from different vendors, there’s only one support team to call when you have an issue.

Before you patch a gap in your ALM system with another low-cost tool, consider how much it will really cost you in the long run. You’ll almost certainly be better off with a single-vendor solution.

Categories: Companies

Turn Digital Disruption into Digital Value: HPE Performance Center - version 12.53 now available

HP LoadRunner and Performance Center Blog - Wed, 05/25/2016 - 05:10


The newest version of HPE Performance Center is now available! Keep reading to find out how this performance testing software can help you manage the digital disruption.

Categories: Companies

Refactoring a Jenkins plugin for compatibility with Pipeline jobs

This is a guest post by Chris Price. Chris is a software engineer at Puppet, and has been spending some time lately on automating performance testing using the latest Jenkins features. In this blog post, I’m going to attempt to provide some step-by-step notes on how to refactor an existing Jenkins plugin to make it compatible with the new Jenkins Pipeline jobs. Before we get to the fun stuff, though, a little background. How’d I end up here? Recently, I started working on a project to automate some performance tests for my company’s products. We use the awesome Gatling load testing tool for these tests, but we’ve largely been...
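The core of the refactoring the post describes is moving a build step off the old `AbstractBuild`-based API onto Jenkins' Pipeline-compatible extension point. The sketch below shows the general pattern, assuming a hypothetical `ExampleBuilder` (the class name and log message are illustrative, not from the post); `SimpleBuildStep` and the four-argument `perform` signature are the real Jenkins APIs involved. It requires the Jenkins core libraries on the classpath, so it is a compile-time sketch rather than a standalone program.

```java
import java.io.IOException;

import hudson.FilePath;
import hudson.Launcher;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.tasks.Builder;
import jenkins.tasks.SimpleBuildStep;

// Hypothetical builder, refactored for Pipeline compatibility:
// extend Builder as before, but also implement SimpleBuildStep.
public class ExampleBuilder extends Builder implements SimpleBuildStep {

    @Override
    public void perform(Run<?, ?> run, FilePath workspace, Launcher launcher,
                        TaskListener listener) throws InterruptedException, IOException {
        // Key changes from the legacy API:
        //  - Run<?,?> replaces AbstractBuild, since Pipeline runs are not AbstractBuilds
        //  - TaskListener replaces BuildListener
        //  - the workspace arrives as an explicit parameter instead of
        //    being fetched from the build object
        listener.getLogger().println("Running step in " + workspace.getRemote());
    }
}
```

Any code in the old `perform(AbstractBuild, Launcher, BuildListener)` method that reached into `build.getWorkspace()` or other `AbstractBuild`-specific state has to be reworked to use only what these parameters provide, which is usually the bulk of the refactoring effort.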
Categories: Open Source
