

Jenkins World 2016 Session Videos

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. The videos of the sessions from Jenkins World 2016 are up! I’ve updated the wrap-up posts with links to each of the sessions mentioned: Jenkins Pipeline, Scaling Jenkins, Ask the Experts & Demos. You can also find video from all the sessions here. Enjoy!...

Controlling the Flow with Stage, Lock, and Milestone

This is a guest post by Patrick Wolf, Director of Product Management at CloudBees. Recently the Pipeline team began making several changes to improve the stage step and increase control of concurrent builds in Pipeline. Until now the stage step has been the catch-all for functionality related to the flow of builds through the Pipeline: grouping build steps into visualized stages, limiting concurrent builds, and discarding stale builds. In order to improve upon each of these areas independently we decided to break this functionality into discrete steps rather than push more and more features into an already packed stage step. stage - the stage...

Selenium 3.0: Out Now!

Selenium - Thu, 10/13/2016 - 20:39

We are very pleased to announce the release of Selenium 3.0. If you’ve been waiting for a stable release since 2.53.1, now’s your chance to update. And if you do, here is what you’ll find:

As we’ve said before, for users of the WebDriver APIs this is a drop-in replacement. You’ll find that modern browsers, such as Chrome and Edge, will continue to work just as before, and we’ve taken the opportunity to fix some bugs and improve stability. Selenium Grid users may require updates to their configuration as the JSON config file format has been updated, as have some of the command-line parameter options, but the upgrade should also be smooth.

The major change in Selenium 3.0 is that we’re removing the original Selenium Core implementation and replacing it with one backed by WebDriver. This will affect all users of the Selenium RC APIs. For more information, please see the previous post.

A lot has changed in the 5 years between versions 2 and 3. When we shipped Selenium 2, the Selenium project was responsible for providing the driver for each browser. Now, we are happy to say that all the major browser vendors ship their own implementations (Apple, Google, Microsoft, and Mozilla). Because the browser vendors know their browsers better than anyone, their WebDriver implementations can be tightly coupled to the browser, leading to a better testing experience for you.

The other notable change has been that there is now a W3C specification for browser automation, based on the Open Source WebDriver. This has yet to reach “recommendation” status, but the people working on it (including members of the Selenium project!) are now focusing on finishing the text and writing the implementations.

Mozilla has been a front-runner in implementing the W3C WebDriver protocol. On the plus side, this has exposed problems with the spec as it has evolved, but it also means that Firefox support is hard to track as their engineering efforts have been forward looking, rather than on supporting the current wire protocol used by Selenium WebDriver. For now, the best advice we can offer is for you to try the latest release of geckodriver and Selenium together.

These are exciting times for browser automation! Selenium 3.0 is a major release and we’re looking forward to improving things further, as well as tracking the ongoing work of the W3C spec. Our goal is to keep the changes your tests need to deal with to an absolute minimum, to continue preserving the hard work that’s gone into writing your existing tests. 

As a personal note, I’d like to say thank you to each of the many people who have worked so hard to make Selenium 3 possible. That’s not just the developers and contributors to the Open Source project (past and present), but also the engineers from Google, Microsoft, Mozilla, and Apple, and everyone involved with the W3C spec. I’d also like to say thank you to everyone who’s taken the time to report bugs, and to our users and our community. The project is great fun to work on and you’re the reason for that. A final thank you is due to the Software Freedom Conservancy, who have provided invaluable help with the logistics of running a large OSS project.

Happy hacking, everyone! May your tests run fast and true!


The Tweets You Missed in September

Sonar - Wed, 10/05/2016 - 10:21

Here are the tweets you likely missed last month!

No barriers on! Sign up & start analyzing your OSS projects today!

— SonarQube (@SonarQube) September 1, 2016

SonarLint for @VisualStudio 2.7 Released: with 30 new rules targeting

— SonarLint (@SonarLint) September 23, 2016

SonarQube #JavaScript plugin 2.16 Released :

— SonarQube (@SonarQube) September 9, 2016

SonarQube ABAP 3.3 Released: Five new rules @SAPdevs #abap @SAP

— SonarQube (@SonarQube) September 7, 2016

SonarQube Scanner for Gradle 2.1 natively supports Android projects, and brings other improvements.

— SonarQube (@SonarQube) September 26, 2016


Selenium 3 is Coming

Selenium - Tue, 10/04/2016 - 17:59

Selenium 3 is coming! As I write this, we think that “beta 4” will be the last beta before the official 3.0 release. I’m here to tell you about what’s changed, and what impact this will have on your testing.

  • WebDriver users will just find bug fixes and a drop-in replacement for 2.x.
  • Selenium Grid users will also find bug fixes and a simple update.
  • The WebDriver APIs are now the only APIs actively supported by the Selenium project.
  • The Selenium RC APIs have been moved to a “legacy” package.
  • The original code powering Selenium RC has been replaced with something backed by WebDriver, which is also contained in the “legacy” package.
  • By a quirk of timing, Mozilla have made changes to Firefox that mean that from Firefox 48 you must use their geckodriver to use that browser, regardless of whether you’re using Selenium 2 or 3.
In more depth:

When we released Selenium 2.0 in 2011, we introduced the new WebDriver APIs, and encouraged everyone to start moving to them. If you’re using the WebDriver APIs, then Selenium 3.0 is a simple drop-in upgrade. We’ve not changed any of the public WebDriver APIs, and the code is essentially the same as the last 2.x release. If you’re using Selenium Grid, the same applies: in most cases, you can just drop in the new JAR (or update your Maven dependency to 3.0.0), and you’re done.
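To make the “drop-in” claim concrete, here is a minimal sketch of a WebDriver test using the C# bindings; the class name and URL are hypothetical, and it assumes the Selenium.WebDriver package is installed with chromedriver on the PATH. Nothing in it is specific to 3.0, so it compiles and runs unchanged against either the 2.x or the 3.0 client libraries.

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class DropInUpgradeCheck
{
    static void Main()
    {
        // Plain WebDriver API calls; identical under Selenium 2.x and 3.0.
        // Assumes chromedriver is available on the PATH; URL is a placeholder.
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://example.com/");
            IWebElement heading = driver.FindElement(By.TagName("h1"));
            Console.WriteLine(heading.Text);
        }
        finally
        {
            driver.Quit();
        }
    }
}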

If the update to Selenium 3 is such a non-event, why did we call this Selenium 3.0? To answer this question, I first need to provide some history, and tell you a little about how Selenium works under the hood. The very first version of Selenium was “just” a very complicated JavaScript framework, running in the browser and interpreting the table-based tests you may be familiar with if you use Selenium IDE. We call this “Selenium Core”. This JavaScript framework formed the basis of the original implementation of Selenium RC (the oldest set of Selenium APIs, where all the methods and functions were on the “Selenium” interface, and which have been deprecated for some time now). Over time, the needs of modern web testing have grown ever more complicated and sophisticated, and Selenium Core is now less capable of meeting these needs than it was before.

With Selenium 3.0, we are deleting the original Selenium Core implementation. If you use the old RC interfaces, we provide an alternative implementation that’s backed by WebDriver. This is the same “webdriver-backed selenium” that has been available as part of Selenium 2 since its release. Because the underlying technology has changed from Selenium Core to WebDriver, you may find some places where your existing tests using RC run into issues. Our experience with migrating suites is that it’s normally a systemic issue that can be fixed with a minimal engineering effort (that is, the problem is normally isolated to a few places, and these can be rewritten to avoid problems).
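As a rough sketch of what that looks like in practice (shown here with the .NET bindings, assuming the WebDriver-backed RC classes they ship; the URL and locator are hypothetical), existing RC-style calls stay the same while WebDriver does the work underneath:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Selenium;  // the legacy RC API: ISelenium, WebDriverBackedSelenium

class RcOnWebDriver
{
    static void Main()
    {
        // Any WebDriver implementation can sit underneath the RC facade.
        IWebDriver driver = new ChromeDriver();
        // Hypothetical base URL; RC-style calls below are translated
        // into WebDriver commands by WebDriverBackedSelenium.
        ISelenium selenium = new WebDriverBackedSelenium(driver, "https://example.com");
        selenium.Start();
        selenium.Open("/");
        selenium.Click("css=a.login");  // hypothetical locator
        selenium.Stop();                // ends the session
    }
}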

We’re also removing the original Selenium RC APIs from the main downloads. If you’re a Java user, and need them to support existing tests, then you’ll need to add a dependency on “org.seleniumhq.selenium:selenium-leg-rc:3.0.0” (or later!). It’s strongly recommended that you do not do this unless you absolutely need to.

If you run tests exported from the IDE in the table format, the project has made a new test runner available, which can be downloaded from the project’s website. It takes the same arguments as the old runner, and we’ve done our best to ensure the output of the tests remains the same too.

At the same time as the Selenium project is shipping Selenium 3.0, Mozilla are changing the internals of Firefox in a way that makes it more stable and secure, but which also makes the community-provided Firefox Driver no longer work. As such, if you use Firefox for your testing, you’ll need to use geckodriver, which is an executable similar to chromedriver and the Microsoft WebDriver for Edge. You’ll need to start using geckodriver even if you’re using Selenium 2 — the change is in the browser, not Selenium. Please be aware that geckodriver is alpha software, based on the evolving W3C WebDriver standard: everyone’s working flat out to give you the best testing experience they can, but there will undoubtedly be some bumps in the road when it comes to testing with Firefox.
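As a minimal sketch (again using the C# bindings, with a placeholder URL), assuming the geckodriver executable has been downloaded and placed on the PATH:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class GeckoDriverSmokeTest
{
    static void Main()
    {
        // With Firefox 48 and later, this launches the browser through
        // geckodriver, which is installed separately (like chromedriver).
        IWebDriver driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("https://example.com/");
        Console.WriteLine(driver.Title);
        driver.Quit();
    }
}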

This release marks the culmination of a lot of hard work by the Selenium committers and community. I’d like to thank everyone who has been part of this process, and the Selenium users around the world who have done so much to make the project as successful as it is.


Jenkins World 2016, That's a Wrap!

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. This year’s Jenkins World conference was a huge milestone for the Jenkins project - the first global event for the Jenkins community. It brought users and contributors together to exchange ideas on the current state of the project, celebrate accomplishments of the past year, and look ahead at all the exciting enhancements coming down the pipe(line). Contributor Summit To kick off Jenkins World, we had a full-day "Contributor Summit". Jenkins is a distributed project with contributors from all over the globe. Conferences like this are a perfect time to get contributors together face-to-face, to talk through current issues and...

Jenkins World 2016 Wrap-up - Ask the Experts & Demos

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. As I mentioned in my previous post, Jenkins World brought together Jenkins users from organizations of all sizes. It also brought together Jenkins users of all skill levels, from beginners to experts (including JAM organizers, board members, and long-time contributors). A number of those experts also volunteered to staff the Open Source Hub’s "Ask the Experts" desk throughout the conference to answer Jenkins questions. This included, but was not limited to: Paul Allen, R Tyler Croy, James Dumay, Jesse Glick, Eddú Meléndez Gonzales, Jon Hermansen, Owen Mehegan, Oleg Nenashev, Liam Newman, Christopher Orr, Casey Vega, Mark Waite, Dean Yu, and Keith Zantow. I actually chose to spend the majority of...

Jenkins World 2016 Wrap-up - Scaling

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. One of the great features of Jenkins is how far it can scale, not only from a software perspective, but also from an organizational one. From a single Jenkins master with one or two agents to multiple masters with thousands of agents, from a team of only a few people to a whole company with multiple disparate departments and organizations, you’ll find Jenkins in use across that entire space. Like any software or organization, there are common challenges for increasing scale with Jenkins and some common best practices, but there are also some unique solutions. A big...

Jenkins World 2016 Wrap-up - Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. As someone who has managed Jenkins for years and manually managed jobs, I think pipeline is fantastic. I spent much of the conference manning the Ask the Experts desk of the "Open Source Hub" and was glad to find I was not alone in that sentiment. The questions were not "Why should I use Pipeline?", but "How do I do this in Pipeline?" Everyone was interested in showing what they have been able to accomplish, learning about best practices, and seeing what new features were on the horizon. The sessions and demos on Pipeline that I saw were...

NUnit-Summary Becoming an “Official” NUnit Application - Thu, 09/22/2016 - 23:39

NUnit-Summary is an “extra” that I’ve maintained personally for some time. It uses built-in or user-supplied transforms to produce summary reports based on the results of NUnit tests.

I have contributed it to the NUnit project and we’re working on updating it to recognize NUnit 3 test results. The program has never had a 1.0 release, but we expect to produce one soon.

This old post talks about the original nunit-summary program.


An Engine Extension for Running Failed Tests – Part 1: Creating the Extension - Thu, 09/22/2016 - 20:47

In a recent online discussion, one of our users talked about needing to re-run the NUnit console runner, executing just the failed tests from the previous run. This isn’t a feature in NUnit, but it could be useful to some people. So… can we do this by creating an Engine Extension? Let’s give it a try!

The NUnit Test Engine supports extensions. In this case, we’re talking about a Result Writer extension, one that will take the output of a test run from NUnit and create an output file in a particular format. Here, we want the output to be a text file with each line holding the full name of a failed test case. Why that format? Because it’s exactly the format that the console runner already recognizes for the --testlist option. We can use the file that is created as input to a subsequent test run.

Information about how to write an extension can be found on the Writing Engine Extensions page of the NUnit documentation. Details of creating a ResultWriter extension can be found on the Result Writers page.

To get started, I created a new class library project called failed-tests-writer. I made sure that it targeted .NET 2.0, because that allows it to be run under the widest range of runtime versions, and I added a package reference to the NUnit.Engine.Api package. That package will be published on NuGet with the release of NUnit 3.5. Since that’s not out yet, I used the latest pre-release version from the NUnit project MyGet feed by adding that feed to my NuGet package sources.

Next, I created a class to implement the extension. I called it FailedTestsWriter. I added using statements for NUnit.Engine and NUnit.Engine.Extensibility and implemented the IResultWriter interface. I gave my class Extension and ExtensionProperty attributes. Here is what it looked like when I was done.

using System;
using System.IO;
using System.Text;
using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

namespace EngineExtensions
{
    [Extension, ExtensionProperty("Format", "failedtests")]
    public class FailedTestsWriter : IResultWriter
    {
        public void CheckWritability(string outputPath)
        {
            // Opening and immediately closing a writer throws if the path is not writable.
            using (new StreamWriter(outputPath, false, Encoding.UTF8)) { }
        }

        public void WriteResultFile(XmlNode resultNode, string outputPath)
        {
            using (var writer = new StreamWriter(outputPath, false, Encoding.UTF8))
                WriteResultFile(resultNode, writer);
        }

        public void WriteResultFile(XmlNode resultNode, TextWriter writer)
        {
            // Write the full name of each failed test case, one per line.
            foreach (XmlNode node in resultNode.SelectNodes("//test-case[@result='Failed']"))
                writer.WriteLine(node.Attributes["fullname"].Value);
        }
    }
}

The ExtensionAttribute marks the class as an extension. In this case, as in most cases, it’s not necessary to add any arguments: the engine can deduce how the extension should be used from the fact that it implements IResultWriter.

As explained on the Result Writers page, this type of extension requires use of the ExtensionPropertyAttribute so that NUnit knows the name of the format it implements. In this case, I chose to use “failedtests” as the format name.

The CheckWritability method is required to throw an exception if the provided output path is not writable. We do that very simply by trying to create a StreamWriter. The empty using statement is merely an easy way to ensure that the writer is closed.

The main point of the extension is accomplished in the second WriteResultFile method. A foreach statement selects each failing test, which is then written to the output file.

Testing the Extension

That explains how to write the extension. In Part 2, I’ll explain how to deploy it. Meanwhile, I’ll tell you how I tested my extension in its own solution, using nunit3-console.

First, I installed the NUnit.ConsoleRunner package from NuGet; I used version 3.4.1. Next, I created a fake package subdirectory for the extension inside my packages folder, alongside the runner’s own package directory.

Note that the new extension “package” directory name must start with “NUnit.Extension.” in order to trick the console runner and engine into using it.

With this structure in place, I was able to run the console with the --list-extensions option to see that my extension was installed, and I could use a command like

nunit3-console mytests.dll --result:FailedTests.lst;format=failedtests

to actually produce the required output.
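The resulting FailedTests.lst holds one fully qualified test name per line (for example, a hypothetical MyTests.LoginTests.InvalidPassword), which is exactly what the --testlist option expects. So a follow-up run limited to the previous failures can look something like

nunit3-console mytests.dll --testlist:FailedTests.lst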


New Website

Watir - Web Application Testing in Ruby - Thu, 09/22/2016 - 16:43


Recently, we updated the Watir website to be hosted on GitHub Pages. The existing sites will no longer be maintained, but most if not all of the information currently on them has been transferred over to the new site. The old addresses now redirect to the new site. Check it out!

If you think something is missing or have any comments or feedback, make an issue on the site’s GitHub page or join us on our Slack channel.


Back to Blogging! - Thu, 09/22/2016 - 02:50

My blog has been offline for a long time, as you can see. The last prior post was in 2009!

Recently, I found a backup copy of the old blog and was able to re-establish it. Watch for some new posts in the near future.


Jenkins World 2016 Wrap-up - Introduction

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. That’s a Wrap! Any way you look at it, last week’s Jenkins World Conference 2016 was a huge success. In 2011, a few hundred users gathered in San Francisco for the first "Jenkins User Conference". Over successive years, this grew into several yearly regional Jenkins user conferences. This year, over 1,300 people came from around the world to "Jenkins World 2016", the first global event for the Jenkins community. This year’s Jenkins World conference included: Keynote presentation by Jenkins creator, Kohsuke Kawaguchi, announcing a number of great new Jenkins project features, such as "Blue Ocean". More than 50...

Jenkins Online Meetup report. Plugin Development - WebUI

On September 6th we had a Jenkins Online Meetup. This meetup was the second event in the series of Plugin Development meetups. At this meetup we were talking about Jenkins Web UI development. Talks 1) Classic Jenkins UI framework - Daniel Beck In the first part of his talk, Daniel presented how Stapler, the web framework used in Jenkins, works, and how you can add to the set of URLs handled by Jenkins. In the second part he was talking about creating new views using Jelly and Groovy, and how to add new content to existing views. Keywords: Stapler, Jelly, Groovy-defined UIs 2) Developing modern Jenkins UIs with Javascript - Tom Fennelly Feel...

Announcing the Blue Ocean beta, Declarative Pipeline and Pipeline Editor

At Jenkins World on Wednesday 14th of September, the Jenkins project was happy to introduce the beta release of Blue Ocean. Blue Ocean is the new user experience for Jenkins, built from the ground up to take advantage of Jenkins Pipeline. It is an entire rethink of the way that modern developers will use Jenkins. Blue Ocean is available today via the Jenkins Update Center for Jenkins users running 2.7.1 and above. Get the beta Just search for BlueOcean beta in the Update Center, install it, browse to the dashboard, and then click the Try BlueOcean UI button on the dashboard. What’s included? Back in April we open sourced...

Take the 2016 Jenkins Survey!

This is a guest post by Brian Dawson on behalf of CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins. Once again it’s that time of year when CloudBees sponsors the Jenkins Community Survey to assist the community with gathering objective insights into how Jenkins is being used and what users would like to see in the Jenkins project. Your personal information (name, email address and company) will NOT be used by CloudBees for sales or marketing. As an added incentive to take the survey, CloudBees will enter participants into a...

We Are Adjusting Rules Severities

Sonar - Thu, 09/08/2016 - 09:31

With the release of SonarQube 5.6, we introduced the SonarQube Quality Model, which pulls Bugs and Vulnerabilities out into separate categories to give them the prominence they deserve. Now we’re tackling the other half of the job: “sane-itizing” rule severities, because not every bug is Critical.

Before the SonarQube Quality Model, we had no way of bringing attention to bugs and security vulnerabilities except to give them high severity ratings. So all rules with a Blocker or Critical severity were related to reliability (bugs) or security (vulnerabilities), and, tautologically, vice versa. That made sense before the SonarQube Quality Model, but it doesn’t now. Now, just being a Bug is enough to draw the right attention to an issue. Now, having every Bug or Vulnerability at the Blocker or Critical level is actually a distraction.

So we’re fixing it. We’ve reclassified the severity on every single rule specification in the RSpec repository. The changes to existing reliability/bug rules are reflected in version 4.2 of the Java plugin, and future releases of Java and other languages should reflect the rest of the necessary changes. In some cases, the changes are significant (perhaps even startling), so it makes sense to explain the thinking.

The first thing to know is that the reclassifications are done based on a truth table:

  Severity   Impact   Likelihood
  Blocker    high     high
  Critical   high     low
  Major      low      high
  Minor      low      low

For each rule, we first asked ourselves: What’s the worst thing that can reasonably happen as a result of an issue raised by this rule, factoring in Murphy’s Law without predicting Armageddon?

With the worst thing in mind, the rest is easy. For bugs we evaluate impact and likelihood with these questions:
Impact: Will the “worst thing” take down the application (either immediately or eventually), or corrupt stored data? If the answer is “yes”, impact is high.
Likelihood: What is the probability the worst will happen?
For example, a bug rule that flags a guaranteed NullPointerException has high impact (it can take the application down) and high likelihood, so it rates as a Blocker.

For vulnerabilities, the questions are:
Impact: Could the exploitation of the vulnerability result in significant damage to your assets or your users?
Likelihood: What is the probability a hacker will be able to exploit the issue?

And for code smells:
Impact: Could the code smell lead a maintainer to introduce a bug?
Likelihood: What is the probability the worst will happen?

That’s it. Rule severities are now transparent and easy to understand. And as these changes roll out in new versions of the language plugins, severity inflation should quickly become a thing of the past!


Continuous Delivery of Infrastructure with Jenkins

This is a guest post by Jenkins World speaker R Tyler Croy, infrastructure maintainer for the Jenkins project. I don’t think I have ever met a tools, infrastructure, or operations team that did not have a ton of work to do. The Jenkins project’s infrastructure "team" is no different; too much work, not enough time. In lieu of hiring more people, which isn’t always an option, I have found heavy automation and continuous delivery pipelines to be two solutions within reach of the over-worked infrastructure team. As a big believer in the concept of "Infrastructure as Code", I have been, slowly but surely, moving the project’s infrastructure from manual tasks to code,...

Pipeline at Jenkins World 2016

This is a guest post by R. Tyler Croy, who is a long-time contributor to Jenkins and the primary contact for Jenkins project infrastructure. He is also a Jenkins Evangelist at CloudBees, Inc. I have been heavily using Jenkins Pipeline for just about every Jenkins-related project I have contributed to over the past year. Whether I am building and publishing Docker containers, testing infrastructure code or publishing this very web site, I have been adding a Jenkinsfile to nearly every Git repository I touch. Implementing Pipeline has been rewarding, but has not been without its own challenges. That’s why I’m excited to see lots of different Jenkins Pipeline related content in the agenda at Jenkins...