CloudBees' Blog - Continuous Integration in the Cloud

New In-Product, Upgrade Notifications with CloudBees Jenkins Platform


We are proud to announce the immediate availability of CloudBees Jenkins Platform 2.32.1, which offers upgrade notifications and many key improvements, such as a bump of the Jenkins core to the 2.32.1 LTS line. You may know that the Beekeeper Upgrade Assistant allows users to review and install upgrades of verified components, tested through the CloudBees Assurance Program (CAP). Until now, this recommended configuration, or envelope, was only updated when new CloudBees Jenkins Platform releases were announced. Starting with this release, CloudBees can update the configuration between releases, so that fixes can be deployed while keeping the Jenkins instance safely in the recommended configuration. Another key improvement is the LTS upgrade on the rolling release. This LTS bump provides the latest and greatest stable Jenkins core, includes 50+ bug fixes since the previous LTS release, brings some performance improvements and, most importantly, upgrades the Jenkins Remoting module from version 2.6 to 3.1. Jenkins Remoting is at the heart of all communications, not only between Jenkins masters and agents but also between CloudBees Jenkins Operations Center and each client master.

To learn more, read the full blog post on the CloudBees Network (CBN). As the hub of all product-related knowledge, CBN is where you will find release details and all future product announcements.


Blog Categories: Jenkins, Developer Zone
Categories: Companies

Now On DevOps Radio: Twitter, On Getting to the Last Mile with DevOps

Wed, 01/11/2017 - 00:21

Wen Gu is the Software Engineering Manager of Engineering Effectiveness at Twitter and keynote speaker at Jenkins World 2016. He brings his experience in software development for tech giants to DevOps Radio’s first episode of 2017.

In this episode, special host Sacha Labourey, CEO of CloudBees, chats with Wen about his moves from DevTools to DevOps, Intuit to Twitter and Hudson to Jenkins, covering the evolution of CI/CD and the tricky definition of DevOps. Wen also describes the DevOps cultural experience at legacy tech companies like HP, Yahoo and Intuit, as well as at younger companies like Twitter. He stresses the importance of unifying DevOps’ tools and best practices to get to the crucial last mile of the software delivery process – deployment.

New to DevOps Radio in 2017? Catch up on last year’s episodes, available on the CloudBees website and on iTunes, and make sure to stay tuned for new episodes.

We’d love to hear your thoughts on the latest episode of DevOps Radio. Given Wen’s place of employment, it seems only appropriate to join the conversation on Twitter, which you can do by including the @CloudBees handle and #DevOpsRadio in your post.


Sacha Labourey, CloudBees, and Wen Gu, Twitter, going the last mile together, on DevOps Radio








Categories: Companies

CloudBees OSS Demo Days

Tue, 01/10/2017 - 21:44

All of us here at CloudBees are committed to making Jenkins the de facto tool for continuous integration (CI) and continuous delivery (CD). In addition to our CloudBees Jenkins Platform and Private SaaS Edition products, we continue to invest heavily in Jenkins through contributions to Jenkins core, Pipeline, Blue Ocean and many vital plugins for Jenkins. Our success is dependent on Jenkins’ success.

This investment in open source software (OSS) is manifest in an engineering team and resources dedicated to contributing bug fixes, security fixes and new features to Jenkins and its ecosystem. All of the work done by this team goes directly towards improving Jenkins and we want to keep everyone up-to-date with the changes we are making. To that end we have begun hosting monthly demonstrations to show off the latest and greatest changes we are working on and answer any questions.

We hosted the first of these “Demo Days” at the end of October and the second at the beginning of December. These videos are publicly available and viewable at any time. If you have questions, you can always ask someone in the #jenkins IRC channel.

Going forward we are going to host the live broadcast for these demonstrations regularly on the second Thursday of each month at 8 AM PST/4 PM GMT. We are also moving to CloudBeesTV as the host and archive for the videos. I have created a new Playlist for “CloudBees OSS Demonstrations” to allow you to quickly find all related videos. 

The first OSS Demo Day on CloudBeesTV will be this Thursday, January 12, 2017.  We hope you’ll join us and ask questions.

Blog Categories: Jenkins
Categories: Companies

New DevOps Radio Episode: Brian Dawson on the Future of DevOps

Tue, 12/20/2016 - 17:45

CloudBees’ Brian Dawson is an expert on all things DevOps. As a CloudBees resident DevOps guru and evangelist, Brian has been thinking about what’s going to happen with DevOps in 2017. What does Brian see when he continuously (CI/CD joke intended) gazes into his crystal ball? There’s only one place to find out: DevOps Radio.

DevOps Radio host Andre Pino sat down with Brian to find out where DevOps is today and where it’s going in the future. Now, the newest episode of DevOps Radio describes what Brian sees for DevOps in the future, for organizations of all sizes. He also talks about how he thinks they can achieve a state of CD and DevOps maturity. Specifically, Brian walks through his latest research, “Assessing DevOps Maturity Using a Quadrant Model,” which is available online. This is a very insightful tool for assessing your own organization’s maturity on the DevOps journey.

If you’re making your DevOps holiday wish list or planning for 2017, then you need to hear this. The latest DevOps Radio episode is available now on the CloudBees website and on iTunes.

Join the conversation about the episode on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post. After you listen, we want to know your thoughts. What did you think of this episode? What would you like to hear on DevOps Radio next?



Blog Categories: Developer Zone, Company News
Categories: Companies

Meet the Bees: Antonio Muñiz

Mon, 12/19/2016 - 21:48

In every Meet the Bees blog post, you’ll learn more about a different CloudBees Bee. Let’s buzz on over to sunny Spain and meet Antonio Muñiz.

Who are you? What is your role at CloudBees?
Hi! My name is Antonio Muñiz, and I work from Spain as a Software Engineer at CloudBees. My daily work is centered on the CloudBees Jenkins Platform, the product we deliver based on Jenkins.

Since I started to work at CloudBees almost two years ago, I haven’t stopped learning from incredibly talented engineers. One of the things I like most about working at CloudBees and with Jenkins is that I get to help a lot of other developers around the world in their daily work.

Jenkins is a really flexible platform, so as a Jenkins developer you have to integrate with a wide variety of tools like source control management, data analysis tools and security tools. This allows you to investigate and learn not only Jenkins but a lot of other software projects.

What do you think the future holds for Jenkins?
As I see it, nowadays every software project needs to be built and delivered automatically, somehow. Jenkins is currently the de facto CI/CD platform, which makes it a special software component: almost any other piece of software needs to be built and delivered (as automatically as possible) through it.

Jenkins allows developers to actually develop, without having to spend time delivering their software. As I like to say: “Developers, develop!”

That said, I think we all agree that “software is ruling the world,” so intrinsically Jenkins will “rule” the world in some sense.

What are some of your best tips and tricks for using Jenkins?

  1. Use Pipeline
  2. Don’t use Pipeline as if you were developing a web application, use it to orchestrate your continuous delivery process
     and put the build logic into your build scripts
  3. Use build agents to distribute the workload, don’t build in Jenkins itself
  4. Install the minimum set of plugins to achieve what you want
  5. Use reproducible builds and don’t store any state information used by the build in Jenkins
  6. Make it simple; one job for one responsibility. Chain jobs from the pipeline if needed
  7. Configure security as needed. Not everyone needs to be an administrator
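Tips 2, 3 and 6 can be sketched as a small Scripted Pipeline that only orchestrates: the build logic stays in the project's own scripts, and single-responsibility jobs are chained with the build step. This is a minimal illustration, and the script and job names here are hypothetical placeholders:

```groovy
// Orchestration-only Pipeline sketch (script and job names are hypothetical).
node('linux') {                  // run on a build agent, not on Jenkins itself
    stage "Build"
    checkout scm
    sh './build.sh'              // build logic lives in the repo's own script

    stage "Deploy"
    // Chain single-responsibility jobs instead of one monolithic job.
    build job: 'deploy-staging'
    build job: 'smoke-tests'
}
```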

What is your best experience, so far, in working at CloudBees?
To be honest, I feel proud and happy to be working at CloudBees every day. As I said before, it’s a place where I don’t stop learning. There are multitudes of things to work on and the people around me are awesome.

But if I have to choose only one, it would be the last company-wide meeting in Mexico. In February 2016, we all met in Zihuatanejo, Mexico. We spent a week there, worked hard on a lot of new things and planned future work. Of course, we also had group activities: Mexican food on the beach and beautiful sunsets over the Pacific Ocean every afternoon. Awesome.

It’s important for me to meet face-to-face with the people I work with remotely the rest of the year, because everything flows better when you really know the person on the other side of the cable!

If you could eat only one meal for the rest of your life, what would it be?
Well, I’m from the south of Spain, very close to Jabugo, so obviously I love cured ham! I’m sure that if a human being could survive eating *only* cured ham and drinking beer, you would have all the nutrients needed for life. And you would not only survive; it would be a really pleasant life, of course :)

What’s your favorite sport?
Formula 1, without any doubt. It is bleeding-edge technology applied to sports. Every little detail in the design makes the difference between first and last across the finish line. I don’t care too much about the drivers. They are probably nice people - nothing against them - but with all that technology available, it must be relatively easy to go that fast!

I had a work opportunity in Ferrari last week, but of course I said: no, I have everything I need here! :)


Blog Categories: Jenkins
Categories: Companies

xUnit and Pipeline

Tue, 12/13/2016 - 16:38

The JUnit plugin is the go-to test result reporter for many Jenkins projects, but it is not the only one available. The xUnit plugin is a viable alternative that supports JUnit and many other test result file formats.


No matter the project, you need to gather and report test results. JUnit is one of the most widely supported formats for recording test results. For scenarios where your tests are stable and your framework can produce JUnit output, the JUnit plugin is ideal for reporting results in Jenkins. It will consume results from a specified file or path, create a report and, if it finds test failures, set the job state to "unstable" or "failed".

Test reporting with JUnit

There are also plenty of scenarios where the JUnit plugin is not enough. If your project has some failing tests that will take some time to fix, or if there are some flaky tests, the JUnit plugin's simplistic view of test failures may be difficult to work with.

No problem, the Jenkins plugin model lets us replace the JUnit plugin functionality with similar functionality from another plugin and Jenkins Pipeline lets us do this in safe stepwise fashion where we can test and debug each of our changes.

In this post, I will show you how to replace the JUnit plugin with the xUnit plugin in Pipeline code to address a few common test reporting scenarios.

Initial Setup

I'm going to use the "JS-Nightwatch.js" sample project from my previous post to demonstrate a couple of common scenarios that the xUnit plugin handles better. I already have the latest JUnit plugin and xUnit plugin installed on my Jenkins server.

I'll be keeping my changes in my fork of the "JS-Nightwatch.js" sample project on GitHub, under the "blog/xunit" branch.

Here's what the Jenkinsfile looked like at the end of that previous post and what the report page looks like after a few runs:

// Jenkinsfile
node {
    stage "Build"
    checkout scm

    // Install dependencies
    sh 'npm install'

    stage "Test"
    // Add sauce credentials
    sauce('f0a6b8ad-ce30-4cba-bf9a-95afbc470a8a') {
        // Start sauce connect
        sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) {

            // List of browser configs we'll be testing against.
            def platform_configs = [
                // ... browser configurations elided ...
            ]

            // Nightwatch.js supports color output, so wrap this step for ansi color
            wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
                // Run selenium tests using Nightwatch.js
                // Ignore error codes. The junit publisher will cover setting build status.
                sh "./node_modules/.bin/nightwatch -e ${platform_configs} || true"
            }

            junit 'reports/**'

            step([$class: 'SauceOnDemandTestPublisher'])
        }
    }
}

JUnit plugin console output

Switching from JUnit to xUnit

I'll start by replacing JUnit with xUnit in my pipeline. I use the Snippet Generator to create the step with the right parameters. The main downside of the xUnit plugin is that, while it is Pipeline compatible, it still uses the more verbose step() syntax and has some very rough edges around that, too. I've filed JENKINS-37611, but in the meantime we'll work with what we have.

// Original JUnit step
junit 'reports/**'

// Equivalent xUnit step - generated (reformatted)
step([$class: 'XUnitBuilder', testTimeMargin: '3000', thresholdMode: 1,
    thresholds: [
        [$class: 'FailedThreshold', failureNewThreshold: '', failureThreshold: '', unstableNewThreshold: '', unstableThreshold: '1'],
        [$class: 'SkippedThreshold', failureNewThreshold: '', failureThreshold: '', unstableNewThreshold: '', unstableThreshold: '']],
    tools: [
        [$class: 'JUnitType', deleteOutputFiles: false, failIfNotNew: false, pattern: 'reports/**', skipNoTestFiles: false, stopProcessingIfError: true]]])

// Equivalent xUnit step - cleaned
step([$class: 'XUnitBuilder',
    thresholds: [[$class: 'FailedThreshold', unstableThreshold: '1']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])

If I replace the junit step in my Jenkinsfile with that last step above, it produces a report and job result identical to the JUnit plugin but using the xUnit plugin. Easy!

node {
    stage "Build"
    // ... snip ...

    stage "Test"
    // Add sauce credentials
    sauce('f0a6b8ad-ce30-4cba-bf9a-95afbc470a8a') {
        // Start sauce connect
        sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) {

            // ... snip ...

            // junit 'reports/**'
            step([$class: 'XUnitBuilder',
                thresholds: [[$class: 'FailedThreshold', unstableThreshold: '1']],
                tools: [[$class: 'JUnitType', pattern: 'reports/**']]])

            // ... snip ...
        }
    }
}

Test reporting with xUnit

xUnit plugin console output

Accept a Baseline

Most projects don't start off with automated tests passing or even running. They start with people hacking and prototyping, and eventually they start to write tests. As new tests are written, having tests checked in, running and failing can be valuable information. With the xUnit plugin, we can accept a baseline of failed cases and drive that number down over time.

I'll start by changing the Jenkinsfile to fail jobs only if the number of failures is greater than an expected baseline, in this case four failures. When I run the job with this change, the reported numbers remain the same, but the job passes.

// Jenkinsfile
// The rest of the Jenkinsfile is unchanged.
// Only the xUnit step() call is modified.
step([$class: 'XUnitBuilder',
    thresholds: [[$class: 'FailedThreshold', failureThreshold: '4']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])

Accept a baseline of failing tests.

Next, I can also check that the plugin reports the job as failed if more failures occur. Since this is sample code, I'll do this by adding another failing test and checking the job reports as failed.

// tests/guineaPig.js
// ... snip ...

    'Guinea Pig Assert Title 0 - D': function(client) { /* ... */ },

    'Guinea Pig Assert Title 0 - E': function(client) {
        client
            .waitForElementVisible('body', 1000)
            //.assert.title('I am a page title - Sauce Labs');
            .assert.title('I am a page title - Sauce Labs - Cause a Failure');
    },

    afterEach: function(client, done) { /* ... */ }

// ... snip ...

All tests pass!

In a real project, we'd make fixes over a number of commits bringing the number of failures down and adjusting our baseline. Since this is a sample, I'll just make all tests pass and set the job failure threshold for failed and skipped cases to zero.

// Jenkinsfile
// The rest of the Jenkinsfile is unchanged.
// Only the xUnit step() call is modified.
step([$class: 'XUnitBuilder',
    thresholds: [
        [$class: 'SkippedThreshold', failureThreshold: '0'],
        [$class: 'FailedThreshold', failureThreshold: '0']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])
// tests/guineaPig.js
// ... snip ...

    'Guinea Pig Assert Title 0 - D': function(client) { /* ... */ },

    'Guinea Pig Assert Title 0 - E': function(client) {
        client
            .waitForElementVisible('body', 1000)
            .assert.title('I am a page title - Sauce Labs');
    },

    afterEach: function(client, done) { /* ... */ }

// ... snip ...
// tests/guineaPig_1.js
// ... snip ...

    'Guinea Pig Assert Title 1 - A': function(client) {
        client
            .waitForElementVisible('body', 1000)
            .assert.title('I am a page title - Sauce Labs');
    },

// ... snip ...

All tests pass!

Allow for Flakiness

We've all known the frustration of having one flaky test that fails once every ten jobs. You want to keep it active so you can work on isolating the source of the problem, but you also don't want to destabilize your CI pipeline or reject commits that are actually okay. You could move the test to a separate job that runs the "flaky" tests, but in my experience that just leads to a job that is always in a failed state and a pile of flaky tests that no one looks at.

With the xUnit plugin, we can keep the flaky test in the main test suite but still allow the job to pass.

I'll start by adding a sample flaky test. After a few runs, we can see that the test fails intermittently and causes the job to fail too.

// New test file: tests/guineaPigFlaky.js
var https = require('https');
var SauceLabs = require("saucelabs");

module.exports = {

    '@tags': ['guineaPig'],

    'Guinea Pig Flaky Assert Title 0': function(client) {
        var expectedTitle = 'I am a page title - Sauce Labs';
        // Fail every fifth minute
        if (Math.floor(Date.now() / (1000 * 60)) % 5 === 0) {
            expectedTitle += " - Cause failure";
        }

        client
            .waitForElementVisible('body', 1000)
            .assert.title(expectedTitle);
    },

    afterEach: function(client, done) {
        setTimeout(function() {
            done();
        }, 1000);
    }
};

The pain of flaky tests failing the build

I can almost hear my teammates screaming in frustration just looking at this report. To allow specific tests to be unstable but not others, I'm going to add a guard "suite completed" test to the suites that should be stable, and keep the flaky test on its own. Then I'll tell xUnit to allow for a number of failed tests, but no skipped ones. If any test fails other than the ones I allow to be flaky, it will also result in one or more skipped tests and will fail the build.

// Jenkinsfile
// The rest of the Jenkinsfile is unchanged.
// Only the xUnit step() call is modified.
step([$class: 'XUnitBuilder',
    thresholds: [
        [$class: 'SkippedThreshold', failureThreshold: '0'],
        // Allow for a significant number of failures
        // Keeping this threshold so that overwhelming failures are guaranteed
        //     to still fail the build
        [$class: 'FailedThreshold', failureThreshold: '10']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])
// tests/guineaPig.js
// ... snip ...

    'Guinea Pig Assert Title 0 - E': function(client) { /* ... */ },

    'Guinea Pig Assert Title 0 - Suite Completed': function(client) {
        // No assertion needed
    },

    afterEach: function(client, done) { /* ... */ }

// ... snip ...
// tests/guineaPig_1.js
// ... snip ...

    'Guinea Pig Assert Title 1 - E': function(client) { /* ... */ },

    'Guinea Pig Assert Title 1 - Suite Completed': function(client) {
        // No assertion needed
    },

    afterEach: function(client, done) { /* ... */ }

// ... snip ...

After a few more runs, you can see the flaky test is still being flaky, but it is no longer failing the build. Meanwhile, if another test fails, it will cause the "suite completed" test to be skipped, failing the job. If this were a real project, the test owner could instrument and eventually fix the test. Once confident they had stabilized the test, they could add a "suite completed" test after it to enforce its passing, without changes to other tests or the framework.

Flaky tests don't have to fail the build

Results from flaky test


This post has shown how to migrate from the JUnit plugin to the xUnit plugin on an existing project in Jenkins Pipeline. It also covered how to use the features of the xUnit plugin to get more meaningful and effective reporting behavior in Jenkins.

What I didn't show was how many other formats xUnit supports - from CppUnit to MSTest. You can also write your own XSL for result formats not on the known/supported list.
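As a sketch of what that custom-XSL route might look like in Pipeline code (the `CustomType` class and `customXSL` parameter names below are written from memory and are worth verifying with the Snippet Generator):

```groovy
// Hypothetical sketch: convert a proprietary result format to JUnit XML
// with a custom stylesheet, then apply the usual thresholds.
step([$class: 'XUnitBuilder',
    thresholds: [[$class: 'FailedThreshold', failureThreshold: '0']],
    tools: [[$class: 'CustomType', customXSL: 'myformat-to-junit.xsl',
             pattern: 'results/**']]])
```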

Links

This is the last in a series of posts showing ways to use Jenkins Pipeline. Follow along with this entire series through the links below:

Blog Categories: Jenkins, Developer Zone
Categories: Companies

Meet the Bees: Patrick O'Hannigan

Fri, 12/09/2016 - 16:13

In every Meet the Bees blog post, you’ll learn more about a different CloudBees Bee. Let’s buzz on over to our Raleigh, North Carolina office and meet Patrick O’Hannigan.

Who are you? What is your role at CloudBees?

I am Patrick O’Hannigan. As of this writing, I’m the designated “Documentation Guy” for engineering, although my work also overlaps with product management. I’m here for writing, editing, documentation policy-making, UX design, extra hands on simple QA assignments and pestering our Jenkins experts in the interest of knowledge development. It’s a blast!

What makes CloudBees different from other companies?

All of us Bees seem to care about what we’re doing, and about helping each other. That’s refreshing and it is reinforced by company culture.

What does a typical day look like for you?

My day usually involves varying amounts of research, writing, editing, problem solving, reviewing GitHub pull requests from other contributors and spelunking through HipChat rooms for useful information that always comes up in team discussions there.

What do you think the future holds for Jenkins?

I think the future holds an even bigger footprint in the DevOps space, as initiatives like Declarative Pipeline make Jenkins even more friendly for non-developers than it already is. We know there are different personas out there, and we’re appealing to more of them.

What is your favorite form of social media and why?

I like LinkedIn because it’s less of a distraction than Facebook, and because one of my contacts there has become a friend even though we aren’t likely to meet anytime soon. Plus I won a contest on LinkedIn and got a free book as my prize.

Something we all have in common these days is the constant use of technology. What’s your favorite gadget and why?

I have an older Android-based smartphone and I’m old-school enough to appreciate how it lets me keep in touch with people I care about.

Vanilla or chocolate or some other flavor, what’s your favorite ice cream flavor and brand?

Ben & Jerry’s Chunky Monkey feels like the original counter-cultural flavor. (Its only competition for that title in my head would be Cherry Garcia.)


Blog Categories: Jenkins
Categories: Companies

Dynamic Proxies (The DevOps 2.0 Toolkit)

Thu, 12/01/2016 - 22:15

The decline of hardware proxies started a long time ago. They were too expensive and inflexible even before cloud computing became mainstream. These days, almost all proxies are based on software. The major difference is what we expect from them. While, until recently, we could define all redirections in static configuration files, that has changed in favor of more dynamic solutions. Since our services are constantly being deployed, redeployed, scaled and, in general, moved around, the proxy needs to be capable of updating itself with these ever-changing end-point locations.

We cannot wait for an operator to update configurations with every new service (or release) we deploy. We cannot expect them to monitor the system 24/7 and react to a service being scaled as a result of increased traffic. We cannot hope that they will be fast enough to catch a node failure that results in all services being automatically rescheduled to a healthy node. Even if we could expect such tasks to be performed by humans, the cost would be too high, since an increase in the number of services and instances we're running would mean an increase in the workforce required for monitoring and reactive actions. Even if such a cost were not an issue, we are slow. We cannot react as fast as machines can, and that discrepancy between a change in the system and the proxy reconfiguration could, at best, result in performance issues.

Among software-based proxies, Apache ruled the scene for a long time. Today, its age shows. It is rarely the weapon of choice, due to its inability to perform well under stress and its relative inflexibility. Newer tools like nginx and HAProxy took over. They are capable of handling a vast number of concurrent requests without placing severe strain on server resources.

Even nginx and HAProxy are not enough by themselves. They were designed with static configuration in mind and require us to add additional tools to the mix. An example would be a combination of templating tools like Consul Template that can monitor changes in service registry, modify proxy configurations and reload them.

Today, we see another shift. Typically, we would use proxy services not only to redirect requests, but also to perform load balancing among all instances of a single service. With the emergence of the (new) Docker Swarm (shipped with the Docker Engine release v1.12), load balancing (LB) is moving towards software-defined networks (SDN). Instead of performing LB among all instances, a proxy redirects a request to an SDN end-point which, in turn, performs load balancing.

Service architectures are shifting towards microservices and, as a result, deployment and scheduling processes and tools are changing. Proxies, and the expectations we have of them, are following those changes.

The deployment frequency is becoming higher and higher, and that poses another question. How do we deploy often without any downtime?

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, the design of self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and, even more, tools.

The book is available from Amazon ( and other worldwide sites) and LeanPub.

Categories: Companies

Usability and Stability Enhancements in CloudBees Jenkins Platform

Tue, 11/29/2016 - 15:29

We are excited to announce the availability of CloudBees Jenkins Platform. This release delivers stability and usability improvements by bumping the Jenkins core to 2.19.x, and it includes a key security fix. This is also the second “rolling release,” the output of a process we are using to provide the latest functionality to users on a more frequent release cadence. All enhancements and fixes are for the rolling release only. Fixed releases have diverged from rolling releases (locked to 2.7.X) and will follow a separate schedule.

Release Highlights

Jenkins Core Bumped to 2.19.x LTS Line

This is the first LTS upgrade on the rolling release and adds key fixes, such as improved dependency management for plugins. With improved dependency management, administrators are warned when dependent plugins are absent at install time. Administrators can thus catch and fix the problem before run time and provide a smooth experience to their users.

Security-360 Fix Incorporated

All customers were sent the fix for Security-360 on Nov 16, 2016. This vulnerability allowed attackers to transfer a serialized Java object to the Jenkins CLI, making Jenkins connect to an attacker-controlled LDAP server, which in turn can send a serialized payload leading to code execution, bypassing existing protection mechanisms. If you have not installed the fix, we strongly urge you to upgrade to incorporate the security fix in your production environment.

Support for CloudBees Assurance Program in Custom Update Centers

CloudBees Assurance Program (CAP) provides a Jenkins binary and plugins that have been verified for stability and interoperability. Jenkins administrators can easily promote this distribution to their teams by setting CAP as an upstream source in their custom update centers. This reduces the operational burden by allowing admins to use CloudBees-recommended plugins for all their masters, ensuring compliance and facilitating governance.

CloudBees Assurance Program Plugin (CAP) Updates

These CloudBees verified plugins have been updated for this release of the CloudBees Jenkins Platform:

  • Mailer version 1.18
  • LDAP version 1.13
  • JUnit version 1.19
  • Email-ext version 2.51
  • Token-macro version 2.0
  • GitHub version 1.22.3

CloudBees Jenkins Platform Improvements

This release features many reliability improvements for the CloudBees Jenkins Platform, including many stability improvements to CloudBees Jenkins Operations Center connections to client masters.

Improvements & Fixes



 Jenkins core upgraded to 2.19.3 LTS (release notes)

Improved dependency management - Flags admins when plugin dependencies are missing; Jenkins will not load plugins with missing dependencies, reducing errors when initializing. Creates a smoother startup through smarter scanning of plugins.

Jobs with lots of history no longer hang the UI - Improved UI performance for jobs with lots of build history. Lazy loading renders faster because build history will not automatically load on startup.

Reduce configuration errors caused by invalid form submissions - Browsers will not autocomplete forms in Jenkins, reducing configuration problems due to invalid data in form submissions resulting from using the browser back button. Only select form fields (e.g. job name) will offer autocompletion. For admins, Jenkins users who use the browser back button will no longer corrupt the Jenkins configuration.

CloudBees Assurance Program (CAP)

Support for Custom Update Centers - CAP is now available as an upstream source in Custom Update Centers, enabling admins to use CloudBees-recommended plugins for all their masters.

Mailer has been upgraded to version 1.18, which includes a minor improvement to the rendering of page links and adds support for the Blue Ocean project.

JUnit has been upgraded to version 1.19, which includes usability improvements around unsafe characters in URIs and highlighted test results.

Email-ext has been upgraded to version 2.51, which improves Pipeline support by expanding the tokens FAILED_TESTS, TEST_COUNTS and TRIGGER_NAME in pipeline email notifications.

Token-macro has been upgraded to 2.0 and contains improved Pipeline support that allows token macros to be used in a Pipeline context, autocompletion when referencing a token name, support for variable expansion, and performance improvements when scanning large Jenkins instances.
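As a scripted-Pipeline sketch of how these notification tokens might be used (the test command, report path and recipient address are placeholders, not from this release):

```groovy
// Hypothetical Jenkinsfile fragment: run tests, then send a summary email.
node {
    try {
        stage('Test') {
            sh './run-tests.sh'           // placeholder test command
            junit 'reports/**/*.xml'      // publish results so the tokens below resolve
        }
    } finally {
        // Single-quoted Groovy strings keep the $TOKENS unexpanded, so
        // Email-ext/token-macro resolves them at notification time.
        emailext(
            to: 'team@example.com',       // placeholder recipient
            subject: "Tests finished: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
            body: '''Triggered by: $TRIGGER_NAME
Test counts: $TEST_COUNTS

Failing tests:
$FAILED_TESTS'''
        )
    }
}
```

Note the design choice of single quotes for the body: Groovy string interpolation would otherwise consume the `$` tokens before the plugin ever sees them.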

Pipeline usability improvements

Environment variables in Pipeline jobs are now available as global Groovy variables - this simplifies tracking variable scope in a pipeline.

Build and job parameters are available as environment variables and thus accessible as if they were global Groovy variables - parameters are injected directly into the Pipeline script and are no longer available in ‘bindings.’

This makes job parameters, environment variables and Groovy variables much more interchangeable, simplifying pipeline creation and making variable references much more predictable.
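A minimal scripted-Pipeline sketch of the behavior described above; the parameter name and shell command are invented for illustration:

```groovy
// Hypothetical Jenkinsfile: assumes the job defines a string parameter
// named TARGET_ENV (an example name, not from this release).
node {
    stage('Deploy') {
        // The parameter is exposed as an environment variable in shell steps...
        sh 'echo "Deploying to $TARGET_ENV"'
        // ...and injected as a global Groovy variable, so it can be referenced
        // directly in Pipeline script instead of via the script bindings.
        echo "Deploying to ${TARGET_ENV}"
    }
}
```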

Skip Next Build plugin

Adds the capability to skip all the jobs of a folder and its sub-folders, or to skip all the jobs belonging to a “Skip Jobs Group.” A Skip Jobs Group is intended to group together jobs that should be skipped simultaneously but are located in different folders.

Support bundle

Adds the logs of client master connectivity to the support bundle.

Fixes

CloudBees Jenkins Platform core
  • Possible livelock in CloudBees Jenkins Operations Center communication service.
  • Possible unbounded creation of threads in CloudBees Jenkins Operations Center communication service.
  • Fix NullPointerException in client master communication service when creating big CloudBees Jenkins Platform clusters.
  • Fix deadlock on client master when updating number of executors in CloudBees Jenkins Operations Center cloud.
  • Replace the term “slave” with “agent” in the CloudBees Jenkins Operations Center UI.
  • Unable to log into a client master if a “remember me” cookie was set during authentication on the client master while CloudBees Jenkins Operations Center was unavailable.
  • “Check Now” on Manage Plugins doesn’t work when a client master is using a Custom Update Center.
  • Technical properties appear on the configuration screen of the CloudBees Jenkins Operations Center shared cloud when they should be hidden.
  • Move/copy fails when the client master is not connected to CloudBees Jenkins Operations Center.
  • Move/copy screen broken by an infinite loop when the browse.js `fetchFolders` function fails with an error.
Analytics and monitoring
  • Under heavy load, multiple CloudBeesMetricsSubmitter instances run concurrently to obtain thread information, slowing down the application.
  • The number of available nodes in a cloud should be exposed as metrics.

Role-Based Access Control plugin

The Role-Based Access Control REST API no longer requires POST requests (GET is allowed), eliminating 404 HTTP errors when accessing groups from a nested client master folder.

GitHub Organization Folder plugin

Fixed a GitHub Organization Folder scanning issue when using custom marker files.

CloudBees Assurance Program

LDAP upgraded to version 1.13, includes a major configuration bug fix.

GitHub has been upgraded to version 1.22.3 and contains a major bug fix for an issue that could crash Jenkins instances using LDAP for authentication.

Frequently Asked Questions

What is the CloudBees Assurance Program (CAP)?

The CloudBees Assurance Program (CAP) eliminates the risk of Jenkins upgrades by ensuring that various plugins work well together. CAP brings an unprecedented level of testing to ensure upgrades are no-risk events. The program bundles an ever-growing number of plugins in an envelope that is tested and certified together. The envelope installation/upgrade is an atomic operation - all certified versions are upgraded in lockstep, reducing the cognitive load on administrators in managing plugins.

Who is the CloudBees Assurance Program designed for?

The program is designed for Jenkins administrators who manage Jenkins for their engineering organizations.

When was the CloudBees Assurance Program launched?

The program was launched in September 2016.

What is a rolling release?

CAP delivers the CloudBees Jenkins Platform on a regular cadence; this is called the “rolling” release model. A new release typically lands every 4-6 weeks.

Do I have to upgrade on every release?

You are encouraged to, but it isn’t required. You can skip a release or two, and the assurance program ensures your upgrades will be smooth.

What release am I on?

You can tell which version you are running by checking the footer of your CloudBees Jenkins Enterprise (CJE) or CloudBees Jenkins Operations Center (CJOC) instance.


How to Upgrade

Review the CloudBees Jenkins Enterprise Installation Guide and the CloudBees Jenkins Operations Center User Guide for details about upgrading, but here are the basics:

  1. Identify which CloudBees Jenkins Enterprise release line (rolling vs. fixed) you are currently running.
  2. Visit the CloudBees downloads page to download the latest release for your release line. (You must be logged in to see available downloads.)
  3. If you are running CloudBees Jenkins Operations Center, you must upgrade it first, because you cannot connect a new CloudBees Jenkins Enterprise instance to an older version of CloudBees Jenkins Operations Center.
  4. Install the CloudBees Jenkins Platform as appropriate for your environment, and start the CloudBees Jenkins Platform instance.
  5. If the upgrade needs additional input, the setup wizard prompts for it when you first access the instance.
Related Knowledgebase Articles

Release Notes and Related Documentation


Blog Categories: Jenkins, Developer Zone, Company News
Categories: Companies

Now Live on DevOps Radio: Picture-Perfect CD, Featuring Dean Yu, Director, Release Engineering, Shutterfly

Mon, 11/28/2016 - 15:37

Jenkins World 2016 was buzzing with the latest in DevOps, CI/CD, automation and more. DevOps Radio wanted to capture some of that energy so we enlisted the help of Sacha Labourey, CEO at CloudBees, to host a series of episodes live at the event. We’re excited to present a new three-part series, DevOps Radio: Live at Jenkins World. This is episode two in the series.

Dean Yu, director of release engineering at Shutterfly, has been with the Jenkins community since before Jenkins was called Jenkins. Today, he’s a member of the Jenkins governance board and an expert in all things Jenkins and CI. He attended Jenkins World 2016 to catch up with the community, check out some sessions and sit down with Sacha Labourey for a special episode of DevOps Radio.

Sacha had a lot of questions for Dean, but the very first question he asked was, “What is new at Shutterfly?” Dean revealed how his team is using Jenkins, working on CI/CD and keeping pace with business during Shutterfly’s busiest season, the holidays. If you’re interested in learning CI/CD best practices or hearing what one Jenkins leader thinks about the future of software development and delivery, then you need to tune in today!

You don’t have to stop making your holiday card or photo book on, just plug in your headphone and tune into DevOps Radio. The latest DevOps Radio episode is available now on the CloudBees website and on iTunes.

Join the conversation about the episode on Twitter by tweeting to @CloudBees and including #DevOpsRadio in your post. After you listen, we want to know your thoughts. What did you think of this episode? What do you want to hear on DevOps Radio next? And, what’s on your holiday DevOps wishlist?

Sacha Labourey and Dean Yu talk about CD at Shutterfly during Jenkins World 2016 (below).
P.S. Check out Dean’s massive coffee cup. It displays several pictures of his daughter and was created - naturally - on the Shutterfly website. 









Categories: Companies