
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

Docker Hub 2.0 Integration with the CloudBees Jenkins Platform

Thu, 09/17/2015 - 15:11
Docker Hub 2.0 has just been announced: what a nice opportunity to discuss Jenkins integration!
For this blog post, I'll present a specific Docker Hub use case: how to access the Docker Hub registry and manage your credentials in Jenkins jobs.

The Ops team is responsible for maintaining a curated base image with a common application runtime. As the company is building Java apps, they bundle Oracle JDK and Tomcat, applying security updates as needed.

The Ops team uses the CloudBees Docker Build and Publish plugin to build a Docker image from a clean environment and push it to a private repository on Docker Hub. Integration with Jenkins credentials makes this easy, and the plugin allows them both to deploy the base image as "latest" and to track all changes with a dedicated tag per build.
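Such a curated base image can be described with a small Dockerfile. A hypothetical sketch (image names, paths and versions are illustrative, not from the original post):

```dockerfile
# Hypothetical sketch of the Ops-curated base image (names and paths invented)
FROM ubuntu:14.04
# Oracle JDK and Tomcat bundled by the Ops team; actual installation steps elided
COPY jdk8/ /opt/jdk/
COPY tomcat/ /opt/tomcat/
ENV JAVA_HOME /opt/jdk
# Run Tomcat in the foreground so the container stays alive
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```

The plugin then publishes the result both as, say, myorg/base-image:latest and as a build-specific tag such as myorg/base-image:42.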


The Dev team is very productive, producing thousands of lines of Java code and relying on Jenkins to ensure the code follows coding and test-coverage standards while packaging the application.

At the end of the build, they include the packaged WAR file in a new Docker image based on Ops' base image. To do this, they just had to write a minimalist Dockerfile and add it to their Git repository. They can use this image to run advanced tests and reproduce the exact production environment (even on their laptops for diagnostic purposes if needed). The Ops team is comfortable with such an image, as they know the base image is safe.
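The minimalist Dockerfile mentioned above could be as short as this (the image name and paths are hypothetical):

```dockerfile
# Hypothetical app image: just the packaged WAR dropped onto the Ops base image
FROM myorg/base-image:latest
COPY target/app.war /opt/tomcat/webapps/ROOT.war
```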
    They have also installed the Jenkins DockerHub Notification plugin, so they can configure the job to run whenever the Docker base image is updated on the Hub. With this setup they know the latest build will always rely on the latest base image, including all the important security fixes the Ops team is concerned about.

    This scenario has been tested on Docker Hub 2.0 and works like a charm. Updating the base image sources on GitHub triggers a build of the base-image job, which is then published to Docker Hub 2.0.
    Jenkins detects these changes to the images hosted on Docker Hub, and any jobs that depend on the upstream base-image* are rebuilt, tested, and published (and possibly released).

    The Ops team is happy with this: their fear of developers running ancient Docker images full of security holes is calmed by knowing that simply updating the base image means all projects that depend on it will be notified and updated automatically.

    An actual company would probably have a more sophisticated deployment pipeline than the one outlined above, with validation steps (and possibly approvals) for each image.

    To learn more about Docker integration with the CloudBees Jenkins Platform, be sure to read the companion blog post, Architecture: Integrating the CloudBees Jenkins Platform with Docker Hub 2.0.

    You can read more documentation about CloudBees and Docker containers here.

    * Note that the new Docker Workflow feature will automatically register for changes to base images if you use it to build out your pipeline.

    The teams' logos come from a site I recommend you follow: you may not learn much, but you should get some good laughs.

    Nicolas De Loof
    Software Engineer

    Nicolas De Loof is based in Rennes, France. Read more about Nicolas in his meet the bees blog post, and follow him on Twitter.

    Categories: Companies

    Architecture: Integrating the CloudBees Jenkins Platform with Docker Hub 2.0

    Thu, 09/17/2015 - 15:11
    Docker is an incredibly hot topic these days. Its role in Jenkins infrastructures will soon become predominant as companies discover how Docker fits within their own environments, as well as how to use Docker and Jenkins together most effectively across their software delivery pipelines.

    The major use cases for Docker in a Jenkins infrastructure are:
    • Customize the build environment: Different applications often require different build tools, and some of these tools require root permissions to be installed on the build servers (x11/xvfb and Firefox for headless tests such as Selenium, ImageMagick...). Jenkins admins once solved this problem by increasing the number of flavors of Jenkins slaves, but that approach was limited by hardware constraints and was not flexible for project teams. The CloudBees Docker Custom Build Environment Plugin and the CloudBees Docker Workflow Plugin offer a new way to solve this challenge with much more flexibility: Jenkins admins manage only one flavor of Jenkins slave (Docker-enabled slaves) and let project teams customize their build environments to their needs by running their jobs in Docker containers.
    • Ship applications as Docker images: More and more applications are shipped as Docker images (instead of war/exe/... files), and the continuous integration platform has to build and publish these Docker images.
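For the first use case, a Workflow script can run its build steps inside a throwaway container. A minimal sketch, using the CloudBees Docker Workflow plugin's docker.image(...).inside syntax (the repository URL, image tag and Maven goals are illustrative, not from the original post):

```groovy
// Workflow (Pipeline) sketch: build inside a container instead of on the slave itself.
// Repository, image and goals are hypothetical examples.
node('docker') {
    git 'https://github.com/example/app.git'
    docker.image('maven:3-jdk-8').inside {
        // the build tools come from the image, not from the slave's OS
        sh 'mvn -B clean verify'
    }
}
```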

    For these scenarios, the Jenkins infrastructure needs access to a Docker registry, both to retrieve/pull the Docker images used on Docker-enabled slaves and to store/push the Docker images created by Jenkins builds.

    Docker Hub
    Docker Hub is the cloud-based registry service offered by Docker, Inc. It combines the "official" registry of public images, on which every Docker user relies, with a private registry that lets users manage private images.

    Integrating a Jenkins infrastructure with Docker Hub requires architecture decisions similar to those made when integrating a Jenkins infrastructure with online services such as GitHub or Bitbucket.

    Direct connectivity from the Jenkins infrastructure to Docker Hub
    The most straightforward solution is to simply open network connectivity (http and https) from the Jenkins slaves to Docker Hub.

    Architecture: Jenkins infrastructure and Docker Hub
    Connecting the Jenkins infrastructure to Docker Hub through a proxy
    Many organisations prefer to secure the connectivity of the Jenkins infrastructure to the "outside world" with firewalls and proxies.

    To do so, it is necessary to declare the HTTP proxy in the configuration of the Docker daemon on each Jenkins slave, as documented in Docker Documentation - Control and configure Docker with systemd - HTTP Proxy.

    Sample /etc/systemd/system/docker.service.d/http-proxy.conf:
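A typical drop-in, following the Docker systemd documentation (the proxy host and port below are placeholders to adapt to your environment):

```ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128/"
Environment="HTTPS_PROXY=http://proxy.example.com:3128/"
```

After adding the file, reload systemd and restart the daemon with "sudo systemctl daemon-reload" then "sudo systemctl restart docker".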


    Architecture: Jenkins infrastructure and Docker Hub through an HTTP proxy

    Private Docker registries behind firewalls?
    This blog post covered how to integrate a Jenkins infrastructure with the Docker Hub public registry service. We will cover the integration of a Jenkins infrastructure with a private registry behind the firewall in a separate post.

    Accessing the Docker Hub registry in Jenkins jobs
    To see how to access the Docker Hub registry and manage your credentials in Jenkins jobs, please read Nicolas De Loof's blog post Docker Hub 2.0 Integration with the CloudBees Jenkins Platform and watch the accompanying screencast.

    Cyrille Le Clerc is a product manager at CloudBees, with more than 15 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.

    Jenkins Community Survey - Your Chance to Be Heard!

    Wed, 09/02/2015 - 04:24
    Just as in past years, CloudBees is again working with the community to sponsor a survey. The goal is for the community to get some objective insights into what Jenkins users would like to see in the Jenkins project.
    Read Kohsuke's blog about it on the Jenkins blog. 
    The survey will be open until the end of September. This is your chance to be heard and to have a say in development priorities for Jenkins. Why not take it now? 
    We understand the value for the community in learning what users want and how they are using Jenkins, so we are providing an added incentive for community members to fill out the survey. We have donated a $100 Amazon gift card that will be randomly awarded to a lucky survey taker. 
    As with most give-aways...there are always terms and conditions. So now the boring, legal stuff.

    Fine print:
    1. The survey will be open from September 1 to September 30, 2015. If you submit a completed survey, we will enter you to win a $100 Amazon gift certificate. Yeah, you can only enter the contest once, so please don’t over-stuff the survey box. After the survey closes, we’ll draw a name to choose the winner…and maybe it will be you!
    2. If you do not supply your name and email address, you are not eligible to win. Think about it – we have no way to contact you. If you do supply your name and email address, we’ll send you the survey results.
    3. The Amazon gift card can only be won by someone who lives in a country where you can buy from Amazon. If you live in a country without Amazon access, we will send you $100 via PayPal. If you live in a country under U.S. embargo, we’re sorry, but there’s not much we can do here.
    4. You must be 18 years old or older (20 or older in Japan).
    5. You must use Jenkins or be affiliated with its use.
    6. The winner is responsible for any federal, state and local taxes, import taxes and fees that may apply.
    7. This survey is administered by CloudBees, Inc., 2001 Gateway Place, Suite 670W, San Jose, CA 95110, +1-408-805-3552. If you’d like to send us feedback or have questions, please email us. And no, we do not accept bribes to rig the contest. :)
    8. Regardless of whether you win the Amazon gift card, you will have the satisfaction of knowing that you’re providing input that will help make Jenkins even better. Thank you in advance for sharing your thoughts with the community!
    9. Oh, and no purchase necessary!
    Take the survey here

    Jenkins User Conference U.S. West Speaker Highlight: Kaj Kandler

    Thu, 08/27/2015 - 21:56
    When Kaj attended JUC Boston in 2014, he was surprised to see how many enterprise Jenkins users had developed plugins to use for themselves. In his Jenkins blog post, Kaj shares some insight on developing enterprise-ready plugins.

    This post on the Jenkins blog is by Kaj Kandler, Integration Manager at Black Duck Software, Inc. If you have your ticket to JUC U.S. West, you can attend his talk "Making Plugins that are Enterprise Ready" on Day 1.

    Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.

    Thank you to the sponsors of the Jenkins User Conference World Tour:


    Volume 9 of the Jenkins Newsletter: Continuous Information is out!

    Thu, 08/27/2015 - 16:07
    The next issue of the Jenkins Newsletter, Continuous Information is out!

    There has been so much Jenkins content from all over the world: events, articles, blogs, training and everything in between.

    • Learn more about how Jenkins works with technologies like Kubernetes, Docker and Postman
    • Find a Meetup near you or another Jenkins event in your area
    • Find the latest news about Jenkins User Conference U.S. West
    • Read some articles and blog posts and expand your Jenkins knowledge

    Catch up on the latest Jenkins news and sign up to receive Continuous Information directly in your inbox every quarter.

    JUC Session Blog Series: Christian Lipphardt, JUC Europe

    Tue, 08/25/2015 - 20:57
    At the Jenkins User Conference in London this year, I stumbled into what turned out to be the most interesting session to my mind: From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability (a mouthful), from folks at a software shop by the name of Camunda.

    The key aspect of this talk was the extension of the “code-as-configuration” model to nearly the entire Jenkins installation. Starting from a chaotic set of hundreds of hand-maintained jobs, corresponding to many product versions tested across various environmental combinations (I suppose beyond the abilities of the Matrix Project plugin to handle naturally), they wanted to move to a more controlled and reproducible definition.

    Many people have long recognized the need to keep job configuration in regular project source control rather than requiring it to be stored in $JENKINS_HOME (and, worse, edited from the UI). This has led to all sorts of solutions, including the Literate plugin a few years back, and now various initialization modes of Workflow that I am working on, not to mention the Templates plugin in CloudBees Jenkins Enterprise.

    In the case of Camunda they went with the Job DSL plugin, which has the advantage of being able to generate a variable number of job definitions from one script and some inputs (it can also interoperate meaningfully with other plugins in this space). This plugin also provides some opportunity for unit-testing its output, and interactively examining differences in output from build to build (harking back to a theme I encountered at JUC East).
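To give a flavor of the approach (not Camunda's actual scripts), a Job DSL seed script can stamp out one job per product version from a single definition; the versions, repository URL and build goals below are invented for illustration:

```groovy
// Hypothetical Job DSL seed script: generate a CI job per product version.
// All names and URLs are illustrative, not taken from Camunda's setup.
['7.2', '7.3', '7.4'].each { version ->
    job("platform-${version}-ci") {
        scm {
            git("https://github.com/example/platform.git", "${version}-branch")
        }
        steps {
            maven("clean verify")
        }
    }
}
```

Re-running the seed job regenerates all the jobs, so the set of jobs always matches what is in source control.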

    They took the further step of making the entire Jenkins installation be stood up from scratch in a Docker container from a versioned declaration, including pinned plugin versions. This is certainly not the first time I have heard of an organization doing that, but it remains unusual. (What about Credentials, you might ask? I am guessing they have few real secrets, since for reproducibility and scalability they are also using containerized test environments, which can use dummy passwords.)

    As a nice touch, they added Elasticsearch/Kibana statistics for their system, including Docker image usage and reports on unstable (“flaky”?) tests. CloudBees Jenkins Operations Center customers would get this sort of functionality out of the box, though I expect we need to expand the data sources streamed to CJOC to cover more domains of interest to developers. (The management, as opposed to reporting/analysis, features of CJOC are probably unwanted if you are defining your Jenkins environment as code.)

    One awkward point I saw in their otherwise impressive setup was the handling of Docker images used for isolated build environments. They are using the Docker plugin’s cloud provider to offer elastic slaves according to a defined image, but since different jobs need different images, and cloud definitions are global, they had to resort to using (Groovy) scripting to inject the desired cloud configurations. More natural is to have a single cloud that can supply a generic Docker-capable slave (the slave agent itself can also be inside a Docker container), where the job directly requests a particular image for its build steps. The CloudBees Docker Custom Build Environment plugin can manage this, as can the CloudBees Docker Workflow plugin my team worked on recently. Full interoperation with Swarm and Docker Machine takes a bit more work; my colleague Nicolas de Loof has been thinking about this.

    The other missing piece was fully automated testing of the system, particularly Jenkins plugin updates. For now it seems they prototype such updates manually in a temporary copy of the infrastructure, using a special environment variable as a “dry-run” switch to prevent effects from leaking into the outside world. (Probably Jenkins should define an API for such a switch to be interpreted by popular plugins, so that the SMTP code in the Mailer plugin would print a message to some log rather than really sending mail, etc.) It would be great to see someone writing tests atop the Jenkins “acceptance test harness” to validate site-specific functions, with a custom launcher for their Jenkins service.

    All told, a thought-provoking presentation, and I hope to see a follow-up next year with their next steps!

    We hope you enjoyed JUC Europe! 

    Here is the abstract for Christian's talk "From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability." 

    Here are the slides for his talk and here is the video.

    If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

    Managing a Jenkins Docker Infrastructure: Docker Garbage Collector

    Mon, 08/24/2015 - 16:18
    Using Docker for Continuous Delivery is great. It brings development teams impressive flexibility: they can manage environments and test resources by themselves and, at the same time, enforce clean isolation from other teams sharing the same host resources.

    But a side effect of enabling Docker on a build infrastructure is disk usage, as pulling various Docker images consumes hundreds of megabytes. The layered architecture of Docker images ensures that the lower-level layers are shared as much as possible. However, as those layers get updated with various fixes and upgrades, the previous ones remain on disk and can result, after a few months, in huge disk usage within /var/lib/docker.

    Jenkins monitors can alert on disk consumption on build executors, but a more proactive solution should be implemented, rather than simply taking the node offline until an administrator handles the issue by SSH-ing to the server.
    Docker does not offer a standard way to address image garbage collection, so most production teams have created their own tools, including the folks at Spotify, who open-sourced their docker-gc script.

    On a Jenkins infrastructure, a scheduled task can be created to run this maintenance script on all nodes. I did it for my own usage (after I had to handle a filesystem-full error). To run the script on all Docker-enabled nodes, I'm using a Workflow job. Workflow makes it pretty trivial to set up such a GC.

    The script I'm using relies on a "docker" label being set on all nodes with Docker support. Jenkins.instance.getLabel("docker").nodes returns all the build nodes with this label, so I can iterate over them and run a Workflow node() block to execute the docker-gc script within a sh shell script step:

    def nodes = Jenkins.instance.getLabel("docker").nodes
    for (n in nodes) {
        node(n.nodeName) {
            // fetch the docker-gc script and pipe it to bash
            // (the script URL is omitted in the original post)
            sh 'wget -q -O - ... | bash'
        }
    }
    The docker-gc script only removes images that are not used by a container: when an image already existed during the last run of the script and is still not used by any container, it gets removed.

    I hope that the Docker project will soon release an official docker-gc command. This will benefit infrastructure teams by eliminating the need to re-invent custom solutions to the same common issue.

    JUC Session Blog Series: Tom Canova, JUC U.S. East

    Thu, 08/20/2015 - 15:32
    I was pleased to be able to attend the D.C. Jenkins User Conference this year, where I gave a talk on the progress of the Workflow plugin suite for Jenkins. One highlight was seeing Jenkins Workflows with Parallel Steps Boosts Productivity and Quality by Tom Canova. Naturally the title made me curious: how were people in the field using parallelism in workflows?

    The project he works on is a little unusual for someone coming from the software-delivery mindset, since while the ultimate deliverable is still software, what Jenkins is spending most of its time on is running that software (rather than a compiler or automated tests): the result is a summary of a big set of online recipes crunched through some natural language processing into a machine-friendly format. Each “build” is a dry-run of Chef Watson’s preparation for the dinner service, if you will.

    Since slicing & dicing all that messy web HTML can take a long time, Tom’s process follows a pretty standard three-stage fork-join model. In the first stage, one Jenkins slave finds a site index with a list of recipes, collecting a list of every recipe to be processed. In the main, second stage, a number of distributed slaves each pick up a subset of recipes, parse them, and dump the JSON result into Cloudant, using a 5Gb heap. Finally all the results are summarized and archived, and some follow-on jobs are triggered (I think in part as a workaround for missing Workflow plugin integrations). All told, the parallelization can cut a twenty-hour build into two hours, giving developers quicker feedback. Doing this from a traditional “freestyle” project would be tough—you would really need to set up a custom grid engine instead of using the Jenkins slave network you already have.
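The fork-join shape described above maps naturally onto the Workflow parallel step. A rough sketch (the node labels, shard list and step bodies are invented, not Tom's actual script):

```groovy
// Hypothetical sketch of a three-stage fork-join Workflow build.
// Labels, shards and shell scripts are illustrative only.
def recipeShards = null
node('indexer') {
    // stage 1: crawl the site index and split the recipe list into shards
    recipeShards = ['shard0', 'shard1', 'shard2', 'shard3']
}
def branches = [:]
for (shard in recipeShards) {
    def s = shard  // capture the loop variable for the closure
    branches[s] = {
        node('nlp-worker') {
            // stage 2: parse this shard's recipes, dump JSON results to Cloudant
            sh "./process-recipes.sh ${s}"
        }
    }
}
parallel branches
node {
    // stage 3: summarize all results, archive, trigger downstream jobs
    sh './summarize.sh'
}
```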

    Another unusual aspect of Tom’s setup was that the build history was really curated. Whereas some teams treat Jenkins builds as dispensable records created and then trimmed at a furious rate, here there may only be a few a week, and each one is examined by the developers to see how their changes affected the sample output. (The analysis is put right in the build description.)

    One interesting thing the developers do is interactively compare output from one build to another. After all, they want to judge whether their code changes produced reasonable changes in the result, or whether unexpected and unwanted effects arose in real data sets. For this they just do a diff (I think outside Jenkins) between build artifacts. After the talk I suggested to Tom that it would be useful for “someone” to write a Jenkins plugin which displays the diff between matching build artifacts of consecutive builds. This reminded me of something my team started producing when I worked on NetBeans: a readable summary of the changes in major application features from one build to the next.
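The artifact comparison itself needs nothing fancy; outside Jenkins it can be as simple as a plain diff of the matching files from two builds (the paths and contents below are invented for illustration):

```shell
# Compare the matching summary artifacts of two consecutive builds
# (build directories and file contents are hypothetical)
mkdir -p build-41 build-42
echo '{"recipes": 1200}' > build-41/summary.json
echo '{"recipes": 1250}' > build-42/summary.json
# diff exits non-zero when the files differ, so tolerate that in scripts
diff build-41/summary.json build-42/summary.json || true
```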

    As a final note, I did try to get some meal advice from the live system. Whether I can convince my wife to let me cook this is another matter:

    Basque Red Beet Pasta Salad

    1 poblano pepper
    ½lb fusilli
    ½c cranberry juice
    1½c crumbled queso blanco
    3T achiote paste
    5 red beets
    3c cubed, peeled butternut squash
    3 halved tomatoes
    ¼c olive oil
    ½T chopped candied ginger

    Hmm. Looks like Jenkins still has its work cut out for it!

    We hope you enjoyed JUC U.S. East!
    Here is the abstract for Tom's talk "Jenkins Workflows with Parallel Steps Boosts Productivity and Quality." 
    Here are the slides for his talk and here is the video.

    If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

    CloudBees Jenkins Platform on Amazon Web Services

    Tue, 08/18/2015 - 19:26
    CloudBees Jenkins Platform available on AWS Marketplace
    We are delighted to announce the immediate availability of CloudBees Jenkins Platform 15.05 on the AWS Marketplace.

    The two components of the CloudBees Jenkins Platform, CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center, are offered in a bring-your-own-license mode with a free trial.
    With these AWS Marketplace offerings, you can seamlessly provision virtual machines for your Jenkins masters and Operations Center instances, and interact directly with AWS services, including Amazon EC2, S3, Route 53 and Lambda, from within Jenkins.

    CloudBees Jenkins Platform on AWS Marketplace

    Virtual Machines Specifications
    CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center AWS Marketplace AMIs are built with the following components:

    • Ubuntu 14.04 LTS (Trusty Tahr)
    • OpenJDK 8
      • Installed as a Debian package from the "ppa:openjdk-r/ppa" repository
    • CloudBees Jenkins Enterprise (resp CloudBees Jenkins Operations Center)
      • Installed as a Debian package
  • Running as a systemd service
      • Listening on port 8080 (resp 8888)
      • JENKINS_HOME set to "/var/lib/jenkins"
    • Git
      • Installed as a Debian package from the "ppa:git-core/ppa" repository
    • HAProxy
      • Installed as a Debian package from the "ppa:vbernat/haproxy-1.5" repository
      • Listening on port 80 and forwarding to the Jenkins process (port 8080 resp. 8888)
      • Capable of listening on HTTPS:443 if configured (docs here)
    • SSH connection
      • Listen on port 22
      • User "ubuntu", SSH public key (aka EC2 key pair) provisioned through AWS management console. This user has "sudo" privileges.

    Security and Maintenance of the Servers
    • Firewall: firewall rules are defined in the AWS Management Console with EC2 Security Groups. CloudBees recommends restricting inbound access to a limited IP range, rather than allowing "all the internet" to access the VM; this is particularly important for the SSH and HTTP protocols. Deploying the VM in an Amazon VPC instead of "EC2 Classic" offers finer-grained security settings.
    • OS administrators are invited to frequently apply security fixes to the operating system of the VM ("sudo apt-get update" then "sudo apt-get upgrade")
    • Jenkins administrators are invited to frequently upgrade the Jenkins plugins and the Jenkins core through the Jenkins administration console
    • Jenkins administrators are invited to secure their Jenkins servers by enabling authentication and authorization on their newly created instances
    • Jenkins administrators are invited to connect slave nodes to the Jenkins masters according to the needs of the project teams (CentOS, Ubuntu, Red Hat Enterprise Linux, Windows Server...) and to disable builds on the masters
    • Jenkins administrators are invited to frequently back up the Jenkins data (aka JENKINS_HOME) using the CloudBees Backup Plugin and/or by backing up the VM file system through AWS EC2 services (EBS snapshots...)

    CloudBees Jenkins Platform is distributed on the AWS Marketplace in a Bring Your Own License mode. You can provision your virtual machines with the Marketplace images and then enter your license details, or start a free evaluation from the welcome screen of the newly created Jenkins instance.

    Screencast: Installing CloudBees Jenkins Enterprise on Amazon Web Services
    This screencast shows how to install a CloudBees Jenkins Enterprise VM on Amazon Web Services using the AWS Marketplace. The installation of CloudBees Jenkins Operations Center is similar; you just choose CloudBees Jenkins Operations Center instead of CloudBees Jenkins Enterprise in the Marketplace.

    More Resources

    Jenkins User Conference U.S. West Speaker Highlight: Andrew Phillips

    Tue, 08/18/2015 - 19:04
    In his presentation, Andrew will be taking a broader view than his talk at JUC U.S. East and will discuss common challenges you may come across and the solutions that you may need when moving from Continuous Integration to Continuous Delivery.

    This post on the Jenkins blog is by Andrew Phillips, VP, Product Management, XebiaLabs. If you have your ticket to JUC U.S. West, you can attend his talk "Sometimes Even the Best Butler Needs a Footman: Building an Enterprise Continuous Delivery Machine Around Jenkins" on Day 1.

    Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.

    Thank you to the sponsors of the Jenkins User Conference World Tour:
