
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

CloudBees Jenkins Platform: Accelerating CD in Enterprises

Wed, 06/24/2015 - 14:26
If you follow CloudBees and Jenkins, you must have heard the flurry of announcements at the Jenkins User Conferences in Washington, DC and London.

This blog summarizes all the new goodies that CloudBees launched at these conferences.

The Launch of the CloudBees Jenkins Platform
Organizations have matured from using Jenkins for Continuous Integration to using it as a platform for enterprise-wide Continuous Delivery, and they have used the CloudBees products CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center to do so.

With the launch of the CloudBees Jenkins Platform, we bundle these into one easily consumable package with two editions (Team and Enterprise), serving small teams and enterprise administrators respectively.


Each edition comes with its own set of features (refer to the CJP documentation for details).


Welcome to "Solution Packs"
The CloudBees Jenkins Platform allows CloudBees to better serve enterprise audiences with specific needs. We do so by delivering specific feature sets through "solution packs". One of the first packs that we are launching today is the Amazon Solution Pack.

This pack lets customers share "elastic slaves" hosted on AWS with all Jenkins masters managed by CloudBees Jenkins Operations Center within an organization - the masters themselves may be running on-premise or in the cloud. In addition, the CloudBees Jenkins Platform lets users use Amazon Web Services directly within a Jenkins job via the AWS CLI. Thus, developers can access any service that is accessible through the CLI as part of their build and deployment jobs.
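As a hedged sketch of what that can look like in a Workflow script - the bucket name and artifact path are hypothetical, and it assumes a slave labeled "aws" that has the AWS CLI installed and credentials configured:

node('aws') {
 // Upload a build artifact to S3 using the AWS CLI available on the slave
 sh 'aws s3 cp dist/app.zip s3://example-deploy-bucket/app.zip'
}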

In addition, we are providing AMIs on the Amazon Marketplace to help Amazon customers easily bootstrap the complete CloudBees Jenkins Platform - i.e. both CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center - on AWS.

CloudBees Jenkins Platform for Pivotal CF
Last November, we announced a partnership with Pivotal and the release of CloudBees Jenkins Enterprise on Pivotal CF. After a successful launch, our customers quickly came back to us and asked for support for CloudBees Jenkins Operations Center as well, so that those organizations could roll out an enterprise-wide Continuous Delivery platform fully hosted on the Pivotal CF platform.

So today, we are proud to announce the extension of this partnership with the release of CloudBees Jenkins Operations Center on Pivotal CF. And since we just announced a new packaging of our offering (see above), in a few weeks we will be providing the complete CloudBees Jenkins Platform on Pivotal CF, removing individual references to CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center.

CloudBees Jenkins Platform for Microsoft Azure
Microsoft customers have demanded CloudBees Jenkins on the Azure platform for quite a while - and I am happy to announce that Microsoft and CloudBees have signed a partnership to make Azure a prime location for your Jenkins deployments. Full support will come in several steps.

Today, we are releasing CloudBees Jenkins Operations Center/CloudBees Jenkins Enterprise for Microsoft Azure - these are Azure images that help Microsoft customers get up and running quickly with both CloudBees products. The current images are based on our November 2014 release, but we will be updating them in the next few weeks with the May CloudBees Jenkins Platform release.

My crystal ball tells me that there will be a lot of interesting announcements as we take the partnership forward.

New features in the CloudBees Jenkins Platform
At the Jenkins User Conference, we also announced a number of new CloudBees Jenkins Platform features:
  1. Stabilize production masters and eliminate downtime for teams caused by jobs that aren't stable: the ability to promote jobs from masters used for testing to masters used for production.
    1. Some of the most sophisticated IT departments use CloudBees Jenkins Operations Center to manage Jenkins: they create new jobs on a test master and, once a job is stable, promote it to production. We have made this process easy and seamless. Features include:
      1. Validating that the job will run successfully on the target master before promoting it. Examples include the checks mentioned in the next bullet.
      2. Implicitly performing validation before promoting, aka "pre-flight checks". These checks include:
        1. Validating that the Jenkins core versions on the test and production masters are compatible.
        2. Validating that plugins used on the test master are available on the production master.
      3. If a job is re-promoted, perhaps after a few fixes, the history of the job on the target master is preserved.
  2. Build cross-organizational and cross-master pipelines: trigger jobs across masters.
    1. This feature helps organizations build CD pipelines that span masters, enabling scenarios such as jobs on a Dev team's master triggering jobs on a QA team's master. Some of the features are:
      1. Integration with CloudBees Role-Based Access Control: jobs can only be triggered by employees with the right permissions.
      2. Ease-of-use features, such as a quick path browser to easily navigate to a downstream job when it is on a different master within a cluster.
  3. Improved UX, especially the getting-started experience:
    1. A very common ask is to make the Jenkins UI more modern; we have taken first steps to address this in our product. If you have opinions, positive or negative, we would like to hear them.
New features in CJP but delivered in OSS
At CloudBees, we wear two hats: Open Source and proprietary product :-). So some of the biggest features that we have delivered this semester actually landed in OSS (and hence are available both in open source and as part of the CloudBees Jenkins Platform).

Workflow Improvements
At the end of last year, CloudBees, together with the Jenkins community, delivered a substantial new sub-system in Jenkins: Jenkins Workflow. Jenkins Workflow helps build real-world pipelines programmatically. We have been busy since and have released multiple new versions (8, to be specific). Workflow 1.8 brings notable new features, including increased support for third-party plugins such as Copy Artifact, Credentials Binding, etc. They are now all supported as first-class citizens as part of a workflow definition. You can refer to the compatibility matrix for up-to-date information.

The following captures most of the plugin and feature changes (refer to the release notes for details):

Plugin support: Copy Artifact, Credentials Binding, Mercurial, Parallel Test Executor, Perforce

Improved Workflow features: load script from SCM (CI as source code), Safe Restart (restart Jenkins if a workflow isn't running), Rebuild (rebuild jobs with initial parameters), Build Token Root (securely trigger builds)

Improved DSL features: build step (get downstream build properties), waitUntil (wait for an external event before proceeding), sleep (pause for some time before proceeding), fileExists (check if a file exists before proceeding), withEnv (attach environment variables before proceeding), mail (send mail within a workflow step)
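As a hedged illustration of a few of these new DSL steps in action (the marker file name, environment variable, and commands are all hypothetical):

node {
 // Wait for an external event: poll until an approval marker appears in the workspace
 waitUntil {
   fileExists 'release-approved.marker'
 }
 // Attach environment variables before proceeding
 withEnv(['DEPLOY_TARGET=staging']) {
   sh 'echo "deploying to $DEPLOY_TARGET"'
 }
}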



The most interesting (well, at least to me :-)) is the ability to do CI-as-code with the "load script from SCM" feature. With this feature, developers can check their build script into the source code repository, point their Jenkins job at the repository, and Jenkins uses the script as its job configuration.
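As a hedged sketch, a script checked into the repository might look like the following (the repository URL and build targets are hypothetical); the Jenkins job then simply points at that repository:

node {
 // This script lives in the same repository as the sources it builds
 git 'https://git.example.com/myproject.git'
 sh 'ant dist-package'
 archive 'app.zip'
}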

The CloudBees and Jenkins community will continue to add support in Workflow for Jenkins plugins - so watch this space.

Continuous Delivery with Docker
I saved the best for last: a big effort from CloudBees, in conjunction with the community, has been providing first-class support for Docker for building continuous delivery pipelines. I have written a separate blog to call this feature set out.

Parting thoughts
I am pleased to see the breadth of solutions that we bring to the market today. It isn't often that a release includes partnerships and solutions across as wide a variety of domains as today's does. I am excited that we have pushed the boundaries by enabling modern, sophisticated pipelines with Jenkins and Docker.

What gets me most excited is the potential product and open source opportunities across CloudBees and Jenkins as we go ahead.

I would like to quote Robert Frost on behalf of CloudBees and Jenkins:
The woods are lovely, dark and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep.

Links
  1. The Docker and Jenkins White Paper 
  2. Jenkins and Docker Blog
  3. CJP Documentation
  4. Workflow Release Notes
  5. Workflow Compatibility Support




Harpreet Singh
Vice President of Product Management
CloudBees

Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter
Categories: Companies

Bringing Continuous Delivery to Cloud-Scale with Jenkins, Docker and "Tiger"

Tue, 06/23/2015 - 17:13
At JUC London I attended the Bringing Continuous Delivery to Cloud-Scale with Jenkins, Docker and "Tiger" talk by Kohsuke Kawaguchi and Harpreet Singh.

"Continuous Delivery", "Cloud" and "Docker" - all buzzwords in the - this talk premises to be of high interest - or just vapor-ware! - room was packed; Here are my live notes


Kohsuke and Harpreet introduced the "Tiger" project they are working on (one of them asking for more and more features, the other implementing them when he's not giving a talk at some conference - I'll let you guess who's who).

CloudBees is focusing on Continuous Delivery (noted "CD" hereafter for consistency). They used the Tesla car as an example: a Tesla can receive upgrades overnight to fix a technical issue identified on running cars the day before, letting users benefit from the latest fixes/features with minimal delay.

To reconcile Dev and Ops tools within a single workflow that embraces the whole continuous delivery process, the Workflow plugin is a key component offering better flexibility. Docker is another major brick in the Lego puzzle teams have to build to address the CD challenge: with lightweight isolation, it offers better reproducibility. A set of Docker-related plugins was announced at JUC DC. Combined, they allow you to package the app and its resources into containers and orchestrate their usage through the CD pipeline:

  • build and publish Docker images (with credentials support for private repositories)
  • listen to Docker Hub events so Jenkins triggers a build when an image is updated, ensuring everything is always up-to-date
  • Workflow support to make Docker images and containers first-class citizens in the Workflow DSL


Kohsuke gave a live demo of such an integration. He committed a fix to a demo project, which triggered a Jenkins build that published a fresh new Docker image. The Docker Hub notification then triggered the CD workflow to upgrade the production application with this up-to-date Docker image. Docker Traceability records Docker image fingerprints, so we can check which Docker image was used and which Jenkins build created it.
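As a hedged sketch of the build-and-publish half of that demo (the repository URL and image name are hypothetical, and .push assumes registry credentials are configured in Jenkins):

node('docker') {
 git 'https://git.example.com/demo-app.git'
 // Bake the fix into a fresh image and publish it, which in turn
 // fires the Docker Hub webhook that drives the downstream CD workflow
 def image = docker.build('example/demo-app:latest')
 image.push()
}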

Another demonstrated use case is managing build environments with Docker images. The Docker plugin lets you use Docker containers as Jenkins slaves. Docker Custom Build Environment lets you control this directly from the job configuration, or via a Dockerfile committed to your SCM side-by-side with the project source code.

Docker definitely is a major component in Jenkins' way of addressing the CD challenge. CloudBees is also working on addressing large-scale installations with support for Docker-shared execution within CloudBees Jenkins Operations Center. Harpreet also announced plans to deliver Kubernetes support in the next release. Operations Center is evolving to embrace multi-master installations, with "promotion" for jobs to be moved from one master to another, cross-master triggers, and other such multi-master interactions.

The CloudBees product line is evolving into the CloudBees platform: Team Edition for small teams, Enterprise Edition for larger installations, with "packs" for specific sets of additional features (Amazon support, for example), and a fresh new "Tiger" project - here we go - aka Jenkins-as-a-Service, dedicated to big companies.


DEV@Cloud already offers such a service, with thousands of Jenkins masters hosted on Amazon and an elastic build slave infrastructure. Tiger's goal is to offer the same experience behind the company firewall: multi-tenanted masters and slaves provisioned on-demand without administration hell. It is built on top of the CloudBees platform and so benefits from all the tooling provided by CloudBees Jenkins Enterprise (security, monitoring, visualization).
Kohsuke gave a quick demo of this new product. From the CloudBees Jenkins Operations Center web UI he provisioned a fresh new client master. Tiger manages the underlying infrastructure - based on Mesos and Docker containers - to find an adequate "box" to host this new instance and its storage bucket, and shares build resources the same way. Within a minute you get a fresh new Jenkins master set up, ready to host a team's jobs and builds. Tiger moves Jenkins to cloud-scale with such a multi-tenant, distributed solution.
So, Docker again. It seems this is not the Tiger I expected, but actually some Tiger Whale...


The Jenkins / Docker / Continuous Delivery story is just starting, and lots more features and tool integrations will come to offer simpler/better/faster (Daft Punk TM) Continuous Delivery.



Categories: Companies

Templating Jenkins Build Environments with Docker Containers

Tue, 06/23/2015 - 02:56

Builds often require that credentials or tooling be available to the slave node which runs them. For a small installation with few specialized jobs, this may be manageable using generic slaves, but when these requirements are multiplied by the thousands of jobs that many organizations run per day, managing and standardizing these slave environments becomes more challenging.
What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  
Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
Docker for bootstrapping and templating slaves
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which makes Docker containers among the most maintainable slave environments. A container's tooling and other configuration can be version-controlled in an environment definition called a Dockerfile, and a Dockerfile allows multiple identical containers to be created quickly from that definition, or more customized off-shoots to be created using that Dockerfile's image as a base.

The CloudBees Docker Custom Build Environment Plugin allows Docker images and files to serve as templates for Jenkins slaves, reducing the administrative overhead of a slave installation to updating a few lines in a handful of environment definitions for potentially thousands of slaves.
Building with Docker Containers
This plugin adds the option "Build inside a Docker container" to the build environment configuration of a job. To enable it, simply scroll to the "Build Environment" section of any Jenkins job and select the "Build inside a Docker container" option. You will then be able to specify whether a slave container should be created from a Dockerfile checked into the workspace (e.g. a file in the root of the project) or whether to pull an explicit image from a Docker registry to use as the slave container.

Customized slave environments
For generic builds, you can leverage the most popular Jenkins slave image on Docker Hub, evarga/jenkins-slave, or create a new image from a custom Dockerfile for any specialized build that requires build dependencies, such as credentials, to be available in the workspace.

To create a custom environment, you will need to create your own Docker slave image. This can be done by creating a new Dockerfile, or by running an existing slave image such as "evarga/jenkins-slave", installing the necessary custom tooling or credentials, and committing your changes to a new image.

To create a new image from a Dockerfile, you can simply edit the below copy of the "evarga/jenkins-slave" file, using the Dockerfile guidelines and reference:
FROM ubuntu:trusty
MAINTAINER Ervin Varga <ervin.varga@gmail.com>
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y openssh-server
RUN sed -i 's|session    required     pam_loginuid.so|session    optional     pam_loginuid.so|g' /etc/pam.d/sshd
RUN mkdir -p /var/run/sshd
RUN apt-get install -y openjdk-7-jdk
RUN adduser --quiet jenkins
RUN echo "jenkins:jenkins" | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Builds which run inside a Docker container are identifiable by the Docker icon displayed inline in a job's build history.

Where do I start?
  1. The CloudBees Docker Custom Build Environment Plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Other plugins complement and enhance the pipelines possible with this plugin. Read more about their use cases in these blogs:
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin


  3. More information can be found in the newly released Jenkins Cookbook




Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Triggering Docker pipelines with Jenkins

Tue, 06/23/2015 - 02:45

As our blog series has demonstrated so far, Docker containers have a variety of uses within a CD pipeline and an organization's architecture. Jenkins can package applications into Docker containers and track them through a build pipeline into production. Builds themselves can be run in Docker containers thanks to Jenkins Workflow and the Custom Build Environments plugin, guaranteeing standardized, isolated, and clean environments for build executions. Pools of Docker hosts can also be shared between Jenkins masters using the CloudBees Jenkins Platform, creating the redundancy needed to ensure enough slaves are always online and available for masters. Combined, these solutions offer a great way to manage and create a Docker architecture for Jenkins and other internal applications, but what happens when it's time to upgrade these runtimes for security updates or new application releases? With Docker, changes to the base image require a rebuild of all containers in production from the new base image. But before we get too far into what this means, let's first review what Docker is.

What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  
Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.

Docker Hub is a Docker image registry offered by Docker Inc. as both a hosted service and software for on-premise installations. Docker Hub allows images to be shared and pulled for use as containers or as dependencies for other Docker images. Docker containers can also be committed to Docker Hub as images to save them in their current state. Docker Hub is to Docker images what GitHub has become for many developers' code - an essential tool for version and access control.
When the music fades...
There will inevitably be a time when the painstakingly-crafted Docker images that your organization has created will need to be updated for whatever reason. While Docker is fun and popular, it isn't (yet) so magical that it eliminates this evergreen maintenance. However, these upgrades need not be painful, so long as they are tested and validated before being pushed to production. 

Jenkins can now trigger these tests and re-deploys using the CloudBees Docker Hub Notification plugin. This plugin allows any changes to images in Docker Hub to trigger builds within Jenkins, including slave re-builds, application packaging, application releases via Docker images, and application deployments via Docker containers.
Monitoring for changes with Docker Hub
This plugin adds a new build trigger to both standard Jenkins jobs and Jenkins Workflows. This trigger is called "Monitor Docker Hub for image changes" and allows Jenkins to track when a given Docker image is rebuilt, whether that image is simply referenced by the job or is in a given repository.


Once a job has been triggered, the build’s log will state what the trigger was (e.g. “triggered by push to <Docker Hub repo name>”). 
Docker Hub Hook Chaining
Docker Hub itself supports webhook chains, which you can read more about in Docker's webhook documentation. If you have added several webhooks for different operations, the callbacks are performed in a chain; if one hook higher up in the chain fails, the following webhooks will not be run. This can be useful when using the downstream Jenkins job as a QA check before performing any other operations based on the pushed image.

<jenkins-url>/jenkins/dockerhub-webhook/details will list all builds triggered by hook events and link you directly to each build, while Docker Hub's webhook will link back to the Jenkins instance. You can also push tags to the Docker Hub repository.

Docker Image Pulls
This plugin also adds a build step for pulling images from Docker Hub. This is a simple build step that does a “docker pull” using the specified ID, credentials, and registry URL, allowing the Docker image that triggered the build to be pulled into the workspace for testing.
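In a Workflow script, a hedged equivalent of this build step is a plain shell step on a Docker-capable slave (the image name is hypothetical):

node('docker') {
 // Pull the image that triggered the build so it can be tested locally
 sh 'docker pull example/myapp:latest'
}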
Where do I start?
  1. The CloudBees Docker Hub Notification Plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Other plugins complement and enhance the pipelines possible with this plugin. Read more about their use cases in these blogs:
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Custom Build Environment plugin

  3. More information can be found in the newly released Jenkins Cookbook



Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Traceability of Docker Images and Containers in Jenkins

Fri, 06/19/2015 - 14:48
Organizations are constantly striving to release software faster, to get their product into users' hands sooner, get feedback for improvement, and correct problems. Software is never going to be perfect in its first iteration, and the end users might actually want something different from what is produced. There is great value to the business in delivering new features and fixing bugs in a timely fashion - or in changing course entirely when required. Working on a minimum viable product (MVP) and using Agile practices, development teams can, in theory, produce a new working product at the end of every sprint. However, there is a big difference between continuously developing a product and continuously delivering that product to users.

Software is a world of interdependencies and all of those interdependencies have to be validated at various stages before a product is released. Are the external library files consistent? Is the database version the same? Are all the required packages installed on the target host OS? There are countless things that can go wrong when moving from development to testing to production.

Tools like Jenkins, Chef, and Puppet have helped to automate the flow of software through various stages and ensure a consistent environment. By continuously integrating all software dependencies and standardizing the configuration management of the environments, teams have reduced the number of variables in a delivery pipeline and eliminated potential problems allowing for more automation and, thus, expediting the delivery of the software.
The emergence of Docker and containers has further reduced the variables present in a delivery pipeline. With Docker, a single image can move from development to testing and finally to production without changing the application or the underlying configuration. As long as the Docker host is consistent then all containers with that image should work across all environment stages.

What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  
Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
The Interdependency Problem

The immutability of the Docker container goes a long way towards facilitating continuous delivery, but it does not completely solve the problem of interdependencies. Docker containers are built upon images, both parent and base images. An application can run on an Apache parent image with a base image of CentOS. These images, and the containers they are used in, are all uniquely identified and versioned to account for change over time, much like binary artifacts or gems.

In addition to image dependencies, an application is not always contained in a single container; Dockerized applications are increasingly deployed as microservices. As Martin Fowler describes, breaking up monolithic applications into discrete functional units that interoperate is a great way to help teams continuously deliver parts of an application without requiring a release cycle of the entire application and every team involved. Not only are there image dependencies, but we now have microservice dependencies. The level of abstraction has moved up a rung.
Traceability with Fingerprinting and Docker
Despite, or because of, all of the automation inherent in a continuous delivery pipeline, things still break. When they do, it is necessary to quickly identify and correct the problem across all of the dependencies that go into a running application. Visibility and traceability across all dependencies in an application are paramount to continuously delivering and running that application. To that end, Jenkins allows teams to track artifacts with a "fingerprint", letting users see what went into a build and where that build is being used. Combined with the Deployment Notification Plugin, this fingerprint can be used to track when and where a package has been deployed by Chef or Puppet. This traceability is very useful for both developers and operations. If a bug is found in development, it can be quickly traced to everywhere it has been deployed. Conversely, if a problem occurs in production, the operations team can easily find the deployed build in Jenkins and see all the components included.

The addition of the CloudBees Docker Traceability plugin now lets Jenkins extend this same traceability to Docker images, showing the build and deployment history of each container and the related images. This plugin requires the Docker Commons plugin, which provides the fingerprints for all Docker images, and it is available to everyone in the Jenkins community.



The CloudBees Docker Traceability plugin provides both an overall view, from the Jenkins sidebar, of all containers currently registered and deployed, and a detailed view of a container's build from the build page. The Docker image IDs are provided for all parent images and the base image used. In addition, a Docker image ID is searchable in Jenkins to quickly find where and when it is deployed and how and when it was built.

Using this information, it is possible to determine whether something changed in the code for a container, or whether one of the parent images or the base image of a container changed from one build to another, helping to determine the root cause of any problems in the overall application.

Where do I start?
  1. The CloudBees Docker Traceability Plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Other plugins complement and enhance the pipelines possible with this plugin. Read more about their use cases in these blogs:
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin
  3. More information can be found in the newly released Jenkins Cookbook

Patrick Wolf
Product Manager
CloudBees

Patrick Wolf is a product manager for CloudBees and is based in San Jose. 
Categories: Companies

Automating application releases with Docker

Fri, 06/19/2015 - 13:00
Many organizations struggle with releasing their applications, and this struggle has birthed an industry of tools designed to simplify the process. Release management tools allow a release process to be defined as stages in a pipeline, with each stage containing sequential steps to be performed before the next begins. Stages are segmented using approval gates to ensure that QA and release managers get the final say on whether an artifact is ready for the next stage in the release pipeline, and the entire process is tracked for reporting purposes.

The goal of such processes is to ensure that only high-quality releases are deployed into production and that they are released on time - and the release manager is responsible for it all.

An obstacle to a smooth release is the structural challenge of maintaining identical testing and production environments. When these environments differ, unexpected regressions can slip through testing and botch a release. Ideally, all environments will be identical and contain the same dependent libraries and tooling for the application, as well as the same network configurations.
What is Docker?

Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  

Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.

Cookie-cutter environments and application packaging
The versatility and usability of Docker have made it a popular choice among DevOps-driven organizations. They have also made Docker an ideal choice for creating the standardized and repeatable environments that an organization needs, both for creating identical testing and production environments and for packaging portable applications.

If an application is packaged in a Docker image, testing and deploying it is a matter of creating a container from that image and running tests against the application inside. If the application passes the tests, the image should be stored in a registry and eventually deployed to production.

Automating the release
According to Forrester Research, the top pains of release management are a lack of visibility into the release management process and the process' lack of automation.


However, the testing, deploying, and releasing stages of these pipelines can be orchestrated by Jenkins using the CloudBees Docker Build and Publish plugin. This plugin creates a new build step for building and packaging applications into Docker images, as well as publishing them as images to both private and public Docker registries like Docker Hub.

Testing and QA
Applications packaged in Docker images can be tested by running them as containers. Docker allows containers to be linked, granting the linked container shell access and allowing it to run scripts against the application's container. This link can also be made between the Docker application container and another container packaging a service the application needs to run against, such as a test database, for a true integration test.
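As a hedged sketch of such an integration test using the Docker Workflow DSL (the image names, link alias, and test command are all hypothetical):

node('docker') {
 // Start a throwaway database container for the application to test against
 docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes') { db ->
   // Run the tests inside the application's container, linked to the database
   docker.image('example/myapp-test').inside("--link ${db.id}:db") {
     sh 'ant integration-test -Ddb.host=db'
   }
 }
}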

Promotion and Release
Jenkins supports the concept of promotion, where tested and approved artifacts are promoted to the next stage in a pipeline. Promotion is compatible with both traditional Jenkins jobs and the new Jenkins Workflow, and promotions can be set to trigger only if manually approved by particular users or team members.

In this case, the artifact is a Docker image containing our application; once it is promoted, it can be manually or automatically moved to the next stage of its pipeline. The next stage can range from a pre-production staging area to a registry like Docker Hub, where the promoted image is known as a "Gold" image, ready for deployment.
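A hedged sketch of the push that such a promotion can trigger, in the Workflow DSL (the image name is hypothetical, and .push assumes registry credentials are configured in Jenkins):

node('docker') {
 // Push the approved image to the registry as the deployable "Gold" image
 docker.image('example/myapp:1.0').push()
}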

The promotion can also trigger any other number of pre-release actions, such as notifications and sending data about the artifact to a company dashboard.

Where do I start?
  1. The CloudBees Docker Build and Publish plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform. 
  2. More information can be found in the newly released Jenkins Cookbook
  3. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Slaves with the CloudBees Jenkins Platform
    2. Jenkins Docker Workflow DSL
    3. Docker Traceability
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin


Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Disaster-proofing slaves with Docker Swarm and the CloudBees Jenkins Platform

Fri, 06/19/2015 - 13:00


Standardizing build environments is a best practice for improving Jenkins resiliency, since generic and easily replaceable build environments reduce the impact of an outage within a build farm. When slaves are configured from a standardized template and Jenkins jobs are configured to install required tooling at runtime, any slave in a given pool can seamlessly take on a downed slave's workload. This concept is known as "fungible slaves", a term coined by Andrew Bayer at a Jenkins User Conference.

The problem with such a setup is not the setup itself but the process of achieving it. Configuring a machine to act as a slave inside your infrastructure can be tedious and time-consuming, especially when the same setup has to be replicated across a large pool of slaves.

Tools for configuration management or a pre-baked image can be excellent solutions to this end, and containers and virtualization are also popular tools for creating generic slave environments. Containerization has risen to prominence for this purpose, with Docker being the fastest-rising in popularity and ultimately the most popular among Jenkins users.
What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. Docker containers can be created from a read-only template called a Docker image.  

Docker Swarm is a clustering tool for Docker which unites Docker pools into a single virtual host.

Scalable and resilient slaves
Docker images are an easy way to define a template for a slave machine, and Docker containers are lightweight enough to perform almost as well as a "bare metal" machine, making Docker a good candidate for hosting fungible slaves.


But what happens when an organization has scaled horizontally, with many masters in their installation and each needing their own slave pool?

The open-source Docker plugin allows masters to create and automatically tear down slaves in a Docker installation, but the configuration for connecting to the Docker host has to be re-created on every existing Jenkins master, and again whenever a new master is onboarded.

Multiple Docker hosts may also exist, so to ensure the most efficient use of these resources, all Docker hosts in an organization should be pooled together, with slave containers run on an otherwise idle host where possible to maximize the performance of the container (similar to the logic behind the Even Scheduler plugin). This sharing between Docker hosts allows masters' jobs to be built with minimal queue time and prevents some hosts from sitting idle while others are overloaded.

This pooling is possible with Docker Swarm, but job scheduling and Swarm configuration sharing still require special integrations with Jenkins.

Shared Docker Cloud Configuration
As part of the next release of the CloudBees Jenkins Platform, Docker Swarm may be configured as a slave pool whose configuration can be shared between all managed or "client" masters in an organization. This removes the pain of having to configure a Swarm for each master and of updating all such configurations should the installation change in any way (location, FS, max containers, etc.).

Like other shareable clouds, the Docker Swarm cloud can be created as an object in CloudBees Jenkins Operations Center.

From there, you can configure the location of the Docker Swarm host, any credentials for connecting to it, as well as which Docker image(s) should be used for creating slaves and how many containers should be up at any given time.

Where do I start?
  1. This feature is included in CloudBees Jenkins Operations Center, part of the CloudBees Jenkins Platform. Contact sales@cloudbees.com for more information.
  2. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish plugin
    2. Jenkins Docker Workflow DSL
    3. Docker Traceability
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin

Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Building modern, real world software delivery pipelines with Jenkins and Docker

Thu, 06/18/2015 - 13:00
TL;DR: This blog outlines the key use cases enabled by the newly released Docker plugins in the Jenkins community. You can drill into more depth with an in-depth blog for each use case. The CloudBees team has actively worked within the community to release these plugins.

Lately, I have been fascinated by how lean manufacturing radically improved the production of goods - the key being a fully orchestrated, automated delivery pipeline. We are at the "lean" inflection point in software computing history, where lightweight containers (viz. Docker) and Jenkins will bring rapid improvements in software delivery. I suggest that you read more on the how/why in Kohsuke's White Paper.

The executive summary of this White Paper is that Docker provides a common currency between Dev and Ops teams for expressing environments, and Jenkins provides the marketplace, through orchestration with Workflow, whereby these currencies are exchanged between teams.

The CloudBees team has been at the forefront of these changes through our role in the Jenkins community. Our team members have seen, and often contributed to, requests for enhancements with Jenkins and Docker as the industry pokes its way through this new era. This experience has helped us capture the canonical use cases that help deliver modern pipelines. Today, I am happy to announce the general availability of a number of Docker plugins in OSS that help organizations adopt CD at scale with Jenkins and Docker.
There are two primary meta-use cases that these plugins help you tackle:

Meta-Use-Case 1: Constructing CD pipelines with Jenkins and Docker
Let's construct a simplified pipeline; the steps outlined below increase in sophistication. Jenkins Credit Union (JCU) has a Java web application that is delivered as a .war file and runs on a Tomcat container.
  1. In the simplest use case, both the application binary and the middleware (the .war and Tomcat) are built independently as Docker containers and "baked" into one container, which is finally pushed to a registry (the company "Gold" Docker image). The Docker Build and Publish plugin can be used to achieve this goal, giving Jenkins the ability to build and package applications into Docker containers, as well as publish them as images to both private and public Docker registries like Docker Hub.
  2. Now, the JCU team wants to hand this container to the QA team for the "TESTING" stage. The QA team pulls the container and tests it before pushing it downstream. You can extend the chain of deliveries to "STAGING" and "PRODUCTION" stages and teams. In this case, the JCU team can either chain jobs together or use the Jenkins Docker Workflow DSL (ignore this for the moment) to build the pipeline.
  3. Everything's going fine and peachy, until...the JCU security team issues a security advisory about the Tomcat Docker image. The JCU security team updates the Tomcat Docker image and pushes it to the Docker Hub registry. At this point, the Dev job that "baked" the image is automatically tickled and builds a new image (application binary + middleware) without any human input. The tickle is achieved through the Docker Hub Notification plugin, which lets Docker Hub trigger application and slave-environment builds. The QA job is triggered after the bake process as part of the pipeline execution.
  4. Despite all the testing possible, the Ops team discovers that there is a bug in the application code, and they would like to know which component team is responsible for the issue. The Ops team uses the Docker Traceability plugin to let Jenkins know which bits have been deployed in production; this plugin lets them find the build that caused the issues in production. (A sketch of steps 1-3 as a single workflow follows this list.)
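As a hedged sketch, steps 1-3 could be strung together in a single Workflow script (the repository URL, image name, and downstream job name are hypothetical, and .push assumes registry credentials are configured in Jenkins):

node('docker') {
 git 'https://git.example.com/jcu-webapp.git'
 // Build the .war and bake it together with the middleware into one image
 sh 'ant dist-package'
 def app = docker.build('jcu/webapp:latest')
 // Publish the company "Gold" image to the registry
 app.push()
 // Hand off to the QA team's downstream job for the TESTING stage
 build 'webapp-qa-tests'
}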

I mentioned above that we would ignore Workflow initially - let's get back to it now.

Most real-world pipelines are more complex than the canonical BUILD->TEST->STAGE->PRODUCTION flow - Jenkins Workflow makes it possible to implement those pipelines. The Jenkins Docker Workflow DSL provides first-class support within Workflow to address the above use cases as part of an expressed workflow. Once implemented, the workflow becomes executable, and once executed, it becomes possible to visualize which runs are successful and which are not, where the problems are located, and so on. The red/green stage visualization is the Workflow Stage View feature that is available in the CloudBees Jenkins Platform.
The above steps lay out a canonical use case for building pipelines with Jenkins. The examples can get more sophisticated if you bring the full power of Workflow - and the ability to kick off connected Docker containers through Docker Compose - to bear.

Meta-Use-Case 2: Providing build environments with Docker
JCU has multiple departments, and each of these departments has its own Jenkins master and corresponding policies for how build environments can be set up.
Use case 1: The "PRODUCTION" team of the e-banking software requires that all builds happen in sanitized and locked-down build environments. They can use the Docker Slaves feature of the CloudBees Jenkins Platform to lock down these environments and provide them to their teams. This not only makes sure that those build/test environments will always be clean, but also provides increased isolation, as no build executing on a given machine will have access to the other Jenkins jobs concurrently executing on that same machine but in a different Docker container.
JCU is also using the CloudBees Jenkins Platform to manage multiple masters, so they can use "Shared Configuration" to share these slaves across all client masters.
Use case 2: The CTO team wants the flexibility to have custom environments for working with custom stacks. The Docker Custom Build Environment plugin allows Docker images and files to serve as templates for Jenkins slaves, reducing the administrative overhead of a slave installation to updating a few lines in a handful of environment definitions for potentially thousands of slaves.

In this way, the overhead involved in maintaining hundreds or even thousands of slaves is reduced to changing a few lines in the company's Docker slave Dockerfile.
Closing thoughts
The above set of use cases and corresponding plugins pushes the boundary for Continuous Delivery within organizations. As experience with Docker grows, the Jenkins community will continue building out features to keep up with the requirements.
I hope you have fun playing with all the goodies just released.
Where do I start?
  1. All the plugins are open source, so you can install them from the update center, or you can install the CloudBees Jenkins Platform to get them quickly.
  2. Read more about the impact of Docker and Jenkins on IT in Kohsuke's White Paper
  3. Read more about the use cases in the blogs:
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin
  4. More information can be found in the newly released Jenkins Cookbook
  5. Read all of our CloudBees Jenkins Platform plugin documentation

Harpreet Singh
Vice President of Product Management
CloudBees

Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter
Categories: Companies

Orchestrating Workflows with Jenkins and Docker

Thu, 06/18/2015 - 13:00
Most real-world pipelines are more complex than the canonical BUILD→TEST→STAGE→PRODUCTION flow. These pipelines often have stages which should not be triggered unless certain conditions are met, while others should trigger only if the first's conditions fall through. Jenkins Workflow helps write these pipelines, allowing complex deployments to be better represented and served by Jenkins. The Jenkins Workflow Docker plugin extends these workflows even further to provide first-class support for Docker images and containers. This plugin allows Jenkins to build/release Docker images and leverage Docker containers for customized and reproducible slave environments.

What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  

Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
What is Workflow?
Jenkins Workflow is a new plugin which allows Jenkins to treat continuous delivery as a first-class job type in Jenkins. Workflow allows users to define workflow processes in a single place, avoiding the need to coordinate flows across multiple build jobs. This can be particularly important in complex enterprise environments, where work, releases and dependencies must be coordinated across teams. Workflows are defined as a Groovy script, either within a Workflow job or checked into the workspace from an external repository like Git.

Docker for simplicity
In a nutshell, the CloudBees Docker Workflow plugin adds a special entry point named docker that can be used in any Workflow Groovy script. It offers a number of functions for creating and using Docker images and containers, which in turn can be used to package and deploy applications or as build environments for Jenkins.

Broadly speaking, there are two areas of functionality: using Docker images of your own, or created by the worldwide community, to simplify build automation; and creating and testing new images. Some projects will need both aspects and you can follow along with a complete project that does use both: see the demonstration guide.

Jenkins Build Environments and Workflow
Before getting into the details, it is helpful to know the history of configuring build environments in Jenkins. Most project builds have some kind of restriction on the computers which can run them. Even if a build script (e.g. an Ant build.xml) is theoretically self-contained and platform-independent, you have to start somewhere and say what tools you expect to use.

Since a lot of people needed to do this, back in 2009 I worked with Kohsuke Kawaguchi and Tom Huybrechts to add a facility to Jenkins' predecessor - Hudson - for "tools". Now a Jenkins administrator can go to the system configuration page and say that Ant 1.9.0 and JDK 1.7.0_67 should be offered to projects which want them, downloaded and installed from public sites on demand. From a traditional job, this becomes a pulldown option in the project configuration screen; from a Workflow, you can use the tool step:

node('libqwerty') {
 withEnv(["PATH=${tool 'Ant 1.9.0'}/bin:${env.PATH}"]) {
   sh 'ant dist-package'
 }
 archive 'app.zip'
}

While this is a little better, it still leaves a lot of room for error. What if you need Ant 1.9.3 - do you wait for a Jenkins administrator? If you want to scale up to hundreds of builds a day, who is going to maintain all those machines?

Clear, reproducible build environments with Docker
Docker makes it very easy for the project developer to try a stock development-oriented image on Docker Hub or write a customized one with a short Dockerfile:

FROM webratio/ant:1.9.4
RUN apt-get install libqwerty-devel=1.4.0

Now the project developer is in full control of the build environment. Gone are the days of "huh, that change compiled on my machine"; anyone can run the Docker image on their laptop to get an environment identical to what Jenkins uses to run the build.

Unfortunately, if other projects need different images, the Jenkins administrator will have to get involved again to set up additional clouds. There is also the annoyance that, before using an image, you will need to tweak it a bit to make sure it is running the SSH daemon with a predictable user login, and a version of Java new enough to run Jenkins slaves.

What if all this hassle just went away? Let us say the Jenkins administrators guaranteed one thing only:

If you ask to build on a slave with the label docker, then Docker will be installed.

and proceeded to attach a few dozen beefy but plain-vanilla Linux cloud slaves. With CloudBees Docker Workflow, you can use these build servers as they come.

// OK, here we come
node('docker') {
 // My project sources include both build.xml and a Dockerfile to run it in.
 git 'https://git.mycorp.com/myproject.git'
 // Ready?
 docker.build('mycorp/ant-qwerty:latest').inside {
   sh 'ant dist-package'
 }
 archive 'app.zip'
}

Embedded in a few lines of Groovy instructions is a lot of power. First we used docker.build to create a fresh image from a Dockerfile definition. If you are happy with a stock image, there is no need for even this:

node('docker') {
 git 'https://git.mycorp.com/myproject.git'
 docker.image('webratio/ant:1.9.4').inside {
   sh 'ant dist-package'
 }
 archive 'app.zip'
}

docker.image just asks to load a named image from a registry, in this case the public Docker Hub. .inside asks to start the image in a new throwaway container, then run other build steps inside it, so Jenkins is really running docker exec abc123 ant dist-package behind the scenes. The neat bit is that your single project workspace directory is transparently available both inside and outside the container, so you do not need to copy sources in, nor copy build products out. The container does not need to run a Jenkins slave agent, so it need not be "contaminated" with a Java installation or a jenkins user account.

Easily adaptable CD pipelines

The power of Workflow is that structural changes to your build are just a few lines of script away. Need to try building the same sources twice, at the same time, in different environments?

def buildIn(env) {
 node('docker') {
   git 'https://git.mycorp.com/myproject.git'
   docker.image(env).inside {
     sh 'ant dist-package'
   }
 }
}
parallel older: {
 buildIn 'webratio/ant:1.9.3'
}, newer: {
 buildIn 'webratio/ant:1.9.4'
}

Simplified application deployments
So far everything I have talked about assumes that Docker is “just” the best way to set up a clear, reproducible, fast build environment, but the main use for Docker is to simplify deployment of applications to production. We already saw docker.build creating images, but you will want to test them from Jenkins, too. To that end, you can .run an image while you perform some tests against it. And you can .push an image to the public or an internal, password-protected Docker registry, where it is ready for production systems to deploy it.
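To make that concrete, here is a minimal sketch of a test-then-push flow. The registry URL, credentials ID, and smoke-test script are hypothetical placeholders, not names defined by the plugin:

node('docker') {
  git 'https://git.mycorp.com/myproject.git'
  // Build an image from the Dockerfile in the workspace, as before.
  def image = docker.build('mycorp/myapp:latest')
  // Start a throwaway container from the image and test against it.
  def container = image.run()
  try {
    sh './smoke-test.sh' // hypothetical test script checked into the repository
  } finally {
    container.stop() // always clean up the container, pass or fail
  }
  // Push the tested image to a password-protected registry
  // (the URL and credentials ID here are placeholders).
  docker.withRegistry('https://registry.mycorp.com/', 'registry-login') {
    image.push('latest')
  }
}

Because the build, test, and publish phases live in one script, the whole pipeline stays under source control alongside the application.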
Try it yourself

Look at the demo script to see all of the above-mentioned use cases in action. The demo highlights that you can use multiple containers running concurrently to test the interaction between systems.

In the future, we may want to build on Docker Compose to make it even easier to set up and tear down complex assemblies of software, all from a simple Jenkins Workflow script making use of freestanding technologies. You can keep that flow script in source control, too, so everything interesting about how the project is built is controlled by a handful of small text files.

Closing thoughts

By this point you should see how Jenkins and Docker can work together to empower developers to define their exact build environment and reliably reproduce application binaries ready for operations to use, all with minimal configuration of Jenkins itself.

Download CJE 15.05 or install CloudBees Docker Workflow on any Jenkins 1.596+ server and get started today!

Where do I start?
  1. The CloudBees Docker Workflow plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Documentation on this plugin is available in the CloudBees Jenkins Platform documentation
  3. A more technical version of this blog is available on the CloudBees Developer Blog
  4. More information on this feature is available in the new Jenkins Cookbook
  5. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs.
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Docker Traceability
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin

Jesse Glick
Developer Extraordinaire
CloudBees

Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse on the Meet the Bees blog post about him and follow him on Twitter.
Categories: Companies

A guide to cutting-edge Jenkins and continuous delivery

Thu, 06/18/2015 - 13:00


[Image: jenkins-chef.jpg. Original source: http://cliparts.co/clipart/2478772]

Jenkins is the leading CI and CD server in the world, dominating the market with a solid 70% share and boasting over 111,000 active installations around the world running 5,190,252 build jobs as of 2015. That represents 34% growth in active installations and 69% growth in running build jobs. In other words, Jenkins is blowing up.

With such explosive growth also comes some pains. One of the great challenges of Jenkins is keeping up with its latest and greatest features, of which there are many given the 1,000+ plugins available today. It’s a Herculean effort, but it’s one that CloudBees customers have unanimously been itching for us to undertake.

Some of this knowledge already exists in the Jenkins community, but a lot of it has also been floating around CloudBees as “tribal knowledge”. At CloudBees, we have seen a rainbow of plugins, use cases, installation sizes, and pipelines, so much so that our engineering, support, and solutions architect teams have been maintaining their own internal guides and knowledgebases on what works and what doesn’t.

As much as I'm sure the support team enjoys giving these recommendations over Zendesk, that is not the most efficient way to disseminate them, and leaving them formally undocumented walls this information off from the rest of the Jenkins community.

Given all of this, CloudBees is proud to announce the release of the first version of our use-cases and best practices eBook -- Jenkins Cookbook: Cutting Edge Best Practices for Continuous Delivery. This guide will cover information that Jenkins users of all levels will appreciate, from hardware and sizing recommendations for a Jenkins installation to guidelines on leveraging Jenkins Workflow with Docker for continuous delivery.

As this guide evolves in its next few releases, it will expand to cover topics such as security, Jenkins cloud installations, mobile development, and guidelines for optimizing builds.

Where do I start?
  1. Jenkins Cookbook: Cutting Edge Best Practices for Continuous Delivery is available here and as a PDF
  2. To make a topic request for future editions, email me at tkennedy@cloudbees.com


Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

JUC Speaker Blog Series: Martin Hobson, JUC U.S. East

Wed, 06/17/2015 - 01:05
After months of planning, the Jenkins community is very proud to present the 2015 Jenkins User Conference World Tour! JUC U.S. East is the first of the four conferences in the circuit, beginning on Thursday, June 18 in the nation's capital, Washington, D.C.! Registration opens bright and early at 7:00 AM, with a keynote address by Kohsuke Kawaguchi, founder of the Jenkins project.

To celebrate JUC U.S. East, the last U.S. East speaker blog post was published on the jenkins-ci.org blog. Martin Hobson has been using Jenkins for a while now in his four-person software development team. But once he started working with a much larger group, he realized that operating at a large scale is much different and much more complex than he thought. In his lightning talk, "Visualizing VM Provisioning with Jenkins and Google Charts” Martin will teach you how to instrument and analyze complex builds in large scale environments.

Read his full blog post here. If you want to attend Martin's talk, you can still register for JUC U.S. East!



Thank you to the sponsors of the Jenkins User Conference World Tour:

Categories: Companies

JUC Europe speaker blog post from Stephan Hochdörfer

Tue, 06/16/2015 - 17:29
This week, in only a couple of days, the Jenkins butler will begin his journey around the globe. JUC U.S. East begins this Thursday (!!) and JUC Europe follows shortly after, with Day 1 on June 23. As you can imagine, everyone is excited! All of the speakers are looking forward to their sessions, including Stephan Hochdörfer, who will present his talk "Jenkins for PHP Projects" on Day 1 of JUC Europe. bitExpert moved to Jenkins about five years ago and continues to use it daily for its continuous integration needs. Attend his talk to learn more, and read his full blog post from the Jenkins Speaker Blog Series on jenkins-ci.org.

Still need your ticket to JUC? The dates are coming up very quickly! Register for a JUC near you.

Thank you to the sponsors of the Jenkins User Conference World Tour:
Categories: Companies

JUC Speaker Blog Series: Damien Coraboeuf, JUC Europe

Fri, 06/12/2015 - 16:57
The Jenkins User Conference 2015 World Tour is quickly approaching. JUC U.S. East is next week, June 18-19 and JUC Europe is shortly after on June 23-24. JUC Israel is a little further, on July 16, and JUC U.S. West isn't until the fall: September 2-3 (so you still have plenty of time to register for those conferences!)
The JUC Speaker Blog Series on jenkins-ci.org is still going strong! This week the community published an entry by Damien Coraboeuf, Continuous Delivery Expert at Clear2Pay. His team ran into a major problem: there was no way for them to manually maintain thousands of jobs on a day-to-day basis. Read his full post here and attend his talk at JUC Europe to learn about the solution they came up with and how they implemented it...with Jenkins!

Still need your ticket to JUC? If you register with a friend you can get two tickets for the price of one! Register for a JUC near you.

Thank you to the sponsors of the Jenkins User Conference World Tour:

Categories: Companies

Multi-tenancy with Jenkins

Tue, 06/09/2015 - 06:17
Overview

As your Jenkins use increases, you will likely extend your Jenkins environment to new team members, and perhaps to new teams or departments altogether. It's quite a common trend, for example, to begin using Jenkins within a development team, then extend it to a quality assurance team to automate tests for the applications built by the development teams. Or perhaps your company is already using Jenkins and your team (a DevOps or shared-tooling kind of team) has a mission to implement Jenkins as a shared offering for a larger number of teams.

Regardless, the expansion is a sign your teams are automating more of their development process, which is a good thing. It should go without saying: organizations are seeing a lot of success automating their development tool chains with Jenkins, allowing their teams to focus on higher-value, innovative work and reducing time wasted on mundane tasks.

No one wants this, after all (no dev managers or scrum masters, anyway):

[Comic source: xkcd.com/303/]

At the same time, if not planned carefully, an expansion meant to extend those successes to more teams can have unintended consequences, leading to bottlenecks, downtime, and pain. Beyond avoiding the pain, there are also proactive steps you can take to further increase your efficiency along the way.

What is multi-tenancy?

For the purposes of this blog post, let's define multi-tenancy for Jenkins: multi-tenancy with Jenkins means supporting multiple users, teams, or organizations within the same Jenkins environment and partitioning the environment accordingly.

Why go multi-tenant?

You might ask: "Jenkins is pretty easy to get up and running; why not just create a new Jenkins instance?" To some extent, I agree! Jenkins is as simple as java -jar jenkins.war, right? That may be true, but many teams are connected in one way or another… if two related but distinct teams or departments work on related components, it's ideal that they have access to the same Jenkins data.

Implementing Jenkins - at least, implementing it well - takes some forethought. While it is indeed easy to spin up a new Jenkins instance, if an existing team already has a great monitoring strategy in place or a well-managed set of slave nodes attached to its Jenkins instance, reusing that well-managed instance is a good place to start. I mean, who wants to wear a pager on the weekend for Jenkins, anyway?

Establishing an efficient strategy for Jenkins re-use in an organization can help reduce costs, increase utilization, enhance security, and ensure auditability/traceability/governance within the environment.

What features can I use to set up multi-tenancy?

As you begin to scale your Jenkins use, there are a number of existing features available to help:

  • Views
    • The views feature in the Jenkins core allows you to customize the lists of jobs and tabs on the home screen, giving users a better experience on a multi-tenant Jenkins instance.

  • Folders
    • The Folders plugin, developed in-house at CloudBees, is even more powerful than views for optimizing your Jenkins environment for multi-tenancy. Unlike views, Folders actually create a new context for Jenkins.

    • This new context allows, for example, creating folder-specific environment variables. From the documentation: "You can [also] create an arbitrary level of nested folders. Folders are namespace aware, so Job A in Folder A is logically different than Job A in Folder B".
  • Distributed Builds
    • If you're not already using Jenkins distributed builds, you should be! With distributed builds, Jenkins can execute build jobs on remote machines (slave nodes) to preserve the performance of the Jenkins web app itself.

    • If you extend your Jenkins environment to additional teams, all the more reason to focus on preserving the master's performance.

    • Even better, distributed builds allow you to set up build nodes capable of building the various types of applications your distributed teams will likely require (Java, .NET, iOS, etc.)

  • Cleaning Up Jobs
    • When the Jenkins environment is shared, system cleanup tasks become more critical.

    • Discarding Old Builds and setting reasonable timeouts for builds will help ensure your build resources are available to your teams.

  • Credentials API
    • Jenkins allows credentials to be managed and shared across jobs and nodes. Credentials can be set up and secured at the folder level, allowing team-specific security settings and data; the sketch after this list shows how a job can combine slave labels, build timeouts, and folder-scoped credentials.
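As a minimal sketch of several of these features from a single Workflow script: the label expression, repository URL, credential ID, and publish script below are hypothetical placeholders, and the 60-minute timeout is an arbitrary example.

node('linux && java') {
  // Run on a shared slave pool selected by label, keeping the master free of build work.
  timeout(time: 60, unit: 'MINUTES') {
    // The timeout aborts runaway builds so shared executors return to the pool.
    git 'https://git.mycorp.com/myproject.git'
    sh 'ant dist-package'
    // Bind a username/password credential defined at the team's folder level.
    withCredentials([[$class: 'UsernamePasswordMultiBinding',
                      credentialsId: 'team-a-artifactory',
                      usernameVariable: 'REPO_USER',
                      passwordVariable: 'REPO_PASS']]) {
      sh './publish.sh' // hypothetical script reading REPO_USER/REPO_PASS from the environment
    }
  }
}

Because the credential is defined on the team's folder, jobs outside that folder cannot resolve its ID, which keeps team-specific secrets partitioned even on a shared master.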

Stressing the multi-tenancy model

As you scale your Jenkins use, you will find there are some stress points where it can be... less than ideal to share a single Jenkins master across teams:

  • Global configuration for plugins
    • Some plugins support only global configuration. For example, the Maven plugin's build step default options are global. Similarly, the Subversion SCM plugin's version configuration is a global setting.

    • If two teams want to use the same plugin differently, there aren't many options (even worse: different versions of the same plugin).

  • Plugin Installation and Upgrades
    • While Jenkins allows plugins to be installed without a restart, some plugins do require a restart on install. Further, all plugins require a Jenkins restart on update.

    • Some plugins have known performance, backward compatibility, and security limitations. These may be acceptable for one team, but perhaps not all your users.

  • Slave Re-use
    • When multiple teams use the same slaves, they usually share access to them. As mentioned above, care must be taken to clean up slave nodes after executing jobs.

    • Securing access for sensitive jobs or data in the workspace is a challenge.

  • Scale
    • Like any software application, a single Jenkins master can only support so many builds and job configurations.

    • While the actual maximum is heavily environment-specific (available system resources, number and nature of jobs, etc.), Jenkins tends to perform best with no more than 100-150 active, configured executors.

    • While we've seen some Jenkins instances with 30,000+ job configurations, Jenkins will need more resources, and start-up times will increase, as the job count grows.

  • Single Point of Failure
    • As more and more teams use the same Jenkins instance, the impact of an outage becomes larger.

    • When Jenkins needs to be restarted for plugin updates or core upgrades, more teams are affected.

    • As teams rely more and more on Jenkins, particularly for automating processes beyond development (e.g. QA, security, and performance test automation), downtime becomes less acceptable.

Tipping Point

Hopefully this article saves you some time by laying out the stress points you'll encounter when setting up multi-tenancy in Jenkins. Eventually, you'll reach a tipping point where running a single, large multi-tenant Jenkins master may no longer be worth it. For that reason, we recommend developing a strategy for taking your multi-tenancy approach to the next level: creating multiple Jenkins masters.

For each organization the answer is a little different, but CloudBees recommends establishing a process for creating multiple Jenkins masters. In a follow-up post, we'll highlight how the CloudBees Jenkins Platform helps manage multiple Jenkins masters. With CloudBees Jenkins Operations Center, your multi-tenancy strategy simply expands to cover masters as well, making your Jenkins masters part of the same Jenkins platform. We'll also share some successful strategies (and some not-so-successful strategies) for determining when to split your masters.

Categories: Companies

JUC Speaker Blog Series: Will Soula, JUC U.S. East

Mon, 06/08/2015 - 21:42
This year will be Will Soula's third time presenting at a Jenkins User Conference, his fourth year as an attendee, and his first time at a JUC on the East Coast! In his presentation this year, Will will talk about what Drilling Info uses to bring its entire organization together: ChatOps, which lets everyone come together, chat, and learn from each other in the most efficient way.
This post on the Jenkins blog is by Will Soula, Senior Configuration Management/Build Engineer at Drilling Info. If you have your ticket to JUC U.S. East, you can attend his talk "Chat Ops and Jenkins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get two tickets for the price of one! Register for a JUC near you.

Thank you to the sponsors of the Jenkins User Conference World Tour:

Categories: Companies

JUC East Speaker Blog: Andrew Phillips, XebiaLabs

Wed, 06/03/2015 - 22:45
At JUC U.S. East, Andrew Phillips will talk about the elephant in the room when it comes to continuous delivery: automated testing! Automated testing is critically important, but as your software delivery cycles become faster and faster, it becomes more difficult to keep track of all of your results. Andrew will show you how to manage your automated testing more easily with Jenkins.

This post on the Jenkins blog is by Andrew Phillips, VP of Products at XebiaLabs. If you have your ticket to JUC U.S. East, you can attend his talk "How to Optimize Automated Testing with Everyone's Favorite Butler" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to the sponsors of the Jenkins User Conference World Tour:

Categories: Companies

Jenkins User Conference U.S. East Speaker Highlight: Peter Vilim

Wed, 06/03/2015 - 21:52
In his presentation, Peter will focus on developing Jenkins plugins. He hopes to share the Jenkins experiences he has had at Delphix and in graduate school.
Even if you do not plan to write your own plugins any time soon, attending this talk will teach you what makes plugins work and how to better evaluate which plugins to pick for your own Jenkins projects.

This post on the Jenkins blog is by Peter Vilim, Member of Technical Staff at Delphix. If you have your ticket to JUC U.S. East, you can attend his talk "Providing a First Class User Experience with Jenkins Plugins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.


Thank you to the sponsors of the Jenkins User Conference World Tour:


Categories: Companies

Jenkins User Conference Europe Speaker Highlight: Nobuaki Ogawa

Mon, 06/01/2015 - 23:10
From the very first time Nobuaki Ogawa used Jenkins CI, he knew it would change everything! Since then, almost all the work he has completed in the last year was done with the help of continuous delivery with Jenkins.

Jenkins takes care of everything he needs.

In his lightning talk, "Jenkins Made Easy," Nobuaki will share a basic case of implementing continuous delivery with Jenkins. Read his blog post on the jenkins-ci.org website to learn more about his talk!

Already have your ticket to JUC Europe? If so, attend Nobuaki's lightning talk "Jenkins Made Easy" on Day 2.

If you still need your ticket to JUC, you can register with a friend to get 2 tickets for the price of 1! Register here for any of the Jenkins User Conferences.


Thank you to the sponsors of the Jenkins User Conference World Tour:

Categories: Companies

cdSummit News!

Mon, 06/01/2015 - 17:32

cdSummit is almost here! On June 18-19, the Continuous Delivery Summit World Tour will begin with its first stop in Washington, D.C.

On the new and improved cdSummit U.S. East page, you can see the entire agenda for each day of the conference. On Day 2, Gene Kim, author of The Phoenix Project, will present the keynote address to the attendees of cdSummit and the concurrently running Jenkins User Conference. He will also present his keynote at cdSummit U.S. West.
For his keynote address, "Top DevOps Enterprise Adoption Patterns: A Fifteen Year Study Of High Performing IT Organizations," Gene will discuss the findings of his study and how DevOps isn't just for the "unicorns" but can be implemented at any organization.

cdSummit dates are approaching quickly, so if you haven't registered yet, now is the time to do so! Register with a friend to get two tickets for the price of one. AND all attendees of either the cdSummit or the Jenkins User Conference can freely attend any session at either event.

*Please note that Gene Kim will not be a keynote presenter at cdSummit Europe.

Thank you to the sponsors of the CD Summit World Tour:

Categories: Companies

[Podcast] Enable Fast Cycles with Continuous Delivery

Thu, 05/28/2015 - 21:56
Over the last 10 years, continuous integration brought tangible improvements to the software delivery lifecycle, improvements that enabled the adoption of agile delivery practices.

The software industry is now progressing to the next maturity phase with continuous delivery.

"With its flexible plugin architecture and numerous plugins, Jenkins is like the person who knows everyone and can work with everyone."
- Dan Juengst, CloudBees

Enterprises can utilize any existing tools, whether developed in-house or licensed from a commercial vendor, or bring in a new tool and know that Jenkins will work with it. Jenkins bridges both sides of the development/operations divide, bringing the two teams together and enabling collaboration between them.

Outlook Series' Michael Lippis interviews Dan Juengst to get CloudBees' perspective on continuous delivery.

Listen to the podcast here (13 minutes)

Christina Pappas
Marketing Funnel Manager
CloudBees
Follow her on Twitter
Categories: Companies