
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

Automating application releases with Docker

Fri, 06/19/2015 - 13:00
Many organizations struggle with releasing their applications, and this struggle has birthed an industry of tools designed to simplify the process. Release management tools allow a release process to be defined as stages in a pipeline, where each stage contains sequential steps that must complete before the next stage begins. Stages are segmented using approval gates to ensure that QA and release managers get the final say on whether an artifact is ready for the next stage in the release pipeline, and the entire process is tracked for reporting purposes.

The goal of such processes is to ensure that only high-quality releases are deployed into production and that they are released on time, and the release manager is responsible for it all.

An obstacle to a smooth release is the structural challenge of maintaining identical testing and production environments. When these environments differ, unexpected regressions can slip through testing and botch a release. Ideally, all environments will be identical and contain the same dependent libraries and tooling for the application, as well as the same network configurations.
What is Docker?

Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  

Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
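
To make this flow concrete, here is a minimal sketch using the CloudBees Docker Workflow DSL covered later in this feed; the image name, registry URL and credentials ID are hypothetical stand-ins:

node('docker') {
 // Build an image from the Dockerfile at the root of the workspace.
 def image = docker.build('mycorp/myapp:1.0')
 // Push it to a registry so that containers can later be created from it anywhere.
 docker.withRegistry('https://registry.mycorp.example', 'registry-creds') {
   image.push()
 }
}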

Cookie-cutter environments and application packaging
The versatility and usability of Docker has made it a popular choice among DevOps-driven organizations. It has also made Docker an ideal choice for creating the standardized and repeatable environments that an organization needs both for creating identical testing and production environments and for packaging portable applications.

If an application is packaged in a Docker image, testing and deploying it is a matter of creating a container from that image and running tests against the application inside. If the application passes the tests, the image should be stored in a registry and eventually deployed to production.

Automating the release
According to Forrester Research, the top pains of release management are a lack of visibility into the release process and a lack of automation.


However, the testing, deployment, and release stages of these pipelines can be orchestrated by Jenkins using the CloudBees Docker Build and Publish plugin. This plugin adds a new build step for building and packaging applications into Docker images, as well as publishing them to both private and public Docker registries like Docker Hub.

Testing and QA
Applications packaged in Docker images can be tested by running them as containers. Docker allows containers to be linked, granting the linked container shell access and allowing it to run scripts against the application's container. This link can also be made between the Docker application container and another container packaging a service the application needs to run against, such as a test database, for a true integration test.
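
As a rough sketch of this pattern, the CloudBees Docker Workflow DSL (covered later in this feed) can start a disposable service container and link the application container under test to it; the image names and test command below are hypothetical:

node('docker') {
 // Start a throwaway database container for the integration test to run against.
 docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes') { db ->
   // Run the application's tests in a container linked to the database as 'db'.
   docker.image('mycorp/myapp:1.0').inside("--link ${db.id}:db") {
     sh 'ant integration-test'
   }
 }
}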

Promotion and Release
Jenkins supports the concept of promotion, where tested and approved artifacts are promoted to the next stage in a pipeline. Promotion is compatible with both traditional Jenkins jobs and the new Jenkins Workflow, and promotions can be set to trigger only if manually approved by particular users or team members.

In this case, the artifact is a Docker image containing our application, and once it is promoted it can be manually or automatically moved to the next stage of its pipeline. The next stage can range from a pre-production staging area to a registry like Docker Hub, where the promoted image is known as a "Gold" image ready for deployment.

The promotion can also trigger any other number of pre-release actions, such as notifications and sending data about the artifact to a company dashboard.

Where do I start?
  1. The CloudBees Docker Build and Publish plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform. 
  2. More information can be found in the newly released Jenkins Cookbook
  3. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Slaves with the CloudBees Jenkins Platform
    2. Jenkins Docker Workflow DSL
    3. Docker Traceability
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin


Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

Disaster-proofing slaves with Docker Swarm and the CloudBees Jenkins Platform

Fri, 06/19/2015 - 13:00


Standardizing build environments is a best practice for improving Jenkins resiliency, since generic and easily replaceable build environments reduce the impact of an outage within a build farm. When a slave is configured from a standardized template and Jenkins jobs are configured to install required tooling at runtime, any slave in a given pool can seamlessly take on a downed slave's workload. This concept is known as fungible slaves, a term coined by Andrew Bayer at a Jenkins User Conference.

The problem with such a setup is not the setup itself but the process to achieve it. Configuring a machine to act as a slave inside your infrastructure can be tedious and time-consuming, especially when the same setup has to be replicated across a large pool of slaves.

Configuration management tools or pre-baked images can be excellent solutions to this end, and containers and virtualization are also popular tools for creating generic slave environments. Containerization has risen to prominence for this purpose, with Docker rising fastest in popularity and now the most popular choice among Jenkins users.
What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. Docker containers can be created from a read-only template called a Docker image.  

Docker Swarm is a clustering tool for Docker which unites Docker pools into a single virtual host.

Scalable and resilient slaves
Docker images are an easy way to define a template for a slave machine, and Docker containers are lightweight enough to perform almost as well as a "bare metal" machine, making Docker a good candidate for hosting fungible slaves.


But what happens when an organization has scaled horizontally, with many masters in their installation and each needing their own slave pool?

The open-source Docker Plugin allows masters to create and automatically tear down slaves in a Docker installation, but the configuration to connect to the Docker host will need to be re-created on every existing Jenkins master and again when a new master is onboarded.

Multiple Docker hosts may also exist, so to ensure the most efficient use of these resources, all Docker hosts in an organization should be pooled together, with slave containers run on an otherwise idle host where possible to maximize the performance of the container (similar to the logic behind the Even Scheduler plugin). This sharing between Docker hosts allows masters' jobs to be built with minimal queue time and prevents some hosts from sitting idle while others are overloaded.

This pooling is possible with Docker Swarm, but job scheduling and Swarm configuration sharing still require special integrations with Jenkins.

Shared Docker Cloud Configuration
As part of the next release of the CloudBees Jenkins Platform, Docker Swarm may be configured as a slave pool whose configuration can be shared between all managed or "client" masters in an organization. This removes the pain of having to configure a Swarm for each master and of updating all such configurations should the installation change in any way (location, filesystem, max containers, etc.).

Like other shareable clouds, the Docker Swarm cloud can be created as an object in CloudBees Jenkins Operations Center.

From there, you can configure the location of the Docker Swarm host, any credentials for connecting to it, as well as which Docker image(s) should be used for creating slaves and how many containers should be up at any given time.

Where do I start?
  1. This feature is included in CloudBees Jenkins Operations Center, part of the CloudBees Jenkins Platform. Contact sales@cloudbees.com for more information.
  2. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish plugin
    2. Jenkins Docker Workflow DSL
    3. Docker Traceability
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin

Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

Building modern, real world software delivery pipelines with Jenkins and Docker

Thu, 06/18/2015 - 13:00
TL;DR: This blog outlines the key use cases enabled by the newly released Docker plugins in the Jenkins community. You can drill into more depth with an in-depth blog post for each use case. The CloudBees team has actively worked within the community to release these plugins.

Lately, I have been fascinated by how lean manufacturing radically improved the production of goods - the key being a fully orchestrated, automated delivery pipeline. We are at the "lean" inflection point in software computing history, where lightweight containers (viz. Docker) and Jenkins will bring rapid improvements in software delivery. I suggest that you read more on the how and why in Kohsuke's White Paper.

The executive summary of this White Paper is that Docker provides a common currency between Dev and Ops teams in expressing environments. Jenkins provides the marketplace through orchestration with Workflow whereby the currencies are easily exchanged between these teams.

The CloudBees team has been at the forefront of these changes through our role in the Jenkins community. Our team members have seen, and often contributed to, requests for enhancements with Jenkins and Docker as the industry pokes its way through this new era. This experience has helped us capture the canonical use cases that help deliver modern pipelines. Today, I am happy to announce the general availability of a number of open-source Docker plugins that help organizations adopt CD at scale with Jenkins and Docker.
There are two primary meta-use cases that these plugins help you tackle:

Meta-Use-Case 1: Constructing CD pipelines with Jenkins and Docker
Let's construct a simplified pipeline; the steps outlined below increase in sophistication. Jenkins Credit Union (JCU) has a Java web application that is delivered as a .war file that runs on a Tomcat container.
  1. In the simplest use case, the application binary and middleware containers (the .war and Tomcat) are built independently as Docker containers and "baked" into one container, which is finally pushed to a registry (the company "Gold" Docker image). The Docker Build and Publish plugin can be used to achieve this goal by giving Jenkins the ability to build and package applications into Docker images, as well as publish them to both private and public Docker registries like Docker Hub.
  2. Now, the JCU team wants to hand this container to the QA team for the "TESTING" stage. The QA team pulls the container and tests it before pushing it downstream. You can extend the chain of deliveries to "STAGING" and "PRODUCTION" stages and teams. In this case, the JCU team can either chain jobs together or use the Jenkins Docker Workflow DSL (ignore this for the moment; a sketch follows this list) to build the pipeline.
  3. Everything’s going fine and peachy, until...the JCU security team issues a security advisory about the Tomcat docker image. The JCU security team updates the Tomcat Docker image and pushes it to the Docker Hub registry. At this point, the Dev job that “baked” images is automatically tickled and builds a new image (application binary + middleware) without any human input. The tickle is achieved through the Docker Hub Notification plugin, which lets Docker Hub trigger application and slave environment builds. The QA job is triggered after the bake process and as part of the pipeline execution.
  4. Despite all the testing possible, the Ops team discovers that there is a bug in the application code, and they would like to know which component team is responsible for the issue. The Ops team uses the Docker Traceability plugin to let Jenkins know which bits have been deployed in production. This plugin lets them find the build that caused the issues in production.
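
Here is the promised sketch of JCU's pipeline expressed with the Jenkins Docker Workflow DSL; the repository URL, image name, registry and credentials ID are hypothetical, and the Ant targets are stand-ins:

node('docker') {
 stage 'Bake'
 // Check out the application and bake the .war plus Tomcat into one image.
 git 'https://git.jcu.example/webapp.git'
 def app = docker.build('jcu/webapp:latest')

 stage 'Test'
 // The QA stage runs its tests inside a container created from the baked image.
 app.inside {
   sh 'ant test'
 }

 stage 'Publish'
 // Push the tested image to the registry as the candidate "Gold" image.
 docker.withRegistry('https://registry.jcu.example', 'jcu-registry-creds') {
   app.push()
 }
}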

I had mentioned that we would ignore the Workflow initially - let’s get back to it now. 

Most real world pipelines are more complex than the canonical BUILD->TEST->STAGE->PRODUCTION - Jenkins Workflow makes it possible to implement those pipelines. The Jenkins Docker Workflow DSL provides first-class support within Workflow to address the above use cases as part of an expressed workflow. Once implemented, the workflow becomes executable, and once executed, it becomes possible to visualize which stages succeed or fail, where the problems are located, and so on. The red/green stage visualization is the Workflow Stage View feature that is available in the CloudBees Jenkins Platform.
The above steps lay out a canonical use case for building pipelines with Jenkins. The examples can get more sophisticated if you bring to bear the full power of Workflow and the ability to kick off connected Docker containers through Docker Compose.

Meta-Use-Case 2: Providing build environments with Docker
JCU has multiple departments, and each of these departments has its own Jenkins master and corresponding policies for how build environments can be set up.
Use case 1: The "PRODUCTION" team of the e-banking software has a requirement that all builds happen in sanitized and locked-down build environments. They can use the Docker Slaves feature of the CloudBees Jenkins Platform to lock down these environments and provide them to their teams. This not only makes sure that those build/test environments will always be clean, but also provides increased isolation: no build executing on a given machine has access to other Jenkins jobs concurrently executing on that same machine but in different Docker containers.
JCU is also using CloudBees Jenkins Platform to manage multiple masters, so they can use the “Shared Configuration” to share these slaves across all client masters.
Use case 2: The CTO team wants the flexibility to have custom environments for working with custom stacks. The Docker Custom Build Environment plugin allows Docker images and Dockerfiles to serve as templates for Jenkins slaves, reducing the administrative overhead of a slave installation to updating a few lines in a handful of environment definitions for potentially thousands of slaves.

In this way, the overhead involved in maintaining hundreds or even thousands of slaves is reduced to changing a few lines in the company's Docker slave Dockerfile.
Closing thoughts
The above set of use cases and corresponding plugins push the boundary of Continuous Delivery within organizations. As experience with Docker grows, the Jenkins community will continue building out features to keep up with the requirements.
I hope you have fun playing with all the goodies just released.
Where do I start?
  1. All the plugins are open source, so you can install them from the update center, or you can install the CloudBees Jenkins Platform to get them quickly.
  2. Read more about the impact of Docker and Jenkins on IT in Kohsuke's White Paper
  3. Read more about the use cases in the blogs:
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin
  4. More information can be found in the newly released Jenkins Cookbook
  5. Read all of our CloudBees Jenkins Platform plugin documentation

Harpreet Singh
Vice President of Product Management
CloudBees

Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter.

Orchestrating Workflows with Jenkins and Docker

Thu, 06/18/2015 - 13:00
Most real world pipelines are more complex than the canonical BUILD→TEST→STAGE→PRODUCTION flow. These pipelines often have stages which should not be triggered unless certain conditions are met, while others should trigger only if the first's conditions fall through. Jenkins Workflow helps write these pipelines, allowing complex deployments to be better represented and served by Jenkins.

The Jenkins Workflow Docker plugin extends these workflows even further to provide first-class support for Docker images and containers. This plugin allows Jenkins to build and release Docker images and to leverage Docker containers for customized and reproducible slave environments.

What is Docker?
Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  

Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
What is Workflow?
Jenkins Workflow is a new plugin which allows Jenkins to treat continuous delivery as a first-class job type in Jenkins. Workflow allows users to define workflow processes in a single place, avoiding the need to coordinate flows across multiple build jobs. This can be particularly important in complex enterprise environments, where work, releases and dependencies must be coordinated across teams. Workflows are defined as a Groovy script, either within a Workflow job or checked into the workspace from an external repository like Git.

Docker for simplicity
In a nutshell, the CloudBees Docker Workflow plugin adds a special entry point named docker that can be used in any Workflow Groovy script. It offers a number of functions for creating and using Docker images and containers, which in turn can be used to package and deploy applications or as build environments for Jenkins.

Broadly speaking, there are two areas of functionality: using Docker images of your own, or created by the worldwide community, to simplify build automation; and creating and testing new images. Some projects will need both aspects and you can follow along with a complete project that does use both: see the demonstration guide.

Jenkins Build Environments and Workflow
Before getting into the details, it is helpful to know the history of configuring build environments in Jenkins. Most project builds have some kind of restrictions on the computer which can run the build. Even if a build script (e.g. an Ant build.xml) is theoretically self-contained and platform-independent, you have to start somewhere and say what tools you expect to use.

Since a lot of people needed to do this, back in 2009 I worked with Kohsuke Kawaguchi and Tom Huybrechts to add a facility to Jenkins' predecessor - Hudson - for "tools". Now a Jenkins administrator can go to the system configuration page and say that Ant 1.9.0 and JDK 1.7.0_67 should be offered to projects which want them, downloaded and installed from public sites on demand. From a traditional job, this becomes a pulldown option in the project configuration screen, and from a Workflow, you can use the tool step:

node('libqwerty') {
 withEnv(["PATH=${tool 'Ant 1.9.0'}/bin:${env.PATH}"]) {
   sh 'ant dist-package'
 }
 archive 'app.zip'
}

While this is a little better, this still leaves a lot of room for error. What if you need Ant 1.9.3? Do you wait for a Jenkins administrator? If you want to scale up to hundreds of builds a day, who is going to maintain all those machines?

Clear, reproducible build environments with Docker
Docker makes it very easy for the project developer to try a stock development-oriented image on Docker Hub or write a customized one with a short Dockerfile:

FROM webratio/ant:1.9.4
RUN apt-get install libqwerty-devel=1.4.0

Now the project developer is in full control of the build environment. Gone are the days of "huh, that change compiled on my machine"; anyone can run the Docker image on their laptop to get an environment identical to what Jenkins uses to run the build.

Unfortunately, if other projects need different images, the Jenkins administrator will have to get involved again to set up additional clouds. There is also the annoyance that before using an image you will need to tweak it a bit to make sure it is running the SSH daemon with a predictable user login, and a version of Java new enough to run Jenkins slaves.

What if all this hassle just went away? Let us say the Jenkins administrators guaranteed one thing only: if you ask to build on a slave with the label docker, then Docker will be installed. They then proceeded to attach a few dozen beefy but plain-vanilla Linux cloud slaves. With CloudBees Docker Workflow, you can use these build servers as they come.

// OK, here we come
node('docker') {
 // My project sources include both build.xml and a Dockerfile to run it in.
 git 'https://git.mycorp.com/myproject.git'
 // Ready?
 docker.build('mycorp/ant-qwerty:latest').inside {
   sh 'ant dist-package'
 }
 archive 'app.zip'
}

Embedded in a few lines of Groovy instructions is a lot of power. First we used docker.build to create a fresh image from a Dockerfile definition. If you are happy with a stock image, there is no need for even this:

node('docker') {
 git 'https://git.mycorp.com/myproject.git'
 docker.image('webratio/ant:1.9.4').inside {
   sh 'ant dist-package'
 }
 archive 'app.zip'
}

docker.image just asks to load a named image from a registry, in this case the public Hub. .inside asks to start the image in a new throwaway container, then run other build steps inside it. So Jenkins is really running docker exec abc123 ant dist-package behind the scenes. The neat bit is that your single project workspace directory is transparently available inside or outside the container, so you do not need to copy in sources, nor copy out build products. The container does not need to run a Jenkins slave agent, so it need not be "contaminated" with a Java installation or a jenkins user account.

Easily adaptable CD pipelines
The power of Workflow is that structural changes to your build are just a few lines of script away. Need to try building the same sources twice, at the same time, in different environments?

def buildIn(env) {
 node('docker') {
   git 'https://git.mycorp.com/myproject.git'
   docker.image(env).inside {
     sh 'ant dist-package'
   }
 }
}
parallel older: {
 buildIn 'webratio/ant:1.9.3'
}, newer: {
 buildIn 'webratio/ant:1.9.4'
}

Simplified application deployments
So far everything I have talked about assumes that Docker is “just” the best way to set up a clear, reproducible, fast build environment, but the main use for Docker is to simplify deployment of applications to production. We already saw docker.build creating images, but you will want to test them from Jenkins, too. To that end, you can .run an image while you perform some tests against it. And you can .push an image to the public or an internal, password-protected Docker registry, where it is ready for production systems to deploy it.
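
As a hedged example of those calls, where the image name, port mapping, health endpoint, registry and credentials ID are all hypothetical (.withRun is the block-scoped variant of .run):

node('docker') {
 def img = docker.build('mycorp/myapp')
 // Run the image as a container while a quick smoke test executes against it.
 img.withRun('-p 8080:8080') { c ->
   sh 'sleep 5 && curl -f http://localhost:8080/health' // crude wait for startup
 }
 // Push the tested image to a registry, ready for production systems to deploy.
 docker.withRegistry('https://registry.mycorp.example', 'registry-creds') {
   img.push('latest')
 }
}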
Try it yourself
Look at the demo script to see all of the above-mentioned use cases in action. This demo highlights that you can use multiple containers running concurrently to test the interaction between systems.

In the future, we may want to build on Docker Compose to make it even easier to set up and tear down complex assemblies of software, all from a simple Jenkins workflow script making use of freestanding technologies. You can even keep that flow script in source control, too, so everything interesting about how the project is built is controlled by a handful of small text files.

Closing thoughts
By this point you should see how Jenkins and Docker can work together to empower developers to define their exact build environment and reliably reproduce application binaries ready for operations to use, all with minimal configuration of Jenkins itself.

Download CJE 15.05 or install CloudBees Docker Workflow on any Jenkins 1.596+ server and get started today!

Where do I start?
  1. The CloudBees Docker Workflow plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Documentation on this plugin is available in the CloudBees Jenkins Platform documentation
  3. A more technical version of this blog is available on the CloudBees Developer Blog
  4. More information on this feature is available in the new Jenkins Cookbook
  5. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Docker Traceability
    4. Docker Hub Trigger Plugin
    5. Docker Custom Build Environment plugin

Jesse Glick
Developer Extraordinaire
CloudBees

Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse on the Meet the Bees blog post about him and follow him on Twitter.

A Guide to Cutting-Edge Jenkins and Continuous Delivery

Thu, 06/18/2015 - 13:00


Jenkins is the leading CI and CD server in the world, commanding the market with a solid 70% share and boasting over 111,000 active installations around the world building 5,190,252 jobs as of 2015. This represents a 34% growth in the number of active installations and a 69% growth in the number of running build jobs. In other words, Jenkins is blowing up.

With such explosive growth also come some pains. One of the great challenges of Jenkins is keeping up with its latest and greatest features, of which there are many, given the 1,000+ plugins available today. It's a Herculean effort, but it's one that CloudBees customers have unanimously been itching for us to undertake.

Some of this knowledge already exists in the Jenkins community, but a lot of it has also been floating around CloudBees as “tribal knowledge.” At CloudBees, we have seen a rainbow of plugins, use cases, installation sizes and pipelines, so much so that our engineering, support and solutions architect teams have been maintaining their own internal guides and knowledgebases on what works and what doesn’t.

As much as I'm sure the support team enjoys giving these recommendations over Zendesk, it's not necessarily the most efficient way to disseminate the knowledge, and not formally documenting it paywalls this information away from the rest of the Jenkins community.

Given all of this, CloudBees is proud to announce the release of the first version of our use-cases and best practices eBook -- Jenkins Cookbook: Cutting Edge Best Practices for Continuous Delivery. This guide will cover information that Jenkins users of all levels will appreciate, from hardware and sizing recommendations for a Jenkins installation to guidelines on leveraging Jenkins Workflow with Docker for continuous delivery.

As this guide evolves in its next few releases, it will expand to cover topics such as security, Jenkins cloud installations, mobile development, and guidelines for optimizing builds.

Where do I start?
  1. Jenkins Cookbook: Cutting Edge Best Practices for Continuous Delivery is available here and as a PDF
  2. To make a topic request for future editions, email me at tkennedy@cloudbees.com


Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

JUC Speaker Blog Series: Martin Hobson, JUC U.S. East

Wed, 06/17/2015 - 01:05
After months of planning, the Jenkins community is very proud to present the 2015 Jenkins User Conference World Tour! JUC U.S. East is the first of the four conferences in the circuit, beginning on Thursday, June 18 in the nation's capital, Washington, D.C.! Registration opens bright and early at 7:00AM, with a keynote address by Kohsuke Kawaguchi, founder of the Jenkins project.

To celebrate JUC U.S. East, the last U.S. East speaker blog post was published on the jenkins-ci.org blog. Martin Hobson has been using Jenkins for a while now in his four-person software development team, but once he started working with a much larger group, he realized that operating at a large scale is much different and much more complex than he thought. In his lightning talk, "Visualizing VM Provisioning with Jenkins and Google Charts," Martin will teach you how to instrument and analyze complex builds in large-scale environments.

Read his full blog post here. If you want to attend Martin's talk, you can still register for JUC U.S. East!



Thank you to the sponsors of the Jenkins User Conference World Tour:

JUC Europe speaker blog post from Stephan Hochdörfer

Tue, 06/16/2015 - 17:29
This week, in only a couple of days, the Jenkins butler will begin his journey around the globe. JUC U.S. East begins this Thursday (!!) and JUC Europe is shortly after, with Day 1 on June 23. As you can imagine, everyone is excited! All of the speakers are looking forward to their sessions... including Stephan Hochdörfer, who will be presenting his talk called "Jenkins for PHP Projects" at JUC Europe on Day 1. bitExpert moved to Jenkins about five years ago and continues to use it on a daily basis for its continuous integration needs. Attend his talk to learn more, and read his full blog post from the Jenkins Speaker Blog Series on jenkins-ci.org.

Still need your ticket to JUC? The dates are coming up very quickly! Register for a JUC near you.



Thank you to the sponsors of the Jenkins User Conference World Tour:

JUC Speaker Blog Series: Damien Coraboeuf, JUC Europe

Fri, 06/12/2015 - 16:57
The Jenkins User Conference 2015 World Tour is quickly approaching. JUC U.S. East is next week, June 18-19 and JUC Europe is shortly after on June 23-24. JUC Israel is a little further, on July 16, and JUC U.S. West isn't until the fall: September 2-3 (so you still have plenty of time to register for those conferences!)
The JUC Speaker Blog Series on jenkins-ci.org is still going strong! This week the community published an entry by Damien Coraboeuf, Continuous Delivery Expert at Clear2Pay. In his organization, his team ran into a major problem: there was no way for them to manually maintain thousands of jobs on a day-to-day basis. Read his full post here and attend his talk at JUC Europe to learn about the solution they came up with and how they implemented it... with Jenkins!

Still need your ticket to JUC? If you register with a friend you can get two tickets for the price of one! Register for a JUC near you.



Thank you to the sponsors of the Jenkins User Conference World Tour:


Multi-tenancy with Jenkins

Tue, 06/09/2015 - 06:17
Overview

As your Jenkins use increases, you will likely extend your Jenkins environment to new team members and perhaps to new teams or departments altogether. It's quite a common trend, for example, to begin using Jenkins within a development team and then extend Jenkins to a quality assurance team for automating tests of the applications built by the development teams. Or perhaps your company is already using Jenkins and your team (a DevOps or shared-tooling kind of team) has a mission to implement Jenkins as a shared offering for a larger number of teams.

Regardless, the expansion is an indication that your teams are automating more of their development process, which is a good sign. It should go without saying: organizations are seeing a lot of success automating their development tool chain with Jenkins, allowing their teams to focus on higher-value, innovative work and reducing time wasted on mundane tasks.

No one wants this, after all (no dev managers or scrum masters, anyway):

-xkcd.com/303/

At the same time, if not planned properly, the expansion which was meant to extend those successes to more teams could have unintended consequences and lead to bottlenecks, downtime, and pain. Besides avoiding that pain, there are also proactive steps you can take to further increase your efficiency along the way.

What is multi-tenancy?

For the purposes of this blog post, let's define multi-tenancy for Jenkins: multi-tenancy with Jenkins means supporting multiple users, teams, or organizations within the same Jenkins environment and partitioning the environment accordingly.

Why go multi-tenant?

You might ask --- "Jenkins is pretty easy to get up and running; why not just create a new Jenkins instance?" To some extent, I agree! Jenkins is as simple as java -jar jenkins.war, right? This may be true, but many teams are connected in one way or another… if two related but distinct teams or departments work on related components, it's ideal that they have access to the same Jenkins data.

Implementing Jenkins - at least, implementing it well - takes some forethought. While it is indeed easy to spin up a new Jenkins instance, if your existing team using Jenkins already has a great monitoring strategy in place or a well-managed set of slave nodes attached to their Jenkins instance, reusing a well-managed Jenkins instance seems like a good place to start. I mean, who wants to wear a pager on the weekend for Jenkins, anyway?

Establishing an efficient strategy for Jenkins re-use in an organization can help reduce costs, increase utilization, enhance security, and ensure auditability/traceability/governance within the environment.

What features can I use to set up multi-tenancy?

As you begin to scale your Jenkins use, there are a number of existing features available to help:

  • Views
    • The views feature in the Jenkins core allows you to customize the lists of jobs and tabs on the home screen for a better user experience when using a multi-tenant Jenkins instance.

  • Folders
    • The Folders plugin, developed in-house at CloudBees, is even more powerful than views for optimizing your Jenkins environment for multi-tenancy. Unlike views, Folders actually create a new context for Jenkins.

    • This new context allows, for example, creating folder-specific environment variables. From the documentation: "You can [also] create an arbitrary level of nested folders. Folders are namespace aware, so Job A in Folder A is logically different than Job A in Folder B".
  • Distributed Builds
    • If you're not already using Jenkins distributed builds, you should be! With distributed builds, Jenkins can execute build jobs on remote machines (slave nodes) to preserve the performance of the Jenkins web app itself.

    • If you extend your Jenkins environment to additional teams, all the more reason to focus on preserving the master's performance.

    • Even better, distributed builds allow you to set up build nodes capable of building the various types of applications your distributed teams will likely require (Java, .NET, iOS, etc.)

  • Cleaning Up Jobs
    • When the Jenkins environment is shared, system cleanup tasks become more critical.

    • Discarding Old Builds and setting reasonable timeouts for builds will help ensure your build resources are available to your teams.

  • Credentials API
    • Jenkins allows managing and sharing credentials across jobs and nodes. Credentials can be set up and secured at the folder level, allowing team-specific security settings and data. A sketch of consuming such a credential follows this list.
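
For example, here is a minimal sketch of a Workflow job consuming a credential through the Credentials Binding plugin; the credentials ID and deployment URL are hypothetical, and the ID could point to a credential defined at the folder level so that only jobs inside that folder can resolve it:

node {
 // Bind the folder-scoped credential to environment variables for this block only.
 withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'deploy-creds',
                   usernameVariable: 'DEPLOY_USER', passwordVariable: 'DEPLOY_PASS']]) {
   sh 'curl -u $DEPLOY_USER:$DEPLOY_PASS https://deploy.mycorp.example/trigger'
 }
}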

Stressing the multi-tenancy model

As you scale your Jenkins use, you will find there are some stress points where it can be... less than ideal to share a single Jenkins master across teams:

  • Global configuration for plugins
    • Some plugins support only global configuration. For example, the Maven plugin's build step default options are global. Similarly, the Subversion SCM plugin's version configuration is a global setting.

    • If two teams want to use the same plugin differently, there aren't many options (even worse: different versions of the same plugin).

  • Plugin Installation and Upgrades
    • While Jenkins allows plugins to be installed without a restart, some plugins do require a restart on install. Further, all plugins require a Jenkins restart on update.

    • Some plugins have known performance, backward compatibility, and security limitations. These may be acceptable for one team, but perhaps not all your users.

  • Slave Re-use
    • When multiple teams use the same slaves, they usually share access to them. As mentioned above, care must be taken to clean up slave nodes after executing jobs.

    • Securing access for sensitive jobs or data in the workspace is a challenge.

  • Scale
    • Like any software application, a single Jenkins master can only support so many builds and job configurations.

    • While determining an actual maximum configuration is heavily environment-specific (available system resources, number and nature of jobs, etc.), Jenkins tends to perform best with no more than 100-150 active, configured executors.

    • While we've seen some Jenkins instances with 30,000+ job configurations, Jenkins will need more resources and start-up times will increase as the job count increases.

  • Single Point of Failure
    • As more and more teams use the same Jenkins instance, the impact of an outage becomes larger.

    • When Jenkins needs to be restarted for plugin updates or core upgrades, more teams will be impacted.

    • As teams rely more and more on Jenkins, particularly for automating processes beyond development (e.g. QA, security and performance test automation), downtime for Jenkins becomes less acceptable.

Tipping Point

Hopefully this article saves you some time by laying out some of the stress points you'll encounter when setting up multi-tenancy in Jenkins. Eventually, you'll reach a tipping point where running a single, large multi-tenant Jenkins master may not be worth it. For that reason, we recommend developing a strategy for taking your multi-tenancy approach to the next level: creating multiple Jenkins masters.

For each organization, the answer is a little different, but CloudBees recommends establishing a process for creating multiple Jenkins masters. In a follow-up post, we'll highlight how the CloudBees Jenkins Platform helps manage multiple Jenkins masters. With CloudBees Jenkins Operations Center, your multi-tenancy strategy simply expands to masters as well, making your Jenkins masters part of the same Jenkins platform. We'll also share some successful strategies (and some not-so-successful strategies) for determining when to split your masters.

JUC Speaker Blog Series: Will Soula, JUC U.S. East

Mon, 06/08/2015 - 21:42
This year will be Will Soula's third time presenting at a Jenkins User Conference, his fourth year as an attendee, and his first time at a JUC on the East Coast! In his presentation this year, Will will be talking about what Drilling Info uses to bring their entire organization together. ChatOps allows everyone to come together, chat and learn from each other in the most efficient way.
This post on the Jenkins blog is by Will Soula, Senior Configuration Management/Build Engineer at Drilling Info. If you have your ticket to JUC U.S. East, you can attend his talk "Chat Ops and Jenkins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get two tickets for the price of one! Register for a JUC near you.

Thank you to the sponsors of the Jenkins User Conference World Tour: