
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

Jenkins User Conference U.S. West Speaker Highlight: Kaj Kandler

Thu, 08/27/2015 - 21:56
When Kaj attended JUC Boston in 2014, he was surprised to see how many enterprise Jenkins users had developed plugins to use for themselves. In his Jenkins blog post, Kaj shares some insight on developing enterprise-ready plugins.

This post on the Jenkins blog is by Kaj Kandler, Integration Manager at Black Duck Software, Inc. If you have your ticket to JUC U.S. West, you can attend his talk "Making Plugins that are Enterprise Ready" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.


Thank you to the sponsors of the Jenkins User Conference World Tour:



Volume 9 of the Jenkins Newsletter: Continuous Information is out!

Thu, 08/27/2015 - 16:07
The next issue of the Jenkins Newsletter, Continuous Information is out!

There has been so much Jenkins content from all over the world, from events to articles, blogs, training and everything in between:

  • Learn more about how Jenkins works with technologies like Kubernetes, Docker and Postman
  • Find a Meetup near you or another Jenkins event in your area
  • Find the latest news about Jenkins User Conference U.S. West
  • Read some articles and blog posts and expand your Jenkins knowledge

Catch up on the latest Jenkins news and sign up to receive Continuous Information directly in your inbox every quarter.

JUC Session Blog Series: Christian Lipphardt, JUC Europe

Tue, 08/25/2015 - 20:57
At the Jenkins user conference in London this year I stumbled into what turned out to be the most interesting session to my mind, From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability (a mouthful), from folks at a software shop by the name of Camunda.

The key aspect of this talk was the extension of the “code-as-configuration” model to nearly the entire Jenkins installation. Starting from a chaotic set of hundreds of hand-maintained jobs, corresponding to many product versions tested across various environmental combinations (I suppose beyond the abilities of the Matrix Project plugin to handle naturally), they wanted to move to a more controlled and reproducible definition.

Many people have long recognized the need to keep job configuration in regular project source control rather than requiring it to be stored in $JENKINS_HOME (and, worse, edited from the UI). This has led to all sorts of solutions, including the Literate plugin a few years back, and now various initialization modes of Workflow that I am working on, not to mention the Templates plugin in CloudBees Jenkins Enterprise.

In the case of Camunda they went with the Job DSL plugin, which has the advantage of being able to generate a variable number of job definitions from one script and some inputs (it can also interoperate meaningfully with other plugins in this space). This plugin also provides some opportunity for unit-testing its output, and interactively examining differences in output from build to build (harking back to a theme I encountered at JUC East).

They took the further step of making the entire Jenkins installation be stood up from scratch in a Docker container from a versioned declaration, including pinned plugin versions. This is certainly not the first time I have heard of an organization doing that, but it remains unusual. (What about Credentials, you might ask? I am guessing they have few real secrets, since for reproducibility and scalability they are also using containerized test environments, which can use dummy passwords.)

As a nice touch, they added Elasticsearch/Kibana statistics for their system, including Docker image usage and reports on unstable (“flaky”?) tests. CloudBees Jenkins Operations Center customers would get this sort of functionality out of the box, though I expect we need to expand the data sources streamed to CJOC to cover more domains of interest to developers. (The management, as opposed to reporting/analysis, features of CJOC are probably unwanted if you are defining your Jenkins environment as code.)

One awkward point I saw in their otherwise impressive setup was the handling of Docker images used for isolated build environments. They are using the Docker plugin’s cloud provider to offer elastic slaves according to a defined image, but since different jobs need different images, and cloud definitions are global, they had to resort to using (Groovy) scripting to inject the desired cloud configurations. More natural is to have a single cloud that can supply a generic Docker-capable slave (the slave agent itself can also be inside a Docker container), where the job directly requests a particular image for its build steps. The CloudBees Docker Custom Build Environment plugin can manage this, as can the CloudBees Docker Workflow plugin my team worked on recently. Full interoperation with Swarm and Docker Machine takes a bit more work; my colleague Nicolas de Loof has been thinking about this.

The other missing piece was fully automated testing of the system, particularly Jenkins plugin updates. For now it seems they prototype such updates manually in a temporary copy of the infrastructure, using a special environment variable as a “dry-run” switch to prevent effects from leaking into the outside world. (Probably Jenkins should define an API for such a switch to be interpreted by popular plugins, so that the SMTP code in the Mailer plugin would print a message to some log rather than really sending mail, etc.) It would be great to see someone writing tests atop the Jenkins “acceptance test harness” to validate site-specific functions, with a custom launcher for their Jenkins service.

All told, a thought-provoking presentation, and I hope to see a follow-up next year with their next steps!

We hope you enjoyed JUC Europe! 

Here is the abstract for Christian's talk "From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability." 

Here are the slides for his talk and here is the video.

If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

Managing a Jenkins Docker Infrastructure: Docker Garbage Collector

Mon, 08/24/2015 - 16:18
Using Docker for Continuous Delivery is great. It gives development teams impressive flexibility: they can manage environments and test resources by themselves and, at the same time, maintain clean isolation from other teams sharing the same host resources.

But a side effect of enabling Docker on build infrastructure is disk usage, as pulling various Docker images consumes hundreds of megabytes. The layered architecture of Docker images ensures that the lower-level layers are shared as much as possible. However, as those layers get updated with various fixes and upgrades, the previous ones remain on disk and can result, after a few months, in huge disk usage within /var/lib/docker.
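To gauge the problem on a given build node, you can check the size of the Docker data directory and see which images are still present. A couple of typical commands, assuming a default Docker installation:

# Total space consumed by images, containers and volumes (default location)
sudo du -sh /var/lib/docker
# List all images still on disk, including dangling (untagged) ones
docker images -a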

Jenkins monitors can alert on disk consumption on build executors. However, a more proactive solution is preferable to simply taking the node offline until an administrator handles the issue by ssh-ing into the server.
Docker does not offer a standard way to address image garbage collection, so most production teams have created their own tooling, including the folks at Spotify, who open-sourced their docker-gc script.

On a Jenkins infrastructure, a scheduled task can be created to run this maintenance script on all nodes. I did this for my own usage (after I had to handle a filesystem-full error). To run the script on all Docker-enabled nodes, I'm using a Workflow job. Workflow makes it pretty trivial to set up such a GC.




The script I'm using relies on a "docker" label being set on all nodes with Docker support. Jenkins.instance.getLabel("docker").nodes returns all the build nodes with this label, so I can iterate over them and run a Workflow node() block to execute the docker-gc script within a sh shell script command:

// Collect all build nodes carrying the "docker" label
def nodes = Jenkins.instance.getLabel("docker").nodes
for (n in nodes) {
    // Run the Spotify docker-gc script on each of them in turn
    node(n.nodeName) {
        sh 'wget -q -O - https://raw.githubusercontent.com/spotify/docker-gc/master/docker-gc | bash'
    }
}
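The Workflow job itself can then simply run on a timer, using the standard "Build periodically" trigger; a cron specification along these lines (the schedule is purely illustrative) runs the GC nightly:

H 2 * * *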

The docker-gc script checks for images not used by any container: when an image already existed at the last run of the script and is still not used by a container, it gets removed.

I hope that the Docker project will soon release an official docker-gc command. This would benefit infrastructure teams, eliminating the need to re-invent custom solutions to the same common issue.

JUC Session Blog Series: Tom Canova, JUC U.S. East

Thu, 08/20/2015 - 15:32
I was pleased to be able to attend the D.C. Jenkins user conference this year, where I gave a talk on the progress of the Workflow plugin suite for Jenkins. One highlight was seeing Jenkins Workflows with Parallel Steps Boosts Productivity and Quality by Tom Canova of ibmchefwatson.com. Naturally the title made me curious: how were people in the field using parallelism in workflows?

The project he works on is a little unusual for someone coming from the software-delivery mindset, since while the ultimate deliverable is still software, what Jenkins is spending most of its time on is running that software (rather than a compiler or automated tests): the result is a summary of a big set of online recipes crunched through some natural language processing into a machine-friendly format. Each “build” is a dry-run of Chef Watson’s preparation for the dinner service, if you will.

Since slicing & dicing all that messy web HTML can take a long time, Tom’s process follows a pretty standard three-stage fork-join model. In the first stage, one Jenkins slave finds a site index with a list of recipes, collecting a list of every recipe to be processed. In the main, second stage, a number of distributed slaves each pick up a subset of recipes, parse them, and dump the JSON result into Cloudant, using a 5GB heap. Finally all the results are summarized and archived, and some follow-on jobs are triggered (I think in part as a workaround for missing Workflow plugin integrations). All told, the parallelization can cut a twenty-hour build into two hours, giving developers quicker feedback. Doing this from a traditional “freestyle” project would be tough—you would really need to set up a custom grid engine instead of using the Jenkins slave network you already have.
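Tom's actual scripts were not shown in full, but a minimal Workflow sketch of the same fork-join shape might look like this (the labels, shard count and shell scripts are hypothetical):

// Stage 1: collect the list of recipes to process on a single slave
node('indexer') {
    sh './collect-recipe-index.sh'
}
// Stage 2: fan out, each parallel branch parsing a subset of the recipes
def branches = [:]
for (int i = 0; i < 4; i++) {
    def shard = i  // capture the loop variable for the closure
    branches["shard-${shard}"] = {
        node('recipe-parser') {
            sh "./process-recipes.sh --shard ${shard} --of 4"
        }
    }
}
parallel branches
// Stage 3: join, summarizing and archiving the combined results
node('indexer') {
    sh './summarize-results.sh'
}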

Another unusual aspect of Tom’s setup was that the build history was really curated. Whereas some teams treat Jenkins builds as dispensable records created and then trimmed at a furious rate, here there may only be a few a week, and each one is examined by the developers to see how their changes affected the sample output. (The analysis is put right in the build description.)

One interesting thing the developers do is interactively compare output from one build to another. After all, they want to judge whether their code changes produced reasonable changes in the result, or whether unexpected and unwanted effects arose in real data sets. For this they just do a diff (I think outside Jenkins) between build artifacts. After the talk I suggested to Tom that it would be useful for “someone” to write a Jenkins plugin which displays the diff between matching build artifacts of consecutive builds. This reminded me of something my team started producing when I worked on NetBeans: a readable summary of the changes in major application features from one build to the next.

As a final note, I did try to get some meal advice from the live system. Whether I can convince my wife to let me cook this is another matter:

Basque Red Beet Pasta Salad

1 poblano pepper
½lb fusilli
½c cranberry juice
1½c crumbled queso blanco
3T achiote paste
5 red beets
3c cubed, peeled butternut squash
3 halved tomatoes
¼c olive oil
½T chopped candied ginger
cocoa

Hmm. Looks like Jenkins still has its job cut out for it!

We hope you enjoyed JUC U.S. East!
Here is the abstract for Tom's talk "Jenkins Workflows with Parallel Steps Boosts Productivity and Quality." 
Here are the slides for his talk and here is the video.

If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

CloudBees Jenkins Platform on Amazon Web Services

Tue, 08/18/2015 - 19:26
CloudBees Jenkins Platform available on AWS Marketplace
We are delighted to announce the immediate availability of CloudBees Jenkins Platform 15.05 on the AWS Marketplace.

The two components of the CloudBees Jenkins Platform, CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center, are offered in a Bring Your Own License mode with a free trial.
With these AWS Marketplace offerings, you can seamlessly provision virtual machines for Jenkins masters and Operations Centers and interact directly with AWS services, including Amazon EC2, S3, Route53 and Lambda, from within Jenkins.


CloudBees Jenkins Platform on AWS Marketplace

Virtual Machines Specifications
CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center AWS Marketplace AMIs are built with the following components:

  • Ubuntu 14.04 LTS (Trusty Tahr)
  • OpenJDK 8
    • Installed as a Debian package from the "ppa:openjdk-r/ppa" repository
  • CloudBees Jenkins Enterprise (resp CloudBees Jenkins Operations Center)
    • Installed as a Debian package
    • Running as a SystemD service
    • Listening on port 8080 (resp 8888)
    • JENKINS_HOME set to "/var/lib/jenkins"
  • Git
    • Installed as a Debian package from the "ppa:git-core/ppa" repository
  • HAProxy
    • Installed as a Debian package from the "ppa:vbernat/haproxy-1.5" repository
    • Listening on port 80 and forwarding to the Jenkins process (port 8080 resp. 8888)
    • Capable of listening on HTTPS:443 if configured (docs here)
  • SSH connection
    • Listening on port 22
    • User "ubuntu", with the SSH public key (aka EC2 key pair) provisioned through the AWS Management Console. This user has "sudo" privileges.


Security and Maintenance of the Servers
  • Firewall: firewall rules are defined in the AWS Management Console with EC2 Security Groups. CloudBees recommends restricting access (inbound rules) to a limited IP range rather than allowing the entire internet to access the VM; this is particularly important for the SSH and HTTP protocols. Deploying the VM in an Amazon VPC instead of "EC2 Classic" offers finer-grained security settings.
  • OS administrators are invited to frequently apply security fixes on the operating system of the VM ("sudo apt-get update" then "sudo apt-get upgrade")
  • Jenkins administrators are invited to frequently upgrade the Jenkins plugins and the Jenkins core through the Jenkins administration console
  • Jenkins administrators are invited to secure their Jenkins server by enabling authentication and authorization on their newly created instances
  • Jenkins administrators are invited to connect slave nodes to the Jenkins masters according to the needs of the project teams (CentOS, Ubuntu, Red Hat Enterprise Linux, Windows Server...) and to disable builds on the masters
  • Jenkins administrators are invited to frequently back up the Jenkins data (aka JENKINS_HOME) using the CloudBees Backup Plugin and/or by backing up the VM file system through AWS EC2 services (EBS snapshots...)

Licensing
CloudBees Jenkins Platform is distributed on the AWS Marketplace on a Bring Your Own License mode. You can provision your Virtual Machines with the marketplace images and then enter your license details or start a free evaluation from the welcome screen of the created Jenkins instance.

Screencast: Installing CloudBees Jenkins Enterprise on Amazon Web Services
This screencast shows how to install a CloudBees Jenkins Enterprise VM on Amazon Web Services using the AWS Marketplace. The installation of CloudBees Jenkins Operations Center is similar; you just have to choose CloudBees Jenkins Operations Center instead of CloudBees Jenkins Enterprise in the marketplace.




More Resources

Jenkins User Conference U.S. West Speaker Highlight: Andrew Phillips

Tue, 08/18/2015 - 19:04
In his presentation, Andrew will be taking a broader view than his talk at JUC U.S. East and will discuss common challenges you may come across and the solutions that you may need when moving from Continuous Integration to Continuous Delivery.

This post on the Jenkins blog is by Andrew Phillips, VP, Product Management, XebiaLabs. If you have your ticket to JUC U.S. West, you can attend his talk "Sometimes Even the Best Butler Needs a Footman: Building an Enterprise Continuous Delivery Machine Around Jenkins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for the last Jenkins User Conference of the year: JUC U.S. West.


Thank you to the sponsors of the Jenkins User Conference World Tour:



CloudBees Jenkins Platform on Microsoft Azure

Mon, 08/10/2015 - 15:43
CloudBees Jenkins Platform available on Microsoft Azure Marketplace
We are delighted to announce the immediate availability of CloudBees Jenkins Platform 15.05 on the Microsoft Azure Marketplace.

The two components of the CloudBees Jenkins Platform, CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center, are available as products in the Azure Marketplace in a Bring Your Own License mode with a free trial. You can seamlessly provision virtual machines for Jenkins masters and Operations Centers with these Azure Marketplace products.

CloudBees Jenkins Platform on Azure Marketplace
Virtual Machines Specifications
CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center Azure Marketplace images are built with the following components:

  • Ubuntu 14.04 LTS (Trusty Tahr)
  • OpenJDK 8
    • Installed as a Debian package from the "ppa:openjdk-r/ppa" repository
  • CloudBees Jenkins Enterprise (resp CloudBees Jenkins Operations Center)
    • Installed as a Debian package
    • Running as a SystemD service
    • Listening on port 8080 (resp 8888)
    • JENKINS_HOME set to "/var/lib/jenkins"
  • Git
    • Installed as a Debian package from the "ppa:git-core/ppa" repository
  • HAProxy
    • Installed as a Debian package from the "ppa:vbernat/haproxy-1.5" repository
    • Listening on port 80 and forwarding to the Jenkins process (port 8080 resp. 8888)
    • Capable of listening on HTTPS:443 if configured (docs here)
  • SSH connection
    • Listening on port 22
    • User account created by the Azure platform during the provisioning of the VM, according to the username and password or SSH key defined in the Azure Portal. This user has "sudo" privileges. CloudBees recommends using SSH keys rather than a password; if a password is required, CloudBees recommends using one with at least 20 random characters.


Security and maintenance of the servers
  • Firewall: firewall rules are defined in the Azure Management Portal. CloudBees recommends restricting access (inbound rules) to a limited IP range rather than allowing the entire internet to access the VM; this is particularly important for the SSH and HTTP (non-secured, non-HTTPS) protocols.
  • OS administrators are invited to frequently apply security fixes on the operating system of the VM ("sudo apt-get update" then "sudo apt-get upgrade")
  • Jenkins administrators are invited to frequently upgrade the Jenkins plugins and the Jenkins core through the Jenkins administration console
  • Jenkins administrators are invited to secure their Jenkins server by enabling authentication and authorization on their newly created instances
  • Jenkins administrators are invited to connect slave nodes to the Jenkins masters according to the needs of the project teams (CentOS, Ubuntu, Red Hat Enterprise Linux, Windows Server...) and to disable builds on the masters
  • Jenkins administrators are invited to frequently back up the Jenkins data (aka JENKINS_HOME) using the CloudBees Backup Plugin and/or by backing up the VM file system through the Azure Compute service

Licensing
CloudBees Jenkins Platform is distributed on the Microsoft Azure Marketplace on a Bring Your Own License mode. You can provision your Virtual Machines with the marketplace images and then enter your license details or start a free evaluation.

Screencast: installing CloudBees Jenkins Enterprise on Microsoft Azure
This screencast shows how to install a CloudBees Jenkins Enterprise VM on Microsoft Azure using the Azure Marketplace. The installation of CloudBees Jenkins Operations Center is similar; you just have to choose CloudBees Jenkins Operations Center instead of CloudBees Jenkins Enterprise in the marketplace.


More Resources

Jenkins Workflow - Using the Global Library to implement a re-usable function to call a secured HTTP Endpoint

Fri, 08/07/2015 - 21:39
This blog post will demonstrate how to access an external system over HTTP that is protected by a session-based logon. This type of integration is often required to retrieve information to be used in the workflow, or to trigger events on existing external systems, such as a deployment framework. These types of systems often have some form of authentication mechanism - it may be Basic Auth based or use some form of session-based security. This blog post will show how you can use the following features:
  • Jenkins Workflow plugin
  • Git-based Workflow Global Library repository
  • curl cookie handling

Getting Started
Requirements
  1. JDK 1.7+
  2. Jenkins 1.609+
  3. An external system secured with session-based authentication (in this example I use a Jenkins server with security enabled - although it is not recommended you do this for real, as Jenkins has better token-based APIs to use)
Installation
  1. Download and install JDK 1.7 or higher
  2. Download and install Jenkins
  3. Start Jenkins
Setup Jenkins
  • Update plugins - Make sure you have the latest Workflow plugins by going to Manage Jenkins -> Manage Plugins -> Updates and selecting any Workflow-related updates. Restart Jenkins after the updates are complete. As of this writing the latest version is 1.8.
  • Global libraries repo - Jenkins exposes a Git repository for hosting global libraries meant to be reused across multiple CD pipelines managed on the master. We will set up this repository so you can build on it to create your own custom libraries. If this is a fresh Jenkins install and you haven't set up this Git repository, follow these instructions to do so.

Important - Before proceeding to the next steps, make sure your Jenkins instance is running

See Workflow Global Library for details on how to set up the shared library - note that if security is enabled, the ssh format works best.
If using ssh ensure that your public key has been configured in Jenkins.
To initialise the git repository:
git clone ssh://<USER>@<JENKINS_HOST>:<SSH_PORT>/workflowLibs.git

Where
  • USER is a valid user that can authenticate
  • JENKINS_HOST is the DNS name of the Jenkins server you will be running the workflow on. If running on the CloudBees Jenkins platform, this is the relevant Client Master not the Jenkins Operations Center node.
  • SSH_PORT is the ssh port defined in Jenkins configuration:


Note the repository is initially empty.
To set things up after cloning, start with:
git checkout -b master
Now you may add and commit files normally. For your first push to Jenkins you will need to set up a tracking branch:
git push --set-upstream origin master
Thereafter it should suffice to run:
git push
Creating a Shared Function to Access Jenkins
Create the groovy script in the workflow library:
cd workflowLibs
mkdir -p src/net/harniman/workflow/jenkins
curl -O https://gist.githubusercontent.com/harniman/8f1418af794d26035171/raw/941c86041adf3e9c9bfcffaf9e650e3365b9ee55/Client.groovy
git add *
git commit
git push
This will make the following script
    net.harniman.workflow.jenkins.Client
available for use by workflow scripts using this syntax:
    def jc = new net.harniman.workflow.jenkins.Client()
and methods accessed using:
     def response=jc.test(<host>, <user>, <pass>, <cmd>)

Note:
  • We must NOT define an enclosing class for the script as we want to call step functions such as sh - see https://github.com/jenkinsci/workflow-plugin/tree/master/cps-global-lib for further details on this restriction.
  • Scripts follow the usual Groovy package naming formats and thus need to be in the appropriate directory structure
  • It is unnecessary to go out of process to access this Jenkins instance - it could be accessed from Groovy via the Jenkins model - but this shows how a client for form-based access could be built
  • The script below accesses Jenkins purely as a demonstration

This is the actual contents of Client.groovy:
package net.harniman.workflow.jenkins

String test(host, user, pass, cmd) {
    node("shared") {
        def init = "curl -s -c .cookies ${host}"
        def userAttr = "j_username=${user}"
        def passAttr = "j_password=${pass}"
        def jsonAttr = "json=%7B%22j_username%22%3A+%22${user}%22%2C+%22j_password%22%3A+%22${pass}%22%2C+%22remember_me%22%3A+false%2C+%22from%22%3A+%22%2F%22%7D"
        def login = "curl -i -s -b .cookies -c .cookies -d $userAttr -d $passAttr -d 'from=%2F' -d $jsonAttr -d 'Submit=log+in' $host/j_acegi_security_check"
        def command = "curl -L -s -b .cookies -c .cookies $host/$cmd"
        echo "Initialising HTTP Connection"
        sh "${init} > .init 2>&1"
        echo "Performing Login"
        sh "${login} > .login 2>&1"
        def loginresponse = readFile '.login'
        if (loginresponse =~ /Location:.*loginError/) {
            echo "Error logging in"
            error "Unable to login. Response = $loginresponse"
        }
        echo "Invoking command"
        sh "${command} > .output 2>&1"
        def output = readFile '.output'
        sh "rm .init .login .output"
        return output
    }
}

Creating the Workflow
Create a new workflow job with the Workflow Definition as follows:
def jc = new net.harniman.workflow.jenkins.Client()
def response = jc.test("http://localhost:8080", "annie.admin", "password", "whoAmI")
echo "======================="
echo response

Ensure you substitute in real values for:
  • <host>
  • <user>
  • <pass>
  • <cmd>
You can substitute in the required cmd for the query - for instance, it could be <jobname>/config.xml to retrieve a job’s config.
Run the Job
Time to check that it all works, so go ahead and trigger the build.
If it works successfully you should see the raw body of the response printed in the console log.
Further Improvements
If you look closely you can see that in this example, the username and password are output to the console. This is not recommended; you can avoid it by integrating with credentials and having the workflow pass in the credentials ID to be used. See this article for more details on the syntax with a sh step using curl.
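As a rough sketch of that approach (the credentials ID below is hypothetical, and this assumes a Credentials Binding plugin version with Workflow support), a stored username/password pair can be bound to environment variables and consumed by curl inside a sh step:

// 'jenkins-api-login' is a hypothetical username/password credentials ID
withCredentials([[$class: 'UsernamePasswordMultiBinding',
                  credentialsId: 'jenkins-api-login',
                  usernameVariable: 'APIUSER',
                  passwordVariable: 'APIPASS']]) {
    // Single quotes: the secrets are expanded by the shell, not by Groovy,
    // so they never appear in the build log
    sh 'curl -s -u "$APIUSER:$APIPASS" http://localhost:8080/whoAmI'
}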
Conclusion
The Workflow Global Library provides the ability to share common libraries (Groovy scripts) across many workflow jobs to help keep workflows DRY.
In the original version of this post, we used the Groovy built-in HTTP libraries to make the HTTP call. On further investigation and testing of failure scenarios, we found this to be brittle when the master restarts. The underlying reason for this behaviour is that ALL the workflow logic runs within the process space of the master, including any logic operating within a node(){} block.

To ensure survivability of the workflow that is executing during a master restart, it is necessary to perform the bulk of the work inside workflow steps. These are designed to be passed to a separate executor for execution rather than executed within the master’s process space. We have therefore moved the HTTP call, which may take an amount of time to run, to be called within a sh step using curl.

We have still enabled the packaging of this into a shared library module, thus simplifying the workflow script and promoting re-use. It is necessary to ensure that subsequent curl requests share the cookie jar. Should you want to leverage the power of Groovy to perform such a call, this should be done by calling a separate Groovy script from an sh or bat step.

References
Jenkins Workflow - Getting Started by Udaypal Aarkoti







Introducing the JUC Speaker Blog Series for JUC U.S. West

Fri, 08/07/2015 - 21:08
The Jenkins Community has introduced a blog series for the Jenkins User Conference speakers at the upcoming event in Santa Clara, CA. The first post is by Carlo Cadet from PerfectoMobile about his talk titled "Fast Feedback: Jenkins + Functional and Non-Functional Mobile Application Testing, Without Pulling Your Hair Out!"

You can find the blog series on the Jenkins-CI blog and you can attend Carlo’s talk in room MR2 at 11:30am on September 2nd. Registrations are currently $299 and the community is offering a Buy-One-Get-One FREE deal! Take advantage of this offer today and register here.

See you at JUC U.S. West!

JUC Session Blog Series: Mario Cruz, JUC Europe

Fri, 07/31/2015 - 19:06
In his talk “From DevOps to NoOps”, Mario Cruz, CTO of Choose Digital, talked about automating manual tasks to achieve a NoOps organization, using Jenkins, AWS and Docker.

For Mario, NoOps is not about the elimination of Ops; it is the automation of manual processes, the end state of adopting a DevOps culture - or, quoting Forrester, "a DevOps focus on collaboration evolves into a NoOps focus on automation." At Choose Digital, the developers own the complete process, from writing code through production deployment. By using AWS Elastic Beanstalk and Docker they can scale up and down automatically. Docker and containers are the best vehicle for adopting DevOps, making it possible to run the same artifact on your machine and in production.

Mario mentioned that Jenkins is a game changer for continuous build, deploy and testing, and for closing the feedback loop. They use DEV@Cloud for the same reason they use AWS: it is not their core business, and they prefer to use services from companies with the expertise to run anything not core to the business. On their journey to adopt Docker they developed several Docker-related plugins, which they are now discarding in favor of the ones recently announced by CloudBees, like the Traceability plugin, a very important feature for auditing and compliance.



For deployment, Choose Digital uses Blue-Green deployment, creating a new environment and updating Route53 CNAMEs once the new deployment passes tests run by Jenkins, and even running Netflix's Chaos Monkey. With Beanstalk's environment URL swapping, both the old and new deployments can run at the same time, and reverting a broken deployment is just a matter of switching the CNAME back to the previous URL without needing a new deployment. The old environments are kept for around 2 days to account for caching and to ensure all users are running on the new environment.


Only parts of the stack are replaced at a time: replacing the whole stack at peak time takes around 34 minutes, so only small parts of the AWS Elastic Beanstalk stack are deployed, in order to deploy faster and more often. For some complex cases, such as database migrations, features are turned off by default and turned on at low-traffic hours.

After deployment, logs and metrics are important; for example, using New Relic has proven very helpful for understanding performance issues. Using these metrics, the deployments are scaled automatically from around 25 to 250 servers at peak time.
We hope you enjoyed JUC Europe! Here is the abstract and link to the video recording of his talk.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

JUC Session Blog Series: Gus Reiber and Tom Fennelly, JUC Europe

Fri, 07/31/2015 - 19:03
Evolving the Jenkins UI
A two-for-one special: Gus and Tom presented the Jenkins UI from the Paleolithic past to the shining future. Clearly comfortable with their material, they mixed jokes with demos and some serious technical meat. They spoke with candor and frankness about the current limits of the UI and how CloudBees, Inc. is working with the community to overcome them.

Tom took a divisive approach, specifically dividing monolithic CSS, JS, and page structure into clean, modular elements. “LESS is more” was a key point, using LESS to divide CSS into separate imports and parameterize it. He also explained work to put a healthy separation in the previously sticky relationship between plugin functionality and front-end code.


Tom showed off a completely new themes engine built upon these changes. This offers each installation and user the ability to customize the Jenkins experience to their personal aesthetics or improve accessibility, such as for the visually impaired. Gus brought a vision for a clean, dynamic UI offering a streamlined user interface. His goal was to aim for “third level” changes which enable completely new uses. For example, views that can become reports. Also he announced a move towards scalable layouts for mobile use, so “I know if I need to come back early [from lunch] because my build is broken or if I can have a beer over lunch.”

Radical change comes with risk; to balance this, Gus repeatedly solicited community feedback to check whether changes work well. Half-seriously, he mentioned previously going as far as giving out his mother’s phone number to make it easy for people to reach out.

Wrapping up, questions showed that while the new UI changes aren’t ready yet, CloudBees, Inc. is actively engaging with the community to shape the new look and feel of Jenkins, and the future is promising!
We hope you enjoyed JUC Europe! Here is the abstract for Tom and Gus's talk, "Evolving the Jenkins UI." Here are the slides for their talk and here is the video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

JUC Session Blog Series: Daniel Spilker, JUC Europe

Fri, 07/31/2015 - 18:59
Daniel Spilker's talk "Configuration as Code - The Job DSL Plugin" continued a theme from Kohsuke's keynote speech: Maintaining a large number of jobs through the Jenkins UI is difficult. No single job builds everything; you may even need complex build pipelines for every branch. This means: Lots of copy & paste between jobs and manual editing of text areas in the Jenkins UI. And if you miss important options behind 'Advanced…' buttons, you'll need a few attempts to get it right!

What you want instead are ways to set up new build pipelines quickly, to be able to refactor jobs without hassle, to have traceability of job configuration changes and to even be able to use IDEs for any scripts you're writing.

Since this is a common problem, several plugins exist that address some of these problems: the Job Config History plugin allows you to determine who changed a job configuration; the Literate plugin stores the configuration in an SCM and can build multiple branches; the Template Project plugin allows reusing parts of job configurations in other jobs; the Workflow plugin makes it easy to build job pipelines. And then of course there is the Job DSL plugin, which aims to accomplish all of the goals mentioned above.

The Job DSL Plugin provides a DSL (domain specific language) based on Groovy that makes UI options available as keywords and functions. For example, a simple job definition could look like the following:

job('job-dsl-plugin') {
    scm {
        github('jenkinsci/job-dsl-plugin')
    }
    steps {
        gradle('clean build')
    }
    publishers {
        archiveArtifacts('**/job-dsl.hpi')
    }
}

To use this DSL in Jenkins, you need to install the Job DSL Plugin and set up a so-called 'seed' job: A freestyle project that has a 'Process Job DSL' build step. When you build this seed job the specified Job DSL (e.g. stored in SCM) will be evaluated. In the example above, a job 'job-dsl-plugin' will be created if necessary, and then configured to check out from GitHub, build using Gradle, and archive a generated artifact.

The Job DSL plugin has a large user community of 70 committers that so far have created 500 pull requests and added support for 125 other plugins in the DSL, like the Gradle and Git plugins shown in the example above. Despite its name the plugin can also be used to generate views and items that aren't jobs, such as folders from the CloudBees Folders Plugin. If a plugin is not specifically supported by Job DSL, users can still make use of it by generating appropriate XML for the job's config.xml.

Since the DSL is based on Groovy, users can use features such as variables, loops and conditions in their DSL. Users can even define functions and classes in their scripts. Any Java library can be used as well, provided it was made available in the job workspace, e.g. by a simple Gradle build script, before executing the Job DSL script.
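For instance, here is a hypothetical sketch (the repository URL and branch names are invented for illustration) that uses a plain Groovy loop to generate one build job per branch:

// Generate a build job for each branch of a repository
def branches = ['master', 'release-1.x', 'develop']
branches.each { branchName ->
    job("my-app-build-${branchName}") {
        scm {
            git('https://github.com/example/my-app.git', branchName)
        }
        steps {
            gradle('clean build')
        }
    }
}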

Advanced features of Job DSL include the ability to use IntelliJ IDEA to write DSL scripts, since the 'core' part of Job DSL is a Java library, and a command-line version of Job DSL that generates job configurations outside Jenkins, allowing you to review changes in job configurations before applying them and to make sure what you're generating is correct.

Daniel ended the talk with some best practices, like recommending that adoption of Job DSL should happen gradually, that Job DSL scripts should always be stored in SCM to get traceability, and that smart use of Groovy will avoid repetition.

We hope you enjoyed JUC Europe! Here is the abstract for Daniel's talk, "Configuration as Code: The Job DSL Plugin". Here are the slides for his talk, and here is the video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

JUC Session Blog Series: Andrew Phillips, JUC U.S. East

Fri, 07/31/2015 - 18:10
How to Optimize Automated Testing with Everyone's Favorite Butler
by Andrew Phillips, XebiaLabs
Andrew set the tone of the presentation with a key point: testing needs to be automated. The session brought out various implications and challenges in test automation and looked at the available tooling.

The talk covered various aspects of test automation, with the motivation not just to automate quality checks but to analyze results, re-align tests to keep them relevant and effective, and map them to the core use cases.

Test automation best practices were discussed: parallelizing tests with orchestrated Jenkins pipelines, using ephemeral test slaves, and keeping test jobs simple and self-contained with their test-data dependencies.

Andrew also covered the use of Jenkins to invoke test tools via plugins and scripts sourced from SCM.

Andrew addressed "Testing 101" (in today’s automation world) and walked through the shift-left paradigm in quality with the following aspects:
  • Testers are developers
  • Test code = production code
    • Conway’s law
    • Measure quality
  • Link tests to use-cases
  • Radical parallelization
      • Fail faster...
      • Kill the nightlies
Then he covered Jenkins as the automation engine to orchestrate tests with the following well-known plugins:
  • Multi-job
  • Workflow
  • Copy artifact
Making sense of scattered test results is still a challenge… There still isn’t enough tooling, or a solution to give you a “quality OK” signal every time something changes:
  • Many test tools for each test level, but no single place to validate whether “this release” is good to go live!
  • Traceability, requirement coverage
    • Minimize MTTR
    • Have I tested this enough
    • Support for failure analysis
We hope you enjoyed JUC U.S. East! If you would like to learn more about this talk, here is the abstract for "How to Optimize Automated Testing with Everyone's Favorite Butler". And here are the slides and video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.

Clustering Jenkins with Kubernetes in the Google Container Engine

Thu, 07/23/2015 - 13:00
While we’ve already discussed how to use the Google Container Engine to host elastic Jenkins slaves, it is also possible to host the master itself in the Google Container Engine. Architecting Jenkins in this way lets Jenkins installations run with less friction and reduces an administrator’s burden by taking advantage of the Google Container Engine’s container scheduling, health-checking, resource labeling and automated resource management. Other administrative tasks, like container logging, can also be handled by the Container Engine, and the Container Engine itself is a hosted service.

What is Kubernetes and the Google Container Engine?
Kubernetes is an open-source project by Google which provides a platform for managing Docker containers as a cluster. Like Jenkins, Kubernetes’ orchestrating and primary node is known as the “master”, while a node which hosts the Docker containers is called a “minion”. “Pods” host containers/services on the minions and are defined as JSON pod files. (Source: http://blog.arungupta.me/)

The Google Cloud Platform hosts the Google Container Engine, a Kubernetes-powered platform for hosting and managing Docker containers, as well as the Google Container Registry, a private Docker image registry hosted on the Google Cloud Platform. The underlying Kubernetes architecture provisions Docker containers quickly, while the Container Engine creates and manages your Kubernetes clusters.
Automating Jenkins server administration
Google Container Engine is a managed service that uses Kubernetes as its underlying container orchestration tool. Jenkins masters, slaves, and any containerized application running in the Container Engine will benefit from automatic health-checks and restarts of unhealthy containers. The how-to on setting up Jenkins masters in the Google Container Engine is outlined in full here.

The gist is that the Jenkins master runs from a Docker image and is part of a Kubernetes Jenkins cluster. The master itself must have its own persistent storage where the $JENKINS_HOME, with all of its credentials, plugins and job/system configurations, can be stored. This separation of master and $JENKINS_HOME into two locations allows the master to be fungible, and therefore easily replaced should it go offline and need to be restarted by Kubernetes. The important “guts” that make a master unique all exist in the $JENKINS_HOME and can be mounted to the new master container on demand. Kubernetes’ own load balancer then handles the re-routing of traffic from the dead container to the new one.

The Jenkins master itself is defined as a Pod (raw JSON here). This is where the ports for slave/HTTP requests, the Docker image for the master, the persistent storage mount and the resource label (“jenkins”) can all be configured.
The master will also need 2 services to run to ensure it can connect to its slaves and answer HTTP requests without needing the exact IP address of the linked containers (a minimal sketch of one such definition follows the list):
  • service-http - defined as a JSON file in the linked repository, allows HTTP requests to be routed to the correct port (8080) in the Jenkins master container’s firewall.
  • service-slave - defined in the linked JSON file, allows slaves to connect to the Jenkins master over port 50000.
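For illustration, here is a minimal sketch of what the service-http definition might look like in the Kubernetes v1 API; the selector label is an assumption here, and the authoritative JSON files live in the linked repository:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "service-http" },
  "spec": {
    "selector": { "name": "jenkins" },
    "ports": [ { "name": "http", "port": 80, "targetPort": 8080 } ]
  }
}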


Where do I start?
  1. The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Instructions on how to set up a Jenkins master in the Google Container Engine are available on GitHub.
  3. The Google Container Engine offers a free trial.
  4. The Google Container Registry is a free service.
  5. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish Plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin



Tracy Kennedy
Associate Product Manager, CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

On-demand Jenkins slaves with Kubernetes and the Google Container Engine

Thu, 07/23/2015 - 13:00
In a previous series of blogs, we covered how to use Docker with Jenkins to achieve true continuous delivery and improve existing pipelines in Jenkins.

The CloudBees team and the Jenkins community have now also created the Kubernetes plugin, allowing Jenkins slaves to be built as Docker images and run in Docker hosts managed by Kubernetes, either on the Google Cloud Platform or on a more local Kubernetes instance. These elastic slaves are then brought online as Jenkins schedules jobs for them and destroyed after their builds are complete, ensuring masters have steady access to clean workspaces and minimizing builds’ resource footprint.
What is Kubernetes and the Google Container Engine?
Kubernetes is an open-source project by Google which provides a platform for managing Docker containers as a cluster. Like Jenkins, Kubernetes’ orchestrating and primary node is known as the “master”, while a node which hosts the Docker containers is called a “minion”. “Pods” host containers/services on the minions and are defined as JSON pod files. (Source: http://blog.arungupta.me/)

The Google Cloud Platform hosts the Google Container Engine, a Kubernetes-powered platform for hosting and managing Docker containers, as well as the Google Container Registry, a private Docker image registry hosted on the Google Cloud Platform. The underlying Kubernetes architecture provisions Docker containers quickly, while the Container Engine creates and manages your Kubernetes clusters.
Elastic, custom, and clean: Kubernetes slaves
As the demand on a Jenkins master increases, often so too do the build resources required. Many organizations architect for this projected growth by ensuring that their build/test environments are fungible, and therefore easily replaced and templated (e.g. as Docker images). Such fungibility makes slave resources highly scalable and resilient should some go offline or new ones need to be created quickly or automatically.

Kubernetes allows Jenkins installations to leverage any of their Docker slave images as templates for on-demand slave instances, which Jenkins can ask Kubernetes to launch as needed. The Kubernetes plugin now supports launching these slaves in any Kubernetes instance, including the Google Cloud Platform’s Container Engine.

Once a Kubernetes Pod running the slave container is deployed, the Jenkins jobs requesting that specific slave via traditional labels are built inside the Pod’s slave container. Kubernetes then brings the slave’s Pod offline after its build completes.
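From the job's point of view nothing special is required: a build simply requests the slave label configured for the Kubernetes cloud. A tiny Workflow sketch, assuming a hypothetical label and a Maven build:

// 'kubernetes-slave' is whatever label was assigned to the Kubernetes slave template
node('kubernetes-slave') {
    // Runs inside the slave container of a freshly provisioned Pod,
    // which is torn down once the build completes
    sh 'mvn -B clean verify'
}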

Where do I start?
  1. The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. The Google Container Engine offers a free trial.
  3. The Google Container Registry is a free service.
  4. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish Plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin



Tracy Kennedy
Associate Product Manager, CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

Jenkins Container Support Juggernaut Arrives at Kubernetes, Google Container Registry

Wed, 07/22/2015 - 17:11
TL;DR: Jenkins now publishes Docker containers to the Google Container Registry. Use Kubernetes to run isolated containers as slaves in Jenkins.

Last month, I wrote about exciting news with Jenkins, namely its support for Docker. This month, I am happy to announce that Jenkins continues its march for container technology support by providing support for Kubernetes.
Overview of all the technology components in this blog:
Kubernetes
Kubernetes is a system to help manage a cluster of Linux containers as a single system. Kubernetes is an open source project that was started by Google and is now supported by various companies such as Red Hat, IBM and others.

Kubernetes and Docker
As teams graduate beyond simple use cases with Docker, they realise that containers are not really meant to be deployed as a single unit. The next question is: how do you start these containers across multiple hosts, and how can these containers be grouped together and treated as a single unit of deployment? This is the use case that Kubernetes solves.

Google Container Registry
The Container Registry is a service by Google to securely host, share and manage private container repositories, and is part of the Google Container Engine service.
Interplay of these technology pieces
Kubernetes, Google Container Registry, Docker and Jenkins
While Kubernetes focuses on the deployment side of Docker, Jenkins focuses on the entire lifecycle of moving your Docker containers from development to production. If a team builds a CD pipeline, the pipeline is managed through Jenkins, which moves the containers through the pipeline (Dev->QA->Prod), with the containers finally deployed using Kubernetes. Thus, the four technologies make for a powerful combination for building CD pipelines.

Kubernetes and Jenkins announcement
With Docker, I talked about 2 meta-use cases:
  • Building CD pipelines with Docker and 
  • Using Docker containers as Jenkins slaves.



Today, the Jenkins community brings both stories to Kubernetes.

Use case 1: Building CD pipelines with Google Container Registry
The first use case enables teams to work with the Google Container Registry (GCR). The community has taken the Docker Build and Publish plugin and extended it so that builds can publish containers to GCR. Details are on this blog.

Use case 2: First-class support for Jenkins Workflow
Jenkins Workflow is fast becoming the standard way to build real-world pipelines with Jenkins. Build managers can use the Workflow DSL to build these pipelines. The community has provided support for Kubernetes by adding a kubernetes DSL that launches a build within a Kubernetes cluster.

Use case 3: Running Docker containers as Jenkins slaves through Kubernetes
One of the common issues in Jenkins is isolating slaves. Today, if an errant build contaminates the build machine, it may impact downstream builds. If these slaves are running as Docker containers, any “leakage” from previous builds is eliminated. With the Kubernetes plugin and the Docker Custom Build Environment plugin, Jenkins can get a build slave from Kubernetes and run builds within the containers.

What’s Next?
The CloudBees and Google teams have collaborated on these plugins and you can expect to see more efforts to support more use cases between Jenkins and Kubernetes. Some of these use cases involve piggy-backing on the Docker support released by the community (for example, the Docker Traceability and Docker Notifications plugins).
If you are a developer and want to contribute to this effort, reach out on the Jenkins developer alias (hint: talk to Nicolas DeLoof ;-))

Closing Thoughts
The OSS community has innovated in the last couple of months: they have quickly added support for Docker and Kubernetes and have established Jenkins as the premier way to build modern, real-world continuous delivery pipelines.
I hope you have fun playing with all the goodies just released.

Where do I start?





Harpreet Singh
Vice President of Product Management
CloudBees

Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter.

Orchestrating deployments with Jenkins Workflow and Kubernetes

Wed, 07/22/2015 - 13:00
In a previous series of blogs, we covered how to use Docker with Jenkins to achieve true continuous delivery and improve existing pipelines in Jenkins. While deployments of single Docker containers were supported with this initial integration, the CloudBees team and Jenkins community’s most recent work on Jenkins Workflow will also let administrators launch and configure clustered Docker containers with Kubernetes and the Google Cloud Platform.
What is Workflow?
Jenkins Workflow is a new plugin which allows Jenkins to treat continuous delivery as a first class job type in Jenkins. Workflow allows users to define workflow processes in a single place, avoiding the need to coordinate flows across multiple build jobs. This can be particularly important in complex enterprise environments, where work, releases and dependencies must be coordinated across teams. Workflows are defined as a Groovy script either within a Workflow job or checked into the workspace from an external repository like Git.

Docker for simplicity
In a nutshell, the CloudBees Docker Workflow plugin adds a special entry point named docker that can be used in any Workflow Groovy script. It offers a number of functions for creating and using Docker images and containers, which in turn can be used to package and deploy applications or as build environments for Jenkins.

Broadly speaking, there are two areas of functionality: using Docker images of your own or created by the worldwide community to simplify build automation; and creating and testing new images. Some projects will need both aspects and you can follow along with a complete project that does use both: see the demonstration guide.
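For a flavor of the API, here is a minimal sketch; the image name and build commands are placeholders, not taken from the demonstration guide:

// Use a community Maven image as a throwaway build environment
docker.image('maven:3.3.3-jdk-8').inside {
    sh 'mvn -B clean verify'  // runs inside the Maven container, against the workspace
}
// Build an image from the Dockerfile in the current directory and tag it
def appImage = docker.build('examplecorp/my-app')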
Jenkins Workflow Deployments with Kubernetes
As mentioned in the previous blog, the Google Cloud Platform also supports pushing Docker images to the Google Container Registry and deploying them to the Google Container Engine with Kubernetes.
Jenkins Workflow now also supports using the Google Cloud Platform’s Container Registry as a Docker image registry. Additionally, it exposes a few new Kubernetes and Google Cloud Platform-specific steps to complement Workflow’s existing Docker features. These steps allow Jenkins to securely connect to a given Kubernetes cluster, remotely instruct the Kubernetes cluster manager to launch a given Docker image as a container in a Kubernetes Pod, change existing settings like the target cluster or context, and set the target number of replicas in a cluster.

Where do I start?
  1. The Workflow plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. The CloudBees Docker Workflow plugin is another open-source plugin available in the OSS update center or as part of the CloudBees Jenkins Platform.
  3. The Google Cloud Registry Auth plugin is an open-source plugin developed by Google, so it available to download from the open source update center or packaged as part of the CloudBees Jenkins Platform.
  4. The Kubernetes plugin is another open-source plugin  available from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  5. The Google Container Engine offers a free trial.
  6. The Google Container Registry is a free service.
  7. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish Plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin




Tracy Kennedy
Associate Product Manager, CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

Secure application deployments with Jenkins, Kubernetes, and the Google Cloud Platform

Wed, 07/22/2015 - 13:00
In a previous series of blogs, we covered how to use Docker with Jenkins to achieve true continuous delivery and improve existing pipelines in Jenkins.

Docker can be used in conjunction with Jenkins to provide customized build and runtime environments for testing or production, trigger application builds, automate application packaging/releases, and deploy traceable containers. The new Jenkins Workflow plugin can also programmatically orchestrate these CD pipelines, while the CloudBees Jenkins Platform builds on the above to give Jenkins masters shareable Docker build resources. Together, these features allow a Jenkins administrator or user to easily set up a CD pipeline and ensure that build/test environments are fungible, and therefore highly scalable.

The CloudBees team and the open-source community have enhanced this existing Docker story by adding Kubernetes and Google Container Registry support to Jenkins, giving Jenkins administrators the ability to leverage both Google’s container management tool and cloud container platform to run a highly-scalable and managed runtime for Jenkins.
Cookie-cutter environments and application packaging
The versatility and usability of Docker has made it a popular choice among DevOps-driven organizations. It has also made Docker an ideal choice for creating the standardized and repeatable environments that an organization needs, both for creating identical testing and production environments and for packaging portable applications.
If an application is packaged in a Docker image, testing and deploying it is a matter of creating a container from that image and running tests against the application inside. If the application passes the tests, the image should be stored in a registry and eventually deployed to production.
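A minimal sketch of that test-then-ship cycle, assuming a Dockerfile in the repository and an application answering on port 8080 (names and endpoints are placeholders):

    node {
        // Hypothetical application repository containing a Dockerfile
        git url: 'https://github.com/example/my-app.git'
        def image = docker.build("examplecorp/my-app:${env.BUILD_NUMBER}")
        // Run the application in a throwaway container and test against it
        image.withRun('-p 8080:8080') { container ->
            // Assumes the slave is the Docker host, so the mapped port is local
            sh 'curl --retry 10 --retry-delay 3 http://localhost:8080/health'
        }
        // Only an image that passed its tests gets pushed to the registry
        image.push()
    }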
Leveraging the Google Container Registry
The Jenkins community has now added support for releasing applications as Docker images to the Google Container Registry, a free service offered by Google, and using Google’s own services to securely deploy applications across their multi-region datacenters.  
The Google Container Registry encrypts all Docker images and allows administrators to restrict push/pull access with ACLs on projects and storage buckets. Authentication is performed with their Cloud Platform OAuth over SSL, and Jenkins now supports this via the Google Container Registry Auth plugin developed by Google.
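For Workflow-based jobs, these credentials can be used from the CloudBees Docker Workflow plugin's registry block; a sketch, where the project name and credential ID are placeholders:

    node {
        // Hypothetical image name under a Google Cloud project
        def image = docker.build('gcr.io/my-project/my-app:1.0')
        // 'gcr:...' selects Google OAuth credentials stored in Jenkins
        docker.withRegistry('https://gcr.io', 'gcr:my-google-credential') {
            image.push()
        }
    }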
The CloudBees Docker Build and Publish plugin adds a new build step to Jenkins jobs for building and packaging applications into Docker images, then publishing those images to your registry of choice with the Google OAuth credentials mentioned above.
Securely deploying with the Google Cloud Platform
The Docker Build and Publish plugin doesn't require the Kubernetes plugin to integrate with the Google Container Registry. However, installing both unlocks the option of using the Google Cloud Platform and its underlying Kubernetes cluster to securely deploy Docker images as containers.
The Google Cloud Platform supports directly deploying Docker images from their Container Registry to their Container Engine. Deployments can be to particular regions and clusters, and they happen on a configured schedule. Once deployed, the application can then be run as a highly-available cluster. Kubernetes will perform regular health-checks on the application instances, restarting them as necessary.
Source: http://googlecloudplatform.blogspot.com/2015_01_01_archive.html

Where do I start?
  1. The CloudBees Docker Build and Publish plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. The Google Cloud Registry Auth plugin is an open-source plugin developed by Google, so it is available to download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  3. (Optional) The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  4. The Google Container Engine offers a free trial.
  5. The Google Container Registry is a free service.
  6. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:




    Tracy Kennedy
    Associate Product Manager, CloudBees
    Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
    Categories: Companies

    JUC Session Blog Series: Robert Fach, JUC Europe

    Tue, 07/21/2015 - 17:09
    A Reproducible Build Environment with Jenkins - Robert Fach, TechniSat Digital GmbH
    TechniSat develops and produces consumer and information technology products.

    In this talk Robert introduced what build reproducibility is and explained how TechniSat has gone about achieving it.

    What is binary reproducibility? The same "inputs" should always produce the same outputs: today, tomorrow, next month and in 15 years' time! A 15-20 year support horizon is what TechniSat needs to serve the automotive industry.

    TechniSat has a rare constraint: the customer can dictate which modules a feature may impact, but a release contains all modules, and they are all rebuilt and tested, so you need to ensure that unchanged modules are not affected.

    You need to identify and track everything that has an influence on the inputs:
    • Source code, toolchains, build system validation, and everything else…
    The benefit of a reproducible build environment is a new level of trust from the customer: you are tracking things precisely enough to know what went into each build, so you can support them in the future (for example, you can ship a bug fix without bringing any extra variability into the build). It can also be used to find issues in the builds: artifacts that should be binary-identical but are not, such as random GUIDs created and embedded during the build, can be detected.
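    One simple way to put this to work is to build the same sources twice and compare checksums; a sketch of such a check as a Jenkins Workflow script, with a placeholder repository and build commands:

        node {
            // Hypothetical firmware repository
            git url: 'https://github.com/example/firmware.git'
            // Build the same sources twice into separate output directories
            sh 'make OUTDIR=build1 all && make OUTDIR=build2 all'
            // Sorted checksum lists make the two trees directly comparable
            sh '(cd build1 && sha256sum * | sort) > sums1.txt'
            sh '(cd build2 && sha256sum * | sort) > sums2.txt'
            // diff exits non-zero on any mismatch, failing the build and
            // flagging hidden variability (timestamps, GUIDs, orderings)
            sh 'diff sums1.txt sums2.txt'
        }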

    Why is it hard?
    Source code tracking: managing sources (tags and so on) is an easy, bread-and-butter method, but what if the source control system changes over time? You need to make sure that the SCM stays compatible over time.

    OS tracking: the file system matters for a large code base with thousands of files; some file systems may not perform well, but changing file systems can change file ordering, which can affect the build. Locale issues can affect the build as well (macros based on __DATE__, __TIME__, etc.).

    Compiler: picking up a new version of the compiler for bug fixes may bring in new libraries or optimizations (such as branch prediction) that could change the binary. You need to know about anything based on heuristics in the compiler, and the switches that control those features so you can disable them, since after the fact it can be too late! You can also seed any random generation (for example, GCC's -frandom-seed switch, which affects anonymous namespace mangling).
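    A sketch of pinning a few of these variability sources in a build step; -frandom-seed, LC_ALL and TZ are real GCC/locale controls, while the project and file names are illustrative:

        node {
            // Pin locale and timezone so environment-dependent output is fixed
            withEnv(['LC_ALL=C', 'TZ=UTC']) {
                // -frandom-seed makes GCC's internally generated names deterministic;
                // macros like __DATE__ still need to be avoided or overridden
                sh 'gcc -O2 -frandom-seed=myproject-1.0 -c module.c -o module.o'
            }
        }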

    Dealing with complexity and scale
    As you scale out and distribute the build, it needs to be tracked and controlled even more.

    This adds a requirement for a "release manager": a system that controls what is built, how and where (the release specification). This system maps the requirements onto Jenkins jobs, which use a special plugin to control the job configuration (to pass variables to source control, scripts, etc.). Each Jenkins job maps to a Jenkins slave.

    For each release, the release manager creates a new release environment. This includes a brand new Jenkins master configured with the slaves that are required for the build. The slaves are mapped onto infrastructure, which currently comprises SQA systems, an artefact repository, a KVM cluster (with OpenStack coming soon) and individual KVM hosts.

    After the release, the infrastructure is archived (OS, tools, Jenkins, etc.) and the Salt commands used are recorded; this provides one way to reproduce the environment. The specification provides another way to recreate it (but it is not always reliable, as something may have been missed). To build new changes, you can clone an archived set of infrastructure so Jenkins can show trend history.

    Performance lessons learned (a few miscellaneous tips from the end of the talk):
    • Use tmpfs inside VMs for file systems that need fast random I/O.
    • Try an NFS read-only cache to save network bandwidth.
    • Put the Jenkins workspace in a dedicated LVM volume on the host rather than on the network.
    To learn more, you can view Robert's slides and video from his talk.

    We hope you enjoyed JUC Europe!
    Here is the abstract for Robert's talk, "A Reproducible Build Environment with Jenkins," and here are the slides.
    If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
    Categories: Companies