CloudBees' Blog - Continuous Integration in the Cloud

CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

JUC Session Blog Series: Mario Cruz, JUC Europe

Fri, 07/31/2015 - 19:06
In his session “From DevOps to NoOps,” Mario Cruz, CTO of Choose Digital, talked about automating manual tasks to achieve a NoOps organization using Jenkins, AWS and Docker.

For Mario, NoOps is not about the elimination of Ops; it is the automation of manual processes, the end state of adopting a DevOps culture, or, quoting Forrester, "a DevOps focus on collaboration evolves into a NoOps focus on automation." At Choose Digital, the developers own the complete process, from writing code through production deployment. By using AWS Elastic Beanstalk and Docker they can scale up and down automatically. In his view, Docker and containers are the best vehicle for adopting DevOps, because they let you run the same artifact on your machine and in production.

Mario mentioned that Jenkins is a game changer for continuous build, deploy and test, and for closing the feedback loop. They use DEV@Cloud for the same reason they use AWS: running it is not their core business, and they prefer to buy services from companies with the expertise to run anything that isn't core. On their journey to adopt Docker they developed several Docker-related plugins, which they are now discarding in favor of the ones recently announced by CloudBees, like the Docker Traceability plugin, a very important feature for auditing and compliance.



For deployment, Choose Digital uses blue-green deployment: they create a new environment and update Route53 CNAMEs once the new deployment passes tests run by Jenkins, even running Netflix Chaos Monkey against it. With Elastic Beanstalk's environment URL swapping, both old and new deployments can run at the same time, and reverting a broken deployment is just a matter of switching the CNAME back to the previous URL, with no new deployment needed. The old environments are kept for around two days to account for caching and to ensure all users have moved to the new environment.
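The CNAME cutover itself is a single Route53 record change. As a hedged illustration (the hosted zone, record name and environment URL below are invented, not Choose Digital's), the change batch sent to the Route53 API for such a blue-green switch might look like:

```json
{
  "Comment": "Cut over production traffic to the green environment",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "green-env.elasticbeanstalk.com" }
        ]
      }
    }
  ]
}
```

Rolling back is the same operation with the old environment's URL as the value, which is why a broken deployment can be reverted without redeploying anything.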


Only parts of the stack are replaced on each deployment: replacing the whole AWS Elastic Beanstalk stack at peak time takes around 34 minutes, so deploying small parts lets them ship faster and more often. For some complex cases, such as database migrations, features are shipped turned off by default and turned on during low-traffic hours.

After deployment, logs and metrics are important; for example, New Relic has proven very helpful for understanding performance issues. Using these metrics, the deployments are scaled automatically from around 25 to 250 servers at peak time.
We hope you enjoyed JUC Europe! Here is the abstract and link to the video recording of his talk.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
Categories: Companies

JUC Session Blog Series: Gus Reiber and Tom Fennelly, JUC Europe

Fri, 07/31/2015 - 19:03
Evolving the Jenkins UI
In a two-for-one special, Gus and Tom presented the Jenkins UI from the Paleolithic past to the shining future. Clearly comfortable with their material, they mixed jokes with demos and some serious technical meat. They spoke with candor about the current limits of the UI and how CloudBees, Inc. is working with the community to overcome them.

Tom took a divisive approach: specifically, dividing monolithic CSS, JS and page structure into clean, modular elements. “LESS is more” was a key point: using LESS to split the CSS into separate imports and parameterize it. He also explained work to put healthy separation into the previously sticky relationship between plugin functionality and front-end code.
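To illustrate the approach (the file and variable names here are invented, not taken from the Jenkins codebase), LESS lets a monolithic stylesheet be broken into parameterized, importable modules:

```less
// theme.less - central variables that a theme can override
@primary-color: #335061;
@font-size-base: 14px;

// main.less - composes the stylesheet from modular imports
@import "theme.less";
@import "buttons.less";

body {
  font-size: @font-size-base;
  color: darken(@primary-color, 10%);
}
```

Because values like @primary-color live in one place, a theme only has to redefine the variables rather than patch CSS rules scattered across the page.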


Tom showed off a completely new themes engine built on these changes. It offers each installation and user the ability to customize the Jenkins experience to their personal aesthetics, or to improve accessibility, for example for the visually impaired. Gus brought a vision for a clean, dynamic, streamlined user interface. His goal was to aim for “third level” changes that enable completely new uses: for example, views that can become reports. He also announced a move towards scalable layouts for mobile use, so that “I know if I need to come back early [from lunch] because my build is broken or if I can have a beer over lunch.”

Radical change comes with risk; to balance it, Gus repeatedly solicited community feedback to see whether changes work well. Half-seriously, he mentioned previously going as far as giving out his mother’s phone number to make it easy for people to reach out.

Wrapping up, questions showed that while the new UI changes aren’t ready yet, CloudBees, Inc. is actively engaging with the community to shape the new look and feel of Jenkins, and the future is promising!
We hope you enjoyed JUC Europe! Here is the abstract for Tom and Gus's talk, "Evolving the Jenkins UI." Here are the slides for their talk and here is the video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
Categories: Companies

JUC Session Blog Series: Daniel Spilker, JUC Europe

Fri, 07/31/2015 - 18:59
Daniel Spilker's talk, "Configuration as Code - The Job DSL Plugin," continued a theme from Kohsuke's keynote speech: maintaining a large number of jobs through the Jenkins UI is difficult. No single job builds everything; you may even need complex build pipelines for every branch. This means lots of copy & paste between jobs and manual editing of text areas in the Jenkins UI. And if you miss important options behind 'Advanced…' buttons, you'll need a few attempts to get it right!

What you want instead are ways to set up new build pipelines quickly, to be able to refactor jobs without hassle, to have traceability of job configuration changes and to even be able to use IDEs for any scripts you're writing.

Since this is a common problem, several plugins exist that address parts of it: the Job Config History plugin lets you determine who changed a job configuration; the Literate plugin stores the configuration in an SCM and can build multiple branches; the Template Project plugin allows you to reuse parts of job configurations in other jobs; and the Workflow plugin makes it easy to build job pipelines. And then of course there is the Job DSL plugin, which aims to accomplish all of the goals mentioned above.

The Job DSL Plugin provides a DSL (domain specific language) based on Groovy that makes UI options available as keywords and functions. For example, a simple job definition could look like the following:

job('job-dsl-plugin') {
    scm {
        github('jenkinsci/job-dsl-plugin')
    }
    steps {
        gradle('clean build')
    }
    publishers {
        archiveArtifacts('**/job-dsl.hpi')
    }
}

To use this DSL in Jenkins, you need to install the Job DSL Plugin and set up a so-called 'seed' job: A freestyle project that has a 'Process Job DSL' build step. When you build this seed job the specified Job DSL (e.g. stored in SCM) will be evaluated. In the example above, a job 'job-dsl-plugin' will be created if necessary, and then configured to check out from GitHub, build using Gradle, and archive a generated artifact.

The Job DSL plugin has a large user community of 70 committers that so far have created 500 pull requests and added support for 125 other plugins in the DSL, like the Gradle and Git plugins shown in the example above. Despite its name the plugin can also be used to generate views and items that aren't jobs, such as folders from the CloudBees Folders Plugin. If a plugin is not specifically supported by Job DSL, users can still make use of it by generating appropriate XML for the job's config.xml.

Since the DSL is based on Groovy, users can use features such as variables, loops and conditions in their DSL. Users can even define functions and classes in their scripts. Any Java library can be used as well, provided it was made available in the job workspace e.g. by a simple Gradle build script before executing the Job DSL script.
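For example, because the DSL is plain Groovy, a loop can stamp out one job per branch. This is only a sketch with invented repository and branch names:

```groovy
def branches = ['master', 'release-1.x', 'experimental']

branches.each { branch ->
    job("app-build-${branch}") {
        scm {
            github('example/app', branch)
        }
        steps {
            gradle('clean build')
        }
    }
}
```

Running the seed job would then create or update three jobs, one per branch, all from a single script kept under version control.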

Advanced features of Job DSL include the ability to write DSL scripts in IntelliJ IDEA, since the 'core' part of Job DSL is a plain Java library, and a command-line version of Job DSL that generates job configurations outside Jenkins, letting you review changes to job configurations before applying them to make sure what you're generating is correct.

Daniel ended the talk with some best practices, like recommending that adoption of Job DSL should happen gradually, that Job DSL scripts should always be stored in SCM to get traceability, and that smart use of Groovy will avoid repetition.

We hope you enjoyed JUC Europe! Here is the abstract for Daniel's talk, "Configuration as Code: The Job DSL Plugin". Here are the slides for his talk, and here is the video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
Categories: Companies

JUC Session Blog Series: Andrew Phillips, JUC U.S. East

Fri, 07/31/2015 - 18:10
How to Optimize Automated Testing with Everyone's Favorite Butler
by Andrew Phillips, XebiaLabs
Andrew set the tone of the presentation with a key point: testing needs to be automated. The session brought out various implications and challenges in test automation and looked at the available tooling.

The talk covered various aspects of test automation, with the motivation to not just automate quality checks but also analyze results, re-align tests to keep them relevant and effective, and map them to the core use cases.

Test automation best practices were discussed: parallelizing tests with orchestrated Jenkins pipelines and ephemeral test slaves, and keeping test jobs simple and self-contained within their test-data dependencies.

Andrew also covered the use of Jenkins to invoke test tools via plugins and scripts sourced from SCM.

Andrew addressed "Testing 101" (in today’s automation world) and walked through the shift-left paradigm in quality, with the following aspects:
  • Testers are developers
  • Test code = production code
    • Conway’s law
    • Measure quality
  • Link tests to use-cases
  • Radical parallelization
      • Fail faster...
      • Kill the nightlies
Then he covered Jenkins as the automation engine for orchestrating tests, using the following well-known plugins:
  • Multi-job
  • Workflow
  • Copy artifact
Making sense of scattered test results is still a challenge. There still isn’t enough tooling, or a single solution, to give you a “quality OK” signal every time something changes:
  • Many test tools for each test level, but no single place to validate whether “this release” is good to go live!
  • Traceability, requirement coverage
    • Minimize MTTR
    • Have I tested this enough
    • Support for failure analysis
We hope you enjoyed JUC U.S. East! If you would like to learn more about this talk, here is the abstract for "How to Optimize Automated Testing with Everyone's Favorite Butler". And here are the slides and video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
Categories: Companies

Clustering Jenkins with Kubernetes in the Google Container Engine

Thu, 07/23/2015 - 13:00
While we’ve already discussed how to use the Google Container Engine to host elastic Jenkins slaves, it is also possible to host the master itself in the Google Container Engine. Architecting Jenkins in this way lets Jenkins installations run with less friction and reduces an administrator’s burden by taking advantage of the Google Container Engine’s container scheduling, health-checking, resource labeling and automated resource management. Other administrative tasks, like container logging, can also be handled by the Container Engine, and the Container Engine itself is a hosted service.

What is Kubernetes and the Google Container Engine?
Kubernetes is an open-source project by Google which provides a platform for managing Docker containers as a cluster. Like Jenkins, Kubernetes’ orchestrating and primary node is known as the “master”, while a node which hosts the Docker containers is called a “minion”. “Pods”, which host containers/services on the minions, are defined as JSON pod files. (Source: http://blog.arungupta.me/)

The Google Cloud Platform hosts the Google Container Engine, a Kubernetes-powered platform for hosting and managing Docker containers, as well as the Google Container Registry, a private Docker image registry hosted on the Google Cloud Platform. The underlying Kubernetes architecture provisions Docker containers quickly, while the Container Engine creates and manages your Kubernetes clusters.
Automating Jenkins server administration
Google Container Engine is a managed service that uses Kubernetes as its underlying container orchestration tool. Jenkins masters, slaves, and any containerized application running in the Container Engine will benefit from automatic health-checks and restarts of unhealthy containers. The how-to on setting up Jenkins masters in the Google Container Engine is outlined in full here.

The gist is that the Jenkins master runs from a Docker image and is part of a Kubernetes Jenkins cluster. The master itself must have its own persistent storage, where the $JENKINS_HOME with all of its credentials, plugins and job/system configurations can be stored. This separation of the master and $JENKINS_HOME into two locations allows the master to be fungible, and therefore easily replaced should it go offline and need to be restarted by Kubernetes. The important “guts” that make a master unique all live in the $JENKINS_HOME and can be mounted into a new master container on demand. Kubernetes' own load balancer then handles re-routing traffic from the dead container to the new one.

The Jenkins master itself is defined as a Pod (raw JSON here). This is where the ports for slave/HTTP requests, the Docker image for the master, the persistent storage mount and the resource label (“jenkins”) can all be configured.
The master also needs 2 services to ensure it can connect to its slaves and answer HTTP requests without needing the exact IP addresses of the linked containers:
  • service-http - defined as a JSON file in the linked repository, allows HTTP requests to be routed to the correct port (8080) in the Jenkins master container’s firewall.
  • service-slave - defined in the linked JSON file, allows slaves to connect to the Jenkins master over port 50000.
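As a hedged sketch of what such a service definition looks like (the actual files are in the linked repository and may differ; the selector here follows the “jenkins” resource label mentioned above), a Kubernetes v1 service for the HTTP side could be:

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "service-http"
  },
  "spec": {
    "selector": {
      "name": "jenkins"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8080
      }
    ]
  }
}
```

The selector is what decouples clients from container IPs: whichever Pod currently carries the “jenkins” label receives the traffic, so a restarted master is picked up automatically.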


Where do I start?
  1. The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. Instructions on how to set up a Jenkins master in the Google Container Engine are available on GitHub.
  3. The Google Container Engine offers a free trial.
  4. The Google Container Registry is a free service.
  5. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish Plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin



Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

On-demand Jenkins slaves with Kubernetes and the Google Container Engine

Thu, 07/23/2015 - 13:00
In a previous series of blogs, we covered how to use Docker with Jenkins to achieve true continuous delivery and improve existing pipelines in Jenkins.

The CloudBees team and the Jenkins community have now also created the Kubernetes plugin, allowing Jenkins slaves to be built as Docker images and run in Docker hosts managed by Kubernetes, either on the Google Cloud Platform or on a more local Kubernetes instance. These elastic slaves are then brought online as Jenkins schedules jobs for them and destroyed after their builds are complete, ensuring masters have steady access to clean workspaces and minimizing builds’ resource footprint.
What is Kubernetes and the Google Container Engine?
Kubernetes is an open-source project by Google which provides a platform for managing Docker containers as a cluster. Like Jenkins, Kubernetes’ orchestrating and primary node is known as the “master”, while a node which hosts the Docker containers is called a “minion”. “Pods”, which host containers/services on the minions, are defined as JSON pod files. (Source: http://blog.arungupta.me/)

The Google Cloud Platform hosts the Google Container Engine, a Kubernetes-powered platform for hosting and managing Docker containers, as well as the Google Container Registry, a private Docker image registry hosted on the Google Cloud Platform. The underlying Kubernetes architecture provisions Docker containers quickly, while the Container Engine creates and manages your Kubernetes clusters.
Elastic, custom, and clean: Kubernetes slaves
As the demand on a Jenkins master increases, so too do the required build resources. Many organizations architect for this projected growth by ensuring that their build/test environments are fungible, and therefore easily replaced and templated (e.g. as Docker images). Such fungibility makes slave resources highly scalable and resilient should some go offline or new ones need to be created quickly or automatically.

Kubernetes allows Jenkins installations to leverage any of their Docker slave images as templates for on-demand slave instances, which Jenkins can ask Kubernetes to launch as needed. The Kubernetes plugin now supports launching these slaves in any Kubernetes instance, including the Google Cloud Platform’s Container Engine.

Once a Kubernetes Pod running the slave container is deployed, the Jenkins jobs requesting that specific slave via traditional labels are built inside the Pod’s slave container. Kubernetes then brings the slave’s Pod offline after its build completes.

Where do I start?
  1. The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. The Google Container Engine offers a free trial.
  3. The Google Container Registry is a free service.
  4. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish Plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin



Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Jenkins Container Support Juggernaut Arrives at Kubernetes, Google Container Registry

Wed, 07/22/2015 - 17:11
TL;DR: Jenkins now publishes Docker containers to the Google Container Registry. Use Kubernetes to run isolated containers as slaves in Jenkins.

Last month, I wrote about exciting news with Jenkins, namely its support for Docker. This month, I am happy to announce that Jenkins continues its march toward container technology support by adding support for Kubernetes.

Overview of all the technology components in this blog:
Kubernetes
Kubernetes is a system to help manage a cluster of Linux containers as a single system. It is an open-source project that was started by Google and is now supported by various companies such as Red Hat, IBM and others.

Kubernetes and Docker
As teams graduate beyond simple use cases with Docker, they realise that containers are not really meant to be deployed as a single unit. The next question is: how do you start these containers across multiple hosts, and how can these containers be grouped together and treated as a single unit of deployment? This is the use case that Kubernetes solves.

Google Container Registry
The Container Registry is a service by Google to securely host, share and manage private container repositories, and is part of the Google Container Engine service.
Interplay of these technology pieces
Kubernetes, Google Container Registry, Docker and Jenkins
While Kubernetes focuses on the deployment side of Docker, Jenkins focuses on the entire lifecycle of moving your Docker containers from development to production. If a team builds a CD pipeline, the pipeline is managed through Jenkins, which moves the containers through the pipeline (Dev -> QA -> Prod), with the containers finally deployed using Kubernetes. Thus, the four technologies make for a powerful combination for building CD pipelines.

Kubernetes and Jenkins announcement
With Docker, I talked about two meta-use cases:
  • Building CD pipelines with Docker and 
  • Using Docker containers as Jenkins slaves.



Today, the Jenkins community brings both stories to Kubernetes.

Use case 1: Building CD pipelines with the Google Container Registry
The first use case enables teams to work with the Google Container Registry (GCR). The community has taken the Docker Build and Publish plugin and extended it so that builds can publish containers to GCR. Details are in this blog.

Use case 2: First-class support for Jenkins Workflow
Jenkins Workflow is fast becoming the standard way to build real-world pipelines with Jenkins. Build managers can use the Workflow DSL to build these pipelines. The community has provided support for Kubernetes by adding a kubernetes DSL step that launches a build within a Kubernetes cluster.

Use case 3: Running Docker containers as Jenkins slaves through Kubernetes
One of the common issues in Jenkins is isolating slaves. Today, if an errant build contaminates the build machine, it may impact downstream builds. If the slaves run as Docker containers, any “leakage” from previous builds is eliminated. With the Kubernetes plugin and the Docker Custom Build Environment plugin, Jenkins can get a build slave from Kubernetes and run builds within the containers.

What’s Next?
The CloudBees and Google teams have collaborated on these plugins, and you can expect to see more efforts to support additional use cases between Jenkins and Kubernetes. Some of these use cases involve piggy-backing on the Docker support released by the community (for example, the Docker Traceability and Docker Notifications plugins).
If you are a developer and want to contribute to this effort, reach out on the Jenkins developer alias (hint: talk to Nicolas DeLoof ;-))

Closing Thoughts
The OSS community has innovated rapidly in the last couple of months: it has quickly added support for Docker and Kubernetes, and has established Jenkins as the premier way to build modern, real-world continuous delivery pipelines.
I hope you have fun playing with all the goodies just released.

Where do I start?





Harpreet Singh
Vice President of Product Management
CloudBees

Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter.
Categories: Companies

Orchestrating deployments with Jenkins Workflow and Kubernetes

Wed, 07/22/2015 - 13:00
In a previous series of blogs, we covered how to use Docker with Jenkins to achieve true continuous delivery and improve existing pipelines in Jenkins. While deployments of single Docker containers were supported with this initial integration, the CloudBees team and Jenkins community’s most recent work on Jenkins Workflow will also let administrators launch and configure clustered Docker containers with Kubernetes and the Google Cloud Platform.
What is Workflow?
Jenkins Workflow is a new plugin that allows Jenkins to treat continuous delivery as a first-class job type. Workflow allows users to define delivery processes in a single place, avoiding the need to coordinate flows across multiple build jobs. This can be particularly important in complex enterprise environments, where work, releases and dependencies must be coordinated across teams. Workflows are defined as a Groovy script, either within a Workflow job or checked out into the workspace from an external repository like Git.

Docker for simplicity
In a nutshell, the CloudBees Docker Workflow plugin adds a special entry point named docker that can be used in any Workflow Groovy script. It offers a number of functions for creating and using Docker images and containers, which in turn can be used to package and deploy applications or to provide build environments for Jenkins.

Broadly speaking, there are two areas of functionality: using Docker images (your own, or those created by the worldwide community) to simplify build automation, and creating and testing new images. Some projects will need both aspects, and you can follow along with a complete project that uses both: see the demonstration guide.
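As a minimal sketch of both areas (the repository URL, image tags and build command are illustrative, and the script assumes a node with Docker installed), a Workflow script can run its build inside a throwaway container and then package the result as a new image:

```groovy
node {
    // Check out a hypothetical application
    git url: 'https://github.com/example/app.git'

    // Use a community Maven image as the build environment
    docker.image('maven:3-jdk-7').inside {
        sh 'mvn -B clean package'
    }

    // Build a new image from the Dockerfile in the workspace and push it
    def image = docker.build('example/app:latest')
    image.push()
}
```

The build environment itself is never installed on the slave; it comes and goes with the container.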
Jenkins Workflow Deployments with Kubernetes
As mentioned in the previous blog, the Google Cloud Platform also supports pushing Docker images to the Google Container Registry and deploying them to the Google Container Engine with Kubernetes.
Jenkins Workflow now also supports using the Google Cloud Platform’s Container Registry as a Docker image registry. Additionally, it exposes a few new Kubernetes- and Google Cloud Platform-specific steps to complement Workflow’s existing Docker features. These steps allow Jenkins to securely connect to a given Kubernetes cluster, remotely instruct the Kubernetes cluster manager to launch a given Docker image as a container in a Kubernetes Pod, change existing settings like the target cluster or context, and set the target number of replicas in a cluster.

Where do I start?
  1. The Workflow plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. The CloudBees Docker Workflow plugin is another open-source plugin available in the OSS update center or as part of the CloudBees Jenkins Platform.
  3. The Google Cloud Registry Auth plugin is an open-source plugin developed by Google, so it is available to download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  4. The Kubernetes plugin is another open-source plugin available from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  5. The Google Container Engine offers a free trial.
  6. The Google Container Registry is a free service.
  7. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
    1. Docker Build and Publish Plugin
    2. Docker Slaves with the CloudBees Jenkins Platform
    3. Jenkins Docker Workflow DSL
    4. Docker Traceability
    5. Docker Hub Trigger Plugin
    6. Docker Custom Build Environment plugin




Tracy Kennedy
Associate Product Manager
CloudBees

Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Secure application deployments with Jenkins, Kubernetes, and the Google Cloud Platform

Wed, 07/22/2015 - 13:00
In a previous series of blogs, we covered how to use Docker with Jenkins to achieve true continuous delivery and improve existing pipelines in Jenkins.

Docker can be used in conjunction with Jenkins to provide customized build and runtime environments for testing or production, trigger application builds, automate application packaging/releases, and deploy traceable containers. The new Jenkins Workflow plugin can also programmatically orchestrate these CD pipelines, while the CloudBees Jenkins Platform further builds on the above to give Jenkins masters shareable Docker build resources. Taken together, these features allow a Jenkins administrator or user to easily set up a CD pipeline and ensure that build/test environments are fungible, and therefore highly scalable.

The CloudBees team and the open-source community have enhanced this existing Docker story by adding Kubernetes and Google Container Registry support to Jenkins, giving Jenkins administrators the ability to leverage both Google’s container management tool and cloud container platform to run a highly-scalable and managed runtime for Jenkins.
Cookie-cutter environments and application packaging
The versatility and usability of Docker have made it a popular choice among DevOps-driven organizations. It has also made Docker an ideal choice for creating the standardized and repeatable environments that an organization needs, both for creating identical testing and production environments and for packaging portable applications.
If an application is packaged in a Docker image, testing and deploying it is a matter of creating a container from that image and running tests against the application inside. If the application passes the tests, the image should be stored in a registry and eventually deployed to production.
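As a hedged example of such packaging (the base image, artifact path and port are invented for illustration), the application's Dockerfile can be as simple as:

```dockerfile
# Package the application together with its runtime into one portable image
FROM java:7-jre
COPY target/app.jar /opt/app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```

The same image then moves unchanged from the test environment through the registry to production, which is what makes the environments identical.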
Leveraging the Google Container Registry
The Jenkins community has now added support for releasing applications as Docker images to the Google Container Registry, a free service offered by Google, and for using Google’s own services to securely deploy applications across its multi-region datacenters.
The Google Container Registry encrypts all Docker images and allows administrators to restrict push/pull access with ACLs on projects and storage buckets. Authentication is performed with Google's Cloud Platform OAuth over SSL, and Jenkins now supports this via the Google Container Registry Auth plugin developed by Google.
The CloudBees Docker Build and Publish plugin adds a new build step to Jenkins jobs for building and packaging applications into Docker images, then publishing them to your registry of choice with the Google OAuth credentials mentioned above.

Securely deploying with the Google Cloud Platform
The Docker Build and Publish plugin doesn’t require the Kubernetes plugin to integrate with the Google Container Registry. However, installing both unlocks the option of using the Google Cloud Platform and its underlying Kubernetes cluster to securely deploy Docker images as containers.
The Google Cloud Platform supports directly deploying Docker images from its Container Registry to its Container Engine. Deployments can target particular regions and clusters, and they happen on a configured schedule. Once deployed, the application can then be run as a highly-available cluster, with Kubernetes performing regular health checks on the application instances and restarting them as necessary. (Source: http://googlecloudplatform.blogspot.com/2015_01_01_archive.html)

Where do I start?
  1. The CloudBees Docker Build and Publish plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  2. The Google Cloud Registry Auth plugin is an open-source plugin developed by Google, so it is available to download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  3. (Optional) The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
  4. The Google Container Engine offers a free trial.
  5. The Google Container Registry is a free service.
  6. Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:




    Tracy Kennedy
    Associate Product Manager, CloudBees
    Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
    Categories: Companies

    JUC Session Blog Series: Robert Fach, JUC Europe

    Tue, 07/21/2015 - 17:09
    A Reproducible Build Environment with Jenkins Robert Fach, TechniSat Digital GmbH
    TechniSat develops and produces consumer and information technology products.

    In this talk Robert introduced what build reproducibility is and explained how TechniSat has gone about achieving it.

    What is binary reproducibility? The same “inputs” should always produce the same outputs: today, tomorrow, next month and in 15 years’ time! A 15-20 year horizon is needed for TechniSat to support the automotive industry.

    TechniSat has a rare and unique constraint: the customer can dictate which modules a feature may impact, but a release contains all modules, and all of them are rebuilt and tested, so you need to ensure unchanged modules are not affected.

    You need to identify and track everything that has influence on the input.
    • Source code, toolchains and build system validation, and everything else….
    A reproducible build environment gives the customer a new level of trust that you are tracking things correctly and know what has gone into each build, so you can support them in the future (for example, shipping a bug fix without introducing any extra variability into the build). It can also be used to find issues in the builds themselves: random GUIDs created and embedded during the build can be detected by comparing what should and should not be binary identical.

    Why is it hard?

    Source code tracking: it is an easy, "bread and butter" method of managing sources (tags…), but what if the source control system changes over time? You need to make sure that the SCM stays compatible over time.

    OS tracking: file systems matter for a large code base with thousands of files. Some file systems may not perform well, and changing file systems can change file ordering, which can affect the build. Locale issues can affect the build as well (macros based on __DATE__, __TIME__, etc.)

    Compiler: picking up a new compiler version for bug fixes may bring in new libraries or optimizations (such as branch prediction) that change the binary. You need to know about anything based on heuristics in the compiler, and the switches that control those features, so you can disable them; after the fact it can be too late! You can also supply a fixed seed for any random generation (e.g. namespace mangling with -frandom-seed)

    Dealing with complexity and scale

    As you scale out and distribute the build, it needs to be tracked and controlled even more.

    This adds a requirement for a “release manager,” a system that controls what, how and where (release specification). This system maps the requirements onto the Jenkins jobs, which use a special plugin to control the job configuration (to pass variables to source control, scripts etc.). The Jenkins job maps to a Jenkins slave.

    For each release, the release manager creates a new release environment. This includes a brand new Jenkins master configured with the slaves that are required for the build. The slaves are mapped onto infrastructure, which currently comprises SQA systems, an artefact repository, a KVM cluster (with OpenStack coming soon) and individual KVM hosts.

    After the release, the infrastructure is archived (OS, tools, Jenkins, etc.), and the Salt commands used are recorded; this provides one way to reproduce the build. The release specification provides another way to recreate the environment, but it is not always reliable as something may have been missed. To create new builds for a change, you can clone an archived set of infrastructure so Jenkins can show trend history.

    Performance Lessons learned (a little bit random at the end of the talk).
    • Use tmpfs inside VMs for fast random-I/O file systems.
    • Try to use an NFS read-only cache to save network bandwidth.
    • Put the Jenkins workspace in a dedicated LVM volume on the host rather than on the network.
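For example, a tmpfs-backed workspace inside a build VM might be declared with an /etc/fstab entry like the following (the mount point and size are illustrative assumptions, not from the talk):

```
# /etc/fstab - back the Jenkins workspace with RAM for fast random I/O
tmpfs  /var/lib/jenkins/workspace  tmpfs  rw,size=8g,mode=0755  0  0
```

The trade-off is that the workspace is lost on reboot, which is acceptable here precisely because the builds are reproducible.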
    To learn more, you can view Robert's slides and video from his talk.

    We hope you enjoyed JUC Europe!

    Here is the abstract for Robert's talk, "A Reproducible Build Environment with Jenkins." Here are the slides for his talk.
    If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
    Categories: Companies

    Announcing the New CloudBees Partner Program

    Tue, 07/21/2015 - 16:56
    As adoption of Jenkins continues to grow, our ecosystem of resellers, services partners, training partners and technology partners plays a critical role in delivering the enterprise-scale Jenkins solutions and complementary tools our joint customers are seeking. That is why we are pleased to announce today that we are deepening our commitment to our partner ecosystem through our newly redesigned partner program.
    Our redesigned partner program offers greater value to our partner community, ensures that customers seeking enterprise Jenkins solutions have expanded choice from trusted partners and addresses the specific needs of our services partners, resellers and training partners.
    As the Enterprise Jenkins company, CloudBees is committed to helping our partners become more successful. Through the delivery of key sales and technical tools, resources, training and content, we will ensure that our partners are positioned to effectively deliver the CloudBees Enterprise Jenkins solutions that enable customers to respond rapidly to the software delivery needs of the business.
    The new program expands its scope to include new tracks for reseller and training partners. In addition, for reseller and services partners, there are two participation tiers, Platinum and Gold.
    • Platinum: for partners ready to increase their commitment to delivering Jenkins solutions at enterprise-scale.
    • Gold: for partners looking to ramp up their Jenkins practices. 
    All partners will receive enhanced benefits and access to additional resources from CloudBees to drive growth and profitability, uncover new business opportunities and capitalize on growth of Jenkins adoption. Benefits include:
    • Dedicated Partner Managers
    • Software for demo use
    • Partner onboarding and enablement
    • Discounts on classroom training
    • Joint marketing activities
    • Opportunity registration and lead distribution
    • Access to sales and technical assets and tools via a new partner portal (Coming Q3)
    We invite you to join our community of partners and take advantage of the opportunities Jenkins presents for you and your customers. Contact the CloudBees partner team to discuss the participation type and tier that best meets your organization's needs. Your partner manager will then provide an agreement and will help get you started with key partner communications and training.

    Durga Sammeta, Sr. Director, Global Alliance and Channels, dsammeta@cloudbees.com
    Michael Anderson, Sr. Partner Manager, manderson@cloudbees.com

    Learn more: Visit www.cloudbees.com/partner/join
    View the presentation from our July 16 Partner Update
    With our partners, we can deliver even greater value to customers. 
    Categories: Companies

    Template Hierarchies and Using Aux Models to Simplify Template Construction

    Fri, 07/17/2015 - 12:37


    This blog post will demonstrate how you can simplify your Jenkins Job Templates by creating re-usable models using Auxiliary Templates.

    Concepts and Definitions

    Before we dive into the detail, we will recap some of the concepts of Templates. The Templates plugin, available with the CloudBees Jenkins Platform, captures the sameness of configuration in multiple places. Administrators define templates of jobs/build steps/publishers and replicate them while creating new jobs. Changes are made in one central location and are reflected in all dependent configurations. The Templates plugin lets you define four types of template:
    • Job 
    • Folder 
    • Builder 
    • Auxiliary
    This post focuses on Auxiliary and Job templates.

    Templates are conceptually broken down into a few pieces:

    Model
    A model defines a set of attributes that constitutes a template. Roughly speaking, you can think of this as a class in an object-oriented programming language. For example, if you are creating a template for your organization’s standard process of running code coverage, you might create a model that has attributes like “packages to obtain coverage” and “tests to run.”

    Attribute
    An attribute defines a variable, what kind of data it represents, and how it gets presented to the users. This is somewhat akin to a field definition in a class definition.

    Instance
    An instance is a use of a model. It supplies concrete values to the attributes defined in the template. Roughly speaking, the model-to-instance relationship in the Templates plugin is like the class-to-object relationship in a programming language. You can create a lot of instances from a single template.

    Transformer
    A transformer is a process of taking an instance and mapping it into the “standard” Jenkins configuration, so that the rest of Jenkins understands this and can execute it. This can be logically thought of as a function.


    Structuring Templates

    Typical implementations will consist of multiple templates. Templates can be created for each new requirement, but oftentimes, there is a degree of similarity between the required templates. As with all good system implementations, a little advance planning and ongoing refactoring will pay dividends in the long run.

    The Templates plugin provides two mechanisms to reuse previously designed models:
    • Auxiliary Templates
    • Template Inheritance
    Template Inheritance

    Typically you start by creating a template that performs a common activity - in this example we assume that the job will need to access a web server running on standard ports. The model will include the hostname of the server.

    We configure the Job Template with the required parameter:




    The full config.xml for the job is shown below:

    <job-template plugin="cloudbees-template@4.17">
    <actions/>
    <description>template-web-host</description>
    <displayName>template-web-host</displayName>
    <attributes>
    <template-attribute>
    <name>name</name>
    <displayName>Name</displayName>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>hostname</name>
    <displayName>hostname</displayName>
    <helpHtml>hostname</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    </attributes>
    <properties/>
    <instantiable>true</instantiable>
    <help>template-web-host</help>
    <transformer class="com.cloudbees.workflow.template.WorkflowTransformer" plugin="cloudbees-workflow-template@1.3">
    <template><flow-definition/></template>
    <sandbox>false</sandbox>
    <script>echo "The value of hostname is $hostname"</script>
    <scriptSandbox>false</scriptSandbox>
    </transformer>
    </job-template>

    So let’s assume that having done this, we now identify the need for a new Job Template that will access a web server on a non-standard port. We could make changes to the original template to add complexity and make it universal, but we don’t want to do that and risk impacting the existing job instances. We could also just create a brand new Job Template and ignore that the two are related. Duplicating the model definition may not seem a big deal when there is only one attribute, but what if there were many? A better approach is to create a new job that inherits from the previous one - this allows the existing model to be re-used and extended.

    We will create a new Job Template. This time, we ensure that the `Super Type` is specified as the previously created Job Template:




    We then add our additional attribute:




    We now have a full job configuration of:

    <job-template plugin="cloudbees-template@4.17">
    <actions/>
    <description>template-web-host-non-standard-port</description>
    <displayName>template-web-host-non-standard-port</displayName>
    <attributes>
    <template-attribute>
    <name>name</name>
    <displayName>Name</displayName>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>port</name>
    <displayName>port</displayName>
    <helpHtml>port</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    </attributes>
    <properties/>
    <superType>
    Template examples/Template-Hierarchies/template-web-host
    </superType>
    <instantiable>true</instantiable>
    <help>template-web-host-non-standard-port</help>
    <transformer class="com.cloudbees.workflow.template.WorkflowTransformer" plugin="cloudbees-workflow-template@1.3">
    <template><flow-definition/></template>
    <sandbox>false</sandbox>
    <script>
    echo "The value of hostname is $hostname"
    echo "The value of port is $port"
    </script>
    <scriptSandbox>false</scriptSandbox>
    </transformer>
    </job-template>

    Next we go ahead and create a Job Instance from this new template:


    We now have both the original and the new attributes available.

    Using the inheritance approach saves the need to duplicate common Model definitions in each Job Template Variant.


    Auxiliary Models

    An alternative to inheriting templates to add further details is to use nested auxiliary models. A nested auxiliary model allows a model to compose other models. See the section called “Auxiliary Template”.

    In this example, we want to model different server types used in a deployment workflow. Both the MySQL and Apache servers extend a base server. This is represented as:



    The servers are defined as Auxiliary Templates. We create them using New Item->Auxiliary Template.

    Create the Auxiliary Templates

    We create a new Auxiliary Template called aux-server-base. We add three attributes: hostname, userid and password.

    The full config.xml for aux-server-base is:

    <com.cloudbees.hudson.plugins.modeling.impl.auxiliary.AuxModel plugin="cloudbees-template@4.17">
    <actions/>
    <description>aux-server-base</description>
    <displayName>aux-server-base</displayName>
    <attributes>
    <template-attribute>
    <name>hostname</name>
    <displayName>hostname</displayName>
    <helpHtml>hostname</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>userid</name>
    <displayName>userid</displayName>
    <helpHtml>userid</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>password</name>
    <displayName>password</displayName>
    <helpHtml>password</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    </attributes>
    <properties/>
    <instantiable>true</instantiable>
    </com.cloudbees.hudson.plugins.modeling.impl.auxiliary.AuxModel>

    We will repeat for aux-apache-server. Ensure that the Super type is set to the base server aux template.




    We then add the required extra attribute. The resultant config.xml looks like:

    <com.cloudbees.hudson.plugins.modeling.impl.auxiliary.AuxModel plugin="cloudbees-template@4.17">
    <actions/>
    <description>aux-apache-server</description>
    <displayName>aux-apache-server</displayName>
    <attributes>
    <template-attribute>
    <name>httpPort</name>
    <displayName>httpPort</displayName>
    <helpHtml>httpPort</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    </attributes>
    <properties/>
    <superType>
    Template examples/Template-Hierarchies/aux-server-base
    </superType>
    <instantiable>true</instantiable>
    <help>aux-apache-server</help>
    </com.cloudbees.hudson.plugins.modeling.impl.auxiliary.AuxModel>


    Repeat for the MySQL-Server. The resultant config.xml looks like:

    <com.cloudbees.hudson.plugins.modeling.impl.auxiliary.AuxModel plugin="cloudbees-template@4.17">
    <actions/>
    <description>aux-mysql-server</description>
    <displayName>aux-mysql-server</displayName>
    <attributes>
    <template-attribute>
    <name>mysqlPassword</name>
    <displayName>mysqlPassword</displayName>
    <helpHtml>mysqlPassword</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>mysqlUserid</name>
    <displayName>mysqlUserid</displayName>
    <helpHtml>mysqlUserID</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>mysqlPort</name>
    <displayName>mysqlPort</displayName>
    <helpHtml>mysqlPort</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    </attributes>
    <properties/>
    <superType>
    Template examples/Template-Hierarchies/aux-server-base
    </superType>
    <instantiable>true</instantiable>
    <help>aux-mysql-server</help>
    </com.cloudbees.hudson.plugins.modeling.impl.auxiliary.AuxModel>

    Create Job Templates

    To use the previously created models in a Job Template, it is necessary to add an attribute with the type “Nested auxiliary models”. Note that the chosen value for the Attribute ID will be used to reference the model in the transformer.


    The nested model then needs to be selected from the drop-down list. In this instance we will select ‘aux-apache-server’. For now we will select the UI mode of “Single Value”; I will cover the different modes later in the post.

    Transformers access these values as an instance (or as a java.util.List of instances) of auxiliary models. The instance is accessed using the ID allocated to the nested auxiliary model in the template definition. In our example this is apacheHost, so the attributes are accessed as apacheHost.httpPort, etc.

    Our full config.xml for the Job Template is:
    <job-template plugin="cloudbees-template@4.17">
    <actions/>
    <description>template-deploy-single-apache</description>
    <displayName>template-deploy-single-apache</displayName>
    <attributes>
    <template-attribute>
    <name>name</name>
    <displayName>Name</displayName>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.TextFieldControl"/>
    </template-attribute>
    <template-attribute>
    <name>apacheHost</name>
    <displayName>apacheHost</displayName>
    <helpHtml>apacheHost</helpHtml>
    <control class="com.cloudbees.hudson.plugins.modeling.controls.NestedAuxModelControl">
    <itemType>
    Template examples/Template-Hierarchies/aux-apache-server
    </itemType>
    <mode>SINGLE</mode>
    </control>
    </template-attribute>
    </attributes>
    <properties/>
    <instantiable>true</instantiable>
    <transformer class="com.cloudbees.workflow.template.WorkflowTransformer" plugin="cloudbees-workflow-template@1.3">
    <template><flow-definition/></template>
    <sandbox>false</sandbox>
    <script>
    echo "apacheHost.hostname : $apacheHost.hostname"
    echo "apacheHost.userid : $apacheHost.userid"
    echo "apacheHost.httpPort : $apacheHost.httpPort"
    </script>
    <scriptSandbox>false</scriptSandbox>
    </transformer>
    </job-template>


    Explanation of the UI Modes

    Nested auxiliary models can have one of four different UI modes:

    • Single Value
    • Single Value (choice of all the subtypes of the specified model)
    • List of values
    • List of values, including all subtypes

    Single value

    The attribute will hold one and only one value of the specified aux model. The UI will show the configuration of the aux model inline, and the user will not really see that it is a part of another model. This is useful for splitting a common fragment into a separate model and reusing it.

    The input will look as follows:


    The model attributes are accessed using this style:
    apacheHost.hostname

    For example, in workflow this looks like:

    echo "apacheHost.hostname : $apacheHost.hostname"
    echo "apacheHost.userid : $apacheHost.userid"
    echo "apacheHost.httpPort : $apacheHost.httpPort"

    Single value (choice of all the subtypes of the specified model)

    The attribute will hold one and only one value, but the type can be any of the instantiable concrete subtypes of the specified aux model (including itself, unless it is abstract). The user will see a radio button to select one of the possible types.

    The input will look as follows:


    The model attributes are accessed using this style: host.hostname

    It is also possible to determine the Type of the model using attributeid.model.id

    Below is a workflow example:

    echo "host.hostname : $host.hostname"
    echo "host.userid : $host.userid"
    if (host.model.id.contains("aux-apache-server")) {
        echo "Type: $host.model.id"
        echo "host.httpPort : $host.httpPort"
    }

    List of values

    The attribute will hold an arbitrary number of instances of the specified aux model. An example of this is the JDK/Ant/Maven configuration in the Jenkins system config page. Users will use add/remove buttons to modify the list.

    The input will look as follows:


    The model attributes are accessed by iterating through the array. Below is a workflow example:

    apacheHosts.each {
        userid=it.userid
        host=it.hostname
        echo "Host: $host, Userid = $userid"
    }

    List of values, including all subtypes

    Somewhat like "list of values" above, but you specify the base model type (typically an abstract one) in the nested model field, and Jenkins will allow users to instantiate arbitrary concrete subtypes. The UI will show an add button with a drop-down menu, with which users select the specific subtype to instantiate. An example of this is the tool installer configuration for JDK/Ant/Maven.

    The input will look as follows:



    The model attributes are accessed by iterating through the array. Below is a workflow example:
    hosts.each {
        userid=it.userid
        host=it.hostname
        echo "Host: $host, Userid = $userid"
    }

    Conclusion

    The CloudBees Jenkins Platform provides a rich templating capability that allows job complexity to be abstracted away from users, only exposing the relevant fields that they are required to enter.

    The ability to extract out a hierarchy of common Models enables the template developer to re-use the definitions efficiently across many individual Job Templates and enforces standardised naming and usage patterns.

    References

    Template plugin user manual
    Categories: Companies

    Jenkins Workflow - Creating a Class to Wrap Access to a Secured HTTP Endpoint

    Tue, 07/14/2015 - 14:40
    This blog post will demonstrate how to access an external system using HTTP with form-based security. This will show how you can use the following features:
    • Jenkins Workflow plugin
    • Groovy built-in libraries such as HttpClient
    • Git-based Workflow Global Library repository
    Getting Started

    Requirements
    1. JDK 1.7+
    2. Jenkins 1.609+
    3. An external system secured with form-based authentication - e.g., a Jenkins server with security enabled and anonymous access rescinded
    Installation
    1. Download and install JDK 1.7 or higher
    2. Download and install Jenkins
    3. Start Jenkins
    Setup Jenkins
    • Update plugins - Make sure you have the latest Workflow plugins by going to Manage Jenkins –> Manage Plugins -> Updates and selecting any Workflow-related updates. Restart Jenkins after the updates are complete. As of this writing the latest version is 1.8.
    • Global libraries repo - Jenkins exposes a Git repository for hosting global libraries meant to be reused across multiple CD pipelines managed on the Master. We will set up this repository so you can build on it to create your own custom libraries. If this is a fresh Jenkins install and you haven’t set up this Git repository, follow these instructions to do so.


    Important - Before proceeding to the next steps, make sure your Jenkins instance is running

    See Workflow Global Library for details on how to set up the shared library; note that if security is enabled, the ssh format works best. If using ssh, ensure that your public key has been configured in Jenkins. To initialise the git repository:

    git clone ssh://<USER>@<JENKINS_HOST>:<SSH_PORT>/workflowLibs.git
    Where:
    • USER is a valid user that can authenticate
    • JENKINS_HOST is the DNS name of the Jenkins server you will be running the workflow on. If running on the CloudBees Jenkins Platform, this is the relevant client master, not the Jenkins Operations Center node.
    • SSH_PORT is the ssh port defined in the Jenkins configuration.


    Note the repository is initially empty.
    To set things up after cloning, start with:
    git checkout -b master

    Now you may add and commit files normally. For your first push to Jenkins you will need to set up a tracking branch:

    git push --set-upstream origin master

    Thereafter it should suffice to run:

    git push

    Creating a Shared Class to Access Jenkins

    Create a class in the workflow library:

    cd workflowLibs
    mkdir -p src/net/harniman/workflow/jenkins
    curl -O \
    https://gist.githubusercontent.com/harniman/36a004ddd5e1c0635edd/raw/3997ddab0ab571c902068afad60cbc56eeda07cb/Server.groovy
    git add *
    git commit
    git push
    This will make the following class

    net.harniman.workflow.jenkins.Server
    available for use by workflow scripts using this syntax:

    def jenkins=new net.harniman.workflow.jenkins.Server(<parameters>)
    and methods accessed using:

    jenkins.logon()
    jenkins.getURL(<supplied url>)
    Note:
    • Classes follow the usual Groovy package naming formats and thus need to be in the appropriate directory structure
    • It is unnecessary to go out of process to access this Jenkins instance - it can be accessed from Groovy via the Jenkins model; however, this shows how a client for form-based access could be built
    • The script below accesses Jenkins purely as a demonstration example

    This is the actual contents of Server.groovy:

    package net.harniman.workflow.jenkins

    import org.apache.commons.httpclient.Header
    import org.apache.commons.httpclient.HostConfiguration
    import org.apache.commons.httpclient.HttpClient
    import org.apache.commons.httpclient.NameValuePair
    import org.apache.commons.httpclient.methods.GetMethod
    import org.apache.commons.httpclient.methods.PostMethod
    import org.apache.commons.httpclient.cookie.CookiePolicy
    import org.apache.commons.httpclient.params.HttpClientParams

    class Server {
        String user
        String password
        String host
        Integer port
        String proto
        HttpClient client = new HttpClient()

        Server(String user, String password, String host, Integer port, String proto) {
            this.user = user
            this.password = password
            this.host = host
            this.port = port
            this.proto = proto
            client.getHostConfiguration().setHost(host, port, proto)
            client.getParams().setCookiePolicy(CookiePolicy.BROWSER_COMPATIBILITY)
        }

        HttpClient getHttpClient() {
            return client
        }

        Integer logon() {
            String logonInitiationURL = proto + "://" + host + ":" + port + "/login?from=%2F"
            String logonSubmitURL = proto + "://" + host + ":" + port + "/j_acegi_security_check"

            // We need to make a call first to set up the session.
            // HttpClient will automatically handle cookies based on the
            // CookiePolicy set above.
            GetMethod get = new GetMethod(logonInitiationURL)
            client.executeMethod(get)
            get.releaseConnection()

            PostMethod authpost = new PostMethod(logonSubmitURL)

            def json = '{"j_username": "' + user + '", "j_password": "' + password + '", "remember_me": false, "from": "/"}'

            def param1 = new NameValuePair("j_username", user)
            def param2 = new NameValuePair("j_password", password)
            def param3 = new NameValuePair("from", "/")
            def param4 = new NameValuePair("json", json)
            def param5 = new NameValuePair("Submit", "log in")

            authpost.addParameters(param1)
            authpost.addParameters(param2)
            authpost.addParameters(param3)
            authpost.addParameters(param4)
            authpost.addParameters(param5)

            client.executeMethod(authpost)

            // We need to follow the redirect to understand whether authentication
            // was successful:
            // 200 = Success
            // 401 = Credentials failure
            Header header = authpost.getResponseHeader("location")
            def response
            if (header != null) {
                String newuri = header.getValue()
                if ((newuri == null) || (newuri.equals(""))) {
                    newuri = "/"
                }
                GetMethod redirect = new GetMethod(newuri)
                client.executeMethod(redirect)
                response = redirect.getStatusCode()
                redirect.releaseConnection()
            }
            authpost.releaseConnection()
            return response
        }

        String getURL(String URL) {
            GetMethod get = new GetMethod(URL)
            client.executeMethod(get)
            def body = get.getResponseBodyAsString()
            get.releaseConnection()
            return body
        }
    }
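The same logon flow can be exercised from the command line, which is handy for debugging the endpoints before wiring them into Groovy. This curl sketch (HOST, USER and PASS are placeholders) mirrors what the class does: establish a session cookie, post the form, then check the follow-up status:

```shell
# 1. Prime the session: Jenkins sets a session cookie on the login page
curl -s -c cookies.txt "http://HOST/login?from=%2F" -o /dev/null

# 2. Submit the login form to the Acegi security check endpoint
curl -s -b cookies.txt -c cookies.txt \
     --data "j_username=USER&j_password=PASS&from=/" \
     "http://HOST/j_acegi_security_check" -o /dev/null

# 3. A follow-up request returns 200 on success, 401/403 on bad credentials
curl -s -b cookies.txt -o /dev/null -w "%{http_code}\n" "http://HOST/"
```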
    Creating the workflow

    Create a new workflow job with the Workflow Definition as follows:

    stage "Initialise"
    def jenkins = new net.harniman.workflow.jenkins.Server("annie.admin","password", "jenkins.beedemo.local", 80, "http" )


    def response


    stage "Logon"
    response = jenkins.logon()
    echo "Response = $response"


    stage "Query"
    response = jenkins.getURL("http://jenkins.beedemo.local/roles/whoAmI")
    echo "Response = $response"


    Ensure you substitute in real values for:
    • <USERID>
    • <PASSWORD>
    • <HOST>
    • <PORT>
    • <JENKINS_HOST>
    You can substitute in the required URL for the query - for instance, it could be <jobname>/config.xml to retrieve a job’s config.

    Run the Job

    Time to check that it all works, so go ahead and trigger the build. When you run the job you should see each stage being processed, with the stage name output in the logs and shown in the list of running steps. If you have the CloudBees Enterprise workflow extensions then you will also see a Stage View of the running stages.

    If it works successfully you should see the raw body of the response printed in the console log.
    A successful authentication results in
    Response = 200
    being printed in the console log. Anything else indicates a failure with the Logon step.

    Conclusion

    The Workflow Global Library provides the ability to share common libraries (Groovy scripts) across many workflow jobs to help keep workflows DRY.

    In this instance, we have also shown how the built-in Groovy libraries can be used to simplify making calls to external systems. These could be configuration databases, ticketing platforms or custom deployment tools. The underlying Groovy libraries can be leveraged to implement complex capabilities such as handling authentication, following redirects, etc. The advantage of using Groovy rather than command line tools from, say, an `sh` step is that you have the option to perform the logic outside of a `node` block. When executing inside a `node` block, an executor is allocated/consumed, whereas code outside the node block runs as a flyweight task and does not consume an executor.

    References

    HttpClient Examples
    Jenkins Workflow - Getting Started by Udaypal Aarkoti
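To illustrate the executor point made above, here is a minimal, hypothetical workflow sketch: the HTTP logic runs outside any `node` block as a flyweight task, and an executor is only allocated for the shell step. The server coordinates and credentials are the placeholder values used earlier in this post.

```groovy
// Hypothetical sketch: HTTP calls via the shared library run outside any
// node block, so they execute as a flyweight task and consume no executor.
def jenkins = new net.harniman.workflow.jenkins.Server(
        "annie.admin", "password", "jenkins.beedemo.local", 80, "http")

stage "Logon"
def response = jenkins.logon()        // flyweight: no executor consumed
if (response != 200) {
    error "Logon failed: HTTP ${response}"
}

// Only enter a node block (allocating an executor) for real build work.
node {
    sh "echo 'authenticated, running build steps'"
}
```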
    Categories: Companies

    Continuous Delivery with CloudBees Jenkins Platform and AWS Lambda

    Mon, 07/06/2015 - 11:52
    Last month, CloudBees announced its CloudBees Jenkins Platform now runs on AWS. The extension is a new level of integration that enables customers to easily install the CloudBees Jenkins Platform and run it on Amazon Web Services’ EC2 cloud service.

    Building off this initial integration, CloudBees and AWS are actively working to rollout additional integration points - the latest of which being the integration with AWS Lambda. This blog will present how to implement a continuous delivery pipeline with Jenkins and AWS Lambda service.

AWS Lambda

AWS Lambda is a new service (Nov 2014) from AWS that helps implement event-driven architectures in the cloud, such as:
• Transforming data as it reaches the cloud,
• Performing notification, audit and analysis,
• Starting workflows.
AWS Lambda functions can be written in JavaScript/Node.js and Java.

    The new style of programming introduced by AWS Lambda requires new best practices to build, test and deploy these services.

    Let's implement a continuous delivery pipeline for AWS Lambda using CloudBees Jenkins Platform and the CloudBees AWS CLI Plugin for Jenkins.

    We will use the CreateThumbnail sample described in AWS Lambda Walkthrough 2: Handling Amazon S3 Events Using the AWS CLI (Node.js). The source code is available at https://github.com/CloudBees-community/aws-lambda-create-thumbnail.

AWS Lambda function CreateThumbnail.js
Jenkins project to build, test and deploy an AWS Lambda function
CloudBees Jenkins Platform

Jenkins Project

Create a Freestyle project named "aws-lambda-create-thumbnail".


Source code configuration

In the "Source Code Management" section, select "Git" and enter the repository URL "https://github.com/CloudBees-community/aws-lambda-create-thumbnail.git".

    Git configuration
AWS CLI Configuration

In the "Build Environment" section, tick the "Setup AWS CLI" option.
Define the desired set of credentials and AWS region.

    AWS CLI configuration in Jenkins
    If the desired credentials are not yet created, click on the "Add" button of the field "API credentials".
    In the "Add Credentials" screen, select the kind "Amazon Web Services Basic Credentials", enter the Access Key ID and the Secret Access Key and then click on "Add".
    Please note that you can also access the advanced configuration to define a human readable ID for the created Jenkins credentials and help readability when using Jenkins workflow.

    Add AWS credentials
Build steps and AWS CLI

For this Node.js function demo, we will use build automation based on simple shell scripts rather than a specific build framework.

    In the Build section, click on "Add build step" and select "Execute shell".

The aws-lambda-create-thumbnail project contains a "build" script to package the Node.js function. Invoke this build script in the shell command:
    # BUILD
    ./build

Once the AWS Lambda function is packaged, we upload it to AWS Lambda by invoking the AWS CLI command "aws lambda update-function-code":
    # UPLOAD AMAZON LAMBDA FUNCTION TO AWS
    aws lambda update-function-code \
    --region us-east-1 \
    --function-name CreateThumbnail \
    --zip-file file://./target/CreateThumbnail.zip

    It is time now to test the deployed function using the AWS CLI "aws lambda invoke":
    # TEST AMAZON LAMBDA FUNCTION ON AWS
    aws lambda invoke \
    --invocation-type Event \
    --function-name CreateThumbnail \
    --payload file://./src/test/input.json \
    target/test-outputfile.json

    Build shell steps and AWS CLI invocations
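The three freestyle shell steps above could equally be expressed as a single Jenkins Workflow script. This is an illustrative sketch, not the exact job from this post: it assumes the AWS CLI and credentials are already available on the slave (for example via the "Setup AWS CLI" build environment described earlier), and the stage names are arbitrary.

```groovy
// Illustrative sketch: build, deploy and test the Lambda function from a
// workflow script. Assumes the AWS CLI and credentials are configured on
// the slave running this node block.
node {
    git url: 'https://github.com/CloudBees-community/aws-lambda-create-thumbnail.git'

    stage 'Build'
    sh './build'

    stage 'Deploy'
    sh '''aws lambda update-function-code \
           --region us-east-1 \
           --function-name CreateThumbnail \
           --zip-file file://./target/CreateThumbnail.zip'''

    stage 'Test'
    sh '''aws lambda invoke \
           --invocation-type Event \
           --function-name CreateThumbnail \
           --payload file://./src/test/input.json \
           target/test-outputfile.json'''
}
```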
Creating an AWS Lambda function

It is time to create the AWS Lambda function if it doesn't already exist.

AWS Lambda Function

In the AWS Management Console, create an AWS Lambda function named "CreateThumbnail":
    New AWS Lambda function
Lambda function code with Node.js

Select the Runtime type Node.js; you can keep the Code entry type and Code template at their default values, Edit code inline and Hello World.
Define the handler as CreateThumbnail.handler, as defined in CreateThumbnail.js.
Select the Role IAMLambdaExecute as defined in AWS Lambda Walkthroughs (Node.js) » ... » Step 2.2: Create an IAM Role (execution role) (Amazon S3 Events).
    AWS Lambda function code
AWS Lambda Function creation

We can now create the function by clicking on Create Lambda function:
    Create Lambda function
    The AWS Management Console renders a dashboard of the function with valuable graphs and logs:
    AWS Management Console dashboard for AWS Lambda functions
Conclusion

In this blog we have shown you how easy it is to create a continuous delivery pipeline to continuously build, test and deploy an AWS Lambda service with CloudBees Jenkins Platform and the CloudBees AWS CLI Plugin.

The CloudBees AWS CLI plugin is intended to be the Swiss Army knife for automating, from a Jenkins job, any action on an AWS infrastructure, such as:
    • Copy generated artifacts to Amazon S3
    • Deploy a new version of an AWS Elastic Beanstalk application
    • Restore a snapshot of an Amazon RDS database before a test
    • Create an ephemeral environment (EC2, RDS ...) to run an integrated test

    Resources
    Categories: Companies

    As Jenkins Grows Up, We Invite Our Business Partners To Grow With Us.

    Tue, 06/30/2015 - 08:00
    As I am writing this post, CloudBees reached a milestone in the number of employees. I think the milestone hit many of us by surprise. “Really,” we thought. “So soon?” But if you look back over the past couple of quarters, it’s pretty apparent that our internal growth was inevitable.  
The number of Jenkins deployments is rapidly rising. At last measure, there are more than 100,000 active installations of Jenkins running. And, as enterprise companies deploy more and more Jenkins, the need for enterprise-grade solutions is accelerating at a very similar rate. A recent blog by CloudBees CEO Sacha Labourey discusses how organizations are transforming their use of Jenkins as a Continuous Integration (CI) tool to using it as a platform to bring enterprise-wide Continuous Delivery (CD). And as our customers have matured their deployments, so have the solutions and offerings from CloudBees, including the most recent launch of the CloudBees Jenkins Platform.
The fact is… we are growing. And as we grow, our partners - resellers, service providers, training partners and technology partners - will all play an increasingly critical role delivering the enterprise-scale Jenkins solutions and complementary tools and platforms our joint customers are seeking.
Which is why we are committed to equipping our partners with the skills, resources and tools to help you get the most from the opportunity that Jenkins offers. Next month, CloudBees will announce new developments in our Partner Program to meet the needs of our growing partner ecosystem and to help all maximize the vast opportunities Jenkins presents. All current or potential partners - including global resellers, service providers and training partners - are invited to attend our informational webinar on July 16 at 11 am ET. This presentation will provide an overview of the latest product developments and expanded opportunities available to partners to help grow your business through enterprise-scale Jenkins solutions.
    We look forward to sharing these exciting developments with you next month and working with you to uncover new opportunities, deliver the latest in Jenkins innovations and solutions to our joint customers, and expand your business.

Durga Sammeta
Global Alliances and Channels

    Durga is Senior Director of Global Alliances and Channels and is based in San Jose.


    Categories: Companies

    CloudBees Jenkins Platform: Accelerating CD in Enterprises

    Wed, 06/24/2015 - 14:26
If you follow CloudBees and Jenkins, you must have heard the flurry of announcements at the Jenkins User Conferences in Washington DC and London.

This blog summarizes all the new goodies that CloudBees has launched at these conferences.

The Launch of the CloudBees Jenkins Platform

Organizations have matured from using Jenkins for Continuous Integration to using it as a platform for enterprise-wide Continuous Delivery, and they have used the CloudBees products CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center to do so.

With the launch of the CloudBees Jenkins Platform, we bundle these in one easily consumable package with a couple of editions (Team, Enterprise) serving small teams and enterprise administrators.


Each edition comes with features that are outlined here (refer to the CJP documentation for details).


Welcome to “Solution Packs”

The CloudBees Jenkins Platform allows CloudBees to better serve enterprise audiences with specific needs. We do so through the ability to deliver specific feature sets through “solution packs”. One of the first packs that we are launching today is the Amazon Solution Pack.

This pack lets customers share “elastic slaves” hosted on AWS with all Jenkins masters managed by CloudBees Jenkins Operations Center within an organization - these masters themselves may be running on-premise or in the cloud. In addition, the CloudBees Jenkins Platform lets users directly use Amazon Web Services within a Jenkins job using the AWS CLI. Thus, developers can access any service that is accessible through the CLI as part of their build and deployment jobs.

In addition, we are providing AMIs on the Amazon Marketplace to help Amazon customers easily bootstrap the complete CloudBees Jenkins Platform - i.e. both CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center - on AWS.

    CloudBees Jenkins Platform for Pivotal CF
Last November, we announced a partnership with Pivotal and the release of CloudBees Jenkins Enterprise on Pivotal CF. After a successful launch, our customers quickly came back to us and asked for support for CloudBees Jenkins Operations Center as well, so that those organizations could roll out an enterprise-wide Continuous Delivery platform fully hosted on the Pivotal CF platform.

So today, we are proud to announce the extension of this partnership with the release of CloudBees Jenkins Operations Center on Pivotal CF. And since we just announced a new packaging of our offering (see above), in a few weeks we will be providing the complete CloudBees Jenkins Platform on Pivotal CF and removing individual references to CloudBees Jenkins Enterprise and CloudBees Jenkins Operations Center.

    CloudBees Jenkins Platform for Microsoft Azure
    Microsoft customers have demanded CloudBees Jenkins on the Azure platform for quite a while - and I am happy to announce that Microsoft and CloudBees have signed a partnership to make Azure a prime location for your Jenkins deployments. Full support will come in several steps.

    Today, we are releasing CloudBees Jenkins Operations Center/CloudBees Jenkins Enterprise for Microsoft Azure - these are Azure images that help Microsoft customers be up and running quickly with both CloudBees products. The current images are based on our November 2014 release but we will be updating them in the next few weeks with the May CloudBees Jenkins Platform release.

    My crystal ball tells me that there will be a lot of interesting announcements as we take the partnership forward.

New features in the CloudBees Jenkins Platform

At the Jenkins User Conference, we also announced a number of new CloudBees Jenkins Platform features:
1. Stabilize production masters and eliminate downtime to teams caused by jobs that aren’t stable: the ability to promote jobs from masters used for testing to masters used for production.
  1. Some of the most sophisticated IT departments use CloudBees Jenkins Operations Center to manage Jenkins, create new jobs on a test master and then, once a job is stable, promote it to production. We have made this process easy and seamless. Features include:
    1. Validate that the job will run successfully on the target master before promoting. Examples include the checks mentioned in the next bullet.
    2. Implicitly perform validation before promoting, aka “pre-flight checks”. These checks include:
      1. Validate that the core versions of Jenkins on the test and production masters are compatible.
      2. Validate that plugins used on the test master are available on the production master.
    3. When a job is re-promoted, perhaps after a few fixes, preserve the history of the job on the target master.
2. Build cross-organizational and cross-master pipelines: trigger jobs across masters.
  1. This feature helps organizations build CD pipelines that span masters, enabling scenarios such as jobs on a Dev team’s master triggering jobs on a QA team’s master. Some of the features are:
    1. Integration with CloudBees Role-based Access Control: jobs can only be triggered by employees with the right permission.
    2. Ease-of-use features such as a quick path browser to easily navigate to downstream jobs on a different master within a cluster.
3. Improved UX, especially the getting-started experience:
  1. A very common ask is to make the Jenkins UI more modern; we have taken first steps to address this in our product. If you have opinions, positive or negative, we would like to hear them.
New features in CJP but delivered in OSS

At CloudBees, we wear two hats: open source and proprietary product :-). So some of the biggest features that we have delivered this semester actually landed in OSS (and hence are available both in open source and as part of the CloudBees Jenkins Platform).

Workflow Improvements

At the end of last year, CloudBees, together with the Jenkins community, delivered a substantial new sub-system in Jenkins: Jenkins Workflow. Jenkins Workflow helps build real-world pipelines programmatically. We have been busy since then and have released multiple new versions (8, to be specific). Workflow 1.8 brings notable new features including increased support for third-party plugins such as Copy Artifact, Credentials Binding, etc. They are now all supported as first-class citizens as part of a workflow definition. You can refer to the compatibility matrix for up-to-date information.

The following table captures most plugin update changes (refer to the release notes for details):

| Plugin Support                              | Improved Workflow DSL Features                     | Improved DSL Features                             |
|---------------------------------------------|----------------------------------------------------|---------------------------------------------------|
| Copy Artifact                               | buildStep: get downstream build properties         | Load script from SCM: CI as source code           |
| Credentials Binding                         | waitUntil: wait for external event before starting | Safe Restart: restart Jenkins if WF isn’t running |
| Mercurial                                   | Sleep: sleep for some time before proceeding       |                                                   |
| Rebuild: rebuild jobs with initial parameters | fileExists: check if a file exists before proceeding |                                                 |
| Build Token Root: securely trigger builds   | withEnv: attach env variables before proceeding    |                                                   |
| Parallel Test Executor                      | Mail: support for mail within a workflow step      |                                                   |
| Perforce                                    |                                                    |                                                   |
The most interesting (well, at least to me :-)) is the ability to do CI-as-code with the “load script from SCM” feature. With this feature, developers can check their build script into the source code repository, point their Jenkins job at the repository, and Jenkins uses the script as its job configuration.
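As a hedged illustration of that idea (the repository URL and build command below are invented for the example): the workflow script lives in the project repository, and the Jenkins job simply points at it, so the build definition is versioned with the code it builds.

```groovy
// flow.groovy, checked into the application repository (illustrative).
// The Jenkins job is configured to load this script from SCM, so changes
// to the build logic go through the same review process as the code.
node {
    git url: 'https://github.com/example/app.git'   // placeholder repository
    sh 'mvn -B clean verify'                        // example build command
}
```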

The CloudBees and Jenkins community will continue to add support in Workflow for Jenkins plugins - so watch this space.

Continuous Delivery with Docker

I saved the best for last: a big effort from CloudBees in conjunction with the community has been providing first-class support for Docker to build continuous delivery pipelines. I have written a separate blog to call this feature set out.

Parting thoughts

I am pleased to see the breadth of solutions that we bring to the market today. It isn’t often that a release includes partnerships and solutions in as wide a variety of domains as we covered today. I am excited that we have pushed the boundaries by enabling modern, sophisticated pipelines with Jenkins and Docker.

    What gets me most excited is the potential product and open source opportunities across CloudBees and Jenkins as we go ahead.

    I would like to quote Robert Frost on behalf of CloudBees and Jenkins:
The woods are lovely, dark and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep.

    • Harpreet Singh
    Links
    1. The Docker and Jenkins White Paper 
    2. Jenkins and Docker Blog
    3. CJP Documentation
    4. Workflow Release Notes
    5. Workflow Compatibility Support




Harpreet Singh
Vice President of Product Management
    CloudBees

    Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter
    Categories: Companies

    Bringing Continuous Delivery to Cloud-Scale with Jenkins, Docker and "Tiger"

    Tue, 06/23/2015 - 17:13
At JUC London I attended the "Bringing Continuous Delivery to Cloud-Scale with Jenkins, Docker and 'Tiger'" talk by Kohsuke Kawaguchi and Harpreet Singh.

"Continuous Delivery", "Cloud" and "Docker" - all buzzwords in one talk - this promised to be of high interest (or just vapor-ware!), and the room was packed. Here are my live notes.


Kohsuke and Harpreet introduced the "Tiger" project they are working on (one of them asking for more and more features, the other implementing them when he's not giving a talk at some conference - I'll let you guess who's who).

CloudBees is focusing on Continuous Delivery (noted "CD" below for consistency). They used the Tesla car as an example: a Tesla can receive upgrades overnight to fix a technical issue identified on running cars the day before, letting users benefit from the latest fixes and features with minimal delay.

To reconcile Dev and Ops tools within a single workflow that embraces the whole continuous delivery process, the workflow plugin is a key component offering better flexibility. Docker is another major brick in the lego puzzle teams have to build to address the CD challenge: with lightweight isolation it offers better reproducibility. A set of Docker-related plugins was announced at JUC DC. Combined, they allow you to package the app and resources into containers, and orchestrate their usage through the CD pipeline.

• build and publish Docker images (with credentials support for private repositories)
• listen to Docker Hub events so Jenkins triggers a build when some image is updated, ensuring everything is always up-to-date
• workflow support to make Docker images and containers first-class citizens in the workflow DSL


Kohsuke gave a live demo of such an integration. He committed a fix to a demo project, which triggered a Jenkins build to publish a fresh new Docker image. A Docker Hub notification then triggered the CD workflow to upgrade the production application with this up-to-date Docker image. Docker traceability records Docker image fingerprints so we can check which Docker image was used and which Jenkins build created it.

The other demonstrated use-case is about managing build environments with Docker images. The Docker plugin lets you use Docker containers as Jenkins slaves. Docker Custom Build Environment lets you control this directly from the job configuration, or via a Dockerfile committed to your SCM side-by-side with the project source code.

Docker definitely is a major component in Jenkins' way of addressing the CD challenge. CloudBees is also working on addressing large-scale installations with support for Docker-shared execution within CloudBees Jenkins Operations Center. Harpreet also announced plans to deliver Kubernetes support in the next release. Operations Center is evolving to embrace multi-master installations, with "promotion" for jobs to get moved from one master to another, cross-master triggers, and other multi-master interactions.

The CloudBees product line is evolving into the CloudBees platform: Team Edition for small teams, Enterprise Edition for larger installations, with "packs" for specific sets of additional features (Amazon support, for example), and a fresh new "Tiger" project - here we go - aka Jenkins-as-a-Service, dedicated to big companies.


DEV@Cloud already offers such a service, with thousands of Jenkins masters hosted on Amazon and an elastic build slave infrastructure. Tiger's goal is to offer the same experience behind the company firewall: multi-tenanted masters and slaves provisioned on-demand without administration hell. It is built on top of the CloudBees platform, so it benefits from all the tooling provided by CloudBees Jenkins Enterprise (security, monitoring, visualization).
Kohsuke gave a quick demo of this new product. From the CloudBees Jenkins Operations Center web UI he provisioned a fresh new client master. Tiger manages the underlying infrastructure - based on Mesos and Docker containers - to find an adequate "box" to host this new instance and storage bucket, and shares build resources the same way. Within a minute you get a fresh new Jenkins master setup, ready to host team jobs and builds. Tiger moves Jenkins to cloud-scale with such a multi-tenant distributed solution.
So, Docker again. Seems this is not the Tiger I expected but actually some Tiger Whale...


The Jenkins / Docker / Continuous Delivery story is just starting, and lots more features and tool integrations will come to offer simpler/better/faster (Daft Punk TM) Continuous Delivery.



    Categories: Companies

    Templating Jenkins Build Environments with Docker Containers

    Tue, 06/23/2015 - 02:56

Builds often require that credentials or tooling be available to the slave node which runs them. For a small installation with few specialized jobs, this may be manageable using generic slaves, but when these requirements are multiplied by the thousands of jobs that many organizations run per day, managing and standardizing these slave environments becomes more challenging.
    What is Docker?
    Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

    Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  
    Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
    Docker for bootstrapping and templating slaves
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which makes Docker containers some of the most maintainable slave environments. A Docker container's tooling and other configuration can be version-controlled in an environment definition called a Dockerfile, and Dockerfiles allow multiple identical containers to be created quickly from one definition, or more customized off-shoots to be created by using that Dockerfile's image as a base.

    The CloudBees Custom Builds Environment Plugin allows Docker images and files to serve as template for Jenkins slaves, reducing the administrative overhead of a slave installation to only updating a few lines in a handful of environment definitions for potentially thousands of slaves.
Building with Docker Containers

This plugin adds the option “Build inside a Docker container” in the build environment configuration of a job. To enable it, simply scroll to the “Build Environment” section of any Jenkins job and select the “Build inside a Docker container” option. You will then be able to specify whether a slave container should be created from a Dockerfile checked into the workspace (e.g. the file was in the root of the project) or whether to pull an explicit image from a Docker registry to use as the slave container.
    Customized slave environments
For generic builds, you can leverage the most popular Jenkins slave image in Docker Hub, called evarga/jenkins-slave, or create a new image with a custom Dockerfile for any specialized build that requires build dependencies to be available in the workspace, such as credentials.

    To create a custom environment, you will need to create your own Docker slave image. This can be done by creating a new Dockerfile or running an existing slave image such as “evarga/jenkins-slave”, then installing the necessary custom tooling or credentials and committing your changes to a new image.

To create a new image from a Dockerfile, you can simply edit the below copy of the “evarga/jenkins-slave” file using the Dockerfile guidelines and reference:

FROM ubuntu:trusty
MAINTAINER Ervin Varga <ervin.varga@gmail.com>
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y openssh-server
RUN sed -i 's|session    required     pam_loginuid.so|session    optional     pam_loginuid.so|g' /etc/pam.d/sshd
RUN mkdir -p /var/run/sshd
RUN apt-get install -y openjdk-7-jdk
RUN adduser --quiet jenkins
RUN echo "jenkins:jenkins" | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
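The customization step can itself be scripted. Here is a hypothetical workflow sketch (the image name, slave label and extra tooling are invented for illustration), assuming a Docker daemon is available on the slave:

```groovy
// Hypothetical sketch: extend the evarga/jenkins-slave image with extra
// tooling and build it as a reusable slave template image.
node('docker') {                       // assumes a slave labelled 'docker'
    writeFile file: 'Dockerfile', text: '''FROM evarga/jenkins-slave
# Example extra build dependency; replace with your own tooling
RUN apt-get update && apt-get install -y maven
'''
    sh 'docker build -t myorg/jenkins-slave-maven .'   // image name is illustrative
}
```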

    Builds which are built within a Docker container will be identifiable by the Docker icon displayed inline within a job’s build history.
    Where do I start?
    1. The CloudBees Docker Custom Build Environment Plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
2. Other plugins complement and enhance the pipelines possible with this plugin. Read more about their use cases in these blogs:
      1. Docker Build and Publish plugin
      2. Docker Slaves with the CloudBees Jenkins Platform
      3. Jenkins Docker Workflow DSL
      4. Docker Traceability
      5. Docker Hub Trigger Plugin


3. More information can be found in the newly released Jenkins Cookbook




    Tracy Kennedy
Associate Product Manager
CloudBees

    Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.
    Categories: Companies

    Triggering Docker pipelines with Jenkins

    Tue, 06/23/2015 - 02:45
    As our blog series has demonstrated so far, Docker containers have a variety of uses within a CD pipeline and an organization's architecture. Jenkins can package applications into Docker containers and track them through a build pipeline into production. Builds themselves can be run in Docker containers thanks to Jenkins Workflow and the Custom Build Environments plugin, guaranteeing standardized, isolated, and clean environments for build executions. Pools of Docker hosts can also be shared between Jenkins masters using the CloudBees Jenkins Platform, creating the redundancy needed to ensure enough slaves are always online and available for masters. Combined, these solutions offer a great way to manage and create a Docker architecture for Jenkins and other internal applications, but what happens when it's time to upgrade these runtimes for security updates or new application releases? With Docker, changes to the base image require a rebuild of all containers in production from the new base image. But before we get too far into what this means, let's first review what Docker is.

What is Docker?

Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

    Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% of CPU overhead and 5-10% of memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.  
    Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.

    Docker Hub is a Docker image registry which is offered by Docker Inc. as both a hosted service and a software for on-premise installations. Docker Hub allows images to be shared and pulled for use as containers or as dependencies for other Docker images. Docker containers can also be committed to Docker Hub as images to save them in their current state. Docker Hub is to Docker images what GitHub has become for many developers’ code — an essential tool for version and access control.
    When the music fades...
    There will inevitably be a time when the painstakingly-crafted Docker images that your organization has created will need to be updated for whatever reason. While Docker is fun and popular, it isn't (yet) so magical that it eliminates this evergreen maintenance. However, these upgrades need not be painful, so long as they are tested and validated before being pushed to production. 

    Jenkins can now trigger these tests and re-deploys using the CloudBees Docker Hub Notification plugin. This plugin allows any changes to images in Docker Hub to trigger builds within Jenkins, including slave re-builds, application packaging, application releases via Docker images, and application deployments via Docker containers.
Monitoring for changes with Docker Hub

This plugin adds a new build trigger to both standard Jenkins jobs and Jenkins Workflows. This trigger is called “Monitor Docker Hub for image changes” and allows Jenkins to track when a given Docker image is rebuilt, whether that image is simply referenced by the job or is in a given repository.


    Once a job has been triggered, the build’s log will state what the trigger was (e.g. “triggered by push to <Docker Hub repo name>”). 
Docker Hub Hook Chaining

Docker Hub itself supports webhook chains, which you can read more about in Docker’s webhook documentation. If you have added several webhooks for different operations, the callbacks to each service are made in a chain. If one hook fails higher up in the chain, then any following webhooks will not be run. This can be useful when using the downstream Jenkins job as a QA check before performing any other operations based on the pushed image.

<jenkins-url>/jenkins/dockerhub-webhook/details will list all builds triggered by hook events, and you will be linked directly to the build, while Docker Hub’s webhook results will link back to the Jenkins instance. You can also push tags to the Docker Hub repository.

    Docker Image Pulls
    This plugin also adds a build step for pulling images from Docker Hub. This is a simple build step that does a “docker pull” using the specified ID, credentials, and registry URL, allowing the Docker image that triggered the build to be pulled into the workspace for testing.
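Under the hood, the build step boils down to running `docker pull` against a fully qualified image reference assembled from those three inputs. A small sketch of that assembly — the default-registry convention mirrors Docker's, but the helper function itself is hypothetical:

```python
def image_reference(image_id, tag="latest", registry_url=""):
    """Assemble the reference passed to `docker pull`.

    With no registry URL, Docker resolves the image against Docker Hub;
    otherwise the registry host is prefixed to the image name.
    """
    name = f"{registry_url}/{image_id}" if registry_url else image_id
    return f"{name}:{tag}"

print(image_reference("example/app"))  # example/app:latest
print(image_reference("example/app", "1.2", "registry.example.com"))
```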
    Where do I start?
    1. The CloudBees Docker Hub Notification Plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
    2. Other plugins complement and enhance the pipelines possible with this plugin. Read more about their use cases in these blogs:
      1. Docker Build and Publish plugin
      2. Docker Slaves with the CloudBees Jenkins Platform
      3. Jenkins Docker Workflow DSL
      4. Docker Traceability
      5. Docker Custom Build Environment plugin

    3. More information can be found in the newly released Jenkins Cookbook



    Tracy Kennedy
    Associate Product Manager, CloudBees

    Tracy Kennedy is an associate product manager for CloudBees and is based in Richmond. Read more about Tracy in her Meet the Bees blog post and follow her on Twitter.

    Traceability of Docker Images and Containers in Jenkins

    Fri, 06/19/2015 - 14:48
    Organizations are constantly striving to release software faster, to get their product into users' hands sooner and gather feedback for improvements or corrections. Software is never going to be perfect in its first iteration, and the end users might actually want something different from what is produced. There is great value to the business if new features can be delivered and bugs fixed in a timely fashion, or a full course change made when required. Working on a minimum viable product (MVP) and using Agile practices, development teams can, in theory, produce a new working product at the end of every sprint. However, there is a big difference between continuously developing a product and continuously delivering that product to users.

    Software is a world of interdependencies and all of those interdependencies have to be validated at various stages before a product is released. Are the external library files consistent? Is the database version the same? Are all the required packages installed on the target host OS? There are countless things that can go wrong when moving from development to testing to production.

    Tools like Jenkins, Chef, and Puppet have helped to automate the flow of software through various stages and ensure a consistent environment. By continuously integrating all software dependencies and standardizing the configuration management of the environments, teams have reduced the number of variables in a delivery pipeline and eliminated potential problems, allowing for more automation and thus expediting the delivery of the software.
    The emergence of Docker and containers has further reduced the variables present in a delivery pipeline. With Docker, a single image can move from development to testing and finally to production without changing the application or the underlying configuration. As long as the Docker host is consistent then all containers with that image should work across all environment stages.

    What is Docker?
    Docker is an open-source project that provides a platform for building and shipping applications using containers. This platform enables developers to easily create standardized environments that ensure that a testing environment is the same as the production environment, as well as providing a lightweight solution for virtualizing applications.

    Docker containers are lightweight runtime environments that consist of an application and its dependencies. These containers run “on the metal” of a machine, allowing them to avoid the 1-5% CPU overhead and 5-10% memory overhead associated with traditional virtualization technologies. They can also be created from a read-only template called a Docker image.
    Docker images can be created from an environment definition called a Dockerfile or from a running Docker container which has been committed as an image. Once a Docker image exists, it can be pushed to a registry like Docker Hub and a container can be created from that image, creating a runtime environment with a guaranteed set of tools and applications installed to it. Similarly, containers can be committed to images which are then committed to Docker Hub.
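As a concrete (and entirely invented) example of such an environment definition, a minimal Dockerfile for a static site served by Apache might look like this:

```dockerfile
# Build on a fixed base image so every environment starts identical.
FROM centos:7
RUN yum install -y httpd
COPY ./site/ /var/www/html/
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Running `docker build -t example/app .` against this file produces the image, and `docker push example/app` publishes it to Docker Hub (the `example/app` name is made up for illustration).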
    The Interdependency Problem

    The immutability of the Docker container goes a long way towards facilitating continuous delivery but it does not completely solve the problem of interdependencies. Docker containers are built upon images, both parent and base images. An application can run on an Apache parent image with a base image of CentOS. These images, and the containers they are used in, are all uniquely identified and versioned to account for change over time much like binary artifacts or gems.
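This unique identification is content-addressed: a modern Docker image ID is a SHA-256 digest computed over the image's configuration, so any change to the image yields a new ID. A toy illustration of the scheme, hashing a made-up configuration document rather than a real image:

```python
import hashlib
import json

# Invented stand-in for an image's configuration document.
config = json.dumps({"base": "centos:7", "cmd": ["httpd"]}, sort_keys=True)

# Content-addressed ID: the digest of the configuration bytes.
image_id = "sha256:" + hashlib.sha256(config.encode()).hexdigest()
print(image_id)

# Any change to the configuration produces a different ID.
changed = json.dumps({"base": "centos:7", "cmd": ["httpd", "-X"]}, sort_keys=True)
assert image_id != "sha256:" + hashlib.sha256(changed.encode()).hexdigest()
```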

    In addition to image dependencies, an application is not always contained in a single container; Dockerized applications are increasingly deployed as microservices. As Martin Fowler describes, breaking up monolithic applications into discrete functional units that interoperate is a great way to help teams continuously deliver parts of an application without requiring a release cycle of the entire application and every team involved. Not only are there image dependencies, but we now have microservice dependencies. The level of abstraction has moved up a rung.
    Traceability with Fingerprinting and Docker
    Despite, or because of, all of the automation inherent in a continuous delivery pipeline, things still break. When they do, it is necessary to quickly identify and correct the problem across all of the dependencies that go into a running application. Visibility and traceability across all dependencies in an application are paramount to continuously delivering and running that application. To that end, Jenkins allows teams to track artifacts with a "Fingerprint", letting users see what went into a build and where that build is being used. Combined with the Deployment Notification plugin, this fingerprint can be used to track when and where a package has been deployed by Chef or Puppet. This traceability is very useful for both developers and operations. If a bug is found in development, it can be quickly traced to everywhere it has been deployed. Conversely, if a problem occurs in production, the operations team can easily find the deployed build in Jenkins and see all of the components it includes.
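A Jenkins fingerprint is simply the MD5 checksum of a file, recorded against every build that produced or used the artifact. The idea can be sketched in a few lines (the index structure below is an invented stand-in for Jenkins's fingerprint database, and the build names are made up):

```python
import hashlib
from collections import defaultdict

def fingerprint(data: bytes) -> str:
    """Jenkins identifies an artifact by the MD5 checksum of its bytes."""
    return hashlib.md5(data).hexdigest()

# Invented stand-in for Jenkins's fingerprint index:
# checksum -> builds that produced or used the artifact.
index = defaultdict(list)

artifact = b"app-1.0.jar contents"
fp = fingerprint(artifact)
index[fp].append("app-build #42")          # the build that produced it
index[fp].append("integration-tests #17")  # a downstream build that used it

# Given only the deployed file, trace every build it touched.
print(index[fingerprint(artifact)])
```

Because the checksum depends only on the file's bytes, the same artifact found on a production host maps straight back to its build history.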

    The addition of the CloudBees Docker Traceability plugin enables Jenkins to extend this same traceability to Docker images, showing the build and deployment history of each container and the related images. This plugin requires the Docker Commons plugin, which provides the fingerprints for all Docker images, and it is available to everyone in the Jenkins community.



    The CloudBees Docker Traceability plugin provides both an overall view, from the Jenkins sidebar, of all containers currently registered and deployed, and a detailed view of the container build from the build page. The Docker image IDs are provided for all parent images and the base image used. In addition, a Docker image ID is searchable in Jenkins to quickly find where and when it is deployed and how and when it was built.

    Using this information, it is possible to determine whether something changed in the code for a container, or whether one of the parent images or the base image of a container changed from one build to another, helping to determine the root cause of any problems in the overall application.

    Where do I start?
    1. The CloudBees Docker Traceability Plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
    2. Other plugins complement and enhance the pipelines possible with this plugin. Read more about their use cases in these blogs:
      1. Docker Build and Publish plugin
      2. Docker Slaves with the CloudBees Jenkins Platform
      3. Jenkins Docker Workflow DSL
      4. Docker Hub Trigger Plugin
      5. Docker Custom Build Environment plugin
    3. More information can be found in the newly released Jenkins Cookbook

    Patrick Wolf
    Product Manager, CloudBees

    Patrick Wolf is a product manager for CloudBees and is based in San Jose. 