
CloudBees' Blog - Continuous Integration in the Cloud

CloudBees Jenkins Platform 2.7.19

Tue, 09/20/2016 - 20:32

We are happy to announce the immediate availability of CloudBees Jenkins Platform 2.7.19. This is the first version of the CloudBees Jenkins Platform based on Jenkins 2. The User eXperience of Jenkins has been dramatically revisited and you will benefit from the following improvements.

Improved User Experience

Installation wizard

An installation wizard helps you to select the plugins that are relevant for your continuous delivery platform. CloudBees has selected a set of cohesive plugins to propose a default installation.


Revisited wizard to create new items

The screen to create new jobs and new items is much more intuitive. It helps Jenkins users to instantly find the type of job they need.

Revised job configuration screen

The Jenkins configuration pages for jobs and items have been revisited with tabs to help clarify the information.

Secured by default

The CloudBees Jenkins Platform is now secured by default. New instances are secured by an “Unlock Jenkins” screen and Jenkins admins are invited to enable security and create a first user on the system.

Better Installation and Upgrades

Offline installation

The CloudBees Jenkins Platform can now be installed offline with a large and cohesive set of plugins available through the plugin selection wizard.

Better upgrades

Upgrades of Jenkins could be problematic in the past, due to some plugins not getting upgraded. The root cause was a concept called “pinned” plugins. That concept has been retired, and plugins now get updated through the installation process.

Beekeeper Upgrade Assistant and the CloudBees Assurance Program

The CloudBees Jenkins Platform is now integrated with the CloudBees Assurance Program so that you can keep your servers up-to-date with plugin versions verified by CloudBees. We will provide details of the new Beekeeper Upgrade Assistant and the CloudBees Assurance Program in a follow-on blog post.

New Release Model to Get the Best Out of Your Platform

In addition to the CloudBees Assurance Program, CloudBees has adopted a Rolling Release Model and a continuous delivery approach to more efficiently deliver new features with smaller increments. Rolling releases of the CloudBees Jenkins Platform will be published regularly (multiple times per quarter) so that you will be able to more frequently apply smaller upgrades to your platform. Customers who prefer to use a very stable version, rather than benefiting from ongoing, smaller feature improvements, will be able to choose our fixed release that will be published yearly. The fixed release will limit changes during the year, and before the next fixed release, to security fixes and the correction of critical bugs.

Smooth upgrade path

To upgrade your CloudBees Jenkins Platform, you just perform the standard upgrade procedure that you already use for CJP 1.x (replacing the war file, installing the latest .rpm or .deb…), with one additional requirement: upgrade CloudBees Jenkins Operations Center first, and then upgrade the client masters. CloudBees Jenkins Platform 16.06 will remain supported until April 2017. More details are available on our Support Lifecycle and Update Policies page.

Getting Started

Visit our Getting Started page and try CloudBees Jenkins Platform V2!

Blog Categories: Jenkins
Categories: Companies

How to Integrate JMeter into Jenkins

Mon, 09/19/2016 - 20:38

This is a guest post by Dmitri Tikhansi from BlazeMeter.

Continuous integration (CI), test automation and “shifting left” are becoming the standard for DevOps, developers and QA engineers. But despite the importance of performance, and the understanding that systems are complex and it can be challenging to identify and fix bottlenecks in a short time, load testing is still not an integral part of the CI workflow.

Jenkins easily enables users to integrate load testing into its workflow. By using Jenkins as part of the CI process and to trigger jobs by commits, users are taking advantage of automation and process speed-up capabilities.

Advantages of Integrating JMeter into Jenkins

JMeter is one of the most popular open-source load testing systems. By integrating JMeter into Jenkins, users can enjoy:

  • Unattended test executions right after software build and deploy
  • Automatic build failures in case of performance degradation
  • Easy access to test reports that show application metric trends - all tests are in one place and available to anybody with the correct permissions
  • Automated routine work of test configuration, execution and baseline results analysis. Users’ hands and minds are free for more important, complex and interesting tasks.
How to Integrate JMeter into Jenkins
  1. Store JMeter results as XML (recommended because they are easier to use) or CSV
  2. Specify the command to run your test in the Execute Shell / Batch Command section (a Pipeline sketch of this step follows the list)
  3. Check the Console Output tab to verify the execution was successful
  4. Find your files in the project’s workspace
  5. Specify Build Parameters
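
For reference, here is a minimal sketch of steps 1-4 expressed as a declarative Jenkins Pipeline. The JMeter location, the test plan name (load-test.jmx) and the results file name are assumptions rather than values from the article; the same jmeter command can equally be pasted into a freestyle job's Execute Shell build step.

pipeline {
    agent any
    stages {
        stage('Load test') {
            steps {
                // Run JMeter in non-GUI mode; -l writes the results file into the workspace
                // and the output_format property forces XML results (step 1).
                sh '''
                    jmeter -n -t load-test.jmx \
                           -l results.xml \
                           -Jjmeter.save.saveservice.output_format=xml
                '''
            }
        }
    }
    post {
        always {
            // Keep the results with the build so they can be found in the workspace later (step 4).
            archiveArtifacts artifacts: 'results.xml', allowEmptyArchive: true
        }
    }
}

The JMeter summariser output appears in the build's Console Output (step 3).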
How to Use the Performance Plugin

To view JMeter reports on Jenkins, you can use the Performance plugin.

  1. Install the Performance plugin
  2. Configure the plugin
    The Performance plugin can be added as a “Post-build Action”. When the JMeter test is finished, the plugin will:
    • Collect the data
    • Conditionally fail the build if the error threshold is exceeded
    • Build or update the performance trend chart for the project

Configuration options explained (a Pipeline equivalent is sketched after the list):

  • Performance report - For JMeter you will need to upload a file in XML format.
  • Select mode - The choices are Relative Threshold and Error Threshold. The Relative Threshold compares the difference from the previous test; if it exceeds the defined value, the build is marked as failed/unstable. The Error Threshold marks the build as unstable or failed if the number of errors exceeds the specified value.
  • Build result - If the JMeter test doesn't generate the output jtl file(s), the build will be marked as failed.
  • Use error thresholds on single build - Define error thresholds for the current build.
  • Average response time threshold - Set the maximum acceptable value of the Average Response Time metric.
  • Use relative thresholds for build comparison - Set the percentage difference of errors. The “source” build can be either the previous build or “known good” build which is used as a baseline.
  • Performance per Test Case Mode - Enable this option if you need a separate graph for each test case on the Performance Trend chart.
  • Show Throughput Chart - Set whether to display the Throughput trend chart on the project dashboard.
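
For Pipeline jobs, the Performance plugin exposes broadly the same settings through its perfReport step. The snippet below is only a hedged sketch: the results file name and threshold values are invented, and parameter names can differ between plugin versions, so check the Pipeline Syntax snippet generator on your own instance.

node {
    // Assumes results.xml was produced by a preceding JMeter run (see the earlier sketch).
    perfReport sourceDataFiles: 'results.xml',
               errorUnstableThreshold: 3,        // mark the build unstable above 3% errors
               errorFailedThreshold: 5,          // fail the build above 5% errors
               modePerformancePerTestCase: true, // separate trend graph per test case
               modeThroughput: true              // show the Throughput trend chart
}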

Jenkins can also be integrated with BlazeMeter and Taurus, for faster and easier results and more analysis options. 

Congratulations on adding performance tests to your continuous integration process!

 

Blog Categories: Jenkins
Categories: Companies

Standardize Jenkins with the CloudBees Jenkins Enterprise Distribution

Wed, 09/14/2016 - 21:51

Over the past couple of years, CloudBees has worked towards making Jenkins more accessible for both our CloudBees Jenkins Platform customers and Jenkins community users: first through best practices, then a knowledge center with training, support, and CloudBees documentation.

As Jenkins grows to over 132,000 installations with 6,400,000 plugins installed worldwide, the need for a healthy ecosystem of reliable and feature-complete plugins is greater than ever.

Manually verifying the compatibility and usability of all the plugins in a given Jenkins installation is a Herculean and expensive effort. Often administrators simply don’t have the time or resources to do this due diligence, but this exposes them to the instabilities that incompatible plugins can cause, as well as function-breaking updates between versions.

A more stable Jenkins foundation 

To solve these pains, CloudBees is taking on this work and producing a new distribution of Jenkins: CloudBees Jenkins Enterprise. This distribution makes Jenkins and plugin management easier with pre-tested sets of some of the most popular community plugins, as well as plugins that are popular among CloudBees customers in particular. This distribution undergoes the highest levels of verification in the CloudBees Assurance Program, ensuring its components are verified for their compatibility with the CloudBees Jenkins Platform and the quality of their overall functionality.

In the long term, CloudBees will verify all of the top 100 community plugins; today we have verified most of the top 30, as well as select plugins for our customers’ more niche use cases and tool sets. In this release we paid special attention to plugins around security and credentialing to ensure that we have an impact on the widest range of Jenkins users; while not everyone uses a given SCM or build agent, credentials are fundamental to any installation.

A smoother update experience

Version conflicts are often at the heart of a failed update, and while the CloudBees Jenkins Enterprise distribution offers a stable set of recommended versions, that alone is not sufficient to ensure these verified versions are actually used. To this end, CloudBees has also created the Beekeeper Upgrade Assistant, which offers compliance views of installed verified components as well as optional automatic enforcement mechanisms.

Beekeeper offers an atomic update mechanism to minimize the potential for instability caused by version conflicts and the lead time needed for upgrades. This, in turn, better enables administrators to leverage the new rolling releases of the CloudBees Jenkins Platform to get the latest stable features and fixes as soon as they are available.

Contact CloudBees sales or start a CloudBees Jenkins Platform trial to get started with the CloudBees Jenkins Enterprise distribution today!

The road ahead

Over the coming months, the CloudBees Assurance Program team will work on improving the Beekeeper upgrade experience, as well as verifying a larger set of useful plugins for our customers. In the near future, CloudBees Network will also start offering a marketplace-style view of the CloudBees Jenkins Platform and CloudBees Jenkins Enterprise distribution components to make it even easier to keep up with the latest and recommended feature sets.

Stay tuned as we roll out these things in the coming months and feel free to contact me with any feedback.

 

Blog Categories: Company News
Categories: Companies

DevOps Express Puts Organizations on Track for DevOps Adoption

Wed, 09/14/2016 - 17:13

DevOps Express: Stronger Together

Today, fourteen DevOps technology leaders will stand together to announce a joint initiative to bring DevOps knowledge, integrated tooling, support and best practices to the marketplace, streamlining the adoption of DevOps through a solution-oriented approach. This is a significant milestone in the DevOps space: key DevOps vendors working together as a group to deliver greater value to the DevOps market than any of us could deliver individually. Let’s just pause for a second and appreciate the gravity of this.

The Genesis of DevOps Express

So how did this all start? Earlier this year, at our customer advisory board meeting, our customers presented their DevOps reference architectures. Presentation after presentation, we noticed the same technologies being leveraged in their environments as part of their DevOps strategy.

Which made us wonder: for customers looking for a starting point, could we help them by presenting a solution-oriented approach that would be 80% proven? What if we could work with DevOps vendors representing the leaders in their respective categories of the software delivery process and deliver an actionable path for customers? We saw an immediate opportunity to collaborate with our peers in the DevOps space and create something even more valuable than we could deliver on our own.

A Solution-Oriented Approach to DevOps

DevOps Express is a first-of-its-kind alliance of DevOps industry leaders and includes founding members CloudBees, Sonatype, Atlassian, BlazeMeter, CA Technologies, Chef, DevOps Institute, GitHub, Infostretch, JFrog, Puppet, Sauce Labs, SOASTA and SonarSource.

This alliance of popular technology vendors and service providers, coupled with our collective years of expertise, will deliver best practices and integrated architectures to make it easier and more flexible for enterprises to adopt DevOps. DevOps Express provides a framework for industry partners to deliver reference architectures that are better integrated and better supported. Creating reliable, proven and actionable reference architectures will accelerate DevOps adoption and minimize risk for organizations. Together, we will strive to deliver a more streamlined approach to DevOps adoption and enable enterprises to realize the business value of DevOps more quickly.

We have some exciting developments in DevOps best practices and architectures coming soon.  In the meantime, we invite you to learn more about this exciting industry initiative.

Visit devops-express.com to learn more about this new alliance among industry leaders.

Blog Categories: Company News, Jenkins
Categories: Companies

Jenkins World Speaker Highlight: How to Integrate JMeter into Jenkins

Tue, 09/13/2016 - 22:45

This is a guest post by Jenkins World speaker Dmitri Tikhansi from Blazemeter.

Continuous integration (CI), test automation and “shifting left” are becoming the standard for DevOps, developers and QA engineers. But despite the importance of performance, and the understanding that systems are complex and it can be challenging to identify and fix bottlenecks in a short time, load testing is still not an integral part of the CI workflow.

Jenkins easily enables users to integrate load testing into its workflow. By using Jenkins as part of the CI process and to trigger jobs by commits, users are taking advantage of automation and process speed-up capabilities.

Advantages of Integrating JMeter into Jenkins

JMeter is one of the most popular open-source load testing systems. By integrating JMeter into Jenkins, users can enjoy:

  • Unattended test executions right after software build and deploy
  • Automatic build failures in case of performance degradation
  • Easy access to test reports that show application metric trends - all tests are in one place and available to anybody with the correct permissions
  • Automated routine work of test configuration, execution and baseline results analysis. Users’ hands and minds are free for more important, complex and interesting tasks.
How to Integrate JMeter into Jenkins
  1. Store JMeter results as XML (recommended because they are easier to use) or CSV
  2. Specify the command to run your test in the Execute Shell Batch Command section
  3. Check the Console Output tab to verify the execution was successful
  4. Find your files in the project’s workspace
  5. Specify Build Parameters
How to Use the Performance Plugin

To view JMeter reports on Jenkins, you can use the Performance plugin.

  1. Install the Performance plugin
  2. Configure the plugin
    The Performance plugin can be added as a “Post-build Action”. When the JMeter test is finished, the plugin will:
    • Collect the data
    • Conditionally fail the build if the error threshold is exceeded
    • Build or update the performance trend chart for the project

Configuration options explained:

  • Performance report - For JMeter you will need to upload a file in XML format.
  • Select mode - The choices are Relative Threshold and Error Threshold. The Relative Threshold compares the difference from the previous test; if it exceeds the defined value, the build is marked as failed/unstable. The Error Threshold marks the build as unstable or failed if the number of errors exceeds the specified value.
  • Build result - If the JMeter test doesn't generate the output jtl file(s), the build will be marked as failed.
  • Use error thresholds on single build - Define error thresholds for the current build.
  • Average response time threshold - Set the maximum acceptable value of the Average Response Time metric.
  • Use relative thresholds for build comparison - Set the percentage difference of errors. The “source” build can be either the previous build or “known good” build which is used as a baseline.
  • Performance per Test Case Mode - Enable this option if you need a separate graph for each test case on the Performance Trend chart.
  • Show Throughput Chart - Set whether to display the Throughput trend chart on the project dashboard.

Jenkins can also be integrated with BlazeMeter and Taurus, for faster and easier results and more analysis options. 

Congratulations on adding performance tests to your continuous integration process!

 

Blog Categories: Jenkins
Categories: Companies

Are You All In on DevOps? There’s Only One Place To Find Out!

Fri, 09/09/2016 - 05:04

Everyone’s talking about DevOps, but who’s really putting DevOps into practice? There’s only one way to find out: you have to tell us. That’s why we created the Are You All in on DevOps? quiz. Through questions developed by our DevOps experts, you’ll learn if you’re a DevOps Dabbler, DevOps Daredevil or DevOps Driver.

If you’re curious about the methodology behind the quiz magic, our team of experts looked at some key DevOps components. First, we thought about collaboration. How well does your team work together? With the line of business?

Then, we asked about software delivery times and how frequently you deploy code. An important part of app development is testing so, of course, we asked about automated testing. Not to be forgotten, we also asked about versioning and microservices. Now, this alone wasn’t enough to truly assess your DevOps prowess, so you’ll find some questions about clothing selection versioning and party style as well. After all, DevOps isn’t just all business – it’s personal too.

Now, what are you waiting for? Go take the quiz! We promise you it’s better than Buzzfeed. Then, stop by the CloudBees booth at Jenkins World (Booth #210) to pick up a sticker based on your DevOps persona. You’ll need to show us that you took the quiz by either tweeting your results or showing us a screenshot of your persona. Now take your sticker and go show the world your DevOps proficiency level!

WHICH DEVOPS PERSONA ARE YOU?
Take the quiz and find out!


 

Categories: Companies

“JenkinsOps” at NPR

Fri, 09/09/2016 - 03:31

This post was co-authored by Grant Dickie and Paul Miles.

At NPR, we’re using Jenkins in all sorts of interesting ways. Like most companies, we started by setting up the standard continuous integration builds that automatically package the various applications that power npr.org — Jenkins’ bread-and-butter of sorts. But Jenkins, thanks to its plugin architecture, is capable of many more things. In an initiative dubbed “JenkinsOps”, we are extending our usage of Jenkins to do things that help us deliver software more efficiently to our audiences. In this blog post, we’ll share a little bit about how we configure and run our many unit and integration tests using Jenkins with the help of the Job DSL plugin and a few other plugins.

The Problem Statement

Unit tests, integration tests, and automated smoke tests are executed on our staging environments. As the team has grown, so has the number of branches and staging environments. We wanted to run all of the tests that we can throughout development and especially before merging code, but the overhead of setting up and managing test jobs was too burdensome. We needed a solution that simplified building test jobs in Jenkins for an arbitrary number of stage environments and codebases.

We needed to preserve our meticulously organized way of reporting test status, which made things a bit more difficult. Our QA engineer spent countless hours organizing test jobs into categories via Jenkins view tabs. The tabs grouped tests by codebase and environment, and additionally by status (e.g. “Passing”, “Failed” and “Disabled”). Each time a test suite was run, Jenkins would automatically place tests into their respective views based on job state. While time-consuming to set up, the result was a well-documented portfolio of failing, passing and invalid tests by codebase and environment.

The Solution

After a lot of research, prototyping, and discussion we came up with a custom scripted Jenkins job that generates unit and integration test jobs for each of our environments.

Inputs

  • The name of the staging environment
  • The Git branch that it is attached to
  • The slack channel to send notifications

Outputs

  • A “master job” that runs all the tests
  • Views showing all the tests on a per-codebase basis
  • Additional snapshot views showing all failed tests for that environment

 

 

Above is an illustration of a test folder automatically generated for our WWW codebase on our development server (here named “stage-4”). When jobs are completed inside of the Stage4_WWW folder, developers are notified on the Slack channel provided.

The Details

Each of our unit and integration tests is quite simple under the hood, running via PHPUnit or JUnit depending on the codebase. Results get encoded and recorded as pass / fail. What remained for us to do was to quickly create and maintain test jobs via code. Jenkins provides one solution in the form of Templates, i.e., a way to marshal changes across similar jobs. While that solved the issue of managing existing jobs, it didn’t help with creating new ones on the fly. Each of the existing solutions seemed to require knowing exactly what was needed in terms of jobs and environments. We had no idea how many environments we would need or what they would be named.

What we really needed was a programmatic approach. Our QA engineer Bill Claytor had already worked with a relatively new plugin: Job DSL. He used it to create automated smoke test jobs against any codebase and staging environment. Job DSL is an open-source initiative to provide a structured language in Groovy for building jobs. It’s maintained in part by Justin Ryan of Netflix and a tight community of engineers. Bill successfully automated the process of setting up our smoke tests to run against any environment using this tool. In addition, he created these test jobs using Folders, which puts similarly named jobs into separate Jenkins folders, properly segregating jobs from one another and preventing confusion when staging servers had the same codebases and tests.

All of this knowledge was put into our then-newest endeavor: using Job DSL and Groovy to build test jobs on demand. Our systems administrators had already embarked on version-controlling staging and production environments in Chef as part of an overall “DevOps” initiative, so we felt it appropriate to call our journey of version-controlling Jenkins test jobs and automating their creation and management “JenkinsOps”. After some research and group discussion, we landed on using custom Groovy scripts to call Job DSL code that would generate our PHPUnit and JUnit test jobs based on targeted filesystems.

The first time a codebase is deployed to a new environment by our build server, it will post to a separate Jenkins test server job that initiates the process of setting up the test jobs. These jobs are labeled “Master<Codebase>Jobs” as they handle starting the whole update process. Separate jobs labeled “<Codebase>ConfigCreate” then use Groovy scripts to traverse file directories in our codebases to find tests:

https://gist.github.com/jdickie/faff5046a9c80f0688e89720dd2e3d8a
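
The gist itself is not reproduced here; as a rough, hypothetical sketch of that idea, a Groovy build step could walk the checked-out codebase and write a manifest of tests by naming convention. The directory layout, environment variable and manifest format below are assumptions, not NPR's actual code.

// Run as an Execute Groovy script build step: walk a checked-out codebase and emit
// a manifest of test sources, one path per line. CODEBASE_DIR is a hypothetical job parameter.
def codebaseDir = new File(System.getenv('CODEBASE_DIR') ?: 'workspace/www')
def manifest = new File('test-manifest.txt')

manifest.withWriter { writer ->
    codebaseDir.eachFileRecurse { f ->
        // Pick up PHPUnit and JUnit test sources by naming convention.
        if (f.name ==~ /.*Test\.(php|java)/) {
            writer.writeLine(f.absolutePath)
        }
    }
}
println "Found ${manifest.readLines().size()} tests"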

A separate Groovy file and associated Jenkins job were created to traverse each of our main codebases and generate a manifest of test jobs. This manifest gets passed to a downstream job that loops through the manifest and creates jobs based on predefined Job DSL code. This example generates a single job that SSHes into our website code and initiates a PHPUnit test:

https://gist.github.com/jdickie/2aa65b19de519078a5f11b553e4c908d
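
Again, the linked gist is the authoritative version; the following is only a minimal Job DSL sketch in the same spirit. The environment and codebase names, the SSH target and the remote path are placeholders, not NPR's actual job definitions.

// Job DSL seed script: generate one Jenkins job per test listed in the manifest
// produced by the traversal step above. Assumes the per-environment folder already exists.
def environment = 'stage-4'   // hypothetical staging environment name
def codebase = 'WWW'          // hypothetical codebase name

readFileFromWorkspace('test-manifest.txt').readLines().each { testPath ->
    def testName = testPath.tokenize('/').last() - '.php'
    job("${environment}_${codebase}/${testName}") {
        description("Auto-generated PHPUnit job for ${testPath}")
        steps {
            // SSH into the environment that hosts the website code and run the single test;
            // the job's pass/fail status comes from the phpunit exit code.
            shell("ssh jenkins@${environment}.example.org 'cd /var/www && phpunit ${testPath}'")
        }
    }
}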

The logic to create these jobs, along with the jobs that set up the automation process, was also version-controlled. Doing this allowed us to make sure we could port the automation framework over to any new Jenkins server in the future, if needed.

As a design principle, every test in the codebase maps to a separate Jenkins job. There are a few cases where we use test suites, but they are the exception to the rule. The primary motivation for this is to have history at a more granular level. In order to make sure developers aren’t bombarded with test jobs, and to reduce complexity, we use the MultiJob plugin to generate “Master” test jobs. These are what get activated via Jenkins automation to run all tests for a codebase and environment. When all tests within a MultiJob are run, the job notifies the provided Slack channel and outputs a report of all unit and integration tests.

 

 

The final output is a Jenkins folder that contains the codebase and environment settings to run all unit and integration tests. Also within that folder are the same “Master<Codebase>Job,” “<Codebase>ConfigCreate,” and “<Codebase>JobCreate” jobs. The design principle here is that each codebase and environment folder combination has its own self-contained updating mechanisms. When the next trigger for a QA build comes through, the folder for that environment and codebase is automatically detected and run instead of the whole top-down creation process.

 

Conclusion

We hope that this has shed some light on the very powerful Job Builder plugins and inspires you to think about how you can leverage Jenkins to do some great and interesting things in your organization. We (Grant Dickie and Paul Miles) are going to be speaking at Jenkins World 2016 this September, where we’ll share more about “JenkinsOps” at NPR. Besides talking about other great output from the JenkinsOps initiative, we’ll talk about some of the other cultural shifts and delivery challenges that are associated with the work.

Categories: Companies

Let Your Voice Be Heard, Take the 2016 Jenkins Community Survey!

Thu, 09/08/2016 - 19:37

Just as in past years, this year CloudBees is again sponsoring a survey of the Jenkins community. The goal is for the community to get some objective insights into what Jenkins users would like to see in the Jenkins project.

The survey will be open until the end of September. This is your chance to be heard and influence the next 10 years of software development!

It is extremely valuable for the community to best understand how people are using Jenkins and what improvements they would like to see. Accordingly, we are providing an added incentive to fill out the survey. Two lucky survey-takers will win either a pass to Jenkins World 2017 (1st prize) or a $100 Amazon gift card (2nd prize).

Our lawyers tell us it can’t all be fun, so we now break for the boring legal stuff…the terms and conditions.

2016 Jenkins Community Survey Terms and Conditions:

  1. The survey will be open from September 8th to September 30th, 2016. If you submit a completed survey, you will be entered into a drawing for a free pass to Jenkins World 2017 (1st prize) and a $100 Amazon gift card (2nd prize). Yeah, you can only enter the contest once, so please don’t over-stuff the survey box. After the survey closes, we’ll draw a name to choose the winner…and maybe it will be you!

  2. If you do not supply your name and email address, you are not eligible to win. Think about it – we have no way to contact you. If you do supply your name and email address, we’ll send you the survey results.

Eligibility:

  1. The Amazon gift card can only be won by someone who lives in a country where you can buy from Amazon. If you live in a country without Amazon access, we will send you $100 via PayPal. If you live in a country under U.S. embargo, we’re sorry, but there’s not much we can do here.

  2. You must be 18 years old or older (20 or older in Japan).

  3. You must use Jenkins or be affiliated with its use.

  4. The winner is responsible for any federal, state and local taxes, import taxes and fees that may apply.

  5. This survey is administered by CloudBees, Inc., 2001 Gateway Place, Suite 670W, San Jose, CA 95110, +1-408-805-3552, info@cloudbees.com. If you’d like to send us feedback or have questions, please email us at jenkins-survey@cloudbees.com. And no, we do not accept bribes to rig the contest.

  6. Regardless of whether you win the 2017 Jenkins World pass or Amazon gift card, you will have the satisfaction that you’re providing input that will help make Jenkins even better. Thank you in advance for sharing your thoughts with the community!

  7. Oh, and the best part…no purchase necessary!

Take the survey here

 

Blog Categories: Jenkins, Company News
Categories: Companies

Continuous Integration (CI) Pipeline with NetApp and CloudBees Enterprise Jenkins

Wed, 09/07/2016 - 23:08
This is a guest post by Bikash Roy Choudhury, Principal Architect at NetApp.

In a previous blog post, I wrote about continuous integration (CI), continuous delivery (CD), continuous deployment and the challenges organizations may face while aspiring to adopt a DevOps practice. The shift toward agile development has forced business owners to be more exploratory and innovative, emphasizing speed in application development workflows. These new types of applications adopt microservices and run as cloud-native applications.

Testing code in an iterative manner improves code quality by identifying bugs in the early stages. Multiple instances of the code can be developed, built and deployed in containers. CloudBees Enterprise Jenkins is one of the most popular CI tools used by developers. Customers choose to run different services, including Jenkins, in containers, which provides homogeneity in the development and deployment environments along with horizontal scalability. This means applications developed on one platform should run on another. Containers are ephemeral in nature but still require persistent storage for resiliency, data recovery and scalability. While CloudBees Jenkins is also widely used for CD, in this blog we focus on the CI pipeline with CloudBees Enterprise Jenkins and ONTAP 9.

During the code development and deployment process, data is generated, stored, processed and managed on NetApp storage solutions. NetApp offerings provide persistent storage for Docker containers with the NetApp Docker Volume Plug-In (nDVP). NetApp also worked jointly with CloudBees to develop a plug-in that reduces developer code checkout time from source code repositories such as Git and Perforce, shortens the continuous build and test cycle and developer workspace creation time, and at the same time improves storage space efficiency and reduces storage costs. Native NetApp® technologies such as thin-provisioned FlexVol® volumes, FlexClone® volumes and Snapshot® copies seamlessly integrate with CloudBees Enterprise Jenkins builder templates using RESTful APIs. A CI pipeline with CloudBees Enterprise Jenkins and NetApp improves the overall customer or user experience through automation, iterative testing and data resiliency.

The primary reasons for the NetApp and CloudBees joint activity are to abstract and integrate NetApp technologies and to empower CI admins and developers to seamlessly integrate them into the CI workflow using RESTful APIs. The CI team and developers no longer have to depend on storage admins to configure and expose the functionality that accelerates the development process. This integration also brings additional benefits to the business and the application owners in development environments. For more information, refer to TR-4547. Following are some of the benefits of this approach:

  • Improve developer productivity. This allows instantaneous user workspaces and dev/test environments for databases. These database environments may be used for patch testing, database changes, unit testing by developers, and QA during staging without risking the source codebase or the production database.
  • Provide faster time to market. Reduced checkout/build times and iterative testing (fail fast, fix fast) reduce the errors and thus reduce the technical debt. This also improves the code quality.
  • Improve storage space efficiency. User workspaces and databases created for dev/test do not take any additional storage space from their parent production volumes. This reduces the storage costs in cloud/platform 3 environments and yet provides total ownership and control over the data.
  • Enable developers to use native NetApp technologies in the development workflow by using APIs in a consumable model.

Figure 1) CI pipeline with Jenkins and Docker using ONTAP APIs.

The CI and development environments should adhere to some best practices for better code quality and manageability. As illustrated in Figure 1, having a local SCM repository on NetApp storage is recommended. The source code can be cloned from a private or public repository, or new code can be created for development.

Separate development branches or CI code branches can be created on different NetApp volumes. If the code branch is small, then the entire source code is pulled in a single development branch or CI code branch volume. This development branch or CI code branch volume is used as a location to sync up with all the dependencies such as tools, RPMs, libraries, compilers, and so on to perform a full build.

After a successful full build in the CI code branch volume, a NetApp Snapshot copy is taken on the volume. The CI code branch volume now consists of source code, all the dependencies, .jar files (if this is Java code), and all the prebuild artifacts. Now the CI environment is complete. This process reduces a considerable amount of traffic to the SCM volume. Only code changes are submitted or checked into the SCM volume. The builds (developer, CI, or nightly) are performed in the CI code branch volumes.

The developer logs in and checks the latest NetApp Snapshot copy and creates an instant clone of the CI code branch volume. This clone is storage space efficient and is prepackaged with everything that the developer would need to write and make changes to the code. This clone is used as a workspace for the developer. After proper code changes are submitted, reviewed, and checked by Gerrit, unit tests, or some kind of pre-check-in analysis tool, the changes are pushed and committed to the SCM volume.

The changes submitted in the SCM are propagated into the respective CI code branch volume, and an incremental build is performed followed by a NetApp Snapshot copy. Every Snapshot copy taken after an update to the CI environment provides the developer with the most recent cloned copy of the code changes. This is an iterative and important phase of the CI pipeline.

A predefined set of scheduled CI tests is performed on successful developer builds to further harden the code changes by identifying any errors in the code. Depending on the requirement and the development scenario, a nightly build may be scheduled at the end of the day. Upon successful completion of the CI or nightly build, the contents of the CI code branch volume are zipped and copied in the build artifact volume. The copy of the build can be now promoted to QA for additional testing and further deploying it into production from the build artifact volume.

In the CI pipeline setup, the Jenkins master runs in a container. All the components illustrated in Figure 1, such as the local SCM repository (Git), development branches or CI code branches, user workspaces, and the build artifact volume, are mounted on Docker containers. These components run as Jenkins slaves tied to the Jenkins master. The Docker containers use the nDVP to mount the NetApp volumes to provide persistent storage.

This entire workflow, which uses NetApp volumes, Snapshot copies, and FlexClone volumes, is stitched in the CloudBees Jenkins CI pipeline using Docker containers and ONTAP® RESTful APIs. The Jenkins master runs in a Docker container on a physical host or a VM. The Jenkins slaves also run in Docker containers in sibling mode.
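
As a rough illustration of that wiring (not taken from TR-4547 or the plug-in documentation), the scripted Pipeline fragment below creates a NetApp-backed Docker volume through nDVP and runs a build container against it using the Docker Pipeline plugin. The driver name netapp, the volume and image names, and the build command are all assumptions that depend on how nDVP is configured in your environment.

node('docker') {
    // Create (or reuse) a persistent volume backed by ONTAP through the NetApp Docker
    // Volume Plug-in; the registered driver name depends on the local nDVP configuration.
    sh 'docker volume create -d netapp --name ci-branch-vol'

    // Run the build inside a slave container with the persistent CI code branch volume mounted,
    // so source, dependencies and build artifacts survive the ephemeral container.
    docker.image('jenkins-build-env:latest').inside('-v ci-branch-vol:/workspace') {
        sh 'cd /workspace && make build'
    }
}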

Figure 2) Docker on Docker: Jenkins master-slave in a sibling setup.

The architecture of the NetApp and Jenkins plug-in using a Docker container is shown in Figure 2. The Docker engine runs on the VM or the physical host and passes the Docker socket from the host to the container; nDVP runs on the host VM as well as in the Jenkins master container. The main purpose of nDVP is to attach NetApp volumes to containers as they are spun up, in order to leverage the storage efficiency and resiliency features that NetApp has to offer. For more information, refer to TR-4547. To download the Jenkins plug-in, visit https://github.com/netapp

 

Blog Categories: Jenkins
Categories: Companies

Jenkins World Speaker Highlight: High Velocity iOS CI with Native macOS Virtualization and Jenkins

Wed, 08/31/2016 - 16:02

This is a guest post by Jenkins World speaker Manisha Arora, Co-Founder of Veertu.

We at Veertu Inc. have been doing virtualization for a while. Veertu’s technical co-founders were part of the team that built KVM and nested virtualization technology, and SDN and SDS for the public cloud at Ravello Systems. So it’s been pretty interesting for us to see the evolution of infrastructure in the Linux and Windows domains over the years, and how it has kept pace with innovations in application development. However, when you look at mobile app dev, it’s pretty clear that dev and test infrastructure is still trying to catch up with the innovations in the app dev domain. This is even more obvious for the Apple device ecosystem.

So we started to work on software solutions based on our innovative technology that virtualizes OS X (now called macOS) based infrastructure, with the goal that all app dev being done on OS X (native OS X apps; iOS, watchOS and tvOS apps; apps that need to run in Safari) can leverage this infrastructure whenever, wherever and however it is needed.

In our session at Jenkins World 2016, we will describe and show you the application of this for high velocity iOS build and test workloads.

As most of you already know, iOS dev, build and test is done using Xcode and Simulator, which make up Apple’s development SDK and run on macOS. Different iOS projects want different versions of Xcode, different Ruby versions, different gems and different dependency managers. When there are teams of four or more iOS developers working on these projects, the scale and complexity of build and test environments for code commits increases significantly. In the Linux world, this is addressed by executing build and test workloads on a cloud infrastructure (public or private), where specific environments (with all the dependencies) are spun up on demand for every code commit/build. The build and test infrastructure can scale up and down in line with the speed of application development. The same isn’t true for iOS application development on the Apple platform. There is no server-grade infrastructure technology available for the Apple platform that addresses this challenge completely. In the last 3-4 years, a lot of service providers have come up with offerings for iOS build and test environments on demand, but after careful investigation and multiple customer conversations, we concluded that none of these providers can meet the demands of customers’ dynamic workloads in real time.

The problem is unsolved and growing with the exponential growth of mobile app dev. We are developing a native infrastructure virtualization platform for macOS, which will sit on top of the Mac operating system and enable users to take a cluster of Mac hardware (Mac minis, Mac Pros, etc.) and convert it into a private cloud. While there have been attempts to do this in the past with desktop-based virtualization, that approach doesn’t really meet the needs of CI dev/test workloads. CI dev/test workloads for iOS need server-grade features like high performance with no overhead in virtual instances, a small footprint, non-intrusive execution of multiple instances on a single piece of hardware and faster boot times. Our upcoming technology will address all of this and offer a solution that users can use to very quickly build, control and manage an on-demand private cloud for iOS build and test from within their Jenkins CI process and other CI platforms.

Creating Veertu macOS Cloud in Jenkins

Configure Veertu Cloud for iOS build/test

We are very excited that we have been invited to showcase and share this with everyone at Jenkins World 2016 in Santa Clara, CA. For more details, visit https://www.cloudbees.com/high-velocity-ios-ci-native-os-x-virtualization-plugin

Manisha Arora
Co-Founder
 Veertu

This is a guest post written by Jenkins World 2016 speaker Manisha Arora. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Audit Trail Dashboard with CloudBees Jenkins Analytics

Tue, 08/30/2016 - 17:19
CloudBees Jenkins Analytics

Analytics is an important feature of the CloudBees Jenkins Platform. Elasticsearch is used to index build and performance data from CloudBees Jenkins Enterprise masters that are connected to CloudBees Jenkins Operations Center (and optionally from CloudBees Jenkins Operations Center as well), and that information is displayed via a set of built-in Kibana dashboards. However, you are not limited to the provided dashboards and may modify them or create completely new dashboards. Kibana is exposed via the CloudBees Jenkins Operations Center Analytics Dashboard Creator link, allowing you to customize existing dashboards or create new ones. In this post, I will walk you through the process of creating a custom Kibana dashboard for a very specific use case - a Jenkins Audit Trail dashboard.

Jenkins and Audit Tracking

Tracking changes is an important part of most enterprise organizations - whether it is for legal compliance, enterprise policies, other standards, all of the above or something else entirely.  In the IT space, this type of tracking is often referred to as audit logging or an audit trail. Valentina Armenise explored audit logging strategies for Jenkins over a year ago in the following post: https://www.cloudbees.com/blog/best-practices-setting-jenkins-auditing-and-compliance

In this post, we will explore how to integrate one of the plugins mentioned in Valentina’s post, the Audit Trail plugin, with CloudBees Jenkins Analytics to provide a centralized dashboard of audit activity across multiple CloudBees Jenkins Enterprise masters connected to CloudBees Jenkins Operations Center.

Putting it all Together - Audit Trail Analytics with Logstash and Syslog

In order to view data in a CloudBees Jenkins Analytics dashboard, it has to be available in the Elasticsearch index configured to be used by CloudBees Jenkins Operations Center. However, we don’t want to pollute the existing build-* and metrics-* indexes with audit data, so we will create a new Elasticsearch index - and we will see how this will make it much easier to create our custom Audit Trail dashboard.  

There are a number of ways to push data into Elasticsearch, but we are going to use Logstash because it is able to easily consume the Syslog format as input and because the Audit Trail plugin supports Syslog as one of its outputs. More specifically, the Audit Trail plugin supports the output of RFC 3164 compliant Syslog - making it very easy for Logstash to consume and manipulate.

We are going to run Logstash as a Docker container using the official image from Docker Hub. Starting the Logstash container with the following command (the Logstash configuration is included in the docker run command) allows it to consume Syslog output from the Jenkins Audit Trail plugin and push that data into a custom 'audit_trail-*' Elasticsearch index via the Logstash Elasticsearch output plugin. Because the index name uses the dynamic syntax, a new index will be created every day. Of course, replace elasticsearch_url, password and user_name with your own values, and adjust the ports if necessary:

docker run -d -p 5000:5000/udp -p 5000:5000 --restart=always --name=logstash logstash:2.3 logstash -e \
  'input { syslog { port => 5000 type => syslog } }
   filter { mutate { rename => { "program" => "master" } } }
   output {
     stdout { }
     elasticsearch { hosts => "http://{elasticsearch_url}" index => "audit_trail-%{+YYYY.MM.dd}" password => "{password}" user => "{user_name}" }
   }'

Also note that we are using the Logstash mutate filter to rename the Syslog ‘program’ field to ‘master’. We will use this field to capture the name of the Jenkins master where the audit activity occurred, and renaming it provides more meaningful labels in the Kibana dashboard we will create.

One other important consideration is how Elasticsearch creates indexes and maps fields. If you don’t create an index ahead of time, or you need dynamic indexes (as in this case, where we are creating a new index each day), Elasticsearch will create the index and dynamically map fields based on the first record that is pushed to it. This matters when aggregating data for Kibana-based reports, as certain aspects of Kibana dashboards may be difficult to manage and may produce undesirable output if fields are analyzed. By default, Elasticsearch analyzes all string fields with the built-in standard analyzer and breaks them up based on Unicode text segmentation. So, in the case of our master (Syslog App Name) field, it would be tokenized on dashes and spaces and lower-cased. To avoid having the master field analyzed, we will use an Elasticsearch index template that lets us preconfigure the mapping for the ‘master’ field based on a wildcard match on the index name:

curl -u username:password -XPUT http://elasticsearch_url/_template/template_audit_trail -d '
{
  "template": "audit_trail-*",
  "mappings": {
    "syslog": {
      "properties": {
        "master": {
          "type": "string",
          "index": "not_analyzed",
          "store": true
        }
      }
    }
  }
}
'

Once you have the Logstash container running and have set up the custom index template for the ‘master’ field, you are ready to install and configure the Audit Trail plugin in Jenkins.

First, install the Audit Trail plugin via your Jenkins Plugin Manager. Once it is installed, you will need to configure it under Manage Jenkins » System Configuration to point to your Logstash instance. Here is an example of what the Audit Trail configuration looks like on a Jenkins master:

[Screenshot: Audit Trail plugin configuration on a Jenkins master]

We have specified the Jenkins master name as the value of the ‘Syslog App Name’ field. Once this is saved, all auditable actions will be pushed to Elasticsearch via Logstash, the same Elasticsearch instance we have configured for CloudBees Jenkins Operations Center to use.

Analytics Dashboard Creator - Kibana

Now that we have Audit Trail data flowing into a new custom index in Elasticsearch it is time to create a custom dashboard to display that data. We start by clicking on the Analytics Dashboard Creator link in CloudBees Jenkins Operations Center, bringing up the Kibana interface. NOTE: Before running through these instructions you will want to push some Audit Trail data from two or more masters to Elasticsearch in order to verify that everything is working - you can save a few job configurations and the Jenkins system config for example.

What follows is a detailed set of instructions to:

  1. Create a new audit_trail-* index pattern
  2. Create a new saved Kibana search, Audit Trail Search, based on that index pattern
  3. Set up two visualizations based on the Audit Trail Search
  4. Create a new Kibana dashboard that consumes the new search and two new visualizations
  5. Create a new CloudBees Jenkins Analytics view that will display the new dashboard in CloudBees Jenkins Operations Center
Configure Index Pattern

The first thing we need to do is to add our new audit_trail-* index pattern by clicking Settings and selecting Indices as seen below:

After you have entered the pattern, click the Create button.
Note: The trailing asterisk is very important because the audit data is being indexed daily and without it there will be no match. Non-wildcard index patterns require an exact match.

Create a Saved Search

Now that we have a new index pattern, we will be able to create a new search based on it. Click on the Discover tab in the top navigation and then select the audit_trail-* index pattern from the left drop-down:

Next, under Available Fields, hover over the master field and click the add button; then do the same for the message field (Time is included by default):

Next, click the save button (disk icon in upper right), name the new search Audit Trail Search and click the Save button:

Now that we have a custom search saved, we can create some visualizations based on it. 

Create Visualizations

Select the Visualize tab in the top menu and then select Pie chart from the list of new visualizations:

Next, we will need to select a search source for the visualization - select From a saved search and then select Audit Trail Search:

 

Now we need to customize our new Pie chart visualization. We will stick with the default metrics aggregation type of Count, but in order to display something useful we will add a bucket to our visualization by selecting Split Slices under Select bucket types:

Now we need to configure the bucket we just added. For the Aggregation select Terms, then for the Field select master, for Order select Top and for Size enter 15 (default values for everything else): 

Your new pie chart visualization should look something like this:

Finally, click on the save icon and save the visualization with a Title of Audit Trail Masters:

Next we will create an Area chart visualization to summarize audit activity on multiple CloudBees Jenkins Enterprise masters over a timeline:

 

For Step 2 select the saved Audit Trail Search once again. Then to configure the Area chart visualization go with the default settings for the Y-Axis - Count - select X-Axis for the bucket type with an Aggregation value of Date Histogram, @timestamp as the Field value and Auto as the Interval. Then click on Add sub-buckets, selecting Split Area as the bucket type, Terms as the Sub Aggregation value, master for the Field, an Order of Top with a Size of 10 and select metric: count as the Order By value:

Save the visualization with a Title of Audit Trail Summary.

Create a Kibana Dashboard

Now that we have a saved search and two Audit Trail visualizations, we can put them together into a cohesive dashboard. Click on the Dashboard link in the top navigation and then click the New Dashboard button.

Next, click on the Add Visualization button, search for the word Audit and select the Audit Trail Masters visualization. Resize the visualization slightly and then move it to the top-right corner of the dashboard. Repeat those same steps, only select the Audit Trail Summary visualization. Resize the Audit Trail Summary visualization to be the same height as the Audit Trail Masters visualization and to take up the rest of the width. Next, click on the Searches tab and select the Audit Trail Search that we created earlier (if you don’t see it, search for Audit). Adjust the width of the Audit Trail Search visualization to take up the entire width and adjust the height to your liking. You now have a dashboard that looks something like the following:

Now click on the Save Dashboard button, name it Audit Trail, check the Store time with dashboard checkbox and click the Save button.

Create a CloudBees Jenkins Analytics Audit Trail View

 

Now that we have an Audit Trail dashboard saved in Kibana, we can add it to CloudBees Jenkins Operations Center as a new view. From the root of CloudBees Jenkins Operations Center, create a new view named Audit Trail, select Custom Analytics View as the type and click the OK button:

Name the view Audit Trail and then click the Add button to add a dashboard. Enter Audit Trail as the dashboard Name, select the Audit Trail dashboard for the Dashboard selection, and then click the OK button:

You should now have a new CloudBees Jenkins Analytics Audit Trail view similar to the one pictured below (although you most likely won’t have as much data as is shown here, at least not yet):

Summary

In this post we explored how to create custom CloudBees Jenkins Analytics dashboards with a completely new data source - in this case, data from the Audit Trail plugin. You can apply these techniques to any data that you push into Elasticsearch and create your own custom CloudBees Jenkins Analytics dashboards.

 

 

Blog Categories: Developer Zone, Jenkins
Categories: Companies

Jenkins World Speaker Highlight: Enforcing Jenkins Best Practices

Fri, 08/26/2016 - 19:09

This is a guest post by Jenkins World speaker David Hinske, Release Engineer at Goodgame Studios.

Hey there, my name is David Hinske and I work at Goodgame Studios (GGS), a game development company in Hamburg, Germany. As a release engineer in a company with several development teams, using several Jenkins instances comes in handy. While this approach works fine in our company and gives the developers a lot of freedom, we came across some long-term problems concerning maintenance and standards, problems mostly caused by misconfiguration or non-usage of plugins. With “configuration as code” in mind, I took the approach of applying static code analysis, with the help of SonarQube, a platform that manages code quality, to all of our Jenkins job configurations.

As a small centralized team, we were looking for an easy way to monitor the health of our growing Jenkins infrastructure. Considering “configuration as code”, I developed a simple extension of SonarQube to manage the quality and usage of all spawned Jenkins instances. The given SonarQube features (like customized rules/metrics, quality profiles and dashboards) allow us and the development teams to analyze and measure the quality of all created jobs in our company. Even though a Jenkins configuration analysis cannot cover all of SonarQube’s axes of code quality, I think there is still potential for conventions/standards, duplications, complexity, potential bugs (misconfiguration), and design and architecture.

The results of this analysis can be used by everyone involved in working with Jenkins. To achieve this, I developed a simple extension of SonarQube containing everything needed to hook up our SonarQube with our Jenkins environment. The implementation contains a new basic language, “Jenkins”, and an initial set of rules.

Of course, the needs depend strongly on the way Jenkins is being used, so not every rule implemented will be useful for every team, but the same is true of any other code analysis. The main inspiration for the rules came from developer feedback and some articles found on the web. The many different ways to use and configure Jenkins provide a lot of potential for many more rules. With this new approach to quality analysis, we can enforce best practices like the following (a configuration sketch follows the list):

  • Polling must die (Trigger a build due to pushes instead of poll the repository every x minutes)
  • Use Log Rotator (Not using log rotator can result in disk space problems on the master)
  • Use slaves/labels (Jobs should be defined where to run)
  • Don’t build on the master (In larger systems, don’t build on the master)
  • Enforce plugin usage (For example: Timestamp, Mask-Passwords)
  • Naming sanity (Limit project names to a sane (e.g. alphanumeric) character set)
  • Analyze Groovy Scripts (For example: Prevent System.exit(0) in System Groovy Scripts)
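
As an illustration of what several of these rules expect, here is a hedged Job DSL sketch of a job definition that would pass them. The job name, label, repository URL and build command are invented, and the exact rule set naturally depends on how the SonarQube extension is configured.

job('example-service-build') {            // plain alphanumeric name ("naming sanity")
    label('linux-agent')                  // run on labeled slaves, never on the master
    logRotator {
        daysToKeep(30)                    // keep old builds from filling the master's disk
        numToKeep(50)
    }
    scm {
        git('https://git.example.org/example-service.git')
    }
    // No SCM polling trigger: builds are expected to be started by push notifications
    // from the repository ("polling must die").
    wrappers {
        timestamps()                      // enforced plugin usage: Timestamper
        maskPasswords()                   // enforced plugin usage: Mask Passwords
    }
    steps {
        shell('./gradlew clean test')
    }
}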

Besides taking control over the configuration of any Jenkins instance we want, there is also room for additional metrics, like measuring the number and types of jobs (Freestyle, Maven, etc.) to get an overview of the general load on a Jenkins instance. A more sophisticated idea is to measure the complexity of jobs and even pipelines. Like code, job configuration gets harder to understand as more steps are involved. On the one hand, scripts, conditions and many parameters can negatively influence readability, especially if you have external dependencies (like scripts) in different locations. On the other hand, pipelines can also grow very complex when many jobs are involved and chained for execution. It will be very interesting for us to see where and why complex pipelines are being created.

For visualization we rely on SonarQube and its interpretation of the data; it offers a wide range of widgets, and everybody can use and customize the dashboards. Our centralized team, for example, has a separate dashboard that gives us a quick overview of all instances.

The problem of “growing” Jenkins installations and their maintenance is not new. Especially when many developers are involved, including ones with access to create jobs and pipelines themselves, an analysis like the one this SonarQube plugin provides can be useful for anyone who wants to keep their Jenkins in shape. Customization and standards play a big role in this scenario. This talk is certainly not an advertisement for the plugin I developed; it is about the somewhat crazy idea of using static code analysis on Jenkins job configurations. I haven’t seen anything like it so far, and I feel there might be some real potential behind the idea.

Join me at my Enforcing Jenkins Best Practices session at the 2016 Jenkins World to hear more!

David Hinske
Release Engineer
 Goodgame Studios

This is a guest post written by Jenkins World 2016 speaker David Hinske. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Now On DevOps Radio: Leading the Transformation, Live - with Gary Gruver!

Thu, 08/25/2016 - 02:49

Achieving a DevOps transformation is much easier said than done. You don’t just flip a switch and “do” DevOps. It’s also not about buying DevOps tools.

Don’t you wish you could just sit down and talk with someone who’s done it all before? You can! This week, we’re excited to share that Gary Gruver, author and Jenkins World 2016 keynote speaker, joined us on DevOps Radio to talk about leading a DevOps transformation. So plug in your headphones, shut your office door and get comfortable: You’re going to want to hear this!

For those of you who don’t know Gary, he’s co-author of A Practical Approach to Large-Scale Agile Development, a book in which he documents how HP revolutionized software development while he was there, as director of the LaserJet firmware development lab. He’s also the author of Leading the Transformation, an executive guide to transforming software development processes in large organizations. His impressive experience doesn’t stop at author and director at HP, though. As Macys.com’s VP of quality engineering, release and operations, he led the retailer’s transition to continuous delivery.

In this episode of DevOps Radio, Gary and DevOps Radio host Andre Pino dive into the topics covered in Gary’s two books. They talk through the reality of leading a transformation, discussing practical steps that Gary took. They also bring up challenges — ones that Gary faced, and ones that you might, too.

So, what are you waiting for?! Tune in to the latest episode of DevOps Radio. It’s available now on the CloudBees website and on iTunes. Join the conversation on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post!

You can meet Gary in person at Jenkins World and hear his conference keynote. Register now! Use promotion code JWHINMAN and you’ll get a 20% discount. Meanwhile, learn more about Gary on his website.

 

Blog Categories: Jenkins, Company News
Categories: Companies

Jenkins World Speaker Highlight: Secure Container Development Pipelines with Jenkins

Tue, 08/23/2016 - 20:17

This is a guest post by Jenkins World speaker Anthony Bettini, Founder and CEO at FlawCheck.

At FlawCheck, we’re really excited about presenting to the Jenkins community at the upcoming Jenkins World 2016 in Santa Clara! FlawCheck will be presenting on “Secure Container Development Pipelines with Jenkins” in Exhibit Hall C, on Day 2 (September 14) from 2:00 PM - 2:45 PM. At FlawCheck, most of our time is spent with customers who are using Jenkins to build Docker containers, but are concerned about the security risks. FlawCheck’s enterprise customers want to use enterprise policies to define which of the containers they build with Jenkins reach production, and then to continuously monitor those containers for compliance.

Building security into the software development lifecycle is already difficult for large enterprises following a waterfall development process. With Docker, particularly in continuous integration and continuous deployment environments, the challenge is even more difficult. Yet, for enterprises to do continuous deployment, security needs to be coupled with the build and release process and the process needs to be fully automated, scalable and reliable.
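
As a generic, hypothetical illustration of that coupling (not FlawCheck’s actual integration), a build-and-release pipeline can treat a failed scan the same way it treats a failed test; the scan-image command below is a placeholder for whatever scanner you use.

    // Illustrative sketch only: gate a container build on a security scan.
    // "scan-image" is a hypothetical placeholder command, not a real CLI.
    node('docker') {
        stage('Build image') {
            checkout scm
            sh 'docker build -t myapp:${BUILD_NUMBER} .'
        }
        stage('Security scan') {
            // A non-zero exit code from the scanner fails the build, so a
            // vulnerable image never reaches the release stages.
            sh 'scan-image --fail-on high myapp:${BUILD_NUMBER}'
        }
        stage('Push to registry') {
            sh 'docker push myapp:${BUILD_NUMBER}'
        }
    }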

If you’re interested in container security and security of open source software passing through Jenkins environments, we’d encourage you to grab a seat at the FlawCheck talk, “Secure Container Development Pipelines with Jenkins” in Exhibit Hall C, on Day 2 (September 14) from 2:00 PM - 2:45 PM. In the meantime, follow us on Twitter @FlawCheck and register for a free account at https://registry.flawcheck.com/register.

Anthony Bettini
Founder and CEO
 FlawCheck

This is a guest post written by Jenkins World 2016 speaker Anthony Bettini. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Top 9 Reasons You Need to Go ALL IN and Attend Jenkins World

Sat, 08/20/2016 - 21:09

The countdown is on. Jenkins World 2016 is coming to the Santa Clara Convention Center, September 13-15. It’ll be the world’s largest gathering of Jenkins users ever - come interact with the community and learn about everything Jenkins. The lead organizing sponsor is CloudBees, along with a number of premier Jenkins ecosystem vendors who are also sponsoring. Jenkins World will offer attendees opportunities to learn, explore and network. This year’s theme is “ALL IN” as Jenkins users, experts and thought leaders prepare to go ALL IN on DevOps.

Need more convincing? Below are nine reasons for YOU to go ALL IN at Jenkins World 2016:

  1. Hear keynotes from industry leaders - Kohsuke Kawaguchi, founder of the Jenkins project, kicks off the conference with the opening keynote this year. Other keynotes you won’t want to miss include Sacha Labourey, CEO of CloudBees, and Gary Gruver, former DevOps exec at Macys.com and HP, and industry author. Rumor has it that Gene Kim may make a guest appearance, too!
  2. Attend training/workshop add-on options – Come to Jenkins World as an attendee, leave as a Jenkins master by attending Jenkins certification training and/or learning the fundamentals of Docker and Jenkins. Additional workshops are available, covering topics such as plugin development, Jenkins certification and automating pipelines with the Pipeline plugin.
  3. FREE certification! Your Jenkins World registration also provides you with the option to take a certification exam completely FREE! Did we mention it was FREE?
  4. Rub shoulders with the Jenkins stars - Get access to some of the best Jenkins experts in the world - attend their sessions and network with them. The sessions cover a range of topics such as: infrastructure as code, security, containers, pipeline automation, best practices, scaling Jenkins and community development projects.
  5. Visit a variety of Jenkins ecosystem vendors – At the expanded Sponsor Expo you can check out a range of technologies and services that help you optimize software delivery with Jenkins.
  6. Pick up your next read – CloudBees Senior Consultant Viktor Farcic has recently published his book, The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices. Be one of 200 lucky attendees to get a free copy and be sure to catch his session on September 13.
  7. Do you follow CommitStrip? They will be onsite and YOU can help them paint a custom Jenkins-themed mural!
  8. Meet the Butler and snap a selfie – You may have seen the Butler on @CloudBees engaged in some Xtreme social media adventures to get himself to Jenkins World, but at Jenkins World you’ll have the chance to meet him in person. Don’t forget to snap a pic with him at the social media station and share with your friends.
  9. Spiff up your wardrobe – Always a hit, this year’s t-shirt promises to be hotter than ever. No more hints – attend and find out why!

All of this and so much more awaits you at Jenkins World. Go “All In” and register now! Use this code JWHGILMORE and get 20% off your conference registration.

See you in Santa Clara!

Categories: Companies

Service Discovery (The DevOps 2.0 Toolkit)

Fri, 08/19/2016 - 22:34

Service discovery is the answer to the problem of configuring our services when they are deployed to clusters. In particular, the problem is caused by a high level of dynamism and elasticity. Services are no longer deployed to a particular server, but somewhere within a cluster. We do not specify the destination but the requirements: deploy anywhere, as long as there is the specified amount of CPU and memory, a certain type of hard disk, and so on.

Static configuration is not an option anymore. How can we statically configure a proxy if we do not know where our services will be deployed? Even if we do, they will be scaled, descaled and rescheduled. The situation might change from one minute to the next. If the configuration is static, we would need an army of operators monitoring the cluster and changing it. Even if we could afford that, the time required to apply changes manually would result in downtime and probably prevent us from practicing continuous delivery or deployment. Manual configuration of our services would be yet another bottleneck that, even with all the other improvements, would slow everything down.

Hence, service discovery enters the scene. The idea is simple: have a place where everything is registered automatically and from which others can request information. Service discovery always consists of three components: a registry, a registration process, and discovery or templating.

There must be a place where information is stored. That must be some kind of lightweight database that is resistant to failure, with an API that can be used to put, get and remove data. Some of the commonly used tools of this type are etcd and Consul.

Next, we need a way to register information whenever a service is deployed, scaled or stopped. Registrator is one such tool: it monitors Docker events and adds or removes data from the registry of choice.

Finally, we need a way to change configurations whenever data in the registry is updated. There are plenty of tools in this area, confd and Consul Template being just two of them. However, this can quickly turn into an endeavor that is too complicated to maintain. Another approach is to incorporate discovery into our services themselves, but that should be avoided when possible since it introduces too much coupling. Both approaches to discovery are slowly fading in favor of software-defined networks (SDNs). The idea is that an SDN is created around the services that form a group, so that all the communication flows without any predefined values. Instead of finding out where the database is, let the SDN expose a target called db; that way, your service does not need to know anything but the network endpoint.
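
As a small illustration of the discovery side, the Groovy sketch below asks Consul’s HTTP catalog API where a service called "db" currently lives; the Consul address and service name are assumptions, and in practice a template tool or proxy would usually do this work for you.

    import groovy.json.JsonSlurper

    // Minimal sketch: resolve a service endpoint from Consul instead of hard-coding it.
    // The Consul address and the service name "db" are assumptions.
    def consul = 'http://consul.example.com:8500'
    def json = new URL("${consul}/v1/catalog/service/db").text
    def instances = new JsonSlurper().parseText(json)

    if (instances) {
        // Take the first registered instance; a real setup would load-balance or
        // let confd, Consul Template or an SDN handle the routing.
        def db = instances[0]
        def address = db.ServiceAddress ?: db.Address
        println "db is reachable at ${address}:${db.ServicePort}"
    } else {
        println 'No instances of "db" are registered'
    }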

Service discovery raises another question: what should we do with the proxy?

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recuperating from both hardware and software failures, and centralized logging and monitoring of the cluster.

In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and, even more, tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

Blog Categories: Developer Zone
Categories: Companies

Jenkins World Speaker Highlight: Continuously Delivering Continuous Delivery Pipelines

Thu, 08/18/2016 - 22:23

This is a guest post by Jenkins World speaker Neil Hunt, senior DevOps architect at Aquilent.

In smaller companies with a handful of apps and fewer silos, implementing CD pipelines to support these apps is fairly straightforward, using one of the many delivery orchestration tools available today. There is likely a constrained tool set to support - not an abundance of flavors of applications and security practices - and generally fewer cooks in the kitchen. But in a larger organization, I have found that there are seemingly endless unique requirements and mountains to climb to reach this level of automation on each new project.

Enter the Jenkins Pipeline plugin. The company I recently left, a large financial services organization with a 600+ person IT organization and a 150+ application portfolio, set out to implement continuous delivery enterprise-wide. After considering several pipeline orchestration tools, we determined that the Pipeline plugin (at the time called Workflow) was the superior solution for our company. Pipeline has continued Jenkins’ legacy of presenting an extensible platform with just the right set of features to allow organizations to scale its capabilities as they see fit, and to do so rapidly. As early adopters of Pipeline with an extensive set of requirements, we used it both to accelerate the pace of on-boarding new projects and to reduce the ongoing feature delivery time of our applications.

In my presentation at Jenkins World, I will demonstrate the methods we used to enable this. A few examples:

  • We leveraged the Pipeline Remote File Loader plugin to write shared common code and sought and received community enhancements to these functions.


[Screenshot: Jenkinsfile loading a shared AWS utilities function library]

[Screenshot: awsUtils.groovy, snippets of some AWS functions]

  • We migrated from EC2 agents to Docker-based agents running on Amazon’s Elastic Container Service, allowing us to spin up new executors in seconds and for teams to own their own executor definitions.

[Screenshot: Pipeline run #1 using standard EC2 executors, spinning up an EC2 instance for each node; pipeline run #2 using a shared ECS cluster, with near-instant instantiation of a Docker slave for each node]

  • We also created a Pipeline Library of common pipelines, enabling projects that fit certain models to use ready-made end-to-end pipelines. Some examples (a simplified skeleton of the first follows this list):
    • Maven JAR Pipeline: clones the Git repository, builds the JAR file from pom.xml, deploys it to Artifactory and runs the Maven release plugin to increment to the next version
    • AngularJS Pipeline: executes a grunt and bower build, then syncs the output to Amazon S3 buckets in dev, stage and prod
    • Pentaho Reports Pipeline: clones the Git repository, constructs a zip file and executes the Pentaho Business Intelligence Platform CLI to import the new set of reports on the dev, stage and prod servers
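
A simplified, hypothetical skeleton of the Maven JAR pipeline might look like the following; the repository URL, Artifactory URL and tool setup are placeholders, not our real configuration.

    // Hypothetical skeleton of a ready-made "Maven JAR" pipeline.
    node {
        stage('Checkout') {
            git url: 'https://git.example.com/team/service.git'
        }
        stage('Build') {
            sh 'mvn -B clean package'   // assumes mvn is available on the agent
        }
        stage('Publish') {
            // Placeholder Artifactory repository URL.
            sh 'mvn -B deploy -DaltDeploymentRepository=releases::default::https://artifactory.example.com/libs-release-local'
        }
        stage('Release') {
            // Tags the release and bumps the version for the next iteration.
            sh 'mvn -B release:prepare release:perform'
        }
    }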

Perhaps most critically, a shout-out to the saving grace of this quest for our security and ops teams: the manual input step! While the ambition of continuous delivery is to have as few of these as possible, this was the single most pivotal feature in convincing others of Pipeline’s viability, since any step of the delivery process could now be gate-checked by an LDAP-enabled permission group. Were it not for the availability of this step, we might still be living in the world of: “This seems like a great tool for development, but we will have a segregated process for production deployments.” Instead, we started with a pipeline full of input steps, then used the data we collected about the longest delays to bring management focus to them and unite everyone around the goal of strategically removing them, one by one.
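
A minimal sketch of such a gate is shown below; the group name is a hypothetical placeholder for an LDAP-backed permission group.

    // Minimal sketch: an LDAP-gated manual approval in front of a production deploy.
    // "ops-release-approvers" is a hypothetical group name.
    stage('Approve production deploy') {
        input message: 'Deploy this build to production?',
              submitter: 'ops-release-approvers'
    }
    stage('Deploy to production') {
        sh './deploy.sh production'   // placeholder deployment step
    }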

Going forward, having recently joined Aquilent’s cloud solutions architecture team, I’ll be working with our project teams here to further mature the use of these Pipeline plugin features as we move towards continuous delivery. Already, we have migrated several components of our healthcare.gov project to Pipeline. The team has been able to consolidate several Jenkins jobs into a single, visible delivery pipeline, to maintain the lifecycle of the pipeline with our application code base in our SCM, and to more easily integrate with our external tools.

Due to functional shortcomings in the early adoption stages of the Pipeline plugin and the ever-present political challenges of shifting organizational policy, this has been, and continues to be, far from a bruise-free journey. But we plodded through many of these issues to bring this to fruition and, after months of iteration, ultimately reduced the number of manual steps in some pipelines from 12 down to one and brought pipelines that used to take more than 20 minutes of Jenkins time down to only six minutes. I hope you’ll join this session at Jenkins World and learn about our challenges and successes in achieving the promise of continuous delivery at enterprise scale.

Neil Hunt
Senior DevOps Architect
 Aquilent

This is a guest post written by Jenkins World 2016 speaker Neil Hunt. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Join the Jenkins World Sticker Competition!

Tue, 08/16/2016 - 20:01

We’re thrilled to announce our first Jenkins Butler design contest! Design a unique version of the Jenkins Butler and submit it before September 9, 2016. Voting will take place at Jenkins World, at the sticker exchange booth hosted by Sticker Mule.

We partnered with Sticker Mule, so the person who produces the winning design will get a $100 credit on stickermule.com to turn their design into die cut custom stickers.

Please see how to enter and the rules for the competition below. If you have any questions, please contact us: fboruvka@cloudbees.com.

How to enter:

Entering the competition is easy. Just submit your design (ensuring the rules have been followed) and send it to fboruvka@cloudbees.com before September 9, 2016.

Rules:
  • One design per person
  • Must include the Jenkins Butler
  • The design must be sketched, drawn or digitally drawn, and include dimensions
  • Include the reason behind your design
  • All entries must be submitted before September 9, 2016
  • The design must be original (not previously created, not copyrighted, etc.)

Good luck!

 

Blog Categories: Jenkins
Categories: Companies

Cluster Orchestration (The DevOps 2.0 Toolkit)

Mon, 08/15/2016 - 22:33

When I was an apprentice, I was taught to treat servers as pets. I would treat them with care. I would make sure that they were healthy and well fed. If one of them got sick, finding the cure was of the utmost priority. I even gave them names. One was Garfield, and the other was Gandalf. Most companies I worked for had a theme for naming their servers: mythical creatures, comic book characters, animals and so on. Today, when working with clusters, the approach is different. The cloud changed it all. Pets became cattle. When one of them gets sick, we kill it. We know that there is an almost infinite number of healthy specimens, so curing a sick one is a waste of time. When something goes wrong, destroy it and create a new one. Our applications are built with scaling and fault tolerance in mind, so a temporary loss of a single node is not a problem. This approach goes hand in hand with a change in architecture.

If we want to be able to deploy and scale easily and efficiently, we want our services to be small. Smaller things are easier to reason about. Today, we are moving towards smaller, easier to manage, and shorter-lived services. The old excuse for not defining our architecture around microservices, that they produce too many operational problems, is gone. After all, the more things there are to deploy, the more problems the infrastructure department has trying to configure and monitor everything. With containers, each service is self-sufficient and does not create infrastructure chaos, making microservices an attractive choice for many scenarios.

With microservices packed inside containers and deployed to a cluster, there is a need for a different set of tools. There is the need for cluster orchestration. Hence, we got Mesos, Kubernetes and Docker Swarm (just to name a few). With those tools, the need to manually SSH into servers disappeared. We got an automated way to deploy and scale services that will get rescheduled in case of a failure. If a container stops working, it will be deployed again. If a whole node fails, everything running on it will be moved to a healthy one. And all that is done without human intervention. We design a behavior and let machines take over. We are closer than ever to a widespread use of self-healing systems that do not need us.

While solving some of the old problems, cluster orchestration tools created new ones. Namely, if we don't know in advance where our services will run, how do we configure them?

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recuperating from both hardware and software failures, and centralized logging and monitoring of the cluster.

In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and, even more, tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

Blog Categories: Developer Zone
Categories: Companies

Containers and Immutable Deployments (The DevOps 2.0 Toolkit)

Tue, 08/09/2016 - 21:05

Even though configuration management (CM) alleviated some of the infrastructure problems, it did not make them go away. The problem is still there, only in a smaller measure. Even though infrastructure is now defined as code and automated, infrastructure hell continues to haunt us. Too many, often conflicting, dependencies quickly become a nightmare to manage. As a result, we tend to define standards. You can use only JDK7. The web server must be JBoss. These are the mandatory libraries. And so on, and so forth. The problem with such standards is that they are an innovation killer. They prevent us from trying new things (at least during working hours).

We should also add testing into the mix. How do you test a web application on many browsers? How do you make sure that your commercial framework works on different operating systems and with different infrastructure? The list of testing combinations is infinite. More importantly, how do we make sure that testing environments are exactly the same as production? Do we create a new environment every time a set of tests is run? If we do, how much time does such an action take?

CM tools were not addressing the cause of the problem but trying to tame it. The difficulty lies in the concept of mutable deployments. Every release brings something new and updates the previous version. That, in itself, introduces a high level of unreliability.

The solution to those, and a few other problems, lies in immutable deployments. As a concept, immutability is not something that came into being yesterday. We could create a new VM with each release and move it through the deployment pipeline all the way to production. The problem with VMs, in this context, is that they are heavy on resources and slow to build and instantiate. We want both fast and reliable, and either of those without the other does not cut it in today's market. Those are some of the reasons why Google has been using containers for a long time. Why doesn't everyone use containers? The answer is simple: making containers work is challenging, and that's where Docker enters the game. First, they made containers easy to use. Then they extended them with some of the things that we, today, consider the norm.

With Docker we got an easy way to create and run containers that provide immutable and fast deployments and isolation of processes. We got a lightweight and self-sufficient way to deploy applications and services without having to worry about infrastructure.
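
In a Jenkinsfile, the immutability idea can be sketched roughly as follows, using the Docker Pipeline plugin: every build produces a new, uniquely tagged image that is never modified afterwards. The image name, registry URL and credentials ID are assumptions.

    // Sketch of immutable deployments: each release is a brand new, uniquely
    // tagged image; nothing is patched in place. Names and credentials are placeholders.
    node('docker') {
        checkout scm

        def image = docker.build("example/my-service:${env.BUILD_NUMBER}")

        docker.withRegistry('https://registry.example.com', 'registry-credentials') {
            image.push()            // the immutable, versioned artifact
            image.push('latest')    // convenience tag pointing at the newest build
        }
    }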

However, Docker itself proved not to be enough. Today, we do not run things on servers but inside clusters and we need more than containers to manage such deployments.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recuperating from both hardware and software failures, and centralized logging and monitoring of the cluster.

In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and, even more, tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

This post is part of a new blog series all about the DevOps 2.0 Toolkit. Follow along in the coming weeks. Each post builds upon the last!

The DevOps 2.0 Toolkit
Configuration Management (The DevOps 2.0 Toolkit)
Containers and Immutable Deployments (The DevOps 2.0 Toolkit)
Cluster Orchestration (The DevOps 2.0 Toolkit) 
Service Discovery (The DevOps 2.0 Toolkit) 
Dynamic Proxies (The DevOps 2.0 Toolkit)
Zero-Downtime Deployment (The DevOps 2.0 Toolkit) 
Continuous Integration, Delivery, And Deployment (The DevOps 2.0 Toolkit)

Blog Categories: Developer Zone
Categories: Companies