
CloudBees' Blog - Continuous Integration in the Cloud

Using Multi-branch Pipelines in the Apache Maven Project

Thu, 03/23/2017 - 16:56

This is a post about how using Jenkins and Pipeline has enabled the Apache Maven project to work faster and better.

Most Java developers should have at least some awareness of the Apache Maven project. Maven is used to build a lot of Java projects. In fact, the Jenkins project and most Jenkins plugins are currently built using Maven.

After the release of Maven 3.3.9 in 2015, at least from the outside, the project might have appeared to be stalled. In reality, the project was trying to resolve a key issue with one of its core components: Eclipse Aether. The Eclipse Foundation had decided that the Aether project was no longer active and had started termination procedures.

Behind the scenes, the Maven Project Management Committee was negotiating with the Eclipse Foundation and getting all the IP clearance from committers required in order to move the code to the Maven project. Finally, in the second half of 2016, the code landed as Maven Resolver.

But code does not stay still.

There had been other changes made to Maven since 3.3.9 and the integration tests had not been updated in accordance with the project conventions.

The original goal had been to get a release of Maven itself with Resolver and no other major changes in order to provide a baseline. This goal was no longer possible.

In January 2017, the tough decision was taken.

Reset everything back to 3.3.9 and merge in each feature cleanly, one at a time, ideally with a full clean test run on the main supported platforms: Linux and Windows, Java 7 and 8.

In a corporate environment, you could probably spend money to work your way out of trying to reconstruct a subset of 14 months of development history. The Apache Foundation is built on volunteers. The Maven project committers are all volunteers working on the project in their spare time.

What was needed was a way to let those volunteers work in parallel preparing the various feature branches while ensuring that they get feedback from the CI server so that there is very good confidence of a clean test run before the feature branch is merged to master.

Enter Jenkins Pipeline Multibranch and the Jenkinsfile.

A Jenkinsfile was set up that does the following:

  1. Determines the current revision of the corresponding branch in the integration tests repository (falling back to the master branch if there is no corresponding branch)
  2. Checks out Maven itself and builds it with the baseline Java version (Java 7) and records the unit test results
  3. In parallel on Windows and Linux build agents, with both Java 7 and Java 8, checks out the single revision of the integration tests identified in step 1 and runs those tests against the Maven distribution built in step 2, recording all the results at the end.
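The steps above might be sketched in a scripted Jenkinsfile roughly as follows. This is an illustration, not the Maven project's actual Jenkinsfile: the repository URL, agent labels and tool names are assumptions, and `sh` is used throughout for brevity (Windows agents would use `bat`):

```groovy
// Hypothetical sketch only: the repository URL, agent labels and tool
// names are placeholders, not the Maven project's real configuration.
node('linux') {
    // Step 1: find the integration-test branch matching this core branch,
    // falling back to master if no such branch exists.
    def itRepo = 'https://example.org/maven-integration-testing.git'
    def itBranch = env.BRANCH_NAME
    def found = sh(returnStatus: true,
        script: "git ls-remote --exit-code --heads ${itRepo} ${itBranch}") == 0
    if (!found) {
        itBranch = 'master'
    }

    // Step 2: build Maven itself with the baseline JDK, record the unit
    // test results and stash the distribution for the downstream stages.
    stage('Build') {
        checkout scm
        withEnv(["JAVA_HOME=${tool 'jdk7'}", "PATH+MAVEN=${tool 'maven'}/bin"]) {
            sh 'mvn clean verify'
        }
        junit '**/target/surefire-reports/*.xml'
        stash name: 'dist', includes: 'apache-maven/target/*.zip'
    }

    // Step 3: run the pinned integration-test revision against that build,
    // in parallel across the supported OS / JDK combinations.
    def runs = [:]
    for (os in ['linux', 'windows']) {
        for (java in ['jdk7', 'jdk8']) {
            def label = os   // capture loop variables for the closure
            def jdk = java
            runs["${label}-${jdk}"] = {
                node(label) {
                    git url: itRepo, branch: itBranch
                    unstash 'dist'
                    withEnv(["JAVA_HOME=${tool jdk}"]) {
                        sh 'mvn clean install -Prun-its'  // sh for brevity; bat on Windows
                    }
                    junit '**/target/*-reports/*.xml'
                }
            }
        }
    }
    stage('Integration Tests') {
        parallel runs
    }
}
```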

There are more enhancements planned for the Jenkinsfile (such as moving to the declarative syntax), but with just this we were able to get all the agreed scope merged and cut two release candidates.

The workflow is something like this:

  1. Developer starts working on a change in a local branch
  2. The developer recognizes that some new integration tests are required, so creates a branch with the same name in the integration tests repository.
  3. When the developer is ready to get a full test run, they push the integration tests branch (integration tests have to be pushed first at present) and then push the core branch.
  4. The Apache GitPubSub event notification system sends notification of the commit to all active subscribers.
  5. The Apache Jenkins server is an active subscriber to GitPubSub and routes the push details into the SCM API plugin’s event system.
  6. The Pipeline Multibranch plugin creates a branch project for the new branch and triggers a build
  7. Typically the build is started within 5 seconds of the developer pushing the commit.
  8. As the integration tests run in parallel, the developer can get the build result as soon as possible.
  9. Once the branch is built successfully and merged, the developer deletes the branch.
  10. GitPubSub sends the branch deletion event and Jenkins marks the branch job as disabled (we keep the last 3 deleted branches in case anyone has concerns about the build result)

The general consensus among committers is that the multi-branch project is a major improvement on what we had before. 

Notes
  • While GitPubSub itself is probably limited in scope to being used at the Apache Software Foundation, the subscriber code that routes events from source control into the SCM API plugin’s event system is relatively small and straightforward, and would be easy to adapt if you have a custom Git hosting service, i.e. if you were in the 4% in this totally unscientific poll I ran on Twitter:

    If you use Git at work, please answer this poll. The git server we use is:


    - Stephen Connolly (@connolly_s) March 17, 2017

  • There is currently an issue whereby changes to the integration test repository do not trigger a build. This has not proved to be a critical issue so far, as developers typically change both repositories if they are changing the integration tests.

 

Blog Categories: Jenkins
Categories: Companies

“Workflow” Means Different Things to Different People

Wed, 03/22/2017 - 21:38

Wikipedia defines the term workflow as “an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes” - processes that make things or just generally get work done. Manufacturers can thank workflows for revolutionizing the production of everything from cars to chocolate bars. Management wonks have built careers on applying workflow improvement theories like Lean and TQM to their business processes.

What does workflow mean to the people who create software? Years ago, probably not much. While this is a field where there’s plenty of complicated work to move along a conceptual assembly line, the actual process of building software historically has included so many zigs and zags that the prototypical pathway from A to Z was less of a straight line and more of a sideways fever chart.

Today, workflow, as a concept, is gaining traction in software circles, with the universal push to increase businesses’ speed, agility and focus on the customer. It’s emerging as a key component in an advanced discipline called continuous delivery that enables organizations to conduct frequent, small updates to apps so companies can respond to changing business needs.

So, how does workflow actually work in continuous delivery environments? How do companies make it happen? What kinds of pains have they experienced that have pushed them to adopt workflow techniques? And what kinds of benefits are they getting?

To answer these questions, it makes sense to look at how software moves through a continuous delivery pipeline. It goes through a series of stages to ensure that it’s being built, tested and deployed properly. While organizations set up their pipelines according to their own individual needs, a typical pipeline might involve a string of performance tests, Selenium tests for multiple browsers, Sonar analysis, user acceptance tests and deployments to staging and production. To tie the process together, an organization would probably use a set of orchestration tools such as the ones available in Jenkins.

Assessing your processes

Some software processes are simpler than others. If the series of steps in a pipeline is simple and predictable enough, it can be relatively easy to define a pipeline that repeats flawlessly – like a factory running at full capacity.

But this is rare, especially in large organizations. Most software delivery environments are much more complicated, requiring steps that need to be defined, executed, revised, run in parallel, shelved, restarted, saved, fixed, tested, retested and reworked countless times.

Continuous delivery itself smooths out these uneven processes to a great extent, but it doesn’t eliminate complexity all by itself. Even in the most well-defined pipelines, steps are built in to sometimes stop, veer left or double back over some of the same ground. Things can change – abruptly, sometimes painfully – and pipelines need to account for that.

The more complicated a pipeline gets, the more time and cost get piled onto a job. The solution: automate the pipeline. Create a workflow that moves the build from stage to stage, automatically, based on the successful completion of a process – accounting for any and all tricky hand-offs embedded within the pipeline design.

Again, for simple pipelines, this may not be a hard task. But, for complicated pipelines, there are a lot of issues to plan for. Here are a few:

  • Multiple stages – In large organizations, you may have a long list of stages to accommodate, with some of them occurring in different locations, involving different teams.
  • Forks and loops – Pipelines aren’t always linear. Sometimes, you’ll want to build in a re-test or a re-work, assuming some flaws will creep in at a certain stage.
  • Outages – They happen. If you have a long pipeline, you want to have a workflow engine ensure that jobs get saved in the event of an outage.
  • Human interaction – For some steps, you want a human to check the build. Workflows should accommodate the planned – and unplanned – intervention of human hands.
  • Errors – They also happen. When errors crop up, you want an automated process to let you restart where you left off.
  • Reusable builds – In the case of transient errors, the automation engine should allow builds to be used and re-used to ensure that processes move forward.

In the past, software teams have automated parts of the pipeline process using a variety of tools and plugins. They have combined the resources in different ways, sometimes varying from job to job. Pipelines would get defined, and builds would move from stage to stage in a chain of jobs — sometimes automatically, sometimes with human guidance, with varying degrees of success.

As the pipeline automation concept has advanced, new tools are emerging that program in many of the variables that have thrown wrenches into more complex pipelines over the years. Some of the tools are delivered by vendors with big stakes in the continuous delivery process – known names like Chef, Puppet, Serena and Pivotal. Other popular continuous delivery tools have their roots in open source, such as Jenkins.

While we are mentioning Jenkins, the community recently introduced functionality specifically to help automate workflows. Jenkins Pipeline (formerly known as Workflow) gives a software team the ability to automate the whole application lifecycle – simple and complex workflows, automation processes and manual steps. Teams can now orchestrate the entire software delivery process with Jenkins, automatically moving code from stage to stage and measuring the performance of an activity at any stage of the process.
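As an illustration only (the stage names, shell commands and browser suites below are hypothetical, not any particular project's setup), a Jenkinsfile can encode the kinds of pipeline concerns listed above: parallel test forks, automatic retry of transient errors and a manual approval gate:

```groovy
// Hypothetical pipeline; stage names and shell commands are placeholders.
node {
    stage('Build') {
        checkout scm
        sh './build.sh'
    }
    stage('Test') {
        // Forks: run the browser test suites in parallel.
        parallel firefox: {
            sh './run-selenium.sh firefox'
        }, chrome: {
            sh './run-selenium.sh chrome'
        }
    }
    stage('Deploy to Staging') {
        // Errors happen: retry a transient deployment failure a few times.
        retry(3) {
            sh './deploy.sh staging'
        }
    }
    stage('Deploy to Production') {
        // Human interaction: pause until someone approves the release.
        input message: 'Deploy this build to production?'
        sh './deploy.sh production'
    }
}
```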

Conclusion
Over the last 10 years continuous integration brought tangible improvements to the software delivery lifecycle – improvements that enabled the adoption of agile delivery practices. The industry continues to evolve. Continuous delivery has given teams the ability to extend beyond integration to a fully formed, tightly wound delivery process drawing on tools and technologies that work together in concert.

Pipeline brings continuous delivery forward another step, helping teams link together complex pipelines and automate tasks every step of the way. For those who care about software, workflow means business.

This blog entry was originally posted on Network World.

 

 

Blog Categories: Jenkins
Categories: Companies

Prerequisites for a Successful Enterprise Continuous Delivery Implementation

Thu, 03/16/2017 - 17:27

Continuous delivery as a methodology and tool to meet the ever-increasing demand to deliver software at the speed of ideas is quickly gaining the attention of businesses today. Continuous delivery, with its emphasis on keeping software in a release-ready state at all times, is a natural evolution from continuous integration and agile software development practices. However, the cultural and operational challenges to achieving continuous delivery are much greater. For most organizations, continuous delivery requires adaptation and extension of existing software release processes. The roles, relationships and responsibilities of people across the organization can also be impacted. The tools used to deliver, update and maintain software must support automation and collaboration properly, in order to minimize delays and provide tight feedback cycles across the business.

Organizations looking to transition to continuous delivery should consider the following seven prerequisites – these are practical steps that will allow them to successfully execute the cultural and operational changes within the regulatory and business constraints they face.

1. Development, quality assurance and operations teams must have shared goals; and communicate

While continuous integration limits its scope to the development team, continuous delivery embraces the testing phases of the quality assurance (QA) team and the deployments to staging and production environments that are managed by the production operations team. This is a major transformation in software development; to succeed in turning a continuous integration platform into a continuous delivery platform, it is critical to integrate the QA and operations teams into its governance, as well as involving the development team. Collaboration and communication are vital components of successful software development today, and in a continuous delivery environment they have to take centre stage.

2. Continuous integration must be working prior to moving to continuous delivery

Continuous delivery is an extension of continuous integration. The prerequisite to continuous delivery is to have continuous integration in place and working during the project, including source control management, automated builds and unit tests, as well as continuous builds of the software.

3. Automate and version everything

Continuous delivery involves the continuous repetition of many tasks such as building applications and packages, deploying applications and configurations, resetting environments and databases. All these tasks in continuous delivery should be automated with tools and scripts, and kept under version control so that everything can be audited and reproduced.

4. Sharing tools and procedures between teams is critical

Continuous delivery aims to validate the deployment procedures and automation used in the production environment. To do this successfully, these procedures and automations must be used as early as possible, so that they are extensively tested by the time they are used to deploy software to production. In most cases, the same tools can be used in all environments, e.g. integration, staging and production.

The automation scripts should be managed in shared source code repositories so that each team – development, QA and operations – can enhance the tools and procedures. Mechanisms like pull requests can help the governance of these shared tools and scripts.

5. The application must be production-friendly to make deployments non-events

Applications should simplify their deployment and rollback procedures so that deployments in production become non-events. A major step to achieve this is to reduce the number of components and configuration parameters deployed. Ease of rollback is also important when deploying new versions: the ability to quickly roll back in case of problems. Feature toggles help to de-couple the deployment of binaries from feature activation - a rollback can then simply be the deactivation of a feature, thanks to a toggle.
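As a minimal sketch of the idea (the class name, flag names and configuration source here are invented for illustration), a feature toggle is just a configuration-driven guard around the new code path, so deactivating the feature takes the place of a binary rollback:

```groovy
// Minimal feature-toggle sketch; the names are invented for illustration.
// Flags come from configuration, so "rolling back" a feature means
// flipping a flag, not redeploying binaries.
class FeatureToggles {
    private final Properties flags

    FeatureToggles(Properties flags) {
        this.flags = flags
    }

    boolean isEnabled(String feature) {
        Boolean.parseBoolean(flags.getProperty(feature, 'false'))
    }
}

def config = new Properties()
config.setProperty('new-checkout-flow', 'true')  // would be loaded from the environment
def toggles = new FeatureToggles(config)

if (toggles.isEnabled('new-checkout-flow')) {
    // new, not-yet-validated behaviour, hidden until the toggle is on
    println 'using new checkout flow'
} else {
    // existing, known-good behaviour
    println 'using existing checkout flow'
}
```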

Special attention should be paid to any changes of database schemas, as this can make deployments and rollbacks much more complex. The schema-less design pattern of NoSQL databases brings a lot of flexibility, moving the responsibility of the schema from the database to the code. This concept can also be applied to relational databases.

6. The infrastructure must be project-friendly; it will empower people and teams

Infrastructures should provide all the tooling (GUIs, APIs and SDKs) and documentation required to empower the development and QA teams and make them autonomous in their work. These tasks include:

  • Deploying the application version of their choice in an environment
  • Managing configuration parameters (view, modify, export, import)
  • Managing databases (creating snapshots of data, restoring a database snapshot)
  • Viewing, searching and setting notification alerts on application logs

Public cloud platforms, mainly Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), are examples of project-friendly platforms.

7. Application versions must be ready to be shipped into production

One of the most important goals of continuous delivery is to allow the product owner to decide to deploy into production any version of the application that successfully goes through the continuous delivery pipeline; not only the version delivered at the end of an iteration with a “beautiful” version number.

Reaching this target requires many changes in the way applications are designed:

  • Features that are not yet validated by the QA team should be hidden from end users. Feature toggles and feature branches are two key ways to implement this.
  • Build tools should evolve from the concept of semantic versions separated by intermediate unidentified snapshot versions, to a continuous stream of non-semantic versions. Subversion repositories help provide ordered version numbers thanks to a revision number. Git, the free, open-source distributed version control system, is more complex to use for this, due to its unordered commit hashes; special tooling may be useful to make this version identifier more “human readable.”
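As one example of such tooling (a sketch, assuming a Jenkins Pipeline build; the version format itself is illustrative), a build can derive an ordered, human-readable identifier from the commit count plus the abbreviated hash:

```groovy
// Illustrative sketch: derive an ordered, human-readable version from Git.
// 'git rev-list --count HEAD' gives a monotonically increasing number on a
// given branch; the short hash pins the exact commit.
node {
    checkout scm
    def count = sh(returnStdout: true, script: 'git rev-list --count HEAD').trim()
    def hash  = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    def version = "1.0.${count}-${hash}"   // e.g. a form like 1.0.1234-ab12cd3
    echo "Building version ${version}"
}
```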

The crux is that continuous delivery is not just about a set of tools; it is also about the people and organizational culture. Technology, people and process need to be aligned to make continuous delivery successful, and a collaborative approach is fundamental to its success. Implementing these best practices can allow organizations to reap the rewards of a more fluid, automated approach to software development – and one that provides business agility too.

Cyrille Le Clerc
Director of Product Management
CloudBees

Follow Cyrille on Twitter

This blog entry was originally posted on Beta News.

 

Blog Categories: Developer Zone, Cloud Platform
Categories: Companies

Meet the Bees: Steven Christou

Thu, 03/09/2017 - 22:33

In every Meet the Bees blog post, you’ll learn more about a different CloudBees Bee. Let’s buzz on over to California and meet Steven Christou.​

Who are you? What is your role at CloudBees?

My name is Steven Christou, and I am currently in Tech Enablement at CloudBees.

My primary role involves engaging with customers on support-related questions and providing more efficient tooling for the Support team. I do a lot of coding for our backend infrastructure, as well as making support issues easier for the team to diagnose.

What makes CloudBees different from other companies?

The people. Seriously. I have never worked with a set of engineers who are so phenomenally gifted in coding. I have learned far more working side by side with these engineers than in any other environment previously. There’s always something new to learn and they’re always willing to help me learn more. I have also never worked with a more adventurous group of engineers, always striving to learn something new and taking the extra steps to make themselves more efficient.

What are some of the most common mistakes to avoid while using Jenkins?

One of the most common mistakes I find when engaging in support concerns upgrading. There are a few tips I have for upgrading Jenkins. Firstly, use a package manager. Upgrading with a package manager (like apt or yum) makes it far easier to do upgrades and reduces the complexity involved with moving everything to custom locations. On this note, I would also recommend not upgrading by just replacing the war, as the init scripts will not be upgraded along with it.

Another common mistake I find is that customers will upgrade and, if something breaks, immediately downgrade. This is ill-advised and I strongly recommend against it. Jenkins will try its hardest to maintain backward compatibility with newer versions of the plugins; however, there is no guarantee that upgrading and then downgrading will not cause significantly more issues. I always recommend that customers use a test environment or a clone of their production instance to do the upgrade. Trigger a few jobs and confirm nothing causes issues.

I would also like to recommend my talk Help! My Jenkins is Down! which talks more in depth about some more common issues encountered when managing a Jenkins instance.

Do you have any advice for someone starting a career in the CI/CD/DevOps/Jenkins space?

Do not be afraid to ask questions. I have been in the community for a while now and I will say that everyone I have interacted with has been extremely welcoming. I am always on the Jenkins IRC channel (#jenkins on freenode) as schristou88 and I am always willing to try my best to help out. There are plenty of resources on the internet that provide best practices and advice in the CI/CD space. I would recommend starting out by learning the most important tool, Jenkins, and working out from there. Jenkins is one of the core tools for a DevOps engineer and has over 1,000 plugins to fit almost every requirement.

What has been the best thing you have worked on since joining CloudBees?

That’s a secret :)

If you could eat only one meal for the rest of your life, what would it be?

Kohsuke Kawaguchi introduced me to Japanese Curry, and I have not found anything more amazing than that!

Vanilla or chocolate or some other flavor, what’s your favorite ice cream flavor and brand?

I like vanilla custard. Most that I get from ice cream shops are amazing!

Blog Categories: Jenkins
Categories: Companies

Now on DevOps Radio: Poppin’ Fresh DevOps, Featuring General Mills DevOps Engineer, Sam Oyen

Tue, 02/28/2017 - 21:55

In the latest episode of DevOps Radio, Sam Oyen, DevOps engineer at General Mills, sits down with host Andre Pino to discuss how the company behind well-known icons such as the Pillsbury Doughboy and Betty Crocker is using DevOps. Sam talks about how she fell into DevOps, why she loves it, what she enjoys most about Jenkins World and concludes with some advice for women in the IT industry.

At General Mills, Sam is part of the team that manages all of the .NET, Android and iOS applications for the entire organization. Sam’s team works on websites and related applications for brands like Pillsbury and Betty Crocker. The team, more than 130 developers worldwide, supports thousands of apps - from external apps to internal business apps and internal websites used for tracking data. Sam explains that each application requires developers to tailor the platform to meet specific needs. Using the Templates feature from CloudBees, the team is able to use one template for about 95% of their Jenkins jobs.

Sam was drawn to the DevOps field because of her love of problem solving and collaboration. It is these two concepts that she felt were exemplified through the “Ask the Experts” booth and white board stations at Jenkins World.

While Sam doesn’t feel there’s a big difference for men and women in DevOps, she does say it’s important for women to have allies. The biggest thing for both genders to embrace is that it’s okay – even good – to fail early and often. At first that seems counterintuitive, but the ability to fail fast is one of the value drivers for business as a result of continuous delivery processes and a DevOps culture.

Looking to upgrade your morning routine with something besides biscuits or toaster strudel? Check out the latest episode of DevOps Radio on the CloudBees website or on iTunes. Make sure you never miss an episode by subscribing to DevOps Radio via RSS feed. You can also join the conversation on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post.

 

 

 

Blog Categories: Company News, Jenkins
Categories: Companies

Cluster-wide Copy Artifacts

Mon, 02/27/2017 - 18:26

CloudBees Jenkins Enterprise lets you operate many Client Masters (multiple Jenkins masters) from a central place: CloudBees Jenkins Operations Center.

This is very useful, for example, for spreading the load across teams, while leaving each team free to decide which plugins they want to install, how they want to configure the jobs on their master, and so on.

Use case

When you start using multiple masters, and you are writing a deployment pipeline for example, you may need to reference artifacts coming from a build on another master.

This is now possible with the 2.7 release of CloudBees Jenkins Enterprise. A specific new Pipeline step is provided, and it is also supported on FreeStyle, Maven and Matrix job types.

How do I use it?

It is very straightforward. For a full explanation, please refer to the official documentation. You can use fine-grained options to select the build you need in the upstream job (e.g. the last build, stable or not, some build by its number, etc.).

From a Pipeline script

For example, let’s say I would like to get the www.war file generated by the last completed build (i.e. even if it failed, but excluding currently running builds) of the build-www-app-job job, located in the team-www-folder folder. And I want this to time out after a maximum of 1 hour, 2 minutes and 20 seconds. Here is how I could do it:

node('linux && x86') {
  copyRemoteArtifacts from: 'jenkins://41bd83b2f8fe36fea7d8b1a88f9a70f3/team-www-folder/build-www-app-job',
      includes: '**/target/www.war',
      selector: [$class: 'LastCompletedRemoteBuildSelector'],
      timeout: '1h 2m 20s'
}

In general, for such a complex case, it is strongly recommended to use the Pipeline Snippet Generator to generate the right code.

From a FreeStyle Job

Just look for the new Copy archived artifacts from remote/local jobs step; it presents a UI very similar to the one offered by the Pipeline Snippet Generator.

And there’s more!

This is just a quick overview. To get the full picture, please refer to the official “Cluster-wide copy artifacts” documentation.

 

Blog Categories: Developer Zone
Categories: Companies

Build a Global Continuous Delivery Practice - it is Easy as of Today!

Thu, 02/23/2017 - 17:41

Today Starts a New Era for Companies Who Want to Set Up a Global Continuous Delivery Practice.

In the last few years, CloudBees has witnessed first hand the evolution and adoption of DevOps and continuous delivery (CD) in organizations. 

Originally, most of our discussions were “Jenkins” discussions. Teams within organizations had made the decision to use Jenkins as their de facto tool for continuous integration (CI) and/or continuous delivery (CD). As Jenkins became their unique gateway to production (i.e. anything that lands in production has to travel through a Jenkins pipeline to get there), Jenkins became as critical as production itself: if you can’t push upgrades or fixes into production, you have a big (production) problem! To make those teams successful, we provided a number of extensions on top of Jenkins (such as role-based access control and other features), as well as 24/7 support backed by our worldwide team of Jenkins experts. This is today an extraordinarily successful CloudBees offering that helps hundreds of teams and thousands of users around the globe operate a rock-solid Jenkins cluster.

In the last few years, however, the tone of these discussions has changed. We are now meeting with a lot of enterprises that are looking at building a formal continuous delivery “practice” in their organization. They want to standardize the way continuous delivery happens, across the board. They want to be able to compare the productivity and velocity of all of their teams. For them, gone are the days of team-specific continuous delivery solutions. They have learned a lot from what leading-edge teams have done, they have set up proofs of concept and they are now ready to leverage their critical mass to formalize, at scale, the best practices that fit their business.

What they are looking for is a single, unified continuous delivery solution that gives them visibility into all of their teams and applications. This, in turn, requires a platform that knows how to integrate with legacy, traditional and leading-edge environments, from AIX to Docker on AWS - not one different CD solution per project or technology of the day! If speed and agility matter for individual applications, they certainly matter to the IT organization itself! As such, these organizations can’t afford a one-month lag time every time they onboard a new team. They can’t even afford one day. They want to onboard new teams or new projects in a snap and give them a best-of-breed environment in which to build their delivery pipelines. They also want a platform that’s cost efficient - efficient both in terms of how it manages the underlying infrastructure at scale, and in how much (or, rather, how little) work is involved in managing the platform itself.

Consequently, in order to fulfill that need, CloudBees is today launching CloudBees Jenkins Enterprise, the first and only platform that enables continuous delivery at scale for enterprises, based on the de facto DevOps hub, Jenkins.

CloudBees Jenkins Enterprise is a full-fledged platform that can be deployed anywhere: Linux, VMware, OpenStack, AWS, to name a few. It takes ownership of the provided infrastructure and provides a fully managed continuous delivery environment built on Jenkins. Based on Docker containers, CloudBees Jenkins Enterprise provides a self-service, elastic CD environment that can be centrally managed. It also enables enterprises to set up global policies and best practices that can be enforced among all teams across the organization. Furthermore, the platform automatically handles backup and restore, automatically detects faulty behaviors and properly recovers from those situations. This leads to a continuous delivery platform with a very low cost of maintenance and excellent use of infrastructure, through high-density Jenkins deployments that can readily scale up to thousands of teams, and tens of thousands of projects and users.

If you are interested in learning more, I’d suggest reading the excellent blog post by Brian Dawson, Product Marketing Manager at CloudBees.

Onward,

Sacha

Blog Categories: Company News
Categories: Companies

How to Enable Enterprise DevOps with CD as a Service and Distributed Pipelines

Wed, 02/22/2017 - 06:58

This week, we launched CloudBees Jenkins Enterprise to enable enterprise-wide DevOps through CD as a Service.

Why is this important to you? Simply put, the results are in. Organizations which implement continuous delivery (CD) in support of enterprise-wide DevOps see significant improvement in release frequency, cycle time and mean time to recovery. More importantly, such improvements lead to a more agile, more responsive, more competitive overall business.

To successfully implement continuous delivery and DevOps in a large, mature enterprise, there are specific needs and obstacles which must be addressed. Let’s look at them:

  • Support for heterogeneous tools and practices to enable integration across the organization’s entire technology portfolio.
  • Resiliency and high availability to prevent disruptions in the delivery pipeline of business-critical applications.
  • Enterprise security and compliance capabilities to protect valuable intellectual property and ensure adherence to the organization’s established standards.
  • Ability to unify process across multiple disconnected silos so that teams and stakeholders can deliver software rapidly and repeatedly.
  • Scalability to support on-boarding all of your teams in a stable, reliable environment.

Traditionally, meeting the requirement for scalability has been the biggest challenge. Much of this has to do with the way continuous delivery has been adopted and the nature of the available CD solutions.

CD and DevOps adoption has often begun within individual teams as grassroots efforts. The tools used for such grassroots implementations fall largely into two categories:

  • Lightweight, single-server web applications not architected for large-scale deployments.
  • Public SaaS solutions, which are cloud-based but implemented on the same single-server model as the web applications.

These solutions present issues when growing CD from one team to an entire organization. Common problems are:

  • On a single shared instance, the increasing workload overwhelms the server and the result is downtime, slow builds, compromised data and broken pipelines.
  • As teams stand up their own instances, infrastructure costs increase. The ability to share practices is limited, and you have developers acting as tool admins.
  • Single-server cloud instances address the infrastructure cost and reduce administration overhead, but still suffer from disconnected teams and carry the risk of having critical data and processes off-premise, controlled by a third party.

CloudBees Jenkins Enterprise enables you to scale without instability by implementing the only solution with a Distributed Pipeline Architecture (DPA). To better understand DPA, it helps to look at what happens when traditional solutions scale.

When we set up CD for a single team, things look good. We can deliver a single service through our CD pipeline with speed:

Distributed Pipeline Architecture 1

But as we add teams, instability of our CD server increases. Our speed decreases. We are unable to update business-critical services. Single server, single point of failure.

Distributed Pipeline Architecture 2

The elasticity of the Distributed Pipeline Architecture distributes teams’ CD workloads across multiple isolated servers, providing high levels of scalability. Now multiple teams using multiple pipelines can deliver multiple business-critical services reliably. Scaling with DPA enables speed AND stability.

Distributed Pipeline Architecture 3

Building on the scalability enabled by DPA, CloudBees Jenkins Enterprise supports enterprise-wide DevOps with other best-in-class features:

  • Integration of all of your tools and processes - Leverages the vast Jenkins ecosystem of 1,200+ integrations; the CloudBees Assurance Program curates and verifies the top ones.
  • Reduced infrastructure costs - Dynamically allocates appropriate resources providing a high-density and very efficient use of infrastructure.
  • Secure project isolation – Each team, project or application can have their own execution environment, keeping projects and data fully secured and isolated.
  • Fault tolerant and self-healing – Build services that have stopped are detected and restarted automatically.
  • Business continuity - CloudBees Jenkins Enterprise automatically handles real-time backup of the entire platform and fully automates the recovery process.
  • Centralized management - All management activities can be performed centrally thanks to CloudBees Jenkins Operations Center, providing a very low cost of ownership.

CloudBees Jenkins Enterprise Architecture Graphic

The launch of CloudBees Jenkins Enterprise is important to you because enterprise DevOps, built on the practice of continuous delivery, is how you remain competitive in today’s market. To do this you need the scalability, security, manageability and resiliency provided by CloudBees Jenkins Enterprise and its unique Distributed Pipeline Architecture. Deploy CD as a service in minutes on your existing infrastructure.

Brian Dawson
DevOps Dude and Jenkins Marketing Manager
CloudBees


Blog Categories: Company News
Categories: Companies

Meet the Bees: Arnaud Héritier

Wed, 02/08/2017 - 23:40

In every Meet the Bees blog post, you’ll learn more about a different CloudBees Bee. This time we are in France, visiting Arnaud Héritier.

Who are you? What is your role at CloudBees?

My name is Arnaud Héritier and I’m a Support Delivery Manager in the Customer Engagement team at CloudBees.

I have used various open-source projects and contributed to them since the beginning of my career. My main contributions are to Apache Maven and Jenkins, thus it was obvious for me to join CloudBees when the opportunity came in May 2015.

Previously I had different roles (Developer, IT Architect, IT Consultant, Professional Services, Release Manager, Forge Manager, …) in various types of companies (software vendors, IT consulting, …). I also worked (in)directly for various companies in non-IT sectors like media, banking, insurance, …

In my spare time I’m contributing to Les cast Codeurs, a French podcast about IT and Java ecosystems, and I’m also leading the program team of the conference Devoxx France.

Developer support engineers work directly with our customers to help them in their daily tasks.

This is a really interesting position for various reasons:

  • We are in contact with many different people
    • Jenkins administrators and users on the customer side,
    • Engineers, product management, professional services and many other departments internally at CloudBees,
    • The Jenkins community which is really different compared to many other open-source projects
  • Since Jenkins is used in many contexts we have the opportunity to work on many technical environments and to address many different use-cases from CI to CD.

What does a typical day look like for you?

Our support delivery team is based in three time zones (Europe, US East and Australia) to provide 24/7 support. Each team starts its day with a short meeting to synchronize on open/new cases and to request help where needed. This is a great opportunity to get an overview of what is in progress between two regions and to share our knowledge about the different kinds of cases we are managing.

While one of us reviews all incoming cases, the others work on processing them. We have many different kinds of activities:

  • Reply to usage questions and provide advice
  • Troubleshoot an issue with a customer in a screen sharing session
  • Reproduce an issue and propose a fix to the community or our engineering team
  • Contribute to our knowledge base, the product documentation or to documentation provided by the community
  • And much more!

What do you think the future holds for Jenkins?

A really bright future!! I have been a part of the Jenkins community for a very long time (I think I still had some hair when I talked for the first time with Kohsuke on the mailing list) and Jenkins is the only CI/CD tool with such a vibrant community. All of this is due to its extensibility, which has allowed a really large ecosystem to grow, and to Kohsuke’s kind leadership. Today, automation tools are evolving to cover more modern usages like continuous delivery/deployment, and Jenkins is leading the way. When you remember that three years ago Pipeline didn’t exist, and you see what Jenkins users are achieving with it today, that’s really impressive! When you see how Blue Ocean is now providing a new user experience … that’s just awesome!

What are some of your best tips and tricks for using Jenkins?

  1. KISS!! Keep It Simple, Silly! Just because you have in your hands an ecosystem with more than 1,000 plugins that doesn’t mean you need to use all of them.
  2. Anticipate: Jenkins is the heart of your automation. You cannot rely on a deprecated tool. Keep it up-to-date, set up a test instance and try new features which may bring some value to you.
  3. Scale: CloudBees Jenkins Solutions are making it easier to scale horizontally. Enjoy it! You’ll use your resources more efficiently, you’ll simplify your platform and you’ll make it robust.

Do you have any advice for someone starting a career in the CI/CD/DevOps/Jenkins space?

Collaborate, communicate and automate!

This is the heart of DevOps and while Jenkins will help you to automate various processes to achieve your CI/CD objectives, the most important thing will be: you!

Nowadays you can find many resources to discover and learn about this: DevOps Express, DevOps Radio, …

What are some of the most common mistakes to avoid while using Jenkins?

To believe that Jenkins will do auto-magically everything for you. Jenkins is a framework, providing many services, but you are responsible for making it useful!

What is your favorite form of social media and why?

Twitter without a doubt. You have the opportunity to talk with so many different people. Some known, some unknown, but you can exchange on many subjects, technical or not, serious or not, … I really love this.

But let’s be honest: nothing replaces meeting people IRL, and that’s why I love to attend various conferences, both as an attendee and as a speaker, to share my passion.

Categories: Companies


Now On DevOps Radio: Choice Hotels and the “Inn-side” Scoop on DevOps Adoption

Tue, 02/07/2017 - 22:41

In this episode, Brian Mericle, distinguished engineer at Choice Hotels International, “checks in” with DevOps Radio host, CloudBees CMO and occasional Jenkins butler, Andre Pino to provide the inn-side scoop on what Choice Hotels - the 75+ year-old franchise that includes Comfort Suites, Comfort Inn, Sleep Inn and Cambria brands - is doing in terms of deployment. He’ll also share his secrets for managing big DevOps teams, without reservations.

Brian explains how Choice Hotels’ DevOps adoption was driven by the need to maintain a competitive advantage in terms of its website and services. Choice Hotels provides web-based services to its franchisees and, in this case, needed to update the central reservation system used by the various Choice Hotels properties to book rooms. The goal was to get to market faster and be a driving force in the market, but manual patterns and processes were presenting roadblocks. By implementing DevOps, Choice Hotels was able to accelerate the upgrade of its central reservation system.

Choice Hotels employs a 250+ person development staff. Brian admits that with a team that big, transitioning to a DevOps culture does not happen overnight. Brian believes there is no such thing as over-communication, and that everyone in IT should always know what’s happening. He tries to meet with all his teams in order to ensure transparency. Sounds like five-star hospitality to us!

This “suite” episode is available now on the CloudBees website and on iTunes. You can also join the conversation on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post. Before you check out, make sure you subscribe to DevOps Radio, where you can catch up on past episodes.


Blog Categories: Company News
Categories: Companies


Announcing General Availability of Declarative Pipeline

Wed, 02/01/2017 - 04:24

I am very excited to announce the addition of Declarative Pipeline syntax 1.0 to Jenkins Pipeline. We think this new syntax will enable everyone involved in DevOps, regardless of expertise, to participate in the continuous delivery process. Whether creating, editing or reviewing a pipeline, having a straightforward structure helps to understand and predict the flow of the pipeline and provides a common foundation across all pipelines.

Pipeline as Code was one of the pillars of the Jenkins 2.0 release and an essential part of implementing continuous delivery. Defining all of the stages of an application’s CD pipeline within a “Jenkinsfile” and treating it as part of the application code automatically provides all of the benefits inherent in SCM:

  • Retain history of all changes to Pipeline
  • Rollback to a previous pipeline version
  • Review new changes to the pipeline in code review
  • Test new pipeline steps in branches
  • Audit changes to the pipeline
  • Run the same pipeline on a different Jenkins server

We recommend people begin using it for all their pipeline definitions in Jenkins and the CloudBees Jenkins Platform. The plugin has been available for use and testing since the 0.1 release debuted at Jenkins World in September, and it has already been installed over 5,000 times.

If you haven’t tried Pipeline, or have considered Pipeline in the past, we believe this new syntax is much more approachable, with an easy adoption curve to quickly realize all of the benefits of Pipeline as Code. In addition, the predefined structure of Declarative makes it possible to create and edit pipelines with a graphical user interface (GUI). The Blue Ocean team is actively working on a Pipeline Editor that will be included in an upcoming release.

If you have already begun using Pipeline in Jenkins, we believe that this new alternative syntax can help expand that usage. The original syntax for defining pipelines in Jenkins is a Groovy DSL that allows most of the features of full imperative programming. This syntax is still fully supported and is now referred to as “Scripted Pipeline Syntax” to distinguish it from “Declarative Pipeline Syntax.” Both use the same underlying execution engine in Jenkins and both will generate the same results in Pipeline Stage View or Blue Ocean visualization. All existing pipeline steps, global variables and shared libraries can be used in either. You can now create more cookie-cutter pipelines and extend the power of Pipeline to all users regardless of Groovy expertise.
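To make the structure concrete, here is a minimal illustrative Declarative Jenkinsfile. The stage names, Maven commands and environment variable are hypothetical examples, not taken from this post:

```groovy
// Jenkinsfile - a minimal Declarative Pipeline sketch
pipeline {
    agent any                                // run on any available agent
    environment {
        DEPLOY_ENV = 'staging'               // hypothetical environment variable
    }
    options {
        timeout(time: 30, unit: 'MINUTES')   // abort a hung build
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'    // assumes a Maven project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
    post {
        failure {
            echo "Build failed for ${env.DEPLOY_ENV}"   // conditional action on failure
        }
    }
}
```

The fixed skeleton of `pipeline`, `stages` and `post` sections is what makes the flow predictable and editable by tooling, in contrast to the free-form Groovy of Scripted Pipeline.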

Other key features of Declarative Pipeline include:

  • Syntax Checking  
    • Immediate runtime syntax checking with explicit error messages
    • API endpoint for linting Jenkinsfiles
    • CLI command to lint Jenkinsfiles
  • Docker Pipeline plugin integration
    • Run all stages in a single container
    • Run each stage in a different container
  • Easy configuration
    • Quickly define parameters for your pipeline
    • Quickly define environment variables and credentials for your pipeline
    • Quickly define options (such as timeout, retry, build discarding) for your pipeline
  • Conditional actions
    • Send notifications or take actions depending upon success or failure
    • Skip stages based on branches, environment, or other Boolean expression

Be on the lookout for future blog posts here or on Jenkins.io detailing specific examples of scenarios or features in Declarative Pipeline. Andrew Bayer, one of the primary engineers behind Declarative Pipeline, will be presenting at FOSDEM in Brussels, Belgium this weekend. We have also scheduled an online Jenkins Area Meetup (JAM) later this month to demo the features of Declarative Pipeline and give a sneak peek at the upcoming Blue Ocean Pipeline Editor.

In the meantime, we have updated all Pipeline documentation to incorporate a Getting Started guide, a Guided Tour and a Syntax Reference page with numerous examples to help you get on your way. We have also created a Quick Reference card that can be printed and hung nearby. Simply upgrade to the latest version of the Pipeline plugin in Jenkins to enable all of these great features.


Blog Categories: Jenkins
Categories: Companies


The State of Jenkins - 2016 Community Survey Results

Tue, 01/31/2017 - 22:22

Last fall, prior to Jenkins World, CloudBees conducted a community survey on behalf of the Jenkins project. We were grateful to receive over 1,200 responses  – and thanks to this input, we gained some interesting insights into what Jenkins users are doing.

Based on the results, it’s safe to say that Jenkins is currently viewed as the #1 continuous integration (CI) server and is rapidly becoming the leading continuous delivery (CD) tool. Adoption of Jenkins 2, which introduced CD pipelines, clear visibility of delivery stages and multiple usability enhancements, has skyrocketed to nearly half the active user base. Once again, there was a lot of consistency in many findings from year to year. For example, the number of Jenkins users continues to increase, with 90% of survey respondents considering Jenkins mission-critical.

GET INFOGRAPHIC    |    GET DETAILED SURVEY RESULTS

Here are some of the key findings:

  • The overwhelming majority of respondents (85%) indicated that Jenkins usage had increased. Diving a little deeper, for organizations with more than 50 software projects, almost 30% used Jenkins in 2016 as compared to 16% in 2012.
  • An astounding 46% of respondents were running Jenkins 2, eight months after its release. This matches December 2016 stats from Jenkins.io showing 55% of active installs are running Jenkins 2.
  • Adoption of Jenkins Pipeline for continuous delivery (CD) is accelerating: among respondents who have adopted CD, 54% are using Pipeline.
  • The push to production has stayed about the same as last year: 61% of respondents are deploying changes to production at least once per week.
  • Linux is the platform of choice for builds, favored by 85% of respondents, along with 85% choosing Git as the favored source code repository.
  • Half of respondents are deploying applications directly to the cloud, with Amazon Web Services as the favored platform.

We want to thank everyone for completing the survey - and congrats to Iker Garcia for winning a free pass to Jenkins World 2017 and to Dave Leifer for winning the Amazon gift card.

See you at Jenkins World, August 28-31, in San Francisco, California! Register now for the largest Jenkins event on the planet in 2017 and get the Early Bird discount. The Call for Papers is open until March 5 – so submit a talk and share your Jenkins knowledge with the community.

Blog Categories: Jenkins
Categories: Companies
