
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

CloudBees Raises $23.5m - Because Software is Eating the World

Tue, 01/27/2015 - 14:40
Software and IT used to be one tool (among many others) that helped businesses be more efficient, reduce cost and sometimes improve sales. In the last decade the situation has radically changed. Software is no longer a tool that "helps" businesses. Software has become *the* business. Everything companies do, from interacting with prospects, customers and partners to handling production and creating business differentiation, is either highly influenced or fully defined by software. And this is not some kind of peculiarity that only applies to "online companies." It is true of pretty much any company. Think about the automotive industry as an example. Here is an industry that's well grounded in heavy industrial concerns: factory plants, raw material, assembly lines, international shipping, mass recalls, etc. Yet, what increasingly differentiates a car in the eyes of buyers is defined by software: auto-pilot features and soon-to-come self-driving cars. Software is "eating the world."

This transformation is forcing companies to adapt the way they think about IT processes, how business differentiation gets created and pushed to market. Rather than long 18 to 24-month release cycles where risk is high and value creation sparse, companies are moving towards "Continuous Delivery" (aka CD), where business value is continuously pushed to market, in small iterations and measured to see whether it yields expected results. This not only hugely reduces risk (large projects, time to market, etc.) but makes it possible to react to market changes and competitive moves much faster.

Adopting Continuous Delivery involves many changes within an organization. Ultimately, it implies having a way to automate the process that starts with value creation (i.e. new software) and runs all the way to its actual release, through sophisticated automated builds, tests, integrations, staging, validations, etc. The more automated a pipeline gets, the more code-release-measure iterations are possible in a given timeframe and the more value can be pushed to market. Automation is king.

CloudBees offers the #1 solution on the market for implementing enterprise Continuous Delivery at scale, based on the open source Jenkins CI project. With the rapid adoption of Continuous Delivery within organisations, CloudBees went through spectacular growth in 2014. This being just the tip of the CD iceberg, we decided to give ourselves the means to achieve our objectives. Consequently, today we are announcing a US $23.5m capital expansion, led by Lightspeed Venture Partners, including follow-on investment from Matrix Partners, Verizon Ventures and Blue Cloud Ventures.

This investment will make it possible for us to further SCALE-UP and SCALE-OUT.

SCALE-UP as we further invest in our go-to-market to help more and more companies around the globe successfully adopt Continuous Delivery at scale.

SCALE-OUT as we expand the type of use cases and deployment types our solution supports so we can help an even wider range of companies to benefit from Continuous Delivery.

We will be making a lot of announcements in the next weeks and months, so stay tuned.


Sacha Labourey

Categories: Companies

CloudBees in 2015: WE ARE HIRING (Big Time...)

Wed, 01/21/2015 - 18:41
2014 has been a fantastic year for CloudBees. We made some hard strategic decisions and they were exactly the right ones. As a result, we've enjoyed very big growth in 2014 (more to come on that topic) and we have much more in the pipeline for 2015. So what about 2015?

Our motto for 2015 is "SCALE-UP and SCALE-OUT."

SCALE-UP because we are going to accelerate investment in our growth engine (sales and marketing). This will directly translate into hiring in sales (inside, as well as outside roles), marketing (US and EU) as well as support (US and EU). Note that our inside sales hub is in Richmond, VA.

SCALE-OUT because we want to expand to cover more space, more use cases, more depth with Jenkins and Continuous Delivery. Whoever you are (S, M, L or XL), wherever you are (private/public/hybrid cloud, traditional datacenter, etc.), we will satisfy your needs and be #1. That will directly translate into engineering and product management hires. Quite a few of them, actually! And the good news is that we are pretty flexible location-wise. While we'd rather hire in existing offices (Los Altos (CA), Richmond (VA), Brussels (BE) and Neuchâtel (CH)), we actually have developers in no less than 12 countries today. Those are NOT remote employees: we are, at heart, a distributed company. Distributed is how we operate, how we communicate and how we make decisions. As an example, if you are a top developer, are bored to death where you are now, feel like you can make a difference on the market and make a name for yourself, but unfortunately you are not living in the Bay Area, ping us: we are the teleportation machine you need.

We have a few open positions listed on our Career page, but there are more to come, so you can either stay tuned and refresh your browser or... not wait and just ping us by sending your résumé with a few words about what you would be excited to work on.


Categories: Companies

Best Practices for Setting up Jenkins Auditing and Compliance

Tue, 01/20/2015 - 17:30

Many Jenkins users look for a recommended strategy for keeping an audit trail. This article is intended as a gap filler until more comprehensive compliance capabilities are developed in Jenkins Enterprise (JE) and Jenkins Operations Center (JOC).

There are two open source plugins that enable you to track “WHO did WHAT?” in Jenkins: 
  • Audit Trail Plugin: adds an “Audit Trail” section in your Jenkins main configuration page, where it is possible to define where to save logs on who performed particular operations on Jenkins.

Audit Trail - output file
  • Job Config History Plugin: stores all the changes made to jobs (history), saving the config.xml of each job. For each change, it is possible to see the record of the change, compare the difference between the new and the old version and restore a previous version. It is also possible to keep track of the changes made to the system configuration.

Job Config History - compare differences GUI option
Although the Job Config History plugin looks somewhat more complete, unlike the Audit Trail plugin it does not track any information about job executions and their exit status.
Audit Trail - executions of jobs
For this reason, many users use both the Audit Trail and Job Config History plugins, to track both the changes and the executions of jobs.

Packages Traceability

For artifacts and packages, Jenkins keeps track of them using fingerprints. However, once they go outside Jenkins, there is no way to track them. Lately, Jenkins has become more and more a tool for continuous delivery (CD), and integrating with automation tools is essential. For this reason, CloudBees has partnered with vendors who offer popular tools, like Chef and Puppet, to enhance Jenkins traceability.

Thanks to the Deployment Notification Plugin, applicable to Chef and Puppet, it is possible to keep track of an artifact even when it leaves the Jenkins environment to be deployed on a remote server. The information maintained by Jenkins for each deployed package is the date/time, the hostname of the remote server, the environment and the deployment path.

Puppet Plugin - traceability
Audit when Using the Workflow Plugin

With the new Workflow plugin, it is possible to configure an entire continuous delivery pipeline in a single job with a simple Groovy-based script, rather than using different jobs and several plugins to orchestrate the execution of the “flow.”

While it is possible to insert the Groovy code directly into the script box on the workflow job's configuration page, it is recommended to store the workflow script in a repository and just load it from inside the Groovy box.
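As a minimal sketch of that pattern (the repository URL, file name and function name are all hypothetical), the workflow job's script can check out the repository and load the real flow, with flow.groovy ending in `return this` so its functions can be called:

```groovy
// Hypothetical workflow job script: fetch the real flow from SCM and run it
node {
    // Check out the repository that holds flow.groovy (URL is illustrative)
    git url: 'https://example.com/scm/workflow-scripts.git'
    // Evaluate the script; flow.groovy ends with "return this"
    def flow = load 'flow.groovy'
    // Call a function defined in flow.groovy
    flow.runPipeline()
}
```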

Workflow plugin - loading groovy script from repository
In the example above, the workflow script is stored in a local repository, checked out onto the build machine and loaded, and its functions are accessed directly. This guarantees auditability and version control, since each change made to the script is committed to the SCM, making it possible to keep track of who changed what.

This applies to all Configuration As Code approaches (i.e. DSL plugin, Literate plugin).

Audit when Performing Cluster Operations Using Jenkins Operations Center
With the new (December, v1.6) release of Jenkins Operations Center by CloudBees, it is possible to perform management operations on all client masters in one pass:
  • Restart client masters
  • Backup client masters
  • Run Groovy script on client masters
  • Upgrade Jenkins core on client masters
  • Install new plugins on client masters
  • Enable/disable plugins on client masters
  • Upgrade/downgrade plugin on client masters
Jenkins keeps track of these operations by treating them as normal jobs, giving each operation a specific place on the file system in which to store execution information and logs.

The same information is accessible from the graphical user interface (GUI):

Cluster Operations - logs in GUI
If authentication is enabled, information on “WHO did WHAT and WHERE” is visible here.

Although at the moment there is no single, fully satisfying way to implement an audit trail, the options presented above, combined, should give good coverage of all the activities performed on Jenkins.

Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.

Categories: Companies

#BreakingBuilds Twitter Contest Results

Sat, 01/10/2015 - 01:39
We called out to the Jenkins, CloudBees and continuous delivery (CD) communities for creativity -- and you answered!

At CloudBees, we enjoyed the clever posts that poured in during our Twitter contest to promote everyone’s favorite butler. Participants had to come up with humorous captions to accompany any of several Jenkins Breaking Builds images.

Inside the ’Bees nest, we had our share of fun with Breaking Builds – creating memes that tie together themes from Breaking Bad, the popular TV series, with themes from continuous delivery. The Jenkins butler was impersonating Walter White in his quest for quality "deliveries"; also present was Walter's sidekick Jesse, a little battered and duded up in clean-room gear, making sure their “builds” didn’t break; and Saul the lawyer telling the world “Better Call CloudBees.”

We figured you’d have some good ideas, too, and we were right. Dozens of cool Tweets poured in offering Breaking Bad-inspired twists on everything from coding to CD to workflow to Skyler’s ever-growing pile of plug-ins.

Everybody who participated will get a Breaking Builds t-shirt, a Jenkins pin and a Jenkins sticker, as pictured below:

Before we announce the grand prize winners in each category, let's first look at a couple of worthy runners-up.

First, we have to recognize the creator of Jenkins, @KohsukeKawa, for his post, a jab at Los Pollos Hermanos, Gus Fring's restaurant chain that also served as a front for meth distribution: "With the Jenkins Butler, delicious deliveries are not a front for something else - they're real!"

Kudos to @faza who tapped into Walter White’s quality obsession with “No need to compromise,” highlighting that Walter can use the technology of his choice to produce the highest quality product possible, because undoubtedly the "plugin" he needs already exists:

@robert_sandell for playing off of sidekick Jesse’s classic line with “Yeah, bitch! Workflows!”:

@Sacha_Labourey also chose to focus on the extensive list of plugins for Jenkins when he posted "W/1000+ plugins & growing, Skyler is not going to see the end of it anytime soon..." as a response to Skyler's plea to Jenkins of "How much is enough?"

However, we ultimately picked out five Tweeters in four categories as our overall winners. If these Tweeters choose to, they can legitimately pick up the phone and let somebody know, in their best Heisenberg voice, that “I won.” These CD kingpins will each receive an Amazon gift card. It might not equal the huge pile of money that Skyler and Walt had to launder, but it will help to buy something cool – that isn’t illegal.

For Most Humorous, blending themes and humor, we had a tie. ‏@bendzone appealed to any developer when he Tweeted, “You ever say ‘forget it… we need to push!’? @cloudbees can help you more than Mr. White."

@ikeike443 channeled his developer overlord persona with “I’m the One Who Builds.”

Onto the numbers game: The award for Most Favorites went to ‏@collignont, who Tweeted, "Anywhere, at any time, We Need to Code! Or the business going to Die!"

Recognition for the Most ReTweets went to ‏@FawzyManaa’s “No, I don’t always look angry, I just put on that face to get developers to fix the build quicker.”

(If Jenkins gave us that look, we’d sure pick up the pace!)

Finally, for the Most Original Tweet - which successfully integrated the themes of continuous delivery, Breaking Bad and Jenkins, we salute ‏@weekstweets for the following: “Become addicted to constant, never ending improvement… while respecting the code.”

Jenkins the butler approves. And we’re sure Walt would give it his blessing as well.

Thanks to everybody who chimed in and had some fun with #BreakingBuilds. Of course, we all know if you use Jenkins you’re NOT breaking builds!

May all your builds never be broken and your software deliveries continuously successful...

Categories: Companies

2015 Predictions for Continuous Integration and Continuous Delivery

Mon, 12/29/2014 - 16:59
What will 2015 bring for DevOps teams everywhere? We asked a few experts what they predict will happen for the year ahead. Read on to see what they had to say and then leave your own prediction in the comments.

“CI/CD software is already the main enabler of the industrial revolution happening in software engineering, at all steps of the lifecycle. It will have the same impact on businesses as what MRP and ERP software did for manufacturing and business processes in the 1980s/90s. 2015 will confirm this trend and make CI/CD a strategic topic for C-level management, far beyond IT and DevOps discussions.” – François Déchery, VP of Services

“Continuous Delivery continues to be as foggy as 'Cloud', but becomes as ubiquitous. Continuous Deployment vendors continue to struggle turning operations into coders. Continuous Integration is identified as the core building block to a Continuous Delivery solution.” – James Brown, Director of Global Support at CloudBees

“(Docker) containers will become first-class citizens as continuous delivery's best 'artifact' candidates to go through the validation pipeline. So there is a need for a repository to manage promoting them to the next step, and for dedicated orchestration tools.” – Nicolas de Loof, Engineer at CloudBees

“Continuous Delivery is going to require a fundamental re-alignment of Development and Operations teams and their reporting lines. The successful companies will be those that blur the lines between these hitherto separate reporting lines. There will be a lot of 'talk' by companies on wanting Continuous Deployment, but only those that change their organisational structure will be successful.” – Nigel Harniman, Senior Solution Architect at CloudBees

“With CI workflow & container technology (Docker) we will now move to distributed integration testing for complex apps. An example will be 'chaining' tests together to streamline long-running tests or loosely coupled services. The other big change I see with CI workflow features like 'Pausable Builds/Deploy' is bringing the enterprise business leads into this workflow process to streamline product code deployment outside of the 'IT shop' and allow product leads to decide the best way to deliver value to companies' customers.” – Mario Cruz, Co-Founder and CTO of Choose Digital

“Continuous Delivery, Cloud and DevOps (2CD) are three waves that will fundamentally reset the way IT produces business value. Half a dozen start-ups will be at the center of those waves and materially disrupt the IT landscape. They'll be clearly identified by the end of 2015 and doing at least $100m in revenues by the end of 2017. All cloud vendors will have put in place a comprehensive Continuous Delivery offering, through either build, partner or acquire. Historical enterprise infrastructure software vendors will have, for the most part, fought their internal conflicts between 'legacy' software and next-gen public cloud services, and will roll out full-scale, truly committed, hybrid offerings. The lightweight container war that got initiated at the end of 2014 will only grow and expand throughout 2015, with new competitors and 'open source code wars' to be expected. As with any commoditization layer, we should expect at most two vendors to emerge.” – Sacha Labourey, CEO of CloudBees

“I think Randy Sparkels sums it up succinctly in the CloudPrognosticator journal (Issue #12 - Dec 2014): Cloud-based continuous development, integration and delivery will finally allow the agile melding of requirements, model, code, testing and delivery to be fused cost-efficiently into the business workflow - optimizing capital and true benefits realization.” – Ben Walding, Software Engineer at CloudBees

What are your predictions for 2015?

Categories: Companies

Webinar Q&A: Continuous Delivery and Pipeline Traceability with Jenkins and Chef

Tue, 12/23/2014 - 17:15
Thank you to everyone who joined us on our webinar, the recording is now available.

Because of technical difficulties while recording this webinar, here is a second recording covering the missing information.

And the slides are here.

Below are the questions we received during the webinar Q&A:

Q: Has Chef-client output come out on console or have specific things been captured in this traceability case?
A: I just wanted to show the generated report JSON at the very end of the Chef run. By default, the output is usually redirected to /var/log/chef/client.log, but if you run Chef-client manually you’ll see the output on the stdout of course.

Q: On one slide, it was mentioned to use Environments to version recipes. This is a best practice question then, does that slide then suggest that environments should be of type "appName-Version" and applied to nodes? It points to the cookbooks that are versioned. So that would be different than using Environments of like "test", "dev", etc.
A: For clarity, the recommendation is to use Environments to pin specific cookbook versions to a particular subset of nodes. You can still use environment names like “test”, “dev”, etc. For example, I have “mycookbook” version 1.2.3. I want to roll out “mycookbook” version 1.3.0. Set your CI job to first update “dev” with ‘cookbook “mycookbook”, “= 1.3.0”’, while “test” has ‘cookbook “mycookbook”, “= 1.2.3”’. When you are ready to promote this change from dev to test, the CI job that promotes this to test should set the definition for the “test” environment to ‘cookbook “mycookbook”, “= 1.3.0”’.

Q: What licensing restrictions are there on using open-source Chef in the enterprise? Is it just the enterprise features?
A: You can use OSS Chef for free, no restrictions. In fact, you can also demo the Enterprise paid features from OSS. You can use all the Enterprise features for under 25 nodes for free. For more than 25 nodes, you have to license the Enterprise features.

Q: Do we have a cookbook for the Weblogic server ?
A: You can see all the cookbooks shared on the Chef Supermarket, and you can also find many others not on Supermarket via GitHub. There is a weblogic cookbook currently on the Chef Supermarket site.

Q: Would a simple app code only change result in a change to a cookbook (tweak version of app to deploy?) and also trigger app unit / acceptance tests during the Test Kitchen execution?
A: You don't *have* to structure change that way, but I'd recommend it. I find it's easier for auditing to discover what happened, where and when. You could skip those tests, but I wouldn't recommend it unless you're in some emergency breakfix scenario.

Q: Can you recommend a Jenkins plugin for pipeline?
A: For a build pipeline plugin, I'd recommend Build Flow. You could start with something simpler like the Build Pipeline plugin, but that doesn't allow for concurrency. You might not need concurrency. But you might. :-) So I'd recommend starting there.

Q: How do you decide what version to automatically increment in the Jenkins Build Job?

A: Your cookbooks are tagged with versions in them: X.Y.Z (major.minor.patch). If you submit any new change without rolling X or Y, the pipeline just increments Z based on the last known good version. If you roll X or Y, Z resets to zero and the CI job manages versions from there. This makes Z in whatever source you submit effectively useless. The pipeline owns Z, you own X and Y.
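The rule described above can be sketched in plain Groovy (the function name and inputs are illustrative, not part of any shipped tooling):

```groovy
// Sketch of the versioning rule: the pipeline owns the patch number (Z),
// authors own major (X) and minor (Y).
def nextVersion(String lastGood, String submitted) {
    def (lgX, lgY, lgZ) = lastGood.tokenize('.').collect { it as int }
    def (sX, sY) = submitted.tokenize('.').collect { it as int }
    if (sX != lgX || sY != lgY) {
        // Major or minor rolled: patch resets to zero
        return "${sX}.${sY}.0"
    }
    // Otherwise increment patch based on the last known good version
    return "${lgX}.${lgY}.${lgZ + 1}"
}

assert nextVersion('1.2.3', '1.2.0') == '1.2.4'  // submitted Z is ignored
assert nextVersion('1.2.3', '1.3.0') == '1.3.0'  // minor rolled, Z reset
```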
Q: Do you have example cookbooks showcasing this setup available for review somewhere, like a GitHub repo?
A: Sure, you can grab the scripts, cookbooks and configs used in this webinar from my GitHub repository:

Q: Can you suggest some tools/utilities for testing Chef code? If I am not using Vagrant, can I use Test Kitchen? It would be great if you could suggest some utilities that can be used to test Chef code without using Vagrant.

A: Is there a particular aversion to using Vagrant? You could run tests by hand, I suppose. Recommended practice is to use Vagrant via Test Kitchen; it makes your life much easier. You could also try something like minitest and enable the chef-minitest report handler. If there’s a reason you can’t use Vagrant, let’s talk about that and figure out a workable approach.
Q: Does Vagrant simply spin up a VM with Chef installed in it?
A: Test Kitchen effectively manages your Vagrantfiles and sets them up to automatically bootstrap Chef and grab your cookbook code out of your PWD. You can also use the kitchen config file to pass configs on to Vagrant directly if you want additional functionality. Check out the bento project for more details on what happens to your Vagrant box --
Q: Can you put together a list of all technologies in this demo (Chef, Jenkins, Ruby, Vagrant, etc.)?
A: I think the best is to check out the source code from my GitHub repository. It has a README which contains the answers for your question. My GitHub repository can be found here:

Q: How does the Jenkins plugin obtain information about the status of the Chef deployment?
A: Chef generates a report at the end of Chef-client run and the Chef-handler-Jenkins gem - which is included in the cookbooks - selects only the file related changes and sends a POST request to a specific Jenkins URL. The Chef Tracking Plugin handles that POST data. To be more precise: the Chef Tracking Plugin exposes an API endpoint: http://<JENKINS_URL>/chef/report which is used by Chef to send the reports.

Q: Just saw that Chef mentioned it had deployed a war as a result of the Jenkins job. Is it possible instead to deploy other types of artifacts? E.g. install an .exe in Windows, or apply RPM update packages in SUSE/RHEL?
A: Sure, you can deploy any type of files. There is absolutely no restriction.

Q: Is there a way to trigger deployments from within Jenkins?
A: Sure, you can. You can configure Jenkins to start a build when a commit has been pushed to the repository, or job A can trigger job B if the build was successful. There are also a lot of plugins that make this easier. I’d suggest this one:

Q: IS there any weblogic cookbook?
A: Answered above.

Q: Does Chef use something like a snapshot to revert any manual changes made to nodes or does it only apply settings outlined in a recipe?
A: Snapshots are not used or recommended. Everything in Chef is explicit: Chef only manages settings outlined in your recipe. Chef is not a magic pony and it does not somehow automagically understand everything that has changed on your system.

Q: We speak about serverspec (which tests the server) but is chefspec also needed? To test the cookbook-recipe code?.. Or is Serverspec good enough of a test to move things into production?
A: There are a number of test frameworks to use and I’m a big fan of chefspec. The proposed workflow was an example of the types of functions we may want. I would recommend both testing your code with chefspec and testing your resulting infrastructure with serverspec at different points in your development lifecycle.

Q: What is the best method of using Chef master/master or master/slave setup?
A: High Availability in Chef is accomplished as active/passive. More information on running Chef in HA mode can be found here --

Q: How can I perform hotfix configs using Chef?
A: I think the question is how to deploy a hotfix in this type of pipeline. In the proposed workflow, the most obvious way to deploy a hotfix is to merge it straight into master (bypassing the validation/code review cycles). Merge the code and let your pipeline promote that programmatically.

Q: Can we spawn VMs in the cloud instead of using Vagrant?
A: Since version 1.1, Vagrant is no longer tied to VirtualBox and also works with other virtualization providers such as AWS EC2. Check for specific Vagrant plugins, especially the vagrant-aws plugin. Vagrant is only a wrapper around VirtualBox, KVM, AWS EC2, DigitalOcean and so on.

Q: How do you see the future of Chef given the rise of Docker? Do you see that the adoption of Chef will increase or decrease as the adoption of Docker increases?
A: Containers also require configuration. It will be interesting to see where Docker goes in the future. While creating a container is simple with Docker, running complex infrastructure topologies in production is difficult. Complexity never entirely goes away; as engineers we may just move it to different parts of the stack. Configuration management still has a strong role even in a containerized world. I encourage you to look at the work we’ve done with Chef Container for more ways to use Chef with containers --
Categories: Companies

Webinar Q&A: Analyze This! Jenkins Cluster Operations and Analytics

Mon, 12/22/2014 - 19:02
Thank you to everyone who joined us on our webinar, the recording is now available.

And the slides are here.

Below are the questions we received during the webinar Q&A:

Q: Is the access control able to serve as a middle point between users and a backing AD/LDAP setup? Defining custom groups that just matter to Jenkins, for instance. Or it just centralizes the config?
A: Yes, CloudBees Role Based Access Control allows you to use a group provided by AD/LDAP or to define your groups in Jenkins.

Q: For these ES analytics, what DB strategy do you use actually? I mean NOSQL or conventionally RDBMS?
A: We use Elasticsearch, which is a document-oriented database and search engine.

Q: How well do the Operations Center servers scale? Can they run on multiple instances with a load balancer?
A: Jenkins Operations Center can be clustered behind a load balancer. The load on JOC is limited because it is mostly an orchestrator; JOC can orchestrate dozens of masters and hundreds of slaves.

Q: How do you sync jobs, configs, etc. among Jenkins masters?
A: Jobs and configurations are not synced between masters per se. If you are referring to the HA feature in Jenkins Enterprise, this is done via a shared filesystem between the hot and cold master.

Q: Can the update center help to deploy any resources to the instance's file system that are not part of the Jenkins configuration or plugins? Or is the update limited to the bounds of Jenkins?
A: Custom Update Centers not only serve plugins and Jenkins core files but also serve tool installers. Popular tool installers include Git, JDK, JVM, Maven, etc. In that sense, Update Centers also handle the deployment of resources to slaves.

Q: Do the analytics support a sort of charge-back or throttling model to prevent greedy jobs from hogging too much of the resource pool?
A: Analytics is only a reporting engine. It does not affect the slave scheduling behavior.

Q: Are the metrics you generate limited by the amount of history you retain in your Jenkins instance?
A: Builds are reported in real-time, but you can re-index historical builds using a cluster operation. Builds are retained for 3 years by default in the analytics database, even if they are deleted on the remote Jenkins instance.

Q: Is there an API that will allow us to serve up the Jenkins performance charts on an internal website to our clients?
A: We provide the elasticsearch api which you can access using a Jenkins API key.

Q: Are there alerts in form of notifications on analytics sent to admins?
A: You can configure email alerts to be sent when internal metrics reach a threshold.

Q: We periodically see heap or permgen issues in our builds, but the JVM is the one called up by the Maven process to compile the code, not the master instance itself. Would the analytics view allow us to see the JVM memory for the JVM running the compiles?
A: No, Analytics does not include the JVM memory at this time.

Q: If you only have 2 VMs/servers, would it be best just to have 2 masters, or would it be best to create slaves on the existing hardware as the masters to segregate?
A: It's usually best to run builds on slaves before you begin adding more masters.

Q: Can you export the analytics/metrics to an external graphite/grafana server?
A: The performance metrics can be reported to graphite using DropWizard metrics graphite plugin.

Q: Would this be able to interact with something like the Jenkins Mesos plugin similar to the system eBay has set up? I'd like to use Docker containers for my slaves.
Categories: Companies

Webinar Q&A: Orchestrating the CD Process in Jenkins with Workflow

Mon, 12/15/2014 - 22:49
Thank you to everyone who joined us on our webinar, the recording is now available.
And the slides are here.

Below are the questions we received during the webinar Q&A:

Table Of Contents
  • Workflow
      • General Workflow questions
      • Workflow, SCM and libraries
      • Workflow visualization
    • Workflow and Plugin Ecosystem
  • Webinar & Demo Questions
  • Jenkins Dev questions
  • Generic Jenkins Plugin Ecosystem Questions


General Workflow Questions

Q: Where can I find docs on workflow and on samples for complex builds where multiple plugins/ build steps and post build actions are required?
A: See this webinar and the Workflow tutorial.

Q: Where are the docs on the integration of plugins with Jenkins Workflow?
A: See

Q: Is the workflow functionality only available in the enterprise version?
A: No, the Jenkins Workflow Engine is part of Jenkins Open Source (see install here). Jenkins Enterprise by CloudBees adds additional workflow features such as the Stage View Visualisation or CheckPoints to resume the workflow from an intermediate point.

Q: Do you offer transition services to help adopt the solution?
A: Please contact us; we will be pleased to introduce you to our service partners.

Q: Do the workflows run on slaves, and across multiple/different slaves for each step?
A: Yes, workflows run on slaves and can span multiple slaves using the statement “node(‘my-label’)”.
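For example (labels and shell commands are illustrative):

```groovy
// Each node(...) block acquires an executor on a matching slave,
// so different steps of one flow can run on different machines.
node('linux') {
    sh 'make build'          // runs on a slave with the "linux" label
}
node('windows') {
    bat 'run-tests.bat'      // runs on a slave with the "windows" label
}
```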

Q: *nix only slaves, or Windows/MacOS also?
A: Jenkins workflows can run on any Jenkins slave including *nix, Windows and MacOS.

Q: Does Jenkins have a way to block some processes from executing if the prerequisites have not yet fired?
A: A flow could wait for some conditions, if that is what you are asking. There is also a Request For Enhancement named “Wait-for-condition step”.

Q: We have some scenarios where 9 prerequisites need to happen before 5 other processes can fire off.
A: Parallel step execution could be a solution. Otherwise, there is a Request For Enhancement named “Wait-for-condition step”.

Q: How does one implement control points i.e. 'Gating' in Jenkins?
A: The “input” step for human interaction allows you to do it. You can even apply Role Based Access Control to define who can “move” to the next step.
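As an illustration, here is a hedged sketch of such a gate; the stage name and submitter group are hypothetical, and the submitter parameter assumes an authorization strategy (such as RBAC) is configured:

```groovy
stage 'production'
// Only members of the (hypothetical) 'release-managers' group may approve.
input message: 'Deploy to production?', submitter: 'release-managers'
```

Until approval is given, the build simply waits at the input step; if it is outside a node block, it consumes no executor while waiting.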

Q: Can we trigger a workflow build using Gerrit events?
A: Job triggers are generally supported for Workflow jobs; the Gerrit trigger currently has a bug.

Q: Can we restrict also a step to run on a particular slave?
A: Yes, with a statement such as “node(‘my-label’)” or “node(‘my-node-name’)”. You can restrict execution to a particular node or to any node matching a particular label.

Q: Does Jenkins support automatic retries on tasks that fail rather than just failing out right?
A: Yes, there is a retry step (“retry(int maxRetries) { ... }”).
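For example, a minimal sketch of the retry step wrapping a flaky shell command (the script name is hypothetical):

```groovy
retry(3) {
    // Re-runs this block up to 3 times before failing the flow.
    sh './run-integration-tests.sh'
}
```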

Q: Is it possible to do conditional workflow steps?
A: Yes, absolutely. Jenkins Workflow supports standard Groovy conditional expressions such as if/else and switch/case.
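As a sketch, an ordinary Groovy conditional choosing a deployment target (the variable and script names are hypothetical):

```groovy
def target = 'staging' // hypothetical variable set earlier in the flow
if (target == 'production') {
    sh './deploy.sh production'
} else {
    sh './deploy.sh staging'
}
```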

Q: Does it work okay with the folders plugin?
A: Yes, folders can be used.

Q: Is there a way to call a job or a workflow from a workflow? ie. can I call an existing (complex) freestyle job that is taking care of build and call other jobs as part of the workflow?
A: Yes there is a 'build' step for this purpose.

Q: Is there a special syntax for creating a Publisher in workflow groovy? For example to chain workflows.
A: No special syntax for publishers. The 'build' step can be used to chain workflows, or you can use the standard reverse build trigger on the downstream flow.
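As a small illustration of chaining via the 'build' step (the downstream job name is hypothetical):

```groovy
// Triggers the existing job 'deploy-to-staging' and waits for it
// to complete before continuing this flow.
build 'deploy-to-staging'
```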

Q: How do you handle flows where I may have 3 builds to Dev with only 1 of them going to QA?
A: stage step can take an optional concurrency: 1 (e.g. “stage ‘qa’, concurrency: 1”).

Q: Can Jenkins support multiple Java and Ant versions? We have a need to compile programs with Java 1.5, 1.6 and 1.7 simultaneously.
A: Yes, using the tool step you can select a particular tool version.
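For example, a sketch of selecting tool versions inside a flow; the installation names ('jdk-1.6', 'ant-1.9') must match tool installations configured in Jenkins and are hypothetical here:

```groovy
node {
    def jdk6 = tool 'jdk-1.6'  // returns the tool's home directory
    def ant  = tool 'ant-1.9'
    // Build with the selected JDK and Ant versions.
    sh "JAVA_HOME=${jdk6} ${ant}/bin/ant build"
}
```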

Q: And support 32 bit and 64 bit compilations simultaneously?
A: Sure, if you run on appropriate slaves in parallel.


parallel( build32bits: {
    node('linux32') { // slave matching label 'linux32'
        // ... 32-bit build steps ...
    }
}, build64bits: {
    node('linux64') { // slave matching label 'linux64'
        // ... 64-bit build steps ...
    }
})

Q: Is it possible to have some human approval on workflow step? Like creating externally accessible web callback inside workflow, and continuing once this callback was called?
A: Yes, with the “input” step as shown in the webinar.

Q: When you have a build waiting for human input, can you specify which users have permission to continue the build?
A: Yes you can specify a list of approvers. This list of approvers is typically defined with the CloudBees RBAC authorisation (part of Jenkins Enterprise by CloudBees).

Q: In a step can we have a dropdown where they select an option, for example, can we take user input to indicate which feature test environment to deploy to?
A: Yes the input step can accept any parameter just like build parameters, including pulldowns.

Q: What about text input or dropdown in human interaction part? Is that there?
A: Yes you can specify any parameters you like for the input step.
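As a hedged sketch (the parameter name and choices are hypothetical, and the exact parameter classes may vary by Jenkins version), an input step offering a dropdown of feature test environments:

```groovy
// With a single parameter, the input step returns the chosen value directly.
def targetEnv = input message: 'Deploy to which environment?',
    parameters: [[$class: 'ChoiceParameterDefinition',
                  name: 'TARGET',
                  choices: 'feature-1\nfeature-2\nfeature-3',
                  description: 'Feature test environment']]
echo "Deploying to ${targetEnv}"
```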

Q: If multiple builds are waiting on the same User Input message (say builds 1, 2 and 3) and the user responds positively to build 3, do builds 1 and 2 continue waiting or do they automatically abort?
A: They would continue waiting, though there are ways that a newer build can abort an earlier build, mainly by using the stage step.

Q: Is this workflow plugin available for the current Jenkins LTS release?
A: Yes, available for 1.580.1, the current LTS.

Q: One of the concerns I had was with troubleshooting efforts. I was curious to how that is handled in terms of documentation or support to resolve issues related to Jenkins?
A: There is a publicly available tutorial. If you are a CloudBees customer we offer support, and other providers may be available as well.

Q: Is there documentation for that right now or does CloudBees support troubleshooting efforts particular to Jenkins, that a developer and/or OPS representative might not be familiar with?
A: CloudBees offers support for any Jenkins operational issues.

Q: Does this plugin have access to Jenkins project model? I mean can it be used as a replacement for Jenkins script console?
A: It does have access to the Jenkins project model, yes, so you could use it for that purpose, though it is not intended to replace (say) Scriptler.

Q: I may have missed this being answered but, is that catchError on the card able to parse the log or does it just look for an exit code?
A: Just checks the exit code. There is a known RFE to capture shell command output and let you inspect its contents in Groovy as you wish.

Q: It appears that some of this feature set is in open-source Jenkins, and some in Cloudbees Enterprise. Is there a clear feature matrix that details these differences?
A: The stage view, and checkpoints, are currently the Enterprise additions. All else is OSS.

Q: Is the DSL extensible and available OSS?
A: All steps are added by plugins, so yes it is certainly extensible (and yes the DSL is OSS).

Q: Is it a full fledged Groovy interpreter, e.g., can I @Grab some modules?
A: @Grab grapes is not yet supported, though it has been proposed. But yes it is a full Groovy interpreter.

Q: Is it possible to install an app to the /usr directory on the Jenkins in the cloud master or slave?
A: There is not currently any special step for app deployment but I expect to see some soon. In the meantime you would use a shell/batch script step.

Q: What mechanism does archive/unarchive use? Do you define your own revision system for it?
A: No this just uses the artifact system already in place in Jenkins.

Q: If I cannot @Grab is there any other possibility to extend the plugin? Can I access APIs of other plugins?
A: Yes you can access the API of other plugins directly from the script (subject to security approval); and you can add other steps from plugins.

Q: How does the workflow plugin interact with multi master systems?
A: There is no current integration with Jenkins Operations Center.

Q: How do you manage security access to trigger jobs or certain steps? (Integration LDAP and so on)
A: Controlling trigger permission to jobs is a general Jenkins feature.

Q: Does Jenkins workflow support parallelization of steps?
A: Yes, using the “parallel” step.

Q: Is there a way to promote jobs or manually trigger a job after job completion? I saw the wait for input, but it looked like the job had to be in a running state for that to work.
A: The preferred way is to wait for some further condition. The build consumes no executor while waiting (if you are outside any node step).

Q: Can we run arbitrary Groovy code similar to groovy build steps from within the workflow?
A: Yes you can run arbitrary Groovy code, though Workflow is not optimized for this.

Q: We use tests that, based on failures, invoke more detailed tests and capture more detailed logs... and could that jump (maybe?) out of the Workflow context?
A: Your script could inspect partial test results and decide what to do next on that basis.

Q: When I trigger a Workflow from a normal "Freestyle project" an error occurs: "Web-Workflow is not buildable."
A: There is a known bug in the Parameterized Trigger plugin in this respect.

Q: If the Jenkins master is rebooted for some reason midway through workflow / job build, does the last good state stay in cache and restart automatically once Jenkins is back online? Or does last job require manually invoking the last build step?
A: The workflow resumes automatically once Jenkins comes back online.

Q: Is it possible to reuse the same workspace between builds, e.g. for incremental builds?
A: Yes the workspace is reused by default, just as for Jenkins freestyle projects. If you run node {...} in your flow, and it grabs a workspace on a slave in build #1, by default build #2 will run in the same directory. However, workspaces are not shared between different workflows / jobs.
Workflow, SCM and Libraries

Q: Would it be possible to pull the workflow configuration script from SCM?
A: Yes, you can store the workflow definition in an SCM and use the ‘load’ step.

Q: Can Jenkins directly access the SCM system for the workflow.groovy script?
A: Sort of. You can either check out and load() from workspace, or you can store script libraries in Jenkins master. Directly loading the whole flow from SCM is a known RFE.
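For example, a sketch of the check-out-and-load approach (the repository URL and script path are placeholders):

```groovy
node {
    // Check out the repository containing the flow definition.
    git url: 'https://example.com/my-project.git' // placeholder URL
    // Evaluate the Groovy script checked out into the workspace.
    load 'jenkins/flow.groovy'
}
```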

Q: How do we reuse portions of scripts between different pipelines?
A: Yes. Groovy lets you extract functions and data structures into libraries that can be loaded into workflows with the ‘load’ step and typically stored in an SCM. There are other options too, including the Templates integration in Jenkins Enterprise.

Q: Are all of the Groovy functions generic, or are they pre-defined functions?
A: Workflows are written in standard Groovy, so you can use standard programming constructs such as functions, classes, variables and control flow (for loops, if/else, ...).

The domain-specific part of the workflow syntax comes from the Jenkins workflow engine (e.g. “parallel”, “node”, ...) and from plugin steps (e.g. ”git”, “tool”, ...).

You can, in addition, write custom workflow libraries as Groovy scripts.

We can expect to see people sharing libraries of workflow scripts.

Q: If I am satisfied with a particular build, can I have optional steps in the workflow to, e.g., add an SCM tag to the respective source, or stage the artifacts to a different (higher level) artifact repository?
A: Yes, you can use simple if-then blocks, etc.

Q: So if I break a workflow script into multiple parts, do I have to use 'load' to compose them into the workflow?
A: Yes, or there is already support for loading Groovy classes by name rather than from the workspace. Other options in this space may be added in the future.

Q: How can I reuse similar functions in many workflows? For example, right now we are using the Build Flow plugin; we have one unit test job and we trigger it from many different workflows with different parameters.
A: You can use the step “load()” to share a common script from each flow; or store Groovy class libraries on Jenkins; or use Templates integration in Jenkins Enterprise.

Q: If we want to automatically start workflow build after someone pushes to a Git repo, can we set up that inside workflow definition?
A: Yes, just use the git step to check out your flow the first time, and configure the SCM trigger in your job.

Workflow Visualization

Q: Any plans for a combined visualization of multiple, related pipelines (for example, the build pipelines for an application's UI WAR and Web service WARs)?
A: CloudBees is working on a “release-centric visualisation”; your idea could fit into it. We don’t have any ETA for the release-centric view.

Q: Is that "workflow view" only available on the main job page? I want to know if we could see multiple applications' workflows in one place, like the Continuous Delivery Pipeline plugin.
A: Currently only on the job main page, though we are considering other options too.

Q: Are there any other visualizations possible, other than the Tomcat-based one? Like build-graph or build-flow?
A: The Jenkins Build Graph View plugin and other pre-existing job flow views do not let you visualize the internals of a workflow execution.

Q: So the nice UI is available only in Jenkins Enterprise, right?
A: Yes the table view of stages is available only in Jenkins Enterprise by CloudBees.

Workflow and Plugin Ecosystem

Q: Can existing build step plugins be accessed from the Groovy script - for example, the Copy Artifact plugin?
A: Some can, though minor changes need to be made to the plugin to support it. In this case, see JENKINS-24887.

Q: How do we figure out the generic step interface for the existing plugins?
A: The plugin's build step must implement SimpleBuildStep.

Q: As far as I saw, it's more for Java development. How well will Workflow do for a Windows development environment with Visual Studio and TFS source control?
A: Windows development is already well supported by Jenkins with various plugins, including a Jenkins TFS plugin and a Jenkins MSBuild plugin. Jenkins Workflow supports execution of Windows batch scripts, just as the freestyle “Windows Batch script” build step does.

Q: Is it possible to integrate Jenkins with MSBuild from Microsoft to build .net applications?
A: The msbuild step probably does not yet support the newer APIs needed for Workflow integration but this is likely possible to add. In the meantime you can use the bat step to run msbuild.exe.
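In the meantime, a sketch of driving MSBuild from a Windows batch step (the slave label and solution name are hypothetical, and this assumes msbuild.exe is on the slave's PATH):

```groovy
node('windows') { // hypothetical slave label
    // Invoke MSBuild via the Windows batch step.
    bat 'msbuild MySolution.sln /p:Configuration=Release'
}
```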

Q: Is it possible to invoke HTTP/S API through the workflow?
A: There is no specific step for this yet, but one could be created. In this demo, we invoke “curl” in an “sh” step.
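For instance, a sketch of calling an HTTP endpoint from a shell step (the URL is a placeholder):

```groovy
node {
    // -f makes curl exit non-zero on HTTP errors, which fails the step.
    sh 'curl -f -X POST https://example.com/api/notify'
}
```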

Q: Does it support the job history plugin?
A: Yes, the Jenkins Job Configuration History plugin applied to a workflow job will track the history of the workflow definition.

Q: Can the workflow perform JIRA JQL workflow changes (i.e. use functions from the JIRA plugin) and update relevant JIRA tickets?
A: The JIRA plugin does not support this yet, but it is probably not hard to add. In the meantime, raw HTTP calls with “curl” may be a solution.

Q: I'm interested in how JIRA is integrated with workflow. Do you have any info?
A: I am not aware of any particular integration yet. This would just be an RFE for the JIRA plugin, either to integrate with the SimpleBuildStep API in 1.580.x, or to add a dedicated Workflow step if that is more pleasant.

Q: I saw a question about Jenkins workflow integration with JIRA, having Jenkins update JIRA tix, etc. Is the opposite possible - can you incorporate JIRA workflows into Jenkins workflow? like abort deployment if ticket has not passed code rev. in JIRA workflow?
A: This would also need to be a Workflow step defined in the JIRA plugin to query the status of a ticket, etc. In the meantime you could access the JIRA remote API using a command-line client, perhaps.

Q: Does Jenkins have a way to start a job via email notification?
A: Not sure if such a plugin exists, but if it does, it should work the same with workflows. Otherwise, you can invoke the standard shell “mail” command on a Linux slave.

Q: 'mvn' is always started using “sh” in the sample scripts - so are the scripts always OS dependent?
A: Certainly the sh step is (you would use bat on Windows). In the future we expect to have a neutral Maven step.

Q: Are there any additional DSLs available/planned?
A: Additional DSLs, or additional steps? There is no plan for any other DSL other than the Groovy one, but the architecture supports the possibility.

Q: Are we able to add JARs to the workflow Groovy scripts classpath without having to restart Jenkins?
A: You may not add precompiled JARs to the classpath currently. There are security implications to this, and also any function called in a JAR could not survive Jenkins restart. There may in the future be a way of using common libraries like this.

Q: But I am the Jenkins user, not root, it does not give me access to copy files to /usr.
A: Well somehow there must be a privileged script (setuid?) to deploy things; out of scope for Jenkins.

Q: How do you publish to Artifactory?
A: A step to publish to Artifactory could be added, or you can simply use any other external command which accomplishes this, such as sh 'mvn deploy'.

Q: Any potential gotchas/problems with Workflow and Ruby/Cucumber...?
A: Not that I know of. What kind of problems do you foresee?

Q: Do you have support for Docker containers? what about LB's ? Let's say I have service deployed to service machines ... with an LB in front of it. Is this nothing more than using shell constructs in the workflow ... where I sh to the LB's cli ? I want to be able to install (which I can do) to new boxes ... test on the new boxes ... again I can do that ... but then I want to put those boxes into a LB pool and take the old one out.
A: Jenkins supports integration with Docker for:
  • Slave provisioning
  • Building and deploying Docker artifacts
For load balancer orchestration, you may have to look at application deployment solutions.
Webinar & Demo Questions

Q: What is deploy staging?
A: In this demo, “staging” refers to the environment that mimics the production environment; it is often called the “pre-production” environment.

Q: Can you provide the Workflow Groovy script, other example and sites for review and learning?
A: See:

Q: Can you add to demo project a deployment step using puppet? How to capture the success/fail status of deployment via puppet?
A: Please look at this presentation of Jenkins Workflow combined with Puppet:

Workflow script:

Jenkins Dev Questions

Q: Is there currently any (or are there plans) for a testing harness so that the Groovy Workflow scripts can be evaluated for correctness?
A: No current plans for a workflow testing harness, beyond what is available for Jenkins generally such as the acceptance testing harness.

Q: Is there some metamodel for Workflow script? That you can walk through it programmatically?
A: There is an API for accessing the graph of steps run during a flow, if that is what you are asking for.

Q: For plugins developers, should one develop a DSL for a plugin?
A: You just need to implement the Step extension point, defined in the workflow-step-api plugin.

Generic Jenkins Plugin Ecosystem Questions

Q: Is support only for Chef / Puppet? Is there support (planned) for SaltStack, Rundeck and similar tooling?
A: There are more than 1,000 open source plugins for Jenkins, including plugins for Chef, Puppet, Rundeck and SaltStack. Please note that artifact tracking is not yet implemented in the Jenkins Rundeck and SaltStack plugins.

Q: Does Jenkins integrate with OpenStack?
A: Yes, the Jenkins JClouds plugin allows you to provision slaves on demand. We are not aware of plugins that ease the scripting of OpenStack or the packaging of OpenStack artifacts from Jenkins (e.g. automatic installation of the OpenStack CLI on Jenkins slaves).

Q: Does Jenkins have built-in database? If not, does Jenkins have DB plugins?
A: There is no built-in database. There may be plugins to work with your existing database.

Q: Can I get the Jenkins workflow to integrate with Heat orchestration on OpenStack?
A: We are not aware of such an integration at the moment.

Steven Harris is senior vice president of products at CloudBees.
Follow Steve on Twitter.

Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse in his Meet the Bees blog post.

Cyrille Le Clerc is an elite architect at CloudBees, with more than 12 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.
Categories: Companies

Continuous Information - Newsletter for the Jenkins Community

Fri, 12/12/2014 - 03:44
Winter 2014 - Volume 6

Jenkins Marches Towards World Domination
As of November 2014, Jenkins can claim nearly 100,000 active installations and more than 1,000 plugins! That’s up 35% from this time last year. The Jenkins community keeps pulling in the awards too. Here are just a few accolades our favorite butler has collected recently:
And according to an InfoQ survey this spring, three times as many people use Jenkins as use the next most popular tool - not an award, but an accolade in its own right. Check out the results [click the Average button to get the Results graph] and you’ll find that 498 out of 686 respondents are using Jenkins now… almost three times as many as the second most popular CI tool.

Feature: Simplifying Continuous Delivery Pipelines - Kohsuke Explains Jenkins’ New Workflow Feature

Jenkins started with a notion of jobs and builds at heart. One script is one job, and as you repeatedly execute jobs, it creates builds as records. As the use cases of Jenkins got more sophisticated, people started combining jobs to orchestrate ever more complex activities.

So we felt the need to develop a single unified solution to create and manage pipelines: the new Workflow plugin. Now it’s much easier to orchestrate activities that span across multiple build slaves, code repositories, etc…
Learn about Workflow  
JENKINS USER CONFERENCE: The Butler on Tour

In 2014, the Jenkins butler undertook his biggest world tour yet, hosting Jenkins User Conferences in four cities. If you didn’t make it, you’re in luck – slides are available from all conferences, and most have videos as well:
Community interest continues to grow each year. More than 450 Jenkins users attended the Boston JUC, 375 came to Berlin, 400 turned out for Israel, and nearly 550 showed up in San Francisco. Another 600 registered for San Francisco via Live Stream. You can check out Kohsuke's photos of the San Francisco JUC here.  

Look for more Jenkins User Conferences worldwide next year. We’ll have a Call for Papers in early spring.

Updates: What's New in Jenkins?

Kohsuke shares the latest and greatest Jenkins developments…
  • Acceptance test harness: this consists of two pieces. One is a reusable test harness that enables rapid development and execution of blackbox acceptance tests of Jenkins and its plugins. The other is a suite of tests used to verify releases, which is built on top of the test harness.
  • Infrastructure overhaul: we're now eating our own dog food. We’ve upgraded how we do continuous delivery on our infrastructure by using…Jenkins. The process now consists of a series of containerized services built and tested via Jenkins, then deployed to servers via a Puppet master.
  • NIO Java Web Start slaves: Jenkins master now manages slaves connected via Java Web Start through NIO, thereby reducing the threads and context switches necessary to service them. This structure allows Jenkins to scale more efficiently.
Jenkins Events: CD Summits - Accelerating Innovation with Continuous Delivery

CloudBees hosted seven free, Jenkins-focused Continuous Delivery Summits in the US and Europe this summer and fall. Experts from Forrester Research, XebiaLabs, Puppet Labs, SOASTA and many other Jenkins-loving organizations shared their expertise on continuous delivery – specifically, continuous delivery powered by Jenkins. Visit this page, choose your favorite city, and explore the slides and videos (where available).

(Ad from CloudBees, our newsletter sponsor)

Holidays 2014: Jenkins Takes Three Major Strides Forward for Enterprises

Is proliferation of your Jenkins jobs slowing you down? Is your complete software pipeline still not as automated as you would like it to be? Do you have a good handle on build metrics and Jenkins monitoring?

Check out the latest and greatest Jenkins capabilities:   
  • Open source Jenkins now features Workflow.  Eliminate job proliferation with native support for complex CD pipelines.
  • The November 2014 Jenkins Enterprise by CloudBees release lets you establish checkpoints and restart jobs closer to the point of failure, as well as visualize your continuous delivery pipeline.
  • The November 2014 Jenkins Operations Center by CloudBees release gives you highly scalable, real-time analysis of Jenkins performance and build metrics without additional coding. The new Cluster Operations feature also greatly simplifies management activities.

  • Get the details by visiting
Feature: Plugin Highlight - DotCI

The DotCI plugin is specifically aimed at environments that work with GitHub, and in that context it drastically simplifies setting up CI jobs for repositories. It also lets you manage configuration via a YAML file stored in a Git repository.
The plugin also comes with strong Docker integration, which lets you build containers with minimal configuration, as well as allowing you to run builds in containers. To top it off, the DotCI plugin uses MongoDB for persistence, allowing Jenkins to house thousands of jobs with ease.
Table of Contents: Jenkins Marches Towards World Domination | Simplifying Continuous Delivery Pipelines - Kohsuke Explains Jenkins’ New Workflow Feature | The Butler on Tour | What's New in Jenkins? | CD Summits: Accelerating Innovation with Continuous Delivery | Plugin Highlight: DotCI

Upcoming Jenkins Webinars

Analyze This! Jenkins Cluster Operations and Analytics
Wednesday, Dec 17 at 1pm ET

Continuous Delivery and Pipeline Traceability with Jenkins and Chef
Thursday, Dec 18 at 1pm ET

CASE STUDY: Jenkins Enterprise by CloudBees

Leading Semiconductor Manufacturer: see how one company adopted Jenkins CI across a multinational development organization to increase the pace of development for Android software.

INFOGRAPHIC: Continuous Delivery

DZone surveyed 500+ IT professionals, revealing insights into DevOps practices and CD adoption. The infographic reveals the three traits that indicate true continuous delivery is underway, what percentage of respondents use Jenkins, how many professionals are actually doing continuous delivery (vs. how many think they are doing continuous delivery), and who actually deploys code.

JENKINS GATHERINGS: Meetups

Have you found the Jenkins Meetup page yet? Keep an eye out for a meetup near you. Or start your own! Contact one of the Jenkins Meetup organizers and they’ll be happy to help.

DZONE REFCARD: Preparing for Continuous Delivery - The Essential Continuous Delivery Prep Cheat Sheet

This Refcard is written to ease the transition to continuous delivery, giving guidance, advice and best practices to development and operations teams looking to move away from traditional release cycles.

CD NEWSLETTER: Introducing the New CD Journal

Stay up on the latest continuous delivery practices. Get the week's best articles sent to you every Friday. Subscribe to the CD Journal.

CONTRIBUTE: Write for this Newsletter

Are you doing something exciting with Jenkins? Send us a short overview and we’ll share it with the community! Just drop an email to continuous-information

ARCHIVES: Continuous Information Archives

Want to read past issues of this newsletter or subscribe another email address? Go here.

Lisa Wells, Managing Editor, Continuous Information; Marketing Bee, CloudBees
Kohsuke Kawaguchi, Technical Editor, Continuous Information; Jenkins & Hudson Project Founder and CTO, CloudBees

You are receiving this email because you opted in at our website or you are a customer, prospect or vendor. You can unsubscribe your email address from this list at any time.

Company Business HQ: CloudBees, Inc.
289 South San Antonio Rd, Suite 200, Los Altos, CA 94022, United States
Phone +1.781.404.5100
Email:
Copyright (C) 2014 CloudBees, Inc. All rights reserved.
Categories: Companies

Jenkins Operations and Continuous Delivery @Scale with CloudBees Jenkins Enterprise

Wed, 12/10/2014 - 02:17
Continuous Delivery @Scale Lately, there has been a tremendous buzz in the Jenkins open source community about the release of the Workflow feature. The Workflow feature enables organizations to build complex, enterprise-ready continuous delivery pipelines. I am particularly excited as native support for pipelines in Jenkins was one of the most common requests I have encountered from enterprise (and small) users. You can read about the OSS Workflow GA announcement here.

I am pleased to announce that CloudBees is delivering additional features built on top of OSS Jenkins that help enterprises use the Workflow features to implement continuous delivery pipelines.

The Workflow Stage View feature helps teams visualise the flow and performance of their pipelines. So, for example, a manager can look at a pipeline and easily drill into the performance of a particular stage, or a developer can look at the pipeline and see how far their commits have traversed it.

The Checkpoint feature enables recovery from both infrastructure and Jenkins failures. In the event of a failure, the pipeline can be restarted from any of the previous successful checkpoints, instead of from the beginning. This is extremely valuable for long-running builds that may take hours or days to run.

Jenkins Workflow is a technological leap to help organizations build out continuous delivery pipelines and I urge you to check it out. 

Jenkins Operations @Scale 
Late last year, we announced Jenkins Operations Center by CloudBees - a game changer in the world of Jenkins. It acts as the operations hub for multiple Jenkins masters in an organization, letting them easily share resources such as slaves and security configuration.

I am happy to announce the release of a new version of Jenkins Operations Center by CloudBees - version 1.6. This release has two significant features: 

Cluster Operations simplifies the management of Jenkins by allowing one operational command to act simultaneously on a group of Jenkins masters, instead of administering individual masters. Cluster Operations includes actions such as starting/restarting client masters, installing and upgrading plugins, and upgrading the Jenkins core.

CloudBees Jenkins Analytics provides operational insight into Jenkins performance, covering Jenkins-specific build and executor performance across a cluster of masters as well as standard JVM performance metrics. It also makes monitoring multiple masters easier by adding a number of new graphs that show build queue performance, and by persisting views across master restarts. The feature includes new visualization graphs built on Kibana and ElasticSearch, delivering the ability to quickly drill down into the performance of individual client masters.

Performance Analytics

Build Analytics
Join our webinar on 10 December 2014 about Workflow, and one on Jenkins Operations @Scale on 17 December, to learn more.

Try Jenkins Enterprise by CloudBees
Try Jenkins Operations Center

More information: 
Product pages
Jenkins Operations Center Documentation 
Jenkins Enterprise by CloudBees Documentation
Talk to sales about Jenkins Enterprise by CloudBees or Jenkins Operations Center.

- Harpreet Singh
vice president, product management
Categories: Companies