
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

Continuous Delivery and Workflow

Fri, 07/25/2014 - 15:18
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Jesse Glick, software developer, CloudBees, about a presentation given by himself and Kohsuke Kawaguchi, as well as a session given by Alex Manly, MidVision. Both sessions are from JUC Boston.

At the Jenkins User Conference in Boston this year, Kohsuke and I gave a session, "Workflow in Jenkins," where for the first time we spoke to a general audience about the project we started to add a new job type to Jenkins that can manage complex and long-running processes. If you have not heard about Workflow yet, take a look at its project page, which gives background and also links to our slides. I was thrilled to see the level of interest and to hear confirmation that we picked the right problem to solve.

A later session by Alex Manly of MidVision (Stairway to Heaven: 10 Best Practices for Enterprise Continuous Delivery with Jenkins) focused on the theory and practice of CD, such as the advantages of pull (or “convergent”) deployment at large scale when using homogeneous servers, as opposed to “pushing” new versions immediately after they are built, and deployment scenarios, especially for WebSphere. Since I am only a spectator when it comes to dealing with industrial-scale deployments like that, while listening to this talk I thought about how Workflow would help smooth out some of the nitty-gritty of getting such practices set up on Jenkins.

One thing Alex emphasized was the importance of evaluating the “cost of non-automation” when setting up CD: you should “take the big wins first,” meaning that steps which are run only once in a blue moon, or are just really hard to get a machine to do exactly right all the time, can be left for humans until there is a pressing need to change that. This is why we treated the human input step as a crucial feature for Workflow: you need to leave a space for a qualified person to at least approve what Jenkins is doing, and maybe give it some information too. With a background in regulatory compliance, Alex did remind the audience that these approvals need to be audited, so I have made a note to fix the input step to keep an audit trail recording the authorized user.
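To make that concrete, here is a minimal sketch of a human approval gate in a Workflow script. The input step is real; the build and deploy scripts are hypothetical placeholders:

```groovy
node {
    sh './build.sh'   // automated part of the flow (hypothetical script)
    // Pause the flow until a qualified person approves; the step can also
    // collect extra information via parameters before continuing.
    input message: 'Deploy this build to production?', ok: 'Approve'
    sh './deploy.sh'  // runs only after human approval (hypothetical script)
}
```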

The most important practice, though, seemed to be “Build Once, Deploy Anywhere”: you should ensure the integrity of a build package destined for deployment, ideally being a single compressed file with a known checksum (“Fingerprint” to Jenkins), matched to an SCM tag, with the SCM commit ID in its manifest. Honoring this constraint means that you are always deploying exactly the same file, and you can always trace a problem in production back to the revision of the software it is running. There should also be a Definitive Software Library such as Nexus where this file is stored and from which it is deployed. One important advantage of Workflow is that you can choose to keep metadata like commit IDs, checksums, timestamps, and so on as local variables; as well as being able to keep a workspace (i.e., slave directory) locked and available for either the entire duration of the flow, or only some parts of it. This means that it is easy for your flow to track the SCM commit ID long enough to bake it into a manifest, while keeping a big workspace open on a slow slave with the SCM checkout, then checksum the final build product and deploy to Nexus, releasing the workspace; and then acquire a fast slave with a smaller workspace to host some functional tests, with the Nexus download URL for the artifact still easily accessible; and finally switch to a weak slave to schedule deployment and wait. Whereas a setup using traditional job chaining would require you to carefully pass around artifacts, workspace copies, and variables (parameters) from one job to the next with a lot of glue code to reconstruct information an earlier step already had, in a Workflow everything can remain in scope as long as you need it.
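As a sketch of how such a flow might read, assuming hypothetical slave labels, URLs and shell scripts (and Workflow step names current as of this writing): note that the commitId variable stays in scope across all three slaves with no glue code.

```groovy
def commitId
node('big-and-slow') {                    // big workspace, full SCM checkout
    git url: 'https://github.com/example/app.git'      // hypothetical repo
    sh 'git rev-parse HEAD > commit.txt'
    commitId = readFile('commit.txt').trim()
    sh "./build.sh ${commitId}"           // bake the commit ID into the manifest
    sh './checksum-and-upload.sh app.zip' // checksum, deploy to Nexus, release workspace
}
node('fast-and-small') {                  // fast slave, small workspace
    sh "curl -sO https://nexus.example.com/app-${commitId}.zip"
    sh './functional-tests.sh'
}
input message: "Schedule deployment of ${commitId}?"   // wait for a human
```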

The biggest thing Alex treated as important that is not really available in Workflow today is matrix combinations (for testing, or in some cases also for building): determining the effects of different operating systems/architectures, databases, JDKs or other frameworks, browsers, and so on. Jenkins matrix projects also offer “touchstone builds” that let you first verify that a canonical combination looks OK before spending time and money on the exotic ones. Certainly you can run whatever matrix combinations you like from a Workflow: just write some nested for-loops, each grabbing a slave if it needs one, maybe using the parallel construction to run several at once. But there is not yet any way of reporting the results in a pretty table; until then, the whole flow run is essentially pass/fail. And of course you would like to track historical behavior, so you can see that Windows Java 6 tests started failing with a commit done a week ago, while tests on Firefox just started failing due to an unrelated commit. So matrix reporting is a feature we need to include in our plans.
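In the meantime, an ad-hoc matrix can be written directly; for example (hypothetical labels, axes and test script; the parallel step takes a map of branch names to closures):

```groovy
def branches = [:]
for (db in ['mysql', 'postgres']) {
    for (jdk in ['jdk6', 'jdk7']) {
        def d = db; def j = jdk          // capture loop variables for the closure
        branches["${d}-${j}"] = {
            node('linux') {              // grab a slave for this combination
                sh "./run-tests.sh --db ${d} --jdk ${j}"
            }
        }
    }
}
parallel branches                        // run all combinations at once
```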

All in all, it was a fun day and I am looking forward to seeing what people are continuously delivering at next year’s conference!


Jesse Glick
Developer Extraordinaire
CloudBees

Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse on the Meet the Bees blog post about him.



Automating CD Pipelines with Jenkins - Part 1: Vagrant, Fabric and Selenium

Tue, 07/22/2014 - 20:10
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a session given by Hoi Tsang, DealerTrack, at JUC Boston.

There’s a gold standard for the software development lifecycle that most every shop seems to aspire to, yet few have actually achieved: a complete continuous delivery pipeline with Jenkins that automatically pulls from an SCM repository on each commit, then compiles the code, packages the app and runs all unit/acceptance/static analysis tests in parallel.

Integration testing on the app then runs in mini-stacks provided by Vagrant, and if the build passes all testing, Jenkins stores the binary in a repository as a release candidate until it passes QA. Jenkins then plucks the release from the repository to deploy it to production servers, which are created on demand by a provisioning and configuration management tool like Chef.

The nitty-gritty details of the actual steps may vary from shop to shop, but based on my interactions with potential CloudBees customers and the talks at the 2014 Boston JUC, this pipeline seems to be what many high-level execs aspire to see their organizations achieve in the next few years.

Jenkins + Vagrant, Fabric and Selenium
Hoi Tsang of DealerTrack gave a wonderful overview of how DealerTrack accomplished such a pipeline in his talk: “Distributed Scrum Development w/ Jenkins, Vagrant, Fabric and Selenium.”

As Tsang explained, integration can be a problem, and it’s an unfortunately expensive problem to fix. He explained that it was best to think of integration as a multiplication problem, where

practice × precision × discipline = perfection
When it comes to Scrum, which Tsang likened to “driving really fast on a curvy road,” nearly all of the attendees at Tsang’s JUC session practiced it, and almost all confirmed that they do test-driven development.

In Tsang’s case, DealerTrack was also a test-driven development shop and had the goals of writing more meaningful use cases and defining meaningful test data.

To accomplish this, DealerTrack set up Jenkins and installed a few plugins: the Build Pipeline plugin, Cobertura and Violations, to name a few. They also created build and deployment jobs - the builds were triggered by code commits and schedules, and in turn triggered tests whose pass/fail rules were defined by each DealerTrack team. Their particular rules were:
  • All unit tests passed
  • Code coverage > 90%
  • Code standards compliance > 90%
DealerTrack had their Jenkins master control a Selenium hub, which consisted of a grid of dedicated VMs/boxes registered to it. Test cases would get distributed across the grid, and the results would be reported back to the associated Jenkins jobs.
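For context, a test dispatched through such a hub typically looks like this on the client side (a hedged sketch; the hub URL and application URL are hypothetical). The hub routes the session to any grid node matching the requested capabilities:

```python
from selenium import webdriver

# Connect to the Selenium hub rather than a local browser; the hub
# dispatches the session to a free node in the grid.
driver = webdriver.Remote(
    command_executor='http://selenium-hub.example.com:4444/wd/hub',
    desired_capabilities={'browserName': 'firefox'})
try:
    driver.get('http://app-under-test.example.com/')
    assert 'Login' in driver.title   # a trivial smoke check
finally:
    driver.quit()                    # free the grid node for the next test
```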

The builds would also be subject to an automated integration build, which relied on Vagrant to define mini-stacks for the integration tests to run in: source code is checked out into a folder shared with a virtual machine, the VM is launched, the test is prepared and run, and the test space is cleaned up. Although this approach to integration testing takes longer, Tsang argued that it provides a more realistic testing environment.
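Reduced to its essentials, that cycle might be scripted along these lines (a hedged sketch; the repository URL, shared-folder layout and test command are hypothetical, while the Vagrant CLI calls are standard):

```python
import subprocess

def run(cmd):
    subprocess.check_call(cmd, shell=True)

# Check out source into the folder Vagrant shares with the VM
# (by default the Vagrantfile directory is mounted at /vagrant).
run("git clone https://example.com/app.git app")
run("vagrant up")    # boot the mini-stack defined by the Vagrantfile
try:
    # Prepare and run the integration tests inside the VM
    run("vagrant ssh -c 'cd /vagrant/app && ./run-integration-tests.sh'")
finally:
    run("vagrant destroy -f")   # always clean up the test space
```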

If the build passed, its artifact would be uploaded to an internally hosted repository, and reports on code standards and code coverage were published. This would also trigger a documentation-generation job.

According to Tsang, DealerTrack also managed to set up an automated deployment flow, where Jenkins would pick up a build from the internal repository, tunnel into the development server, then drop off the artifact and deploy the build. They accomplished this using Python Fabric, a library and CLI that streamlines the use of SSH for application deployment and system administration tasks.
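A Fabric deployment task along those lines might be sketched as follows (hosts, paths, artifact names and scripts are all hypothetical; the API shown is Fabric 1.x):

```python
# fabfile.py
from fabric.api import cd, env, run

env.hosts = ['deploy@dev-server.example.com']   # hypothetical target server

def deploy(version):
    artifact = 'app-%s.tar.gz' % version
    with cd('/opt/app'):
        # Fetch the release candidate from the internal repository,
        # unpack it, and restart the service -- all over SSH.
        run('curl -sO https://repo.example.com/releases/%s' % artifact)
        run('tar xzf %s' % artifact)
        run('./restart.sh')
```

Jenkins would then invoke the task from a build step, e.g. fab deploy:1.2.0.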

Tsang explained that DealerTrack had a central Jenkins master to maintain the build pipeline, but split the work between each team’s assigned slave and assigned testing server. Dedicated slaves worked on the more important jobs, which allowed branch merging to be accomplished 30% faster.
Stay tuned for Part 2!


Tracy Kennedy
Solutions Architect
CloudBees
As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (Meet the Bees blog post coming soon!) For now, follow her on Twitter.

To Successfully Adopt Continuous Delivery, Organizations Need To Change

Fri, 07/18/2014 - 16:56
In a recent Forrester Research report, Modern Application Delivery Demands a Modern Organization, Kurt Bittner, John Rymer, Chris Hines and Diego Lo Giudice review the differences between organizations of yesterday and the 'modern organization' of today, and the shifts needed to keep up with not only customer demand, but also the success of more agile competitors.
Bottlenecks
When you look at the structure of a successful organization, it is rare to find silos of any sort. The reason: when you shift the emphasis from optimizing individual performance to a team-based structure focused on optimizing delivery, you get faster output. Why?
When individuals focus on their own task lists, priorities slip for other projects, which get held up. This ultimately creates a bottleneck of work. What is the natural thing to do while you wait for someone else to finish the next step of a project, or to be told by a superior to proceed? You start something else. Because you are now working on a new project, a new bottleneck forms when your attention is needed again: you are no longer available, and someone is waiting on you. They start a new project while they wait, and so on and so forth.
The Culture Shift
We are members of a multi-tasking culture – we must always be busy. This is not always good. In the modern organization, resources are dedicated and at the ready to move projects along, even if they are sometimes underutilized. Those resources are not off starting new projects, so they are available the moment they are needed.
Going back to the silo vs. team approach, we start to see less specialization and more focus on distributing knowledge. A whole team can be next in line instead of one person: it’s about cross-functional teams vs. superstars.
The focus also needs to change. Our culture wants us to win the Employee of the Month award and achieve personal objectives, but what if we focused less on how much we could get out of top performers and more on how much output we could deliver to our customers?
This would mean another huge cultural shift, and this time it’s about the management team. Management must be agile and allow teams to make decisions quickly, without having to cut through yards of red tape to get something across the finish line. It’s about holding your team accountable vs. tracking and monitoring their every move.
The report concludes by stating: “While process and automation are essential enablers of these better results, organization culture, structure, and management approach are the true enablers of better business results.”
Continuous delivery can be a tremendous game changer for your organization, but the organization needs to modernize for that change to succeed.



Christina Pappas
Marketing Funnel Manager
CloudBees

Follow her on Twitter

The Butler and the Snake: Continuous Integration for Python by Timo Stollenwerk, Plone Foundation

Tue, 07/15/2014 - 17:41
This is the first in a series of blog posts in which various CloudBees technical experts will summarize presentations from the Jenkins User Conferences. This first post is written by Félix Belzunce, solutions architect, CloudBees.

At the Jenkins User Conference/Europe, held in Berlin on June 25, Timo Stollenwerk of Plone Foundation presented how the Plone community uses Jenkins to build, test and deliver Python-based software projects. Timo went through some of the CI rules and talked about the main tools you should take a look at for implementing Python CI.

For open source projects implementing CI, the most important thing besides version control and automated builds is the agreement of the team. In small development teams that is usually easy; in big teams and open source projects, you need agreed rules to follow.

When implementing CI, it is always a good practice to build per commit and then notify the responsible team of the outcome. This makes the integration process easier and avoids "Integration Hell." The Jenkins dashboard and the Email-ext plugin can help accomplish this. The Role-based Access Control plugin is also useful for setting up roles in your organization, so your developers can access the Jenkins dashboard while you stay sure that nobody can change their job configuration.


Java developers usually use Gradle, Maven or Ant as automated build tools, but in Python there are different tools you should consider, like Buildout, pip, tox and Shining Panda. Timo also covered a number of tools for testing and acceptance testing; see his slides (linked below) for the full list.


Due to Python's dynamic nature, static analysis has become essential. If you plan to implement it in your organization, I recommend reading this article, which compares different tools for Python static analysis, some of which Timo also mentioned.

Regarding scalability, you can start facing issues when running long builds. A good practice is not to run any builds on your master: let your slaves do the job.

If you have several jobs involved in launching a daemon process, you should ensure that each job uses unique TCP port numbers. If you don't do this, two jobs running on the same machine may use the same port and end up interfering with one another. In this case, the Port Allocator Plugin can help you out.
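If you are scripting this yourself rather than using the plugin, the usual trick is to let the OS hand out an unused port. A minimal sketch (not the plugin's own mechanism):

```python
import socket

def free_port():
    """Ask the OS for an unused TCP port by binding to port 0."""
    s = socket.socket()
    s.bind(('', 0))              # port 0 means "pick any free port"
    port = s.getsockname()[1]
    s.close()
    return port

daemon_port = free_port()        # pass this to the daemon the job launches
```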

The CloudBees Long Running Build Plugin and the NIO SSH Slaves Plugin can also be helpful: the former lets you resume a build (in case Jenkins crashes) without starting from scratch, and the latter lets you increase the number of executors attached to your Jenkins master while maintaining the same performance.

For the release process, Timo explained that the Jenkins Pipeline plugin could be combined with Python-specific tools like zest.releaser or devpi.

Get Timo's slides and (when videos are posted) watch the video of his JUC Europe session.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.

CloudBees Announces Public Sector Partnership with DLT Solutions

Thu, 07/10/2014 - 14:50

Continuous delivery is becoming a main initiative across all vertical industries in the commercial market. The ability of IT teams to deliver quality software on an hourly/daily/weekly basis is the new standard.

The public sector has the same need to accelerate application delivery for important governmental initiatives. To make access to the CloudBees Continuous Delivery Platform easier for the public sector, CloudBees and DLT Solutions have formed a partnership to provide Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees to federal, state and local government entities.

With Jenkins Enterprise by CloudBees now offered by DLT Solutions, public sector agencies have access to our 23 proprietary plugins (along with 900+ OSS plugins) and will receive professional support for their Jenkins continuous integration/continuous delivery implementation.

Some of our most popular plugins can be utilized to:
  • Eliminate downtime by automatically spinning up a secondary master when the primary master fails with the High Availability plugin
  • Push security features and rights onto downstream groups, teams and users with Role-based Access Control
  • Auto-scale slave machines when you have builds starved for resources by “renting” unused VMware vCenter virtual machines with the VMware vCenter Auto-Scaling plugin
Try a free evaluation of Jenkins Enterprise by CloudBees or read more about the plugins provided with it.

For departments using larger installations of Jenkins, CloudBees and DLT Solutions propose Jenkins Operations Center by CloudBees to:
  • Access any Jenkins master in the enterprise. Easily manage and navigate between masters (optionally with SSO)
  • Add masters to scale Jenkins horizontally, instead of adding executors to a single master. Ensure no single point of failure
  • Push security configurations to downstream masters, ensuring compliance
  • Use the Update Center plugin to automatically ensure approved plugin versions are used across all masters
Try a free evaluation of Jenkins Operations Center by CloudBees, or watch a video about Jenkins Operations Center by CloudBees.

The CloudBees offerings, combined with DLT Solutions’ 20+ years of public sector know-how, make it easier to support and optimize Jenkins in the civilian, federal and SLED branches of government.

For more information about the newly established CloudBees and DLT Solutions partnership, read the news release.

We are proud to partner with our friends at DLT Solutions to bring continuous delivery to governmental organizations.

Zackary Mahon
Business Development Manager
CloudBees


Jenkins Operations Center by CloudBees 1.1 generally available today

Wed, 06/18/2014 - 18:00
Late last year, with the release of Jenkins Operations Center by CloudBees (affectionately called Jockey), we announced a game changer in the world of Jenkins. It acts as the operations hub for multiple Jenkins masters[a] in an organization, letting them easily share resources like slaves and security. We have been busy ever since improving the product and helping customers bring it in-house.

I am happy to announce the release of a new version of Jenkins Operations Center by CloudBees - version 1.1. The pièce de résistance is the monitoring[1] feature, which lets administrators monitor the multiple Jenkins masters connected to Jockey. In addition to SSH slaves, Jockey now supports Windows (JNLP) slaves[2], an often-requested feature. The on-boarding experience for Jenkins into Jockey has been made easier. Finally, as a result of our focus on scalability improvements, we released a new type of higher-throughput slave, called NIO SSH slaves[3], in Jenkins Enterprise by CloudBees.

Let me quickly introduce the monitoring and alerting feature (released in Jenkins Enterprise by CloudBees), which is available on Jockey. The monitoring plugin on an individual master provides a standard dashboard with mechanisms to see whether a master is overloaded, giving insight into memory, system load, file descriptors and web response times. The plugin also exposes build queue metrics like queue length, build duration, build scheduling rate and executors available for builds. Administrators can set email alerts for thresholds that exceed pre-determined values. On Jockey, the monitoring plugin consolidates this information across all client masters in a cluster, so administrators can quickly determine which masters need attention.


If you haven't tried Jockey, now is the time to download Jockey and Jenkins Enterprise by CloudBees and give it a spin[7 & 8].

[a] Jenkins should be upgraded to Jenkins Enterprise by CloudBees

Additional information:
[1] Monitoring plugin
[2] JNLP slaves on Jenkins Operations Center by CloudBees
[3] NIO SSH slaves
[4] Release Notes
[5] Jenkins Operations Center by CloudBees documentation and tutorial
[6] Jenkins Enterprise by CloudBees documentation
[7] Jenkins Enterprise by CloudBees download
[8] Jenkins Operations Center by CloudBees download

- Harpreet Singh
Senior Director, product management

Harpreet has 16 years of experience in the software industry. Prior to CloudBees, he was at Oracle and Sun for 10 years in various roles, including leading marketing efforts for JavaEE 6, GlassFish 3.1 and tech lead for GlassFish 2.1. He was also product manager for Hudson, launching it within Sun's GlassFish portfolio.

The CD Revolution - Come to Learn More!

Tue, 06/17/2014 - 15:08
A bit of fast-forward history…
A few years back, Continuous Delivery started out as a technical evolution within software development and IT Ops circles that were tired of seeing each other as best enemies and thought they could do better. They started working on removing the friction that sat between the “business value” created by development teams and its distribution by IT Ops teams, where it would actually come into effect.
Removing the friction from within any process means the cost of repeating it over and over no longer creates “heat,” and this has fantastic consequences. As an example, imagine if the time it took you to travel anywhere were pretty much zero. Where would you live? Where would you work? Would they be the same places? Probably not.
Consequently, what might initially have been considered a simple, local technical and organizational optimization within IT came to have a much greater impact on companies as a whole.
Technically first, removing this friction meant that there was no longer any incentive for IT to “group” software changes into “batches” big enough to justify the friction cost of shipping each batch to production. Why would IT wait 9 months to push a new feature batch to production if, for the same cost, it could split that work into hundreds of less risky changes and push them iteratively? Furthermore, in doing so, IT could measure pretty much in real time whether the feature it was building in steps was indeed leading to the changes expected. This had huge consequences: it was not just reducing the IT risk associated with any change – a great improvement in itself – it also enabled businesses to measure and validate much sooner whether what they had asked IT to deliver was really yielding the expected results. Yes, businesses started to see the huge value that Continuous Delivery could have.
And in doing so, businesses realized that to fully benefit from Continuous Delivery, it wasn’t simply the IT processes that had to be adapted. The entire value-creation chains and feedback loops had to be rebuilt. Since IT doesn’t work in a vacuum, the business had to redefine the way business requirements are identified, formalized and funneled to IT for delivery – as a constant stream rather than as big 18-month plans. Furthermore, early feedback from those initial deployments has to be wired back to the business so it can adapt and improve its plans. Gone are the days of IT as a remote arm of the business: once Continuous Delivery is achieved across a company, business and IT merge into a virtuous circle and become one. This obviously has an important impact on how companies have to architect their org chart, processes, decision-making, reporting structure, etc.
Where to start?
In the last few years, the move towards Continuous Delivery has inexorably made it front and center on the agenda of development teams, IT Ops and CIOs. While the challenges each group has to solve are unique, they are all in discovery mode, working out what CD means for them, how much is already in place, how much should change, and where and how they should start.
Since education is a prime concern, CloudBees decided a few years back to initiate the Jenkins User Conferences with the Jenkins community. These “JUCs” aim to help software development, IT Ops and DevOps teams learn and share around their use of Jenkins. The 2014 editions are just about to start, with the American edition in Boston this week, a European edition in Berlin next week and the Middle East edition in Israel in one month.
However, the feedback we have increasingly received in the last 12 months is that CEOs and CIOs are also very interested in learning more about Continuous Delivery, how it can help them and what it means for their organizations. To that end, CloudBees is proud to announce a series of “Continuous Delivery Seminars” - the CD Summit - aimed at decision makers. The first edition takes place this week in New York City and already counts… several hundred registrations! Hurry up if you are interested in attending; we have a few more seats available. The good news is that we will also be holding a European edition in Berlin next week. These seminars will feature prominent speakers from Forrester, Fidelity, Bosch and leading vendors in the space.
Kohsuke Kawaguchi and I will be present at all of those events and hopefully will see you there.
Onward,

Sacha

Jenkins User Conference Boston is almost here!

Tue, 06/10/2014 - 20:57
There is only a week left until our Jenkins User Conference US East kicks off in Boston on Wednesday, June 18th. As of now, more than 300 people have registered and we have had to release more tickets! If you will be in New England on the 18th, please sign up now so you don't miss out!


This year is the butler's first conference tour stop in New England. The event will be held at the waterfront Seaport Hotel.

We have a great speaker line-up this year. Attendees will be well fed and caffeinated, and the afternoon break features BEvERages. :-) Everyone also gets a Jenkins World Tour t-shirt!

We would like to thank our generous JUC US East sponsors. With their help, this Jenkins User Conference is sure to be the best yet.




--Kohsuke




Kohsuke Kawaguchi is Founder, Jenkins CI, and Chief Technology Officer, CloudBees. You can follow him on Twitter @KohsukeKawa

CD Summit: Learn From Continuous Delivery Experts

Tue, 06/03/2014 - 19:36
Continuous delivery (CD), a software delivery methodology that allows you to deliver software faster and with lower risk, is gaining a foothold in enterprises like Fidelity Investments and Bosch, as well as in startups like Choose Digital and Viridity Energy. CD enables companies to accelerate innovation and move faster than the competition, and enables IT to respond quickly to the application needs of the business.

Continuous delivery is a transformational journey that begins in development and QA, and stretches all the way to IT operations and into production, where business value is delivered to end users. The CD process enables a fast flow of new features from development into production, while preserving software quality, reliability and maintainability. It impacts the software delivery process and technology, of course - but also has great impact on the organizational culture. Let there be no doubt, continuous delivery is larger than anyone thought and is proving to be one of the most innovative and transformational trends in technology.

Starting June 19th in New York City, CloudBees, along with our expert partners, will be kicking off a series of CD Summits around the world to help educate IT executives and technologists on the business significance and technology impact of continuous delivery. The Summits are unique in that they are designed to address both the higher level ‘business value’ questions of executives and the technical ‘how to’ needs of technologists.

We start off the morning of each summit with a keynote presentation discussing the business value that can be realized from following continuous delivery practices. The keynote speakers are experts like Kurt Bittner from Forrester Research and Dr. Jan Hagen from the European School of Management and Technology.  Following the keynote, we will have presentations covering the people, process and technology impacts of continuous delivery from firms leading the CD charge like Eliassen Group, CloudBees and codecentric. We’ll show you real world examples of CD in action from enterprises that are actually transforming their software delivery practices like Fidelity Investments and Bosch.

After lunch, we kick off the afternoon session with a technology keynote from Jenkins founder Kohsuke Kawaguchi, who will discuss the role of Jenkins CI in continuous delivery practices. Afterward, our partners - including XebiaLabs, SOASTA, MidVision and Puppet Labs - will discuss how to automate the software delivery pipeline.

Register now! Join us in New York City on June 19 or in Berlin on June 24 for an event that is not to be missed. Check here for future CD Summit dates in cities such as London, Paris and Chicago.

P.S. As an added CD Summit feature, every attendee will receive a complimentary copy of The Phoenix Project.

Written by Gene Kim, Kevin Behr and George Spafford, The Phoenix Project is a must-read for anyone in IT development or operations who wants to learn how CD can profoundly impact the value IT delivers to the business. Armed with all of the information you will gain from the CD Summit, you can write your own (successful) final chapter!


See you there!
André Pino
CloudBees
www.cloudbees.com



André Pino is vice president of marketing at CloudBees. 