
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

CloudBees Around the World - September 2014

Fri, 09/19/2014 - 18:26

CloudBees employees travel the world to a lot of interesting events. Where can you find the Bees before September ends? Hint: This month, it's all about JavaOne. If you are there, be sure to connect with us!
  • JavaOne San Francisco – September 28 - October 2: Take advantage of the Java event of the year. You can choose from more than 400 sessions, including technical sessions, hands-on labs, tutorials, keynotes and birds-of-a-feather sessions. Learn from the world's foremost Java experts, improve your working knowledge and coding expertise, and follow in-depth technical tracks to focus on the Java technology that interests you the most. To register and get more information, just click on the links.




Categories: Companies

Customer Spotlight: Choose Digital

Thu, 09/18/2014 - 17:52
At CloudBees, we have a lot of innovative customers. They’ve established leadership positions in the marketplace with their great ideas, hard work and a little help from the CloudBees Continuous Delivery Platform.

This blog is the first of several that we will run from time to time, highlighting various CloudBees customers. In this first post, we head to Miami to visit Mario Cruz, co-founder and CTO of Choose Digital (recently acquired by Viggle).

Mario, tell us about yourself.
I’m a technologist, born in Cuba and now living in the Miami area. I've now been developing and marketing B2B and B2C technology solutions for over 20 years.

Tell us about Choose Digital.
We developed a private-label digital marketplace that has enabled companies to launch a digital content strategy incorporating the latest in music, movies, TV shows, eBooks and audiobooks. SkyMall, Marriott, United Airlines and others have tapped into our platform to up-level initiatives such as customer loyalty programs, promotional offers, affinity sales channels and digital retail roll-outs. We’ve had great success providing a streamlined channel, helping companies navigate around licensing conflicts, reduce brand friction and take control of usage data. We’ve also provided solutions for musicians and authors to market their work directly to fans and monetize their social media followings.

What did you do before you started Choose Digital?
I’ve had a bunch of jobs in the technology space. I spent three years as CTO of Grass Roots America, a provider of global performance improvement solutions for employees, channels and consumers. I oversaw the business’s technology, infrastructure and information security in the Americas region. Before that I worked for five years as CIO of Rewards Network, operator of loyalty dining programs in the U.S. for most major airlines, hotels and credit card companies.

What kinds of challenges did you face at Choose Digital that spurred you to start working with CloudBees?
We felt we had to be the first to market and we dedicated all our resources to this goal. We didn’t have time for long development and integration cycles. We didn’t want to worry about setting up and maintaining a Java infrastructure, so we adopted Jenkins in the cloud - the CloudBees cloud platform. We were up and running with DEV@cloud in just one day. And using CloudBees’ ClickStarts we were able to set up new projects in about an hour. If we had to set up our own hardware or use an IaaS solution, development would have taken three to five times as long, and costs would have been multiplied by a factor of 10 to 15.

Can you talk about your experience with Continuous Delivery, using CloudBees’ technology?
Using a continuous delivery model, we’re able to experiment cheaply and quickly, with low risk. We’re able to run every step of the process in a streamlined manner. Every update kicks off a series of tests, and once the tests pass, the update deploys to production. Everything is automated using Jenkins and deployed to CloudBees. Rather than wait for new versions, we can constantly push, build in improvements and be confident that production will never be more than a couple of hours behind. This gives us control over our development process and instills a certain amount of trust within the staff that projects we undertake will get done on time, on budget and with the quality that we need.

Your business is all about helping companies make strategic use of digital content. What do you like to listen to, read and watch in your spare time?
I’m in the right profession because I’m a huge consumer of content myself – all kinds.

My favorite book is probably “Bluebeard,” by Kurt Vonnegut. It’s about an abstract impressionist painter who, in typical Vonnegut form, has some eccentric ideas about how to create and promote art. The first movie I ever saw was “Raiders of the Lost Ark.” It made me want to travel the world, and luckily my technology career has allowed me to do that. Going way back, my first 45 record was “Freeze Frame” by the J. Geils Band and my first album was “Ghost in the Machine” by the Police.

I’m still a big music guy. I play drums in a band called Switch, which plays all kinds of music, from the Doobie Brothers to 4 Non Blondes. I used to be in a bunch of other bands called The Pull, Premonition and Wisdom of Crocodiles. (To see/hear Mario playing the drums in his band, go to this post by Mario.)

So, what’s next for you?
Now that Choose Digital has been acquired by Viggle, my goal is to make sure Viggle members get the best media rewards for doing things they love to do – like watching TV and listening to music – while continuing to innovate on our platform.

Read the case study about Mario and his team at Choose Digital
Follow Mario on Twitter: @mariocruz
Categories: Companies

Webinar Q&A: Continuous Delivery with Jenkins and Puppet - Debug Bad Bits in Production

Mon, 09/15/2014 - 20:13
Thank you to everyone who joined us on our webinar.


We presented:

  • How to build a modern continuous delivery pipeline with Jenkins
  • How to connect Jenkins and Puppet so that Dev and Ops teams can see what happens on the other side of the house and interact closely to debug issues in production environments


The webinar recording is here.

Following are answers to questions we received during the webinar:
Q: Is Puppet serving as the orchestrator for Jenkins?
A: Not quite - the tools run independently but communicate with each other. The demo will make it clear.

Q: Can JMeter be plugged in with Jenkins for Continuous testing?
A: Yes it can. 

Q: When we say continuous testing do we mean automated testing here?
A: Continuous testing = automated testing for each commit made to the source repository.

Q: What drivers or plugins are required? Can I get a website where I can get this info?
A: https://wiki.jenkins-ci.org/display/JENKINS/JMeter+Plugin

Q: With JMeter can we run a load test using the build in Jenkins, or how can we do continuous testing with this combination?
A: JMeter would typically be used in a load-testing stage. It depends on how you set up your workflow/pipeline: you shouldn't run a performance test on every commit; ideally, continuous testing means you have several testing stages, with load testing as one of the later ones.
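To make that staging idea concrete, here is a minimal sketch using the Build Flow plugin's Groovy DSL (the plugin is discussed later in this digest); all job names are invented for illustration:

// Build Flow DSL sketch - job names are hypothetical.
// Fast feedback first: compile and unit-test on every run of this flow.
build("myapp-compile")
build("myapp-unit-tests")
// The JMeter-based load test is a heavier, later stage; in practice you
// might trigger this flow only nightly or before a release.
build("myapp-jmeter-load-test")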

Q: Can Puppet work with VMs?
A: Yes, Puppet can work with VMs. Puppet agents live at the OS level, and can be deployed to virtual machines or bare hardware. Puppet is agnostic to where or how it has been deployed. We do have some hooks and integrations around provisioning new VMs as well.

Q: I'm curious that I don't see AWS/EC2 under "Virtual & Cloud" for Puppet along with VMware, Xen, Azure ... is there a reason? Any concerns I should have about compatibility with EC2 infrastructure?
A:  No, there are no concerns around EC2. Puppet runs great in EC2 and we have many customers running their infrastructure with Puppet in Amazon's cloud.

Q: Are you going to share these scripts somewhere?
A: The demo write-up is available on the CloudBees developer wiki. The jenkinsci infrastructure is available at https://github.com/jenkinsci/infra-puppet
Q: I understand that Puppet helps create an MD5 hash file of the WAR file for build deployments. Could you provide a basic definition of what Puppet is and what Docker is?
A: Puppet (borrowing from the Puppet page): Puppet Enterprise (PE) uses Puppet as the core of its configuration management features. Puppet models desired system states, enforces those states, and reports any variances so you can track what Puppet is doing. To model system states, Puppet uses a declarative resource-based language — this means a user describes a desired final state (e.g. “this package must be installed” or “this service must be running”) rather than describing a series of steps to execute.
Docker (borrowing from Docker.io): Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.


Q: Will this work with SVN too?
A: There is an equivalent version of Validated Merge for Jenkins that our team has pushed out in OSS.

Q: Will Validated Merge work with an SVN repo too?
A: See above.

Q: Is an equivalent to the gated repo available with Subversion? It's a great idea; a while back I'd worked with a similar homegrown solution for Perforce.
A: See above.

Q: What's the difference between open source Jenkins and CloudBees' version?
A: See this link.

Q: Where can I get a quotation if I want to buy?
A: Email sales@cloudbees.com

Q: Does Puppet require root access on a Unix host? What privileges would it require as a user?
A: The Puppet agent typically runs as root in order to be able to fully configure the system, but it does not require those privileges. When running as a non-privileged user, it will only be able to manage the aspects of the system that the user has permissions for.

Q: When Harpreet was doing the traceability demo, the Jenkins screen that showed the artifact deployment state had a field for 'Previous version' that was blank. Why was that empty? What value would normally be in there - the MD5 hash of the previous artifact?
A: Those values would change if I had checked in new code, thus altering the MD5 hash. Since I was just rebuilding the same image in the demo, the hashes are the same and hence there is no previous version.

Q: Is Puppet capable of working with IBM solutions, like WebSphere?
A: Yes. In general, if it's possible to manage or modify an application from the command line of a system, it is possible to build a Puppet model for it. Check out forge.puppetlabs.com for 2,500+ examples of pre-built community and supported modules.

Q: I read that about the agent, but what about the master? If not, can you run Puppet without a master?
A: The master is effectively a web service, which does not require root privileges, so it too can be run without root. For testing and development, you can run Puppet in a stand-alone mode using the `puppet apply` family of commands.

Q: Does Puppet need vagrant to run or can we run it directly on the VM?
A: Puppet can be run directly on a VM. It does not have dependencies on Vagrant or any other specific virtualization/cloud management software.
Q: How does this facility compare with the pre-commit check-in provided by the Visual Studio environment?
A: I am not familiar with the Visual Studio environment, but the documentation indicates that those are just environment variables that are injected into builds; if so, then Jenkins can understand environment variables.


-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees.
Follow Harpreet on Twitter.

-- Reid Vandewiele
www.puppetlabs.com
Reid is a technical solutions engineer at Puppet Labs, Inc.


Categories: Companies

CloudBees Becomes the Enterprise Jenkins Company

Thu, 09/11/2014 - 09:04
Since we founded the company back in 2010, CloudBees has always had the vision of helping enterprises accelerate the way they develop and deploy applications. To that end we delivered a PaaS that covered the entire application lifecycle, from development, continuous integration and deployment to staging and production. As part of this platform, Jenkins always played a prominent role. Based on popular demand for Jenkins CI, we quickly responded and also provided an on-premise Jenkins distribution, Jenkins Enterprise by CloudBees.
Initially, Jenkins Enterprise by CloudBees customers were mainly using Jenkins on-premise for CI workloads. But in the last two years, a growing number of customers have pursued an extensive Continuous Delivery strategy and Jenkins has moved from a developer-centric tool to a company-wide Continuous Delivery hub, orchestrating many of the key company IT assets.
For CloudBees, this shift has translated into massive growth of our Jenkins Enterprise by CloudBees business and has forced us to reflect on how we see our future. Since a number of CloudBees employees, advisors and investors are ex-JBossians, we’ve had the chance to witness first-hand what a successful open source phenomenon is and how it can translate into a successful business model, while respecting its independence and further fueling its growth. Consequently, it quickly became obvious to us that we had to re-focus the company to become the Enterprise Jenkins Company, both on-premise and in the cloud, and hence exit the runtime PaaS business (RUN@cloud & WEAVE@cloud). While this wasn’t a decision we took lightly (we are still PaaS lovers!), it is the right decision for the company.
With regard to our existing RUN@cloud customers, we’ve already reached out to each of them to make sure they’re being taken care of. We’ve published a detailed migration guide and have set up a migration task force that will help them with any questions related to the migration of their applications. (Read our FAQ for RUN@cloud customers.) We’ve also worked with a number of third-party PaaS providers and will be able to perform introductions as needed. We’ve always claimed that our PaaS, based on open standards and open source (Tomcat, JBoss, MongoDB, MySQL, etc.), would not lock customers in, so we think those migrations should be relatively painless. In any case, we’ll do everything we can to make all customer transitions a success.
From a Jenkins portfolio standpoint, refocusing the company means we will be able to significantly increase our engineering contribution to Jenkins, both in the open source community as well as in our enterprise products. Kohsuke Kawaguchi, founder of Jenkins and CTO at CloudBees, is also making sure that what we do as a company preserves the interest of the community.
Our Jenkins-based portfolio will fit a wide range of deployment scenarios:
  • Running Jenkins Enterprise by CloudBees within enterprises on native hardware or virtualized environments, thanks to our enterprise extensions (such as role-based access control, clustering, vSphere support, etc.)
  • Running Jenkins Enterprise by CloudBees on private and public cloud environments, making it possible for enterprises to leverage the elastic and self-service cloud attributes offered by those cloud layers. On that topic, see the Pivotal partnership we announced today. I also blogged about the new partnership here.
  • Consuming Jenkins as a service, fully managed for you by CloudBees in the public cloud, thanks to our DEV@cloud offering (soon to be renamed “CloudBees Jenkins as a Service”).

Furthermore, thanks to CloudBees Jenkins Operations Center, you’ll be able to run Jenkins Enterprise by CloudBees at scale on any mix of the above scenarios (native hardware, private cloud, public cloud and SaaS), all managed and monitored from a central point.
From a market standpoint, several powerful waves are re-shaping the IT landscape as we know it today: Continuous Delivery, Cloud and DevOps. A number of companies sit at the intersection of those forces: Amazon, Google, Chef, Puppet, Atlassian, Docker, CloudBees, etc. We think those companies are in a strategic position to become tomorrow’s leading IT vendors.
Onward,

Sacha

Additional Resources
Read the press release about our new Jenkins focus
Read our FAQ for RUN@cloud customers
Read Steve Harris's blog

Sacha Labourey is the CEO and founder of CloudBees.
Categories: Companies

CloudBees Partners with Pivotal

Thu, 09/11/2014 - 09:02
Today, Pivotal and CloudBees are announcing a strategic partnership, one that sits at the intersection of two very powerful waves that are re-shaping the IT landscape as we know it today: Cloud and Continuous Delivery.
Pivotal has been executing on an ambitious platform strategy that makes it possible for enterprises to benefit from a wide range of services within their existing datacenter: from Infrastructure as a Service  (IaaS) up to Platform as a Service (PaaS), as well as a very valuable service, Pivotal Network, that makes it trivial to deploy certified third-party solutions on your Pivotal private cloud. (To read Pivotal's view on the partnership, check out the blog authored by Nima Badiey, head of ecosystem partnerships and business development for Cloud Foundry.)
As such, our teams have been working closely on delivering a CloudBees Jenkins Enterprise solution specifically crafted for Pivotal CF. It will feature a unique user experience and will be leveraging Pivotal’s cloud layer to provide self-service and elasticity to CloudBees Jenkins Enterprise users. We expect our common solution to be available on Pivotal CF later this year, and we will be iteratively increasing the feature set.
Given Jenkins’ flexibility, Pivotal customers will be using our combined offering in a variety of ways but two leading scenarios are already emerging.
The first scenario is for Pivotal developers to use Jenkins to perform continuous integration and continuous delivery of applications deployed on top of the Pivotal CF PaaS. CloudBees Jenkins Enterprise provides an integration with the CloudFoundry PaaS API that makes the application deployment process very smooth and straightforward. This first scenario provides first class support for continuous delivery to Pivotal CF developers.
The second scenario focuses on enterprises relying on Jenkins for continuous integration and/or continuous delivery of existing (non-Pivotal CF-based) applications. Thanks to the Pivotal/CloudBees partnership, companies will ultimately be able to leverage the Pivotal cloud to benefit from elastic build capacity as well as the ability to provision more resources on-demand, in a self-service fashion.
The CloudBees team is very proud to partner with Pivotal and bring Pivotal users access to CloudBees Jenkins Enterprise, the leading continuous delivery solution.
Onward,
Sacha

Sacha Labourey is the CEO and founder of CloudBees.
Categories: Companies

Reflections on the PaaS Marketplace

Thu, 09/11/2014 - 09:00
Cairn from the Canadian Arctic Expedition
Entering the PaaS marketplace in 2010 resembled a polar expedition near the turn of the last century. Lots of preparation and fundraising required, not a lot of information about what you’d encounter on the journey, life-and-death decision-making along the way, shifting and difficult terrain in unpredictable conditions and intense competition for the prize. At least we didn’t have to eat the dogs.

In case you missed it, CloudBees announced that we’ll no longer offer our runtime PaaS, RUN@cloud. Instead, we’re focusing on our growing Jenkins Enterprise by CloudBees subscription business - on-prem, in the cloud, and connecting the two - and the continuous delivery space where Jenkins plays such a key role. Jenkins has been at the core of our PaaS offering all the way along, so in some ways, this is less of a pivot than a re-focusing. Still, it’s an important event for CloudBees customers, many of whom rely on our runtime services and the integrated dev-to-deployment model we offer. We’ll continue to support those customers on RUN@cloud for an extended period and help them transition as painlessly as possible to alternatives (read our FAQ about the RUN@cloud news). Given our open PaaS approach and the range of offerings in the marketplace, the transition will be non-trivial, but manageable (read our transition documentation). Given that background, I wanted to share some thoughts behind our move and what we see going on in the PaaS marketplace.

A Platform, Of Sorts
By Agrant141 [CC-BY-SA-3.0]
As a team, we come from a platform background. To us, cloud changes the equation in how people build, deploy and manage applications. So, the platforms we’re all used to building on top of - like Java - need to change scope and style to be effective. That idea has driven a lot of what we delivered at CloudBees. It’s why Jenkins was such a big part of the offering, because from our perspective Continuous Integration and Continuous Delivery really needed to be integral to the experience when you’re delivering as-a-service with elastic resources, on-demand. I think we have been proven right. Doubts? Take a look at what Google is doing with the Google Cloud Platform. They agree with us and they built their solution around Jenkins. This is also why primarily runtime-deployment-focused PaaS offerings like Pivotal’s Cloud Foundry partner with us on Jenkins.

What’s changed, then?
  • Service - IaaS platform affinity. IaaS providers, particularly AWS and Google, are moving up-stack rapidly, fleshing out a wider and wider array of very capable services. These services often come with rich APIs that are part of the IaaS provider’s platform. Google Cloud Services is a good example. If you’re an Android developer, it’s your go-to toolbox to unlock location and notification services. It also incentivizes you to use Google identity and runtime GAE services. The same is true on AWS and Azure with some different slants and degrees of lock-in. Expect the same on any public cloud offering that aims to succeed longer term. This up-stack march by the IaaS vendors blurs the line on PaaS value. PaaS vendors like CloudBees can make it easy to consume these IaaS-native services, but how the value sorts itself out for end-users between “PaaS-native” services and those coming directly from the IaaS provider is unclear.
  • What’s a platform? Who’s to say that AWS Elastic Beanstalk is less of a platform than what CloudBees offers? I’d like to think I have some experience and credibility to speak to the topic, and I can assure you ours is superior in all ways that matter technically. But in the end, if a bunch of Ruby scripts pushing CloudFormation templates make it as simple to deploy, update, and monitor a Java app as CloudBees does, those distinctions just don’t matter to most users. This is not to say that Beanstalk is functionally equivalent to CloudBees today, because it isn’t. But it’s a lot closer than it was two years ago. The integration with VPC is front-and-center, because, well, they are AWS and as an end-user, you’re using your own account with it, while we are managing the PaaS on your behalf. My point here is that our emphasis on platform value, which was very much a differentiator two years ago, is less of one today and will continue to decrease even as we add feature/functionality. Is that because we are being outpaced by competitors who were behind? No, it’s because as IaaS-native services expand their scope and the platform itself changes (see next point), the extra value that can be added by a pure-play PaaS gets boxed-in.
  • Commoditization of platform. There is a lot going on in this area that is hard to capture succinctly. First, there is the Cloud Foundry effect. Cloud Foundry has executed well on an innovate-leverage-commoditize (ILC) strategy using open source and ecosystem as the key weapons in that approach. Without any serious presence in public cloud, Pivotal Cloud Foundry has produced partnerships with the largest, established players in enterprise middleware and apps. In turn, that middleware marketplace ($20B) is prime hunting ground for PaaS, and Cloud Foundry has served up fresh hope to IT people searching desperately for a private cloud strategy with roots in open source. Glimmers of hope for success in on-prem private PaaS in the enterprise act as a damper on public cloud PaaS adoption, making a risk-averse enterprise marketplace even more sluggish. Second, thanks to Docker, the containerization of apps - a mainstay implementation strategy of PaaS providers like CloudBees - is becoming “standard” and simple for everyone to use. It’s been embraced by Google as a means to make their offering more customizable, and even Amazon hasn’t been able to ignore it. This shift changes the PaaS equation again, because combining Docker with infrastructure automation tools like Chef and Puppet starts to look a lot like PaaS. New tools like Mesos also change the landscape when combined with Docker. Granted for those paying attention to details, Docker still has some holes in it, but don’t expect those to remain unplugged for long.
  • It’s about service. There is a clear dividing line among PaaS players between fully-managed (think: CloudBees, Heroku) and self-managed (think: any on-prem solution, AWS Elastic Beanstalk). Broadly speaking, the startups and SME customers tend to lean toward the fully-managed side, while the larger enterprises lean toward the self-managed side. The platform changes I was covering above continue to make self-service easier, while reducing the perceived value of the fully-managed approach. I say “perceived” because the gap between the perceived and actual effort to implement a PaaS and operate it at scale is huge. It’s something that is hard for people to understand, especially if they haven’t lived through it. But, perception is reality at the buying stage, even if the reality bites at delivery. The technology and organizational investment of Heroku and CloudBees to operate at scale and to deliver deep, quality service is significant, but the perception gap leads people to equate it to the labor associated with answering PagerDuties and Nagios alerts. Furthermore, as the IaaS players move more up-stack, and customers consume a broader mixture of self-service and fully-managed value-add services, the gap increases. The other difference between fully-managed vs. self-service centers around the delivery model. When you deliver as-a-service, like we do with the CloudBees PaaS, you have advantages that are not available to on-prem software delivery and support models. But, from a CloudBees perspective, with a large, growing business delivering to on-premise Jenkins Enterprise users, we really need to think of our fully-managed Jenkins more as a SaaS, not just a component of a broader PaaS offering.
What does all this change mean to the PaaS marketplace? In addition to the moves I noted earlier, you can already observe some of the impact:
  • Google consolidated their PaaS GAE and IaaS GCE stories into a single, powerful developer-savvy Google Cloud Platform story, with more consistency no doubt on the way from the mobile side of the house.
  • CenturyLink bought AppFog and Tier3, putting the combined IaaS and PaaS pieces in place to move up from being just a hosting provider.
  • IBM moved all SmartCloud Enterprise efforts onto Softlayer and consolidated PaaS efforts behind the Cloud Foundry based BlueMix to extend the life of WebSphere in the cloud. At the same time, the introduction of UrbanCode gives them DevOps coolness, at least as much coolness as a blue shop can handle.
  • Microsoft blurred the line between Azure PaaS and a real public IaaS, a clear recognition that combined there is more value and better ways to appeal to a broader audience.
  • DotCloud pivoted to become Docker, re-purposing their internal containerization investments and de-emphasizing their PaaS business.
  • Heroku aligned more closely with the Salesforce side of the house in Heroku1 - you know, the part with access to enterprise companies with deep pockets who already trust Salesforce with some of their most sensitive information.
  • Rackspace, caught in the middle without an IaaS or PaaS card to play, is floundering and looking for a buyer.
  • In a classic enemy-of-my-enemy confederation, traditional enterprise players have lined up behind OpenStack. Because of its open source heritage, Red Hat is well positioned to grab the leadership ring in what appears to be a contentious, political, but perhaps too-big-to-fail mess.
  • Looking to avoid the messiness of OpenStack but to obtain an aura of community governance around its Cloud Foundry efforts, Pivotal created a new pay-to-play Cloud Foundry Foundation and notched up a broad range of enterprise participants.
  • Amidst all this, Amazon just continues their relentless pace to add more services, the latest onslaught being aimed at mobile and collaboration.
Taken together, these changes demonstrate market consolidation, platform commoditization, a continued strength of on-prem solutions in the enterprise, and the important strategic leverage to be obtained by combining IaaS, PaaS and managed service offerings. Longer term, it calls into question whether there will even be a PaaS marketplace that is identifiable except by the most academic of distinctions. These are not trends we can ignore, particularly when we have a successful and growing business centered on Jenkins.

Amundsen Expedition
So, we’re emerging from our PaaS polar expedition. Like a triumphant Amundsen, we are leaving behind some noble competitors. We’re taking what we’ve learned and are applying the lessons toward new adventures. Jenkins is an incredible phenomenon. It’s built around an amazing open source community that is populated with passionate advocates. With its Continuous Integration roots, Jenkins sits at the center of the fundamental changes cloud has ushered in to software development - the same ones that brought CloudBees into existence in the PaaS world. Join us and follow us as we push the boundaries of Continuous Delivery using Jenkins, and as we work with the community to make sure Jenkins continues to be the tool of choice for software development and delivery both on-premise and in the cloud.


Steven Harris is senior vice president of products at CloudBees (and a fan of Roald Amundsen). 
Follow Steve on Twitter.
Categories: Companies

Advanced Git with Jenkins

Wed, 09/10/2014 - 21:09
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP Product Management, CloudBees, about a presentation given by Christopher Orr of iosphere GmbH at JUC Berlin.

Git has become the repository of choice for developers everywhere, and Jenkins supports Git very well. In the talk, Christopher shed light on advanced configuration options for the Git plugin. Cloning extremely large repositories is an expensive proposition, and he outlined a solution for speeding up builds with large repositories.

Advanced Git Options
There are three main axes for building projects: what, when and how.
Git plugin options
What to build:
The refspec option in Jenkins lets you choose what to build. By default, the plugin will build the master branch; this can be replaced with wildcards to build specific feature branches or tags. For example:


  • */feature/* will build feature branches
  • */tags/beta/* will build beta tags
  • +refs/pull/*:refs/remotes/origin/pull/* will build pull requests from GitHub

The default strategy is usually to build particular branches. So, for example, if the branch specifier is */release/*, branches release/1.0 and release/2.0 will be built, while branches feature/123 and bugfix/123 will be ignored. To build feature/123 and bugfix/123 instead, you can flip this around by choosing the Inverse strategy.

Choosing the build strategy
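For readers who prefer configuration as code, the same choices can be expressed with the Job DSL plugin covered later in this digest. A sketch, assuming a Job DSL version with the git remote context; the job name and repository URL are invented:

// Job DSL sketch - names and URL are illustrative.
job {
    name 'myapp-release-builds'
    scm {
        git {
            remote {
                url 'git@github.com:example/myapp.git'
                // Also fetch pull-request heads from GitHub:
                refspec '+refs/pull/*:refs/remotes/origin/pull/*'
            }
            // Build only release branches, e.g. release/1.0, release/2.0:
            branch '*/release/*'
        }
    }
}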
When to build:
Generally, polling should not be used; webhooks are the preferred option when configuring jobs. On the other hand, if you have a project that needs to be built nightly, and only if a commit made it to the repository during the day, that can easily be set up as follows:
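In Job DSL terms, that trigger might look like the sketch below (hypothetical job name and repository; 'H 2 * * *' polls once a night, and a build fires only if the poll finds new commits):

// Job DSL sketch - poll SCM once nightly; Jenkins builds only
// when the poll detects commits made since the last build.
job {
    name 'myapp-nightly'
    scm {
        git 'git@github.com:example/myapp.git'
    }
    triggers {
        scm 'H 2 * * *'
    }
}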



How to build:
A git clone operation is performed to clone the repository before building it. The clone operation can be sped up by using a shallow clone (no history is cloned). Builds can be sped up further by using a "reference repo" during the clone operation: the repository is cloned to a local directory, and from then on this local repository is used for subsequent clone operations, with the network accessed only if the reference repository is unavailable. Ideally, you line these up: a shallow clone for the first clone (fast clone) and a reference repo for faster builds subsequently.


Equivalent to git clone --reference option
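In Job DSL terms, the same tuning might look like this sketch (assuming a plugin version that exposes these Git options; the cache path is invented):

// Job DSL sketch - shallow first clone plus a local reference repository.
job {
    name 'myapp-fast-clone'
    scm {
        git {
            remote {
                url 'git@github.com:example/myapp.git'
            }
            shallowClone true                     // skip history on the first clone
            reference '/var/cache/git/myapp.git'  // reuse a local mirror afterwards
        }
    }
}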


Working with Large Repositories
The iosphere team uses the reference repository approach to speed up builds. They have augmented this approach by inserting a proxy server (git-webhook-proxy [1]) between the actual repo and Jenkins, so clones are made against this proxy server. The Slave Setup plugin copies the workspace over to the slaves (over NAS) and builds proceed from there. Since network access is restricted to the proxy server and each slave does a local copy, this speeds up builds considerably.


git-webhook-proxy: to speed up workspace clones
The git-webhook-proxy option seems like a compelling solution, well worth investigating if your team is trying to speed up builds.

[1] git-webhook-proxy


-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter



Categories: Companies

[Infographic] Need To Deliver Software Faster? Continuous Delivery May Be The Answer

Mon, 09/08/2014 - 19:20
More and more organizations are realizing the impact of delivering applications in an accelerated manner. Many of those that are seeking to do so are leveraging DevOps functions internally and moving towards Continuous Delivery. Did you know that 40% of companies practicing Continuous Delivery increased the frequency of code delivery by 10% or more in the past 12 months?

Do you need to deliver software faster? This infographic, based on the DevOps and Continuous Delivery survey conducted by EMA, shows why Continuous Delivery may be the answer.


Download your copy of the DevOps and Continuous Delivery paper to read the entire report based on the EMA survey.


Christina Pappas
Marketing Funnel Manager
CloudBees

Follow her on Twitter
Categories: Companies

Building Pipelines at Scale with Puppet and Jenkins Plugins

Thu, 09/04/2014 - 18:15
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP Product Management, CloudBees, about a presentation given by Julien Pivotto of Inuits at JUC Berlin.

Inuits is an open source consultancy firm that has been an early practitioner of DevOps. They set up Drupal-based asset management systems that store assets and transcode videos for a number of clients. These systems are set up such that each client has a few environments (Dev/UAT/Production); each environment has one backend, and each backend has a number of frontends. Thus, they end up managing a lot of pipelines for each Drupal site that they set up. Consequently, they need a standard way of setting up these pipelines.
In short, Inuits is a great use case for DevOps teams that are responsible for bringing in new teams and enabling them to deliver continuously and deliver software fast. 
There are simple approaches to building pipelines through the UI (clone a job) and through XML (clone config.xml), but these approaches don't scale well. Julien outlined two distinct approaches to setting up pipelines:
  • Pipelines through Puppet
  • Pipelines through Jenkins plugins
Julien Pivotto
I will focus mostly on the Puppet piece in this post, as that is a novel approach that I haven't come across before, although Julien does lean towards using standard Jenkins plugins to deliver these pipelines.

Pipelines through Puppet
Julien started with a definition of a pipeline:
     A pipeline is a chain of Jenkins jobs that are run to fetch, compile, package, run tests and deploy an application
He then showed how to set up this chain of inter-related jobs through Puppet. Usually, Puppet is used to provision the OS, apps and DB, but not application data. In his approach, he puppetized the provisioning of Jenkins and of job configurations (application data).
jobs.pp: Manifest for a standalone job
Each type of job and pipeline has a corresponding Puppet manifest that takes arguments like job name, next job, parameters, etc. Since the Promotions plugin adds some metadata into an existing job config and adds a separate configuration folder in the jobs folder, promotions have their own manifest as well. Configuration changes in the XML are done through Augeas.

With the above approach, on-boarding a team is easy: Puppet provisions a new Jenkins instance with its own set of pipelines and jobs. The history of configuration changes can be tracked in the source repository.
Pipeline.pp: Manifest for a pipeline
However, delivering these pipelines gets hard because you end up with a lot of templates, and each configuration change requires a Jenkins restart, which impacts team productivity.

Delivering pipelines through Puppet is the infrastructure-as-code approach, but although the approach is novel, the disadvantages outweigh the benefits, and Julien leaned towards using Jenkins plugins to deliver pipelines instead.

Pipelines through Jenkins Plugins

Julien talked about two main plugins to realize pipelines. These plugins are well known in the community. The novel approach is connecting these two together to deliver dynamic pipelines.

Build Flow plugin: defines pipelines through a Groovy DSL, with constructs for parallelism, conditionals, retries and rescues.

Job Generator plugin: creates and updates jobs on the fly.

Julien then combined the two: starting jobs (an orchestrator) are created using Build Flow, and subsequent jobs are generated by the Job Generator. Using conditional and parallel constructs, he can deliver complex pipelines.
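For a flavor of what such an orchestrator can look like, here is a minimal Build Flow DSL sketch; all job names and the parameter are invented for illustration:

// Build Flow DSL sketch - an orchestrator combining the constructs Julien
// mentioned: parallel branches, a retry, and a parameterized downstream job.
build("myapp-package")
parallel(
    { build("myapp-integration-tests") },
    { build("myapp-ui-tests") }
)
// Retry a flaky deployment step up to three times:
retry(3) {
    build("myapp-deploy", TARGET_ENV: "uat")
}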




The above approaches highlight two things:


  • Continuous delivery is becoming the de-facto way organizations want to deliver software and
  • Since Jenkins is the tool of choice for delivering software, it has to evolve and offer first-class constructs to help companies like Inuits deliver pipelines easily.

We at CloudBees have heard the above loud and clear over the last year. Consequently, the workflow work delivered in OSS by Jesse Glick offers these first-class constructs to Jenkins. As this work moves towards a 1.0 in OSS, we will get to the point where the definition of a pipeline will change from:


     A pipeline is a chain of Jenkins jobs that are run to fetch, compile, package, run tests and deploy an application
to:
     A workflow pipeline is a Jenkins job that describes the flow of software components through multiple stages (and teams) as they make their way from commit to production.
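For a taste of that direction, an early workflow script might look like the sketch below (the plugin was still pre-1.0 at the time, so treat this as illustrative; the repository URL and commands are invented):

// Early Jenkins Workflow DSL sketch - one job describing the whole flow.
node {
    stage 'Checkout'
    git url: 'git@github.com:example/myapp.git'

    stage 'Build and test'
    sh 'mvn -B clean verify'

    stage 'Deploy'
    sh './deploy.sh uat'
}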
-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter. 




Categories: Companies

Webinar Q&A: Role-Based Access Control for the Enterprise with Jenkins

Thu, 08/28/2014 - 17:29
Thank you to everyone who joined us on our webinar; the recording is now available.

Below are several of the questions we received during the webinar Q&A:

Q: How do you administer the groups? Manually, or is there LDAP involved?
A: You can decide whether to create internal Jenkins users/groups or import users and groups from your LDAP server. In the latter case, you can use the Jenkins LDAP plugin to import them, but you still need to manage them manually in Jenkins. Each external group has to match an internal Jenkins group so that you can assign a role to it. Roles are defined in Jenkins regardless of the origin of users and groups (internal or external).

Q: Is there any setting for views, instead of folders? Are the RBAC settings available for views?
A: In short, yes. The RBAC plugin supports setting group definitions over the following objects:
  • Jenkins itself
  • Jobs
  • Maven modules
  • Slaves
  • Views
  • Folders

Q: Are folders the only way to associate multiple Jenkins jobs with the same group?
A: The standard way to associate multiple Jenkins jobs with the same group is through folders. However, remember that you can also create groups at the job level.
Q: If we convert from the open source 'role-based strategy' plugin to this role-based plugin, will it translate the roles automatically to the new plugin?
A: Roles are not converted automatically, so you will need to set-up your new rules with the RBAC plugin.
Q: Who do we contact for more questions?
A: You can contact us on the public mailing list users@cloudbees.com.
Q: How do you create those folders in Jenkins? Is this part of the RBAC plugin, too?
A: Folders are created using the Folders plugin, which allows users to create new “jobs” of the type “folder.” The Role-Based Access Control plugin then integrates with this plugin by allowing administrators to set folder-level security roles and letting child folders inherit parent folders’ roles.
Q: Is there a permission that allows a user to see the test console steps (the bash commands that are executed)?
A: You can define a role to have only read permission for a job configuration. In this way, users with that role will only be able to read the bash commands used in the job.
Q: Do you provide any sort of API to work with these security settings programmatically?
A: At this time, there is no API to work with these security settings.
Q: Are there any security issues that one needs to take into consideration?
A: When configuring permissions for roles, be aware of the implications of allowing users of different teams or projects to have access to all of the jobs in a Jenkins instance. This open setup can occur when a role is granted overall read/execute/configure permissions.
While an administrative role would obviously require such overall access, consider limiting further assignment of those permissions to only trusted groups, like team/division leads.
Such an open setup would allow users with overall permissions to see information that you might rather restrict from them - like access to any secret projects, workspaces, credentials or scripts. 


Overall configure permissions would also allow users to modify any setting on the Jenkins master.

---


Valentina Armenise
Solutions Architect
CloudBees

Follow Valentina on Twitter.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.




Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. Read her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Configuration as Code: The Job DSL Plugin

Tue, 08/26/2014 - 17:16
This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees, about a presentation given at JUC Berlin by Daniel Spilker of CoreMedia AG, maintainer of the plugin, on how to configure a Jenkins job without using the GUI.

Daniel Spilker, from CoreMedia, presented the Job DSL plugin at JUC 2014 in Berlin and showed how the configuration-as-code approach can simplify the orchestration of complex workflow pipelines.

The goal of the plugin is to create new pipelines quickly and easily, using your preferred tools to “code” the configuration, as opposed to using different plugins and jobs to set up complex workflows through the GUI.

Indeed, the Job DSL plugin defines a new way to describe a Jenkins job configuration: a piece of Groovy code stored in a single file.

After installing the plugin, a new option will be available in the list of build steps - “Process Job DSLs” - which will allow you to parse the DSL script.

The descriptive Groovy file can be either uploaded to Jenkins manually or stored in SCM and pulled into a specific job.

The jobs whose configuration is described in the DSL script will be created on the fly, so the user is responsible for maintaining only the Groovy script.
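A minimal seed script might look like this; the job name, repository URL, polling schedule and build step are invented for illustration:

// Job DSL sketch - this single Groovy file is the only artifact to maintain;
// running the DSL build step (re)creates the job it describes.
job {
    name 'myapp-build'
    scm {
        git 'git@github.com:example/myapp.git'
    }
    triggers {
        scm 'H/15 * * * *'   // poll roughly every 15 minutes
    }
    steps {
        maven 'clean verify'
    }
}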

Each DSL element used in the Groovy script matches a specific plugin's functionality. The community is continuously releasing new DSL elements in order to cover as many plugins as possible.





Of course, given the 900+ plugins available today and the frequency of new plugin releases, it is practically impossible for the DSL plugin to cover all use cases.

Herein lies the strength of this plugin: although each Jenkins plugin needs a corresponding DSL element, you can cover the gaps yourself with the configure method, which gives direct access to the underlying XML of the Jenkins config.xml. This means that you can use the DSL plugin to code any configuration, even if a predefined DSL element is not available.
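A sketch of that escape hatch; the property element is made up, and the point is simply the direct XML manipulation:

// Job DSL sketch - the configure block hands you the root node of the job's
// config.xml, so you can set options no predefined DSL element covers yet.
job {
    name 'myapp-custom'
    configure { project ->
        // '/' finds or creates a child node; '<<' appends a new child
        // built from the closure.
        project / 'properties' << 'org.example.ExampleJobProperty' {
            enabled 'true'
        }
    }
}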

The plugin also makes it possible to introduce custom DSL commands.

Given the flexibility of the DSL plugin, and how quickly the community delivers new DSL elements (a new feature every six weeks), this plugin seems to be a really interesting way to put Jenkins configuration into code.


Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.


Categories: Companies

Integrated Pipelines with Jenkins CI

Wed, 08/20/2014 - 15:55
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Félix Belzunce, solutions architect, CloudBees, about a presentation given by Mark Rendell, Accenture, at JUC Berlin.

Integrated Pipelines is a pattern that Mark Rendell uses at Accenture to reduce the complexity of integrating different packages when they come from different source control repositories.

The image below, which was one of the slides that Mark presented, represents the problem of building several packages that will need to be integrated at some point. Which build version to use, how to manage the control flow, and what exactly to release are the main pain points when you are working on such an integration.


Mark proposes a solution where you create not only a CI pipeline but also an integration pipeline to fix the problem. To stop displaying all the downstream jobs inside the pipeline, Mark uses a Groovy script. For deploying the right version of the application, several approaches could be used: Maven, Nexus or even a simple plain text file.


The pattern can scale up, but using this same concept for microservices could indeed be a big challenge, as the number of pipelines significantly scales up. As Mark pointed out, it can be applied not only to microservices or applications; the same concept in Jenkins can also be used when you do Continuous Delivery to manage your infrastructure.

You might use similar job configurations across your different pipelines. The CloudBees Templates plugin is useful for templatizing your different jobs, saving you time and making the process more reliable. It also allows you to make a one-time modification in the template, which is automatically pushed to all the jobs without having to go individually from one job to another.

View the slides and video from this talk here.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.
Categories: Companies

Webinar Q&A: "Scaling Jenkins in the Enterprise"

Thu, 08/14/2014 - 22:13
Thank you to everyone who joined us on our webinar; the recording is now available.

Below are some of the questions we received during the webinar:

Q: How do you implement HA on the data layer (jobs)?  Do you have the data hosted on a network drive?

A: Yes - the 2 masters (primary and failover) share a filesystem visible to both over a network. You can read about HA setup here.

Q: I would like to know how to have a different UI instead of the Jenkins UI. If I want to customize the Jenkins UI, what needs to be done?

A: There are plugins in the open source community that offer customizable UIs for Jenkins: Simple Theme Plugin is one popular example.

            Q: I want to have a new UI for Jenkins. I want to limit certain things for the Jenkins
                 user.


            A: Interesting. What types of things? A lot of the Jenkins Enterprise plugins allow admins to
                exercise stricter limits on different roles' access to certain functions in Jenkins, whether
                that be through templating or Role-based access control with Folders. The Jenkins
                Enterprise templates also allow you to “hide” some configuration parameters.


            Q: Let's take simple example. I want to have a very simple UI for the parameterized
                 build where a user can submit the SRC path and the build script name. He
                 submits that job by specifying the above two values. How we can have a very
                 simple UI instead of Jenkins UI?


            A: Okay - this is exactly the use case that the job template was designed for. See
                 the last image in the job template tutorial.


            Q: Looks like it will work. How can I get rid of the left-hand Jenkins menu?

            A: You can remove most of the options in that menu with the Role-Based Access
                Control plugin - you can remove certain roles' ability to create new jobs, configure
                the system, kick off builds, delete projects, and see changes/the workspace, etc.,
                which will remove almost all of the options in that menu.

Q: We use the open source version of Jenkins and we have been facing an issue with parsing the console log. We use curl, and there is a limit of only 10,000 lines of console text displayed. Will this enterprise edition handle that issue?

A: It sounds like you're seeing Run.doConsoleText being truncated, though it seems there shouldn't be a 10,000-line limit; I just checked the sources and it looks to send the full log, regardless of size.

It has come to our attention that this answer is incorrect. Daniel Beck clarifies:

A: While a build is running, LargeText in Stapler (used by doConsoleText) truncates output after 10k lines since Jenkins 1.447.1. This is not configurable and caused e.g. JENKINS-23660.

https://github.com/stapler/stapler/blob/master/core/src/main/java/org/kohsuke/stapler/framework/io/LargeText.java#L226
https://github.com/stapler/stapler/blob/master/core/src/main/java/org/kohsuke/stapler/framework/io/LargeText.java#L547

Reproduce using e.g. shell build step with the script:

#!/bin/bash
# Print 20,000 lines so the live console output exceeds the 10k-line limit
for I in $( seq 1 20000 ) ; do echo $I ; done
# Keep the build running so /consoleText can be fetched while in progress
sleep 60

Open .../consoleText while the script sleeps. It'll end around ~9996, depending on the number of log lines before the script is executed.

Q: Is there a customizable workflow capability to allow me to configure some change control and release management process for enterprise?
A: The Jenkins community is currently developing a workflow plugin (0.1-beta at the moment). Jesse Glick, engineer at CloudBees, did a presentation about it at the '14 Boston JUC. CloudBees is working on enterprise workflow features such as checkpoints as a part of Jenkins Enterprise by CloudBees.
Q: Is there any framework/processes/checklists that you follow to ensure the consistency/security of multi-tenant slaves across multiple masters?
A: Please see the recording of the webinar for the answer
            Q: Is there a way to version control job configuration?
            A: Yes - CloudBees offers a Backup Plugin that allows you to store your job configs
                 in a tar ball. You can set how long to retain these configs and how many to keep,
                 just as you would for a job's run history. You can also use the Jenkins
                Job Configuration History plugin.

            Q: This backup plugin is available with the open source version of Jenkins?
            A: The backup plugin that I'm speaking of is only a part of
                the Jenkins Enterprise package of plugins.

Q: How is environment-specific deployment done through the same project configuration in Jenkins?
A: You can use CloudBees' Template plugin to define projects and then have a job template take environment variables - pulling them from a parent folder with Groovy scripting, or taking them from user input using the parameterized builds plugin: http://developer-blog.cloudbees.com/2013/07/jenkins-template-plugin-and-build.html
http://jenkins-enterprise.cloudbees.com/docs/user-guide-bundle/template-sect-job.html

Q: Do we need to purchase additional licenses if we want to set up an upgrade/evaluate validation master and slaves, as you recommend?
A: For testing environments, CloudBees subscription pricing is different - it is cheaper. For evaluation, I recommend just doing a trial of both to see which fits your needs better. You can request a 30-day trial of Jenkins Enterprise here.
Q: Is this LDAP group access only available in the enterprise version? I am asking if I can make it so that some users can only see the jobs of their group.
A: Jenkins OSS supports LDAP authentication. The Role Based Access Control authorization provided by Jenkins Enterprise by CloudBees allows you to apply RBAC security on groups defined in LDAP. You can then put the jobs in folders using the Folders/Folders Plus Plugin and assign read/write/etc permissions over those folders using the CloudBees RBAC plugin.
            Q: Another question. What's the difference between having dedicated slaves
                 with your plugin/addon and just adding another slave with another label?

            A: Dedicated slaves cannot be shared with another master - only with the master
                they've been assigned to - whereas shared slaves with just labels are still open
                for use by any masters that can connect to them.

Q: At this moment my organization is planning to implement open source Jenkins. Does CloudBees provide training or ad hoc consultancy for client environments in order to implement Jenkins with best practices, saving time, money and resources?
A: CloudBees service partners provide consulting and training. The training program is written by CloudBees.
Q: Can I use LDAP for authentication, but create and manage groups (and membership) locally in Jenkins? For us, creating groups and managing them in the corporate LDAP is a very heavyweight process (plus, there is support only for static LDAP groups, not dynamic). Clarification: we have a corporate LDAP and want to use it for authentication, but I do not want to use LDAP to host or manage groups in any way - I want to do that in Jenkins.
A: Yes, with the Role-Based Access Control security provided by Jenkins Enterprise by CloudBees, you can declare users in LDAP and declare groups and associate users in Jenkins. A Jenkins group can combine users and groups declared in LDAP. You can define users in your authentication backend (LDAP, Active Directory, the Jenkins internal user database, OpenID SSO, Google Apps SSO ...) and manage security groups in Jenkins with the CloudBees RBAC plugin.
Q: Is the controlled slaves feature available in the Enterprise version only?
A: Yes - this is a feature of the CloudBees Folders Plus plugin.
Q: Is there a way to version control job configuration?
A: Yes - CloudBees offers a Backup Plugin that allows you to store your job configs in a tarball. You can set how long to retain these configs and how many to keep, just as you would for a job's run history. You can also use the Jenkins Job Configuration History plugin.
Q: Can I start implementing Jenkins Operations Center as a monitoring layer for teams that are already set up with Jenkins OSS? Over time I would move them to Jenkins Enterprise, but we need to progress in small iterative stages.
A: Jenkins OSS masters must be converted into Jenkins Enterprise by CloudBees masters. You can do this either by installing the package provided by CloudBees or by installing the “Enterprise by CloudBees” plugin available in the update center of your Jenkins console. Please remember that a Jenkins OSS master must be upgraded to the LTS or to the ‘tip’ before installing the “Enterprise by CloudBees” plugin.
Q: What is the purpose of the HA proxy?
A: HAProxy is an example of a load balancer used to set up High Availability of Jenkins Enterprise by CloudBees (JEBC) masters (it could also be another load balancer, such as F5 BIG-IP, Cisco ...). More details are available on the JEBC High Availability page and in the JEBC User Guide / High Availability.
Q: If builds run on slaves and Jenkins Operations Center manages them, what is the use of masters?
A: JOC is the orchestrator: it manages which slaves are in the pool, which masters need a slave and which masters are connected. The masters are still where the jobs/workflows are configured and where the results are published.
Q: Is there functionality for a preflight/proof build - i.e. a build with local dev changes grabbed from the developer's desktop?
A: Jenkins Enterprise by CloudBees offers the Validated Merge plugin that allows the developer to validate their code before pushing it to the source code repository.
Q: Currently we are using the OSS version with 1 master and 18 slaves with 60 executors, and we face performance issues; as a workaround we bounce the server once a week. Any clue to debug the issue?
A: We would need more information to help diagnose performance problems, but with the CloudBees Support plugin and a CloudBees support plan, you can create a Support Bundle and send it to our support team along with a description of your performance problem.
Q: How do I create dummy users and assign passwords (not using LDAP, AD or any security tool) just for testing my trial Jenkins jobs? (Jenkins open source)
A: Use the "Mock Security Realm" plugin and add dummy users with the syntax "username groupname" under the Global Security Settings
Q: Can you have shared slave groups?  For example, slave group "A"  and within it have sub group "A-Linux5", "A-Linux6", etc...
A: Yes, you can do this with folders in Jenkins Operations Center. A detailed tutorial is available here.
For example, with groups “us-east” and “us-west”, you could create folders “us-east” and “us-west”:
  • In the “us-west” folder, you would declare the masters and slaves of the West coast (e.g. san-jose-master-1, palo-alto-master-1, san-jose-slave-linux-1, san-francisco-slave-linux-1 ...).
  • In the “us-east” folder, you would declare the masters and slaves of the East coast (e.g. nyc-master-1…).
Thanks to this, the West coast masters will share the West coast slaves. More subtle scenarios can be implemented with hierarchies of folders, as explained in the tutorial.
Q: How do you implement HA on the data layer (jobs)?  Do you have the data hosted on a network drive?
A: Yes - the 2 masters (primary and failover) share a filesystem visible to both over a network. You can read about HA setup here.
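As a minimal sketch, assuming plain NFS, the shared $JENKINS_HOME mount on both masters could look like this (the server name and paths are placeholders, not a recommendation for your environment):

    # /etc/fstab entry on both the primary and the failover master
    # (nas.example.com and /export/jenkins are hypothetical)
    nas.example.com:/export/jenkins  /var/lib/jenkins  nfs  rw,hard,intr  0 0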

--- Tracy Kennedy & Cyrille Le Clerc

Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. Read her Meet the Bees blog post and follow her on Twitter.


Cyrille Le Clerc
Elite Architect
CloudBees

Cyrille Le Clerc is an elite architect at CloudBees, with more than 12 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.
Categories: Companies

Building Resilient Jenkins Infrastructure

Thu, 08/14/2014 - 15:22
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP product management, CloudBees about a presentation given by Kohsuke Kawaguchi from CloudBees at JUC Boston.

A talk by Kohsuke Kawaguchi is always exciting. It gets triply exciting when his talk bundles three in one. 
Scaling Jenkins horizontally
Kohsuke outlined how organizations scale Jenkins today: either vertically or organically (numerous Jenkins masters abound in the organization). He made the case that the way forward is to scale horizontally: a Jenkins Operations Center by CloudBees master manages multiple Jenkins masters across the organization. This approach helps organizations share resources (slaves) and enforce a unified security model through the Role-Based Access Control plugin from CloudBees.
Jenkins Operations Center by CloudBees
This architecture lets administrators maintain a few big Jenkins masters that can be managed by the operations center. This effectively builds an infrastructure that fails less and recovers from failures faster.


Right-sized Jenkins masters

Bursting to the cloud (through CloudBees DEV@cloud)
He then switched gears to address a use case where teams can start using cloud resources when they run out of build capacity on their local build farm. He walked through the underlying technology pieces built at CloudBees using LXC.
CloudBursting: Supported by LXC containers on CloudBees
The neat thing about this technology is that we have used it to offer OS X build slaves in the cloud. We have an article [2] that highlights how to use cloud bursting with CloudBees. The key advantage is that users pay for builds by the minute.
Traceability
Organizations are looking at continuous delivery to deliver software more often. They often use Jenkins to build binaries and tools such as Puppet and Chef to deploy those binaries into production. However, if something goes wrong in the production environment, it is quite a challenge to tie the failure back to the commit that caused it. The traceability work in Jenkins ties up this loose end: post-deployment, Puppet/Chef notifies a Jenkins plugin, and Jenkins calculates the binary's fingerprint and maintains it in its internal database. This fingerprint can be used to track where commits have landed and help diagnose failures faster. We have an article [3] that describes how to set this up with Puppet.
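Conceptually, a Jenkins fingerprint is an MD5 checksum of an artifact's bytes, so the same binary can be recognized wherever it travels. A minimal Groovy sketch of the idea (illustrative only - not the plugin's actual code, and the artifact path is made up):

    import java.security.MessageDigest

    // Compute an MD5 checksum of an artifact - conceptually what a
    // Jenkins fingerprint is.
    def fingerprint(File artifact) {
        def md5 = MessageDigest.getInstance('MD5')
        artifact.eachByte(8192) { buf, len -> md5.update(buf, 0, len) }
        md5.digest().collect { String.format('%02x', it) }.join()
    }

    println fingerprint(new File('target/myapp.war'))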

Fingerprints flow through Jenkins, Puppet and Chef
[1] Jenkins Operations Center by CloudBees
[2] Bursting to the cloud
[3] Traceability example

-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter


Categories: Companies

Automation, Innovation and Continuous Delivery - Mario Cruz

Tue, 08/12/2014 - 18:08
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steve Harris, SVP Products, CloudBees about a presentation given by Mario Cruz of Choose Digital at JUC Boston.

Choose Digital is a longstanding CloudBees customer, and Mario Cruz, founder and CTO has been a vocal supporter of CloudBees and continuous delivery. So, it was fun to have a chance to hear Mario talk about how they use continuous delivery to fuel innovation at Choose Digital at the recent Jenkins User Conference in Boston (slides, video).

Mario began by talking about what Choose Digital does as a business. They host millions of music downloads, along with movies, TV shows and eBooks, offered as a service - a kind of "white-label iTunes." Choose Digital's service is used by companies like United, Marriott and SkyMall to offer rewards. Pretty much all of this runs on CloudBees and is delivered using Jenkins as their continuous delivery engine.

The thesis of Mario's presentation is that innovation is really the next evolution of continuous delivery. From my perspective, this is probably the biggest strategic advantage a continuous delivery organization gets from its investment. Still, it's hard to quantify, and it can come across as marketing hot air or the search for unicorns. Being able to experiment cheaply and quickly, with low risk, and have an ability to make data-driven product choices are huge advantages that a continuous delivery shop has over its more traditional competition. Fortunately, Mario is able to speak from experience!

To set the stage, he covered Choose Digital's automation and testing processes. They are a complete continuous delivery shop - every check-in kicks off a set of tests, and if successful, deploys to production. Everything is automated using Jenkins and deployed to CloudBees. They are constantly pushing, constantly building, and their production systems are "never more than a couple of hours behind". The rest of Mario's talk was about the practices, both operational and cultural, they have used to get to this continuous delivery nirvana. Some of Choose Digital's practices include:

  • Developer control. They follow the Amazon "write the press release first" style. Very short specs identify what they want to achieve, but the developer is given control over how to make that happen; i.e., specs identify the "what" not the "how", so that developers are in control and empowered. But, this requires...
  • Trust. Their culture and processes disincentivize the need for heroes, and force a degree of excellence from everyone. For that to work, they need a...
  • Blameless culture. Tools like extensive logging and monitoring give everyone what they need to find and fix issues quickly and efficiently.
  • Core not context. They ruthlessly offload anything that is not core to their business. Mario talked about avoiding "smart people disease", where smart people are attracted to hard problem solving, even if it's not what they should be doing. By offloading infrastructure, and even running of Jenkins, to service providers who are specialists in their area, Choose Digital has been able to stay hyper-focused on their business and quickly improve their offerings. In particular, that means...
  • No heavy lifting. Just because you're capable and might even be great at some of the heavy lifting to support infrastructure or some technical area (like search), that's not what you should be doing if it's not a core part of the business. This is one of the main reasons Choose Digital is using CloudBees and AWS services.
  • Responsibility. If you write code at Choose Digital, you are on call to support it when it's deployed. To me the goodness enabled by this simple rule is one of the biggest wins of the as-a-service continuous delivery model (everything at Choose Digital is API-accessed by their customers).
  • Use feature flags. Mario went into some detail about how Choose Digital uses feature flags to deliver incrementally, experiment, do A/B testing, and even interact with specific customers directly and in proofs of concept. (A minimal sketch of the idea follows this list.)
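To make the feature-flag idea concrete, here is a minimal Groovy sketch; the flag names and in-memory store are invented for illustration, and the talk did not show Choose Digital's actual implementation:

    // Minimal feature-flag sketch. A real store would live in a
    // database or configuration service rather than a static map.
    class FeatureFlags {
        static Map<String, Boolean> flags = [betaSearch: true, newCheckout: false]
        static boolean isEnabled(String name) { flags.get(name, false) }
    }

    // Route a request down the experimental or the stable path.
    if (FeatureFlags.isEnabled('betaSearch')) {
        println 'serving the experimental search path'
    } else {
        println 'serving the stable search path'
    }

Flipping a flag turns a risky all-or-nothing release into a reversible configuration change, which is what makes incremental delivery and A/B testing cheap.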

Mario is a quotable guy, but I'd say the money quote of his presentation was: "Once you make every developer in the room part of what makes the company's bottom line move forward, they'll start thinking like that." In a lot of ways, that's what continuous delivery is all about. It's great to have customers who walk the walk and talk the talk. Thanks, Mario!



Steven Harris is senior vice president of products at CloudBees. 
Follow Steve on Twitter.
Categories: Companies

Meet the Bees: Tracy Kennedy

Mon, 08/11/2014 - 16:30

At CloudBees, we have a lot of seriously talented developers. They work hard behind the scenes to keep the CloudBees continuous delivery solutions (both cloud and on-premise) up-to-date with all the latest and greatest technologies, gizmos and overall stuff that makes it easy for you to develop amazing software.
In this Meet the Bees post, we buzz over to our Richmond office to catch up with Tracy Kennedy, a solutions architect at CloudBees.

Tracy has a bit of an eccentric background. In college, she studied journalism and, in 2010, interned for the investigative unit of NBC Nightly News. She won a Hearst Award for a report she did about her state’s delegates browsing Facebook and shopping during one of the last legislative sessions of the season. She had several of her stories published in newspapers around the state. Sounds like the beginnings of a great journalistic career, right?

Well, by the time she graduated, Tracy ended up being completely burned out and very cynical about the news industry. Instead of trying to get a job in journalism, she wanted to make a career change.

Tracy's dad was a programmer and he offered to pay for her to study computer science in a post-bachelor’s program at her local university. He had wanted her to study computer science when she first started college, but idealistic Tracy wanted to first save the world with her hard-hitting reporting skills. She now took him up on his offer, and surprisingly, found she had a knack for technology.

Tracy landed a job at a small web development shop in Richmond as a QA and documentation contractor. The work tickled her journalistic skills as well as her newly budding computer science skills and she had a great opportunity to be mentored by some really talented web developers and other technical folks while she was there.

By the time Tracy felt ready to look for more permanent work, she had finished some hobby projects of her own that furthered her programming skills better than any class she had taken. It was also at that time that Mike Lambert, VP of Sales - Americas at CloudBees, was looking for someone with Tracy's skills and experience.
You can follow Tracy on Twitter: @Tracy_Kennedy
Who are you? What is your role at CloudBees? My name is Tracy Kennedy and I’m a solutions architect/sherpa at CloudBees.

My primary role is to reach out to customers on our continuous delivery cloud platform and assist them in on-boarding and learning how to use the platform to its fullest potential. However, I work on other things, too. My role actually varies wildly; it really just depends on what the current needs of the organization are.

Tracy with her dog Oliver.

I’ve dabbled in some light marketing by writing emails for and sometimes creating customer communication campaigns, done lots of QA work when debugging our automated sherpa funnel campaign and do a bit of sales engineering, as well, since I’m physically located in the Richmond sales office. I also write some of our documentation as I find the time and identify the need for it.

Lately, I’ve also been spending a good chunk of my week working on updating our Jenkins training materials for use by our CloudBees Service Partners and laying the foundation for future sherpa outreach campaigns.

When those projects are done, I plan on going back to work on a Selenium bot that will automate a lot of my weekly tasks involving the collection of customer outreach statistics. I’m hoping that bot will give me more free time to spend learning about Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees - our on-premise Jenkins solutions, and to create some ClickStacks for RUN@cloud.

What makes CloudBees different from other PaaS and cloud computing companies?
CloudBees has a really, really excellent "Jenkins story" as the business guys like to say, and that story is really almost like a Dr. Seuss book in its elegant simplicity. Ahem:

Not only is Tracy a poet, but she is a budding actress!
Here she is as an extra in a Lifetime movie.

I can use Jenkins on DEV@cloud
I can hide Jenkins from a crowd

I can load Jenkins to on-premise machines
I can access Jenkins by many means

I can use Jenkins to group my jobs
I can use Jenkins to change templated gobs

I can use Jenkins to build mobile apps
I can use Jenkins to check code for cracks

I can keep Jenkins up when a master is down
I can “rent” slaves to Jenkins instances all around

I can use Jenkins here or there,
I can use Jenkins anywhere.

Don’t worry; I have no plans on quitting my day job to become a poet laureate!

What are CloudBees customers like? What does a typical day look like for you?
CloudBees PaaS customers can range from university students to enterprise consultants. It’s also not uncommon to see old school web gurus open an account and “play around” with it in an attempt to understand this crazy new cloud/PaaS sensation.
I’ve even seen some non-computer science engineers on our platform who are just trying to learn how to program, and those are my favorite customers to interact with since they’re almost always very bright and seem to have an unparalleled respect for the art of creating web applications. It’s always a great delight to be able to “sherpa” them along on their web dev journey and to see them succeed as a result.
As for my typical day, I actually keep track of each of my days’ activities in a Google Calendar, so I can give you a pretty accurate timeline of my average day:

8:30 or 8:45 am - Roll into the Richmond office, grab some coffee. Start reading emails that I received overnight and start replying as needed. Check the engineering chat for any callouts to me and check Skype for any missed messages.

9:30 am - Either start responding to customer emails or start working on whatever the major project of the day is. If it’s something serious or due ASAP, I throw my headphones on to help me concentrate and tune out the sales calls going on around me.

12:00 pm - Lunch at my desk while I read articles on either arstechnica.com, theatlantic.com, or one of my local news sites.

1:00 pm - Usually by this point, someone will have asked me to review an email or answer a potential customer’s question, so this is when I start working on answering those requests.

Tracy after doing the CrossFit workout "Cindy XXX."



3:00 pm - Start moving forward a non-urgent project by contacting the appropriate parties or doing the relevant research.

The end of my day varies depending on the day of the week:
  • Monday/Wednesday - 4:00 pm  - Leave to go to class
  • Tuesday/Thursday - 5 pm  - Leave for the gym
  • Friday - 5:30 pm  - Leave for home

Tracy's motorcycle: a 1979 Honda CM400

In my spare time, video games are a fun escape for me and they give me a cheap way of tickling my desire to see new places. Sometimes I spend my Friday nights playing as a zombie-apocalypse survivor in DayZ and exploring a pseudo-Czech Republic with nothing but a fireman’s axe to protect me from the zombie hordes.

On the weekends I spend my time playing catch-up on chores, hanging out with my awesome and super-spoiled doggie and going on mini-adventures with my boyfriend. Richmond has a lot of really beautiful parks, and we hike through one of them each weekend if the weather’s conducive to it.

When I can get more spare time during the week, I plan on finishing restoring my motorcycle and actually riding it, renovating my home office into a gigantic closet for all of my shoes and girly things, and learning how to self-service my car.



What is your favorite form of social media and why?
Twitter -- I enjoy the simplicity of it, how well it works even when my wi-fi or cellular data connection is terrible, and how easy it makes following my favorite news outlets.
Something we all have in common these days is the constant use of technology. What’s your favorite gadget and why?
While I’d love to name some clever or obscure gadget that will blow everyone’s mind, the truth is that I’d be completely lost without my Android smartphone. I use it to manage my time via Google Calendar, check all 10 million of my email accounts with some ease and stay up to date on any breaking news events. Google Maps also keeps me from getting hopelessly lost when driving outside of my usual routes.
Favorite Game of Thrones character? Why is this character your favorite?
Sansa Stark, Game of Thrones

Please note that book-wise I’m only on “Storm of Swords” and that I’m completely caught up on the HBO show, so I’m only naming my favorite character based on what I’ve seen and read so far. Some light spoilers below:

While I know she’s not the most popular character, I really like Sansa Stark. Sure, she’s not the typical heroine who wields swords or always does the right thing, but that’s part of her appeal to me. I like to root for the underdogs, and here we have this flawed teenager who’s struggling to survive her unwitting entanglement in an incredibly dangerous political game. She has no fighting skills, no political leverage beyond her name, and no true allies, and she’s trapped in a city with and by her psychopathic ex-fiancé whose favorite past time is to literally torture her.

The odds of Sansa surviving such a situation seem very slim, and yet despite her naïveté, she’s managing to do just that while the more conventional “heroes” of the story are dropping like flies. I could very well see her learning lessons from the fallen’s mistakes and applying them to any leadership roles she takes on in the future. Is she perhaps a future Queen of the North? I wouldn’t discount it.
Sansa is a bright girl with the right name and the right disposition to gracefully handle any misfortunes thrown her way, and aren’t grace, intelligence and a noble lineage all the right traits for a queen? I think so, but we’ll just have to see if George R.R. Martin agrees.
Categories: Companies

Amadeus Contribution to the Jenkins Literate Plugin and the Plugin's Value

Thu, 08/07/2014 - 17:28
This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees, about a presentation called "Going Literate in Amadeus" given by Vincent Latombe, Amadeus at JUC Berlin.
The Literate plugin is built on top of the literate programming concept introduced by Donald Knuth: the idea that a program can be described in natural language, such as English, rather than in a programming language. The description is translated automatically into the source code used by the scripts, in a process completely transparent to users.
The Literate plugin is built on top of two APIs:
  • Literate API, responsible for translating the descriptive language into source code
  • Branch API, which is the toolkit to handle multi-branch projects:
    • SCM API - provides the capability to interact with multiple heads of the repository
    • capability to tag some branches as untrusted and skip those
    • capability to discard builds
    • foundation for multi-branch freestyle project
    • foundation for multi-branch template project

Basically, the Literate plugin lets you describe your environment, together with the build steps your job requires, in a simple file (either a marker file or the README.md). The plugin queries the repository looking for one or more branches that contain this descriptive file. If more than one branch contains the file - making it eligible to be built in a literate way - and no specific branch is specified in the job, then the branches are built in parallel. This means that you can create multi-branch projects where each branch requires different build steps or simply different environments.
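As a rough sketch, a literate README.md could look like the following; the project name and build command are invented here, and the exact section conventions are defined by the plugin:

    # My Application

    ## Build

        mvn -B clean verify

Each eligible branch carries its own copy of this file, so each branch can declare its own build steps.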
The use of the Literate plugin becomes quite interesting when you need to define templates with customizable variables or to whitelist build sections.
Amadeus has invested resources in Jenkins in order to accomplish continuous integration. Over the years they have specialized in the use of the Literate plugin to make the creation of jobs easier, and they have become a contributor to the plugin.

Vincent Latombe presenting his talk at JUC Berlin.
Click here to watch the video.
And click here to see the slides.
In particular, Amadeus invested resources in enhancing the plugin usage experience by introducing the use of YAML, a descriptive language that leaves less room for error than the traditional - and very open-ended - Markdown.
How do we see the Literate plugin today?
With the introduction of CI, there are conversations going on about what is the best approach in merging and pulling changes to repositories.
Some people support the “feature branching” approach, where each new feature is a new branch and is committed to the mainline only when ready to be released in order to provide isolation among branches and stability of the trunk.
Although this approach is criticized by many who think that it is too risky to commit the whole new feature at once, it could be the best approach when the new feature is completely isolated from the rest (a completely new module) or in open source projects where a new feature is developed without deadlines and, thus, can take quite a while to be completed.
The Literate plugin works really well with the feature branching approach described above, since it would be possible to define different build steps for each branch and, thus, for each feature.
Also, this approach gets along really well with the concept of continuous delivery, where the main idea is that the trunk has to be continuously shippable into production.
How does it integrate with CD tools?
Today, we’re moving from implementing CI to CD: Jenkins is not a tool for developers only anymore but it’s now capturing the interest of Dev-Ops.
By using plugins to implement deployment pipelines (ie. Build Pipeline plugin, Build Flow plugin, Promotion plugin), Jenkins is able to handle all the phases of the software lifecycle.
The definition of environments and agents to build and deploy to is provided with integration to Puppet and Chef. These tools can be used to describe the configuration of the environment and apply the changes on the target machines before deployment.
At the same time, virtualization technologies that allow you to create software containers, such as Docker, are getting more and more popular.
How could literate builds take part in the CD process?
As said before, one of the things the Literate plugin simplifies is the definition of multiple environments and build steps through a single file: the build definition is stored in the same SCM repository as the job being built.
This means that the Literate plugin gets along really well with the infrastructure as code approach and tools like Docker or Puppet where all the necessary files are stored in the SCM. Docker, in particular, could be a good candidate to work with this plugin, since a Docker image is completely described by a single file (the Dockerfile) and it’s totally self-contained in the SCM.
What's next?
Amadeus is looking to add new features to the plugin in the near future:
  • Integration with GitHub, Bitbucket and Stash pull request support
  • Integration with isolation features (i.e. sandbox commands within the container)

Do you want to know more?



Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.


Categories: Companies

Automating CD pipelines with Jenkins - Part 2: Infrastructure CI and Deployments with Chef

Tue, 08/05/2014 - 17:56
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a presentation given by Dan Stine, Copyright Clearance Center at JUC Boston.

In a world where developers are constantly churning code changes and Jenkins is building those changes daily, there is also a need to spin up test environments for those builds in an equally fast fashion.

To respond to this need, we’re seeing a movement towards treating “infrastructure as code.” This goes beyond simple BAT files and shell scripts -- instead, “infrastructure as code” means that you can automate the configurations for ALL aspects of your environment, including the infrastructure and the operating system layers, as well as infrastructure orchestration with tools like Chef, Ansible and Puppet.

These tools’ automation scripts are version controlled like the application code, and can even be integrated with the application code itself.

While configuration management tools date back to at least the 1970s, this way of treating infrastructure code like application code is much newer and can be traced to at least CFEngine in the 90s. Even then, these declarative configuration tools didn’t start gaining popularity until late 2011.


Infrastructure CI
This rise of infrastructure code has created a new use case for Jenkins: as a CI tool for an organization’s infrastructure.

At the 2014 Boston Jenkins User Conference, Dan Stine of the Copyright Clearance Center presented how he and his organization met this challenge. According to Stine, the Copyright Clearance Center’s platform efforts began back in 2011. They saw “infrastructure as code” as an answer to the plight of their “poor IT ops guy,” who was being forced to deploy and manage everything manually.

Stine compared the IT ops guy to the infamous “Brent” of The Phoenix Project: all of their deployments hinged on him and, overwhelmed by the load, he became the source of their bottlenecks.

To solve this problem, they set two goals to improve their deployment process:
1. Reduce effort
2. Improve speed, reliability and frequency of deployments

Jenkins and Chef
As for the tools to accomplish this, the organization specifically picked Jenkins and Chef, as they were already familiar and comfortable with Jenkins and knew both tools had good communities behind them. They also used Jenkins to coordinate with Liquibase to execute schema updates, since Jenkins is a good general-purpose job executor.

They installed the Chef client onto nodes they registered on their Chef server. The developers would then write code on their workstations and use tools like Chef’s “knife” to interact with the server.

Their Chef code was stored in GitHub, and they pushed their Cookbooks to the Chef server.

For Jenkins, they would give each application group their own Cookbook CI job and Cookbook release job, which would be run by the same master as the applications’ build jobs. The Cookbook CI jobs ran any time that new infrastructure code was merged.

They also introduced a new class of slaves, which had the required RubyGems installed for the Cookbook jobs and Chef with credentials for the Chef server.

Cookbook CI Jobs and Integration Testing with AWS
The Cookbook CI jobs first run static analysis of the code’s JSON, Ruby and Chef syntax, followed by integration testing using the kitchen-ec2 plugin to spin up an EC2 instance in a way that mimics the actual deployment topology for an application.
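A .kitchen.yml for such a setup might look roughly like this (the region, platform and recipe names are placeholders; Stine's actual configuration was not shown):

    ---
    driver:
      name: ec2
      region: us-east-1
      instance_type: t2.micro
    provisioner:
      name: chef_solo
    platforms:
      - name: ubuntu-12.04
    suites:
      - name: default
        run_list:
          - recipe[myapp::default]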

Each EC2 instance was created from an Amazon Machine Image that was preconfigured with Ruby and Chef, and each instance was tagged for traceability purposes. Stine explained that they would also run chef-solo on each instance to avoid having to connect ephemeral nodes to their Chef server.

Cookbook Release Jobs
The Cookbook release jobs were conversely triggered manually. They ran the same tests as the CI jobs, but would upload new Cookbooks to the Chef server.

Application Deployment with Chef
From a workstation, code would be pushed to the Chef repo on GitHub. This would then trigger a separate Jenkins master dedicated to deployments. This deployment master would then pull the relevant data bags and environments from the Chef server. The deployment slaves kept the SSH keys for the deployment nodes, along with the required gems and Chef with credentials.

Stine then explained the two deployment job types for each application:

1. DEV deploy for development
2. Non-DEV deploy for operations

Non-DEV jobs took an environment parameter to define where the application would be deployed, while both types took application group version numbers. These deployment jobs would edit application data bags and application environment files before uploading them to the Chef server, find all nodes in the specified environment with the deploying app’s recipes, run the Chef client on each node and send an email notification with the result of the deployment.
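Stine did not show the exact commands, but as a sketch the flow maps onto standard knife operations like these (the data bag, environment and recipe names are placeholders):

    # Upload the edited data bag and environment definition
    knife data bag from file apps data_bags/apps/myapp.json
    knife environment from file environments/dev.json

    # Find the nodes running the app in the target environment
    knife search node 'chef_environment:dev AND recipe:myapp' -a name

    # Run the Chef client on each matching node
    knife ssh 'chef_environment:dev AND recipe:myapp' 'sudo chef-client'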


Click here for Part 1.


Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (A Meet the Bees blog post about Tracy is coming soon!) For now, follow her on Twitter.
Categories: Companies

Multi-Stage CI with Jenkins in an Embedded World

Thu, 07/31/2014 - 16:27
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steve Harris, SVP Products, CloudBees about a presentation given by Robert Martin of BMW at JUC Berlin.


Embedded systems development is an incredibly complex world. Robert (Robby) Martin of BMW spoke at JUC Berlin on the topic of Multi-Stage CI in an Embedded World (slides, video). Robby spent a lot of his career at Nokia prior to coming to the BMW Car IT team. While many of the embedded systems development and delivery principles are common between phones and cars, the complexity and supply chain issues for modern automobiles are much larger. For example, a modern BMW depends on over 100 million lines of code, much of which originates with external suppliers, each of whom has its own culture and QA processes. Robby used an example scenario throughout his presentation, where a development team consisting of 3 developers and a QA person produces a software component, which is then integrated with other components locally and rolled up for delivery as part of a global integration, which must be installed and run as part of the overall product.



The magnifying effect of an error at an early stage being propagated and discovered at a later stage becomes obvious. Its impact is most clearly felt in the end-to-end "hang time" needed to deliver a one-line change into a production product. Measuring the hang-time automatically and working to speed it up continuously is one of his key recommendations. Fast feedback and turnaround in the event of errors, and minimizing the number of commits within a change-triggered CI stage, is critical. Robby also clarified the difference and importance of using a proper change-triggered approach for CI, as opposed to nightly integration.



Robby described the multi-stage CI approach they're using, which is divided into four stages:
  1. DEV-CI - Single developer, max 5 minutes
  2. TEAM-CI - Single SW component, max 30 minutes
  3. VERTICAL-CI - Multiple SW components, max 30 minutes (e.g., camera system, nav system)
  4. SYSTEM-CI - System level, max 30 minutes (e.g., the car)
The first stage is triggered by a developer commit, and each subsequent stage is automatically triggered by the appropriate overall promotion criteria being met within the previous CI stage. Note how the duration, while minimal for developers, is still held to 30 minutes even at the later stages. Thus, feedback loops to the responsible team or developer are kept very short, even up to the product release at the end. This approach also encourages people to write tests, because it's dead obvious to them that better testing gets their changes to production more quickly, both individually and as a team, and lowers their pain.

One problem confronting embedded systems developers is limited access to real hardware (and it is also a problem for mobile development, particularly in the Android world). Robby recommended using a hardware "farm" consisting of real and emulated hardware test setups, managed by multiple Jenkins masters. He also noted how CloudBees' Jenkins Operations Center would help make management of this type of setup simpler. In their setup, the DEV-CI stage does not actually test with hardware at all, and depending on availability and specifics, even the TEAM-CI stage may be taken up into VERTICAL-CI without actual hardware-based testing.

Robby's recommendations are worth noting:


  • Set up your integration chain by product, not by organizational structure
  • Measure the end-to-end "hang time" automatically, and continuously improve it (also key for management to understand the value of CI/CD)
  • Block problems at the source, but always as early as possible in the delivery process
  • After a developer commits, everything should be completely automated, including reports, metrics, release notes, etc.
  • Make sure the hardware prototype requirements for proper CI are committed to by management as part of the overall program
  • Treat external suppliers like internal suppliers, as hard as that might be to make happen
  • Follow Martin Fowler's 10 practices of CI, and remember that "Commit to mainline daily" means the product - the car at BMW
Finally, it was fun to see how excited Robby was about the workflow features being introduced in Jenkins. If you watch his Berlin presentation and Jesse's workflow presentation from Boston JUC, you can really see why Jenkins CI workflow will be a big step forward for continuous delivery in complex environments and organizations.

-- Steven G. Harris
www.cloudbees.com


Steven Harris is senior vice president of products at CloudBees. Follow Steve on Twitter.
Categories: Companies

Continuous Delivery: Deliver Software Faster and with Lower Risk

Wed, 07/30/2014 - 20:30
Continuous Delivery is a methodology that allows you to deliver software faster and with lower risk. Continuous delivery is an extension of continuous integration - a development practice that has permeated organizations utilizing agile development practices.

Recently DZone conducted a survey of 500+ IT professionals to find out what they are doing regarding continuous delivery adoption and CloudBees was one of the research sponsors. We have summarized the DZone findings in an infographic.

Find out:
  • Most eye-opening statistic: The percentage of people that think they are following continuous delivery practices versus the percentage of people that actually are, according to the definition of continuous delivery
  • Who most typically provides production support: development, operations or DevOps
  • Which team is responsible for actual code deployment
  • How pervasive version control is for tracking IT configuration
  • The length of time it takes organizations from code commit to production deployment
  • Barriers to adopting continuous delivery (hint: they aren't technical ones)

View the infographic and learn about the current state of continuous delivery.
For more information:
Get the CloudBees whitepaper: The Business Value of Continuous Delivery.


Categories: Companies