
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

Another Look at the Jenkins Promoted Builds Plugin

Mon, 10/13/2014 - 03:00
I discussed the Jenkins Promoted Builds Plugin in a few recent blog posts about the QA process and beta test distribution for mobile apps, where I gave a simple scenario of how it could be used to help control the testing lifecycle for application builds. I happened to run through this with my friend Andrew Phillips from XebiaLabs, and he suggested some improvements to that scenario that I thought people might like to see, so I reworked the online example to illustrate those ideas. Thanks also (as always) to Kohsuke Kawaguchi for his help and suggestions. By the way, if you are interested in this and other enterprise-scale features of Jenkins, please join Andrew and me for a joint XebiaLabs and CloudBees webinar on Wednesday November 7th: Setting up Continuous Delivery with Jenkins Enterprise and DeployIt.

For this scenario, we have four Jenkins jobs: two "owned" by the developer and two "owned" by QA and Release Management. The developer pushes code changes to a Git repository, and that triggers the initial build job (cuckoochess-android). This job has a build promotion configured (Release to QA), which depends on a successful build/test plus a downstream build (cuckoochess-android-matrix) that checks for compatibility with older versions of the Android SDK. The developer is really only interested in the outcome of the initial app build/test, but until the downstream multi-configuration test build completes, the build isn't ready for QA review, so the Promotion Status looks like this:


Once the downstream matrixed build(s) run successfully (and in reality we would expect to see a range of other tests, including automated functional and touch tests), the build is automatically promoted and is ready for QA review; the Promotion Status now looks like this:


The remaining two build jobs are managed by the QA and Release Management teams: these are gated by a second build promotion (CuckooChess QA Approval).  A third Jenkins job (cuckoochess-android-QA) is triggered by the Release to QA build promotion, like so:


and the only qualification (in this example) is a manual approval by a named QA reviewer, which is configured like this:


The QA reviewer would usually want to look at unit test results, code coverage and quality metrics reported by tools like JUnit, Cobertura/Emma, Android Lint or one or more of the many code quality tools supported by the Jenkins Violations Plugin. Many users probably prefer to have as much of this quality and test coverage checking as possible automated using Jenkins, and it is common to set thresholds that must be achieved before a build can be accepted. You can see an example of how to configure this sort of reporting for an Android build here, described in more detail in this blog. Either way, the cuckoochess-android-QA job will show its Promotion Status like this until a manual approval is given (by logging on to Jenkins, viewing the build and clicking Accept):


Once the QA reviewer is happy and the manual approval has been given, the QA Approval build promotion will run. In this case it triggers the final build job (cuckoochess-android-release), which pushes the approved build to Zubhium for beta test distribution as described in the earlier blog. The final build promotion status of the cuckoochess-android-QA job is now:



The final piece of the puzzle is ensuring that the build that finally gets pushed to Zubhium for beta testing is the right one: we need to make sure that the application archive that goes out contains the exact bits that were built by the original cuckoochess-android job. The way to configure that is with the Jenkins Copy Artifacts Plugin, fetching the .apk application archive from the cuckoochess-android project and using a permalink to specify that we want the build associated with the most recent Release to QA build promotion. That configuration looks like this:






Mark Prichard, Senior Director of Product Management
CloudBees
www.cloudbees.com

Mark Prichard is Java PaaS Evangelist for CloudBees. He came to CloudBees after 13 years at BEA Systems and Oracle, where he was Product Manager for the WebLogic Platform. A graduate of St John's College, Cambridge and the Cambridge University Computer Laboratory, Mark works for CloudBees in Los Altos, CA.  Follow Mark on Twitter and via his blog Clouds, Bees and Blogs.




Breaking Builds - No More Half Measures

Fri, 10/10/2014 - 05:04

OK, you’re a TV producer, and you’ve been charged with creating a spoof series based on “Breaking Bad.” But instead of making meth, your characters are going to make … software. 
What would the show look like? What kinds of jams would the characters find themselves in? Who’d play Walt?
Here at CloudBees we kicked the idea around and came up with our own concept that we trotted out last week at JavaOne and in a series of memes on Twitter, Facebook, G+ and LinkedIn. Our show is “Breaking Builds.” Our Walter White character is being played by everybody’s fanatically efficient, deviously stoic butler, Jenkins.
Now, follow along for a minute. The term “breaking bad,” according to UrbanDictionary.com, comes from the American Southwest – just like the show itself. The phrase equates roughly to the process of defying authority, skirting the edges of the law – just generally raising hell. In the TV show, Walt breaks bad in a big way. He drives his RV right along the edge of the law and pretty much makes a mess of everything around him.
In our scenario, “Breaking Builds” is a good metaphor for the anarchy that can manifest itself in the software development process. If something can go wrong in the software lab, chances are it will. Ask anybody in development or DevOps if they have ever broken a build, and he or she will probably have some pretty good stories to tell. Our character doesn’t cause the anarchy; it’s there already. His job is to make sure bad things don’t happen.
To keep builds from breaking in the real world, development teams can dial up Continuous Delivery by using Jenkins CI. Jenkins manages and controls development lifecycle processes of all kinds – everything from building to documenting to testing to packaging to deploying to analyzing software projects.
On our show, Jenkins is the star. He’s got Walter White’s bravado and the Jenkins butler’s work ethic. He is the one who knocks. He’s an expert in chemistry, ensuring that the "chemistry" in companies’ DevOps is pure. Walter White needs to cook; Jenkins needs to code. Walt and Skyler pile up cash; Jenkins keeps stacking up hundreds and hundreds of plug-ins for people to use to integrate Jenkins with their favorite technologies.
Let’s face it: Builds will break – if you don’t do something to head off problems. Continuous Delivery can make a difference. And there’s one character that knows how to make CD work.
This character is Jenkins. Say his name.



Where in the World is Jenkins? - October 2014

Mon, 10/06/2014 - 17:27

CloudBees employees travel the world to a lot of interesting events. This month the Bees are busy. If you are attending any of these October events, be sure to connect with us!
  • DevOps & Continuous Delivery Event for IT Executives by Atos – 7th of October
    The quote "You can't succeed in the future with the organization of the past" is the central idea of this seminar, meant for IT executives who wish to improve their delivery efficiency and the alignment between their IT delivery and business objectives. More than ever before, development teams must connect with IT operations in a dynamic, 'continuous delivery' environment, using the right toolset to deliver faster, higher-quality software to their business end-users at a lower cost. The Atos DevOps & Continuous Delivery event will present real customer cases and the tools that allow companies to deliver better-quality applications faster. For more information and to register, click here.

  • DevOps Days Chicago - 7th and 8th of October
    The DevOps days are all about bringing development and operations together. There will be a lot of interesting presentations, for instance on how failing to manage an organization's priorities effectively can be as damaging to a company's success as not attempting a DevOps culture at all. Besides presentations, this event also uses the open space format: the simplest meeting format that could possibly work, based on the (un)common sense of what people do naturally in productive meetings. Prepare to be surprised! Click here for registration and more information.

  • CD Summit Chicago – 15th of October
    In this seminar for IT executives and technologists, you will learn how to make evolutionary changes to people, processes and technologies to achieve the benefits of continuous delivery and agile. Come join an impressive group of continuous delivery experts to explore how you can increase quality and much more. You'll see how to reduce errors throughout the pipeline and dramatically improve time-to-market for new features and applications with continuous integration with Jenkins. There will be an executive morning with four speakers and a technical afternoon with four more. For more information and to register, click here.


  • CD Summit San Francisco – 22nd of October
    This Summit will also cover how to make evolutionary changes to people, processes and technologies to achieve the benefits of continuous delivery and agile. The timetable will be the same as at the Chicago Summit, with a couple of small changes in speakers covering the same topics. To see the timetable, read more about the speakers and register, click here.


  • Jenkins User Conference - US West (San Francisco) - 23rd of October
    The Jenkins User Conference (JUC) brings Jenkins experts and community enthusiasts together for an invaluable day of Jenkins-focused learning and networking opportunities. Among other things, you will learn about the latest and greatest Jenkins technology, best practices and plugin development. Jenkins CI is the leading open source continuous integration server. Built with Java, it provides over 960 plugins to support building and testing virtually any project. By attending JUC, you join the community of Jenkins technologists dedicated to expanding their skills and moving the Jenkins platform forward. To buy tickets and for more information, click here.

  • IC3: IT Cloud Computing Conference (San Francisco) - 27th and 28th of October 
    "IC3 gives you everything you need to automate IT and DevOps in the cloud". This is in short what the Conference is about. Attendees get vendor-neutral technical content, training and hands-on experience to take the industry (and their careers) to the next level. After you have seen the presentations, you can for instance participate in labs to immediately build what you have learned or network with peers from large enterprise organizations. To register and more information click here.

  • ZendCon 2014 - 27th until 30th of October 
    ZendCon is the place to catch up on news, float new ideas and share coding challenges with developers from around the globe. You can fill your days and evenings with sessions, tutorials and networking time. There will be three great conference tracks: PHP Best Practices & Tooling; Continuous Delivery & DevOps; and Application Architecture - APIs, Mobile, Cloud Services. For much more information about the sessions, tutorials and speakers, and to register, check out the ZendCon 2014 website.



Migrating to Jenkins Enterprise by CloudBees from Open Source Jenkins

Fri, 10/03/2014 - 18:49
Apparently a few of you have been wondering 'what is involved in moving from Jenkins OSS to CloudBees', and you're in luck: it's super easy! On a scale of difficulty, with 10 being quantum computing and 1 being writing 'hello world' in Python, it's probably a 2 or 3.
Source: XKCD

I won't bore you with too much detail (which can be found here), but if you're interested in making this migration, you have two options. The one you pick really depends on you and your needs:

Scenario 1: Your Jenkins version is an LTS version newer than the latest version on this list
In this case, you'll want to install our "Enterprise by CloudBees" plugin. Simply go to your Plugin Manager ("Manage Jenkins" >> "Manage Plugins") and go to the "Available" tab. Install the plugin by checking the box next to "Enterprise by CloudBees" and selecting the "Install without restart" option.




After installing this "meta-plugin", you'll now need to go to "Manage Jenkins" and select the "Install Jenkins Enterprise by CloudBees" menu option.



You'll now have the option to pick whether you'd like to install all of the plugins packaged as a part of the Jenkins Enterprise by CloudBees offering or whether you'd prefer to just install the license for now. With the latter, you can choose which specific plugins you'd like to install later from your plugin manager.




Regardless of which option you pick, you'll see text updates appear on screen as the required steps are completed (adding the CloudBees update center, installing plugins, etc). 


Afterwards, you'll be prompted to input a valid license to continue.




If you've already purchased a license from someone in sales, simply select the third option ("I already have a license key") and enter your license key and certificate here.

If you haven't yet purchased a license, then you'll need to register for an evaluation here by selecting the first option and entering your name + email.

And that's it! Your OSS Jenkins master will now be a Jenkins Enterprise by CloudBees master.

Scenario 2: Your Jenkins version is an LTS version older than the latest version on this list

You can either upgrade using the meta-plugin outlined in scenario 1, or you can install the Jenkins Enterprise by CloudBees WAR whose version number matches the LTS version you have now - for example, you'd pick the Jenkins Enterprise 1.554 WAR if you're running OSS Jenkins 1.554.

Once you download the WAR, you would just need to set its JENKINS_HOME to the same JENKINS_HOME your OSS Jenkins is currently working from, and then run it.

Once you run the WAR, any plugins in your existing OSS installation's JENKINS_HOME will be updated to the version bundled with the Jenkins Enterprise by CloudBees WAR.
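
As a rough sketch of that swap (the WAR filename and paths here are illustrative only, not official names):

# Point the Enterprise WAR at the JENKINS_HOME your existing OSS
# Jenkins uses, then launch it on the usual port.
export JENKINS_HOME=/var/lib/jenkins
java -jar jenkins-enterprise-1.554.war --httpPort=8080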

Alternative installation options
  • openSUSE users can run:

sudo zypper addrepo http://nectar-downloads.cloudbees.com/nectar/opensuse/ jenkins
followed by
sudo zypper install jenkins
  • Red Hat/Fedora/CentOS users can download an RPM package by adding the key to their system:
sudo rpm --import http://nectar-downloads.cloudbees.com/nectar/rpm/jenkins-ci.org.key
Then adding the repository:
sudo wget -O /etc/yum.repos.d/jenkins.repo http://nectar-downloads.cloudbees.com/nectar/rpm/jenkins.repo
Then installing Jenkins Enterprise by CloudBees:
sudo yum update
sudo yum install jenkins
  • Ubuntu/Debian users can install Jenkins Enterprise as a Debian package by adding the keys to their system:
wget -q -O - http://nectar-downloads.cloudbees.com/nectar/debian/jenkins-ci.org.key | sudo apt-key add -
Then adding the repository. If you have already added open-source Hudson/Jenkins as a repository, be sure to remove it to prevent Jenkins Enterprise by CloudBees from being overwritten:
echo deb http://nectar-downloads.cloudbees.com/nectar/debian binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list
Then installing Jenkins Enterprise by CloudBees:
sudo apt-get update
sudo apt-get install jenkins
  • Windows users can download a ZIP file from here and execute the setup program inside, then access their instance at http://localhost:8080.

Scenario 3: Your Jenkins version is an LTS version older than 1 year OR is not an LTS version

You'll need to upgrade to an LTS version that is less than a year old and then follow either of the above instructions.

Continuous Delivery - The Real Deal

Tue, 09/30/2014 - 15:43


Continuous delivery (CD), a methodology that allows you to deliver software faster and with lower risk, is gaining a foothold in startups like Choose Digital and Neustar, and in enterprises like Cisco and Thomson Reuters. CD enables companies to accelerate innovation, move faster than the competition and finally allow IT to quickly meet the application needs of the business.
We have just completed our second set of CD Summits, in London and Paris, to packed houses, including a standing-room-only event in London. The word is getting out that our summits are the place to learn about CD: they provide a full day of education for executives and technologists that covers the people, process and technology aspects of continuous delivery. Additionally, our partner sponsors provide a unique view on how various tools for testing, infrastructure provisioning and application deployment fit together to create a toolchain in support of the CD pipeline.
Next up in our CD Summit series are Chicago on Oct. 15th, San Francisco on Oct. 22nd and Washington D.C. on Nov. 19th. Please consider joining us for a day that you will find well worth your time.
Here is what people have had to say about past CD Summits:
“These summits are very impressive. The scope of presentations covers all of the important aspects, and the technology presentations cover much of the pipeline.” - Kurt Bittner, Forrester Research
“This summit was fantastic. Thanks very much.” - New York City attendee
"The London summit was full, so I traveled to Paris, and I'm very glad I did." - Paris attendee           
“I need the slides to show my boss.” – London attendee
We start off the morning of each summit with an executive-level presentation discussing the business benefits that can be realized by CD. We then have presentations covering the people, process and technology impacts of CD. You’ll hear about real world examples of CD in action by enterprises that are actually transforming their practices. For example, at the upcoming Summits, Choose Digital will present in Chicago, Cisco in San Francisco and both Thomson Reuters and Neustar in Washington D.C.
Here’s an example agenda:
8:00   Registration (includes continental breakfast)
9:00   The Business of Continuous Delivery - Kurt Bittner, Forrester Research
9:45   Orchestrating the Continuous Delivery Process - Steve Harris, CloudBees
10:30  Break
11:00  Three Pillars of Continuous Delivery: Culture, Tooling & Practices - Andrew Phillips, XebiaLabs
11:45  Achieving "Fast IT" With Continuous Delivery - Nick Pace, Cisco Systems
12:30  Lunch (provided)
14:00  Jenkins for Continuous Delivery - Kohsuke Kawaguchi, CloudBees
14:30  Accelerating Application Delivery with Continuous Testing - Peter Galvin, SOASTA
15:00  Break
15:30  Automating Infrastructure - Gabriel Schuyler, Puppet Labs
16:00  Successfully Implementing Continuous Delivery - MomentumSI
16:30  Continuous Delivery in the Real World: From Jenkins to Production - Mario Cruz, Choose Digital
17:00  Panel: Ask the Experts
17:30  Reception: Continuous Beer Delivery

During lunch, attendees have the opportunity to speak with other attendees and our expert presenters on a topic of their choice. After lunch, we kick off the afternoon session with Jenkins founder Kohsuke Kawaguchi discussing the use of Jenkins for CD. Through the afternoon and into the social hour, our partners, including XebiaLabs, SOASTA and Puppet Labs, will all discuss how to automate the software delivery pipeline. As well, you'll be treated to breakfast, lunch and an evening reception.
Join us in Chicago, San Francisco or Washington D.C. for an event that is not to be missed.
See you there!
André Pino
CloudBees
www.cloudbees.com


André Pino is vice president of marketing at CloudBees. 






CloudBees Around the World - September 2014

Fri, 09/19/2014 - 18:26

CloudBees employees travel the world to a lot of interesting events. Where can you find the Bees before September ends? Hint: This month, it's all about JavaOne. If you are there, be sure to connect with us!
  • JavaOne San Francisco – September 28 - October 2
    JavaOne is the Java event of the year. You can choose from more than 400 sessions, including technical sessions, hands-on labs, tutorials, keynotes and birds-of-a-feather sessions. Learn from the world's foremost Java experts, improve your working knowledge and coding expertise, and follow in-depth technical tracks to focus on the Java technology that interests you the most. To register and get more information, just click on the links.





Customer Spotlight: Choose Digital

Thu, 09/18/2014 - 17:52
At CloudBees, we have a lot of innovative customers. They’ve established leadership positions in the marketplace with their great ideas, hard work and a little help from the CloudBees Continuous Delivery Platform.

This blog is the first of several that we will run from time to time, highlighting various CloudBees customers. In this first post, we head to Miami to visit Mario Cruz, co-founder and CTO of Choose Digital (recently acquired by Viggle).

Mario, tell us about yourself.
I’m a technologist, born in Cuba and now living in the Miami area. I've now been developing and marketing B2B and B2C technology solutions for over 20 years.

Tell us about Choose Digital.
We developed a private-label digital marketplace that has enabled companies to launch a digital content strategy incorporating the latest in music, movies, TV shows, eBooks and audiobooks. SkyMall, Marriott, United Airlines and others have tapped into our platform to up-level initiatives such as customer loyalty programs, promotional offers, affinity sales channels and digital retail roll-outs. We’ve had great success providing a streamlined channel, helping companies navigate around licensing conflicts, reduce brand friction and take control of usage data. We’ve also provided solutions for musicians and authors to market their work directly to fans and monetize their social media followings.

What did you do before you started Choose Digital?
I’ve had a bunch of jobs in the technology space. I spent three years as CTO of Grass Roots America, a provider of global performance improvement solutions for employee, channel and consumers. I oversaw the business’s technology, infrastructure and information security in the Americas region. Before that I worked for five years as CIO of Rewards Network, operator of loyalty dining programs in the U.S. for most major airlines, hotels and credit card companies.

What kinds of challenges did you face at Choose Digital that spurred you to start working with CloudBees?
We felt we had to be first to market, and we dedicated all our resources to this goal. We didn't have time for long development and integration cycles. We didn't want to worry about setting up and maintaining a Java infrastructure, so we adopted Jenkins in the cloud, on the CloudBees platform. We were up and running with DEV@cloud in just one day. And using CloudBees ClickStarts, we were able to set up new projects in about an hour. If we had to set up our own hardware or use an IaaS solution, development would have taken three to five times as long, and costs would have been multiplied by a factor of 10 to 15.

Can you talk about your experience with Continuous Delivery, using CloudBees’ technology?
Using a continuous delivery model, we’re able to experiment cheaply and quickly, with low risk. We’re able to run every step of the process in a streamlined manner. Every update kicks off a series of tests, and once the tests pass, the update deploys to production. Everything is automated using Jenkins and deployed to CloudBees. Rather than wait for new versions, we can constantly push, build in improvements and be confident that production will never be more than a couple of hours behind. This gives us control over our development process and instills a certain amount of trust within the staff that projects we undertake will get done on time, on budget and with the quality that we need.

Your business is all about helping companies make strategic use of digital content. What do you like to listen to, read and watch in your spare time?
I’m in the right profession because I’m a huge consumer of content myself – all kinds.

My favorite book is probably “Bluebeard,” by Kurt Vonnegut. It’s about an abstract impressionist painter who, in typical Vonnegut form, has some eccentric ideas about how to create and promote art. The first movie I ever saw was “Raiders of the Lost Ark.” It made me want to travel the world, and luckily my technology career has allowed me to do that. Going way back, my first 45 record was “Freeze Frame” by the J. Geils Band and my first album was “Ghost in the Machine” by the Police.

I’m still a big music guy. I play drums in a band called Switch, which plays all kinds of music, from the Doobie Brothers to Four Non Blondes. I used to be in a bunch of other bands called The Pull, Premonition and Wisdom of Crocodiles. (To see/hear Mario playing the drums in his band, go to this post by Mario.)

So, what’s next for you?
After Choose Digital's acquisition by Viggle, my goal is to make sure Viggle members get the best media rewards for doing things they love to do - like watching TV and listening to music - while continuing to innovate on our platform.

Read the case study about Mario and his team at Choose Digital
Follow Mario on Twitter: @mariocruz

Webinar Q&A: Continuous Delivery with Jenkins and Puppet - Debug Bad Bits in Production

Mon, 09/15/2014 - 20:13
Thank you to everyone who joined us on our webinar.


We presented:

  • How to build a modern continuous delivery pipeline with Jenkins
  • How to connect Jenkins and Puppet so that Dev and Ops teams can determine what happens on the other side of the house and closely interact to debug issues in production environments


The webinar recording is here.

Following are answers to questions we received during the webinar:
Q: Is Puppet serving as the orchestrator for Jenkins?
A: Not quite - the tools run independently but communicate with each other. The demo will make it clear.

Q: Can JMeter be plugged in with Jenkins for Continuous testing?
A: Yes it can. 

Q: When we say continuous testing do we mean automated testing here?
A: Continuous Testing = automated testing for each commit made in the source repository.

Q: What drivers or plugins are required? Can I get a website where I can get this info?
A: https://wiki.jenkins-ci.org/display/JENKINS/JMeter+Plugin

Q: With JMeter can we run a load test using the build in Jenkins, or how can we do continuous testing with this combination?
A: JMeter would be used for a load-testing stage. It depends on how you set up your workflow/pipeline. You shouldn't run performance tests on every commit, even with continuous testing in place; ideally you will have multiple testing stages.

Q: Can Puppet work with VMs?
A: Yes, Puppet can work with VMs. Puppet agents live at the OS level, and can be deployed to virtual machines or bare hardware. Puppet is agnostic to where or how it has been deployed. We do have some hooks and integrations around provisioning new VMs as well.

Q: I'm curious that I don't see AWS/EC2 under "Virtual & Cloud" for Puppet along with VMware, Xen, Azure ... is there a reason? Any concerns I should have about compatibility with EC2 infrastructure?
A:  No, there are no concerns around EC2. Puppet runs great in EC2 and we have many customers running their infrastructure with Puppet in Amazon's cloud.

Q: Are you going to share these scripts somewhere?
A: The demo write-up is available on the CloudBees developer wiki. The jenkinsci infrastructure is available at https://github.com/jenkinsci/infra-puppet

Q: I understand that Puppet helps create an MD5 hash file of the WAR file - build deployments. Could you provide a basic definition of what Puppet is and what Docker is?
A: Puppet (stealing from the Puppet page): Puppet Enterprise (PE) uses Puppet as the core of its configuration management features. Puppet models desired system states, enforces those states, and reports any variances so you can track what Puppet is doing. To model system states, Puppet uses a declarative resource-based language — this means a user describes a desired final state (e.g. "this package must be installed" or "this service must be running") rather than describing a series of steps to execute.
Docker (stealing from Docker.io): Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.


Q: Will this work with SVN too?
A: There is an equivalent version of Validated Merge for Jenkins that our team has pushed out in OSS.
Q: Will Validated Merge work with an SVN repo too?
A: See above.

Q: Is an equivalent to the gated repo available with Subversion? It's a great idea; a while back I'd worked with a similar homegrown solution for Perforce.
A: See above.

Q: What's the difference between open source Jenkins and the CloudBees version?
A: See this link.

Q: Where can I get a quotation if I want to buy?
A: Email sales@cloudbees.com

Q: Does Puppet require root access on a Unix host? What privileges would it require as a user?
A: The Puppet agent typically runs as root in order to be able to fully configure the system, but it does not require those privileges. When running as a non-privileged user, it will only be able to manage the aspects of the system that the user has permissions for.

Q: When Harpreet was doing the traceability demo, the Jenkins screen that showed the artifact deployment state had a field for 'Previous version' that was blank. Why was that empty? What value would normally be in there - the MD5 hash of the previous artifact?
A: Those would change if I had checked in new code, thus altering the MD5 hash. Since I was just rebuilding the same image in the demo, the hashes are the same, and hence there is no previous version.

Q: Is Puppet capable of working with IBM solutions like WebSphere?
A: Yes. In general, if it's possible to manage or modify an application from the command line of a system, it is possible to build a Puppet model for it. Check out forge.puppetlabs.com for 2500+ examples of pre-built community and supported modules.

Q: I read that about the agent, but what about the master? Can you run Puppet without a master?
A: The master is effectively a web service, which does not require root privileges, so it too can be run without root. For testing and development, you can run Puppet in a stand-alone mode using the `puppet apply` family of commands.
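
For instance (a hedged sketch; the manifest path is hypothetical):

# Apply an inline resource declaration directly - no master needed.
puppet apply -e "package { 'ntp': ensure => installed }"
# Or apply a local manifest; --noop previews what would change.
puppet apply --noop /tmp/site.pp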

Q: Does Puppet need vagrant to run or can we run it directly on the VM?
A: Puppet can be run directly on a VM. It does not have dependencies on Vagrant or any other specific virtualization/cloud management software.

Q: How does this facility compare with the pre-commit check-in provided by the Visual Studio environment?
A: I am not familiar with the Visual Studio environment, but documentation indicates that those are just environment variables that are injected into builds; if so, Jenkins can understand environment variables.


-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter.

-- Reid Vandewiele
www.puppetlabs.com


Reid is a technical solutions engineer at Puppet Labs, Inc.



CloudBees Becomes the Enterprise Jenkins Company

Thu, 09/11/2014 - 09:04
Since we founded the company back in 2010, CloudBees has always had the vision of helping enterprises accelerate the way they develop and deploy applications. To that end we delivered a PaaS that covered the entire application lifecycle, from development, continuous integration and deployment to staging and production. As part of this platform, Jenkins always played a prominent role. Based on popular demand for Jenkins CI, we quickly responded and also provided an on-premise Jenkins distribution, Jenkins Enterprise by CloudBees.
Initially, Jenkins Enterprise by CloudBees customers were mainly using Jenkins on-premise for CI workloads. But in the last two years, a growing number of customers have pursued an extensive Continuous Delivery strategy and Jenkins has moved from a developer-centric tool to a company-wide Continuous Delivery hub, orchestrating many of the key company IT assets.
For CloudBees, this shift has translated into massive growth of our Jenkins Enterprise by CloudBees business and has forced us to reflect on how we see our future. Since a number of CloudBees employees, advisors and investors are ex-JBossians, we've had the chance to witness first-hand what a successful open source phenomenon is and how it can translate into a successful business model, while respecting its independence and further fueling its growth. Consequently, it quickly became obvious to us that we had to re-focus the company to become the Enterprise Jenkins Company, both on-premise and in the cloud, and hence exit the runtime PaaS business (RUN@cloud & WEAVE@cloud). While this wasn't an easy decision (we are still PaaS lovers!), it is the right decision for the company.
With regard to our existing RUN@cloud customers, we've already reached out to each of them to make sure they're being taken care of. We've published a detailed migration guide and have set up a migration task force that will help them with any questions related to the migration of their applications. (Read our FAQ for RUN@cloud customers.) We've also worked with a number of third-party PaaS providers and will be able to perform introductions as needed. We've always claimed that our PaaS, based on open standards and open source (Tomcat, JBoss, MongoDB, MySQL, etc.), would not lock customers in, so we think those migrations should be relatively painless. In any case, we'll do everything we can to make all customer transitions a success.
From a Jenkins portfolio standpoint, refocusing the company means we will be able to significantly increase our engineering contribution to Jenkins, both in the open source community as well as in our enterprise products. Kohsuke Kawaguchi, founder of Jenkins and CTO at CloudBees, is also making sure that what we do as a company preserves the interest of the community.
Our Jenkins-based portfolio will fit a wide range of deployment scenarios:
  • Running Jenkins Enterprise by CloudBees within enterprises on native hardware or virtualized environments, thanks to our enterprise extensions (such as role-based access control, clustering, vSphere support, etc.)
  • Running Jenkins Enterprise by CloudBees on private and public cloud environments, making it possible for enterprises to leverage the elastic and self-service cloud attributes offered by those cloud layers. On that topic, see the Pivotal partnership we announced today. I also blogged about the new partnership here.
  • Consuming Jenkins as a service, fully managed for you by CloudBees in the public cloud, thanks to our DEV@cloud offering (soon to be renamed “CloudBees Jenkins as a Service”).

Furthermore, thanks to CloudBees Jenkins Operations Center, you’ll be able to run Jenkins Enterprise by CloudBees at scale on any mix of the above scenarios (native hardware, private cloud, public cloud and SaaS), all managed and monitored from a central point.
From a market standpoint, several powerful waves are re-shaping the IT landscape as we know it today: Continuous Delivery, Cloud and DevOps. A number of companies sit at the intersection of those forces: Amazon, Google, Chef, Puppet, Atlassian, Docker, CloudBees, etc. We think those companies are in a strategic position to become tomorrow’s leading IT vendors.
Onward,

Sacha

Additional Resources
Read the press release about our new Jenkins focus
Read our FAQ for RUN@cloud customers
Read Steve Harris's blog







Sacha Labourey is the CEO and founder of CloudBees.

CloudBees Partners with Pivotal

Thu, 09/11/2014 - 09:02
Today, Pivotal and CloudBees are announcing a strategic partnership, one that sits at the intersection of two very powerful waves that are re-shaping the IT landscape as we know it today: Cloud and Continuous Delivery.
Pivotal has been executing on an ambitious platform strategy that makes it possible for enterprises to benefit from a wide range of services within their existing datacenter: from Infrastructure as a Service  (IaaS) up to Platform as a Service (PaaS), as well as a very valuable service, Pivotal Network, that makes it trivial to deploy certified third-party solutions on your Pivotal private cloud. (To read Pivotal's view on the partnership, check out the blog authored by Nima Badiey, head of ecosystem partnerships and business development for Cloud Foundry.)
As such, our teams have been working closely on delivering a CloudBees Jenkins Enterprise solution specifically crafted for Pivotal CF. It will feature a unique user experience and will be leveraging Pivotal’s cloud layer to provide self-service and elasticity to CloudBees Jenkins Enterprise users. We expect our common solution to be available on Pivotal CF later this year, and we will be iteratively increasing the feature set.
Given Jenkins’ flexibility, Pivotal customers will be using our combined offering in a variety of ways but two leading scenarios are already emerging.
The first scenario is for Pivotal developers to use Jenkins to perform continuous integration and continuous delivery of applications deployed on top of the Pivotal CF PaaS. CloudBees Jenkins Enterprise provides an integration with the CloudFoundry PaaS API that makes the application deployment process very smooth and straightforward. This first scenario provides first class support for continuous delivery to Pivotal CF developers.
The second scenario focuses on enterprises relying on Jenkins for continuous integration and/or continuous delivery of existing (non-Pivotal CF-based) applications. Thanks to the Pivotal/CloudBees partnership, companies will ultimately be able to leverage the Pivotal cloud to benefit from elastic build capacity as well as the ability to provision more resources on-demand, in a self-service fashion.
The CloudBees team is very proud to partner with Pivotal and bring Pivotal users access to CloudBees Jenkins Enterprise, the leading continuous delivery solution.
Onward,
Sacha







Sacha Labourey is the CEO and founder of CloudBees.

Reflections on the PaaS Marketplace

Thu, 09/11/2014 - 09:00
Cairn from the
Canadian Arctic Expedition
Entering the PaaS marketplace in 2010 resembled a polar expedition near the turn of the last century. Lots of preparation and fundraising required, not a lot of information about what you'd encounter on the journey, life-and-death decision-making along the way, shifting and difficult terrain in unpredictable conditions and intense competition for the prize. At least we didn't have to eat the dogs.

In case you missed it, CloudBees announced that we’ll no longer offer our runtime PaaS, RUN@cloud. Instead, we’re focusing on our growing Jenkins Enterprise by CloudBees subscription business - on-prem, in the cloud, and connecting the two - and the continuous delivery space where Jenkins plays such a key role. Jenkins has been at the core of our PaaS offering all the way along, so in some ways, this is less of a pivot than a re-focusing. Still, it’s an important event for CloudBees customers, many of whom rely on our runtime services and the integrated dev-to-deployment model we offer. We’ll continue to support those customers on RUN@cloud for an extended period and help them transition as painlessly as possible to alternatives (read our FAQ about the RUN@cloud news). Given our open PaaS approach and the range of offerings in the marketplace, the transition will be non-trivial, but manageable (read our transition documentation). Given that background, I wanted to share some thoughts behind our move and what we see going on in the PaaS marketplace.

A Platform, Of Sorts
By Agrant141 [CC-BY-SA-3.0]

As a team, we come from a platform background. To us, cloud changes the equation in how people build, deploy and manage applications. So, the platforms we're all used to building on top of - like Java - need to change scope and style to be effective. That idea has driven a lot of what we delivered at CloudBees. It's why Jenkins was such a big part of the offering, because from our perspective Continuous Integration and Continuous Delivery really needed to be integral to the experience when you're delivering as-a-service with elastic resources, on-demand. I think we have been proven right. Doubts? Take a look at what Google is doing with the Google Cloud Platform. They agree with us and they built their solution around Jenkins. This is also why primarily runtime-deployment-focused PaaS offerings like Pivotal's Cloud Foundry partner with us on Jenkins.

What’s changed, then?
  • Service - IaaS platform affinity. IaaS providers, but particularly AWS and Google, are moving up-stack rapidly, fleshing out a wider and wider array of very capable services. These services often come with rich APIs that are part of the IaaS-provider’s platform. Google Cloud Services is a good example. If you’re an Android developer, it’s your go-to toolbox to unlock location and notification services. It also incentivizes you to use Google identity and runtime GAE services. The same is true on AWS and Azure with some different slants and degrees of lock-in. Expect the same on any public cloud offering that aims to succeed longer term. This upstack march by the IaaS vendors blurs the line on PaaS value. PaaS vendors like CloudBees can make it easy to consume these IaaS-native services, but how the value sorts itself out for end-users between “PaaS-native” services and those coming directly from the IaaS provider is unclear.
  • What’s a platform? Who’s to say that AWS Elastic Beanstalk is less of a platform than what CloudBees offers? I’d like to think I have some experience and credibility to speak to the topic, and I can assure you ours is superior in all ways that matter technically. But in the end, if a bunch of Ruby scripts pushing CloudFormation templates make it as simple to deploy, update, and monitor a Java app as CloudBees does, those distinctions just don’t matter to most users. This is not to say that Beanstalk is functionally equivalent to CloudBees today, because it isn’t. But it’s a lot closer than it was two years ago. The integration with VPC is front-and-center, because, well, they are AWS and as an end-user, you’re using your own account with it, while we are managing the PaaS on your behalf. My point here is that our emphasis on platform value, which was very much a differentiator two years ago, is less of one today and will continue to decrease even as we add feature/functionality. Is that because we are being outpaced by competitors who were behind? No, it’s because as IaaS-native services expand their scope and the platform itself changes (see next point), the extra value that can be added by a pure-play PaaS gets boxed-in.
  • Commoditization of platform. There is a lot going on in this area that is hard to capture succinctly. First, there is the Cloud Foundry effect. Cloud Foundry has executed well on an innovate-leverage-commoditize (ILC) strategy using open source and ecosystem as the key weapons in that approach. Without any serious presence in public cloud, Pivotal Cloud Foundry has produced partnerships with the largest, established players in enterprise middleware and apps. In turn, that middleware marketplace ($20B) is prime hunting ground for PaaS, and Cloud Foundry has served up fresh hope to IT people searching desperately for a private cloud strategy with roots in open source. Glimmers of hope for success in on-prem private PaaS in the enterprise act as a damper on public cloud PaaS adoption, making a risk-averse enterprise marketplace even more sluggish. Second, thanks to Docker, the containerization of apps - a mainstay implementation strategy of PaaS providers like CloudBees - is becoming “standard” and simple for everyone to use. It’s been embraced by Google as a means to make their offering more customizable, and even Amazon hasn’t been able to ignore it. This shift changes the PaaS equation again, because combining Docker with infrastructure automation tools like Chef and Puppet starts to look a lot like PaaS. New tools like Mesos also change the landscape when combined with Docker. Granted for those paying attention to details, Docker still has some holes in it, but don’t expect those to remain unplugged for long.
  • It’s about service. There is a clear dividing line among PaaS players between fully-managed (think: CloudBees, Heroku) and self-managed (think: any on-prem solution, AWS Elastic Beanstalk). Broadly speaking, the startups and SME customers tend to lean toward the fully-managed side, while the larger enterprises lean toward the self-managed side. The platform changes I was covering above continue to make self-service easier, while reducing the perceived value of the fully-managed approach. I say “perceived” because the gap between the perceived and actual effort to implement a PaaS and operate it at scale is huge. It’s something that is hard for people to understand, especially if they haven’t lived through it. But, perception is reality at the buying stage, even if the reality bites at delivery. The technology and organizational investment of Heroku and CloudBees to operate at scale and to deliver deep, quality service is significant, but the perception gap leads people to equate it to the labor associated with answering PagerDuties and Nagios alerts. Furthermore, as the IaaS players move more up-stack, and customers consume a broader mixture of self-service and fully-managed value-add services, the gap increases. The other difference between fully-managed vs. self-service centers around the delivery model. When you deliver as-a-service, like we do with the CloudBees PaaS, you have advantages that are not available to on-prem software delivery and support models. But, from a CloudBees perspective, with a large, growing business delivering to on-premise Jenkins Enterprise users, we really need to think of our fully-managed Jenkins more as a SaaS, not just a component of a broader PaaS offering.
What does all this change mean to the PaaS marketplace? In addition to the moves I noted earlier, you can already observe some of the impact:
  • Google consolidated their PaaS GAE and IaaS GCE stories into a single, powerful developer-savvy Google Cloud Platform story, with more consistency no doubt on the way from the mobile side of the house.
  • CenturyLink bought AppFog and Tier3, putting the combined IaaS and PaaS pieces in place to move up from being just a hosting provider.
  • IBM moved all SmartCloud Enterprise efforts onto Softlayer and consolidated PaaS efforts behind the Cloud Foundry based BlueMix to extend the life of WebSphere in the cloud. At the same time, the introduction of UrbanCode gives them DevOps coolness, at least as much coolness as a blue shop can handle.
  • Microsoft blurred the line between Azure PaaS and a real public IaaS, a clear recognition that combined there is more value and better ways to appeal to a broader audience.
  • DotCloud pivoted to become Docker, re-purposing their internal containerization investments and de-emphasizing their PaaS business.
  • Heroku aligned more closely with the Salesforce side of the house in Heroku1 - you know, the part with access to enterprise companies with deep pockets who already trust Salesforce with some of their most sensitive information.
  • Rackspace, caught in the middle without an IaaS or PaaS card to play, is floundering and looking for a buyer.
  • In a classic enemy-of-my-enemy confederation, traditional enterprise players have lined up behind OpenStack. Because of its open source heritage, Red Hat is well positioned to grab the leadership ring in what appears to be a contentious, political, but perhaps too-big-to-fail mess.
  • Looking to avoid the messiness of OpenStack but to obtain an aura of community governance around its Cloud Foundry efforts, Pivotal created a new pay-to-play Cloud Foundry Foundation and notched up a broad range of enterprise participants.
  • Amidst all this, Amazon just continues their relentless pace to add more services, the latest onslaught being aimed at mobile and collaboration.
Taken together, these changes demonstrate market consolidation, platform commoditization, a continued strength of on-prem solutions in the enterprise, and the important strategic leverage to be obtained by combining IaaS, PaaS and managed service offerings. Longer term, it calls into question whether there will even be a PaaS marketplace that is identifiable except by the most academic of distinctions. These are not trends we can ignore, particularly when we have a successful and growing business centered on Jenkins.

Amundsen Expedition

So, we're emerging from our PaaS polar expedition. Like a triumphant Amundsen, we are leaving behind some noble competitors. We're taking what we've learned and are applying the lessons toward new adventures. Jenkins is an incredible phenomenon. It's built around an amazing open source community that is populated with passionate advocates. With its Continuous Integration roots, Jenkins sits at the center of the fundamental changes cloud has ushered in to software development - the same ones that brought CloudBees into existence in the PaaS world. Join us and follow us as we push the boundaries of Continuous Delivery using Jenkins, and as we work with the community to make sure Jenkins continues to be the tool of choice for software development and delivery both on-premise and in the cloud.


Steven Harris is senior vice president of products at CloudBees (and a fan of Roald Amundsen). 
Follow Steve on Twitter.

Advanced Git with Jenkins

Wed, 09/10/2014 - 21:09
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP Product Management, CloudBees about a presentation given by Christopher Orr of iosphere GmbH at JUC Berlin.

Git has become the repository of choice for developers everywhere and Jenkins supports git very well. In the talk, Christopher shed light on advanced configuration options for the git plugin. Cloning extremely large repositories is an expensive proposition and he outlined a solution for speeding up builds with large repositories.

Advanced Git Options
There are three main axes for building projects: What, When and How.

Git plugin options

What to build:
The refspec option in Jenkins lets you choose what to build. By default, the plugin will build the master branch; this can be overridden with wildcards to build specific features or tags. For example:


  • */feature/* will build feature branches
  • */tags/beta/* will build beta versions of tags
  • +refs/pull/*:refs/remotes/origin/pull/* will build pull requests from GitHub

The default strategy is usually to build particular branches. So, for example, if the refspec is */release/*, branches release/1.0 and release/2.0 will be built, while branches feature/123 and bugfix/123 will be ignored. To build feature/123 and bugfix/123 instead, you can flip this around by choosing the Inverse strategy.

Choosing the build strategy
When to build:
Generally, polling should not be used; webhooks are the preferred option when configuring jobs. On the other hand, if you have a project that needs to be built nightly only if a commit made it to the repository during the day, that can easily be set up as follows:
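
As a sketch of that configuration, the "Poll SCM" schedule uses Jenkins' cron syntax (the H token hashes the job name to spread polling load):

# Poll the repository once nightly; the job builds only if new
# commits arrived since the last build.
H 0 * * *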



How to build:
A git clone operation is performed to fetch the repository before building it. The clone operation can be sped up by using a shallow clone (no history is cloned). Builds can be sped up further by using a "reference repo" during the clone operation: the repository is cloned once to a local directory, and from then on this local repository is used for subsequent clone operations. The network is accessed only if the repository is unavailable locally. Ideally, you line these up: a shallow clone for the first clone (fast clone) and the reference repo for faster builds subsequently.


Equivalent to git clone --reference option
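
On the command line, the combination looks roughly like this (a hedged sketch; the URL and cache path are placeholders):

# Shallow clone (no history) plus a local reference repository:
# most objects come from local disk rather than over the network.
git clone --depth 1 --reference /var/cache/git/myrepo.git \
    git@example.com:org/myrepo.git workspace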


Working with Large Repositories

The iosphere team uses the reference repository approach to speed up builds. They have augmented this approach by inserting a proxy server (git-webhook-proxy [1]) between the actual repo and Jenkins, so clones go to the proxy server. The slave setup plugin copies the workspace over to the slaves (over NAS) and builds proceed from there. Since network access is restricted to the proxy server and each slave makes a local copy, this speeds up builds considerably.


git-webhook-proxy: to speed up workspace clones
The git-webhook-proxy option seems a compelling solution, well worth investigating if your team is trying to speed up builds.

[1] git-webhook-proxy


-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter




[Infographic] Need To Deliver Software Faster? Continuous Delivery May Be The Answer

Mon, 09/08/2014 - 19:20
More and more organizations are realizing the impact of delivering applications in an accelerated manner. Many of those seeking to do so are leveraging DevOps functions internally and moving towards Continuous Delivery. Did you know that 40% of companies practicing Continuous Delivery increased the frequency of code delivery by 10% or more in the past 12 months?

Do you need to deliver software faster? This infographic, based on the DevOps and Continuous Delivery survey conducted by EMA, shows why Continuous Delivery may be the answer.


Download your copy of the DevOps and Continuous Delivery paper to read the entire report based on the EMA survey.


Christina Pappas
Marketing Funnel Manager
CloudBees

Follow her on Twitter

Building Pipelines at Scale with Puppet and Jenkins Plugins

Thu, 09/04/2014 - 18:15
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP Product Management, CloudBees about a presentation given by Julien Pivotto of Inuits at JUC Berlin.

Inuits is an open source consultancy firm that has been an early practitioner of DevOps. They set up Drupal-based asset management systems that store assets and transcode videos for a number of clients. These systems are set up such that each client has a few environments (Dev/UAT/Production), and each environment further splits into one backend and a number of frontends per backend. Thus, they end up managing a lot of pipelines for each Drupal site they set up. Consequently, they need a standard way of setting up these pipelines.
In short, Inuits is a great use case for DevOps teams that are responsible for bringing in new teams and enabling them to deliver continuously and deliver software fast. 
There are simple approaches to building pipelines through the UI (clone-a-job) and through XML (cloning config.xml), but these approaches don't scale well. Julien outlined two distinct approaches to setting up pipelines:
  • Pipelines through Puppet
  • Pipelines through Jenkins plugins
I will focus mostly on the Puppet piece in this blog, as it is a novel approach that I haven't come across before, although Julien does lean towards using standard Jenkins plugins to deliver these pipelines.

Pipelines through Puppet
Julien started with a definition of a pipeline: "A pipeline is a chain of Jenkins jobs that are run to fetch, compile, package, run tests and deploy an application."
He then described how to set up this chain of inter-related jobs through Puppet. Usually, Puppet is used to provision the OS, applications and databases, but not application data. In his approach, he puppetized the provisioning of Jenkins itself and of the job configurations (application data).
jobs.pp: Manifest for a standalone job
Each type of job and pipeline has a corresponding Puppet manifest that takes arguments like job name, next job, parameters, etc. Since the promotions plugin adds some metadata to an existing job config and adds a separate configuration folder inside the jobs folder, promotions have their own manifest as well. Configuration changes in the XML are made through Augeas.

With the above approach, on-boarding a team is easy: Puppet provisions a new Jenkins instance with its own set of pipelines and jobs, and the history of configuration changes can be tracked in the source repository.
Pipeline.pp: Manifest for a pipeline
However, delivering these pipelines gets hard because you end up with a lot of templates, and each configuration change requires a Jenkins restart, which impacts team productivity.

Delivering pipelines through Puppet is an infrastructure-as-code approach, but although the approach is novel, the disadvantages outweigh the benefits, and Julien leaned towards using Jenkins plugins to deliver pipelines instead.

Pipelines through Jenkins Plugins

Julien talked about two main plugins for realizing pipelines. Both are well known in the community; the novel part is connecting the two together to deliver dynamic pipelines.

Build Flow plugin: defines pipelines through a Groovy DSL, with constructs for parallels, conditionals, retries and rescues.

Job Generator plugin: creates and updates jobs on the fly.

Julien then combined the two: starting jobs (an orchestrator) are created using Build Flow, and subsequent jobs are generated by Job Generator. Using conditional and parallel constructs, he can end up delivering complex pipelines, as sketched below.
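To make the Build Flow side concrete, here is a minimal orchestrator sketch in the plugin's Groovy DSL; all job names and the parameter are hypothetical:

    // run independent jobs in parallel
    parallel (
        { build('drupal-unit-tests') },
        { build('drupal-static-analysis') }
    )
    // retry the packaging job up to three times before failing the flow
    retry(3) {
        build('drupal-package')
    }
    // trigger a downstream (possibly generated) deployment job with a parameter
    build('drupal-deploy', TARGET_ENV: 'uat')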




The above approaches highlight two things:


  • Continuous delivery is becoming the de facto way organizations want to deliver software, and
  • Since Jenkins is the tool of choice for delivering software, it has to evolve and offer first-class constructs to help companies like Inuits deliver pipelines easily.

We at CloudBees have heard the above loud and clear over the last year. Consequently, the workflow work delivered in OSS by Jesse Glick offers these first-class constructs to Jenkins. As this work moves towards a 1.0 in OSS, we will get to the point where the definition of a pipeline will change from


     A pipeline is a chain of Jenkins jobs that are run to fetch, compile, package, run tests and deploy an application

to

     A workflow pipeline is a Jenkins job that describes the flow of software components through multiple stages (and teams) as they make their way from commit to production.
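To give a flavor of the difference, here is an illustrative sketch in the early Workflow Groovy DSL (not taken from the talk); the repository URL, stage names and commands are placeholders:

    node {
        stage 'Checkout'
        git url: 'https://example.com/app.git'

        stage 'Build'
        sh 'mvn -B clean package'

        stage 'Deploy to UAT'
        sh './deploy.sh uat'
    }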
-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter. 




Categories: Companies

Webinar Q&A: Role-Based Access Control for the Enterprise with Jenkins

Thu, 08/28/2014 - 17:29
Thank you to everyone who joined us for our webinar; the recording is now available.

Below are several of the questions we received during the webinar Q&A:

Q: How do you admin the groups? Manually, or is there LDAP involved?
A: You can decide whether to create internal Jenkins users/groups or to import users and groups from your LDAP server. In the latter case you can use the Jenkins LDAP plugin to import them, but you still need to manage them manually in Jenkins. Each external group has to match an internal Jenkins group so that you can assign a role to it. Roles are defined in Jenkins regardless of the origin of the users and groups (internal or external).

Q: Is there any setting for views, instead of folders? Are the RBAC settings available for views?
A: In short, yes. The RBAC plugin supports setting group definitions on the following objects:
  • Jenkins itself
  • Jobs
  • Maven modules
  • Slaves
  • Views
  • Folders

Q: Are folders the only way to associate multiple Jenkins jobs with the same group?
A: The standard way in which you should associate multiple Jenkins jobs with the same group is through folders. However, remember that you can also create groups at job level.
Q: If we convert from the open source 'role-based strategy' plugin to this role-based plugin, will it translate the roles automatically to the new plugin?
A: Roles are not converted automatically, so you will need to set up your new rules with the RBAC plugin.
Q: Who do we contact for more questions?
A: You can contact us at the public email address users@cloudbees.com.
Q: How do you create those folders in Jenkins? Is this part of the RBAC plugin, too?
A: Folders are created using the Folder plugin, which allows users to create new “jobs” of the type “folder.” The Role-Based Access Control plugin then integrates with it by allowing administrators to set folder-level security roles and let child folders inherit their parent folders’ roles.
Q: Is there a permission that allows a user to see the test console steps (the bash commands that are executed)?
A: You can define a role that only has read permission for a job configuration. In this way, users with that role will only be able to read the bash commands used in the job.
Q: Do you provide any sort of API to work with these security settings programmatically?
A: At this time, there is no API to work with these security settings.
Q: Are there any security issues that one needs to take into consideration?
A: When configuring permissions for roles, be aware of the implications of allowing users of different teams or projects to have access to all of the jobs in a Jenkins instance. This open setup can occur when a role is granted overall read/execute/configure permissions.
While an administrative role would obviously require such overall access, consider limiting further assignment of those permissions to only trusted groups, like team/division leads.
Such an open setup would allow users with overall permissions to see information that you might rather restrict from them - like access to any secret projects, workspaces, credentials or scripts. 


Overall configure permissions would also allow users to modify any setting on the Jenkins master.

---


Valentina Armenise
Solutions Architect
CloudBees

Follow Valentina on Twitter.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.




Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. Read her Meet the Bees blog post and follow her on Twitter.
Categories: Companies

Configuration as Code: The Job DSL Plugin

Tue, 08/26/2014 - 17:16
This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees, about a presentation given at JUC Berlin in which Daniel Spilker of CoreMedia AG, maintainer of the plugin, shows how to configure a Jenkins job without using the GUI.

At JUC 2014 in Berlin, Daniel Spilker of CoreMedia presented the Job DSL plugin and showed how the configuration-as-code approach can simplify the orchestration of complex workflow pipelines.

The goal of the plugin is to create new pipelines quickly and easily, using your preferred tools to “code” the configuration, as opposed to using different plugins and jobs to set up complex workflows through the GUI.

Indeed, the DSL plugin defines a new way to describe a Jenkins job configuration: a piece of Groovy code stored in a single file.

After installing the plugin, a new option will be available in the list of build steps: “Process Job DSLs,” which allows you to parse the DSL script.

The descriptive Groovy file can either be entered into Jenkins manually or stored in SCM and pulled into a specific job.

The jobs whose configuration is described in the DSL script are created on the fly, so the user is responsible for maintaining only the Groovy script.
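As a minimal sketch of what such a script can look like (the job name, repository URL, trigger and commands are placeholders, and the exact syntax varies by plugin version):

    job('example-build') {
        scm {
            git('https://example.com/example.git', 'master')
        }
        triggers {
            scm('H/15 * * * *')   // poll SCM roughly every 15 minutes
        }
        steps {
            shell('mvn -B clean verify')
        }
        publishers {
            archiveJunit('target/surefire-reports/*.xml')
        }
    }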






Each DSL element used in the Groovy script matches a specific plugin's functionality. The community is continuously releasing new DSL elements in order to cover as many plugins as possible.





Of course, given the 900+ plugins available today and the frequency of new plugin releases, it is practically impossible for the DSL plugin to cover every use case.

Herein lies the strength of this plugin: although each Jenkins plugin needs its own DSL element to be covered, you can create your own custom DSL element using the configure method, which gives direct access to the underlying XML of the Jenkins config.xml. This means that you can use the DSL plugin to code any configuration, even if a predefined DSL element is not available, as in the sketch below.
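For example, here is a hedged sketch of a configure block; the property class is hypothetical, standing in for any plugin that lacks a dedicated DSL element:

    job('example-build') {
        configure { project ->
            // the '/' operator navigates to (or creates) child nodes of config.xml
            project / 'properties' / 'org.example.jenkins.ExampleJobProperty' {
                enabled(true)
                threshold(10)
            }
        }
    }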

The plugin also gives you the possibility to introduce custom DSL commands.

Given the flexibility of the DSL plugin and how quickly the community releases new DSL elements (a new feature every six weeks), this plugin seems to be a really interesting way to put Jenkins configuration into code.

Want to know more? Refer to:





Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.


Categories: Companies