
CloudBees' Blog - Continuous Integration in the Cloud
CloudBees provides an enterprise Continuous Delivery Platform that accelerates the software development, integration and deployment processes. Building on the power of Jenkins CI, CloudBees enables you to adopt continuous delivery incrementally or organization-wide, supporting on-premise, cloud and hybrid environments.

Webinar Q&A: Role-Based Access Control for the Enterprise with Jenkins

Thu, 08/28/2014 - 17:29
Thank you to everyone who joined us for our webinar; the recording is now available.

Below are several of the questions we received during the webinar Q&A:

Q: How do you administer the groups? Manually, or is there LDAP involved?
A: You can decide whether you want to create internal Jenkins users/groups or import users and groups from your LDAP server. In the latter case, you can use the Jenkins LDAP plugin to import them, but you still need to manage them manually in Jenkins. Each external group has to match an internal Jenkins group so that you can assign a role to it. Roles are defined in Jenkins regardless of the origin of users and groups (internal or external).

Q: Is there any setting for views, instead of folders? Are the RBAC settings available for views?
A: In short, yes. The RBAC plugin supports setting group definitions on the following objects:
  • Jenkins itself
  • Jobs
  • Maven modules
  • Slaves
  • Views
  • Folders

Q: Are folders the only way to associate multiple Jenkins jobs with the same group?
A: The standard way to associate multiple Jenkins jobs with the same group is through folders. However, remember that you can also create groups at the job level.
Q: If we convert from the open source 'role-based strategy' plugin to this role-based plugin, will it translate the roles automatically to the new plugin?
A: Roles are not converted automatically, so you will need to set up your new roles with the RBAC plugin.
Q: Who do we contact for more questions?
A: You can contact us at the public email address users@cloudbees.com.
Q: How do you create those folders in Jenkins? Is this part of the RBAC plugin, too?
A: Folders are created using the Folder plugin. The Folder plugin allows users to create new “jobs” of the type “folder.” The Role-Based Access Control plugin then integrates with this plugin by allowing administrators to set folder-level security roles and let child folders inherit parent folders’ roles.
Q: Is there a permission that allows a user to see the test console steps (the bash commands that are executed)?
A: You can define a role that only has read permission for a job configuration. In this way, users with that role will only be able to read the bash commands used in the job.
Q: Do you provide any sort of API to work with these security settings programmatically?
A: At this time, there is no API to work with these security settings.
Q: Are there any security issues that one needs to take into consideration?
A: When configuring permissions for roles, be aware of the implications of allowing users of different teams or projects to have access to all of the jobs in a Jenkins instance. This open setup can occur when a role is granted overall read/execute/configure permissions.
While an administrative role would obviously require such overall access, consider limiting further assignment of those permissions to only trusted groups, like team/division leads.
Such an open setup would allow users with overall permissions to see information that you might rather restrict from them - like access to any secret projects, workspaces, credentials or scripts. 


Overall configure permissions would also allow users to modify any setting on the Jenkins master.

---


Valentina Armenise
Solutions Architect
CloudBees

Follow Valentina on Twitter.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.




Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. Read her Meet the Bees blog post and follow her on Twitter.

Configuration as Code: The Job DSL Plugin

Tue, 08/26/2014 - 17:16
This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees. In this presentation, given at JUC Berlin, Daniel Spilker of CoreMedia AG, maintainer of the plugin, shows how to configure a Jenkins job without using the GUI.

At JUC 2014 in Berlin, Daniel Spilker of CoreMedia presented the Job DSL plugin and showed how the configuration-as-code approach can simplify the orchestration of complex workflow pipelines.

The goal of the plugin is to create new pipelines quickly and easily, using your preferred tools to “code” the configuration, as opposed to using different plugins and jobs to set up complex workflows through the GUI.

The DSL plugin defines a new way to describe a Jenkins job configuration: a piece of Groovy code stored in a single file.

After installing the plugin, a new option becomes available in the list of build steps, “Process Job DSLs,” which allows you to parse the DSL script.

The descriptive Groovy file can either be entered in Jenkins manually or stored in SCM and pulled into a specific job.

The jobs whose configuration is described in the DSL script are created on the fly, so the user is responsible for maintaining only the Groovy script.
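To make this concrete, below is a minimal seed-script sketch in the DSL syntax of the time; the job name and repository URL are placeholders, not anything from Daniel's talk:

```groovy
// Minimal Job DSL sketch: one freestyle job that checks out a Git repository,
// polls SCM every 15 minutes and runs a Maven build. Names and URL are made up.
job {
    name 'example-app-build'
    scm {
        git('https://github.com/example/example-app.git')
    }
    triggers {
        scm('H/15 * * * *')
    }
    steps {
        maven('clean test')
    }
    publishers {
        archiveJunit('target/surefire-reports/*.xml')
    }
}
```

Running the seed job (re)creates example-app-build, and any later change to the script simply regenerates the job.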






Each DSL element used in the groovy script matches a specific plugin functionality. The community is continuously releasing new DSL elements in order to be able to cover as many plugins as possible.





Of course, given the 900+ plugins available today and the frequency of new plugin releases, it is practically impossible for the DSL plugin to cover every use case.

Herein lies the strength of this plugin: although each Jenkins plugin needs its own DSL element, you can create your own custom DSL element using the configure method, which gives direct access to the underlying XML of the Jenkins config.xml. This means that you can use the DSL plugin to code any configuration, even if a predefined DSL element is not available.
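For instance, here is a hedged sketch of a configure block that enables a build wrapper for which no dedicated DSL element is assumed to exist; the job name is a placeholder and the Timestamper class name is only illustrative:

```groovy
// Sketch only: drop down to the raw config.xml when no DSL element covers a plugin.
// 'project' is the groovy.util.Node for the job's config.xml root; the class name
// below is illustrative and must match whatever plugin you actually want to enable.
job {
    name 'example-app-build-with-timestamps'
    steps {
        shell('echo building...')
    }
    configure { project ->
        project / 'buildWrappers' / 'hudson.plugins.timestamper.TimestamperBuildWrapper'
    }
}
```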

The plugin also gives you the possibility to introduce custom DSL commands.

Given the flexibility of the DSL plugin, and how fast the community delivers new DSL elements (a new feature every six weeks), this plugin seems to be a really interesting way to put Jenkins configuration into code.

Want to know more? Refer to:





Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.



Integrated Pipelines with Jenkins CI

Wed, 08/20/2014 - 15:55
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Félix Belzunce, solutions architect, CloudBees about a presentation given by Mark Rendell, Accenture, at JUC Berlin.

Integrated Pipelines is a pattern that Mark Rendell uses at Accenture to reduce the complexity of integrating different packages when they come from different source control repositories.

The image below, which was one of the slides Mark presented, represents the problem of building several packages that will need to be integrated at some point. Which build version to use, how to manage the control flow and what exactly to release are the main pain points when you are working on such an integration.


Mark proposes a solution where you create not only a CI pipeline but also an integration pipeline to address the problem. To stop displaying all the downstream jobs inside the pipeline, Mark uses a Groovy script. For deploying the right version of the application, several approaches could be used: Maven, Nexus or even a simple plain text file.


The pattern can scale up, but using this same concept for microservices could indeed be a big challenge, as the number of pipelines scales up significantly. As Mark pointed out, it can be applied not only to microservices or applications: the same concept in Jenkins can also be used when you do continuous delivery to manage your infrastructure.

You might use similar job configurations across your different pipelines. The CloudBees Templates plugin is useful for templatizing your different jobs, allowing you to save time and making the process more reliable. It also lets you make a one-time modification in the template that is automatically pushed to all the jobs, without going individually from one job to another.

View the slides and video from this talk here.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.

Webinar Q&A: "Scaling Jenkins in the Enterprise"

Thu, 08/14/2014 - 22:13
Thank you to everyone who joined us for our webinar; the recording is now available.

Below are some of the questions we received during the webinar:

Q: How do you implement HA on the data layer (jobs)?  Do you have the data hosted on a network drive?

A: Yes - the 2 masters (primary and failover) share a filesystem visible to both over a network. You can read about HA setup here.

Q: I would like to know how to have a different UI instead of Jenkins UI. If I want to customize the Jenkins UI what needs to be done?

A: There are plugins in the open source community that offer customizable UIs for Jenkins: Simple Theme Plugin is one popular example.

Q: I want to have a new UI for Jenkins. I want to limit certain things for the Jenkins user.

A: Interesting. What types of things? A lot of the Jenkins Enterprise plugins allow admins to exercise stricter limits on different roles' access to certain functions in Jenkins, whether that be through templating or role-based access control with folders. The Jenkins Enterprise templates also allow you to “hide” some configuration parameters.

Q: Let's take a simple example. I want to have a very simple UI for a parameterized build where a user can submit the SRC path and the build script name. He submits that job by specifying the above two values. How can we have a very simple UI instead of the Jenkins UI?

A: Okay - this is exactly the use case that the job template was designed for. See the last image in the job template tutorial.

Q: Looks like it will work. How can I get rid of the left-hand Jenkins menu?

A: You can remove most of the options in that menu with the Role-Based Access Control plugin - you can remove certain roles' ability to create new jobs, configure the system, kick off builds, delete projects, and see changes/the workspace, etc., which will remove almost all of the options in that menu.

Q: We use the open source version of Jenkins and we have been facing an issue with parsing the console log. We use curl, and there is a limit of 10,000 lines for the console text that is displayed. Will this enterprise edition handle that issue?

A: It sounds like you're seeing Run.doConsoleText being truncated, though there shouldn't be such a line limit - I just checked the sources and it looks like it sends the full log, regardless of size.
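For reference, the complete log of a build can also be fetched over HTTP from its /consoleText URL; a small Groovy sketch follows, in which the host, job name and build number are placeholders and a secured instance would additionally need username/API-token authentication:

```groovy
// Fetch the full console log of build #42 of 'my-job' (placeholder names).
def url = new URL('https://jenkins.example.com/job/my-job/42/consoleText')
def consoleLog = url.text
println "Fetched ${consoleLog.readLines().size()} lines"
```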

Q: Is there a customizable workflow capability to allow me to configure some change control and release management process for enterprise?
A: The Jenkins community is currently developing a workflow plugin (0.1-beta at the moment). Jesse Glick, engineer at CloudBees, did a presentation about it at the '14 Boston JUC. CloudBees is working on enterprise workflow features such as checkpoints as a part of Jenkins Enterprise by CloudBees.
Q: Is there any framework/processes/checklists that you follow to ensure the consistency/security of multi-tenant slaves across multiple masters?
A: Please see the recording of the webinar for the answer
Q: Is there a way to version control job configuration?

A: Yes - CloudBees offers a Backup Plugin that allows you to store your job configs in a tarball. You can set how long to retain these configs and how many to keep, just as you would for a job's run history. You can also use the Jenkins Job Configuration History plugin.

Q: Is this backup plugin available with the open source version of Jenkins?

A: The backup plugin that I'm speaking of is only a part of the Jenkins Enterprise package of plugins.

Q: How is environment-specific deployment done through the same project configuration in Jenkins?
A: You can use CloudBees' Template plugin to define projects and then have a job template take environment variables pulled from a parent folder with Groovy scripting, or take them from user input using the parameterized builds plugin: http://developer-blog.cloudbees.com/2013/07/jenkins-template-plugin-and-build.html
http://jenkins-enterprise.cloudbees.com/docs/user-guide-bundle/template-sect-job.html

Q: Do we need to purchase additional licenses if we want to set up an upgrade/evaluate validation master and slaves, as you recommend?
A: For testing environments, CloudBees subscription pricing is different - it is cheaper. For evaluation, I recommend just doing a trial of both to see which fits your needs better. You can request a 30-day trial of Jenkins Enterprise here.
Q: Is this LDAP group access only available in the enterprise version? I am asking if I can make it so that some users can only see the jobs of their group.
A: Jenkins OSS supports LDAP authentication. The Role Based Access Control authorization provided by Jenkins Enterprise by CloudBees allows you to apply RBAC security on groups defined in LDAP. You can then put the jobs in folders using the Folders/Folders Plus Plugin and assign read/write/etc permissions over those folders using the CloudBees RBAC plugin.
Q: Another question. What's the difference between having dedicated slaves with your plugin/addon versus just adding another slave with another label?

A: Dedicated slaves cannot be shared with another master - only with the master they have been assigned to - whereas shared slaves with just labels are still open for use by any masters that can connect to them.

Q: At this moment my organization is planning to implement open source Jenkins. Does CloudBees provide training or ad hoc consultancy for the client environment in order to implement Jenkins with best practices, saving time, money and resources?
A: CloudBees service partners provide consulting and training. The training program is written by CloudBees.
Q: Can I use LDAP for authentication, but create and manage groups (and membership) locally in Jenkins? For us, creating groups and managing them in the corporate LDAP is a very heavyweight process (plus, it supports only static LDAP groups, not dynamic). Clarification - we have a corporate LDAP and want to use it for authentication. I do not want to use LDAP to host/manage groups; I want to do that in Jenkins, without using LDAP in any way for groups.
A: Yes, with the Role Based Access Control security provided by Jenkins Enterprise by CloudBees, you can declare users in LDAP and declare groups and associate users in Jenkins. A Jenkins group can combine users and groups declared in LDAP. You can define users in your authentication backend (LDAP, Active Directory, Jenkins internal user database, OpenID SSO, Google Apps SSO ...) and manage security groups in Jenkins with the CloudBees RBAC plugin.
Q: Is the controlled slaves feature available in the Enterprise version only?
A: Yes - this is a feature of the CloudBees Folders Plus plugin.
Q: Can I start implementing Jenkins Operations Center as a monitoring layer for teams that have set up with Jenkins OSS? Over time I would move them to Jenkins Enterprise, but we need to progress in small iterative stages.
A: Jenkins OSS masters must be converted into Jenkins Enterprise by CloudBees masters. You can do this either by installing the package provided by CloudBees or by installing the “Enterprise by CloudBees” plugin available in the update center of your Jenkins console. Please remember that a Jenkins OSS master must be upgraded to the LTS or to the ‘tip’ before installing the “Enterprise by CloudBees” plugin.
Q: What is the purpose of the HA proxy?
A: HAProxy is an example of a load balancer used to set up high availability of Jenkins Enterprise by CloudBees (JEBC) masters (it could also be another load balancer such as BIG-IP F5, Cisco ...). More details are available on the JEBC High Availability page and in the JEBC User Guide / High Availability.
Q: When builds run on slaves and Jenkins Operations Center manages them, what is the use of masters?
A: JOC is the orchestrator. It manages which slaves are in the pool, which masters need a slave, and which masters are connected. The masters are still where the jobs/workflows are configured and where the results are published.
Q: Is there a functionality for a preflight/proof build - i.e. the build with the local Dev changes grabbed from developer's desktop?
A: Jenkins Enterprise by CloudBees offers the Validated Merge plugin that allows the developer to validate their code before pushing it to the source code repository.
Q: Currently we are using the OSS version with one master and 18 slaves with 60 executors and are facing performance issues; as a workaround we bounce the server once a week. Any clues for debugging the issue?
A: We would need more information to help diagnose performance problems, but with the CloudBees Support plugin in conjunction with a CloudBees support plan, you can always create a support bundle and send it to our support team along with a description of your performance problem.
Q: How do I create dummy users and assign passwords (not using LDAP, AD or any security tool) just for testing my trial Jenkins jobs? (Jenkins open source)
A: Use the "Mock Security Realm" plugin and add dummy users with the syntax "username groupname" under the Global Security Settings
Q: Can you have shared slave groups?  For example, slave group "A"  and within it have sub group "A-Linux5", "A-Linux6", etc...
A: Yes, you can do this with folders in Jenkins Operations Center. A detailed tutorial is available here.
For example, with groups “us-east” and “us-west”, you could create folders “us-east” and “us-west”:
  • In the “us-west” folder, you would declare the masters and slaves of the West coast (e.g. san-jose-master-1, palo-alto-master-1, san-jose-slave-linux-1, san-francisco-slave-linux-1 ...).
  • In the “us-east” folder, you would declare the masters and slaves of the East coast (e.g. nyc-master-1 ...).
Thanks to this, the West coast masters will share the West coast slaves. More subtle scenarios can be implemented with hierarchies of folders, as explained in the tutorial.

--- Tracy Kennedy & Cyrille Le Clerc

Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. Read her Meet the Bees blog post and follow her on Twitter.


Cyrille Le Clerc
Elite Architect
CloudBees

Cyrille Le Clerc is an elite architect at CloudBees, with more than 12 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.

Building Resilient Jenkins Infrastructure

Thu, 08/14/2014 - 15:22
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP product management, CloudBees about a presentation given by Kohsuke Kawaguchi from CloudBees at JUC Boston.

A talk by Kohsuke Kawaguchi is always exciting. It gets triply exciting when his talk bundles three in one. 
Scaling Jenkins horizontally
Kohsuke outlined how organizations scale, either vertically or organically (numerous Jenkins masters abound in the organization). He made the case that the way forward is to scale horizontally: in this approach a Jenkins Operations Center by CloudBees master manages multiple Jenkins masters in the organization. This approach helps organizations share resources (slaves) and have a unified security model through the role-based access control plugin from CloudBees.
Jenkins Operations Center by CloudBees
This architecture lets administrators maintain a few big Jenkins masters that can be managed by the operations center. This effectively builds an infrastructure that fails less and recovers from failures faster.


Right-sized Jenkins masters
Bursting to the cloud (through CloudBees DEV@cloud)
He then switched gears to address a use case where teams can start using cloud resources when they run out of build capacity on their local build farm. He walked through the underlying technological pieces built at CloudBees using LXC.
CloudBursting: Supported by LXC containers on CloudBees
The neat thing about the above technology is that we have used it to offer OS X build slaves in the cloud. We have an article [2] that highlights how to use cloud bursting with CloudBees. The key advantage is that users pay for builds by the minute.
Traceability
Organizations are looking at continuous delivery to deliver software often. They often use Jenkins to build binaries and use tools such as Puppet and Chef to deploy those binaries to production. However, if something does go wrong in the production environment, it is quite a challenge to tie it back to the commit that caused the issue. The traceability work in Jenkins ties up this loose end: post-deployment, Puppet/Chef notifies a Jenkins plugin, and Jenkins calculates the artifact's fingerprint and maintains it in its internal database. This fingerprint can be used to track where the commits have landed and help diagnose failures faster. We have an article [3] that describes how to set this up with Puppet.
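A Jenkins fingerprint is essentially the MD5 checksum of an artifact, so a deployment tool can compute the same value to correlate what it deployed with what Jenkins built. A minimal Groovy sketch, with a placeholder artifact path:

```groovy
import java.security.MessageDigest

// Compute the MD5 checksum Jenkins uses as a fingerprint (the path is illustrative).
String fingerprint(File artifact) {
    MessageDigest.getInstance('MD5').digest(artifact.bytes).encodeHex().toString()
}

println fingerprint(new File('target/app.war'))
```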

Fingerprints flow through Jenkins, Puppet and Chef
[1] Jenkins Operations Center by CloudBees
[2] Bursting to the cloud
[3] Traceability example
-- Harpreet Singh
www.cloudbees.com
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter



Automation, Innovation and Continuous Delivery - Mario Cruz

Tue, 08/12/2014 - 18:08
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steve Harris, SVP Products, CloudBees about a presentation given by Mario Cruz of Choose Digital at JUC Boston.

Choose Digital is a longstanding CloudBees customer, and Mario Cruz, founder and CTO has been a vocal supporter of CloudBees and continuous delivery. So, it was fun to have a chance to hear Mario talk about how they use continuous delivery to fuel innovation at Choose Digital at the recent Jenkins User Conference in Boston (slides, video).

Mario began by talking about what Choose Digital does as a business. They host millions of music downloads, along with movies, TV shows, and eBooks, that they offer as a service in a kind of "white label iTunes". Choose Digital's service is used by companies like United, Marriott and Skymall to offer rewards. Pretty much all of this runs on CloudBees and is delivered using Jenkins as their continuous delivery engine.

The thesis of Mario's presentation is that innovation is really the next evolution of continuous delivery. From my perspective, this is probably the biggest strategic advantage a continuous delivery organization gets from its investment. Still, it's hard to quantify, and it can come across as marketing hot air or the search for unicorns. Being able to experiment cheaply and quickly, with low risk, and have an ability to make data-driven product choices are huge advantages that a continuous delivery shop has over its more traditional competition. Fortunately, Mario is able to speak from experience!

To set the stage, he covered Choose Digital's automation and testing processes. They are a complete continuous delivery shop - every check-in kicks off a set of tests, and if successful, deploys to production. Everything is automated using Jenkins and deployed to CloudBees. They are constantly pushing, constantly building, and their production systems are "never more than a couple of hours behind". The rest of Mario's talk was about the practices, both operational and cultural, they have used to get to this continuous delivery nirvana. Some of Choose Digital's practices include:

  • Developer control. They follow the Amazon "write the press release first" style. Very short specs identify what they want to achieve, but the developer is given control over how to make that happen; i.e., specs identify the "what" not the "how", so that developers are in control and empowered. But, this requires...
  • Trust. Their culture and processes disincentivize the need for heroes, and force a degree of excellence from everyone. For that to work, they need a...
  • Blameless culture. Tools like extensive logging and monitoring give everyone what they need to find and fix issues quickly and efficiently.
  • Core not context. They ruthlessly offload anything that is not core to their business. Mario talked about avoiding "smart people disease", where smart people are attracted to hard problem solving, even if it's not what they should be doing. By offloading infrastructure, and even running of Jenkins, to service providers who are specialists in their area, Choose Digital has been able to stay hyper-focused on their business and quickly improve their offerings. In particular, that means...
  • No heavy lifting. Just because you're capable and might even be great at some of the heavy lifting to support infrastructure or some technical area (like search), that's not what you should be doing if it's not a core part of the business. This is one of the main reasons Choose Digital is using CloudBees and AWS services.
  • Responsibility. If you write code at Choose Digital, you are on call to support it when it's deployed. To me the goodness enabled by this simple rule is one of the biggest wins of the as-a-service continuous delivery model (everything at Choose Digital is API-accessed by their customers).
  • Use feature flags. Mario went into some detail about how Choose Digital uses feature flags to enable them to deliver incrementally, experiment, do A-B testing, and even interact with specific customers directly and in proofs of concept (a generic sketch of the idea follows this list).
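Here is a generic sketch of the feature-flag idea, not Choose Digital's actual implementation; the flag name and the environment-variable flag store are invented for illustration:

```groovy
// Generic feature-flag sketch: flags come from configuration (here an environment
// variable), and code branches on them so unfinished features can ship dark and be
// enabled per customer or per A-B experiment.
class FeatureFlags {
    private final Set<String> enabled
    FeatureFlags(Collection<String> enabled) { this.enabled = (enabled ?: []) as Set }
    boolean isOn(String flag) { enabled.contains(flag) }
}

def flags = new FeatureFlags(System.getenv('ENABLED_FLAGS')?.tokenize(','))

if (flags.isOn('new-checkout-flow')) {
    println 'Serving the experimental checkout flow'
} else {
    println 'Serving the existing checkout flow'
}
```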

Mario is a quotable guy, but I'd say the money quote of his presentation was: "Once you make every developer in the room part of what makes the company's bottom line move forward, they'll start thinking like that." In a lot of ways, that's what continuous delivery is all about. It's great to have customers who walk the walk and talk the talk. Thanks, Mario!



Steven Harris is senior vice president of products at CloudBees. 
Follow Steve on Twitter.

Meet the Bees: Tracy Kennedy

Mon, 08/11/2014 - 16:30

At CloudBees, we have a lot of seriously talented developers. They work hard behind the scenes to keep the CloudBees continuous delivery solutions (both cloud and on-premise) up-to-date with all the latest and greatest technologies, gizmos and overall stuff that makes it easy for you to develop amazing software.
In this Meet the Bees post, we buzz over to our Richmond office to catch up with Tracy Kennedy, a solutions architect at CloudBees.

Tracy has a bit of an eccentric background. In college, she studied journalism and, in 2010, interned for the investigative unit of NBC Nightly News. She won a Hearst Award for a report she did about her state’s delegates browsing Facebook and shopping during one of the last legislative sessions of the season. She had several of her stories published in newspapers around the state. Sounds like the beginnings of a great journalistic career, right?

Well, by the time she graduated, Tracy ended up being completely burned out and very cynical about the news industry. Instead of trying to get a job in journalism, she wanted to make a career change.

Tracy's dad was a programmer and he offered to pay for her to study computer science in a post-bachelor’s program at her local university. He had wanted her to study computer science when she first started college, but idealistic Tracy wanted to first save the world with her hard-hitting reporting skills. She now took him up on his offer, and surprisingly, found she had a knack for technology.

Tracy landed a job at a small web development shop in Richmond as a QA and documentation contractor. The work tickled her journalistic skills as well as her newly budding computer science skills and she had a great opportunity to be mentored by some really talented web developers and other technical folks while she was there.

By the time Tracy felt ready to look for more permanent work, she had finished some hobby projects of her own that furthered her programming skills better than any class she had taken. It was also at that time that Mike Lambert, VP of Sales - Americas at CloudBees, was looking for someone with Tracy's skills and experience.
You can follow Tracy on Twitter: @Tracy_Kennedy
Who are you? What is your role at CloudBees?
My name is Tracy Kennedy and I’m a solutions architect/sherpa at CloudBees.

My primary role is to reach out to customers on our continuous delivery cloud platform and assist them in on-boarding and learning how to use the platform to its fullest potential. However, I work on other things, too. My role actually varies wildly; it really just depends on what the current needs of the organization are.

Tracy with her dog Oliver.
I’ve dabbled in some light marketing by writing emails for (and sometimes creating) customer communication campaigns, done lots of QA work when debugging our automated sherpa funnel campaign and do a bit of sales engineering as well, since I’m physically located in the Richmond sales office. I also write some of our documentation as I find the time and identify the need for it.

Lately, I’ve also been spending a good chunk of my week working on updating our Jenkins training materials for use by our CloudBees Service Partners and laying the foundation for future sherpa outreach campaigns.

When those projects are done, I plan on going back to work on a Selenium bot that will automate a lot of my weekly tasks involving the collection of customer outreach statistics. I’m hoping that bot will give me more free time to spend learning about Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees - our on-premise Jenkins solutions, and to create some ClickStacks for RUN@cloud.

What makes CloudBees different from other PaaS and cloud computing companies?
CloudBees has a really, really excellent "Jenkins story," as the business guys like to say, and that story is almost like a Dr. Seuss book in its elegant simplicity. Ahem:

Not only is Tracy a poet, but she is a budding actress! Here she is as an extra in a Lifetime movie.

I can use Jenkins on DEV@cloud
I can hide Jenkins from a crowd

I can load Jenkins to on-premise machines
I can access Jenkins by many means

I can use Jenkins to group my jobs
I can use Jenkins to change templated gobs

I can use Jenkins to build mobile apps
I can use Jenkins to check code for cracks

I can keep Jenkins up when a master is down
I can “rent” slaves to Jenkins instances all around

I can use Jenkins here or there, I can use Jenkins anywhere.

Don’t worry; I have no plans on quitting my day job to become a poet laureate!

What are CloudBees customers like? What does a typical day look like for you?
CloudBees PaaS customers can range from university students to enterprise consultants. It’s also not uncommon to see old school web gurus open an account and “play around” with it in an attempt to understand this crazy new cloud/PaaS sensation.
I’ve even seen some non-computer science engineers on our platform who are just trying to learn how to program, and those are my favorite customers to interact with since they’re almost always very bright and seem to have an unparalleled respect for the art of creating web applications. It’s always a great delight to be able to “sherpa” them along on their web dev journey and to see them succeed as a result.
As for my typical day, I actually keep track of each of my days’ activities in a Google Calendar, so I can give you a pretty accurate timeline of my average day:

8:30 or 8:45 am - Roll into the Richmond office, grab some coffee. Start reading emails that I received overnight and start replying as needed. Check the engineering chat for any callouts to me and check Skype for any missed messages.

9:30 am - Either start responding to customer emails or start working on whatever the major project of the day is. If it’s something serious or due ASAP, I throw my headphones on to help me concentrate and tune out the sales calls going on around me.

12:00 pm - Lunch at my desk while I read articles on either arstechnica.com, theatlantic.com, or one of my local news sites.

1:00 pm - Usually by this point, someone will have asked me to review an email or answer a potential customer’s question, so this is when I start working on answering those requests.

Tracy after doing the CrossFit workout "Cindy XXX."



3:00 pm - Start moving forward a non-urgent project by contacting the appropriate parties or doing the relevant research.

The end of my day varies depending on the day of the week:
  • Monday/Wednesday - 4:00 pm  - Leave to go to class
  • Tuesday/Thursday - 5 pm  - Leave for the gym
  • Friday - 5:30 pm  - Leave for home

Tracy's motorcycle: a 1979 Honda CM400
In my spare time, video games are a fun escape for me and they give me a cheap way of tickling my desire to see new places. Sometimes I spend my Friday nights playing as a zombie-apocalypse survivor in DayZ and exploring a pseudo-Czech Republic with nothing but a fireman’s axe to protect me from the zombie hordes.

On the weekends I spend my time playing catch-up on chores, hanging out with my awesome and super-spoiled doggie and going on mini-adventures with my boyfriend. Richmond has a lot of really beautiful parks, and we hike through one of them each weekend if the weather’s conducive to it.

When I can get more spare time during the week, I plan on finishing restoring my motorcycle and actually riding it, renovating my home office into a gigantic closet for all of my shoes and girly things, and learning how to self-service my car.



What is your favorite form of social media and why?
Twitter -- I enjoy the simplicity of it, how well it works even when my wi-fi or cellular data connection is terrible, and how easy it makes following my favorite news outlets.
Something we all have in common these days is the constant use of technology. What’s your favorite gadget and why?
While I’d love to name some clever or obscure gadget that will blow everyone’s mind, the truth is that I’d be completely lost without my Android smartphone. I use it to manage my time via Google Calendar, check all 10 million of my email accounts with some ease and stay up to date on any breaking news events. Google Maps also keeps me from getting hopelessly lost when driving outside of my usual routes.
Favorite Game of Thrones character? Why is this character your favorite?
Sansa Stark, Game of Thrones
Please note that book-wise I’m only on “Storm of Swords” and that I’m completely caught up on the HBO show, so I’m only naming my favorite character based on what I’ve seen and read so far. Some light spoilers below:

While I know she’s not the most popular character, I really like Sansa Stark. Sure, she’s not the typical heroine who wields swords or always does the right thing, but that’s part of her appeal to me. I like to root for the underdogs, and here we have this flawed teenager who’s struggling to survive her unwitting entanglement in an incredibly dangerous political game. She has no fighting skills, no political leverage beyond her name, and no true allies, and she’s trapped in a city with and by her psychopathic ex-fiancé whose favorite pastime is to literally torture her.

The odds of Sansa surviving such a situation seem very slim, and yet despite her naïveté, she’s managing to do just that while the more conventional “heroes” of the story are dropping like flies. I could very well see her learning lessons from the fallen’s mistakes and applying them to any leadership roles she takes on in the future. Is she perhaps a future Queen of the North? I wouldn’t discount it.
Sansa is a bright girl with the right name and the right disposition to gracefully handle any misfortunes thrown her way, and aren’t grace, intelligence and a noble lineage all the right traits for a queen? I think so, but we’ll just have to see if George R.R. Martin agrees.

Amadeus Contribution to the Jenkins Literate Plugin and the Plugin's Value

Thu, 08/07/2014 - 17:28
This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees, about a presentation called "Going Literate in Amadeus" given by Vincent Latombe, Amadeus at JUC Berlin.
The Literate plugin is built on the literate programming concept introduced by Donald Knuth: the idea that a program can be described in natural language, such as English, rather than in a programming language. The description is translated automatically into source code in a process that is completely transparent to users.
The Literate plugin is built on top of two APIs:
  • Literate API, responsible for translating the descriptive language into source code
  • Branch API, which is the toolkit to handle multi-branch projects:
    • SCM API - provides the capability to interact with multiple heads of the repository
    • capability to tag some branches as untrusted and skip those
    • capability to discard builds
    • foundation for multi-branch freestyle project
    • foundation for multi-branch template project

Basically, the Literate plugin lets you describe your environment, together with the build steps required to build your job, in a simple file (either a marker file or the README.md). The Literate plugin queries the repository looking for one or more branches that contain the descriptive file. If more than one branch contains this file (and is therefore eligible to be built in a literate way) and no specific branch is specified in the job, the branches are built in parallel. This means that you can create multi-branch projects where each branch requires different build steps or simply different environments.
The use of the Literate plugin becomes quite interesting when you need to define templates with customizable variables or to whitelist build sections.
Amadeus has invested resources in Jenkins in order to accomplish continuous integration. Over the years they have specialized in the use of the Literate plugin to make the creation of jobs easier and have become a contributor to the plugin.
Vincent Latombe presenting his talk at JUC Berlin.
Click here to watch the video.
And click here to see the slides.
In particular, Amadeus invested resources in enhancing the plugin's usage experience by introducing the use of YAML, a descriptive language that leaves less room for error than the traditional Markdown, which is too open-ended.
How do we see the Literate plugin today?
With the introduction of CI, there are ongoing conversations about the best approach to merging and pulling changes to repositories.
Some people support the “feature branching” approach, where each new feature is a new branch and is committed to the mainline only when ready to be released in order to provide isolation among branches and stability of the trunk.
Although this approach is criticized by many who think that it is too risky to commit the whole new feature at once, it could be the best approach when the new feature is completely isolated from the rest (a completely new module) or in open source projects where a new feature is developed without deadlines and, thus, can take quite a while to be completed.
The Literate plugin works really well with the feature branching approach described above, since it would be possible to define different build steps for each branch and, thus, for each feature.
Also, this approach gets along really well with the concept of continuous delivery, where the main idea is that the trunk has to be continuously shippable into production.
How does it integrate with CD tools?
Today, we’re moving from implementing CI to CD: Jenkins is no longer a tool for developers only, but is now capturing the interest of DevOps.
By using plugins to implement deployment pipelines (e.g., the Build Pipeline plugin, Build Flow plugin and Promotion plugin), Jenkins is able to handle all the phases of the software lifecycle.
The definition of environments and agents to build and deploy to is provided with integration to Puppet and Chef. These tools can be used to describe the configuration of the environment and apply the changes on the target machines before deployment.
At the same time, virtualization technologies that allow you to create software containers, such as Docker, are getting more and more popular.
How could literate builds take part in the CD process?
As said before, one of the things that the Literate plugin simplifies is the definition of multiple environments and build steps in a single file: the build definition is stored in the same SCM as the project being built.
This means that the Literate plugin gets along really well with the infrastructure as code approach and tools like Docker or Puppet where all the necessary files are stored in the SCM. Docker, in particular, could be a good candidate to work with this plugin, since a Docker image is completely described by a single file (the Dockerfile) and it’s totally self-contained in the SCM.
What's next?
Amadeus is looking at adding new features to the plugin in the near future:
  • Integration with GitHub, Bitbucket and Stash pull request support
  • Integration with isolation features (i.e. sandbox commands within the container)

Do you want to know more?



Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.



Automating CD pipelines with Jenkins - Part 2: Infrastructure CI and Deployments with Chef

Tue, 08/05/2014 - 17:56
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a presentation given by Dan Stine, Copyright Clearance Center at JUC Boston.

In a world where developers are constantly churning code changes and Jenkins is building those changes daily, there is also a need to spin up test environments for those builds in an equally fast fashion.

To respond to this need, we’re seeing a movement towards treating “infrastructure as code.” This goes beyond simple BAT files and shell scripts -- instead, “infrastructure as code” means that you can automate the configurations for ALL aspects of your environment, including the infrastructure and the operating system layers, as well as infrastructure orchestration with tools like Chef, Ansible and Puppet.

These tools’ automation scripts are version controlled like the application code, and can even be integrated with the application code itself.

While configuration management tools date back to at least the 1970s, this way of treating infrastructure code like application code is much newer and can be traced to at least CFEngine in the 90s. Even then, these declarative configuration tools didn’t start gaining popularity until late 2011:



Infrastructure CI
This rise of infrastructure code has created a new use case for Jenkins: as a CI tool for an organization's infrastructure.

At the 2014 Boston Jenkins User Conference, Dan Stine of the Copyright Clearance Center presented how he and his organization met this challenge. According to Stine, the Copyright Clearance Center’s platform efforts began back in 2011. They saw “infrastructure as code” as an answer to the plight of their “poor IT ops guy,” who was being forced to deploy and manage everything manually.

Stine compared the IT ops guy to the infamous “Brent” of The Phoenix Project: all of their deployments hinged on him, and he became overwhelmed by the load and became the source of their bottlenecks.

To solve this problem, they set two goals to improve their deployment process:
1. Reduce effort
2. Improve speed, reliability and frequency of deployments

Jenkins and Chef
As for the tools to accomplish this, the organization specifically picked Jenkins and Chef, as they were already familiar and comfortable with Jenkins, and knew both tools had good communities behind them. They also used Jenkins to coordinate with Liquibase to execute schema updates, since Jenkins is a good general purpose job executor.

They installed the Chef client onto nodes they registered on their Chef server. The developers would then write code on their workstations and use tools like Chef’s “knife” to interact with the server.

Their Chef code was stored in GitHub, and they pushed their Cookbooks to the Chef server.

For Jenkins, they would give each application group their own Cookbook CI job and Cookbook release job, which would be run by the same master as the applications’ build jobs. The Cookbook CI jobs ran any time that new infrastructure code was merged.

They also introduced a new class of slaves, which had the required RubyGems installed for the Cookbook jobs and Chef with credentials for the Chef server.

Cookbook CI Jobs and Integration Testing with AWS
The Cookbook CI jobs first run static analysis of the code's syntax with JSON, Ruby and Chef checks, followed by integration testing using the kitchen-ec2 plugin to spin up an EC2 instance in a way that mimics the actual deployment topology for an application.

Each EC2 instance was created from an Amazon Machine Image that was preconfigured with Ruby and Chef, and each instance was tagged for traceability purposes. Stine explained that they would also run chef-solo on each instance to avoid having to connect ephemeral nodes to their Chef server.
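As a rough sketch of how such a cookbook CI job could itself be kept as code, here is a hedged example using the Job DSL plugin covered earlier on this page; the repository URL, slave label and lint/test commands are placeholders for whatever toolchain a team actually uses:

```groovy
// Hypothetical cookbook CI job: runs on the Chef-capable slaves and performs
// static analysis followed by a test-kitchen run (kitchen-ec2 driver configured
// in the cookbook's .kitchen.yml). All names and commands are illustrative.
job {
    name 'my-app-cookbook-ci'
    label('chef')                       // slaves with the required gems and Chef credentials
    scm {
        git('https://github.com/example/my-app-cookbook.git')
    }
    triggers {
        scm('H/5 * * * *')              // pick up newly merged infrastructure code
    }
    steps {
        shell('foodcritic .')           // placeholder lint/static-analysis step
        shell('kitchen test')           // integration test on a throwaway EC2 instance
    }
}
```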

Cookbook Release Jobs
The Cookbook release jobs were conversely triggered manually. They ran the same tests as the CI jobs, but would upload new Cookbooks to the Chef server.

Application Deployment with Chef
From a workstation, code would be pushed to the Chef repo on GitHub. This would then trigger a separate Jenkins master dedicated to deployments. This deployment master would then pull the relevant data bags and environments from the Chef server. The deployment slaves kept the SSH keys for the deployment nodes, along with the required gems and Chef with credentials.

Stine then explained the two deployment job types for each application:

1. DEV deploy for development
2. Non-DEV deploy for operations

Non-DEV jobs took an environment job parameter to define where the application would be deployed, while both types took application group version numbers. These deployment jobs would edit application data bags and application environment files before uploading them to the Chef server, find all nodes in the specified environment with the deploying app's recipes, run the Chef client on each node and send an email notification with the result of the deployment.


Click here for Part 1.


Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (A Meet the Bees blog post about Tracy is coming soon!) For now, follow her on Twitter.

Multi-Stage CI with Jenkins in an Embedded World

Thu, 07/31/2014 - 16:27
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steve Harris, SVP Products, CloudBees about a presentation given by Robert Martin of BMW at JUC Berlin.


Embedded systems development is an incredibly complex world. Robert (Robby) Martin of BMW spoke at JUC Berlin on the topic of Multi-Stage CI in an Embedded World (slides, video). Robby spent a lot of his career at Nokia prior to coming to the BMW Car IT team. While many of the embedded systems development and delivery principles are common between phones and cars, the complexity and supply chain issues for modern automobiles are much larger. For example, a modern BMW depends on over 100 million lines of code, much of which originates with external suppliers, each of whom have their own culture and QA processes. Robby used an example scenario throughout his presentation, where a development team consisting of 3 developers and a QA person produce a software component, which is then integrated with other components locally and rolled up for delivery as part of a global integration which must be installed and run as part of the overall product. 



The magnifying effect of an error at an early stage being propagated and discovered at a later stage becomes obvious. Its impact is most clearly felt in the end-to-end "hang time" needed to deliver a one-line change into a production product. Measuring the hang-time automatically and working to speed it up continuously is one of his key recommendations. Fast feedback and turnaround in the event of errors, and minimizing the number of commits within a change-triggered CI stage, is critical. Robby also clarified the difference and importance of using a proper change-triggered approach for CI, as opposed to nightly integration.



Robby described the multi-stage CI approach they're using, which is divided into four stages:
  1. DEV-CI - Single developer, max 5 minutes
  2. TEAM-CI - Single SW component, max 30 minutes
  3. VERTICAL-CI - Multiple SW components, max 30 minutes (e.g., camera system, nav system)
  4. SYSTEM-CI - System level, max 30 minutes (e.g., the car)
The first stage is triggered by a developer commit, and each subsequent stage is automatically triggered by the appropriate overall promotion criteria being met within the previous CI stage. Note how the duration, while minimal for developers, is still held to 30 minutes even at the later stages. Thus, feedback loops to the responsible team or developer are kept very short, even up to the product release at the end. This approach also encourages people to write tests, because it's dead obvious to them that better testing gets their changes to production more quickly, both individually and as a team, and lowers their pain.

One problem confronting embedded systems developers is limited access to real hardware (and it is also a problem for mobile development, particularly in the Android world). Robby recommended using a hardware "farm" consisting of real and emulated hardware test setups, managed by multiple Jenkins masters. He also noted how CloudBees' Jenkins Operations Center would help make management of this type of setup simpler. In their setup, the DEV-CI stage does not actually test with hardware at all, and depending on availability and specifics, even the TEAM-CI stage may be taken up into VERTICAL-CI without actual hardware-based testing.

Robby's recommendations are worthwhile noting:


  • Set up your integration chain by product, not by organizational structure
  • Measure the end-to-end "hang time" automatically, and continuously improve it (also key for management to understand the value of CI/CD)
  • Block problems at the source, but always as early as possible in the delivery process
  • After a developer commits, everything should be completely automated, including reports, metrics, release notes, etc
  • Make sure the hardware prototype requirements for proper CI are committed to by management as part of the overall program
  • Treat external suppliers like internal suppliers, as hard as that might be to make happen
  • Follow Martin Fowler's 10 practices of CI, and remember that "Commit to mainline daily" means the product - the car at BMW
Finally, it was fun to see how excited Robby was about the workflow features being introduced in Jenkins. If you watch his Berlin presentation and Jesse's workflow presentation from Boston JUC, you can really see why Jenkins CI workflow will be a big step forward for continuous delivery in complex environments and organizations.

-- Steven G. Harris
www.cloudbees.com


Steven Harris is senior vice president of products at CloudBees. Follow Steve on Twitter.

Continuous Delivery: Deliver Software Faster and with Lower Risk

Wed, 07/30/2014 - 20:30
Continuous Delivery is a methodology that allows you to deliver software faster and with lower risk. Continuous delivery is an extension of continuous integration - a development practice that has permeated organizations utilizing agile development practices.

Recently DZone conducted a survey of 500+ IT professionals to find out what they are doing regarding continuous delivery adoption and CloudBees was one of the research sponsors. We have summarized the DZone findings in an infographic.

Find out:
  • Most eye-opening statistic: The percentage of people that think they are following continuous delivery practices versus the percentage of people that actually are, according to the definition of continuous delivery
  • Who most typically provides production support: development, operations or DevOps
  • Which team is responsible for actual code deployment
  • How pervasive version control is for tracking IT configuration
  • The length of time it takes organizations from code commit to production deployment
  • Barriers to adopting continuous delivery (hint: they aren't technical ones)

View the infographic and learn about the current state of continuous delivery.
For more information:
Get the CloudBees whitepaper: The Business Value of Continuous Delivery.



Continuous Integration for node.js with Jenkins

Tue, 07/29/2014 - 17:09
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steven Christou, technical support engineer, CloudBees about a presentation given by Baruch Sadogursky, JFrog, at JUC Boston.



Fully automating a continuous integration system for Node.js - from development, to testing, to deployment on production servers - can be a challenge. Most Node.js developers are familiar with npm, which I learned does not stand for "Node Package Manager" but is rather a recursive abbreviation for "npm is not an acronym." In other words, it manages packages, each containing a program described by its package.json file. For a Java developer, an npm package is similar to a JAR, and the npm registry is similar to Maven Central. What would happen if the main npm registry, https://www.npmjs.org/, went down? At that moment Node.js developers would be stuck waiting for npmjs.org to return to normal status - or they could run their own private registry.



Baruch Sadogursky, JFrog
That sounds easier said than done, though. According to http://isaacs.iriscouch.com/registry, the current size of the registry is 450.378 gigabytes of binaries. Out of all of those 450 gigabytes of information, how many of the packages are going to be used by your developers?

Enter Artifactory: a repository manager that bridges the gap between developers and the npm registry, npmjs.org. Artifactory acts as a proxy between your developers and Jenkins instances on one side and the outside world on the other. When I (a developer) need a new package and declare a new dependency in my code, Artifactory pulls the necessary package from npmjs.org and makes it available. After the code has been committed with the new dependency, Jenkins can then fetch the same package from Artifactory. In this scenario, if npmjs.org ever goes down, testing in Jenkins will never halt, because Jenkins can still obtain the necessary dependencies from the Artifactory server.



Building code against an Artifactory server also eliminates the need for users to check out and build their dependencies themselves, which would be time consuming. Dependencies could also be in an unstable state if I build them in my environment and it differs from that of other users or the Jenkins server. Another advantage is that Jenkins can record information about the packages that were used during the build.

Overall, using a repository manager like Artifactory as a proxy between your Jenkins instance and the npm registry (npmjs.org) is beneficial in order to maintain true continuous integration: your developers and Jenkins instances will not be impacted by any downtime if the npm registry is down or unavailable.

Steven Christou
Technical Support Engineer
CloudBees

Steven works on providing bug fixes to CloudBees customers for Jenkins, Jenkins plugins and Jenkins enterprise plugins. He has a great passion for software development and extensive experience with Hudson and Jenkins. Follow him on Twitter.
Categories: Companies

Continuous Delivery and Workflow

Fri, 07/25/2014 - 15:18
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Jesse Glick, software developer, CloudBees, about a presentation given by himself and Kohsuke Kawaguchi, as well as a session given by Alex Manly, MidVision. Both sessions are from JUC Boston.

At the Jenkins User Conference in Boston this year, Kohsuke and I gave a session Workflow in Jenkins where for the first time we spoke to a general audience about the project we started to add a new job type to Jenkins that can manage complex and long-running processes. If you have not heard about Workflow yet, take a look at its project page, which gives background and also links to our slides. I was thrilled to see the level of interest and to hear confirmation that we picked the right problem to solve.

Alex Manly, MidVision
A later session by Alex Manly of MidVision (Stairway to Heaven: 10 Best Practices for Enterprise Continuous Delivery with Jenkins) focused on the theory and practice of CD, such as the advantages of pull (or “convergent”) deployment at large scale when using homogeneous servers, as opposed to “pushing” new versions immediately after they are built, and deployment scenarios, especially for WebSphere. Since I am only a spectator when it comes to dealing with industrial-scale deployments like that, while listening to this talk I thought about how Workflow would help smooth out some of the nitty-gritty of getting such practices set up on Jenkins.

One thing Alex emphasized was the importance of evaluating the “cost of non-automation” when setting up CD: you should “take the big wins first,” meaning that steps which are run only once in a blue moon, or are just really hard to get a machine to do exactly right all the time, can be left for humans until there is a pressing need to change that. This is why we treated the human input step as a crucial feature for Workflow: you need to leave a space for a qualified person to at least approve what Jenkins is doing, and maybe give it some information too. With a background in regulatory compliance, Alex did remind the audience that these approvals need to be audited, so I have made a note to fix the input step to keep an audit trail recording the authorized user.

The most important practice, though, seemed to be “Build Once, Deploy Anywhere”: you should ensure the integrity of a build package destined for deployment, ideally being a single compressed file with a known checksum (“Fingerprint” to Jenkins), matched to an SCM tag, with the SCM commit ID in its manifest. Honoring this constraint means that you are always deploying exactly the same file, and you can always trace a problem in production back to the revision of the software it is running. There should also be a Definitive Software Library such as Nexus where this file is stored and from which it is deployed. One important advantage of Workflow is that you can choose to keep metadata like commit IDs, checksums, timestamps, and so on as local variables; as well as being able to keep a workspace (i.e., slave directory) locked and available for either the entire duration of the flow, or only some parts of it. This means that it is easy for your flow to track the SCM commit ID long enough to bake it into a manifest, while keeping a big workspace open on a slow slave with the SCM checkout, then checksum the final build product and deploy to Nexus, releasing the workspace; and then acquire a fast slave with a smaller workspace to host some functional tests, with the Nexus download URL for the artifact still easily accessible; and finally switch to a weak slave to schedule deployment and wait. Whereas a setup using traditional job chaining would require you to carefully pass around artifacts, workspace copies, and variables (parameters) from one job to the next with a lot of glue code to reconstruct information an earlier step already had, in a Workflow everything can remain in scope as long as you need it.

The biggest thing Alex treated as important that is not really available in Workflow today is matrix combinations (for testing, or in some cases also for building): determining the effects of different operating systems/architectures, databases, JDKs or other frameworks, browsers, and so on. Jenkins matrix projects also offer “touchstone builds” that let you first verify that a canonical combination looks OK before spending time and money on the exotic ones. Certainly you can run whatever matrix combinations you like from a Workflow: just write some nested for-loops, each grabbing a slave if it needs one, maybe using the parallel construct to run several at once. But there is not yet any way of reporting the results in a pretty table; until then, the whole flow run is essentially pass/fail. And of course you would like to track historical behavior, so you can see that Windows Java 6 tests started failing with a commit done a week ago, while tests on Firefox just started failing due to an unrelated commit. So matrix reporting is a feature we need to include in our plans.

All in all, it was a fun day and I am looking forward to seeing what people are continuously delivering at next year’s conference!


Jesse Glick
Developer Extraordinaire
CloudBees

Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse on the Meet the Bees blog post about him.


Categories: Companies

Automating CD Pipelines with Jenkins - Part 1: Vagrant, Fabric and Selenium

Tue, 07/22/2014 - 20:10
This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a session given by Hoi Tsang, DealerTrack, at JUC Boston.

There’s a golden standard for the software development lifecycle that it seems most every shop aspires to, yet seemingly few have already achieved - a complete continuous delivery pipeline with Jenkins that automatically pulls from an SCM repository on each commit, then compiles the code, packages the app and runs all unit/acceptance/static analysis tests in parallel.

Integration testing on the app then runs in mini-stacks provided by Vagrant, and if the build passes all testing, Jenkins stores the binary in a repository as a release candidate until a candidate passes QA. Jenkins then plucks the release from the repository to deploy it to production servers, which are created on demand by a provisioning and configuration management tool like Chef.

The nitty gritty details of the actual steps may vary from shop to shop, but based on my interactions with potential CloudBees customers and the talks at the 2014 Boston JUC, this pipeline seems to be what many high-level execs aspire to see their organization achieving in the next few years.

Jenkins + Vagrant, Fabric and Selenium
Hoi Tsang of DealerTrack gave a wonderful overview of how DealerTrack accomplished such a pipeline in his talk: “Distributed Scrum Development w/ Jenkins, Vagrant, Fabric and Selenium.”

As Tsang explained, integration can be a problem, and it's an unfortunately expensive problem to fix. He explained that it was best to think of integration as a multiplication problem, where

practice x precision x discipline = perfection
When it comes to Scrum, which Tsang likened to "driving really fast on a curvy road," nearly all of the attendees at Tsang's JUC session practiced it, and almost all confirmed that they do test-driven development.

In Tsang’s case, DealerTrack was also a test-driven development shop and had the goals of writing more meaningful use cases and defining meaningful test data.

To accomplish this, DealerTrack set up Jenkins and installed a few plugins: the Build Pipeline plugin, Cobertura and Violations, to name a few. They also created build and deployment jobs - the builds were triggered by code commits and schedules, and the builds triggered tests whose pass/fail rules were defined by each DealerTrack team. Their particular rules (one way to enforce the coverage rule is sketched after the list) were:
  • All unit tests passed
  • Code coverage > 90%
  • Code standard > 90%
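As a rough illustration of how a gate like the coverage rule above can be enforced, here is a minimal Python sketch that reads the line rate from a Cobertura-style coverage.xml report (the format the Cobertura plugin consumes) and fails the build when it drops below a threshold. The file name and the 90% cutoff are assumptions for the example, not details from the talk:

    # check_coverage.py - hypothetical gate in the spirit of DealerTrack's "code coverage > 90%" rule
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 90.0                                    # assumed cutoff, matching the rule above
    root = ET.parse("coverage.xml").getroot()           # Cobertura report produced by the test run
    line_rate = float(root.get("line-rate", "0")) * 100
    if line_rate < THRESHOLD:
        sys.exit("Coverage %.1f%% is below the %.0f%% gate" % (line_rate, THRESHOLD))
    print("Coverage %.1f%% meets the gate" % line_rate)

A shell build step running a script like this after the tests would mark the job failed whenever the gate is not met.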
DealerTrack had their Jenkins master control a Selenium hub, which consisted of a grid of dedicated VMs/boxes registered to the Selenium hub. Test cases would get distributed among the grid, and the results would be reported back to the associated Jenkins jobs.
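To make the hub setup concrete, here is a minimal sketch of a Python test talking to a Selenium hub through a Remote WebDriver; the hub and application URLs are placeholders, not DealerTrack's actual configuration:

    # Hypothetical test run against a Selenium grid; the hub forwards the session to a registered node.
    from selenium import webdriver

    driver = webdriver.Remote(
        command_executor="http://selenium-hub.example.com:4444/wd/hub",   # placeholder hub URL
        desired_capabilities=webdriver.DesiredCapabilities.FIREFOX,
    )
    try:
        driver.get("http://app-under-test.example.com/login")             # placeholder application URL
        assert "Login" in driver.title
    finally:
        driver.quit()                                                      # release the node back to the grid

When such tests run as Jenkins jobs, the pass/fail results flow back to the associated jobs as described above.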

The builds would also be subject to an automated integration build, which relied on Vagrant to define mini-stacks for the integration tests to run in: checking out source code into a folder shared with a virtual machine, launching the VM, preparing and running the tests, then cleaning up the test space. Although this approach to integration testing takes longer, Tsang argued that it provides a more realistic testing environment.
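The pattern Tsang described - bring the VM up, run the tests, always clean up - can be sketched in a small Python wrapper; the test command and paths here are assumptions, not DealerTrack's actual scripts:

    # run_integration.py - hypothetical wrapper around a Vagrant-managed mini-stack
    import subprocess

    subprocess.check_call(["vagrant", "up"])                  # boot the VM defined by the project's Vagrantfile
    try:
        subprocess.check_call(["python", "-m", "pytest", "tests/integration"])  # run tests against the mini-stack
    finally:
        subprocess.check_call(["vagrant", "destroy", "-f"])   # always clean up the test space, even on failure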

If the build passed, then its artifact would be uploaded to an internally-hosted repository and reports on the code standards + code coverage were published. This would also trigger a documentation generation job.

According to Tsang, DealerTrack also managed to set up an automated deployment flow, where Jenkins would pick up a build from the internal repository, tunnel into the development server, then drop off the artifact and deploy the build. They accomplished this using Python Fabric, a CLI tool for streamlining the use of SSH for application deployment and system administration tasks.
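As a rough illustration of that flow, here is a minimal Fabric 1.x sketch; the host name, paths and artifact name are placeholders rather than DealerTrack's real configuration:

    # fabfile.py - hypothetical deployment task, run as "fab deploy"
    from fabric.api import env, put, run

    env.hosts = ["deploy@dev-server.example.com"]             # development server reached over SSH

    def deploy(artifact="myapp-1.0.0.tar.gz"):
        """Drop off the build artifact and unpack it on the development server."""
        release_dir = "/opt/myapp/releases/%s" % artifact.replace(".tar.gz", "")
        put(artifact, "/tmp/%s" % artifact)                   # copy the artifact fetched from the internal repository
        run("mkdir -p %s" % release_dir)
        run("tar -xzf /tmp/%s -C %s" % (artifact, release_dir))
        run("ln -sfn %s /opt/myapp/current" % release_dir)    # switch the live symlink to the new release

A Jenkins job could invoke this with a shell step such as "fab deploy:artifact=myapp-1.0.1.tar.gz" after downloading the artifact from the repository.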

Tsang explained that DealerTrack had a central Jenkins master to maintain the build pipeline, but split the work between each team’s assigned slave and assigned testing server. Dedicated slaves worked on the more important jobs, which allowed branch merging to be accomplished 30% faster.
Stay tuned for Part 2!


Tracy Kennedy
Solutions Architect
CloudBees
As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (Meet the Bees blog post coming soon!) For now, follow her on Twitter.
Categories: Companies

To Successfully Adopt Continuous Delivery, Organizations Need To Change

Fri, 07/18/2014 - 16:56
In a recent Forrester Research report, Modern Application Delivery Demands a Modern Organization, Kurt Bittner, John Rymer, Chris Hines and Diego Lo Giudice review the differences between the 'modern organization' of yesterday and today, and the shifts that need to be made to keep up with not only customer demand, but also the success of more agile competitors.
Bottlenecks
When you look at the structure of a successful organization, it is rare to find silos of any sort. The reason is that when you shift the emphasis from individual performance optimization to a team-based structure focused on optimizing delivery, you get faster output. Why?
When an individual is focused on their own task list, priorities slip for other projects, which get held up. This ultimately creates a bottleneck of work. What is the natural thing to do when you are waiting for someone else to complete the next step of a project, or waiting to be told to proceed by a superior? You start something else. Because you are now working on some new project, a new bottleneck forms when your attention is needed again. You are no longer available and someone is now waiting on you. They start a new project while they wait, and so on and so forth.
The Culture Shift
We are members of a culture of multi-tasking – we must always be busy. This is not always good. In the modern culture, resources are dedicated and at the ready to move projects along, even if they are underutilized. So now you have resources that are not moving on to new projects and are ready when they are needed.
Now going back to the silo vs. team approach, we start to see less specialization and more focus on distributing knowledge. So now you have a team that can be the next in line instead of one person. It’s now about cross-functional teams vs. superstars.
The focus also needs to change. Our culture wants us to win the Employee of the Month award and achieve personal objectives, but what if we focused less on how much we could get out of top performers and more on how much output we could deliver to our customers?
This would mean another huge cultural shift and this time it’s about the management team. Management must be agile and allow for teams to make decisions quickly without having to cut through yards of red tape to get something across the finish line. It’s more about holding your team accountable vs. tracking and monitoring their every move.
The report concludes by stating: “While process and automation are essential enablers of these better results, organization culture, structure, and management approach are the true enablers of better business results.”
Continuous Delivery can be a tremendous game changer for your organization, but the organization needs to be modernized in a way that allows that change to succeed.



Christina Pappas
Marketing Funnel Manager
CloudBees

Follow her on Twitter
Categories: Companies

The Butler and the Snake: Continuous Integration for Python by Timo Stollenwerk, Plone Foundation

Tue, 07/15/2014 - 17:41
This is the first in a series of blog posts in which various CloudBees technical experts will summarize presentations from the Jenkins User Conferences. This first post is written by Félix Belzunce, solutions architect, CloudBees.

At the Jenkins User Conference/Europe, held in Berlin on June 25, Timo Stollenwerk of Plone Foundation presented how the Plone community uses Jenkins to build, test and deliver Python-based software projects. Timo went through some of the CI rules and talked about the main tools you should take a look at for implementing Python CI.

For open source projects implementing CI, the most important thing besides version control and automated builds is the agreement of the team. In small development teams, that is an easy task most of the time, but not in big teams or in open source projects where you need to follow some rules.

When implementing CI, it is always a good practice to build per commit and then notify the responsible team as to the outcome. This makes the integration process easier and avoids "Integration Hell." The Jenkins dashboard and the Email-ext plugin could help accomplish this. Also, the Role-based Access Control Plugin could be useful to set-up roles for your organization, so your developers can access the Jenkins dashboard while being sure that nobody can change their job configuration.


Java developers usually use Gradle, Maven or Ant as automated build tools, but in Python there are different tools you should consider, like Buildout, PIP, Tox and Shining Panda. Regarding testing and acceptance testing, I have listed below some of the tools that Timo mentioned.


Due to Python's dynamic nature, static analysis has become somewhat essential. If you plan to implement this in your organization, I recommend reading this article, which compares different tools for Python static analysis, some of which Timo also mentioned.

Regarding scalability, when you are running long builds you could start facing some issues. A good practice here is not to run any builds on your master and to let your slaves do the job.

If you have several jobs involved in launching a daemon process, you should ensure that each job uses unique TCP port numbers. If you don't do this, two jobs running on the same machine may use the same port and end up interfering with one another. In this case, the Port Allocator Plugin can help you out.
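If the plugin is not an option, a test fixture can also ask the operating system for a free ephemeral port itself. This is a generic technique rather than something from Timo's talk; a minimal Python sketch:

    # Ask the OS for an unused TCP port so parallel builds on one slave do not collide.
    import socket

    def free_port():
        s = socket.socket()
        s.bind(("", 0))                  # port 0 tells the kernel to pick a free ephemeral port
        port = s.getsockname()[1]
        s.close()
        return port

    daemon_port = free_port()            # pass this to the daemon process the job launches

Note that a small race window remains between closing the socket and starting the daemon, which is exactly the sort of coordination the Port Allocator Plugin handles for you.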

The CloudBees Long Running Build Plugin and the NIO SSH Slaves Plugin could also be helpful if you want to restart a build (in case Jenkins crashes) without starting from scratch, or if you want to increase the number of executors attached to your Jenkins master while maintaining the same performance.

For the release process, Timo explained that the Jenkins Pipeline plugin could be combined with some Python-specific tools like zest.releaser or devpi.

Get Timo's slides and (when videos are posted) watch the video of his JUC Europe session.



Félix Belzunce
Solutions Architect
CloudBees

Félix Belzunce is a solutions architect for CloudBees based in Europe. He focuses on continuous delivery. Read more about him on his Meet the Bees blog post and follow him on Twitter.
Categories: Companies

CloudBees Announces Public Sector Partnership with DLT Solutions

Thu, 07/10/2014 - 14:50

Continuous Delivery is becoming a main initiative across all vertical industries in commercial and private markets. The ability for IT teams to deliver quality software on an hourly/daily/weekly basis is the new standard.

The public sector has the same need to accelerate application delivery for important governmental initiatives. To make access to the CloudBees Continuous Delivery Platform easier for the public sector, CloudBees and DLT Solutions have formally partnered to help provide Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees to federal, state and local governmental entities.

With Jenkins Enterprise by CloudBees now offered by DLT Solutions, public sector agencies have access to our 23 proprietary plugins (along with 900+ OSS plugins) and will receive professional support for their Jenkins continuous integration/continuous delivery implementation.

Some of our most popular plugins can be utilized to:
  • Eliminate downtime with the High Availability plugin, which automatically spins up a secondary master when the primary master fails
  • Push security features and rights onto downstream groups, teams and users with Role-based Access Control
  • Auto-scale slave machines when you have builds starved for resources by “renting” unused VMware vCenter virtual machines with the VMware vCenter Auto-Scaling plugin
Try a free evaluation of Jenkins Enterprise by CloudBees or read more about the plugins provided with it.

For departments using larger installations of Jenkins, CloudBees and DLT Solutions propose Jenkins Operations Center by CloudBees to:
  • Access any Jenkins master in the enterprise. Easily manage and navigate between masters (optionally with SSO)
  • Add masters to scale Jenkins horizontally, instead of adding executors to a single master. Ensure no single point of failure
  • Push security configurations to downstream masters, ensuring compliance
  • Use the Update Center plugin to automatically ensure approved plugin versions are used across all masters
Try a free evaluation of Jenkins Operations Center by CloudBees, or watch a video about Jenkins Operations Center by CloudBees.

The CloudBees offerings, combined with DLT Solutions’ 20+ years of public sector “know-how”, make it easier to support and optimize Jenkins in the civilian, federal and SLED branches of government.

For more information about the newly established CloudBees and DLT Solutions partnership read the news release.

We are proud to partner with our friends at DLT Solutions to bring continuous delivery to governmental organizations.

Zackary Mahon
Business Development Manager
CloudBees

Categories: Companies