
CloudBees' Blog - Continuous Integration in the Cloud

Getting Started with the Blue Ocean Dashboard

Wed, 04/26/2017 - 21:37

Blue Ocean is a new user experience for Jenkins, and version 1.0 is now live! Blue Ocean makes Jenkins, and continuous delivery, approachable to all team members. In my previous post, I used the Blue Ocean Activity View to track the state of branches and Pull Requests in one project. In this video, I’ll use the Blue Ocean Dashboard to get a personalized view of the areas of my project that are most important to me, and also to monitor multiple projects.

Please Enjoy!

 

 

Blog Categories: Jenkins
Categories: Companies

How I Do Continuous Delivery with My Personal Jenkins Server

Fri, 04/21/2017 - 21:38

I suspect I am not alone in having my own personal Jenkins server. In fact I have a cloud-hosted box on which I run quite a few different services:

  • Jenkins
  • Jenkins build agents
  • Nexus
  • HAProxy with Let’s Encrypt
  • Some personal websites
  • etc.

I thought it might be interesting to explain how I have set this up to do continuous delivery for my personal websites.

When I first set this up, I ran everything directly as services in the OS. But of course the different services had different dependency requirements and maintaining it all was just a pain. The solution - for me - was obvious:

Docker All The Things!

So the base OS runs Docker and all my services are Docker containers.

  • Each Docker container is on an individual locked down virtual network.
  • I forward ports 80 and 443 to the HAProxy Docker container using a firewall rule on the main server.
  • HAProxy routes the requests to the individual Docker containers.
  • Each Docker container is managed by systemd using a custom template systemd script that I developed (a minimal sketch follows below).
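
I won’t reproduce the full template here, but a minimal sketch of the kind of unit such a template expands to (the service and network names are illustrative, not my actual configuration) looks something like this:

# docker-service-foo.service - illustrative unit for one container
[Unit]
Description=Docker container for service-foo
Requires=docker.service
After=docker.service

[Service]
Restart=always
# remove any stale container before starting a fresh one
ExecStartPre=-/usr/bin/docker rm -f service-foo
ExecStart=/usr/bin/docker run --name service-foo --net internal-foo service-foo:latest
ExecStop=/usr/bin/docker stop service-foo

[Install]
WantedBy=multi-user.target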

I am sure that there are more fancy setups I could configure, but this is a cloud machine that I am paying for myself after tax!

With this setup, however, I have hit my sweet spot:

  • I can upgrade the OS and restart, and all my applications and services stop and start correctly.
  • Because each application and service is running in a container, they are isolated from dependency changes in the OS.
  • I have a very tight set of firewall rules and the applications themselves are sandboxed.

You may have noticed, however, that Jenkins and my build agents are running as containers. And it probably has not escaped your notice that the applications and services that I use are all Docker containers. So how do I use Jenkins to manage all this from within its sandboxed container?

Put a hole in the sandbox

So the first thing I did was put a hole in the sandbox. There is one build agent that has the Docker socket bind-mounted:

docker run --name jenkins-agent-with-docker -v /var/run/docker.sock:/var/run/docker.sock --rm --net internal-jenkins jenkins-agent

This agent is not permanently online; rather, it has the availability strategy of Take this agent online when in demand, and offline when idle. It’s not perfect security. If I were using CloudBees Jenkins Enterprise, I could use the controlled agents functionality to ensure that only those jobs that I explicitly approve can have access to this special build agent, which would give me an extra layer of protection. But I am the only user with permissions on the Jenkins instance, it only watches the GitHub repositories that I have explicitly configured, and I am not overly concerned!

Use pipeline multi-branch to develop changes

I love multi-branch projects in Jenkins. Of course since I invented them I would say that!

The basic pipeline looks pretty much like this:

pipeline {
    agent {
        label 'with-docker'
    }
    stages {
        stage('build') {
            steps {
                withMaven(jdk: 'java-8', maven: 'maven-3', mavenLocalRepo: '.repository') {
                    sh "mvn clean verify ${env.BRANCH_NAME=='master'?'':'-Ddocker.skip.tag=true'}"
                }
            }
        }
        stage('deploy') {
            when {
                branch 'master'
            }
            steps {
                // deploy steps go here
            }
        }
    }
    post {
        success {
            emailext (
                to: 'my email address goes here',
                subject: "[Jenkins] '${env.JOB_NAME}#${env.BUILD_NUMBER}' was successful!",
                mimeType: 'text/html',
                body: """<p>See <a href='${env.BUILD_URL}'>${env.JOB_NAME}#${env.BUILD_NUMBER}</a> for more details.</p>""",
                recipientProviders: [[$class: 'DevelopersRecipientProvider']]
            )
        }
        failure {
            emailext (
                to: 'my email address goes here',
                subject: "[Jenkins] '${env.JOB_NAME}#${env.BUILD_NUMBER}' failed!",
                mimeType: 'text/html',
                body: """<p>See <a href='${env.BUILD_URL}/console'>${env.JOB_NAME}#${env.BUILD_NUMBER}</a> to find out why.</p>""",
                recipientProviders: [[$class: 'DevelopersRecipientProvider']],
                attachLog: true
            )
        }
    }
}

As an Apache Maven developer, I tend to use Maven for building Java web applications. There are one or two of my services where it was easier to use Make, but the majority of my applications are using Maven to build. I use Fabric8’s Docker Maven Plugin to build the Docker images, so the important thing is to ensure that the tags are not updated unless we are on the master branch. (This is because my systemd scripts always just run the latest tag.)

So I can develop my changes using a PR and have Jenkins build and test the application, giving my commits the seal of approval - and I get email notifications too. If I want to verify the application locally, I can just build and run it locally.

When I am ready to deploy, I click the Merge button on GitHub.

  • The changes are applied to master.
  • GitHub sends the webhook notification to Jenkins.
  • Jenkins starts the build.
  • Jenkins launches the agent with Docker.
  • The Docker image gets built and tagged.
  • Because we are on the master branch we execute the deploy stage.

The deployment secret sauce

So earlier I said that the services were managed by systemd. In fact I have a cron task running as root that - as well as tidying up any unused Docker containers - will kill any Docker containers that have been lying around for too long and are not in the “managed” list. This is important as otherwise a build could fail at just the wrong point in time and leave a container running. If that happens too often the system could become overloaded.

One consequence of this, from a deployment perspective, is that even the build agent with the Docker socket mounted cannot deploy the tagged image into production. If the container were restarted using Docker by anyone other than root, the container ID would change and it would get killed by the cron task. So we need root to restart the service; however, we don’t want to give the build agent root access.

What I have done instead is set up a special SSH user, called reload. In reload’s ~/.ssh/authorized_keys file I have given each application and service its own key, and each key will just run a specific command:

command="touch service-foo" ssh-rsa AAAAB3NzaC1yc2.....kONZ reload-service-foo
command="touch service-bar" ssh-rsa AAAAB3NzaC1yc2.....eI9p reload-service-bar

The reload user cannot log in to a shell; the only thing you can do is log in with one of the service-specific keys, which will touch a file and exit. The reload user is also configured to only accept connections from the Jenkins build agent.
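
For illustration, that lock-down can be expressed with standard authorized_keys options; in this sketch the from= address stands in for the Jenkins build agent’s address (it is not my real network):

# only accept this key from the build agent, with no TTY and no forwarding
from="172.16.0.5",command="touch service-foo",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3NzaC1yc2.....kONZ reload-service-foo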

The cron task that runs as root now just looks for the service file and, if present, deletes it and restarts the corresponding service, e.g. something like:

for s in foo bar
do
    if [ -f /home/reload/service-$s ]
    then
        rm -f /home/reload/service-$s
        systemctl restart docker-service-$s.service
    fi
done
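
To complete the picture, an /etc/cron.d entry along these lines (the script path and frequency are illustrative) would run that loop as root:

# m h dom mon dow user command
* * * * * root /usr/local/sbin/reload-services.sh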

So now we just need the deploy steps… which is actually quite simple:

stage('deploy') {
    when {
        branch 'master'
    }
    steps {
        sshagent(['reload']) {
            sh 'ssh -T reload@$(ip r | awk \'/default/{print $3}\')'
        }
    }
}

I use the sshagent step to make the SSH key available. For each service I use the folder-scoped credentials store to hold the SSH key, and each time I just use the ID reload.

Thus each application only has access to its own SSH key and only from the master branch.

The end result

Here’s a quick unedited real-time demonstration where I updated the copyright year from 2012 to 2017 on one of my personal domains:

Blog Categories: Jenkins
Categories: Companies

Sailing the Jenkins Blue Ocean

Thu, 04/20/2017 - 18:24

Around July last year, with the introduction of some new components in the systems we build at mPort, we also introduced some new continuous integration/continuous delivery (CI/CD) tooling. I’m experienced with the tools built by previous companies I’ve worked for (particularly ThoughtWorks and Atlassian), so GoCD and Bamboo were good candidates. Jenkins was not (at the time). This article does not dwell on the specific pros and cons of those products in the specific mPort context. Suffice it to say, they are all fine products and the evaluation criteria for mPort are not the same for other teams.

For the past year or so, the team at mPort have been focused on constantly improving the rate of delivering value with high confidence. This has meant quite a few changes and experimentation with how we work. The first step was taken with a GoCD server and, once the security-oriented configuration for the infrastructure-as-code was worked out (with some gaffer tape and string), it was working well enough.

A few months later, though, when the product manager for Blue Ocean asked me what I thought about their early beta, I took it as an opportunity to test it out for more than just a hobby project (as I had been since May) and see how it helped a team get better at DevOps, continuous delivery and improving visibility of their activity throughout the organization. I expected this to be a fruitful experiment as I trusted the Blue Ocean core team from CloudBees based on having worked with one of them before (at Canva and Atlassian) and many sensible technical conversations over many years with the others.

Within two days, the relevant builds were migrated from GoCD and we ran both pipelines simultaneously for a few days (apart from the deploy steps) to verify. We then swapped to Blue Ocean for a deploy through each environment. At that point, we disabled the old pipeline and a few weeks later it was removed.

The first appealing thing about the Blue Ocean project is the thing you notice first — the UI. Jenkins is long in the tooth and its UI is, to put it mildly, ugly and awkward to work with. But Blue Ocean goes beyond just the fonts, colors and layout. This is not a pig with lipstick. The entire UX is transformed and the solid, fully-featured, ecosystem-rich platform that is Jenkins CI is now becoming a pleasure to work with.

Blue Ocean has been built with a high level of user empathy. They know how convoluted, confusing, time consuming and unappealing it can be to work with build tools. Even things like the console log being split into regions for steps, and focusing on the failed steps, are eye-rollingly obvious and yet desperately welcome additions.

More importantly, Blue Ocean alleviates the need for the often more experienced developers and CI experts to be the only ones to configure projects, stages, steps, etc. Having all of the members of the development team understand the CI/CD pipeline is important in a high-performance team, and Blue Ocean has allowed anyone in my team to build and maintain those pipelines.

Furthermore, in a transparent, collaborative organization, it means anyone can visualize what is happening in that pipeline at any time. It is a pretty special thing to see the head of marketing and the product owner looking at an office dashboard, following a particular feature from git push to first user analytics.

Because I prefer trunk-based development and feel it is a necessary approach to successful continuous delivery, I eschew long-lived branches, be they “feature” or otherwise. However, in a distributed team especially, short-lived shared branches are often unavoidable and can make for a useful means of temporarily working with discrete changes to the software in isolation (and that isolation is the two-edged sword).

Blue Ocean and Jenkins are developed by a globally distributed team. They know first-hand what it’s like to have a high throughput of small-batch feature branches and pull requests vying for attention. Other CI servers have this capability too, yet Blue Ocean makes it stunningly simple for people who don’t intuitively understand branching to see the results of candidate builds. It even takes the cognitive load away from those of us who do understand such things by making it blazingly obvious where merges need attention, without taking away the detail required to resolve the issue.

I mentioned infrastructure-as-code earlier. Given the CI/CD pipeline is a critical piece of infrastructure, it is vital that the meta-pipelines (the ones that build and configure the various layers of infrastructure for delivering the customer-facing production software) are also readily understood.

I’m not a mouse person. I prefer to define my software, integration, infrastructure and tooling in terms of declarative textual specifications. (An excellent satirical insight from one of my former coworkers using Tibco at a client was that all he needed to integrate millions of dollars of enterprise software was two mouses — one to drag, and one to drop.)

For example, why clicky-click in the AWS console when you can Terraform your cloud? Why clicky-click in a build server management console when you can have declarative source code define configuration, projects, stages, etc.? Such source-code based mechanisms are intentional, testable, versionable, comparable and more easily transposed to other tools. Nonetheless, this meta-infrastructure can be non-intuitive and people find it hard to know which turtle they are on (or which level of “Inception” they are in).

In a humble and adept “you’re welcome” to the DevOps community, Blue Ocean has made it vastly simpler for anyone in the team to build and visualize not just the software pipeline, but the pipeline that builds that pipeline. This is also made a lot simpler with advancements like Habitat and Docker, but it’s really important in a team with collective ownership to have those tools in the right context in the meta-pipeline.

Our Blue Ocean server includes a project to provision, deploy, and configure itself. It treats itself as another head of cattle, not a pet or a snowflake. As the person who is often the custodian of meta-infrastructure, it is a huge relief to have this elevated to the status of first-class software — just as important and vital as the user-facing software it is building.

With Blue Ocean, software delivery team members of all skill levels can participate in continuous delivery because we get a great visualization of our software pipelines throughout the entire process. We are now able to include the software delivery pipeline itself as infrastructure-as-code without compromising usability; that has vastly improved the confidence in and timeliness of our software deployments. Furthermore, Blue Ocean offers a superb user interface — well ahead of anything else available.

I am looking forward to expanding our use of Blue Ocean and the richer features that will be added to it, particularly as it takes on capabilities from tooling we have had to use in the past and we migrate those builds across.

This was an important experiment for mPort’s engineering team and our whole-company collaboration stance, and it has been a resounding success.

Josh Graham
CTO, mPort

Follow Josh on Twitter

This post was originally posted on medium.com.

Blog Categories: Jenkins
Categories: Companies

Getting Started with Blue Ocean's Activity View

Thu, 04/20/2017 - 18:13

Blue Ocean is a new user experience for Jenkins, and version 1.0 is now live! Blue Ocean makes Jenkins, and continuous delivery, approachable to all team members. In my previous post, I showed how easy it is to create and edit Declarative Pipelines using the Blue Ocean Visual Pipeline Editor. In this video, I’ll use the Blue Ocean Activity View to track the state of branches and Pull Requests in one project. Blue Ocean makes it so much easier to find the logs I need to triage failures.

Please Enjoy!  In my next video, I’ll switch from looking at a single project to monitoring multiple projects with the Blue Ocean Dashboard.

 

 

Blog Categories: Jenkins
Categories: Companies

Getting Started with Blue Ocean's Visual Pipeline Editor

Wed, 04/19/2017 - 02:43

Blue Ocean is a new user experience for Jenkins and version 1.0 is now live!

Blue Ocean makes Jenkins and continuous delivery approachable to all team members. In my previous post, I explained how to install Blue Ocean on your local Jenkins instance and switch to using Blue Ocean. As promised, here’s a screencast that picks up where that post left off. Starting from a clean Jenkins install, the video below will guide you through creating and running your first pipeline in Blue Ocean with the Visual Pipeline Editor.

Please Enjoy! In my next video, I’ll go over the Blue Ocean Pipeline Activity View.

 

 

Blog Categories: Jenkins
Categories: Companies

Getting Started with Blue Ocean

Fri, 04/14/2017 - 22:29
Welcome to Blue Ocean 1.0!

In case you’ve been heads down on other projects for the past 10 months, Blue Ocean is a new user experience for Jenkins, and version 1.0 is out! Blue Ocean makes Jenkins, and continuous delivery, approachable to all team members. I’ve been working with it for the past several months, and I can tell you it is amazing. I wish all the interactions with Jenkins were as easy as this:

 

It's time to create your first Pipeline!

10 minutes to Blue Ocean

Blue Ocean is simple to install and will work on basically any Jenkins 2 instance (version 2.7 or later). Even better, it runs side-by-side with the existing Jenkins web UI - you can switch back and forth between them whenever you like. There’s really no risk. If you have a Jenkins instance and a good network connection, in 10 minutes you could be using Blue Ocean.

  1. Log in to your Jenkins server
  2. Click Manage Jenkins in the sidebar then Manage Plugins
  3. Choose the Available tab and use the search bar to find Blue Ocean
  4. Click the checkbox in the Install column
  5. Click either Install without restart or Download now and install after restart

Installing Blue Ocean
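
If you prefer the command line, you can get the same result with the Jenkins CLI - a sketch, assuming the CLI is already set up and your user is allowed to install plugins:

# "blueocean" is the aggregator plugin that pulls in all the Blue Ocean pieces
java -jar jenkins-cli.jar -s https://your-jenkins.example.com/ install-plugin blueocean -restart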

After you install Blue Ocean, you can start using it by clicking on Open Blue Ocean in the top navigation bar of the Jenkins web UI, or you can navigate directly to Blue Ocean by adding /blue to your Jenkins URL, for example https://ci.jenkins.io/blue.

Opening Blue Ocean

If you have to go back to the “classic” Jenkins UI, there’s an “exit” icon located at the top of every page in Blue Ocean.

Returning to the "classic" web UI

Dive in!

That’s it! You now have a working Blue Ocean installation. Take a look around at your Pipelines and activity, or try creating a new Pipeline. I think you’ll be pleasantly surprised at how intuitive and helpful Blue Ocean can be. Blue Ocean is so cool, I never want to leave it. Over the next few days, I’ll be publishing a series of videos, showing some common Jenkins use cases and how Blue Ocean makes them clearer and easier than ever before.

Stay Tuned!

Blog Categories: Jenkins
Categories: Companies

Now on DevOps Radio: CloudBees CEO Sacha Labourey Dives Into Blue Ocean

Wed, 04/05/2017 - 03:52

Sacha Labourey, CEO at CloudBees, joins DevOps Radio host Andre Pino, just in time for the general release of Blue Ocean, the dramatic new UX implementation of Jenkins. Though they do navigate their way to Blue Ocean, they actually start out with a deep dive into open source and its evolution.

In this episode, Sacha explains that Blue Ocean is not just bringing the Jenkins experience into 2017, but is in fact a radical departure from anything else on the market. It’s the same Jenkins engine but a completely reimagined UX, enabling users to move from concept to reality, focusing on their unique use case. With Blue Ocean, users have the ability to visualize software pipelines - to see in real time what the status of their software delivery pipeline is. Users can graphically modify pipelines and easily add and remove steps, configuring their pipeline to run the way they want it to.

To get to Blue Ocean, Sacha and Andre rewind to talk about Sacha’s path from the early days of open source and his stint at JBoss. These were the days of the “sandal brigade” (we see a theme here: blue ocean, sandals, warm weather, sandy beach…) and companies pushing back on allowing open source inside the enterprise. Fortunately, over time open source has become accepted in even the largest enterprises.

Sacha then shares the story of his meeting Kohsuke Kawaguchi, founder of Jenkins, explaining that it almost didn’t happen due to an errant email. Sacha and Andre then discuss the evolution of Jenkins from a continuous integration tool, to now the de facto tool for continuous delivery.

Sacha likes to think of CloudBees as a best friend to the Jenkins project. CloudBees contributes significant resources to the project, including substantial development resources. Jenkins Pipeline came about from work initiated by CloudBees. That work evolved into a foundation for Blue Ocean, created by a team of CloudBees developers led by James Dumay. Sacha mentions he couldn’t be prouder of the growth of the Jenkins project, now 12 years old, with more than 150,000 active installations and 1,300 plugins that enable Jenkins to integrate with almost any technology on the market.

Want to dive further into DevOps Radio? Head over to the CloudBees website or look us up on iTunes and subscribe to DevOps Radio via RSS feed. Let us know what you think by tweeting to @CloudBees and including #DevOpsRadio in your post.

Sacha on stage at Jenkins World 2016 in Santa Clara, CA.

Blog Categories: Jenkins, Company News
Categories: Companies

Blue Ocean Makes Creating Pipelines Fun and Intuitive for Teams Doing Continuous Delivery

Mon, 04/03/2017 - 17:32

Here at CloudBees we know that continuous delivery is as much about people as it is about tools. We believe that the best tools should enable people to work smarter and not harder. This week, CloudBees and the Jenkins project are proud to announce general availability of Blue Ocean 1.0.

Blue Ocean is an entirely modern and enjoyable way for developers to use Jenkins that is built from the ground up for continuous delivery (CD) pipelines. No code or text editing is required. It moves continuous delivery out of the purview of experts and allows anyone to build CD pipelines and begin practicing DevOps as quickly as possible.

We know that the entire business – not just developers – needs to be engaged in continuous delivery for it to be successful. For developers, CD is about deploying software, while for the business-minded, it’s about delivering value to customers - faster than the competition. Blue Ocean is designed to be the convergence of these two goals. 

Blue Ocean will be integrated into the CloudBees products and enhanced in the next few months. In the meantime, it is available today from the Jenkins project, in the update center. We’ve got big plans in store for reflecting our investment in Blue Ocean in our other enterprise tools soon – so stay tuned! 

Benefits

Visual Pipeline Editing - Team members of any skill level can create continuous delivery pipelines from start to finish with just a few clicks, using the intuitive, visual pipeline editor. Any pipeline created with the visual editor can also be edited in your favorite text editor, bringing all the benefits of Pipeline as Code.

Pipeline Visualization - Developers can visually represent pipelines in a way that anyone on the team can understand - even your boss’s boss - improving clarity into the continuous delivery process for the whole organization. The visualization helps you focus on what the pipeline does, not how it does it.

Pinpoint Troubleshooting - Blue Ocean enables developers to locate automation problems instantly, without endlessly scanning through logs or navigating through many screens, so you can get back to building the next big thing.

GitHub and Git Integration - Pipelines are created for all feature branches and pull requests, with their status reported back to GitHub. The whole team has visibility into whether changes need work or are good to go. 

Personalization – Every team member can make Jenkins their own by customizing the dashboard so they only see the pipelines that matter to them. Favoriting any pipeline or branch in Blue Ocean will show a favorite card on the dashboard so you can see its status at a glance.

Why not give your team a boost and try Blue Ocean – it’s available for free, today, from your CloudBees Plugin Manager!

To start using Blue Ocean from an existing Jenkins installation:

  • Log in to your Jenkins server
  • Click Manage Jenkins in the sidebar then Manage Plugins
  • Choose the Available tab and use the search bar to find Blue Ocean

 

Blog Categories: Jenkins, Company News
Categories: Companies

Continuous Delivery is Eating DevOps as Software is Eating the Business

Fri, 03/31/2017 - 16:34

So what is continuous delivery?

For years, IT has been automating business processes. Now IT is automating its own processes to keep up with the rapid pace of change and to meet the business demand for new software features and capabilities.

Continuous Delivery

What if a business could release dozens of tested software improvements to production daily? Could the business respond to market and customer demands more quickly? The answer is yes – and continuous delivery makes this possible.

Continuous delivery involves the automation of the application development and delivery lifecycle, thus taking software from code to production in an automated process – and doing so more quickly than traditional processes. It is also the foundation for enabling the DevOps implementations that are occurring in many organizations today. Continuous delivery is an opportunity to bring the development and operations teams closer together. The industry calls this new culture DevOps. Companies that are successful with continuous delivery have embraced it. DevOps is all about a no-blame culture - getting everyone focused on the release and how they can successfully deliver it to the business in short order.

By adopting a DevOps mindset, developers and IT operations have to execute the cultural changes necessary to bind them together into a single team that can successfully leverage continuous integration and continuous delivery to its utmost.

DevOps teams use continuous delivery techniques such as applying automation and version control to the configurations of the supporting application infrastructure. By being able to duplicate (and replicate) these environments exactly, development, testing and production environments are in sync. Release to production becomes smooth - a non-event, and any errors that occur throughout the delivery process can be quickly identified and resolved.

With automation, you eliminate manual processes that slow down delivery and are also prone to human error. Once a developer changes an application, the change automatically propagates through the build, test and deployment processes in quick succession. This enables frequent, incremental and higher quality software releases deployed to production – hence at lower risk and cost than traditional software development practices. While this automation can be intimidating to some organizations at first, the value in more rapidly delivering what the business wants outweighs the effort that the transformation requires.

Onboarding Business as Part of IT

One key aspect of continuous delivery is that it makes it possible to channel critical business, IT and market feedback into the development cycle to quickly enable incremental feature improvements and enhancements. With continuous delivery, an organization can innovate, test and deploy revenue-generating software many times each hour, day or week—as often and as quickly as the business requires.

However, the iteration doesn’t end once new code gets deployed to production. Business metrics are continuously collected in production and the business will be able to compare metrics against pre-defined goals (shorter sales, increased sales, etc.) This feedback loop to the business is a key aspect of continuous delivery.

The Business Value of Continuous Delivery

Software is at the heart of many products. As software eats the world, the business must deliver new software capabilities more quickly and efficiently. Business units are trying to sell more in their markets and respond more quickly to market demands. They are trying to interact in new ways and more intimately with customers in order to gain a competitive edge. The business is increasingly looking to IT for software processes that enable fast cycles.

Continuous delivery is at the forefront of the software-led business. Continuous delivery enables the line of business to meet market requirements in order to drive the business forward. Continuous delivery empowers the business to respond to competitive threats by testing new capabilities or even new markets – and doing so quickly. This enables the business to learn what works - and more importantly, what doesn’t work - using an approach that is faster, low risk and low cost.

Continuous delivery practices fold IT into the business strategically. Continuous delivery effects business-driven software changes and error resolutions in close proximity to testing, allowing quick course corrections that keep the business on track. IT can respond in a very timely fashion to the needs of the business and the business can embrace IT as a partner.

A common misconception is that highly iterative and automated IT processes increase risk and compromise quality. On the contrary, reality shows that accelerated delivery of capabilities does not translate to sacrifices in quality. Continuous testing is inherent in continuous delivery practices, so automated testing is central to this software delivery methodology. Automated testing ensures that every time you release a new update or feature, you put it through a rigorous and repeatable automated testing process. Furthermore, since very little changes between two iterations, it gets very easy to spot where an issue comes from, how to solve it and, in the meantime, revert back to the previous version, which was known to work.

The result is a software release candidate that instills confidence. The candidate meets your quality standards because it passes long-established testing criteria. We’re actually talking about building quality into the process through automated testing: Organizations utilizing effective continuous delivery practices are able to produce higher quality software.

Similarly for business changes: since continuous delivery leads businesses to release relatively small incremental features frequently, if any given functionality doesn’t yield the expected output/metrics, the business hasn’t invested as much time and resources as it previously would have for the same outcome. The business can gather immediate feedback and make a decision as to whether to continue to invest in the incremental functionality or revert back to the previous business situation, without having too much invested.

Why Should you Care about Continuous Delivery?

Businesses are increasingly relying on software. The automobile industry is a good example, but by no means the only one. Automotive manufacturers today utilize embedded software in the form of ECUs (electronic control units) to control systems throughout the car. Cars can have up to 10 million lines of code and 125 ECUs or more in them. When automotive manufacturer Tesla discovers an issue with its cars, it delivers the software directly to the owner in the form of a download the owner initiates from within the car. This process saves Tesla millions of dollars – unlike the process that traditional automotive manufacturers use, which requires expensive physical recalls when an engineering or manufacturing issue is discovered. That’s the kind of leap modern software delivery practices – such as continuous delivery - can provide to the business.

As software eats the business, so continuous delivery eats software development and deployment. If you don’t care about continuous delivery, your competition will. You will find yourself facing either a smaller, more nimble company or a larger, more nimble company – both of whom have adopted continuous delivery. They will outpace you very quickly in their ability to deliver new capabilities to market. Can you wait months to deploy a solution that your competitors already used to steal your customers and eat your lunch yesterday?

Yet, moving from a traditional development process – such as waterfall – to a continuous delivery practice is not an easy transformation for large organizations. In order to make the move to continuous delivery, they have to take a holistic approach to their processes to enable transformation.

A lot of firms dip their toe in the water first, trying continuous delivery on a single and simple project led by a team motivated to perform that transition. That gives them experience with a project that is low risk. Then they can take what they learned from that small project and apply it to broader initiatives. In any case, forcing a big bang approach is very likely to fail.

When the first project is successful, other project teams will want to adopt continuous delivery practices. When a small team completes a project successfully using continuous delivery, people share that success and promote it throughout the organization. Suddenly, everyone wants to adopt continuous delivery.

Continuous delivery produces a satisfying counter-intuitive result: any apprehensions about potential risks associated with the shift to continuous delivery are relieved as speed, transparency, automation and cross-team collaboration and ownership of software actually get rid of old risks, spare the enterprise new risks and catapult application delivery. In the process, applications are optimized – and so is the business.

Onward!

Sacha

Sacha Labourey
CEO and Founder
CloudBees
Follow Sacha on Twitter

 

 

 

Categories: Companies

Bug-vs-Feature: The Acceptance Criteria Game

Thu, 03/30/2017 - 20:05

Search any issue tracker and before long you are sure to find many examples of the “Bug-vs-Feature” debate. Sometimes the debate can become quite comical, as parodied by XKCD:

Workflow

Oftentimes the Bug-vs-Feature debate is seen as an excuse not to do something.

Well actually that’s not a bug but a feature, closing as WONTFIX.

This type of thing can - quite rightly - get users very upset.

At this point we could start to drag out the Myers-Briggs personality types and try to identify the kinds of personality that would be drawn to software engineering… Perhaps more tempting would be to use Lacanian psychoanalytic theory to place this behaviour into the Lacanian structures and label software engineers as suffering from forms of perversion.

I would like to offer a different perspective.

This behavior is part of the Acceptance Criteria Game. The problem is that you didn’t realize you were playing the game. And you probably don’t even know the rules.

This is not a game in the traditional sense, nor is it one where you can be co-opted just for observing - as some bees learned when watching other bees play bee-Corey’s variant of Mao which includes a penalty for “observing”.

Rather this is a game in the sense of Game Theory.

In the Acceptance Criteria Game we all have roles to play. If we all play those roles correctly, then we will end up with wonderful products and everyone will be happy. If somebody doesn’t play the game correctly, or chooses not to play, then eventually you will end up with the Bug-vs-Feature debate and frustration.

I will attempt to explain the game.

  • The aim of the game is to evolve a product.
  • In order to evolve the product we must change it.
  • Each change has associated acceptance criteria.
  • The change is not applied to the product until the acceptance criteria have been met.
  • The game centers around defining the acceptance criteria.

The game works best when the acceptance criteria are explicit and documented in some form, but for small teams that trust each other verbal acceptance criteria and shared domain knowledge can allow the game to be played well.

The game does not work when the acceptance criteria are fuzzy.

There are two roles in the game:

  • The change driver’s role is to define the acceptance criteria exactly as precisely as required for the change.
  • The change implementer’s role is to implement the acceptance criteria as minimally as required by the acceptance criteria.

When the implementer does more than required by the acceptance criteria, they are failing at the game, just as the driver is when they under-specify the acceptance criteria.

Let’s play the game.

  • The product is the USS Enterprise NCC-1701-D.
  • It has come to the attention of management that the shields need to be engaged quite often.
  • A time and motion study was performed and the results showed quite clearly that several seconds were lost between:
    • the bridge commander saying “Engage Shields”;
    • the security officer opening up the shield control console;
    • selecting the “Shields” screen; and finally
    • clicking the “Engage” button.

The result was a request to change the UI.

Add an “Engage Shields” button to the bridge commander’s screen.

This change could also include a REST API so that the functionality could be invoked from the bridge commander’s tri-corder in the event that they are away from their station.

The change driver has not played their role correctly, so let’s see what happens when the change implementer plays the acceptance criteria game:

Benedict Cumberbatch filming Sherlock cropped2

The game is afoot!

First, that whole “This change could”… well you used “could”, so we don’t have to do anything there.

So, no need for the REST API then.

Next, you said add a button to the screen but you never said that the button should do anything.

From which we conclude that we can just add the button and have it do nothing.

The result is that the change gets implemented and the ship blows up at the next encounter.

In other words, because the change driver did not play their role correctly, we end up with a crappy sub-optimal product and everyone is a loser.

What should happen after this is that the change driver recognizes the ship’s destruction was their fault, resulting from a failure to play their role correctly. There should be learning, and the next request for change should be accompanied by better acceptance criteria.

Add an “Engage Shields” button to the bridge commander’s screen.

The button will not require any confirmation and will engage the shields.

Pressing the button should engage the shields within no more than 50ms.

Let’s see what could happen if the change implementer does not play their role correctly.

Ok, we have added the “Engage Shields” button…

It doesn’t make sense to keep the button there when the shields are already engaged.

For completeness, when the shields are engaged we will replace the button with a “Shields Down” button.

The result is that the change gets implemented - and more besides. This time the ship blows up because the shields were being cycled rapidly: the bridge commander had left their finger on the console for more than 50ms and activated the “Shields Down” button that was not requested in the original change.

In other words, because the change implementer did not play their role correctly, again we end up with a crappy sub-optimal product and everyone is a loser.

Boom!

So, how can we play the game so that everyone wins and we end up with a superb product?

My answer is that we should recognize this is a game, and reward playing it correctly… but reward it before you start making the change.

Before you start work on a change, make sure that there is agreement on the acceptance criteria for the change.

  • The change driver should ask for the holes in their acceptance criteria.

    Do not wait until the change has been implemented to find out that you left holes in your criteria!
     

  • The change implementer should seek to confirm the minimal effort to deliver the change.

    Do not implement more than required.

If we do the above before we start, ideally as part of the review of the acceptance criteria, then it is clear what each change will provide.

Now of course there is another problem with maintaining an accurate description of what the aggregate functionality of the product is as it is changed, but we can just add updating that description to the acceptance criteria and we are done!

Blog Categories: Developer Zone
Categories: Companies

Now on DevOps Radio: Analyze This! Clive Longbottom of Quocirca on How DevOps is Transforming Software Development in Europe

Wed, 03/29/2017 - 01:35

In the latest installment of DevOps Radio, host Andre Pino and Clive Longbottom, founder of UK analyst firm Quocirca, find common ground in their chemistry and chemical engineering degrees, before diving into the chemistry of DevOps. Clive explains how he went from working with anti-cancer drugs and fuel cells to then being recruited by an analyst firm called The Meta Group (now part of Gartner), covering a wide range of IT processes in his role as an analyst. He eventually went on to found Quocirca.

Clive comments on the global state of DevOps, identifying the European countries that tend to be more IT savvy and have been early adopters of DevOps, as well as the laggards that are behind. As with almost anything, politics plays a role in some aspects of adoption (between financial/economic woes and data residency issues). A recent and significant development is, of course, Brexit, whose implications for UK-based companies and cloud-based hosters are significant.

One such obstacle to DevOps adoption is what Clive describes as the chaos caused by tool proliferation. Clive describes the domino effect caused by developers using their own open source tools of choice and how the chaos escalates when more and more individuals and then teams work together and more and more tools proliferate. He explains that a DevOps orchestration tool is helpful to manage the multitude of tools and help the organization put processes, workflows and checks and balances in place. (We’re pretty sure Andre knows a butler who can help with that problem!) When all the checks and balances are put in place to ensure that the workflows occur in the correct manner, and feedback loops are in place, that’s the best an organization can hope for in the DevOps world — and that’s what Clive is seeing in the organizations that are winning with DevOps right now.

What else is going on in the world of DevOps? Tune into the latest episode of DevOps Radio on the CloudBees website or on iTunes and make sure to subscribe to DevOps Radio via RSS feed. Let us know what you think by tweeting to @CloudBees and including #DevOpsRadio in your post. 

 

 

Blog Categories: Company News
Categories: Companies

FileInputStream / FileOutputStream Considered Harmful

Fri, 03/24/2017 - 12:02

Suricata suricatta -Auckland Zoo -group-8a (with captions added by Stephen Connolly)

Ok, so you have been given an array of bytes that you have to write to a file. You’re a Java developer. You have been writing Java code for years. You got this:

public void writeToFile(String fileName, byte[] content) throws IOException {
  try (FileOutputStream os = new FileOutputStream(fileName)) {
    os.write(content);
  }
}

Can you spot the bug?

What about this method to read the files back again?

public byte[] readFromFile(String fileName) throws IOException {
  byte[] buf = new byte[8192];
  try (FileInputStream is = new FileInputStream(fileName)) {
    int len = is.read(buf);
    if (len == -1) {
      return new byte[0]; // empty file
    }
    if (len < buf.length) {
      return Arrays.copyOf(buf, len);
    }
    ByteArrayOutputStream os = new ByteArrayOutputStream(16384);
    while (len != -1) {
      os.write(buf, 0, len);
      len = is.read(buf);      
    }
    return os.toByteArray();
  }
} 

Spotted the bug yet?

Of course the bug is in the title of this post! We are using FileInputStream and FileOutputStream.

So what exactly is wrong with that?

Have you ever noticed that FileInputStream overrides finalize()? The same goes for FileOutputStream, by the way.

Every time you create either a FileInputStream or a FileOutputStream, you are creating an object that, even if you close it correctly and promptly, will be put into a special category of objects that only get cleaned up when the garbage collector does a full GC. Sadly, due to backwards compatibility constraints, this is not something that can be fixed in the JDK any time soon, as there could be some code out there where somebody has extended FileInputStream / FileOutputStream and is relying on those finalize() methods ensuring the call to close().

Now that is not an issue for short lived programs… or for programs that do very little file I/O… but for programs that create a lot of files, it can cause issues. For example, Hadoop found “long GC pauses were devoted to process high number of final references” resulting from the creation of lots of FileInputStream instances.

The solution (at least if you are using Java 7 or newer) is not too hard - apart from retraining your muscle memory - just switch to Files.newInputStream(...) and Files.newOutputStream(...).

Our code becomes:

public void writeToFile(String fileName, byte[] content) throws IOException {
  try (OutputStream os = Files.newOutputStream(Paths.get(fileName))) {
    os.write(content);
  }
}

public byte[] readFromFile(String fileName) throws IOException {
  byte[] buf = new byte[8192];
  try (InputStream is = Files.newInputStream(Paths.get(fileName))) {
    int len = is.read(buf);
    if (len == -1) {
      return new byte[0]; // empty file
    }
    if (len < buf.length) {
      return Arrays.copyOf(buf, len);
    }
    ByteArrayOutputStream os = new ByteArrayOutputStream(16384);
    while (len != -1) {
      os.write(buf, 0, len);
      len = is.read(buf);      
    }
    return os.toByteArray();
  }
} 
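
For simple cases like the two methods above, the same java.nio.file.Files class even gives us one-liners that sidestep the problem entirely:

public void writeToFile(String fileName, byte[] content) throws IOException {
  // creates the file if needed, truncates it otherwise - no stream to manage
  Files.write(Paths.get(fileName), content);
}

public byte[] readFromFile(String fileName) throws IOException {
  // reads the whole file without any FileInputStream (or finalize()) involved
  return Files.readAllBytes(Paths.get(fileName));
}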

A seemingly small change that could reduce GC pauses if you do a lot of file I/O!

Oh, and yeah, we’re making this change in Jenkins.

Blog Categories: Developer Zone
Categories: Companies

Using Multi-branch Pipelines in the Apache Maven Project

Thu, 03/23/2017 - 16:56

This is a post about how using Jenkins and Pipeline has enabled the Apache Maven project to work faster and better.

Most Java developers should have at least some awareness of the Apache Maven project. Maven is used to build a lot of Java projects. In fact the Jenkins project and most Jenkins plugins are currently built using Maven.

After the release of Maven 3.3.9 in 2015, at least from the outside, the project might have appeared to be stalled. In reality, the project was trying to resolve a key issue with one of its core components: Eclipse Aether. The Eclipse Foundation had decided that the Aether project was no longer active and had started termination procedures.

Behind the scenes the Maven Project Management Committee was negotiating with the Eclipse Foundation and getting all the IP clearance from committers required in order to move the project to Maven. Finally in the second half of 2016, the code landed as Maven Resolver.

But code does not stay still.

There had been other changes made to Maven since 3.3.9 and the integration tests had not been updated in accordance with the project conventions.

The original goal had been to get a release of Maven itself with Resolver and no other major changes in order to provide a baseline. This goal was no longer possible.

In January 2017, the tough decision was taken.

Reset everything back to 3.3.9 and merge in each feature cleanly, one at a time, ideally with a full clean test run on the main supported platforms: Linux and Windows, Java 7 and 8.

In a corporate environment, you could probably spend money to work your way out of trying to reconstruct a subset of 14 months of development history. The Apache Foundation is built on volunteers. The Maven project committers are all volunteers working on the project in their spare time.

What was needed was a way to let those volunteers work in parallel preparing the various feature branches while ensuring that they get feedback from the CI server so that there is very good confidence of a clean test run before the feature branch is merged to master.

Enter Jenkins Pipeline Multibranch and the Jenkinsfile.

A Jenkinsfile was set up that does the following:

  1. Determines the current revision of the integration tests for the corresponding branch of the integration tests repository (falling back to the master branch if there is no corresponding branch)
  2. Checks out Maven itself and builds it with the baseline Java version (Java 7) and records the unit test results
  3. In parallel on Windows and Linux build agents, with both Java 7 and Java 8, checks out the single revision of the integration tests identified in step 1 and runs those tests against the Maven distribution built in step 2, recording all the results at the end.
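
To make the shape concrete, here is a heavily simplified sketch of that Jenkinsfile in scripted Pipeline syntax. The agent labels, tool names, paths and run-its.sh / run-its.cmd scripts are placeholders, not the actual Apache Maven Jenkinsfile:

// step 2: build Maven itself with the baseline JDK
node('ubuntu') {
    stage('Build Maven core') {
        checkout scm
        withEnv(["JAVA_HOME=${tool 'jdk-7'}", "PATH+MAVEN=${tool 'maven-3'}/bin"]) {
            sh 'mvn -B -e clean verify'
        }
        junit '**/target/surefire-reports/*.xml'
        // keep the freshly built distribution for the integration test runs
        stash name: 'dist', includes: 'apache-maven/target/*.zip'
    }
}

// step 3: run the pinned revision of the integration tests against the stashed
// distribution on every OS / JDK combination in parallel
def runs = [:]
for (os in ['linux', 'windows']) {
    for (jdk in ['7', '8']) {
        def label = (os == 'linux') ? 'ubuntu' : 'windows'
        def jdkTool = "jdk-${jdk}"
        def name = "${os}-java-${jdk}"
        runs[name] = {
            node(label) {
                checkout scm // the real job checks out the IT revision determined in step 1
                unstash 'dist'
                withEnv(["JAVA_HOME=${tool jdkTool}"]) {
                    if (isUnix()) {
                        sh './run-its.sh'
                    } else {
                        bat 'run-its.cmd'
                    }
                }
                junit '**/target/failsafe-reports/*.xml'
            }
        }
    }
}
stage('Integration tests') {
    parallel runs
}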

There are more enhancements planned for the Jenkinsfile (such as moving to the declarative syntax), but with just this we were able to get all the agreed scope merged and cut two release candidates.

The workflow is something like this:

  1. Developer starts working on a change in a local branch
  2. The developer recognizes that some new integration tests are required, so creates a branch with the same name in the integration tests repository.
  3. When the developer is ready to get a full test run, they push the integration tests branch (integration tests have to be pushed first at present) and then push the core branch.
  4. The Apache GitPubSub event notification system sends notification of the commit to all active subscribers.
  5. The Apache Jenkins server is an active subscriber to GitPubSub and routes the push details into the SCM API plugin’s event system.
  6. The Pipeline Multibranch plugin creates a branch project for the new branch and triggers a build
  7. Typically the build is started within 5 seconds of the developer pushing the commit.
  8. As the integration tests run in parallel, the developer can get the build result as soon as possible.
  9. Once the branch is built successfully and merged, the developer deletes the branch.
  10. GitPubSub sends the branch deletion event and Jenkins marks the branch job as disabled (we keep the last 3 deleted branches in case anyone has concerns about the build result)

The general consensus among committers is that the multi-branch project is a major improvement on what we had before. 

Notes
  • While GitPubSub itself is probably limited in scope to being used at the Apache Software Foundation, the subscriber code that routes events from source control into the SCM API plugin’s event system is relatively small and straightforward and would be easy to adapt if you have a custom Git hosting service, i.e. if you were in the 4% on this totally unscientific poll I ran on twitter:

    If you use Git at work, please answer this poll. The git server we use is:


    - Stephen Connolly (@connolly_s) March 17, 2017

  • There is currently an issue whereby changes to the integration test repository do not trigger a build. This has not proved to be a critical issue so far as typically developers change both repositories if they are changing the integration tests.

 

Blog Categories: Jenkins
Categories: Companies

“Workflow” Means Different Things to Different People

Wed, 03/22/2017 - 21:38

Wikipedia defines the term workflow as “an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes” - processes that make things or just generally get work done. Manufacturers can thank workflows for revolutionizing the production of everything from cars to chocolate bars. Management wonks have built careers on applying workflow improvement theories like Lean and TQM to their business processes.

What does workflow mean to the people who create software? Years ago, probably not much. While this is a field where there’s plenty of complicated work to move along a conceptual assembly line, the actual process of building software historically has included so many zigs and zags that the prototypical pathway from A to Z was less of a straight line and more of a sideways fever chart.

Today, workflow, as a concept, is gaining traction in software circles, with the universal push to increase businesses’ speed, agility and focus on the customer. It’s emerging as a key component in an advanced discipline called continuous delivery that enables organizations to conduct frequent, small updates to apps so companies can respond to changing business needs.

So, how does workflow actually work in continuous delivery environments? How do companies make it happen? What kinds of pains have they experienced that have pushed them to adopt workflow techniques? And what kinds of benefits are they getting?

To answer these questions, it makes sense to look at how software moves through a continuous delivery pipeline. It goes through a series of stages to ensure that it’s being built, tested and deployed properly. While organizations set up their pipelines according to their own individual needs, a typical pipeline might involve a string of performance tests, Selenium tests for multiple browsers, Sonar analysis, user acceptance tests and deployments to staging and production. To tie the process together, an organization would probably use a set of orchestration tools such as the ones available in Jenkins.
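
To make that concrete, such a pipeline could be sketched in Jenkins Pipeline along these lines (a hypothetical example - the build commands, scripts and stage names would of course be your own):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Performance tests') {
            steps { sh './run-performance-tests.sh' }
        }
        stage('Selenium tests') {
            steps {
                // one run per target browser
                sh './run-selenium-tests.sh chrome'
                sh './run-selenium-tests.sh firefox'
            }
        }
        stage('Sonar analysis') {
            steps { sh 'mvn sonar:sonar' }
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging' }
        }
        stage('User acceptance tests') {
            steps { input message: 'UAT passed - promote to production?' }
        }
        stage('Deploy to production') {
            steps { sh './deploy.sh production' }
        }
    }
}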

Assessing your processes

Some software processes are simpler than others. If the series of steps in a pipeline is simple and predictable enough, it can be relatively easy to define a pipeline that repeats flawlessly – like a factory running at full capacity.

But this is rare, especially in large organizations. Most software delivery environments are much more complicated, requiring steps that need to be defined, executed, revised, run in parallel, shelved, restarted, saved, fixed, tested, retested and reworked countless times.

Continuous delivery itself smooths out these uneven processes to a great extent, but it doesn’t eliminate complexity all by itself. Even in the most well-defined pipelines, steps are built in to sometimes stop, veer left or double back over some of the same ground. Things can change – abruptly, sometimes painfully – and pipelines need to account for that.

The more complicated a pipeline gets, the more time and cost get piled onto a job. The solution: automate the pipeline. Create a workflow that moves the build from stage to stage, automatically, based on the successful completion of a process – accounting for any and all tricky hand-offs embedded within the pipeline design.

Again, for simple pipelines, this may not be a hard task. But, for complicated pipelines, there are a lot of issues to plan for. Here are a few:

  • Multiple stages – In large organizations, you may have a long list of stages to accommodate, with some of them occurring in different locations, involving different teams.
  • Forks and loops – Pipelines aren’t always linear. Sometimes, you’ll want to build in a re-test or a re-work, assuming some flaws will creep in at a certain stage.
  • Outages – They happen. If you have a long pipeline, you want to have a workflow engine ensure that jobs get saved in the event of an outage.
  • Human interaction – For some steps, you want a human to check the build. Workflows should accommodate the planned – and unplanned – intervention of human hands.
  • Errors – They also happen. When errors crop up, you want an automated process to let you restart where you left off.
  • Reusable builds – In the case of transient errors, the automation engine should allow builds to be used and re-used to ensure that processes move forward.
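As promised above, here is a rough Jenkins Pipeline sketch showing how a workflow engine can absorb several of these issues; the commands, timings and script names are invented for the example. One build is produced and reused via stash/unstash, test work forks into parallel branches, a flaky deployment is retried, and a human gate pauses the run without tying up an executor. Because Pipeline persists its state, an in-flight run can also survive a restart of the Jenkins master, which covers the outage case.

// Illustrative only: stage names, commands and the timeout are assumptions.
node {
    stage('Build once') {
        checkout scm
        sh 'mvn -B clean package'
        stash name: 'ws', includes: '**'       // reusable build for later stages
    }
}
stage('Tests in parallel') {                    // forks in the pipeline
    parallel(
        unit:        { node { unstash 'ws'; sh 'mvn -B test' } },
        integration: { node { unstash 'ws'; sh 'mvn -B verify -Pit' } }
    )
}
stage('Manual check') {                         // planned human interaction
    timeout(time: 2, unit: 'DAYS') {
        input message: 'Promote this build to production?'
    }
}
node {
    stage('Deploy to production') {
        unstash 'ws'
        retry(3) {                              // absorb transient errors
            sh './deploy.sh production'
        }
    }
}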

In the past, software teams have automated parts of the pipeline process using a variety of tools and plugins. They have combined the resources in different ways, sometimes varying from job to job. Pipelines would get defined, and builds would move from stage to stage in a chain of jobs — sometimes automatically, sometimes with human guidance, with varying degrees of success.

As the pipeline automation concept has advanced, new tools are emerging that program in many of the variables that have thrown wrenches into more complex pipelines over the years. Some of the tools are delivered by vendors with big stakes in the continuous delivery process – known names like Chef, Puppet, Serena and Pivotal. Other popular continuous delivery tools have their roots in open source, such as Jenkins.

While we are mentioning Jenkins, the community recently introduced functionality specifically designed to help automate workflows. Jenkins Pipeline (formerly known as Workflow) gives a software team the ability to automate the whole application lifecycle: simple and complex workflows, automated processes and manual steps. Teams can now orchestrate the entire software delivery process with Jenkins, automatically moving code from stage to stage and measuring the performance of an activity at any stage of the process.

Conclusion
Over the last 10 years, continuous integration has brought tangible improvements to the software delivery lifecycle, improvements that enabled the adoption of agile delivery practices. The industry continues to evolve. Continuous delivery has given teams the ability to extend beyond integration to a fully formed, tightly coordinated delivery process, drawing on tools and technologies that work together in concert.

Pipeline brings continuous delivery forward another step, helping teams link together complex pipelines and automate tasks every step of the way. For those who care about software, workflow means business.

This blog entry was originally posted on Network World.

Blog Categories: Jenkins
Categories: Companies

Prerequisites for a Successful Enterprise Continuous Delivery Implementation

Thu, 03/16/2017 - 17:27

Continuous delivery as a methodology and tool to meet the ever-increasing demand to deliver software at the speed of ideas is quickly gaining the attention of businesses today. Continuous delivery, with its emphasis on keeping software in a release-ready state at all times, is a natural evolution from continuous integration and agile software development practices. However, the cultural and operational challenges to achieving continuous delivery are much greater. For most organizations, continuous delivery requires adaptation and extension of existing software release processes. The roles, relationships and responsibilities of people across the organization can also be impacted. The tools used to deliver, update and maintain software must support automation and collaboration properly, in order to minimize delays and provide tight feedback cycles across the business.

Organizations looking to transition to continuous delivery should consider the following seven prerequisites: practical steps that will allow them to successfully execute the cultural and operational changes within the regulatory and business constraints they face.

1. Development, quality assurance and operations teams must have shared goals and communicate

While continuous integration limits its scope to the development team, continuous delivery embraces the testing phases of the quality assurance (QA) team and the deployments to staging and production environments that are managed by the production operations team. This is a major transformation in software development, and to succeed in transforming a continuous integration platform into a continuous delivery platform, it is critical to integrate the QA and operations teams, as well as the development team, into its governance. Collaboration and communication are vital components of successful software development today, and in a continuous delivery environment they have to take centre stage.

2. Continuous integration must be working prior to moving to continuous delivery

Continuous delivery is an extension of continuous integration. The prerequisite to continuous delivery is to have continuous integration in place and working during the project, including source control management, automated builds and unit tests, as well as continuous builds of the software.
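As a rough sketch (the polling schedule, build command and report paths are illustrative), a minimal continuous integration Pipeline that builds and unit-tests every change pulled from source control, and records the results, might look like this:

// Minimal CI sketch: poll source control, build, run unit tests, keep the results.
properties([pipelineTriggers([pollSCM('H/5 * * * *')])])

node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build and unit tests') {
        sh 'mvn -B clean verify'
    }
    stage('Record results') {
        junit 'target/surefire-reports/*.xml'
        archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
    }
}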

3. Automate and version everything

Continuous delivery involves the continuous repetition of many tasks such as building applications and packages, deploying applications and configurations, resetting environments and databases. All these tasks in continuous delivery should be automated with tools and scripts, and kept under version control so that everything can be audited and reproduced.
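One way to picture this (the script names and paths below are invented) is a pipeline that drives every repetitive task through scripts stored in the same repository as the application, so the automation is versioned, auditable and reproducible alongside the code it deploys:

// Sketch: the pipeline only calls scripts that are themselves under version
// control, so every environment reset and deployment can be audited and replayed.
node {
    checkout scm                                   // Jenkinsfile, scripts and app code together
    stage('Reset test environment') {
        sh './automation/reset-environment.sh test'
    }
    stage('Reset test database') {
        sh './automation/reset-database.sh test'
    }
    stage('Build and deploy') {
        sh 'mvn -B clean package'
        sh './automation/deploy.sh test target/app.war'
    }
}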

4. Sharing tools and procedures between teams is critical

Continuous delivery aims to validate the deployment procedures and automation used in the production environment. To do this successfully, these procedures and automations must be used as early as possible, so that they are already extensively tested by the time they are used to deploy software to production. In most cases, the same tools can be used in all environments, e.g. integration, staging and production.

The automation scripts should be managed in shared source code repositories so that each team (development, QA and operations) can enhance the tools and procedures. Mechanisms like pull requests can help with the governance of these shared tools and scripts.

5. The application must be production-friendly to make deployments non-events

Applications should simplify their deployment and rollback procedures so that deployments to production become non-events. A major step towards this is to reduce the number of components and configuration parameters deployed. Ease of rollback is important when deploying new versions; that is, having the ability to quickly roll back in case of problems. Feature toggles help to de-couple the deployment of binaries from feature activation: a rollback can then simply be the deactivation of a feature, thanks to a toggle.
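For illustration only, here is a tiny Groovy sketch of a feature toggle; the property name and behaviour are invented. The new code path ships with the binary but stays dark until the toggle is switched on, so a rollback is just flipping the toggle off, with no redeployment:

// Hypothetical toggle: the 'feature.newGreeting' system property is the switch.
String greeting(String name) {
    if (Boolean.getBoolean('feature.newGreeting')) {
        return "Welcome back, ${name}!"        // new, not yet fully validated path
    }
    return "Hello ${name}"                     // existing, proven path
}

println greeting('Ada')                        // prints "Hello Ada" until the toggle is enabled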

Special attention should be paid to any changes of database schemas, as this can make deployments and rollbacks much more complex. The schema-less design pattern of NoSQL databases brings a lot of flexibility, moving the responsibility of the schema from the database to the code. This concept can also be applied to relational databases.

6. The infrastructure must be project-friendly; it will empower people and teams

Infrastructures should provide all the tooling (GUIs, APIs and SDKs) and documentation required to empower the development and QA teams and make them autonomous in their work. Teams should be able to handle tasks such as:

  • Deploying the application version of their choice in an environment
  • Managing configuration parameters (view, modify, export, import)
  • Managing databases (creating snapshots of data, restoring a database snapshot)
  • Viewing, searching and setting up notification alerts on application logs

Public cloud platforms, mainly Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), are examples of project-friendly platforms.

7. Application versions must be ready to be shipped into production

One of the most important goals of continuous delivery is to allow the product owner to decide to deploy into production any version of the application that successfully goes through the continuous delivery pipeline; not only the version delivered at the end of an iteration with a “beautiful” version number.

Reaching this target requires many changes in the way applications are designed:

  • Features that are not yet validated by the QA team should be hidden from end users. Feature toggles and feature branches are two key ways to implement this.
  • Build tools should evolve from the concept of semantic versions separated by intermediate, unidentified snapshot versions to a continuous stream of non-semantic versions. Subversion repositories help provide ordered version numbers thanks to the revision number. Git, the free, open-source distributed version control system, is more complex to use for this, due to its unordered commit hashes; special tooling may be useful to make the version identifier more “human readable” (see the sketch after this list).
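As an example of such tooling (the versioning scheme shown is just one possible convention, not part of the original article), a Pipeline step can derive an ordered, human-readable identifier from Git by combining the commit count with the short hash:

// Sketch: build a readable, monotonically increasing version from Git metadata.
node {
    checkout scm
    def count = sh(script: 'git rev-list --count HEAD', returnStdout: true).trim()
    def hash  = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
    env.APP_VERSION = "1.0.${count}.${hash}"   // e.g. 1.0.2417.9f3c1ab
    echo "Building version ${env.APP_VERSION}"
}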

The crux is that continuous delivery is not just about a set of tools, it is also about the people and organizational culture. Technology, people and process need to be aligned to make continuous delivery successful; and a collaborative approach is fundamental to its success. Implementing these best practices can allow organizations to reap the rewards of a more fluid, automated approach to software development – and one that provides business agility too.

Cyrille Le Clerc
Director of Product Management
CloudBees

Follow Cyrille on Twitter

This blog entry was originally posted on Beta News.


Blog Categories: Developer Zone, Cloud Platform
Categories: Companies

Meet the Bees: Steven Christou

Thu, 03/09/2017 - 22:33

In every Meet the Bees blog post, you’ll learn more about a different CloudBees Bee. Let’s buzz on over to California and meet Steven Christou.​

Who are you? What is your role at CloudBees?

My name is Steven Christou, and I currently work in Tech Enablement at CloudBees.

My primary role involves engaging with customers on support-related questions and providing more efficient tooling for the support team. I do a lot of coding for our backend infrastructure, as well as making it easier for the team to diagnose support issues.

What makes CloudBees different from other companies?

The people. Seriously. I have never worked with a group of engineers so phenomenally gifted at coding. I have learned far more working side by side with these engineers than in any previous environment. There’s always something new to learn, and they’re always willing to help me learn more. I have also never worked with a more adventurous group of engineers, always striving to learn something new and taking the extra steps to make themselves more efficient.

What are some of the most common mistakes to avoid while using Jenkins?

One of the most common mistakes I find when engaging in support involves upgrading. I have a few tips for upgrading Jenkins. First, use a package manager. Upgrading with a package manager (like apt or yum) makes upgrades far easier and reduces the complexity involved with moving everything to custom locations. On that note, I would also recommend not simply replacing the war file when upgrading, as the init scripts will not be upgraded along with it.

Another common mistake I see is customers who upgrade and then, if something breaks, immediately downgrade. This is ill-advised and I strongly recommend against it. Jenkins tries its hardest to maintain backward compatibility with newer versions of plugins, but there is no guarantee that upgrading and then downgrading will not cause significantly more issues. I always recommend that customers use a test environment or a clone of their production instance and do the upgrade there first. Trigger a few jobs and confirm nothing breaks.

I would also like to recommend my talk, Help! My Jenkins is Down!, which goes into more depth on some of the more common issues encountered when managing a Jenkins instance.

Do you have any advice for someone starting a career in the CI/CD/DevOps/Jenkins space?

Do not be afraid to ask questions. I have been in the community for a while now, and I will say that everyone I have interacted with has been extremely welcoming. I am always on the Jenkins IRC channel (#jenkins on freenode) as schristou88, and I am always willing to do my best to help out. There are plenty of resources on the internet that provide best practices and advice in the CI/CD space. I would recommend starting out by learning the most important tool, Jenkins, and working out from there. Jenkins is one of the core tools for a DevOps engineer and has over 1,000 plugins to fit almost every requirement.

What has been the best thing you have worked on since joining CloudBees?

That’s a secret :)

If you could eat only one meal for the rest of your life, what would it be?

Kohsuke Kawaguchi introduced me to Japanese Curry, and I have not found anything more amazing than that!

Vanilla or chocolate or some other flavor, what’s your favorite ice cream flavor and brand?

I like Vanilla custard. Most I get from ice cream shops are amazing!

Blog Categories: Jenkins
Categories: Companies
