CloudBees' Blog - Continuous Integration in the Cloud

Jenkins World Speaker Highlight: Secure Container Development Pipelines with Jenkins

Tue, 08/23/2016 - 20:17

This is a guest post by Jenkins World speaker Anthony Bettini, Founder and CEO at FlawCheck.

At FlawCheck, we’re really excited about presenting to the Jenkins community at the upcoming Jenkins World 2016 in Santa Clara! FlawCheck will be presenting on “Secure Container Development Pipelines with Jenkins” in Exhibit Hall C, on Day 2 (September 14) from 2:00 PM - 2:45 PM. At FlawCheck, most of our time is spent with customers who are using Jenkins to build Docker containers but are concerned about the security risks. FlawCheck’s enterprise customers want to use enterprise policies to define which of the containers they build with Jenkins reach production, and then to continuously monitor them for compliance.

Building security into the software development lifecycle is already difficult for large enterprises following a waterfall development process. With Docker, particularly in continuous integration and continuous deployment environments, the challenge is even greater. Yet, for enterprises to do continuous deployment, security needs to be coupled with the build and release process, and that process needs to be fully automated, scalable and reliable.

If you’re interested in container security and security of open source software passing through Jenkins environments, we’d encourage you to grab a seat at the FlawCheck talk, “Secure Container Development Pipelines with Jenkins” in Exhibit Hall C, on Day 2 (September 14) from 2:00 PM - 2:45 PM. In the meantime, follow us on Twitter @FlawCheck and register for a free account at https://registry.flawcheck.com/register.

Anthony Bettini
Founder and CEO
 FlawCheck

This is a guest post written by Jenkins World 2016 speaker Anthony Bettini. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Top 9 Reasons You Need to Go ALL IN and Attend Jenkins World

Sat, 08/20/2016 - 21:09

The countdown is on. Jenkins World 2016 is coming to the Santa Clara Convention Center, September 13-15. It’ll be the world’s largest gathering of Jenkins users ever - come interact with the community and learn about everything Jenkins. The lead organizing sponsor is CloudBees, along with a number of premier Jenkins ecosystem vendors who are also sponsoring. Jenkins World will offer attendees opportunities to learn, explore and network. This year’s theme is “ALL IN” as Jenkins users, experts and thought leaders prepare to go ALL IN on DevOps.

Need more convincing? Below are nine reasons for YOU to go ALL IN at Jenkins World 2016:

  1. Hear keynotes from industry leaders - Kohsuke Kawaguchi, founder of the Jenkins project, kicks off the conference with the opening keynote this year. Other keynotes you won’t want to miss include Sacha Labourey, CEO of CloudBees, and Gary Gruver, former DevOps exec at Macys.com and HP and an industry author. Rumor has it that Gene Kim may make a guest appearance, too!
  2. Attend training/workshop add-on options – Come to Jenkins World as an attendee, leave as a Jenkins master by attending Jenkins certification training and/or learning the fundamentals of Docker and Jenkins. Additional workshops are available, covering topics such as plugin development, Jenkins certification and automating pipelines with the Pipeline plugin.
  3. FREE certification! Your Jenkins World registration also provides you with the option to take a certification exam completely FREE! Did we mention it was FREE?
  4. Rub shoulders with the Jenkins stars - Get access to some of the best Jenkins experts in the world - attend their sessions and network with them. The sessions cover a range of topics such as: infrastructure as code, security, containers, pipeline automation, best practices, scaling Jenkins and community development projects.
  5. Visit a variety of Jenkins ecosystem vendors – At the expanded Sponsor Expo you can check out a range of technologies and services that help you optimize software delivery with Jenkins.
  6. Pick up your next read – CloudBees Senior Consultant Viktor Farcic has recently published his book, The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices. Be one of 200 lucky attendees to get a free copy and be sure to catch his session on September 13.
  7. Do you follow CommitStrip? They will be onsite and YOU can help them paint a custom Jenkins-themed mural!
  8. Meet the Butler and snap a selfie – You may have seen the Butler on @CloudBees engaged in some Xtreme social media adventures to get himself to Jenkins World, but at Jenkins World you’ll have the chance to meet him in person. Don’t forget to snap a pic with him at the social media station and share with your friends.
  9. Spiff up your wardrobe – Always a hit, this year’s t-shirt promises to be hotter than ever. No more hints – attend and find out why!

All of this and so much more awaits you at Jenkins World. Go “All In” and register now! Use this code JWHGILMORE and get 20% off your conference registration.

See you in Santa Clara!

Categories: Companies

Service Discovery (The DevOps 2.0 Toolkit)

Fri, 08/19/2016 - 22:34

Service discovery is the answer to the problem of configuring our services when they are deployed to clusters. In particular, the problem is caused by a high level of dynamism and elasticity. Services are no longer deployed to a particular server, but somewhere within a cluster. We no longer specify the destination, only the requirements: deploy anywhere, as long as the node offers the specified amount of CPU and memory, a certain type of hard disk, and so on.

Static configuration is not an option anymore. How can we statically configure a proxy if we do not know where our services will be deployed? Even if we do, they will be scaled, descaled and rescheduled. The situation might change from one minute to the next. If the configuration were static, we would need an army of operators monitoring the cluster and changing the configuration. Even if we could afford that, the time required to apply changes manually would result in downtime and probably prevent us from practicing continuous delivery or deployment. Manual configuration of our services would be another bottleneck that, even with all the other improvements, would slow everything down.

Hence, service discovery enters the scene. The idea is simple: have a place where everything is registered automatically and from which others can request information. Service discovery consists of three components: a registry, a registration process, and a discovery (or templating) mechanism.

First, there must be a place where the information is stored. That should be some kind of lightweight database that is resistant to failure, with an API that can be used to put, get and remove data. etcd and Consul are among the commonly used tools for this role.
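
To make the registry concrete, here is a minimal Groovy sketch that exercises Consul's key/value HTTP API directly. It assumes a local Consul agent on its default port 8500; the key and value are only illustrations, not anything prescribed by the tools themselves:

    // Store, read and remove a value in Consul's key/value store.
    // Assumes a local Consul agent listening on the default port 8500.
    def kvUrl = 'http://localhost:8500/v1/kv/services/books/port'

    // PUT: register a piece of information.
    def put = new URL(kvUrl).openConnection()
    put.requestMethod = 'PUT'
    put.doOutput = true
    put.outputStream.withWriter { it << '8080' }
    assert put.responseCode == 200

    // GET: read the raw value back.
    println "Registered port: ${new URL(kvUrl + '?raw').text}"

    // DELETE: remove the entry when the service goes away.
    def del = new URL(kvUrl).openConnection()
    del.requestMethod = 'DELETE'
    assert del.responseCode == 200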

Next, we need a way to register information whenever a service is deployed, scaled or stopped. Registrator is one such tool. It monitors Docker events and adds or removes data from the registry of choice.

Finally, we need a way to change configurations whenever data in the registry is updated. There are plenty of tools in this area, confd and Consul Template being just two of them. However, this can quickly turn into an endeavor that is too complicated to maintain. Another approach is to incorporate discovery into our services themselves; that should be avoided when possible, since it introduces too much coupling. Both approaches to discovery are slowly fading in favor of software-defined networks (SDN). The idea is that an SDN is created around the services that form a group, so that all the communication flows without any predefined values. Instead of finding out where the database is, let the SDN expose a target called db; that way, your service does not need to know anything but that network endpoint.
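
Whichever tool handles the templating, the discovery half boils down to asking the registry where a service lives. Here is a Groovy sketch of such a query; it assumes a Consul agent on localhost and a service that happens to be registered under the name db:

    import groovy.json.JsonSlurper

    // Ask a local Consul agent (default port 8500) where the "db" service runs.
    def json = new URL('http://localhost:8500/v1/catalog/service/db').text
    def instances = new JsonSlurper().parseText(json)

    // Each entry describes one registered instance of the service.
    instances.each { svc ->
        def address = svc.ServiceAddress ?: svc.Address
        println "db is available at ${address}:${svc.ServicePort}"
    }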

Service discovery raises another question: what should we do with the proxy?

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in the book The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and even more tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

Blog Categories: Developer Zone
Categories: Companies

Jenkins World Speaker Highlight: Continuously Delivering Continuous Delivery Pipelines

Thu, 08/18/2016 - 22:23

This is a guest post by Jenkins World speaker Neil Hunt, senior DevOps architect at Aquilent.

In smaller companies with a handful of apps and fewer silos, implementing CD pipelines to support these apps is fairly straightforward, using one of the many delivery orchestration tools available today. There is likely a constrained tool set to support - not an abundance of flavors of applications and security practices - and generally fewer cooks in the kitchen. But in a larger organization, I have found that there are seemingly endless unique requirements and mountains to climb to reach this level of automation on each new project.

Enter the Jenkins Pipeline plugin. The large financial services organization I recently left, with a 600+ person IT organization and a 150+ application portfolio, set out to implement continuous delivery enterprise-wide. After considering several pipeline orchestration tools, we determined the Pipeline plugin (at the time called Workflow) to be the superior solution for our company. Pipeline has continued Jenkins’ legacy of presenting an extensible platform with just the right set of features, allowing organizations to scale its capabilities as they see fit, and to do so rapidly. As early adopters of Pipeline with an extensive set of requirements, we used it both to accelerate the pace of on-boarding new projects and to reduce the ongoing feature delivery time of our applications.

In my presentation at Jenkins World, I will demonstrate the methods we used to enable this. A few examples:

  • We leveraged the Pipeline Remote File Loader plugin to write shared common code and sought and received community enhancements to these functions.


Jenkinsfile, loading a shared AWS utilities function library


awsUtils.groovy, snippets of some AWS functions
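
As a rough illustration of the Jenkinsfile and awsUtils.groovy pairing described above, here is a minimal sketch; the repository URL, stage contents and helper function names are hypothetical, not the actual code from the post:

    // Jenkinsfile - a sketch only; repository URL and helper names are hypothetical
    node {
        // Load shared AWS helper functions via the Pipeline Remote File Loader plugin
        def awsUtils = fileLoader.fromGit(
            'awsUtils',                                          // path to awsUtils.groovy in that repository
            'https://github.com/example-org/pipeline-utils.git', // shared utilities repository (hypothetical)
            'master', null, '')

        stage('Build') {
            checkout scm
            sh 'mvn -B clean package'
        }

        stage('Deploy to dev') {
            // Hypothetical helpers defined in awsUtils.groovy (the loaded file must end with "return this")
            awsUtils.assumeRole('dev-deployer')
            awsUtils.deployStack('my-app', 'dev')
        }
    }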

  • We migrated from EC2 agents to Docker-based agents running on Amazon’s Elastic Container Service, allowing us to spin up new executors in seconds and letting teams own their own executor definitions.

Pipeline run #1 using standard EC2 executors, spinning up EC2 instance for each node; Pipeline run #2 using shared ECS cluster with near-instant instantiation of a Docker slave in the cluster for each node.

  • We also created a Pipeline Library of common pipelines, enabling projects that fit certain models to use ready-made end-to-end pipelines (a sketch of one follows this list). Some examples:
    • Maven JAR Pipeline: Pipeline that clones the Git repository, builds the JAR file from pom.xml, deploys it to Artifactory, and runs the Maven release plugin to increment to the next version
    • AngularJS Pipeline: Pipeline that executes a grunt and bower build, then syncs the output to Amazon S3 buckets in dev, then stage, then prod
    • Pentaho Reports Pipeline: Pipeline that clones the Git repository, constructs a zip file, and executes the Pentaho Business Intelligence Platform CLI to import the new set of reports into the dev, stage, then prod servers
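
Here is a sketch of what one such ready-made pipeline might look like, using the Maven JAR pipeline as an example; the structure and Maven invocations are illustrative assumptions, not the actual library code:

    // mavenJarPipeline.groovy - an illustrative sketch, not the actual library code
    def run(String gitUrl, String artifactoryRepo) {
        node {
            stage('Checkout') {
                git url: gitUrl
            }
            stage('Build JAR') {
                // Build the JAR from pom.xml
                sh 'mvn -B clean package'
            }
            stage('Publish') {
                // Push the artifact to Artifactory; the repository URL comes from the caller
                sh "mvn -B deploy -DaltDeploymentRepository=artifactory::default::${artifactoryRepo}"
            }
            stage('Release') {
                // Increment to the next version with the Maven release plugin
                sh 'mvn -B release:prepare release:perform'
            }
        }
    }

    return this

A project could then load this file (for example, with the Remote File Loader approach shown earlier) and call run() with its own repository and Artifactory URL.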

Perhaps most critically, a shout-out to the saving grace of this quest for our security and ops teams: the manual input step! While the ambition of continuous delivery is to have as few of these as possible, this was the single most pivotal feature in convincing others of Pipeline’s viability, since any step of the delivery process could now be gate-checked by an LDAP-enabled permission group. Were it not for the availability of this step, we might still be living in the world of: “This seems like a great tool for development, but we will have a segregated process for production deployments.” Instead, we started with a pipeline full of input steps, then used the data we collected about the longest delays to bring management focus to them and unite everyone around the goal of strategically removing them, one by one.
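
In Pipeline, that gate is a single step. A minimal sketch of a scripted pipeline fragment, with the approver group and deployment command made up for illustration:

    stage('Approve production deploy') {
        // Pause until a member of the (hypothetical) LDAP-backed group approves.
        input message: 'Deploy to production?', submitter: 'release-managers'
    }

    stage('Deploy to production') {
        node {
            sh './deploy.sh production'   // placeholder for the real deployment step
        }
    }

Because the submitter list can name an LDAP-backed group, security and operations teams keep control of the gate while the rest of the pipeline stays fully automated.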

Going forward, having recently joined Aquilent’s cloud solutions architecture team, I’ll be working with our project teams here to further mature the use of these Pipeline plugin features as we move towards continuous delivery. Already, we have migrated several components of our healthcare.gov project to Pipeline. The team has been able to consolidate several Jenkins jobs into a single, visible delivery pipeline, to maintain the lifecycle of the pipeline with our application code base in our SCM, and to more easily integrate with our external tools.

Due to functional shortcomings in the early adoption stages of the Pipeline plugin and the ever-present political challenges of shifting organizational policy, this has been, and continues to be, far from a bruise-free journey. But we plodded through many of these issues to bring this to fruition and, after months of iteration, ultimately reduced the number of manual steps in some pipelines from 12 down to one and brought pipelines that once spent more than 20 minutes in Jenkins down to only six minutes. I hope you’ll join this session at Jenkins World and learn about our challenges and successes in achieving the promise of continuous delivery at enterprise scale.

Neil Hunt
Senior DevOps Architect
 Aquilent

This is a guest post written by Jenkins World 2016 speaker Neil Hunt. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Join the Jenkins World Sticker Competition!

Tue, 08/16/2016 - 20:01

We’re thrilled to announce our first Jenkins Butler design contest! Design a unique version of the Jenkins Butler and submit it before September 9, 2016. Voting will take place at Jenkins World, at the sticker exchange booth hosted by Sticker Mule.

We have partnered with Sticker Mule, and the person who produces the winning design will get a $100 credit on stickermule.com to turn their design into die-cut custom stickers.

Please see how to enter and the rules for the competition below. If you have any questions, please contact us: fboruvka@cloudbees.com.

How to enter:

Entering the competition is easy. Just make sure your design follows the rules below and send it to fboruvka@cloudbees.com before September 9, 2016.

Rules:
  • One design per person
  • Must include the Jenkins Butler
  • The design must be sketched, drawn or digitally drawn, and must include dimensions
  • Include the reason behind your design
  • All entries must be submitted before September 9, 2016
  • The design must be original (not previously created, not copyrighted, etc.)

Good luck!

 

Blog Categories: Jenkins
Categories: Companies

Cluster Orchestration (The DevOps 2.0 Toolkit)

Mon, 08/15/2016 - 22:33

When I was an apprentice, I was taught to treat servers as pets. I would treat them with care. I would make sure that they were healthy and well fed. If one of them got sick, finding the cure was of utmost priority. I even gave them names. One was Garfield, and the other was Gandalf. Most companies I worked for had a theme for naming their servers: mythical creatures, comic book characters, animals and so on. Today, when working with clusters, the approach is different. The cloud changed it all. Pets became cattle. When one of them gets sick, we kill it. We know that there is an almost infinite number of healthy specimens, so curing a sick one is a waste of time. When something goes wrong, destroy it and create a new one. Our applications are built with scaling and fault tolerance in mind, so a temporary loss of a single node is not a problem. This approach goes hand in hand with a change in architecture.

If we want to be able to deploy and scale easily and efficiently, we want our services to be small. Smaller things are easier to reason about. Today, we are moving towards smaller, easier-to-manage, shorter-lived services. The old excuse for not defining our architecture around microservices - that they produce too many operational problems - is gone. After all, the more things there are to deploy, the more trouble the infrastructure department has configuring and monitoring everything. With containers, each service is self-sufficient and does not create infrastructure chaos, making microservices an attractive choice for many scenarios.

With microservices packed inside containers and deployed to a cluster, there is a need for a different set of tools. There is the need for cluster orchestration. Hence, we got Mesos, Kubernetes and Docker Swarm (just to name a few). With those tools, the need to manually SSH into servers disappeared. We got an automated way to deploy and scale services that will get rescheduled in case of a failure. If a container stops working, it will be deployed again. If a whole node fails, everything running on it will be moved to a healthy one. And all that is done without human intervention. We design a behavior and let machines take over. We are closer than ever to a widespread use of self-healing systems that do not need us.

While solving some of the problems, cluster orchestration tools created new ones. Namely, if we don't know in advance where our services will run, how do we configure them?

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in the book The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and even more tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

Blog Categories: Developer Zone
Categories: Companies

Containers and Immutable Deployments (The DevOps 2.0 Toolkit)

Tue, 08/09/2016 - 21:05

Even though CM alleviated some of the infrastructure problems, it did not make them go away. The problem is still there, only in a smaller measure. Even though it is now defined as code and automated, infrastructure hell continues to haunt us. Too many often conflicting dependencies quickly become a nightmare to manage. As a result, we tend to define standards. You can use only JDK7. The web server must be JBoss. These are the mandatory libraries. And so on, and so forth. The problem with such standards is that they are an innovation killer. They prevent us from trying new things (at least during working hours).

We should also add testing into the mix. How do you test a web application on many browsers? How do you make sure that your commercial framework works on different operating systems and with different infrastructure? The list of testing combinations is infinite. More importantly, how do we make sure that testing environments are exactly the same as production? Do we create a new environment every time a set of tests is run? If we do, how much time does such an action take?

CM tools were not addressing the cause of the problem but trying to tame it. The difficulty lies in the concept of mutable deployments. Every release brings something new and updates the previous version. That, in itself, introduces a high level of unreliability.

The solution to those, and a few other problems, lies in immutable deployments. As a concept, immutability is not something that came into being yesterday. We could create a new VM with each release and move it through the deployment pipeline all the way until production. The problem with VMs, in this context, is that they are heavy on resources and slow to build and instantiate. We want both fast and reliable. Either of those without the other does not cut it in today's market. Those are some of the reasons why Google has been using containers for a long time. Why doesn't everyone use containers? The answer is simple. Making containers work is challenging and that's where Docker enters the game. First, they made containers easy to use. Then they extended them with some of the things that we, today, consider a norm.

With Docker we got an easy way to create and run containers that provide immutable and fast deployments and isolation of processes. We got a lightweight and self-sufficient way to deploy applications and services without having to worry about infrastructure.

However, Docker itself proved not to be enough. Today, we do not run things on servers but inside clusters and we need more than containers to manage such deployments.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in the book The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and even more tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

This post is part of a new blog series all about the DevOps 2.0 Toolkit. Follow along in the coming weeks. Each post builds upon the last!

The DevOps 2.0 Toolkit
Configuration Management (The DevOps 2.0 Toolkit)
Containers and Immutable Deployments (The DevOps 2.0 Toolkit)
Cluster Orchestration (The DevOps 2.0 Toolkit) 
Service Discovery (The DevOps 2.0 Toolkit) 
Dynamic Proxies (The DevOps 2.0 Toolkit)
Zero-Downtime Deployment (The DevOps 2.0 Toolkit) 
Continuous Integration, Delivery, And Deployment (The DevOps 2.0 Toolkit)

Blog Categories: Developer Zone
Categories: Companies

Configuration Management (The DevOps 2.0 Toolkit)

Tue, 08/02/2016 - 18:58

Configuration management (CM) or provisioning tools have been around for quite some time. They are one of the first types of tools adopted by operations teams. They removed the idea that server provisioning and application deployment should be manual. Everything, from installing the base OS, through infrastructure setup, all the way to deploying the services we develop, moved into the hands of tools like CFEngine, Puppet and Chef. They stopped operations from being the bottleneck. Later on, they evolved into the self-service idea, where operators could prepare scripts in advance and developers would only need to select how many instances of a particular type they want. Thanks to the promise theory those tools are based on, running them periodically gave us self-healing in its infancy.

The most notable improvement those tools brought is the concept of infrastructure defined as code. Now we can put definitions into a code repository and use the same processes we are already accustomed to with the code we write. Today, everything is (or should be) defined as code, infrastructure included, and the role of UIs is (or should be) limited to reporting.

With the emergence of Docker, configuration management and provisioning continue to have a critical role, but the scope of what they should do has shrunk. They are no longer in charge of deployment; other tools do that. They do not have to set up complicated environments, since many things are now packed into containers. Their main role is to define infrastructure. We use them to create private networks, open ports, create users and perform other similar tasks.

For those and other reasons, adoption of simpler (but equally powerful) tools became widespread. With its push system and simple syntax, Ansible gained a strong hold on the market and, today, is my CM weapon of choice.

The real question is: why did Docker take deployment away from CM tools?

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in the book The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and even more tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

This post is part of a new blog series all about the DevOps 2.0 Toolkit. Follow along in the coming weeks. Each post builds upon the last!

The DevOps 2.0 Toolkit
Configuration Management (The DevOps 2.0 Toolkit)
Containers and Immutable Deployments (The DevOps 2.0 Toolkit)
Cluster Orchestration (The DevOps 2.0 Toolkit) 
Service Discovery (The DevOps 2.0 Toolkit) 
Dynamic Proxies (The DevOps 2.0 Toolkit)
Zero-Downtime Deployment (The DevOps 2.0 Toolkit) 
Continuous Integration, Delivery, And Deployment (The DevOps 2.0 Toolkit)

Blog Categories: Developer Zone
Categories: Companies

The DevOps 2.0 Toolkit

Thu, 07/28/2016 - 15:43

When agile appeared, it solved (some of) the problems we were facing at that time. It changed the idea that months-long iterations were the way to go. We learned that delivering often provides numerous benefits. It taught us to organize teams around all the skills required to deliver iterations, as opposed to horizontal departments organized around technical expertise (developers, testers, managers and so on). It taught us that automated testing and continuous integration are the best way to move fast and deliver often. Test-driven development, pair-programming, daily stand-ups and so on. A lot has changed since the waterfall days.

As a result, agile changed the way we develop software, but it failed to change how we deliver it.

Now we know that what we learned through agile is not enough. The problems we are facing today are not the same as those we were facing back then. Hence, the DevOps movement emerged. It taught us that operations are as important as any other skill and that teams need to be able not only to develop but also to deploy software. And by deploy, I mean reliably deploy often, at scale and without downtime. In today's fast-paced industry that operates at scale, operations require development and development requires operations. DevOps is, in a way, the continuation of agile principles that, this time, include operations into the mix.

What is DevOps? It is a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing, resilient systems at scale. It is as much a cultural as technological change in the way we deliver software, from requirements all the way to production.

Let's explore technological changes introduced by DevOps that, later on, evolved into DevOps 2.0.

By adding operations into existing (agile) practices and teams, DevOps united previously excluded parts of organizations and taught us that most (if not all) of what we do after committing code to a repository can be automated. However, it failed to introduce a real technological change. With it, we got more or less the same as we had before, but automated. Software architecture stayed the same, but we were able to deliver automatically. Tools remained the same, but were used to their fullest. Processes stayed the same, but with less human involvement.

DevOps 2.0 is a reset. It tries to redefine (almost) everything we do and to deliver the benefits that modern tools and processes make possible. It introduces changes to processes, tools and architecture. It enables continuous deployment at scale and self-healing systems.

In this blog series, I'll focus on tools which, consequently, influence processes and architecture. Or is it the other way around? It's hard to say. Most likely each has an equal impact on the others. Nevertheless, today's focus is tools. Stay tuned.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in the book The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx and so on. We'll go through many practices and even more tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

Blog Categories: Developer Zone
Categories: Companies

Now Streaming on DevOps Radio: Jacob Tomaw Talks About How Orbitz Transformed Software Delivery

Tue, 07/26/2016 - 21:12

Blog co-authored by Sarah Grucza, PAN Communications

DevOps Radio - Interview with Jacob Tomaw, Orbitz

Have you ever booked a trip through the Orbitz website? Orbitz Worldwide, now a part of the Expedia family, is a leading global online travel company. If you’ve ever booked travel through the Orbitz website, you can easily understand how Orbitz Worldwide sells tens of billions of dollars in travel annually. The Orbitz and Expedia brands use software to transform the way consumers around the world plan and purchase travel. The practice of booking travel directly online has disrupted the travel industry, and that industry innovation attracted Jacob Tomaw, principal engineer, to the company in 2006.

Jacob was in search of a company where technology was the business - and he found that in Orbitz. When Jacob joined the Orbitz team, he not only knew how important software was to Orbitz’s business, he quickly saw that there were ways to improve on its software delivery practices. Through a series of project and group transformations, Jacob began to implement agile, continuous delivery (CD) and DevOps practices throughout Orbitz. Since he joined the company, the software delivery teams have achieved impressive results, including reducing release cycles by more than 75 percent, learning to value a team-oriented culture and enhancing the user experience.

DevOps Radio host Andre Pino wanted to learn more about Jacob and find out what it was like navigating through this transformation, so they sat down to talk. You can listen in on Jacob and Andre’s conversation in the latest episode of DevOps Radio.

In this latest DevOps Radio episode, Jacob covers how he got his start in software delivery and his experiences at Orbitz. He then talks through the transformation the software delivery teams at Orbitz went through. You’ll also get a look into the mind of a technology expert; Jacob explores thoughts on the future and on open source software.

Plug in your headphones and tune into the latest episode of DevOps Radio. Available on the CloudBees website and on iTunes. Join the conversation about the episode on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post!

Listen to the podcast. If you still want to learn more about the Orbitz transformation, read the case study or watch the video (below), featuring Jacob and his team.

Categories: Companies

Jenkins World Speaker Highlight: Using Jenkins for Disparate Feedback on GitHub

Tue, 07/26/2016 - 15:35

This is a guest blog, authored by Ben Patterson, engineering manager at edX and a speaker at Jenkins World.

Picking a pear from a basket is straightforward when you can hold it in your hand, feel its weight, perhaps give a gentle squeeze, observe its color and look more closely at any bruises. If the only information we had was a photograph from one angle, we’d have to do some educated guessing.

As developers, we don’t get a photograph; we get a green checkmark or a red x. We use that to decide whether or not we need to switch gears and go back to a pull request we submitted recently. At edX, we take advantage of some Jenkins features that could give us more granularity on GitHub pull requests, and make that decision less of a guessing game.

Multiple contexts reporting back when they’re available

Pull requests on our platform are evaluated from several angles: static code analysis (including linting and security audits), JavaScript unit tests, Python unit tests, acceptance tests and accessibility tests. Using an elixir of plugins, including the GitHub Pull Request Builder plugin, we put more direct feedback into the hands of the contributor so s/he can quickly decide how much digging is going to be needed.

For example, if I made adjustments to my branch and know more requirements are coming, then I may not be as worried about passing the linter; however, if my unit tests have failed, I likely have a problem I need to address regardless of when the new requirements arrive. Timing is important as well. Splitting out the contexts means we can run tests in parallel and report results faster.
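
edX drives this through the GitHub Pull Request Builder plugin, but the underlying mechanism is simply one GitHub commit status per context. The following scripted-pipeline sketch illustrates that idea by posting each suite's result as its own status via the GitHub Statuses API; the repository, credentials ID, build commands and context names are made up:

    // Report each test suite as its own GitHub status context (illustrative only).
    def reportStatus(String sha, String context, String state, String description) {
        withCredentials([string(credentialsId: 'github-token', variable: 'GITHUB_TOKEN')]) {
            sh """
                curl -s -H "Authorization: token \$GITHUB_TOKEN" \\
                     -d '{"state": "${state}", "context": "${context}", "description": "${description}"}' \\
                     https://api.github.com/repos/example-org/example-repo/statuses/${sha}
            """
        }
    }

    node {
        checkout scm
        def sha = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()

        try {
            sh 'make lint'                                    // placeholder for static analysis
            reportStatus(sha, 'ci/lint', 'success', 'Static analysis passed')
        } catch (err) {
            reportStatus(sha, 'ci/lint', 'failure', 'Static analysis failed')
            currentBuild.result = 'FAILURE'
        }

        try {
            sh 'make unit'                                    // placeholder for unit tests
            reportStatus(sha, 'ci/unit', 'success', 'Unit tests passed')
        } catch (err) {
            reportStatus(sha, 'ci/unit', 'failure', 'Unit tests failed')
            currentBuild.result = 'FAILURE'
        }
    }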

Developers can re-run specific contexts

Occasionally the feedback mechanism fails. It is oftentimes a flaky condition in a test or in test setup. (Solving flakiness is a different discussion I’m sidestepping; for the purposes of this blog entry, accept the fact that the system fails.) Engineers are armed with the power to re-run specific contexts, also available through the PR plugin. A developer can say “jenkins run bokchoy” to re-run the acceptance tests, for example. A developer can also re-run everything with “jenkins run all”. These phrases are set through the GitHub Pull Request Builder configuration.

More granular data is easier to find for our Tools team

Splitting the contexts has also given us important data points for our Tools team to help in highlighting things like flaky tests, time to feedback and other metrics that help the org prioritize what’s important. We use this with a log aggregator (in our case, Splunk) to produce valuable reports such as this one.

I could go on! The short answer here is we have an intuitive way of divvying up our tests, not only for optimizing the overall amount of time it takes to get build results, but also to make the experience more user-friendly to developers.

I’ll be presenting more of this concept and expanding on the edX configuration details at Jenkins World in September.

Ben Patterson 
Engineering Manager 
 edX

This is a guest post written by Jenkins World 2016 speaker Ben Patterson. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Backup Jenkins to the Cloud

Tue, 07/19/2016 - 21:54

CloudBees has released a new version of its CloudBees Backup plugin. With this new version, you will be able to back up CloudBees Jenkins Operations Center and CloudBees Jenkins Enterprise instances directly to Amazon S3 and Azure Blob Storage.

Until now, users who wanted to store their backups on one of these cloud services first had to back up to a local filesystem and use some sort of automation to upload the backup to the desired cloud service. With this new version, the process is fully automated and as easy to set up as the other available destinations.

To set up a backup to Amazon S3, select “Amazon S3” as the destination in your backup job. Next, select credentials of type AWS Credentials to log in to S3, or leave them blank to use IAM Role Credentials.

You will also need to choose the AWS Region, S3 Bucket Name and S3 Bucket Folder in which to store the backups. Optionally, you can enable Server Side Encryption for increased security.

If you want to back up to Azure Blob Storage, select Azure Blob Storage as the destination and enter the container name and folder in which to store the backup. The container must already exist in your Azure account.

For the connection, you can either use the Azure instance configuration (if you are running on Azure) or provide an account name and a key for that account by creating a credential of type “Secret Text” and selecting it in the Credentials field.

That’s all. When the job is run, the backup will be created and uploaded to the selected service.

Álvaro Lobato
Senior Software Engineer
 CloudBees

 

Blog Categories: Jenkins, Developer Zone
Categories: Companies

Want to Win a Free Pass to Jenkins World 2016? Here’s How.

Mon, 07/11/2016 - 22:43
NOTE: We have had such a great response thus far that we want to ensure everyone is able to participate. You can now enter the contest until August 26!

Jenkins World is a must-attend event for insight on DevOps and continuous delivery, and this year it will be bigger and better than ever, featuring top-notch speakers, presentations, training and more. If this sounds like your type of event, then today is your lucky day! From July 13 through August 12, we are providing an opportunity for one lucky Jenkins user to win a free pass to the event ($499 value). Additionally, up to 20 runners-up will win 20% off the regular admission price.

How can you win?

Nirmal Mehta, Principal Technologist at Booz Allen Hamilton, holds the key to your free pass. To unlock your first entry into the Jenkins World contest, you need to listen to Nirmal, one of the biggest names in DevOps, talk about Docker, one of the hottest technologies, on DevOps Radio. Check below for a question about the episode and listen in for the answer.

Enter Here

Don’t miss out on this great opportunity to learn from, and network with, some of the brightest minds in business and IT. Enter to win today! 

Win a Free Jenkins World Pass

 

Categories: Companies