Applications are becoming the primary security threat vector. Since applications are constructed from 3rd party components, there continues to be a tremendous amount of industry effort and impetus behind managing open source components effectively. Many different initiatives have expanded their focus to ensure proper governance of components, including:
- Updated specifications like OWASP and PCI
- Industry analysts including Gartner
- Community efforts like the Trusted Software Alliance
And now we can add the Financial Services / Information Sharing and Analysis Center (FS-ISAC) to the list. The FS-ISAC started the Product & Services Committee working group to identify appropriate security control types for third party service and product providers. This effort reflects the fact that the application represents the “new perimeter”. The working group references Gartner research stating that “since enterprises are getting better at defending perimeters, attackers are targeting IT supply chains.” The report continues, “recent breach reports such as Verizon’s Data Breach Investigations Report underscore the vulnerability of the application layer, including third party software. This new perimeter of third party software must be addressed.”
The report addresses three suggested control types that should be implemented based on the new supply chain reality:
- vBSIMM process maturity assessment
- Binary static analysis
- Policy management and enforcement for consumption of open source libraries and components
Sonatype is pleased to be referenced in the FS-ISAC report as a preferred vendor for Control Type 3. The working group contrasted the Sonatype approach with existing vendor solutions in the third control type section:
A new approach in the market is Component Lifecycle Management (CLM), which offers the ability to enforce policies in the development process. For example, if a development team inadvertently downloads obsolete software versions, CLM can apply a method of breaking the build when that library is submitted, enforcing the use of a more current version. CLM informs the developers and security staff which components have risky vulnerabilities and which ones do not. The benefits of this approach include:
- Enabling application architects to control versions of software.
- Accelerating the development process by encouraging the consumption of open source libraries that are resilient.
- Reducing operating costs, since ripping obsolete components out of existing applications is expensive, assuming the older versions can be identified in the first place.
While you can read the entire report here, I thought I’d highlight the primary motivation that FS-ISAC addresses and then provide my take on their key recommendations.
Why is Control Type 3 Needed?
The FS-ISAC explains the importance of the control type by stating:
- “It (Control Type 3) is included as a control because it represents how the supply chain is feeding internal software development processes within financial institutions today.”
- “Open source code is available freely and reviewed by many independent developers, but this does not translate into software components and libraries free from security vulnerabilities.”
- “The more these open source components are shared, the more widespread the vulnerabilities become. Therefore, it is essential to have a control to protect the flow of open source components into the development process.”
Sonatype completely agrees with these statements, as our research shows that the average application now consists of 90% components – that’s not to say that 90% of applications use components – 90% of each application is made up of components!
Key FS-ISAC Recommendations
Here are the key working group recommendations that I teased out from the article, along with some additional considerations:
“… a combination of using controlled internal repositories to provision open source components and blocking the ability to download components directly from the internet is necessary for managing risk.”
- Sonatype’s take: A repository manager that supports security, licensing and architecture policies is the foundation for managing access to trusted components. But you can’t stop with the introduction of a golden repository – you need policies for the release management process that helps manage the build promotion and staging process. And you should extend your governance approach across the entire application lifecycle – for both development and production.
“Financial institutions should consider options in this control type to apply policies to the consumption of open source components and to specify methods for creating and managing an inventory of open source libraries in use within the application portfolio.”
- Sonatype’s take: To keep pace with the volume, variety, complexity and release cadence of components, the policy approach requires automation. And we aren’t talking about an automated approval workflow, we are talking about automating actions that enforce stage-appropriate behavior across the entire software lifecycle. This approach frees your human resources to optimize policies and address exceptions.
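To make “automating actions that enforce stage-appropriate behavior” concrete, here is a minimal sketch of the idea. The rule names, stages, and component fields are hypothetical illustrations for this post, not Sonatype CLM’s actual policy model or API.

```python
# Hypothetical sketch: one rule, different actions per lifecycle stage --
# advisory early in development, enforcing at release time.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str
    known_cves: int   # count of known vulnerabilities (illustrative field)
    license: str

POLICY = {
    "no-known-vulnerabilities": {
        "check": lambda c: c.known_cves == 0,
        "actions": {"develop": "warn", "build": "warn", "release": "fail"},
    },
    "approved-license": {
        "check": lambda c: c.license in {"Apache-2.0", "MIT", "EPL-1.0"},
        "actions": {"develop": "warn", "build": "fail", "release": "fail"},
    },
}

def evaluate(component, stage):
    """Return a list of (rule, action) violations for the given stage."""
    return [
        (rule, spec["actions"][stage])
        for rule, spec in POLICY.items()
        if not spec["check"](component)
    ]

c = Component("struts2-core", "2.3.15", known_cves=3, license="Apache-2.0")
print(evaluate(c, "develop"))  # [('no-known-vulnerabilities', 'warn')]
print(evaluate(c, "release"))  # [('no-known-vulnerabilities', 'fail')]
```

The same rule produces only a warning early in the lifecycle but blocks the release later on – the policy stays constant while the enforcement action adapts to the stage.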
“Firms should also encourage use of mature versions of software that are patched and not yet obsolete by applying policies and enforcing them using the best methods available.”
- Sonatype’s take: To accomplish this effectively, developers and architects need component intelligence that is version specific. They need security, licensing and architecture guidance integrated directly in the tools that they use, and they need actionable guidance, including the ability to migrate to new component versions.
“It is time to apply resiliency controls to the consumption process that will reduce the requirements to fix old versions with vulnerabilities after they have been deployed. Controls should encourage deployment of current versions that have been determined to be resilient.”
- Sonatype’s take: Developers have lacked the information necessary to easily select the right components. This limitation has resulted in downstream issues that elongate the development process. It’s too much to ask the developers to research all of the components and the component dependencies, and if you institute an approval laden process, the developers will bypass the process to make their deadlines. Developers need component intelligence integrated into the IDE so that they can select optimal components from the start – components that meet your security, licensing and architecture policies.
“Providing more information to architects and developers is the responsibility of the information security staff. The information should improve the understanding that policy management applied early in the lifecycle will both cost less effort and speed up time to market in the long run.”
- Sonatype’s take: Numerous studies confirm that fixing flaws late in the development process or in production is extremely costly. Current application security solutions exacerbate this problem because results are delivered late in the process. The developers weed through an extensive list of “possible” vulnerabilities after or during the QA process, which creates resistance and ill will. The FS-ISAC working group has it right – the information security staff has to provide information to the architects and developers, in a way that is easily understood and consumed. Most importantly, the guidance has to drive action, it can’t be limited to a potential list of vulnerabilities.
You can download the entire FS-ISAC report here.
It’s certainly a busy time for open source component usage. Many of you are familiar with research that we have done showing that the average application now consists of 90% open source components. And we continue to see exponential growth in requests from the Central Repository. In fact, there were 8 billion requests in 2012 – and this year looks set to reach 13 billion.
Given these trends, the time seemed right for a series of blog posts that address recent activity in the area of open source governance and security. I’ll cover:
- Impressions from the AppSecUSA Conference
- The latest changes to OWASP and PCI specifications
- Financial Services / Information Sharing and Analysis Center Working Group (FS-ISAC)
Let’s tee those topics up with a recap of a discussion that we just had with Mark Driver, Research VP from Gartner. Mark is well-known in the open source and application development space and we had a brief chat with him about the open source landscape.
Mark recently published research that addresses the state of the nation of open source software. These quotes represent the opportunity and the challenge of using open source from that research:
- “Thousands of OSS solutions are a mouse click away from any employee with an Internet connection; consequently, many OSS assets are invisible to IT management, but are heavily leveraged in many enterprises in numerous scenarios.”
- “Toward this end, OSS requires IT organizations to develop best practices and policies for IT asset management, development, deployment and support.”
In our conversation with Mark, we discussed the following topics:
- In general, there is greater awareness about the role that open source components and frameworks play in application portfolios. This awareness has driven the implementation of policies, but the policies often prove ineffective: only 25% of organizations lack a policy, yet 75% of the policies in place are ineffective. Worse yet, the policies can provide the illusion of safety, which is even more dangerous.
- Cost savings, flexibility and innovation continue to drive open source adoption. Organizations are now especially motivated to cut costs, and they are trying to determine whether open source can be an effective cost lever.
- Open source licensing ramifications are becoming more important as mainstream organizations adopt open source. These organizations tend to be more conservative, so they want to carefully manage the risk associated with open source licenses.
- Organizations that successfully manage open source usage are inclusive and collaborative. They tend to have top-down support and sponsorship, they have participation from IT and the business, they start small and grow organically, they don’t focus solely on identification, and they can effectively demonstrate how their efforts reduce risk and increase open source usage.
These observations mirror what we are hearing from our Nexus and CLM community – including results from our most recent survey.
How do these observations track with your experience? What else do you see happening in the world of open source components?
Component-Capable Release Management is Key to DevOps – Part 4
DevOps conversations are dominated by release management and production deployment. These are the primary topics at the DevOps conferences that we have attended in Atlanta, New York, Vancouver, Portland, Barcelona and London. This concerns me at some level – if DevOps just becomes a fancy word for IT Ops, then the movement will not be that important – but it’s the reality given that DevOps is a new, immature approach. Not only are the conversations primarily about these topics, many of the discussions are tool or technology related – what CI and CD tools are best? What packaging construct should be used? And so on.
So why are the conversations focused on release management and deployment? Given that DevOps is a reaction to Agile – some say that “DevOps completes Agile” – the first thing IT Ops has to do is keep up with agile delivery. That means deploying small and frequent changes in a repeatable, reliable and predictable fashion. It’s a natural and necessary starting point – if DevOps can’t support this capability, then it will fall all over itself when it tries to move to more strategic topics like incorporating security and compliance efforts into the software lifecycle process.
This is not an easy task, since many organizations have large, diverse environments. Given that organizations are trying to deploy more often, approaches that rely on manual intervention just won’t work – so organizations that can automate every single aspect of the release and deployment process can outmaneuver their competitors. This is key – just think of the business advantage an organization has if it can modify its website to react to user behavior, or modify its production systems quickly to introduce new products. The impact can be significant – so if you are looking for budget justification for your DevOps efforts, try relating release and deployment speed to business agility.
So what does this all have to do with Sonatype? Well, Sonatype is all about helping people manage and leverage components effectively, including open source components. We know that the average application is now constructed of 80% or more components, so the release management and deployment process has to take this into account. While components help developers construct applications quickly, if your infrastructure isn’t capable of managing components effectively, you will introduce risk into your applications – security, licensing and quality risk. As you think about assembling the right mix of tools to support your build and release process, including Continuous Integration and Continuous Delivery tools, make sure you factor in components.
Here are a few things to consider:
- Repository Manager Foundation – Start with a repository manager that can help you store and share your binaries effectively. As you scale your efforts, your repository should scale with you – and it should provide the enterprise class features like security, build promotion and staging, etc., that you need to manage all of your components effectively.
- Policy-based Support for Build Promotion and Staging – Instead of automating the approval process for components, use automated policies that apply your security, licensing and architecture standards to the release management process. The automated policies should provide guidance and enforcement (e.g., stop a build from being deployed) so that you can ensure your production systems are rock solid.
- Factor Components into your CI & CD Initiatives – As organizations leverage continuous integration technologies to automate the build and test process, and to extend automation into the deployment realm with Continuous Delivery approaches, they need to think about the role of components. One way to do this is to integrate your component management and governance approach directly into the build and CI technologies. This allows you to apply your policies and enforce action directly in those tools.
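As a rough illustration of policy-based guidance and enforcement in a promotion step, here is a hedged sketch. The CVSS threshold and manifest fields are assumptions for the example, not a prescribed format.

```python
# Sketch of a policy gate run during build promotion: block the promotion
# if any component in the build manifest carries a high-severity issue.
SEVERITY_THRESHOLD = 7.0  # illustrative cutoff for "high severity"

def gate(components):
    """Return the components that violate policy; an empty list means promote."""
    return [c for c in components if c.get("max_cvss", 0.0) >= SEVERITY_THRESHOLD]

build_manifest = [
    {"id": "commons-io:commons-io:2.4", "max_cvss": 0.0},
    {"id": "org.apache.struts:struts2-core:2.3.15", "max_cvss": 9.3},
]

violations = gate(build_manifest)
for c in violations:
    print(f"BLOCKED: {c['id']} (CVSS {c['max_cvss']})")
# In a real CI step, any violation would end with a non-zero exit code,
# failing the job and stopping the deployment.
```

The point is that the gate is data-driven: the same check runs on every build, and the “stop the build” action is automatic rather than a manual approval.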
The good thing about this approach is that you end up incorporating other constituents into the process – you may be thinking, “I need to manage the release process. I need to keep up with the Developers. I need to automate building and deploying my VMs.” By leveraging security, licensing and architecture policies in your build and release management process, you are automatically incorporating the security, legal/compliance and architecture teams into the process. That’s a completely natural fit since DevOps is about driving collaboration and communication between constituents involved in the software lifecycle.
One approach to software that I strongly believe in is taking advantage of the latest product innovations in all new releases. I think it’s important to upgrade to the latest versions of build tools and components as soon as you can. The benefits of these product improvements always outweigh the drawbacks of the regular updates that you need to adapt to. And just like in the devops world, where releasing often makes development easier, upgrading your tools often makes it easier as well. Deciding to stay with the “stable and trusted” components and tools can cause you to fall further and further behind, making the pain of upgrading bigger and bigger. And believe me – the need to upgrade will arise! Just try using Internet Explorer 6 or Windows 95 on a modern computer and you’ll instantly see what I mean. There’s always a cost tradeoff to waiting, and we know that cost well in application security. Listen to this great discussion about the Real Cost of Waiting to Secure Your Applications.
But following the original train of thought through, I think it is time for me to prepare you for another great upgrade to Nexus that’s in the pipeline for you. You might have seen Rich Seddon previewing some of these features and improvements on the October Nexus Live event already; if you missed it, you can still check out the recording (starting at about 28:00). And don’t forget to join us for our November event, where we’ll be talking about Nexus and Chef integration use cases. Rich covered improvements like
- filtering lists in the Nexus user interface
- branding the Nexus header
- new logging configuration and log inspections user interface
which are all updates covered in the Repository Management with Nexus book.
New additions are documented as well. These are just some of the more than 150 issues fixed and implemented for this new release. In addition to these user-facing changes, we have improved the Nexus internals, and you should see significant performance improvements as a result of upgrading. Another of these internal changes was moving all components to JSR-330/Eclipse Sisu instead of Plexus, and we have made sure to update the Nexus Example Plugins for you. We also created a guide you can use if you need to upgrade your own plugins. And as you can imagine, the Nexus REST API and the Java client library have been updated too.
With all this new goodness at your fingertips, hopefully you’re ready to upgrade to Nexus 2.7. There is lots more in the release notes. And you won’t have to wait much longer – it is already running on our production instances, like the Open Source Software Repository Hosting (OSSRH) instance and our own repo, and humming along nicely. Barring any natural catastrophes, you should see Nexus 2.7 available for download before you put that turkey in the oven on Thanksgiving. Unless, of course, you are in Canada like me and already celebrated Thanksgiving a few weeks ago…
Update: And now Nexus 2.7 has arrived and you can download it from the support site as usual.
Well, there is nothing like an updated specification to drive action or interest in a topic. We’re seeing that with the introduction of PCI 3.0. While there are several key updates to the specification, the one I find most interesting reflects the reality of how applications are constructed today – from components. It’s great to see this baked into the latest PCI specification and related specifications like OWASP.
In some ways, the PCI specification already had this covered - PCI 2.0 required that organizations develop and maintain secure systems and applications. Since applications are composed primarily of components, using secure components is the only way to comply with PCI.
The 3.0 specification version makes the component requirement more explicit – starting with basic identification of what you have. Version 3.0 expands the specification by requiring organizations to maintain an inventory of system components as a way to ensure proper compliance coverage.
The 3.0 specification reiterates that current best practices be used as defined by OWASP, SANS, and others. Of particular interest is OWASP A9, which focuses on eliminating vulnerable components. A9 requires that you identify the components in use, monitor public databases for vulnerabilities, and establish security policies that govern component use.
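As one hedged example of the “identify the components in use” requirement, the sketch below parses the output of Maven’s `dependency:list` goal into a simple inventory. The sample lines follow Maven’s usual group:artifact:type:version:scope form; the parsing approach is just one possible implementation.

```python
# Sketch: turn `mvn dependency:list` output into a component inventory,
# one way to start satisfying the A9 "know what you have" requirement.
import re

MAVEN_OUTPUT = """\
[INFO] The following files have been resolved:
[INFO]    commons-io:commons-io:jar:2.4:compile
[INFO]    org.apache.struts:struts2-core:jar:2.3.15:compile
[INFO]    junit:junit:jar:4.11:test
"""

# group : artifact : type : version : scope
DEP = re.compile(r"\[INFO\]\s+([\w.-]+):([\w.-]+):\w+:([\w.-]+):(\w+)")

def inventory(text):
    """Return {(groupId, artifactId): version} for every resolved dependency."""
    return {
        (m[1], m[2]): m[3]
        for m in (DEP.match(line) for line in text.splitlines())
        if m
    }

for (group, artifact), version in inventory(MAVEN_OUTPUT).items():
    print(f"{group}:{artifact} -> {version}")
```

An inventory like this, regenerated on every build, is the raw material you then cross-reference against public vulnerability databases.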
For more information on PCI 3.0 and the OWASP Top 10, check out our resource section. We have a new PCI whitepaper, and an upcoming webinar that addresses how Crosskey uses Sonatype to address PCI compliance.
And here’s a list of recent articles that have been published about PCI:
- The history of the PCI DSS standard: A visual timeline
- 5 things you need to know about new Payment Card Industry (PCI 3.0) standard
- PCI 3.0 special report: Reviewing the state of payment card compliance
- PCI DSS version 3.0: The five most important changes for merchants
- Payment card industry gets updated security standard with new requirements
Let me know if you run across other good resources – and join us for our upcoming webinar on Wednesday, December 4, 2013 3:00PM EST.
Sonatype Nexus can easily be integrated with external systems because all of its functionality is available via REST API calls. Nexus can also be extended by writing plugins that customize it and add further functionality.
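As a small illustration, the sketch below composes a call against the Nexus 2 style REST paths (`service/local/...`). The host, credentials and endpoint here are assumptions for the example (the well-known local-install defaults), so check your own instance’s API documentation before relying on them.

```python
# Sketch: building (not sending) a Nexus 2 REST API request.
import base64

NEXUS_URL = "http://localhost:8081/nexus"  # assumed default local install

def rest_request(path, user="admin", password="admin123"):
    """Return the URL and headers for a Nexus REST call (request not sent here)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "url": f"{NEXUS_URL}/{path}",
        "headers": {"Authorization": f"Basic {token}", "Accept": "application/json"},
    }

# Listing configured repositories -- the kind of call a post-install
# provisioning script (Puppet, Chef, plain shell) would make.
req = rest_request("service/local/repositories")
print(req["url"])  # http://localhost:8081/nexus/service/local/repositories
```

From here, any HTTP client can issue the request, which is exactly what makes Nexus scriptable from provisioning tools.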
In our recent Nexus Live October event we talked about using this REST API to script Nexus configuration after it has been installed. We learned more about the Puppet module that HubSpot has open sourced and provided for you to use as well.
In our upcoming November Nexus Live event we are going to talk about a similar project that offers installation of Nexus via Chef. The main author is Kyle Allan from RiotGames. He is also the author of another open source project that uses the Nexus REST API to implement command line programs for interacting with Nexus.
These three projects are great examples of open source contributions to the Nexus community. The YUM/RPM integration is another example – it ended up becoming part of Nexus itself. The tools from the Nexus Ruby support project could be on the same trajectory. And who knows – if there is enough interest and help from the community, the same might happen to the Nexus APT Plugin.
The Puppet module written by Clemens Escoffier focuses on using Nexus as a component repository and Puppet to retrieve components from it. This can be used to get your deployables from Nexus to your production servers via Puppet.
Benjamin Muschko needed to publish components created by his Gradle build to Nexus and wanted more features than the normal deployment provides, so he created a Gradle Nexus Plugin.
Further features for Nexus itself are available in the Artifact Usage Plugin and the Dependency Management Plugin. Both provide more information about a specific component. The GroupId Management Plugin on the other hand simplifies a security related administration task, while the Repository Cleanup Task fully automates an administration task by implementing a new scheduled task.
When it comes to integrating with external systems the GitLab Token Auth Plugin provides security integration with GitLab. The AWS S3 Publish Plugin on the other hand integrates with a cloud based storage. When it comes to CI servers Jenkins (and potentially Hudson) users should check out the List Nexus Versions Plugin and the Nexus Metadata Plugin. You can also find a poller for the Go CI server.
With all this open source goodness mentioned, don’t forget that Nexus itself, as well as the book Repository Management with Nexus, is open source too. As with any open source project, the quality and activity level of the various projects fluctuate. All these projects showcase different usage models, and you can step in to improve them and adapt them to your own needs.
When you engage in such a task, use these pointers, contact us on the mailing lists or on HipChat, and let us know what you came up with. And of course we would love to have you on the panel of an upcoming Nexus Live event.
OK, I need a “blog post delivery tool chain”, because part 3 in my DevOps series of blog posts is woefully behind my expected delivery date. It’s like a broken development process – I’m missing the oversight and guidance to make sure things stay on track! And to think, I don’t even have to manage collaboration between multiple developers, or deal with IT Ops, etc. – which brings me to the point of this post…
While DevOps is about culture, people and process, technology and automation are necessary to provide repeatability, improve efficiency, and free human resources for high value work. It’s also the thing that technologists tend to focus on – I found it interesting that at the DevOpsDays events I’ve attended in Atlanta and Portland, there was great promise about culture, people and process, but the discussions invariably came back to tools. People wanted to discuss what tools to use and what tools not to use. It’s like they wanted to move the discussion to higher ground, but they couldn’t resist talking about technology. This reflects that DevOps is new and immature, and people are focused on optimizing the release process through automation, which means tools and technology. So while there is an appreciation for the non-technical topics, the reality is that people are still making technology and tool choices at this point.
It makes sense, too – automation is critical given the scope of what organizations have to deal with. They have lots of applications, lots of new functionality they want to deliver, tons of servers to manage, and lots of deployment environments to choose from. And finally, they have lots of components to manage. They simply can’t manage the volume, complexity, diversity and release cadence of components (many open source) on top of all these other factors manually. DevOps needs to provide and support an automated, efficient tool chain – this tool chain will form the foundation for accelerating application delivery.
While part of the tool discussion will focus on what tools to use – for example, what Continuous Delivery and Continuous Integration tool you should select, it’s important to step back and to think about the challenge more strategically. Even if you plan to build the tool chain and involve different constituents in the process in a phased manner, think about the critical design factors up front – it’s analogous to taking an enterprise architecture approach to building applications and infrastructure. Since I want to get this blog post published soon I can’t address every strategic design aspect, but I’ll throw out some thoughts that can serve as a starting point.
- Lifecycle Support: Think about a comprehensive tool chain that supports all aspects of the software lifecycle. Think about how your software development process works today, or better yet, how it should be optimized for the future. Then identify support for each aspect of the lifecycle. Since applications are dominated by components, make sure you factor in how applications are constructed today. Visibility, guidance and enforcement are needed throughout the entire application lifecycle; they shouldn’t be introduced as a single gating factor, especially if that gate is at the end of the process. Think about the information and process that you can implement in an automated way to shift activity to the left, which will speed development and reduce downstream rework.
- Personalized Support: Design your toolchain so that it supports the needs of the individual constituents directly in the tools that they use. This is critical for multiple reasons – if the support is not designed for each constituent, it will slow them down, and they’ll look for work-arounds, which will force activity outside of your managed application delivery tool chain. It’s also key for effective collaboration. At the DevOps show in Portland, Ben Hughes (@benjammingh) provided an example of this when he gave an ignite talk on how security needs to “stop sucking”. He talked about how security needs to stop being so negative, and that security needs to understand what IT Ops and Development does. I think this illustrates my personalization point – if security can provide information to the developers in a language that they understand, delivered when they need it and where they need it, security will be more effective.
- Heterogeneous Support: Heterogeneity is a reality in mid-to-large size organizations. From programming languages, to databases, to operating systems, to deployment environments… even organizations that say they standardize on one particular aspect typically have exceptions. So while I’m not implying that you should provide multiple options for every tool in the chain, your tool chain should address heterogeneity. And along these lines, your toolchain should be flexible – while it can’t be completely plug-and-play given the integration necessary between the tools, if you can take a service based approach, you’ll have more flexibility if you need to switch a tool out at some point.
Hopefully this will help you identify the design factors you should consider as you think about your automation approach. Let me know what else we should add to the list!
And for those of you in London, join us at the DevOpsDays on 11 & 12 of November 2013.
Last week I was a host of October Nexus Live and attended DevOpsDays Vancouver. In both events Sonatype Nexus and CLM were present as part of a devops pipeline and I got to chat with many people that use Nexus or would probably get a lot of benefits from doing so…
In the Nexus Live event, John Nagro and Tom McLaughlin from HubSpot detailed how they are using Nexus as a repository for their development and release components. They found that they need to be able to quickly create another virtual machine as part of their build infrastructure to react to changes in datacenter locations and other parameters. To facilitate that, they have created a Puppet module that can install Nexus. The module is available on Puppet Forge, ready for you to use. In addition, the source is available on GitHub, and John and Tom are looking forward to your contributions. They shared a lot of other interesting details about their Nexus use case and the scale they are working at. Go and check out the recording for more information.
Kyle Allan from RiotGames also hung out with us and reminded us of the Chef cookbook for Nexus he is maintaining, which is also available on GitHub. Both the Puppet module and the Chef cookbook are used for installing and configuring Nexus as part of a devops pipeline.
When it comes to configuring Nexus and generally interacting with it from the outside in an automated fashion, it is best to use the Nexus REST API either directly in your script or the Java API wrapper. A great collection of examples of using the REST API is the nexus_cli Ruby gem also available from GitHub and authored by RiotGames.
The REST API as well as plain HTTP based downloads are also what comes in handy for the more common Nexus use cases that support a devops scenario – as a repository. Nexus is certainly great to have in your build pipeline just for the mere proxying of components from the Central Repository and others and the resulting simplification and tremendous performance gains. But the best benefits occur when you get your build to deploy the production components into Nexus repositories. Your configuration management tool like Puppet, Chef or Ansible can then pick it up from there via the REST API. Alternatively you could use a YUM repository in Nexus, if your production platform uses RPMs.
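To illustrate the pickup step, here is a sketch of the download URL a Puppet or Chef recipe might fetch for a release artifact. The redirect service path follows the Nexus 2 convention, and the host and coordinates are made up for the example.

```python
# Sketch: compose the artifact-redirect URL that a configuration
# management tool would fetch to pull a deployable out of Nexus.
from urllib.parse import urlencode

NEXUS_URL = "http://nexus.example.com/nexus"  # hypothetical host

def artifact_url(group, artifact, version, packaging="war", repo="releases"):
    """URL that redirects to the matching artifact in the given repository."""
    query = urlencode({"r": repo, "g": group, "a": artifact,
                       "v": version, "e": packaging})
    return f"{NEXUS_URL}/service/local/artifact/maven/redirect?{query}"

print(artifact_url("com.example", "webshop", "1.2.0"))
```

A Puppet `exec`/`archive` resource or Chef `remote_file` resource pointed at a URL like this keeps the build output in Nexus as the single source of truth for what gets deployed.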
Both of these scenarios were quite common with the various people I met at DevOpsDays, Vancouver. I presented an ignite talk in which I argued that the devops pipeline should take security and license characteristics of all the jars and other components used in your application into consideration when pushing to production. Just like a failing integration test stops deployment, a known security issue in one of your dependencies or a problematic license should stop deployment too. In the demo session I showed the attendees how the data from Sonatype CLM exposed in your Eclipse IDE, your Jenkins CI server and your Nexus staging setup can greatly help you with this and how easy it is to configure your policies for your components and adapt them to the specific parts of your devops pipeline.
There was a lot of agreement visible in the audience, and other talks about web application security and hackers led me to believe that what you know about component vulnerabilities should shape how you deploy applications in a devops fashion.
CONTROL, ENFORCEMENT, APPROVALS, POLICIES
These concepts run counter to fast, agile-based development. These words make developers cringe; they are “four-letter words”. Could it be that the problem with these concepts is not what they are trying to accomplish, but how they are implemented? They are intended to ensure that the applications developers create are trusted, that they meet and exceed the expectations of the user, and that they drive the business forward without placing the business at risk. Who can argue with that? Sure, developers want to build things fast, but they also want to deliver applications with high quality. So the intent is good, but the implementation is bad.
Before we discuss the problem with policies, let’s look at some data. Our survey of 3,500 developers, architects and development managers shows that organizations are exposed:
- 66% of organizations lack meaningful controls over their open source usage.
- Even if they have a policy, only 38% of organizations have control over what components are used in their applications.
- Open source policies are not only unenforced, they get in the way – 31% don’t have enforcement / 28% state that it slows them down / 27% state that they find out problems too late.
So what is the problem with policies?
- They are manual – many organizations start with manual processes, but they can’t keep pace with today’s development approach.
- They are static – since policies aren’t easy to change, they tend to remain static, they don’t react to change.
- They are inflexible – if enforced, the same enforcement action is taken across the development lifecycle, which doesn’t provide the flexibility needed for stage appropriate action.
- They are document-centric – many policies are documented in written form and components are tracked in spreadsheets.
- They are generic – since policies are difficult to manage, generic policies are often created that don’t account for specific department or application needs.
- They are approval-laden – many organizations that manage open source component usage require developers to start an approval process that requires approval from the legal/compliance, security and architecture teams.
- The implementation is reactive – most policies are punitive in nature; they don’t provide guidance, they are simply used as a checkpoint late in the development lifecycle.
These implementation problems are exacerbated by how applications are constructed today:
- Applications are constructed of components – the typical application is comprised of 80% or more open source components.
- Component volume, diversity, complexity & release cadence make it impossible for manual, or approval-laden policy approaches to keep up.
- Organizations not only have a large number of components to manage, they have a large number of applications to manage – both existing production applications as well as applications that are currently in development.
- Organizations need to account for the varying risk levels of different departments or applications. Not all applications or usage is alike – some application scenarios require more vigilance. Not all applications are deployed in the same fashion – some deployment approaches allow more open source license flexibility.
- Agile-based development or fast waterfall delivery cycles – development cycles are fast, imagine asking a developer to start an approval process that will take weeks, when they have a deadline in several days!
- Security, Legal/Compliance, Architecture, Dev, IT Ops silos – due to specialization, many organizations have silos of expertise. Unless the policies can accommodate all of the different constituents and facilitate effective collaboration, they will fail.
One Potential Approach – Automated Workflow
Ok, so now that you understand the problem, what can you do about it? One potential approach that some organizations and vendors have taken is to take the manual processes and apply workflow. This approach is born of the BPM world – with the old thought that if we use automated workflow, the approval process will be streamlined and developers won’t be bogged down with manual work. Sounds good? Well, hold on a minute, because this approach has the following limitations:
- Automated workflow is still primarily linear – security, legal/compliance and architecture teams still need to address every component.
- It’s still reactive – the approval process starts when a developer identifies a new component they want to use.
- Since it is reactive, policies can’t be used to guide component selection.
- It doesn’t provide enforcement throughout the lifecycle (no way to know if developer bypasses approved list of components).
- It’s designed to automate the approval process, it doesn’t help manage the introduction of new component versions.
- It doesn’t identify and notify appropriate constituents of newly discovered threats.
So the approach sounds reasonable – even if you cut the approval process in half, you still can’t keep up with the volume, variety, complexity and release cadence of components. You still can’t provide the up-front flexibility that your developers need to try out new components. You still don’t have the ability to guide and enforce action throughout the lifecycle. You still don’t have policies that will help manage your production environment.
What’s the side effect of policy approaches that don’t work? Well, they either slow development, or developers ignore the policies, or developers settle for sub-optimal components because they are on the approved list. Each of these outcomes is a problem. If development is slowed, the business doesn’t get what it needs, and finger pointing ensues. If developers bypass the policies, organizations are put at risk because components are not properly vetted. And if developers follow the policy and use outdated components that were previously approved, then they are constructing applications with sub-optimal components. There has to be a better way!
The Better Approach – Automated Policies
Luckily there is a better way, and it’s actually not as difficult as it may sound. You can leverage the work that you are already doing with your repository manager and extend governance with automated policies. That’s right, instead of automating the workflow, you leverage automated policies that keep up with today’s agile, component-based development efforts. This approach provides the following benefits:
- Frees humans to focus on higher value tasks (policy definition and exception management).
- Provides up-front guidance for optimal component selection vs. late in the process “scan and scold” approach.
- Automates guidance and enforcement throughout the lifecycle directly in the tools that developers use.
- Accommodates risk profiles for different organization / application requirements.
- Provides the up-front flexibility developers need while ensuring production deployments meet your standards.
- Frees developers from hassle of initiating approval process – they just have to manage exceptions.
- Policies drive proactive notification and action for newly discovered vulnerabilities (continuous trust for production apps)
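To make the idea of stage-appropriate, automated enforcement concrete, here is a minimal sketch of how a policy engine could map a component’s risk to different actions at different lifecycle stages. This is purely illustrative; the names, threshold and data shapes are hypothetical, not the Sonatype CLM API:

```python
# Illustrative sketch of automated, stage-appropriate policy evaluation.
# All names and values here are hypothetical -- not an actual CLM interface.

SEVERITY_THRESHOLD = 7.0  # assumed cutoff for "risky" vulnerabilities

# The same policy triggers different actions per stage: guidance early
# in the IDE and CI, hard enforcement at the release gate.
ACTIONS_BY_STAGE = {
    "develop": "warn",
    "build": "warn",
    "release": "fail",
}

def evaluate(component, stage):
    """Return the action to take for a component at a given lifecycle stage."""
    worst = max(
        (v["severity"] for v in component["vulnerabilities"]), default=0.0
    )
    if worst >= SEVERITY_THRESHOLD:
        return ACTIONS_BY_STAGE[stage]
    return "allow"

# A component with a severe known vulnerability is flagged early but only
# blocks the pipeline at release time.
risky = {"name": "commons-foo:1.0", "vulnerabilities": [{"severity": 9.3}]}
print(evaluate(risky, "develop"))  # warn
print(evaluate(risky, "release"))  # fail
```

The point of the sketch is the shape of the solution: humans define the policy once, and the pipeline applies it automatically at every stage, so developers only deal with exceptions rather than approval queues.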
Our Component Lifecycle Management approach leverages automated policies so that you can keep up with the volume of components while providing the guidance and flexibility your developers need. The end result: the applications you assemble are trusted and remain trusted over time.
We also realize that there isn’t a single correct way to expand your governance approach. Sonatype has designed CLM so that you can support your most critical needs first and expand usage over time. It could be that you want to start by assessing the risk of applications in production. It could be that you want to start by using policies that will manage your release process. Or it could be that you want to use policies to provide guidance to your developers early in the development process. Sonatype supports each of these approaches and more. For more information, check out the CLM product.
We have been participating in the devopsdays events by presenting an ignite talk on how DevOps needs to be aligned with how applications are constructed today – with open source components. The ignite presentation style is really interesting – you have 5 minutes to present 20 slides that advance automatically every 15 seconds. I started with a tightrope analogy and compared it to the pressure that DevOps faces based on the initial 10 deploys per day concept.
I covered the following topics:
- Components are the dominant pattern for constructing applications.
- Components drive efficiency but introduce risk if not managed properly.
- If DevOps doesn’t account for how applications are constructed, DevOps will fail.
- If the release management process doesn’t weed out flawed components, DevOps metrics like Defects to Production, MTTR, MTTF will deteriorate / not to mention the business will be at risk!
- The release management process should include support for automated security, licensing and architecture policies that guide and enforce action.
- Guidance should be integrated throughout the entire DevOps toolchain (IDE, CI, etc.).
- A sound governance approach shifts activity to the left - vulnerability discovery, fixes, etc.
- Start by identifying an accurate inventory, use policies to identify at-risk applications, and don’t just identify – remediate.
George Lawton wrote a summary of the ignite presentation on ServiceVirtualization.com.
There were many other interesting presentations, and the Open Space concept fostered many timely discussions. Here are my observations from the conference held in Atlanta:
- The conference was dominated by IT Ops discussion. Although this was not surprising given DevOps efforts tend to be driven more by IT Ops than Dev, I hope that DevOps doesn’t simply turn into the next generation IT Ops. While initial efforts are focused on the release management process, for DevOps to have a significant impact, it needs equal participation from all constituencies – Dev, QA, IT Ops to start – then expanding to include Security, Compliance, etc.
- Many conversations turn to automation and tooling – even after the initial presentation by Tj Randall, where he noted that DevOps is more than technology, most of the conversations were dominated by what tools to use, what technologies to use, etc. While this is natural given our technology bent, the DevOps impact will be far greater if it is equally focused on process, people and culture.
- While my focus has been on the disconnect between Dev and IT Ops, and Dev and Security, I was reminded that the same disconnect can occur between IT Ops and the network team. Just as the Dev organization sees IT Ops as intractable and risk averse, IT Ops sees the networking team in the same light. It highlights the need to incorporate all constituents into the DevOps process.
- While there are multiple ways to drive DevOps, success is more likely to occur when you have bottom up support. Jim Hirschauer talked about his experience and approach with top down (executive led) or bottom up (passionate supporters), and stated that bottom up support is ultimately needed. He talked about achieving success in small chunks, “story after story” and discussed using lunch and learns, social media, staff meetings, company intranets, etc., to constantly share and reinforce the DevOps message.
- Given the heat around DevOps, organizations are struggling to hire DevOps expertise. Mark White spoke about how we can address this shortage by training existing System Administrators. He explained that we have moved from a culture of being defined by what we knew (Solaris, OpenView, etc.) to understanding how to do things. He said that System Administrators have the background, critical thinking skills, etc., they just need to be trained in the new approaches – Ruby, Python, Puppet, Chef, etc.
- Brian Johnson spoke about ITIL and argued that DevOps was invented by ITIL. I got the impression that he was making a case for ITIL to remain relevant. Although the audience didn’t react, I’m not sure that ITIL has the flexibility to be a significant force in agile, DevOps scenarios. He did make some interesting comments on how DevOps should avoid some of the pitfalls that befell ITIL – things like “don’t tell everyone that DevOps solves all problems”, “don’t create a universe of certified experts”, etc.