Part 1 of 3 in a Blog Series: ‘How Your Software is Like a Car’
Part 1: It’s Just the Way Software is Made
Today, software runs the things that run our world. In fact, I'm starting to see pundits talk not just about the Internet of Things, but about the Internet of Everything. With software so deeply embedded in every aspect of our lives, the companies running that software are accountable for protecting the consumers using it. Indeed, it is just a matter of time before software liability becomes a reality (but that is a topic for another day).
Just like automobile manufacturers, software “manufacturers” need to apply supply chain management principles for both efficiency and quality. They need to be prepared to conduct a rapid and comprehensive “recall” when a defect is found. And today’s modern development practices make this, well, challenging to say the least.
Bear with me a moment, as I take you through a quick history of Toyota’s supply chain innovations … then I promise to bring this back to your software supply chain.
Toyota Transforms and Outperforms (Laying Agile Foundations)
In 1926, Sakichi Toyoda founded Toyoda Automatic Loom Works. From the start, he obsessed over efficiency and automation. He invented and ran the most advanced looms in the world, delivering dramatic improvements in quality and a 20-fold increase in productivity. Perfection and efficiency were so ingrained in his production processes that, for example, his looms stopped automatically whenever a thread broke.
When Sakichi’s son, Kiichiro, decided to move from textiles to auto manufacturing, the apple did not fall far from the tree. Kiichiro set about optimizing everything conceivable in the production of automobiles. His production innovations, eventually called the Toyota Production System (TPS), gave rise to Lean Manufacturing and Supply Chain Management principles.
Today, the effect of these principles on Toyota's efficiency is remarkable. Company-wide, Toyota has a total of 226 suppliers while GM has more than 5,000. Toyota produces only 27% of the content of its vehicles in-house while GM produces more than 54% of its own. In other words, GM has more than twenty times the suppliers, yet still builds twice as much of each vehicle itself. The result? A Chevy Volt sells for nearly double the price of the Toyota Prius, while the Prius outsells the Volt nearly fifteen to one.
The First Wave: Toyota’s Principles Drive the Innovations in Agile
Toyota’s principles not only improved auto manufacturing, but also extended to many other industries including software development. As early as 2000, Fujitsu Software Technologies — desperate to improve productivity and overcome IT budget deflation in the post-bubble economy — decided to experiment with applying TPS Lean Manufacturing to software development. This effort led to a wave of innovation in agile software development. A success that, in hindsight, is not at all surprising.
The Second Wave: Agile Meets Component-Based Development
Agile methods were based on iterative and incremental development, embracing Toyota's lean manufacturing principles. But Fujitsu did not do much with Toyota's supply chain management innovations: sourcing reliable, thoroughly tested "parts" that serve your people and processes. This is where another transformational change in the software development ecosystem is just beginning to come into play: the use of open source and the embrace of component-based software development. That is, where agile software development must meet supply chain management.
Today, 90% of a typical application is composed of open source and third-party components. The open source community is the dominant supplier of software building blocks, and the components it develops feed virtually all software development "supply chains". Software development organizations source these components within the supply chain, usually from public repositories.
To give you a sense of the scale of today's software "manufacturing" supply chains: the largest source of Java components, known as the Central Repository, logged 13 billion downloads last year alone, more than 35 million components every day. And that dramatically understates real usage, because more than a quarter of those download requests came from local component repositories, such as Nexus, that are in turn accessed by whole teams of developers.
Today’s reality: software assembly (together with agile) is just the way software is made.
In the next part of this blog series, we’ll take a drive down the software supply chain to help you understand where your software has really come from.
It turns out, our timing could not have been better…
On April 1st, we launched our annual open source development survey. The five minute survey asks participants several questions about their open source security policies, practices, and experiences. Questions include:
- Have you ever banned an open source component?
- Do you have an open source policy and does it address security vulnerabilities?
- Do you track changes in vulnerabilities in production apps?
- Are applications developed with open source just as secure as COTS applications?
As bad fortune would have it (or perhaps it was luck), the Heartbleed bug was announced on April 7th and the notice went viral almost immediately. During that first week of April (pre-Heartbleed), we had over 1,500 participants in the survey. Post-Heartbleed, we have had another 1,100+ participate.
This means we have perhaps the best and broadest stats on the state of open source application security at the apex of Heartbleed awareness. While it started as the “open source development survey”, it has quickly turned into the industry’s largest, most current “achy breaky Heartbleed survey”.
The survey runs through the end of April, so there is still time to take it here.
What have we learned thus far? Here are some of the preliminary results:
- 64% of organizations don't actively monitor open source components for changes in vulnerability data (That's right, these companies would not be watching for the next Heartbleed-, Struts-, or Bouncy Castle-style vulnerability, perhaps waiting for word of mouth to reach them)
- 52% of organizations do not keep a record of all open source components used in production applications (For the next Heartbleed-like vulnerability, how would these companies be able to ascertain if they had ever used the component somewhere?)
- The top 3 challenges of open source policies named were: (1) No enforcement / workarounds common, (2) Does not address security vulnerabilities, (3) Not clear what’s expected of us (Does your organization face similar challenges with open source policies?)
You might be asking yourself whether there were any notable differences in answers following the April 7th announcement. To our surprise, a few responses have seen a post-Heartbleed bump, while responses to other security-related questions barely moved the needle.
If you would like to assess how your organization is faring compared to 2,600+ peers in the midst of the Heartbleed announcement, please take our survey, and invite others to participate as well. All participants will receive a copy of the survey results shortly after the survey closes.
THE MOST IMPORTANT POINT: Take the survey, get the results, and then start a conversation, spark a debate, or ask whether your organization should be considering new actions. As with all surveys, it is not the stats that matter, it's what you do with them.
As the Heartbleed bug wreaked havoc on the internet over the past few days, we at Sonatype began thinking about the lessons learned from this recent scare and how, collectively, we can develop a process for mitigating the next major exposure.
Was this OpenSSL vulnerability an oversight by system administrators installing unknown software?
The simple answer is no. OpenSSL is the de facto SSL implementation used on most internet servers around the world. This is not an untested, unverified component that slipped past security audits.
A critical question after incidents such as this is: “Is the vulnerable version of OpenSSL still accessible and available for download, whether in a proxy repository or on a public download site?”
This isn't as far-fetched as it initially sounds. Let's take a look at other components that have had well-publicized vulnerabilities:1
- In 2013, over 4,000 organizations downloaded (often repeatedly) a known vulnerable version of Bouncy Castle that had been fixed for nearly 5 years. (CVE-2007-6721)
- In December 2013, nearly 7,000 organizations downloaded a version of Apache HttpClient with broken SSL validation, more than one year after the alert. (CVE-2012-5783)
- Over the last 12 months, more than 6,200 organizations have downloaded affected versions of Struts, leaving them potentially exposed to its known vulnerabilities.
Now is the right time to evaluate a process for mitigating these types of incidents.
The proposed approach is a three-step process that becomes a part of the software development life cycle, without slowing it down:
- Have VISIBILITY of the components used in your software
- Use AUTOMATION of policy to eradicate known vulnerable components
- Incorporate MONITORING for vulnerabilities that enables rapid remediation
STEP ONE > Use Automation to Understand What is in Your Software
The process of manually keeping track of components in your application is as outdated as a rotary phone or an Underwood typewriter. Automatic inventory and maintenance of a "bill of materials" for managing components is a mandatory part of any modern software development life cycle. There are over 400,000 open source components in the Central Repository, with 13,000,000,000 downloads (yes, that is "billion") a year. That works out to roughly 1.5 million components downloaded every hour of every day. It is impossible to manage this kind of usage without automation.
An automated component management system creates a dynamic inventory of components within your applications and monitors the integrity of those components over time.
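To make the idea concrete, here is a minimal sketch of what such an inventory might look like under the hood, assuming Maven-style "groupId:artifactId:version" coordinates. The coordinate strings are illustrative examples, not drawn from any real build:

```python
# Sketch: build a "bill of materials" from Maven-style coordinates.
# The dependency list below is illustrative, not a real build.

def parse_coordinate(coord):
    """Split a 'groupId:artifactId:version' string into its parts."""
    group_id, artifact_id, version = coord.split(":")
    return {"group": group_id, "artifact": artifact_id, "version": version}

def build_bom(coordinates):
    """Return a bill of materials keyed by 'group:artifact'."""
    bom = {}
    for coord in coordinates:
        part = parse_coordinate(coord)
        bom[f"{part['group']}:{part['artifact']}"] = part["version"]
    return bom

deps = [
    "org.apache.httpcomponents:httpclient:4.2.1",
    "org.bouncycastle:bcprov-jdk15:1.38",
]
print(build_bom(deps))
```

A real component management system would derive these coordinates automatically from the build itself (for example, from a Maven dependency tree) rather than from a hand-written list, and would refresh the inventory on every build.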
STEP TWO > Stop Using Vulnerable Components
The Bouncy Castle, Struts and HttpClient examples demonstrate that we, as an industry, are a long way from awareness and action on this problem. OWASP recently added a new entry to its "Top 10" with the directive to avoid "using components with known vulnerabilities".
We, as developers, have a responsibility not to use vulnerable software. However, there has to be a process built into the development environment that immediately exposes the security level of components as they are integrated into an application. To see real change in the security of component usage, it has to be significantly simpler for developers. Component vulnerability information has to be integrated into the tools developers use today (and throughout the software lifecycle), and developers have to replace flawed components BEFORE the application is in production. Jim Routh, CISO of Aetna, has likened this to using spellcheck in Word. My question is: "Why isn't it that easy?"
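To make the spellcheck analogy concrete, here is a minimal sketch of such a build-time gate. The advisory set and component names are hypothetical, standing in for whatever vulnerability feed a real tool would consume:

```python
# Sketch: a "spellcheck"-style gate that flags known-vulnerable components
# before a build proceeds. The advisory data here is hypothetical.

KNOWN_VULNERABLE = {
    # ("group:artifact", version) pairs flagged by an advisory feed
    ("org.apache.httpcomponents:httpclient", "4.2.1"),
    ("org.bouncycastle:bcprov-jdk15", "1.38"),
}

def check_components(components):
    """Return the subset of components that match a known advisory."""
    return [c for c in components if c in KNOWN_VULNERABLE]

build_deps = [
    ("org.apache.httpcomponents:httpclient", "4.2.1"),
    ("com.example:safe-lib", "2.0"),  # hypothetical, unflagged component
]
flagged = check_components(build_deps)
if flagged:
    print("Build blocked; replace these components first:", flagged)
```

The point of the analogy is that the check runs where the developer already works, at the moment the component is added, rather than in a security review weeks later.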
STEP THREE > Know What and Where New Incidents Affect You
Because software ages like milk, not wine, it becomes less reliable over time. Eliminating the use of components that have known vulnerabilities is a must, but ensuring you have a component inventory that is monitored for new vulnerabilities is the only way to keep your software secure over time. Unfortunately, most organizations struggle to keep an accurate inventory of their components, including the tens or hundreds of component dependencies in each application. So when a new vulnerability is announced, and resolved in a new component release, they don't even know if they are affected. While golden repositories and open source review boards sound good in theory, the math doesn't work: the sheer volume and variety of open source components flowing into the typical software supply chain is enormous. And they don't help once an application goes into production.
THE BOTTOM LINE > It is time to get rid of this entire “optional” attack surface.
It’s avoidable risk.
By allowing components with known vulnerabilities to continue to circulate and live on in our new applications, we only extend our attack surface. It is our responsibility as practitioners in the security industry to educate developers and make it easy for them to create safe, secure applications, while giving them the tools needed to deliver on-time, on-budget projects.
Using this three-step process as a framework to identify, eliminate and manage future incidents is a huge step in the direction of creating secure software.
The Heartbleed bug is a single instance of a vulnerability that had worldwide impact. Let's use it as an incentive to update the security industry's vision toward a more secure future.
1 These figures are based on Sonatype’s analysis of requests made to the Central Repository (aka Maven Central) during the timeframes noted.
Once upon a time, there was a great battle between speed and security. Development wanted to go fast. But, security wanted to slow down and be safe.
“We must protect our gilded apps”, cried the application security team.
“Speed is cherished by our people”, declared the development team.
For years, they endured the pain of testing late in the lifecycle, sorting through reams of false positive reports, and dealing with the added cost of pushing bad software out the door. They knew there had to be a better way…
And then came, The DevOps Revolution. The DevOps team had an answer:
“Let’s bring Application Security and Development closer together — and shift their focus further to the left”.
The DevOps team knew that by introducing awareness of security vulnerabilities and policies early in and across the software development lifecycle — without creating a time-consuming tax on development — that both teams could win. The shift was made, and they lived happily ever after.
Want to learn more about DevOps and AppSec?
At the RSA Conference 2014, we gathered some of the top DevOps experts and influencers at an evening called "Wining Not Whining" and asked them: "Why is application security so important to the DevOps revolution?"
Share these voices and yours with others in our global DevOps community. Find the ongoing conversation on Twitter using the hashtag #DevOps. The full transcript is included below.
“In some respects, DevOps represents the last best hope for security. We’re never gonna be successful bolting security on after the fact. DevOps gives us the opportunity to build security holistically in to the development and operations process, and that’s the only way we’re ever gonna hope to be successful.” — Alan Shimel, DevOps.com
“When an organization is using DevOps principles, they can do deploys of hundreds, even thousands of deploys per day. On the one hand you can view that as threatening to information security as a profession, but my colleagues and I, we all believe that this is the best opportunity for information security to become relevant and integrate ourselves into the daily work of development and operations. So I urge you to seek out your DevOps kindred spirits in dev and ops and be part of a team that helps the organization win.” — Gene Kim, The Phoenix Project
“What I’ve observed, ’cause I’ve worked in just about all parts of the development lifecycle phase all the way through operations, is that usually what happens is the development organization throws a product over the wall. The operations folks are left to their own devices to have to solve whatever anomalies, vulnerabilities, and defects ended up in the software before it went into operations. And what we found in our research and what I’ve found in my own experience is if you can address security issues as early as the requirements phase of the software development lifecycle — and if you can address vulnerabilities and defects in the software while the software is going through requirements along with design and architecture development and testing — you can actually address a fairly significant percentage of the types of security issues that show up in operations today.” — Julia Allen, Carnegie Mellon’s CERT
“The reason why DevOps is so important for security is because security is the kind of thing that needs to be baked into a product, just like quality, just like stability, just like availability. These are all features that need to be invested in, and the best way to do that is to pull them forward in the lifecycle and make them equal citizens to the business features, the rest of the features that the business runs on.” — Damon Edwards, DTO Solutions
“Why is DevOps and security so important? First, DevOps is really important ’cause it’s changing the way we build software. And part of how we build software has to be including security, so together it’s just a natural fit on how to make systems and software and our jobs better.” — Nick Galbreath, Signal Sciences
“It’s very important for DevOps and security to work together to ensure consistently secure software is developed as quickly as possible and as error-free as possible. Without the proper cooperation, you’ll be paying for it later.” — Andrew Wild, Qualys
“You have a major vulnerability that is turning into a security incident in operations, and it’s eating operation’s lunch. It’s causing servers to go down, services to not be available, major asset issues, and you find out that it’s caused due to a systemic design flaw that could have been caught upstream. As a result of that, and these are just kind of notional numbers, you reduce your cost by half; you reduce your development time by half; and you have a much more robust product going into operations.” – Julia Allen, Carnegie Mellon’s CERT
Code snippet scanning is a common question we get from prospects. We typically try to dig into why a prospect actually thinks they need snippet matching, and we believe this demand is often misinformed. To start a conversation on this topic, I've shared my perspective here so you have a complete picture of the risk and cost of code snippet scanning.
Prospect Question: Is there an inexpensive option for code snippet scans of source code that we could use in conjunction with Component Lifecycle Management?
I believe people think they need snippet matching because it actually was common in C/C++, so they assume it happens frequently in modern languages (it doesn't), and because vendors have been successful at raising awareness of this problem, which is what vendors are good at. It's like going to a surgeon and asking what to do: of course he's going to say you need the operation. That's what he does.
While it is true that developers could copy code around, in a component-based language like Java (and every language since), the reality is they don't. I can't recall any well-known, high-profile lawsuits involving snippets; they involved wholesale reuse of components, frameworks, or operating systems. As an example, in 2013 Fantec was taken to court because the firmware of its media player included the iptables software, which is licensed under the GPLv2. This wasn't source code cut and paste: they included the entire iptables application in the Linux-based firmware.
In Mark Radcliffe's list of the "Top Ten FOSS Legal Developments" of 2012, Item 2 states: "A separate but related case also involved the Android operating system. Oracle sued Google for the alleged infringement of Oracle's copyrights in the Java software (which it had acquired from Sun Microsystems, Inc.) … However, at the end of May, Judge Alsup issued a decision finding that the Java APIs were not protectable under copyright law."
Continuing with the case analysis, Radcliffe states in Item 3: "The case involved the copying of the scripts and certain functions of the SAS analytical software … The court found that such functions and programming language were not protected under the EU Directive on Protection of Computer Programs."
These examples reflect an accurate risk vs reward calculation: Do I need snippet matching if people aren’t really doing it and/or it’s not yet proven to be a real-world risk?
In addition to the real world risk assessment, there needs to be consideration of the cost of actually performing detailed, line by line analysis of source. This level of analysis is expensive both in terms of time and compute resources, but also generally leads to indeterminate results that require human analysis. The end result is that it can’t be done fast enough to be fully integrated into the development lifecycle and it’s not precise enough to program actionable results against.
To be clear, I'm not condoning copyright infringement. Stealing someone else's work is wrong, plain and simple. Nor am I saying that scanning source doesn't have its place. OpenLogic and Palamida have built a solid following for a reason. Dave McLoughlin from OpenLogic lays out a good case for scanning in his presentation "Understanding the Value of Scanning for Open Source Software".
If you absolutely, positively must ensure the provenance of every single line of code and can put the resources behind it, go for it… but do it with an awareness of the real-world risk and costs. If, however, you need a place to get started and want to cover the cases that are more likely to happen in the real world, snippet scanning probably isn't your highest priority.
Ultimately, it's like buying insurance. You need to assess how likely a given risk is and how much you have to lose; just don't expect the agent to provide realistic answers for you. Only you or your organization can make that ROI assessment. I just ask that you do it with a complete picture of both the risk and the cost.
Want to win a programmable LEGO robot? Share your voice in this year’s survey.
Let me share three statistics with you from the 2013 open source development survey:
- 76% of organizations lack meaningful controls over the use of open source software in development
- 86% of developers believe their typical applications include over 80% open source components
- 71% of applications have more than one critical or severe open source component vulnerability
These stats may or may not surprise you, but surprise is not their intent. The real intent of these survey results is to SPARK DISCUSSION. Remember, it's not the stats that count; it's the value of the discussions that follow that makes this survey so important.
Today we kicked off the fourth annual open source development and application security survey. You can take the survey here; it takes less than 5 minutes, we promise.
Looking at last year’s findings, I see so many great discussion topics for your next team meeting, a lunch-and-learn at your office, or at a community MeetUp event. Topics like:
- How do our practices compare? Are we ahead or behind?
- What policies do we have in place, do we need new ones, or does anyone follow our policy?
- Are our development, security, and compliance practices sufficiently aligned compared to other companies our size?
We'll send everyone the final survey results to share, compare, and discuss with your team. You can also enter a DAILY drawing for a $100 Amazon.com gift card and a WEEKLY drawing for a super cool LEGO Mindstorms EV3 programmable robot. The survey is only open until April 30th, and the sooner you take it, the more chances you have to win.
I love watching TED Talks. To me, they are 15 well-spent minutes watching experts from around the world provide great insights into things I thought I knew well, things I had never imagined, or topics on which I want to gain a deeper perspective.
As the words “security breach” become more and more a part of our daily business vernacular, I look to TED to provide me with insights into our current world and the future ahead of us. I found these three TED talks to be informative, educational, and provocative (and entertaining as well):
Moment to Remember: “Is he heavy set? Is he bald in front? Does he wear glasses?…Kill him.” -Watch the full video.
Swimming with Sharks – Security in the Internet of Things - Joshua Corman at TEDxNaperville
Memorable moments: “What kind of idiot gets in the water with an apex predator?” and “In 2007, former US Vice President, Dick Cheney had the wireless feature on his pacemaker disabled.” - Watch the full video.
Fighting Viruses, Defending the Net - Mikko Hypponen at TEDxEdinburgh
Moment to Remember: “With $14.9 million, you can afford to invest in your crimes, watch your victims, and record what they do…If we don’t fight online crime, we are at risk of losing it all.” - Watch the video
Have you ever asked yourself if you’re in the right security role? Or thought about if developing trusted applications or running a comprehensive DevOps program was important? Hopefully these videos give you the few minutes of encouragement that you need to know, indeed you’re on the right path and in the right job. Enjoy them!
Since its inception in 2002, the Central Repository has grown to be the largest repository of Java, JVM, and Android components, and more. It is the default repository for Apache Maven, sbt and Leiningen, and it can easily be used from Gradle, Apache Ivy and others. The Central Repository has become the default destination for open source projects that want to publish their components and reach millions of fellow developers. With its many servers powering a high-performance delivery network, developers can rest assured that their components are delivered reliably and quickly.
With the Central Repository as a key entry point for many open source projects, we have heard many requests over the years about making it easier to publish components. Most suggestions have been related to the initial project creation. Once that process is complete, publishing can be fully automated and occur any time and as often as you like without any intervention.
The more complicated elements of that setup (like groupId verification) ensure a higher level of quality for the vast audience consuming from the Central Repository. This high level of integrity through curated components sets the Central Repository apart from other repositories. For example, if there is a component in org.apache, you can be sure that it was published by a member of the Apache Software Foundation. Similarly, if you own a domain (and therefore the groupId), you can rest assured that nobody else can publish components into your groupId. In addition, we ensure that javadocs and sources are available for all components to maximize the benefit for your component users, and we enforce other requirements as well. These validation steps and requirements are the result of community input we have received and implemented over the years.
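The naming rule behind groupId ownership can be sketched in a few lines. This illustrates only the reverse-domain convention, not the actual verification service, and the simple two-segment split is an assumption that ignores multi-level domains:

```python
# Sketch: the reverse-domain naming convention behind groupId ownership.
# Owning apache.org entitles you to publish under org.apache.* groupIds.

def group_id_to_domain(group_id):
    """Map a groupId like 'org.apache.commons' to the domain implied by it.

    Simplified: assumes the first two segments, reversed, form the owning
    domain. Real verification must handle multi-level domains (e.g. co.uk).
    """
    parts = group_id.split(".")
    return ".".join(reversed(parts[:2]))

print(group_id_to_domain("org.apache"))        # apache.org
print(group_id_to_domain("com.google.guava"))  # google.com
```

This convention is why proving control of a domain (or a source-hosting namespace) is the key step in the initial project setup.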
To make things easier, we have engaged in a number of improvements, with the goal of providing better overall service for the community at large. The starting point is http://central.sonatype.org, a new web site that acts as the primary source of information associated with the Central Repository. This includes updated project setup documentation, service status, information on how to get assistance, and more.
Going forward, we will continue to improve the web site and documentation, and we hope to receive lots of feedback and ideas from you. Please visit the site and let your fellow developers and open source contributors know about it. We look forward to hearing from you!
Central Repository/Sonatype Ops Team (@sonatype_ops)
Wow – have 2 weeks already passed since RSA? Before we get too far out from the event, I thought I’d share a few observations …
At an event covering security of all types, where application security is a very small subset and open source security an even smaller one, I was impressed with the growing awareness of both the value and the risk associated with open source components in application development.
As I talked with folks concerned about application security, many were aware that a large percentage of their applications are assembled from open source and third-party components. (In fact, research shows this can be 80-90% of the typical application.) Unfortunately, most have not taken notable steps to address this concern, and those who have often do so in a way that disrupts the speed and agility of their application development processes.
Virtually everyone I spoke with has taken steps to identify security flaws in their source code with SAST and DAST tools. But very few had visibility into which components were used in each application, let alone whether those components brought security or license risk with them. As a result, there was a lot of interest in the partnership Sonatype announced with HP (see the Forbes article) to integrate Sonatype's Component Lifecycle Management (CLM) analysis technology into HP Fortify on Demand. Now Fortify on Demand customers have access to an Open Source Application Scan, powered by Sonatype CLM analysis, directly within the Fortify on Demand user experience.
Of those who had taken steps to address open source risk, most were starting with an Open Source or FOSS Review Board to approve components for use in application development projects. However, few felt confident that they could enforce their policies across the application development lifecycle. In fact, one CISO rushed to our booth immediately after Ryan Berg, Sonatype's CSO, presented his session "The Game of Hide and Seek, Hidden Risks in Modern Software Development", stating: "I just started my FOSS review board and I think I'm on the wrong path." The reality is that a lot of manual effort goes into a FOSS Review Board, and developers simply can't wait days or even weeks for approvals. If they have to wait, they will probably find a way to work around the system. And what happens when an approved component goes bad? New vulnerabilities are announced daily. If you are in this boat, you may find this webinar of interest: it covers how automated policies can guide and govern component usage across the software lifecycle while speeding development efforts.
The final thought I want to share is related to software liability. As I listened to Josh Corman, Sonatype's CTO, and Jake Kouns, CISO of Risk Based Security, speak on this topic at RSA, I was struck that the most basic form of negligence is not knowing what is in your software. So if you are a Fortify on Demand customer, try the Sonatype Open Source Application Scan; if you aren't, take Sonatype's complimentary Application Healthcheck to see the visibility that Component Lifecycle Management (CLM) provides and start your journey toward end-to-end application security.
The recent FS-ISAC whitepaper, "Appropriate Software Security Control Types for Third Party Service and Product Providers", reveals that the majority of internal software applications created by financial services firms involve acquiring open source components and libraries to augment custom-developed software. While open source code is freely available and reviewed by many independent developers, that review effort does not mean all software components and libraries are free from risk.
As I explored in my last blog post, "The Tipping Point: Human Speed vs. Machine Speed", we may have surpassed our manual ability to keep open source risks out of the environment, given the overwhelming population of components, their frequency of updates, and the challenge of incorporating those changes into our application environments.
Today, the open source community actively fixes functional and security flaws. Coupled with this practice, alert streams sharing security vulnerability updates are regularly delivered by FS-ISAC, NIST’s National Vulnerability Database, and other organizations. Open source review boards in financial services are one of the primary consumers of this information — tracking and manually applying this stream of data to their components.
Open source review boards help ensure companies can maximize the benefits of open source, while ensuring stakeholders are in agreement around minimizing legal, technical, or business risks related to its use. To learn more about the importance of open source review boards, common practices, and future considerations for their use, we recently sat down for a video chat (7 min.) with Bruce Mayhew, Sonatype's Director of Security Research and Development.
For those who don't know me, I am the new Nexus community advocate and now the moderator of Nexus Live. I kicked off my first session of the year with fellow community advocate Manfred Moser and Manager of QA & Support Rich Seddon. The session started with Rich clarifying the Nexus Security Advisory from March 3rd. We then moved on to the fun part, where I challenged Manfred to the first of three Nexus 2-Minute Challenges, in which he showed three things that can be done in Nexus in less than two minutes.
The 30-minute session ended with Rich talking about one of the issues that consistently comes up in support: upgrading Nexus. View the full session here.
What can the financial services industry learn from the U.S. Department of Homeland Security? In this third segment of my blog series on open source component security as it relates to the recently updated Financial Services Information Sharing and Analysis Center (FS-ISAC) guidelines, I explore the need for speed: humans vs. machines.
One mantra of the Department of Homeland Security is "if you see something, say something", which works on a human level to keep us safe. This same mantra has been used across the open source community to keep components secure, by identifying vulnerabilities and sharing that knowledge through public channels like the Common Vulnerabilities and Exposures (CVE) database. But it is now time we recognize that "if you see something, say something" only works for open source at human speed.
Just as the U.S. Department of Homeland Security relies on electronic surveillance to keep citizens safe, the open source software community also needs to embrace this approach. To properly ensure open source is secure, we need to work at machine speed.
We have long since passed a tipping point in the development and usage of open source components. Not only do custom applications rely heavily on such components, so do the open source components themselves. Let's take the Java developer community as an example.
- There are an estimated 10 million Java developers now worldwide.
- Java developers made over 13 billion requests for open source components last year from the Central Repository.
- The average component depends on 5 other components (which in turn have dependencies of their own, and so on).
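The compounding effect in that last bullet is easy to see with a toy dependency graph. This sketch is purely illustrative: the component names and the graph itself are invented, and real resolvers (Maven, npm, etc.) also handle versions and conflicts, which are omitted here.

```python
# Toy illustration: a few direct dependencies pull in many transitive ones.
# All component names and edges below are made up for the example.
from collections import deque

DEPENDS_ON = {
    "my-app": ["web-framework", "json-parser", "http-client"],
    "web-framework": ["logging-lib", "template-engine"],
    "http-client": ["logging-lib", "tls-lib"],
    "json-parser": [],
    "logging-lib": [],
    "template-engine": ["expression-lang"],
    "expression-lang": [],
    "tls-lib": ["crypto-core"],
    "crypto-core": [],
}

def transitive_dependencies(root):
    """Breadth-first walk collecting every component reachable from root."""
    seen, queue = set(), deque(DEPENDS_ON.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDS_ON.get(dep, []))
    return seen

total = transitive_dependencies("my-app")
print(f"{len(DEPENDS_ON['my-app'])} direct dependencies pull in {len(total)} total")
```

Even in this tiny graph, three direct dependencies expand to eight components you are actually shipping, and each of those is a separate surface for new vulnerabilities.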
Looking at the Maven ecosystem and traffic associated with the Central Repository, you can readily see years of exponential growth of open source component downloads. The same patterns appear with RubyGems, NPM and other major open source ecosystems. We have entered an era of massive and highly effective component re-use, where everyone can, paraphrasing Einstein, stand on the shoulders of open source giants.
Recent research also shows that 64 million vulnerable Java open source components were downloaded in 2013. While developers can rely on the Common Vulnerabilities and Exposures (CVE) database, manual review of this database for every component is simply not feasible if an organization wants to release its software on time.
Making the challenge even greater, organizations also have trouble keeping track of which components, and which specific versions, are used in which applications. Amplifying the concern further, you also need to consider each component's dependencies (five on average, but ranging into the hundreds).
I would argue that a manual approach today is truly impossible given:
- the volume of components used
- the complexity of each component
- the cadence with which new vulnerabilities and new component versions are announced
The better approach is to have humans establish risk thresholds and supporting policies, have machines automate and enforce those policies, and have humans manage the inevitable exceptions. That is, we need to complement human-speed approaches with machine-speed capabilities.
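The division of labor above can be sketched in a few lines: humans write the policy once, the machine evaluates every component against it, and only the violations come back for human review. This is a minimal illustration, not any vendor's actual implementation; the component records, scores, and policy values are all invented for the example.

```python
# Sketch of policy-as-code: humans set the thresholds, the machine
# evaluates every component, and only exceptions reach a human.
# Component data and policy values are illustrative, not real advisories.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str
    max_cvss: float   # highest CVSS score among the component's known CVEs
    license: str

# The human-authored policy: severity threshold and license whitelist.
POLICY = {
    "cvss_threshold": 7.0,
    "allowed_licenses": {"Apache-2.0", "MIT", "EPL-1.0"},
}

def evaluate(component, policy):
    """Return the list of policy violations; an empty list means approved."""
    violations = []
    if component.max_cvss >= policy["cvss_threshold"]:
        violations.append(f"CVSS {component.max_cvss} meets or exceeds threshold")
    if component.license not in policy["allowed_licenses"]:
        violations.append(f"license {component.license} not on allowed list")
    return violations

inventory = [
    Component("json-parser", "1.4.2", 2.1, "Apache-2.0"),
    Component("old-crypto-lib", "0.9.1", 9.8, "GPL-3.0"),
]

for c in inventory:
    problems = evaluate(c, POLICY)
    status = "approved" if not problems else f"needs human review: {problems}"
    print(f"{c.name} {c.version}: {status}")
```

The point is the shape, not the code: the policy runs on every build at machine speed, while the review board's time is spent only on the components that actually trip a rule.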
Whether organizations use software-based technologies and data services from Sonatype or other vendors, we are now well beyond maintaining open source security at human speed. Would you agree?
In my recent blog, 'Financial Services Organizations have Open Eyes on Open Source', I shared how Sonatype's company mission aligns with the recent FS-ISAC guidelines put out by the third party software security working group. In short, open source security can't be an afterthought. Security isn't only the responsibility of 'security professionals' but a shared responsibility for all parties involved in developing or managing an organization's software supply chain. As the FS-ISAC guidelines put it, "the most appropriate type of control for addressing the security vulnerabilities in open source, including older versions of the open source, is one that addresses vulnerabilities before the code is deployed—i.e. by applying policy controls in the acquisition and use of open source libraries by developers."
The Drive to Secure Applications
Which brings us to the question: do developers consider themselves an application security driver or consumer? Preliminary results from 275 responses to a new survey by the Trusted Software Alliance (TSWA) showed that 67% of developers consider themselves a primary driver for application security (positive news for sure). Yet while developers see themselves as a primary driver for security, we have also seen survey results revealing that 55% of participants "do not have clear policies" or "have policies that are not effectively enforced". Being a primary driver of application security does not always mean you have the policies in place to support that responsibility.
Advice from the Front Lines
While developing policies could help developers and others driving application security efforts, another approach would be to ensure these policies are built into the tools developers and others use today. This is an approach echoed in a recent interview with Jim Routh, Chief Information Security Officer and lead for the Global Information Security function at Aetna (he is also Chairman of the FS-ISAC Products & Services Committee and a former board member).
Listen to Jim Routh’s 2 part interview on the Trusted Software Alliance.
Routh shared, “We have to apply techniques that address both the quality and security attributes of open source components as they are acquired. By doing so, we can improve confidence in the LEGO-like assembly process [of downloading and assembling components] used by developers.”
“The developers can then use those components with some level of confidence and certainly understand the tradeoffs between quality and security”, Routh continued. “That technology is just emerging today and is likely to mature over the next several years.”
In addition to defining policies and making them more accessible to employees, companies should look at where their application security investments are focused. It isn't fair to say financial services firms aren't investing in application security: billions are spent every day searching and testing for vulnerabilities in applications that are already in production.
The change we're hoping to see is for companies to look deeper into their application development supply chain for vulnerabilities that might be introduced when third-party components are consumed. That visibility lets developers make better decisions about the components they integrate into their software projects and helps ensure those projects remain secure over time.
Want a simple place to start?
Sit down with your developers and ask them how they are taking advantage of open source components in the software development lifecycle. You'll likely find the approach varies, and the data shows it's because of a lack of standardized policies and tools that ensure security is built into the development process from the start.
To learn more about the FS-ISAC guidelines and the need for ‘Policy Management and Enforcement for Consumption of Open Source Libraries and Components’, read the full FS-ISAC whitepaper.
Today Sonatype and HP announced that Sonatype's Component Lifecycle Management (CLM) analysis technology has been integrated into HP's cloud-based software security solution, HP Fortify on Demand. HP Fortify on Demand customers will have access to an Open Source Application Scan using the Sonatype CLM analysis technology directly within the Fortify on Demand user experience.
For details on the announcement, watch the full video: http://youtu.be/jQWdBwUbW-I.
HP Fortify on Demand delivers comprehensive, accurate and affordable security assessments that identify vulnerabilities in any application —web, mobile, infrastructure or cloud. Sonatype provides analysis and identification of third party and open source components commonly used as building blocks in modern applications – with a focus on security, license, quality, and policy issues. Together, these capabilities deliver a new level of visibility and analysis into overall application security and risk.
For more detailed information about this new breed of application security from HP and Sonatype, please visit http://www.sonatype.com/fortify.