Kohsuke Kawaguchi is Founder, Jenkins CI, and Chief Technology Officer, CloudBees. You can follow him on Twitter @KohsukeKawa
But as the story unfolds, there are a few details that may come as a surprise. Here is a list of Heartbleed’s most recent developments.
1. One arrest has been made.
Canadian police arrested 19-year-old Solis-Reyes in London, Ontario, last week. He is accused of exploiting the Heartbleed vulnerability to steal social insurance numbers from the servers of Canada’s tax collection agency and is charged with one count of mischief in relation to data.
The accusation came two days after the Canada Revenue Agency announced that the sensitive information of 900 Canadians had been compromised, though law enforcement has not confirmed a direct connection between the two.
The teenager’s lawyer tells a story from when Solis-Reyes was only 14: when his high school’s administration didn’t believe its computer system was vulnerable to hacking, he proved it was. It’s very possible that when the Heartbleed news hit mainstream media, he became curious and tested the bug for himself. On Tuesday, Solis-Reyes turned himself in.
2. The foundation responsible for the OpenSSL software has just one full-time employee.
The breach was the result of a flaw in OpenSSL, a platform designed to provide users with a free set of encryption tools that prevent hackers from obtaining user data.
The irony is that although two-thirds of all websites use this software, the foundation’s revenue is so insignificant that it can’t afford a full security audit or a full staff. The foundation consists of one full-time employee and ten volunteers.
Steve Marquess, founder of OpenSSL Software Foundation, released an open statement explaining:
“These guys don’t work on OpenSSL for money. They don’t do it for fame (who outside of geek circles ever heard of them or OpenSSL until “heartbleed” hit the news?). They do it out of pride in craftsmanship and the responsibility for something they believe in.”
3. One small error in one line of code can lead to something like Heartbleed.
German developer Robin Seggelmann believes he accidentally introduced the coding error, which was overlooked by a reviewer and made its way into the released version of OpenSSL two years ago. He was submitting bug fixes at the time he made the mistake.
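The bug class behind Heartbleed is easy to illustrate: the heartbeat handler trusted a length field supplied by the peer without checking it against the payload that actually arrived. The sketch below is a loose illustration in Python rather than OpenSSL's actual C code, and the function names are hypothetical:

```python
# Illustrative sketch of the bug class behind Heartbleed, NOT OpenSSL's
# actual code: the handler echoes back as many bytes as the peer claims
# to have sent, without verifying the claim.

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: no check that claimed_len matches the payload received.
    # In C, copying claimed_len bytes reads past the payload into
    # adjacent memory; Python slicing merely truncates, hiding the flaw.
    return payload[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The fix: reject heartbeat requests whose claimed length exceeds
    # the payload that actually arrived.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds received payload")
    return payload[:claimed_len]
```

The one-line difference between the two functions is the missing bounds check the article describes.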
Because OpenSSL is open source (free, attainable, and open to everyone), hypothetically anyone could have spotted a vulnerability like Heartbleed. But few users participate in this way, leaving a small group of people essentially in charge of hundreds of thousands of lines of complex code used by banks, governments, and social media sites everywhere.
4. OpenSSL had the flaw, but underfunding is to blame.
The foundation’s revenue relies heavily on donations, which amount to about $2,000 a year. It also sells annual commercial software support contracts worth $20,000 a year. Most volunteers make their money from “work-for-hire” consulting.
How does it make sense that such a widely used resource is so short-staffed and underfunded? In his statement, Marquess makes it clear that he believes OpenSSL is ignored, and that it should be paid for by the Fortune 1000 companies and governments that use it extensively.
“I stand in awe of their talent and dedication, that of Stephen Henson in particular. It takes nerves of steel to work for many years on hundreds of thousands of lines of very complex code, with every line of code you touch visible to the world, knowing that code is used by banks, firewalls, weapons systems, web sites, smart phones, industry, government, everywhere. Knowing that you’ll be ignored and unappreciated until something goes wrong.”
5. High-priority websites have been fixed, but some websites are still affected.
New SSL certificates have been issued to affected websites, clearing them of the vulnerability, and Apple issued a statement that its desktop and mobile operating systems were never affected. Still, nearly 500,000 vulnerable SSL certificates are reported to remain.
Ed Felten, a computer scientist at Princeton University, makes a valid analogy: “OpenSSL is like public infrastructure without a tax base.” Do you feel corporations and governments should help fund OpenSSL rather than be “free riders”? Let us know what you think in the comment section below.
For Boston, my favorite would be the Workflow in Jenkins talk, which will cover the new workflow job type Jesse and I are working on. As of this writing it is still very much a work in progress, but that talk is our way of putting a stake in the ground that we WILL have something to show by then. There are also talks that describe how teams have put together pieces (including Jenkins) to create broad automation, such as Distributed Scrum Development with Jenkins, Vagrant, Fabric and Selenium and Moving Existing Enterprise Systems to Continuous Integration and Deployment with Jenkins.
For Berlin, it turns out that we have a stellar lineup of speakers, far beyond my expectations. There are a number of key community contributors/developers, like Christopher Orr talking about how he does mobile build/test/deploy, or Vincent talking about the literate plugin. I'm also looking forward to Puppetizing Jenkins Pipelines from Julien Pivotto, which (if I understand correctly) is about deploying Jenkins and its jobs through Puppet; that is something I notice many people are very interested in nowadays.
All of them are looking forward to meeting you and hearing your thoughts and feedback, and I'm sure this is going to be a great learning and networking opportunity.
Applications have quickly become the center of both the consumer and business worlds, as people in and outside of the workplace use software for more processes and activities than ever before. While the proliferation of these solutions has generally made lives easier, that same phenomenon has put an immense amount of pressure on individuals within the software development field, forcing experts to create new tools that cater to the needs of end-users and can be accessed on virtually any platform.
The mobile movement in particular has influenced the way developers create software. This complexity was highlighted in an InformationWeek report by technology expert Gregg Ostrowski, who said the evolving social, cloud and mobile application ecosystem is introducing new complexities for companies responsible for creating the technologies that make up those markets.
One of the most difficult challenges associated with adapting to these changes is understanding which road will lead to the most substantial rewards. Ostrowski said mobile device management vendors often claim their technologies can govern virtually anything at the platform level, while cloud providers encourage companies to migrate resources off-site to a third-party environment. Rather than taking partners at their word, Ostrowski asserted that businesses should first consider their needs when building next-generation applications.
Keep connectivity in mind
Mobile applications are considered the future of the software environment, but only if developers can find innovative ways to build those solutions with functionality in mind. Connectivity demands in particular are crucial to think about, Ostrowski noted, saying he's seen organizations fail at building applications solely because IT professionals used inadequate VPNs or Wi-Fi connections during development. When that happens, code review and analysis processes might not be as efficient as they could be, which may lead to an unreliable product.
Ostrowski encouraged developers to pursue application build programs that keep connectivity in mind by utilizing technologies that are not necessarily session-based and can still function even when Internet connections are not ideal.
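One concrete illustration of that advice: a client can tolerate spotty connectivity by retrying failed requests with exponential backoff rather than assuming a stable session. This is a minimal sketch; the `fetch` callable, attempt counts, and delays are hypothetical examples, not anything prescribed by Ostrowski:

```python
import time

def fetch_with_retry(fetch, attempts=4, base_delay=0.5):
    """Call fetch(); on network errors, wait and retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

The point of the design is that transient drops in connectivity degrade into short delays instead of hard failures, which is what "not necessarily session-based" tooling buys you.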
Management is always a must
While building applications is important, Ostrowski said that managing those tools is just as crucial. This means conducting regular tests through static analysis and rolling out upgrades as frequently as needed. The mobile realm has placed a particularly heavy burden on IT professionals to continually revise solutions, largely because operating system upgrades happen often and without warning, especially as bring your own device becomes more popular in the enterprise. If companies do not have the ability to go back and fix solutions to adhere to the evolving needs and expectations of end-users, they will encounter substantial performance and cybersecurity challenges in the long run.
A separate Wired report highlighted similar expectations, revealing that mobile software developers need to prioritize end-user experience, which means accounting for any unforeseen updates that could affect applications. While conventional desktop solutions may have been useful, it is important that professionals do not think they can simply move these tools into the mobile realm and expect to reap the rewards. Creating a new solution is critical and development, operations and quality assurance teams must all work together to ensure those innovative technologies function as required.
As the mobile landscape expands, companies will be charged with the responsibility of launching sophisticated applications that cater to mobile-oriented employees and consumers alike. The processes that go into creating mobile solutions are not the same as the ones that were traditionally associated with software development. If teams do not embrace a continuous development cycle and account for new factors within the mobile realm, they may encounter financial and security complications.
When we talk about “risk,” what we’re talking about is the probability that some uncaught vulnerability in your product will have a negative result—possibly serious injury or worse—for someone using your product or affected by the use of your product.
General Motors is learning this right now, thanks to a 57-cent part that the company failed to replace in a timely manner, resulting in several deaths and the recall of 2.6 million vehicles. Jon Stewart skewered GM’s risk management process in a segment on The Daily Show.
No one ever starts out trying to build an unsafe product, but by not establishing a good risk management process early on and evaluating what those potential risks might be, you could end up with product recalls or even worse.

Three Components of Risk
Risk is a probability or threat of damage, injury, liability, loss, or any other negative occurrence that is caused by external or internal vulnerabilities, and that may be avoided through preemptive action.
The three components of risk are the severity, occurrence, and detection of issues that you may have with your product. When you look at a vulnerability, the first thing you need to determine is how severe the risk is. Once you define the severity, you need to examine how often that particular risk might occur. Finally, you need to know what methods you have for detecting the risk. If you can determine what the severity and occurrence of a risk are going to be early on, you can then come up with ways to either detect or mitigate that particular risk.

The Right People
A good risk management process starts with the right people: subject matter experts (SMEs). When you have SMEs aboard, you’re able to rely on their past experiences and their in-depth knowledge about where risks might lie. SMEs can also determine the probability that a particular risk could occur.
Employing the right people is only half of the equation, though. The other half is making sure you use them. In order to have a good risk management process, you must ensure that each identified risk is assigned to a particular individual. That person is then responsible for making sure the risk is managed, not just once, but throughout the product lifecycle.

Eliminate, Mitigate, or Accept
Whenever possible, the best option is to eliminate the risk. By eliminating it, you remove the possibility of it occurring once your product is released to the public. This is usually the preferred way to manage risk (for obvious reasons), but sometimes it is not possible to remove the risk entirely.
For example, including an ejection seat in a jet fighter introduces the risk that someone on the ground crew could trigger it in the course of maintaining the aircraft. Removing the ejection seat or moving the trigger are not options, so the risk cannot be eliminated.
In such cases, you have to weigh the severity and probability of occurrence for the risk, and decide whether you want to accept the consequences of the risk or figure out a way to mitigate it, reducing the possibility of the risk occurring.
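One common way to make that weighing concrete is an FMEA-style risk priority number (RPN): score severity, occurrence, and detection, multiply them, and address the highest-scoring risks first. The article doesn't prescribe this method, and the risks and scores below are hypothetical; this is just a sketch of the idea:

```python
# Hypothetical FMEA-style sketch: score each risk 1-10 on severity,
# occurrence, and detection, then rank by risk priority number
# (RPN = severity * occurrence * detection).

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (hard to detect)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

risks = [
    Risk("ejection seat triggered by ground crew", severity=10, occurrence=2, detection=4),
    Risk("switch fails under vibration", severity=9, occurrence=3, detection=6),
]

# Highest RPN first: these are the risks to eliminate or mitigate first.
for r in sorted(risks, key=lambda r: r.rpn, reverse=True):
    print(f"{r.name}: RPN={r.rpn}")
```

A high severity score alone (like the ejection seat's) can rule out simply accepting a risk, even when its occurrence score is low.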
In the case of the ejection seat, even one occurrence of the ground crew triggering it could seriously injure or kill someone, so accepting the risk is out of the question. You would need to mitigate the risk by finding a way to deactivate the trigger while the plane is on the ground.

Risk Is Not a One-Time Event
Risk management doesn’t end when you eliminate or mitigate a risk. Sometimes, eliminating one risk introduces another. Therefore, it’s important to evaluate risks not only early on but throughout the development of your product.
You don’t want to end up being skewered by Jon Stewart, and you definitely don’t want to risk your customers’ safety. With a good risk management process in place, and the right people making sure that process is actually being followed, you substantially lower the chances of a risk being released with your product and endangering your users.
PQA Testing has been the home of passionate software testers in Canada since 1997. With over 80 talented and dedicated testing professionals and 5 offices throughout Canada, PQA’s team works with leading edge companies, technologies and tools to solve its clients’ software testing challenges.
PQA Testing offers a full range of software quality assurance services throughout the entire software development lifecycle. From strategic consulting to manual functional testing, PQA Testing has extensive experience in helping organizations design and execute their testing efforts. PQA Testing’s comprehensive testing expertise includes: manual functional testing, test automation implementations, performance testing, security testing and QA consulting.
For more information about PQA Testing, please visit http://www.pqatesting.com
Mark your calendars! April 24 is when you can get an in-depth look at HP LoadRunner and Performance Center.
Consider this your private invitation to the event. (But I guess you can share the invite if you like.)
Brendan Mulligan, co-founder and designer of the community photo sharing app Cluster, recently posted a series of articles on TechCrunch, detailing the importance of live user testing and how a comprehensive plan could be built and executed. The recipe breaks down something like this:
1) Define the parameters of the project:
- What are you testing? (e.g. iOS app, Android app, desktop app)
- Why are you testing? (e.g. functionality, usability, security, localization, load)
- How are you going to test? (define scope, hardware and OS requirements, maybe write a test case)
- Who is going to test? (define desired tester characteristics)
- When are you going to test?
- Where are you going to test?
2) Pre-test admin:
- Recruit, select and finalize test team and testing schedules
- Acquire the needed testing equipment, devices and additional required resources
- Prepare testing spaces, if applicable
- Onboard testers, explain scope and requirements, explain bug reporting and feedback processes
3) Test execution:
- Run live user tests
- Collect and organize bug reports, user feedback and any related materials like screenshots and user videos
- Administer tester compensation and debriefing, if applicable
4) Post-test analysis and action:
- Assemble bug reports, user feedback and related materials into consumable, sharable and hopefully quantifiable formats
- Distribute data and information to dev team members
- Collaborate with team members to identify and prioritize actionable items
- Develop the next iteration of the product
- Plan to run that next iteration through another live user testing plan
Rinse and repeat throughout the life of the product.
For small-scale live user testing needs, DIY might indeed be a viable path for many digital experience developers. But with modern-day digital globalization there’s really no such thing as ‘small scale’ anymore, especially when you start factoring in mobile fragmentation and localization demands. This is why we do what we do. After all, your end goal isn’t testing for testing’s sake; it’s to deliver amazing digital experiences, whether on desktop, mobile, wearable, web or native app, that captivate, engage, and keep customers.
Offhand, if you’re curious about what your own DIY costs might look like, we’ve got a handy calculator that can show you just that.
One of the core concepts we have pushed with UrbanCode Deploy and Release has been that the collection of stuff that will be running in production together is what you should test together. That can very easily extend to mobile applications. In February, we released an Android Plugin for UCD to help with this. You can easily take a new build of your mobile app and deploy it out to devices or simulators in your test lab.
The Mobile Frontier blog has a nice write-up detailing where the plugin fits in a mobile development cycle. There’s also the two minute overview below looking at how you would use this to check for memory leaks as part of your CI/CD cycle.
Dave discussed how to build out a well-factored, maintainable, resilient, and parallelized suite of tests that run locally, on a Continuous Integration system, and in the cloud in our recent webinar, “Selenium Bootcamp”.
Following the webinar, we had several follow-up questions. Dave’s agreed to respond to 8 of them. Below you’ll find the third Q&A. Stay tuned next Wednesday for the next question.

3. I would like to see strategies for getting tests to work in multiple browsers. For example, if my test works in Chrome but not Firefox, what do I do?
There are two things you’re likely to run into when running your tests across multiple browsers: speed of execution, and locator limitations.
Speed of execution issues occur when things execute too quickly (which indicates that you need to add explicit waits to your test code) or time out (which indicates that the explicit waits you have are too tight and need to be loosened). The best approach is an iterative one: run your tests and find the failures, then take each failed test, adjust your code as needed, and run it against the browsers you care about. Repeat until everything’s green.
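In Selenium you would typically express those explicit waits with WebDriverWait and an expected condition, but the core idea is just polling a condition until it's met or a timeout expires. Here's a browser-free sketch of that loop (the function name and defaults are mine, not Selenium's API):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll condition() until it returns a truthy value or the timeout expires.

    Mirrors the explicit-wait pattern: the test proceeds as soon as the
    condition holds, instead of sleeping for a fixed (and fragile) duration.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(poll)
```

Loosening a too-tight wait then just means raising `timeout` for the failing step, rather than rewriting the test.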
In older browsers (e.g., Internet Explorer 8) you’ll be limited in the locators you can use (e.g., CSS3 locators like nth-child, nth-of-type, etc.) and will likely run into issues with some dynamic functionality (e.g., hovers). In cases like this, it’s simple enough to find an alternative set of locators that work in this browser and update your test to use them only when this browser is being used.
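That conditional-locator strategy can be reduced to a small helper. In Selenium the browser name would come from the driver's capabilities; here it's simplified to a plain string, and the browser names and locators are hypothetical examples:

```python
# Illustrative sketch: fall back to a simpler locator when the test is
# running against an older browser without CSS3 selector support.
# Browser names and locators here are hypothetical examples.

def locator_for(browser_name):
    css3_capable = browser_name.lower() not in {"internet explorer 8", "ie8"}
    if css3_capable:
        return ("css selector", "ul.results li:nth-child(2)")
    # Older browsers: an XPath equivalent of the :nth-child selector.
    return ("xpath", "//ul[@class='results']/li[2]")
```

The rest of the test stays identical across browsers; only the locator lookup branches.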
-Dave Haeffner, April 9, 2014
Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.
The application development landscape is becoming less complex to navigate, allowing companies of all sizes and experience levels to create more useful solutions without encountering the troublesome errors they had to hurdle in the past. The agile movement and the ability to adopt continuous software development strategies are giving large enterprises and small firms alike the opportunity to build comprehensive apps that serve a single purpose and are disposed of after providing executives the critical information needed to inform the next round of app creation.
A recent InformationWeek report highlighted this concept of the "disposable" application, a solution built with a single purpose in mind and thrown away afterward. While this idea may appear to be a waste of financial and operational resources, such solutions are actually gaining momentum within major corporations. This is because they are relatively easy to construct, especially now that agile development processes are becoming more common, and provide significant insight into which opportunities will be advantageous for the company in the long run.
The patience of end-users is quickly diminishing, which means that businesses must find more efficient ways to create applications without compromising the integrity or quality of those tools. InformationWeek echoed this phenomenon, stating that many businesses are more willing to experiment with the creation of new software solutions, even if those platforms are only used for temporary purposes. Just because those tools are disposable, however, doesn't mean they need to be tossed out immediately. The point is that today's enterprises will not necessarily crumble if the solutions are thrown away, as creating new ones in their place is not out of the question.
A more flexible creative process
The DevOps movement is among the most significant disruptions gaining ground in the business application development landscape. By embracing a DevOps methodology, teams that create and test software can work more efficiently with groups responsible for maintaining production environments, which ultimately will result in a more robust and effective product in the end.
A recent CA Technologies and Vanson Bourne survey of roughly 1,300 senior IT decision-makers found that organizations witnessing the benefits of having a DevOps methodology in place generally saw a 22 percent increase in customers and 19 percent growth in revenue.
"In today's world of mobile apps and online consumer reviews, companies are under enormous pressure to deliver higher quality applications faster than ever before," said Shridhar Mittal, general manager for application delivery at CA Technologies.
Overall, the DevOps movement is gaining momentum throughout the business world as decision-makers become more aware of how they can embrace those strategies and the rewards that are associated with doing so. The constantly changing application landscape is also pressuring firms to adopt more comprehensive and agile mentalities that will allow teams to build, test and launch software more efficiently.
As these trends take shape in the coming years, executives need to plan ahead and understand how embracing DevOps, static code analysis, code review and other innovative agile development processes can make it easier for IT teams to build high-quality software in less time.