This is the first week for the new Hewlett Packard Enterprise. I can’t contain my excitement for this new adventure!
Keep reading to learn more about the transition to Hewlett Packard Enterprise and how the team utilized our own software to make it as painless as possible.
The Atlassian Summit conference is currently under way, and while we unfortunately didn’t make it to San Francisco this year (we are busy adding new features to TestRail), quite a few users and customers of our test management tool are attending. Some are even giving talks, and Giancarlo Bisceglia & Maurizio Mancini had a great session this year.
Their talk titled How to Build in Quality from Day 1 using Lean QA and Agile Testing shows what principles they use to embed testing and a quality culture as part of their development teams. They also cover how they use TestRail together with the Atlassian suite (JIRA, Confluence, Bamboo etc.) for full traceability between requirements, testing and bug reports. We thought this talk would be very interesting and relevant to a lot of TestRail users, so we wanted to share it.
If you’re struggling to implement QA methods that fit with agile’s core principles, you’re not alone. Join Giancarlo and Maurizio as they explain how their teams found a sweet spot at the intersection of agile and QA engineering. They’ll share common pitfalls and how to avoid them. Plus, get tips and tricks on how to capture requirements and link JIRA to test repositories for complete traceability.
You can also find the full presentation slides and speaker details on the Summit website. We will continue to share interesting talks, sessions, open source projects and third-party integrations for TestRail, so if you find anything interesting to share as well, please let us know!
Try TestRail 5.0 Now
I was invited to South Africa to speak at the Java Conference in Cape Town. Due to a short-term change in speakers, I got the opportunity to speak not only about my favorite topic – Top Performance Problem Patterns – but also about my second favorite: Quality Metrics-Driven Software Delivery. As for […]
IBM UrbanCode Deploy standardizes and automates deployment processes, speeding up and simplifying how you deploy application components both internally and externally. However, as you rely more on these components and use them in production environments, you become more dependent on the IBM UrbanCode Deploy server’s availability and ability to perform. This article discusses how to design and configure a scalable server with high-availability failover and disaster recovery capabilities.
Contents
- High availability versus disaster recovery
- Server/agent communication
- Disaster recovery
- Real-world deployments
- Knowing when to scale
One key here is to distinguish the capabilities of high availability versus disaster recovery. In this context, we use the term high availability to refer to horizontal scaling across multiple servers and data centers. High availability provides both a means to increase capacity and to add fault tolerance when one or more servers is out of service or failing. By contrast, disaster recovery deals with a catastrophic failure in which the entire server or cluster of servers is unavailable. Most organizations require a disaster recovery plan as part of a business continuity plan. Even with the level of data center reliability and redundancy that is available today, a disaster recovery plan is still necessary to ensure that production applications can meet service-level agreements for uptime and availability.
In this article, I’ll cover the subsystems of an IBM UrbanCode Deploy installation and how to set each one up for high availability. Then, I’ll cover how to plan and prepare for disaster recovery.
Subsystems
To start this discussion, here is a high-level architectural picture to help set some context. The key point is that this solution is purpose-built to handle large-scale deployments (in the thousands of deployment targets) and has been designed accordingly.
The following architectural diagram shows the structure of the parts of the IBM UrbanCode Deploy server. The major subsystems are the configuration web tier, the workflow engine, and the server/agent communication. These subsystems access the database and file storage through the artifact and file management subsystems. We’ll take a quick look at how each of these can be scaled up and how that is done.
Configuration web tier
The configuration sub-system is the user interface of the server and what clients access to configure and trigger deployments. This tier of the application exposes a set of RESTful APIs that are shared by the web, command-line client, and API users for interacting with the system.
Being a web-based solution, this sub-system can be scaled to handle more concurrent access and improve throughput by scaling horizontally across clustered servers. An HTTP load balancer distributes traffic between the servers.
The following diagram shows a cluster of servers that are set up in this way. This solution requires that you configure the servers for clustering. The performance of the cluster depends on the performance of the underlying shared database and file system.
Workflow engine
The workflow sub-system is responsible for orchestrating deployments across all the deployment targets, including distributing tasks and processing updates from the agents as they perform the deployment steps.
As the number of managed agents and concurrent deployments increase, the amount of processing time and bookkeeping increases. A single server can handle hundreds of these transactions concurrently, and it is possible to continue to add processing capability to the server to support this, but at some point it is more practical to share the load across multiple servers.
The following diagram shows how scaling the workflow engine subsystem lets us spread the load across the cluster to increase throughput. In this case, JMS mesh technology shares messaging data across the servers. One caveat: unlike the HTTP case, we cannot simply point a load balancer at a shared URL. Because the connections from agent to server are persistent, a round-robin DNS solution is the best approach.
Server/agent communication
Of course, the IBM UrbanCode Deploy environment contains far more agents than servers, and the number of incoming persistent connections to each server can get large. To manage the strain on the server, you install agent relays to provide local connection endpoints to servers. Agent relays eliminate the need for each agent to have a direct connection to the server and therefore reduce the number of persistent connections and threads that are held open on the server. Having fewer agents directly connected to the server can also simplify security rules, and is analogous to the jump servers that many organizations use in production to perform deployments to DMZ or other non-trusted environments.
The following diagram shows a cluster of servers that uses agent relays to moderate connections from agents. The agents connect to the agent relays, which connect to the servers through the JMS mesh. The red line represents a firewall; one agent relay makes connections through the firewall, which means that the agents that use this agent relay do not need to open their own connections through the firewall.
By leveraging shared persistent connections between servers and agent relays in this way, a single server thread or a cluster of servers can manage many thousands of agents.
Scaling artifact repository and distribution
The artifact sub-system is the final key area we’ll discuss here. It handles the versioning and storage of deployable artifacts. Its performance depends heavily on the performance of the shared file system, which is also a crucial part of a disaster recovery solution and central to scaling the solution in a clustered environment. Use fast SAN storage for this file system to minimize the latency of serving plugins and artifacts to agents, and make sure the file system is reliable and backed up regularly.
Versions of IBM UrbanCode Deploy 6.1 and later have a new feature that can increase the performance of the file system and reduce latency in server-agent communication. Agent relays cache artifacts and plugins and provide these resources to agents instead of retrieving them from the file system each time. As a result, agents get resources faster and load on the server is reduced. For information about artifact caching, see http://www-01.ibm.com/support/knowledgecenter/SS4GSP_6.2.0/com.ibm.udeploy.install.doc/topics/t_agent_relay_cache_setup.html
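The caching behavior is easy to picture. Here is a rough sketch of the relay-side idea, not UrbanCode’s actual implementation; the function name, `/artifacts/` URL layout, and cache directory are hypothetical:

```python
import os
import shutil
import urllib.request

def get_artifact(name: str, server_url: str, cache_dir: str) -> str:
    """Return a local path for an artifact, fetching from the server only on a cache miss."""
    cached = os.path.join(cache_dir, name)
    if not os.path.exists(cached):
        os.makedirs(cache_dir, exist_ok=True)
        # Cache miss: fetch once from the server and keep a local copy.
        with urllib.request.urlopen(f"{server_url}/artifacts/{name}") as resp, \
                open(cached, "wb") as out:
            shutil.copyfileobj(resp, out)
    # Cache hit (and every later request for this artifact): served from the relay's disk.
    return cached
```

On a cache hit, the agent never touches the server’s file system; the relay serves its local copy, which is what reduces both latency and server load.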
Note: Artifact caching adds a new port requirement for agents communicating with agent relays. The server and agents already use JMS on port 7916 and usually the HTTP proxy port (20080); artifact caching adds one more port on the agent relay: the HTTP proxy port + 1 (20081).
Disaster recovery
To prepare for the worst possible case, we assume that nothing from the original server has survived a serious event. Of course, we always minimize the risk of this scenario by spreading the primary cluster, database, and shared file-system over two or more data centers. However, it’s also necessary to have a disaster recovery plan.
When we have a DR event, what we need to bring the server back online and functioning is a relatively short list:
- The database
- The asset repository file system
- The configuration directory of the IBM UrbanCode Deploy installation
- A new server or cluster to host the server
- Security rules in place for traffic (HTTP/HTTPS, JMS, JDBC, licensing)
- A DNS switchover
In practice, production systems should have at least nightly database backups and keep transaction logs so they can be replayed forward to the point of failure. The file system should be synced or duplicated as close to real time as possible; most SAN devices already provide duplication tools. When it’s time to start new servers, they can be pre-prepared VM copies of the production servers or built ad hoc; in either case, the backed-up server configurations enable quick setup, including items like the correct SSL keys for client communication and for decrypting encrypted properties in the database. One thing I have seen overlooked too many times is a license to run the server; the best plans to bring up the server are in vain without one, so be sure that your backups include a DR copy of your production licenses.
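A minimal pre-flight script can verify that restore inventory before a cutover is ever needed. The paths below are hypothetical and would need to match your own backup layout:

```python
import os

# Hypothetical restore inventory; adjust paths to your installation layout.
REQUIRED_BACKUPS = {
    "database dump":         "/backups/ucd/db/latest.dump",
    "artifact repository":   "/backups/ucd/var/store",
    "server conf directory": "/backups/ucd/opt/server/conf",
    "SSL keystore":          "/backups/ucd/opt/server/conf/server.keystore",
    "production license":    "/backups/ucd/license.lic",  # the item most often overlooked
}

def preflight(required: dict) -> list:
    """Return the names of inventory items missing from the backup set."""
    return [name for name, path in required.items() if not os.path.exists(path)]

missing = preflight(REQUIRED_BACKUPS)
if missing:
    print("DR inventory incomplete:", ", ".join(missing))
```

Running this on a schedule turns "is our DR backup complete?" from an assumption into a checked fact.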
Security rules can be a sticking point as well, so ensure that you have the rules required to get from your various endpoints to both the production server and the DR site server. Finally, some kind of name switchover using your organization’s name management solution is necessary to move traffic from the disabled server to the new server.
Testing your disaster recovery plan is a good idea. If you can test it in a disconnected network segment, you can simulate a real DNS cutover and ensure that services do in fact reconnect without having to update every agent and agent relay. If you test the DR plan on the production network instead, there are some caveats. For one thing, you won’t really be able to test the global DNS switchover. Also, the server knows its own URL, and depending on which actions you are testing, it may respond with this URL and inadvertently redirect tests to the production server. In this case, go to Systems > Settings and ensure that the URLs are correct for your testing environment.
Real-world deployments
The following diagram shows how an IBM UrbanCode Deploy architecture could look fully scaled up. This diagram includes each of the high-availability tactics mentioned in this article, including clustered servers, agent relays, and artifact caching.
One note on using round-robin DNS versus a load balancer: in most cases, a load balancer handles the HTTP traffic to the servers, and round-robin DNS handles the JMS traffic. It is possible to handle both with only a load balancer, but you must understand how to enable round-robin persistent connections, and you cannot attempt any kind of SSL off-loading. For that reason, using a load balancer for JMS traffic is officially unsupported, though it is possible to make the round-robin approach work with your choice of load balancer.
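To make the round-robin DNS point concrete: a DNS name with multiple A records hands each client the full list of cluster addresses, and the client picks one for its single persistent connection. A small stdlib sketch of that resolution step (the function name and hostname below are illustrative, not UrbanCode code):

```python
import socket

def jms_endpoints(hostname: str, port: int) -> list:
    """Resolve every address record behind a round-robin DNS name.

    Each address is one server in the cluster. The agent picks one and holds
    a single persistent JMS connection to it, so balancing happens only at
    resolution time -- unlike HTTP, where a load balancer can redistribute
    every individual request.
    """
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Deduplicate while preserving the order the resolver returned.
    return list(dict.fromkeys(info[4][0] for info in infos))
```

For example, `jms_endpoints("ucd-jms.example.com", 7916)` would return one address per clustered server behind that (hypothetical) round-robin name.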
This diagram (example deployment #1) shows a typical large-scale deployment. As part of a disaster recovery plan, it includes a cold standby server that takes over if the production cluster fails.
This diagram (example deployment #2) shows a large-scale deployment that uses more high-availability features, including clustered servers, redundant agent relays, and artifact caching. This type of high-availability deployment is becoming more common as enterprise customers depend more on their servers.
Knowing when to scale
A single IBM UrbanCode Deploy server and a few agent relays can support hundreds of servers with dozens of daily deployments, with little to no tuning required. However, achieving enterprise scale, performance, and reliability ultimately relies on clustering. The decision to cluster should be based on multiple factors: first a vision or plan, and second the current state.
First, plan to ensure that you have the capacity to meet your deployments’ needs, along with the availability, performance, and reliability requirements in your business continuity plans. For example, if you have availability requirements for your production web portal, then the IBM UrbanCode Deploy server should have the same requirements. This planning should cover expected usage today and expected growth over the next six months and the next year. I recommend making this part of at least a yearly reconciliation activity to ensure you are staying on target; most organizations do this anyway as part of yearly budgeting.
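Such a reconciliation is easier if basic utilization numbers are already being collected. A stdlib-only sketch of a simple high-water-mark check (it assumes a Unix host, and the thresholds are purely illustrative):

```python
import os
import shutil

def high_water_checks(disk_path: str = "/", load_limit: float = 4.0,
                      disk_limit: float = 0.85) -> dict:
    """Compare a few standard server metrics against simple high-water marks.

    A real setup would feed the same numbers into whatever monitoring stack
    you already run; this only shows the shape of the check.
    """
    usage = shutil.disk_usage(disk_path)
    load1, _, _ = os.getloadavg()  # 1-minute load average
    return {
        "load_ok": load1 < load_limit,
        "disk_ok": usage.used / usage.total < disk_limit,
    }
```

The same pattern extends to the other metrics mentioned here (RAM, disk I/O, network I/O, database growth) once a collector for each is in place.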
The second part is reacting to a changing environment; this is a DevOps world, and your solution today could look dramatically different in three months if you start producing a range of new products or shift workloads across different technologies. In this dimension, standard application monitoring is crucial. You can start with simple metrics like CPU utilization, RAM utilization, disk utilization, disk I/O, network I/O, database growth, and database CPU utilization. The load characteristics of your deployment depend on the deployment workflows that you build and use in your organization, so your mileage may vary. Still, standard server high-water marks, combined with an understanding of the component subsystems discussed above, can serve as a roadmap to identify the bottlenecks and scaling issues you may be facing.
Summary
Understanding how to grow your IBM UrbanCode Deploy solution and keep your end users’ needs satisfied is key to a successful deployment. Out-of-the-box support for clustering, connectivity to clustered databases, and building the solution with best practices and proven components combine to make IBM UrbanCode Deploy a real heavyweight in enterprise-level deployment automation. Keeping the server running and performing well is only part of the puzzle, but it is an important one to have a solid plan for.
With great pleasure I can announce the very first build of Pulse 3.0 is now available from our Alpha Program page! Although this build is very much incomplete and unstable, this is a huge milestone after months of work in both the back and front ends of Pulse.
As this is a larger release we’ve made the choice to release earlier in the cycle than we otherwise would, for a couple of reasons:
- To show the massive progress that has already been achieved.
- To solicit feedback about the all-new administration UI (in time to apply said feedback before feature freeze).
We know from previous feedback that the new UI will address some pain points, mainly around quickly navigating and understanding your configuration in a large setup. We’ve also taken advantage of modern browser features to fix clunky corners: e.g. proper history integration and drag-and-drop reordering (no more clickety-click!). With such major changes, though, we’re always keen to hear what you think — good or bad — so we can keep on the right track.
So please do find the time to download and play with an alpha build, then get in touch with us via our support email. We’ll be iterating fast on this release stream, so expect regular builds, each with new UI features. Happy building!
Whenever there is bad press coverage of Node.js, it is (typically) related to performance problems. This does not mean that Node.js is more prone to problems than other technologies – the user must simply be aware of certain things about how Node.js works. While this technology has a rather flat learning curve, the machinery that keeps Node.js ticking is quite complex […]
The post Understanding Garbage Collection and hunting Memory Leaks in Node.js appeared first on Dynatrace APM Blog.
Agile is a must for development shops. Agile is a mature, iterative, collaborative methodology that breaks the development process down into shorter sprints. At its core, Agile development is about small iterations, test automation and a continuous integration pipeline.
Waterfall Was Created For a Perfect World, But We Don’t Live In One
Agile is a reaction to the slower, sequential approach known as Waterfall. Where Waterfall requires upfront planning to ensure that all details are accounted for, with no room for surprises or changes, Agile accounts for the inevitability of change, adapting to the project as it unfolds.
“Imagine a waterfall on the cliff of a steep mountain. Once the water has flowed over the edge of the cliff and has begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development.” (Search Software Quality)
To understand the advantages of Agile, it’s important to first understand the more traditional Waterfall methodology:
- It is a sequential design process: discover, plan, build, test, review
- Each project is based on the extensive collection of clear documentation gathered at the beginning
- The whole product is only tested at the end of the cycle
- It doesn’t take into account a client’s evolving needs or leave any room for revision
With Agile, your work is divided into small, manageable tasks, and testing is done continuously throughout the software development lifecycle. Testing can’t be cut at the end, because you go through every phase (analysis, design, coding, testing) at every iteration. It never gets squeezed in at the end, put off to a future iteration or ignored altogether, as often happens with the Waterfall method. The beauty of working the Agile way is that every two to four weeks the customer gets working software — a feat that’s nearly impossible with Waterfall.
Why Automated Testing?
Automated testing is one of the most important aspects of the Agile method, and one of the biggest shifts developers have to make when switching over.
So to answer your question now: No.
You cannot switch over to Agile without also using automated testing. Agile relies on fast feedback by testing each incremental change, which is impossible to do manually. Therefore small automated tests become an essential component of each and every phase of the Agile development process. Automation opens the gates for continuous integration and allows developers a chance to break free of the rigid, unrealistic Waterfall method.
What Does It Take for a Team to Succeed With Agile?
Moving to Agile is more than just changing a few processes and habits. It requires a change in culture. This is often the most difficult part for a team. For years, many development shops have tried and failed to adopt Agile because they didn’t understand just how differently an Agile team must function from a Waterfall team.
To be successful in Agile, the whole team has to be on board, changing habits, practices and — to some degree — the entire culture of the shop. You may need to hire some new developers with experience in onboarding test automation infrastructure and DevOps tools.
But if you can make the shift successfully, the whole team will enjoy a number of benefits:
- The smaller, more manageable tasks allow the project team to focus on high-quality development, testing, and collaboration; testers and developers collaborate daily, delivering high-quality software with less rework.
- Small and repeatable automated tests (Unit, API, Integration and Functional UI) catch issues before they have a chance to cause problems.
- Continuous integration keeps it all together and provides faster feedback.
- In the end, you can ship better code faster.
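To make “small and repeatable” concrete, here is what one such unit test can look like, using Python’s built-in unittest; the function under test is a made-up example:

```python
import unittest

# A hypothetical function under test -- the kind of small unit that gets
# exercised on every commit in a CI pipeline.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

Run with `python -m unittest` on every commit; CI executes the same suite automatically, which is what produces the fast feedback Agile depends on.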
You now understand the difference between Waterfall and Agile, right? Let me recap! Waterfall moves slowly. Using this method, projects are almost never completed on time and are usually infested with bugs. Agile breaks down tasks and allows for revisions, making it easy to ship modules of better code faster.
If you haven’t already, now is the time to get your development team together and talk about shifting to Agile.
Trying new things can be overwhelming and even scary, but if you think small, you can sprint your way to success. It’s the Agile way!
Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality — concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.
In LoadRunner 12.50, we introduced an alternative approach to deploying Linux Load Generators (LGs) using Docker. Docker is an open platform for developers and operations engineers to build, ship and run distributed applications. In our case, it gives you another quick way to deploy a Linux Load Generator for your load testing situation.
Keep reading to find out how you can utilize this capability.
Since 2011, we’ve asked respondents to the State of Medical Device Development Survey, “What are the top three pieces of information you wish you had better visibility into during the design control phase?”
Respondents have consistently identified risk controls as one of the top three, often putting it at the very top of the list. It would seem that poor visibility into risk is a persistent problem in the industry, so how can companies improve the visibility of risk controls?
Source: 2015 State of Medical Device Development Report
Not “Just a Compliance Check Box”
Historically, many medical device companies have viewed risk controls as something they had to do to appease auditors, and nothing more. As one participant at a recent AAMI/FDA Summit put it, “We’re doing risk management to check a box, file it away, and never look at it.”
Now, however, companies are starting to realize that risk management needs to begin early in the research phase and continue across the entire development and testing process. Not only is it better from a compliance standpoint, but it also results in defects being found much earlier, when they can be fixed for far less expense than if they’re found late in the process. A $500,000 fix found at the end of the development process may only cost a few thousand dollars to fix if caught earlier.
The first step to gaining more visibility into risk, then, is to recognize that risk controls are a “must have” component of the development process, and not just a “nice to have” feature completed simply to fill in a check box.
FDA Guidance
The FDA has recently started paying closer attention to risk controls. While this may be a fairly new practice, what’s not new are the FDA’s guidelines on risk controls; some of them date back to 1997.
Here’s a good explanation and example from the FDA’s Design Control Guidance For Medical Device Manufacturers (we italicized key points):
RISK MANAGEMENT AND DESIGN CONTROLS. Risk management is the systematic application of management policies, procedures, and practices to the tasks of identifying, analyzing, controlling, and monitoring risk. It is intended to be a framework within which experience, insight, and judgment are applied to successfully manage risk. It is included in this guidance because of its effect on the design process.
Risk management begins with the development of the design input requirements. As the design evolves, new risks may become evident. To systematically identify and, when necessary, reduce these risks, the risk management process is integrated into the design process. In this way, unacceptable risks can be identified and managed earlier in the design process when changes are easier to make and less costly.
An example of this is an exposure control system for a general purpose x-ray system. The control function was allocated to software. Late in the development process, risk analysis of the system uncovered several failure modes that could result in overexposure to the patient. Because the problem was not identified until the design was near completion, an expensive, independent, back-up timer had to be added to monitor exposure times.
The FDA also offers explicit recommendations for premarket approval that state that companies should “provide traceability to link together design, implementation, testing, and risk management.” They also recommend including a traceability matrix to help guide the auditor.
The FDA’s recommendations for software validation call for “an integration of software life cycle management and risk management activities,” and there’s even a Design Controls Decision Flowchart from the FDA, which includes the following recommendation (again, we italicized key points):
While the requirement for the conduct of risk analysis appears in Section 820.30(g) Design Validation, a firm should not wait until they are performing design validation to begin risk analysis. Risk analysis should be addressed in the design plan and risk should be considered throughout the design process. Risk analysis must be completed in design validation.
These are all good guidelines for controlling risk, and becoming familiar with them is another good step in raising the visibility of your risk controls.
Move Away from Documents
The next step is to move away from document-centric processes. This is another part of the development process we’ve been tracking in the State of Medical Device Development Survey, and while the industry as a whole is moving away from documents, it is doing so at a glacial pace.
Source: 2015 State of Medical Device Development Report
Gaining visibility into a document or spreadsheet is challenging, to say the least. Assuming the “latest copy” is easily accessible (and how often is that true?), actually understanding what’s in the document requires digging into the contents to find the relevant information.
To improve risk visibility, you need to link individual risk control artifacts to the relevant requirements and downstream artifacts. It is much easier to trace an individual risk artifact, review progress, and investigate related tasks and work items when development artifacts are linked.
However, manually linking risk controls to product requirements and downstream artifacts is a detailed, tedious, and error-prone process. It’s also a maintenance nightmare, because requirements and corresponding risks change in response to market needs and design tradeoffs over the course of the project. Coming up with unique identifiers for requirements is also a hassle, and another way errors can slip in.
Automated solutions can eliminate the tedium and decrease the chances for error, with the bonuses of providing an audit trail and building the traceability matrix for you.Automate Your Traceability
Using automated traceability solutions can make risk controls much more visible. With built-in linking between the different systems that manage requirements, hazards, test cases, and other development artifacts, these solutions let you trace artifacts from any point in the development process to any other point—forward or backward.
Changing requirements and design tradeoffs during the project will require your team to reassess safety risks due to hazards. When new or modified functional requirements are added, the potential for additional hazards arises. Identifying changes to hazards based on changes to requirements means performing the same analysis all over again— unless hazards are linked to existing requirements, which makes it easier to determine which hazards are affected.
Test planning and requirements verification are also part of reassessing risk. Ideally, test cases and test runs can be traced back to requirements, so that successful test case execution also ensures full requirements coverage. With traceability back to the safety analysis and risks, the team can demonstrate that their test suites directly verify their risk controls.
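The payoff of linking is that coverage questions become queries instead of document searches. A toy sketch of the idea, with invented artifact IDs:

```python
# Hypothetical linked-artifact store: each requirement records the hazards
# it mitigates and the test cases that verify it, so traceability questions
# become simple lookups.
LINKS = {
    "REQ-12": {"hazards": ["HAZ-3"], "tests": ["TC-7", "TC-8"]},
    "REQ-13": {"hazards": [], "tests": ["TC-9"]},
    "REQ-14": {"hazards": ["HAZ-4"], "tests": []},
}

def unverified_risk_controls(links: dict) -> list:
    """Requirements that mitigate a hazard but have no verifying test case."""
    return [req for req, l in links.items() if l["hazards"] and not l["tests"]]

def impacted_hazards(links: dict, changed_reqs: set) -> set:
    """Hazards to reassess when the given requirements change."""
    return {h for req in changed_reqs for h in links[req]["hazards"]}

print(unverified_risk_controls(LINKS))      # ['REQ-14']
print(impacted_hazards(LINKS, {"REQ-12"}))  # {'HAZ-3'}
```

Commercial traceability tools maintain exactly this kind of graph for you and render it as a traceability matrix for auditors.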
By linking risk controls with the relevant requirements and downstream artifacts, such as test cases and defects, automated traceability helps ensure that identified risks are successfully mitigated in the resulting product.
Not Just “Nice to Have”
Medical device companies can no longer think of risk controls as something that’s “nice to have.” Risk management is critical to the quality and safety of the product, and to proving compliance to auditors. With the FDA and other regulators giving more attention to risk management, your risk controls need to be highly visible and fully traceable.
For more information on increasing the visibility of risk, read our white paper, Exposing Risk throughout Your Development Lifecycle. We also have a collection of videos on managing risk.
We are very happy and excited to announce this new version of UrbanCode Deploy. The product team has done an awesome job! I think you’ll agree when I say this is a release that has something for every software delivery team striving to achieve continuous delivery. From mainframe and IBM i teams entering the DevOps world, to the Ops and Infrastructure teams looking to Cloud for efficiencies and economies of scale, to the AppDev teams experimenting with Docker, this release has much to offer:
- Application Templates. This is a big new feature – one that our largest enterprise customers have been asking for. If you know about UrbanCode Deploy component templates, this is basically the same concept applied to the full application. The idea is to design an application once and reuse it many times. Centralized Services can use application templates to enforce standards across an organization and facilitate the onboarding of new teams and their applications. Developers can pick up the app templates and quickly create an application deployment pattern that can be reused. The result is that more teams can automate deployments of their applications faster than ever before.
- Support for OpenStack Kilo. UrbanCode Deploy provides capabilities organizations need to reap the benefits of hybrid cloud as they pursue continuous delivery. With support for OpenStack Kilo, UrbanCode makes good on the promise to deliver open, easily portable cloud blueprints for full stack application deployment to multiple cloud and virtualized environments.
- Support for Docker containers. Many of our customers are now raising their hands to say they are experimenting with Docker and plan to put it to use in production over the next couple of years. Some internal IBM teams are also looking to Docker for simplified ways to deliver complex application infrastructure. The enhanced Docker plugin for UrbanCode Deploy provides the ability to map Docker images to components within IBM UrbanCode Deploy. The component template included in the plugin provides a process for running Docker containers.
- Enhanced support for z Systems platforms. Our enterprise clients are looking to automate more and more of the software delivery pipeline, which includes powerful legacy systems that will not be replaced any time soon. The UrbanCode Deploy team recognizes the importance of delivering tools to mainframe teams so they can streamline and automate deployments with the same tool as the distributed platform teams. The new release of UrbanCode Deploy includes tighter integration with Rational Team Concert and CICS, plus numerous enhancements, including support for Docker containers for Linux on z.
- New plugin for IBM i deployments. We didn’t forget IBM i as part of our mission to support complex, multiplatform environments. ARCAD, an IBM Partner with decades-long IBM i experience, has developed an UrbanCode Deploy plugin for deploying to IBM i environments.
- New integrations: UrbanCode Deploy has over 130 plugins. New ones include Siebel, PowerShell (beta) and CICS CM (also beta), plus updated plugins for Cloud Foundry, SalesForce and WebLogic. Search for these on the UCD plugin page.
This release is a great milestone in UrbanCode Deploy history. v6.2 reflects our dedication to supporting and growing deployment automation for cloud, new technologies, and legacy systems, with an ever-growing list of integrations. So take the next step: check out the complete release notes, learn more about UrbanCode Deploy, and see all the ways you can evaluate the software yourself!
Preceding some of last week's Jenkins 2.0 discussions, there had been some threads on whether we should move Jenkins to require Java 8. The introduction of Java 8 last year brought performance improvements and highly desirable API changes, which make developing Java-based applications (arguably) much easier than before. The release was followed earlier this year by the end-of-life announcement for Java 7; the writing is on the wall: upgrade to Java 8.
I wanted to answer the question "does it even make sense to force an upgrade to Java 8?" There are plenty of technical discussions that we can have in the community on whether or not this is the right approach, but my goal was to try and measure the current Jenkins install base for Java 8 preparedness.
Using access log data, I went through the millions of requests made to Jenkins infrastructure in 2015 and filtered them by the user agent that made each request.
NOTE: This data is totally not scientific and is only meant to provide a coarse insight into what versions of Java access Jenkins web infrastructure.
When Jenkins hits the mirror network, it doesn’t override the Java runtime’s default user agent, so many of the user agents for the HTTP requests look like Java/1.7.0_75. This indicates that the request came from a Java Runtime version 1.7.0 (update 75).
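The post doesn’t describe the tooling used for the filtering, but a user agent like Java/1.7.0_75 is easy to pick apart; a minimal Python sketch:

```python
import re

# Matches Java runtime user agents such as "Java/1.7.0_75":
# group 1 is the major version ("1.7.0"), group 2 the update number ("75").
UA_PATTERN = re.compile(r"^Java/(\d+\.\d+\.\d+)(?:_(\d+))?$")

def parse_java_ua(user_agent):
    """Return (major_version, update) for a Java user agent, or None for anything else."""
    match = UA_PATTERN.match(user_agent)
    if not match:
        return None
    major, update = match.group(1), match.group(2)
    return major, int(update) if update else 0

print(parse_java_ua("Java/1.7.0_75"))   # ('1.7.0', 75)
print(parse_java_ua("Mozilla/5.0"))     # None
```

Run over every request line, tallying the first element of each result gives exactly the kind of per-major-version breakdown shown below.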
Looking at the major JVM versions making (non-unique) requests to Jenkins infrastructure we have:
- 1.8.0: 21,278,960
- 1.7.0: 27,340,214
- 1.6.0: 4,148,833
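Put together, those counts mean Java 8 already accounts for roughly 40% of requests. A quick back-of-the-envelope check, using the figures above:

```python
# Non-unique request counts per major JVM version, from the access logs above.
counts = {"1.8.0": 21_278_960, "1.7.0": 27_340_214, "1.6.0": 4_148_833}

total = sum(counts.values())

# Percentage share of each major version, rounded to one decimal place.
shares = {version: round(100 * count / total, 1) for version, count in counts.items()}

print(shares)  # {'1.8.0': 40.3, '1.7.0': 51.8, '1.6.0': 7.9}
```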
This breaks down across various updates as well, which is also particularly interesting to me because many of these Java versions have long since had security advisories posted against them.
As I mentioned before, this is not a rigorous analysis of the access log data and is also not filtered by unique IP addresses. What I found most interesting though is that the Java 8 upgrade numbers are actually fairly strong, which I didn't expect. I expect that piece of the pie will continue to grow. Hopefully so much so that we're able to move over to Java 8 before the end of 2016!
Let's check out some concrete examples, shall we?
- Have you ever set up HTTP caching properly, created a class for it in your project, and called it done?
- What about creating a proper Web.config to configure static asset caching?
- And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
- What about that BaseController that you rebuild from project to project?
- And those extension methods that you use ALL the time but rebuild for each project...
If you answered yes to any of those questions... you are at great risk of having to code those again.
Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!
So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet
Once you have a good collection that is properly separated into a project and you feel ready to kick some monkey ass, the only way to go is to use NuGet to pack it together!
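A minimal .nuspec manifest for such a utility library might look like this (the package id, author, and file paths here are placeholders, not from the original post):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.UtilityBelt</id>
    <version>1.0.0</version>
    <authors>you</authors>
    <description>Reusable helpers: caching setup, base controllers, extension methods.</description>
  </metadata>
  <files>
    <!-- Ship the compiled assembly for the target framework you build against. -->
    <file src="bin\Release\MyCompany.UtilityBelt.dll" target="lib\net45" />
  </files>
</package>
```

Running `nuget pack MyCompany.UtilityBelt.nuspec` then produces the .nupkg file you'll publish.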
Check out the reference to make sure that you do things properly.

NuGet - Publishing
OK, you've got a hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.
If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!
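Pointing NuGet at that share takes only a package-source entry in nuget.config; for example (the UNC path is, of course, a placeholder):

```xml
<configuration>
  <packageSources>
    <!-- Internal feed: any folder or UNC share containing .nupkg files works. -->
    <add key="InternalFeed" value="\\fileserver\nuget-packages" />
  </packageSources>
</configuration>
```

The same thing can be done from the command line with `nuget sources add -Name InternalFeed -Source \\fileserver\nuget-packages`.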
Now just make sure to increment your version number on each release by using the SemVer convention.

Reap the profit
OK, no... not really. You probably won't be making money anytime soon with this library. At least not real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or for another client.
The only thing you'll do is import your magic package, use it and boom. This task that they planned would take a whole day? Got finished in minutes.
As you build up your toolkit, more and more tasks will become easier to accomplish.
The only thing left to consider is what NOT to put in your toolkit.

Last minute warning
If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow you to do that, but double check with your employer.
If you run your own company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own, since you billed them for it.
In case of doubt, double check with a lawyer!
As we head into the EuroSTAR Conference expo today, let’s take a moment and check out some amazing photos. Our very own uTest member, Constantine Buker, is our onsite tester correspondent. Check in this week as we have some more EuroSTAR related content pieces on the way. Here’s what he shared with us so far. Also, be sure to […]
We’re happy to announce that Team Foundation Server (TFS) integration with TestTrack is now available. This integration means developers who are using TestTrack 2015.1.2 and later can attach TFS files and changesets to issues, test cases, or requirements without leaving their development environment. It also provides end-to-end traceability between TestTrack artifacts and TFS source files.

Matrix report with TFS source files
See TestTrack Third-Party Integrations for supported TFS versions.
The post Now Available! Team Foundation Server Integration with TestTrack appeared first on Blog.
With so many exciting releases in our midst (the new version of Apple TV was just released on Oct. 30, and the new version of uTest is coming soon), we have decided to launch a contest that will both prepare uTesters for the new uTest site while giving them an opportunity to win a cool new gadget. Since […]
The post New Contest Alert: Update Your Profile and Win an Apple TV! appeared first on Software Testing Blog.