Imagine this hypothetical conversation I didn’t have with someone last week…
THEM: “Is there a DevOps framework?”
ME: “Noooooo, it doesn’t work like that”
ME: “Well DevOps is more like a philosophy, or a set of values and principles. The way you apply those principles and values varies from one organisation to the next, so a framework wouldn’t really work, especially if it was quite prescriptive, like Scrum”
THEM: “But I really want one”
ME: “Ok, I’ll tell you what, I’ll hack an existing framework to make it more devopsy, does that work for you?”
THEM: “Take my money”
So, as you can see, in a hypothetical world, there is real demand for a DevOps framework. The trouble with a DevOps framework – as with anything to do with DevOps – is that nobody can actually agree what the hell DevOps means, so any framework is bound to upset a whole bunch of people who simply disagree with my assumption of what DevOps means.
So, with that massive elephant in the room, I’m just going to blindly ignore it and crash on with this experimental little framework I’m calling DevOpScrum.
In my view (which is obviously the correct view) DevOps is a lot more than just automation. It’s not about Infrastructure as Code and Containers and all that stuff. All that stuff is awesome and allows us to do things in better and faster ways than we ever could before, but it’s not the be-all-and-end-all of DevOps. DevOps for me is about the way teams work together to extract greater business value, and produce a better quality solution by collaborating, working as an empowered team, and not blaming others (and also playing with cool tools, obvs). And if DevOps is about “the way teams work together” then why the hell shouldn’t there be a framework?
The best DevOps framework is the one a team builds itself, tailored specifically for that organisation’s demands, and sympathetic to its constraints. Incidentally, that’s one reason why I like Kanban so much: it’s so adaptable that you have the freedom to turn it into whatever you want, whereas Scrum is more prescriptive, and if you meddle with it you not only confuse people, you anger the Scrum gods. However, if you don’t have time to come up with your own DevOps framework, and you’re familiar with Scrum already, then why not just hack the Scrum framework and turn it into a more DevOps-friendly solution?
Which brings us nicely to DevOpScrum, a DevOps Framework with all the home comforts of Scrum, but with a different name so as not to offend Scrum purists.
The idea with DevOpScrum is to basically extend an existing framework and insert some good practices that encourage a more operational perspective, and encourage greater collaboration between Dev and Ops.
How does it work?
Start by taking your common-or-garden Scrum framework, and then add the following:
Operability features on the backlog
A Definition of Done that includes “deployable, monitored, scalable” and so on (i.e. one that doesn’t just focus on “has the product feature been coded?”)
Continuous Delivery as a mandatory practice!
And there you have it. A scrum-based DevOps Framework.
Let’s look into some of the details…
We’ll start with The Team…
A product owner (who appreciates operability – what we called “Non-Functional Requirements” in the olden days. That term is so not cool anymore. It’s less cool than bumbags).
Devs, Testers, BAs, DBAs and all the usual suspects.
Infrastructure/Ops people. Some call them DevOps people these days. These are people who know infrastructure, networking, the cloud, systems administration, deployments, scalability, monitoring and alerting – that sort of stuff. You know, the stuff Scrum forgot about.
Roles & Responsibilities
Pretty similar to Scrum, to be fair. The Product Owner has ultimate responsibility for deciding priorities and is the person you need to lobby if you think your concerns should be prioritised higher. For this reason, the Product Owner needs to understand the importance of operability (i.e. the ability to deploy, scale, monitor, maintain and so on), which is why I recommend Product Owners in a DevOps environment get some good DevOps training (by pure coincidence we run a course called “The DevOps Product Owner” which does exactly what I just described! Can you believe that?!).
There’s no scrum master in this framework, because it isn’t Scrum. There’s a DevOpScrum coach instead, who basically fills the scrum master’s coaching role and is responsible for evangelising and improving the application of the DevOps values and principles.
DevOps Engineers – One key difference in this framework is that the team must contain the relevant infrastructure and Ops skills to get stuff done without relying on an external team (such as the Ops team or Infrastructure team). This role will have the skills to provide Continuous Delivery solutions, including deployment automation, environment provisioning and cloud expertise.
Sprints

Yep, there are sprints. 2 weeks is the recommended length. Anything longer than that and it’s hardly a sprint, it’s a jog. Whenever I’ve worked in 3-week sprints in the past, I’ve usually seen people take it really easy in the first couple of weeks, because the end of the sprint seemed so far away, and then work their asses off in the final week to hit their commitments. It’s neither efficient nor sustainable.
The Backlog

Another big difference from Scrum is that the Product Backlog MUST contain operability features. The backlog is no longer just about product functionality; it’s about every aspect of building, delivering, hosting, maintaining and monitoring your product. So the backlog will contain stories about the infrastructure that the application(s) run on, their availability rates, disaster recovery objectives, deployability and security requirements (to name just a few). These things are no longer assumed, or left outside of the team – they are treated as “first-class citizens”, so to speak.
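The spirit of treating operability as first-class backlog items can be sketched as data. Everything below – the story titles, the categories, the ratio check – is invented purely for illustration, not a real tool:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    category: str  # "feature" or "operability"

# A backlog where operability stories sit alongside product features.
backlog = [
    Story("Add discount codes to checkout", "feature"),
    Story("Automate blue/green deployment", "operability"),
    Story("Define disaster recovery objective (RTO < 1h)", "operability"),
    Story("Show order history on account page", "feature"),
]

def operability_ratio(backlog):
    """Fraction of backlog items addressing operability rather than features."""
    ops = sum(1 for s in backlog if s.category == "operability")
    return ops / len(backlog)

print(operability_ratio(backlog))  # 0.5 for the sample backlog above
```

A quick check like this can make it obvious in sprint planning when the backlog has drifted back to being purely feature-driven.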
I recommend twice-weekly backlog grooming sessions of about an hour, to make sure the backlog is up-to-date and that the stories are in good shape prior to Sprint Planning.
Sprint Planning

Because the backlog is different, sprint planning will be subtly different as well. Obviously we’ve got a broader scope of stories to cover now that we’ve got operational stories in the backlog, but it’s important that everyone understands these “features”, because without them, you won’t be able to deliver your product in the best way possible.
I encourage the whole team to be involved, as per scrum, and treat each story on merit. Ask questions and understand the story before sizing it.
I recommend INVEST as a guiding principle for stories. Don’t be tempted to put too much detail in a story if it’s not necessary. If you can get the information through conversation with people, and they’re always available, then don’t bother writing that stuff up in detail, it’s just wasting time and effort.
The difference between Scrum and DevOpScrum with respect to stories is that in DevOpScrum we expect to see a large number of stories not written from an end-user’s perspective. Instead, we expect to see stories written from an operations engineer’s perspective, or an auditor’s perspective, or a security and compliance perspective. This is why I often depart from the As a… I want… So that… template for non-“user” stories, and go with a “What:… Why:…” approach, but it doesn’t matter all that much.
The Daily Stand-up

Same as Scrum, but if I catch anyone doing that tired old “what I did yesterday, what I’m doing today, blockers…” nonsense I’ll personally come and find you and make a really, really annoying noise.
Please come up with something better, like “here’s what I commit to doing today and if I don’t achieve it I’ll eat this whole family pack of Jelly Babies” or something. Maybe something more sensible than that. Maybe.
The Sprint Retrospective

At the end of your sprint, get together and work out what you’ve learned about the way you work, the technology and tools you’ve used, the product you’re working on and the general agile health of your team. Also take a look at how the overall delivery of your product is looking. Most importantly, ask yourselves if you’re collaborating effectively, in a way that’s helping to produce a well-rounded product that’s not only feature-rich but operationally polished as well.
Learn whatever you can and keep a record of what you’ve learnt. If any of these lessons can be turned into stories and put on the backlog as improvements, then go for it. Just make sure you don’t park all of your lessons somewhere and never visit them again!
Deliver Working Software
As with Scrum, in DevOpScrum we aim to deliver something every 2 weeks. But it doesn’t have to just be a shiny front-end to demo to your customers, you could instead deliver your roll-back, patching or Disaster Recovery process and demo that instead. Believe it or not, customers are concerned with that stuff too these days.
Continuous Delivery

I personally believe this should be the guiding practice behind DevOpScrum. If you’re not familiar with Continuous Delivery (CD) then Dave Farley and Jez Humble’s book (entitled Continuous Delivery, for reasons that become very obvious when you read it) is still just about the best material on the subject (apart from my blog, of course).
As with Continuous Integration, CD is more than just a tool, it’s a set of practices and behaviours that encourage good working practices. For example, CD requires high degrees of automation around testing, deployment, and more recently around server provisioning and configuration.
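The fail-fast character of such a pipeline is easy to sketch. This is a toy illustration, not a real CD tool: the stage names and lambdas are stand-ins for the automated test runners, provisioning tools and deployment scripts a real pipeline would call out to:

```python
def run_pipeline(stages):
    """Run stages in order; return (succeeded, log of stage results)."""
    log = []
    for name, stage in stages:
        ok = stage()
        log.append((name, ok))
        if not ok:
            return False, log  # fail fast: never deploy on a red build
    return True, log

# Hypothetical stages; each callable stands in for real automation.
stages = [
    ("unit tests", lambda: True),
    ("provision environment", lambda: True),
    ("acceptance tests", lambda: False),  # this failure blocks the release
    ("deploy to production", lambda: True),
]

ok, log = run_pipeline(stages)
print(ok)                    # False
print([n for n, _ in log])   # deploy stage is never reached
```

The point of the sketch is the ordering guarantee: a change only reaches the deployment stage after every earlier automated check has passed.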
So there it is, in some of its glory: the DevOpScrum framework (OK, it’s just a blog post about a framework – there’s enough material here to write an entire book if any reasonable level of detail were required). It’s nothing more than Scrum with a few adjustments to make it more DevOps-aligned.
As with Scrum, this framework has the usual challenges – it doesn’t cater for interruptions (such as production incidents) unless you add in a triage function to manage them.
There’s also a whole bunch of stuff I’ve not covered, such as release planning, burn-ups, burn-downs and Minimum Viable Products. I’ve decided to leave these alone as they’re simply the same as you’d find in scrum.
Does this framework actually work? Yes. The truth is that I’ve actually been working in this way for several years, and I know other teams are also adapting their scrum framework in very similar ways, so there’s plenty of evidence to suggest it’s a winner. Is it perfect? No, and I’m hoping that by blogging about it, other people will give it a try, make some adjustments and help it evolve and improve.
The last thing I ever wanted to do was create a DevOps framework, but so many people are asking for a set of guidelines or a suggestion for how they should do DevOps, that I thought I’d actually write down how I’ve been using Scrum and DevOps for some time, in a way that has worked for me. However, I totally appreciate that this worked specifically for me and my teams. I don’t expect it to work perfectly for everyone.
As a DevOps consultant, I spend much of my time explaining how DevOps is a set of principles rather than a set of practices, and the way in which you apply those principles depends very much upon who you are, the ways in which you like to work, your culture and your technologies. A prescriptive framework simply cannot transcend all of these things and still be effective. This is why I always start any DevOps implementation with a blank canvas. However, if you need a kick-start, and want to try DevOpScrum then please go about it with an open mind and be prepared to make adjustments wherever necessary.
The decline of hardware proxies started a long time ago. They were too expensive and inflexible even before cloud computing became mainstream. These days, almost all proxies are based on software. The major difference is what we expect from them. While, until recently, we could define all redirections in static configuration files, that has changed in favor of more dynamic solutions. Since our services are constantly being deployed, redeployed, scaled, and, in general, moved around, the proxy needs to be capable of updating itself with these ever-changing end-point locations.
We cannot wait for an operator to update configurations with every new service (or release) we deploy. We cannot expect them to monitor the system 24/7 and react to a service being scaled as a result of increased traffic. We cannot hope that they will be fast enough to catch a node failure that results in all services being automatically rescheduled to a healthy node. Even if we could expect such tasks to be performed by humans, the cost would be too high, since an increase in the number of services and instances we're running would mean an increase in the workforce required for monitoring and reactive actions. And even if such a cost were not an issue, we are slow. We cannot react as fast as machines can, and the discrepancy between a change in the system and the proxy's reconfiguration would, at best, result in performance issues.
Among software-based proxies, Apache ruled the scene for a long time. Today, its age shows. It is rarely the weapon of choice, due to its inability to perform well under stress and its relative inflexibility. Newer tools like nginx and HAProxy have taken over. They are capable of handling a vast number of concurrent requests without putting a severe strain on server resources.
Even nginx and HAProxy are not enough by themselves. They were designed with static configuration in mind and require us to add additional tools to the mix. An example would be a templating tool like Consul Template, which can monitor changes in a service registry, modify proxy configurations, and reload them.
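The pattern those tools implement is simple to sketch: watch the registry, re-render the proxy's upstream configuration, and reload only when something actually changed. This is a toy version of the idea – the registry dict and rendering function are invented for illustration and bear no relation to Consul Template's actual template language or API:

```python
def render_upstreams(registry):
    """Render an nginx-style upstream block per service from registry data."""
    blocks = []
    for service, instances in sorted(registry.items()):
        servers = "\n".join(f"    server {addr};" for addr in sorted(instances))
        blocks.append(f"upstream {service} {{\n{servers}\n}}")
    return "\n".join(blocks)

def reconfigure(old_config, registry):
    """Return (new config, reloaded?); only 'reload' the proxy on a change."""
    new_config = render_upstreams(registry)
    return new_config, new_config != old_config

registry = {"orders": {"10.0.0.1:8080", "10.0.0.2:8080"}}
config, reloaded = reconfigure("", registry)
print(reloaded)  # True: first render always differs from the empty config

# A new instance is scheduled; the watcher regenerates and reloads.
registry["orders"].add("10.0.0.3:8080")
config2, reloaded2 = reconfigure(config, registry)
print(reloaded2)  # True: the upstream block gained a server line
```

A real watcher would run this in a loop driven by registry change notifications rather than polling by hand, but the render-diff-reload cycle is the same.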
Today, we see another shift. Typically, we would use proxy services not only to redirect requests, but also to perform load balancing among all instances of a single service. With the emergence of the (new) Docker Swarm (shipped with the Docker Engine release v1.12), load balancing (LB) is moving towards the software-defined network (SDN). Instead of performing LB among all instances itself, a proxy redirects a request to an SDN end-point which, in turn, performs load balancing.
Service architectures are switching towards microservices and, as a result, deployment and scheduling processes and tools are changing. Proxies, and the expectations we have of them, are following those changes.
The deployment frequency is becoming higher and higher, and that poses another question: how do we deploy often without any downtime?

The DevOps 2.0 Toolkit
If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.
The book is about different techniques that help us architect software in a better and more efficient way with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, the design of self-healing systems capable of recuperating from both hardware and software failures, and about centralized logging and monitoring of the cluster.
In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and even more tools.
Based on Docker and the Kubernetes container cluster manager, Red Hat OpenShift is the next generation container platform for developing, deploying and running containerized applications conveniently and at scale. In this article, Chris Morgan (@cmorgan_cloud), Technical Director for OpenShift Ecosystem, and Martin Etmajer (@metmajer), Technology Lead at the Dynatrace Innovation Lab, discuss why OpenShift and the Dynatrace […]
With the introduction of OneAgent version 107 it’s now possible to access Docker container log files related to specific applications (i.e., process groups).

Different approaches to gathering log data
The Application Performance Management market has matured over the past few years. There are now a number of intelligent, automated approaches available for managing application lifecycle. Still, most applications continue to rely on logging as a foundation for diagnostics and tracing. While this may not be bad practice, it does create some challenges. For example, writing logs in dynamic, microservices-based environments involves the problem of diagnostics data persistence. Physical disks often aren’t available, containers are volatile, and saving logs as local disk files isn’t an option because such files disappear when the containers they reside in are shut down.
Different infrastructure and platform vendors approach this challenge from different angles. Some vendors advise that you should export logs to external storage. Others gather logs within proprietary frameworks, making them available and persistent long after the services they relate to have stopped.
Docker solves this dilemma by gathering logging information from all containers’ standard output and error streams and saving the data to a central repository that’s governed by a so-called logging driver. By default, the json-file logging driver is used and log messages are made available via the docker logs command. This is inconvenient, however: the driver only works for individual containers (you must provide an ID as a command-line parameter), filtering is problematic, and output handling is time-consuming. A better option may be to send log information to external sources using the other available drivers, although this approach requires separate external (often expensive) tools to store and manage the data. To learn more about the issues you may encounter when confronting the challenges of Docker logging, have a look at this great blog series by Yoanis Gil Delgado.

The Dynatrace approach
Here at Dynatrace, we’ve been working hard to help you tackle the management challenges involved in Docker monitoring. Our goal has been to provide an easy-to-use solution for analyzing, filtering, and parsing all the log information that applications generate, regardless of the number and stability of your Docker containers. We’ve also strived to save you the burden of setting up expensive storage solutions for your log data.
With OneAgent version 107 it’s now possible to access all Docker container log data related to specific applications (i.e., process groups). The workflow is simple and there’s no need for you to buy additional tools or pay for storage.

To analyze log data for a specific application
- Select Technologies from the navigation menu.
- To filter the Process group list at the bottom of the page, select the technology type that your application’s process group runs on (you may need to scroll down to view the list).
- Select the process group you want to analyze.
The process group list entry then expands, revealing an overview chart and the process group’s Process group details button.
- Click the Process group details button to view this process group’s details page (see example below).
On the example Process group details page above, note the high dynamics of this process group, expressed in the highly variable number of detected processes (i.e., Docker containers). Because log files for this process group have been detected by Dynatrace, a Log files tab is provided. The log files for this process group are available for download from the Log files tab (see example below). Note that because this monitored process uses the json-file logging driver provided by Docker to generate its logs, a Docker container logs log file is available for download. Docker container log files can also be accessed via the Log viewer.
Best of all, with Dynatrace Docker log analytics, you don’t need to know which container images a process group runs on. There’s no need to know the container names, IDs, or even the names of the hosts where the images are running. As long as the log data still exists in Docker’s host logging-driver history, you can find the data with the Log viewer.
Each log entry includes information about the corresponding container image and ID that logged the message and the type of output that was used. You can use this information for query filtering and thereby focus your analysis only on relevant containers and images.
Note that container IDs change with each new line in the log message stream (see example below). Even though this application (i.e., process group) is distributed across multiple hosts, processes, and containers, all log messages associated with this application are analyzed and presented as a single virtual log file.
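The "single virtual log file" idea amounts to merging per-container log streams into one time-ordered sequence that keeps the container ID on each line. The sketch below illustrates only that merging concept – the container IDs and log entries are fabricated, and Dynatrace's actual implementation is of course far more involved:

```python
# Fabricated per-container log entries, loosely mirroring the fields that
# Docker's json-file driver records for each line.
container_logs = {
    "c1f2e3": [
        {"log": "starting worker", "time": "2016-11-30T10:00:02Z"},
        {"log": "job done",        "time": "2016-11-30T10:00:09Z"},
    ],
    "9a8b7c": [
        {"log": "starting worker", "time": "2016-11-30T10:00:01Z"},
        {"log": "job failed",      "time": "2016-11-30T10:00:05Z"},
    ],
}

def virtual_log(container_logs):
    """Flatten per-container logs into one time-ordered list, keeping the
    container ID on each entry so lines can still be filtered by origin."""
    merged = [
        (entry["time"], cid, entry["log"])
        for cid, entries in container_logs.items()
        for entry in entries
    ]
    # ISO-8601 timestamps sort chronologically as plain strings.
    return sorted(merged)

for time, cid, msg in virtual_log(container_logs):
    print(f"{time} [{cid}] {msg}")
```

Because every merged entry retains its container ID, filtering the virtual log back down to a single container (or image) is just a matter of matching on that field.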
We hope that this addition to Dynatrace log analytics functionality saves you time in quickly accessing application-specific log diagnostics data and in triaging application performance issues. With this enhancement, you can now navigate from a problem details page that indicates a faulty application directly to the log files of the application’s supporting processes, in just a few clicks—regardless of how dynamic your underlying infrastructure is.
We are happy to announce OSS-Fuzz, a new Beta program developed over the past years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software.
Open source software is the backbone of the many apps, sites, services, and networked things that make up "the internet." It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it.
Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.
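The core idea is simple enough to sketch in a few lines. This is a deliberately toy fuzzer with a fabricated bug – real engines like AFL and libFuzzer are coverage-guided and vastly more effective – but it shows how random inputs surface crashes that casual review misses:

```python
import random

def parse(data: bytes):
    # Deliberately buggy "parser": a specific input shape crashes it, the
    # kind of defect a routine code audit can easily overlook.
    if len(data) >= 4 and data[0] == 0xFF:
        raise ValueError("unhandled frame header")  # the hidden bug
    return len(data)

def fuzz(target, iterations=10_000, seed=0):
    """Throw random byte strings at target; return the first crashing input."""
    rng = random.Random(seed)
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 9)))
        try:
            target(data)
        except Exception:
            return data  # a crash reproducer, ready to minimise and report
    return None

crasher = fuzz(parse)
print(crasher is not None)  # random inputs hit the crash within the budget
```

A crashing input like this is exactly what gets filed against the maintainer: a concrete reproducer, which sanitizers then turn into an actionable stack trace.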
In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc.), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc.) and sometimes even logical bugs.
OSS-Fuzz's goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz.
Early successes

Our initial trials with OSS-Fuzz have had good results. An example is the FreeType library, which is used on over a billion devices to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. Werner Lemberg, one of the FreeType developers, was an early adopter of OSS-Fuzz. Recently the FreeType fuzzer found a new heap buffer overflow only a few hours after the source change:
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa
READ of size 2 at 0x615000000ffa thread T0
SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)
#0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31
OSS-Fuzz automatically notified the maintainer, who fixed the bug; then OSS-Fuzz automatically confirmed the fix. All in one day! You can see the full list of fixed and disclosed bugs found by OSS-Fuzz so far.
Contributions and feedback are welcome

OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns through ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.
OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it needs to have a large user base and/or be critical to Global IT infrastructure, a general heuristic that we are intentionally leaving open to interpretation at this early stage. See more details and instructions on how to apply here.
Once a project is signed up for OSS-Fuzz, it is automatically subject to the 90-day disclosure deadline for newly reported bugs in our tracker (see details here). This matches industry's best practices and improves end-user security and stability by getting patches to users faster.
Help us ensure this program is truly serving the open source community and the internet which relies on this critical software, contribute and leave your feedback on GitHub.
Product complexity continues to increase in the life sciences industry, making it more important than ever for companies to effectively leverage traceability in their product development process. However, more than half of industry experts surveyed say they are unable to use traceability for more than compliance purposes, according to the 2016 Life Sciences Product Development Survey Report. What prevents them from making better use of this key product development component?
Effective traceability allows teams to document the life of the product development process, so that every artifact can be easily traced all the way back to the originator of the initial request.
By creating relationships and links between development artifacts, effective traceability allows stakeholders, managers, and regulators to quickly review, from a high level, every action and decision within the product development lifecycle—or drill down to detailed information, as needed. Every relationship and artifact change can be traced all the way back to the original requirement, providing instant visibility into product data.
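The "trace back to the original requirement" idea boils down to following links between artifacts. The artifact names and the flat link structure below are invented for illustration – a real traceability tool manages richer, many-to-many relationships:

```python
# Each artifact records the artifact it was derived from; the originating
# requirement has no parent. All IDs here are hypothetical.
links = {
    "REQ-1": None,        # the originating requirement
    "SPEC-7": "REQ-1",    # specification derived from the requirement
    "CODE-42": "SPEC-7",  # implementation of the specification
    "TEST-99": "CODE-42", # verification of the implementation
}

def trace_to_origin(artifact, links):
    """Follow 'derived from' links until the originating artifact is reached."""
    chain = [artifact]
    while links[artifact] is not None:
        artifact = links[artifact]
        chain.append(artifact)
    return chain

print(trace_to_origin("TEST-99", links))
# ['TEST-99', 'CODE-42', 'SPEC-7', 'REQ-1']
```

Walking the chain in the other direction – from a requirement out to every dependent artifact – is what gives reviewers and regulators the drill-down view described above.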
From a compliance standpoint, traceability tracks, relates, and verifies each step and activity within the development process. It helps life sciences organizations better monitor and analyze:
- Product development projects
- Verification and validation activities
- Internally validated IT systems
- Product approvals
From a business-value standpoint, an integrated product development solution:
- Improves visibility and collaboration among stakeholders, reducing errors and duplication of effort.
- Provides managers with timely and accurate information they need to make informed business decisions and keep the development process moving.
- More quickly defines and mitigates issues and challenges to speed product development.
- Boosts performance and eliminates wasteful costs by improving efficiency in every area of the development process.
In addition to meeting regulatory compliance, traceability can help companies bring quality products to market more quickly, safely, and profitably. Nearly half of survey respondents (46%) said they are able to leverage traceability to help manage risk, make better decisions, and identify validation coverage, among other key areas.
Barriers to Leveraging Traceability
With such significant benefits available, what prevents companies from taking full advantage of traceability? Why are some only using it to check off an item on a compliance list?
The 54% of survey respondents who have been unsuccessful in leveraging traceability identified some common barriers, including:
- It takes too long to find the report. (21%)
- They have no visibility into relevant data. (18%)
- The information is often inaccurate. (16%)
The most common barrier they identified, however, was a lack of the right tools and technology (36%). Only 14% of respondents reported using dedicated traceability management tools.
So what are the rest using? Most seem to be assembling traceability reports and matrices manually, using Microsoft Word and Excel. More than two-thirds (67%) said their development process is heavily centered on documents, rather than artifacts; of this group, 85% said they use Word and Excel.
Manually performing traceability, however, can be time-consuming and costly, which may explain why these respondents are unable to get more use out of their traceable data.

Improving Traceability Tools
Survey respondents who use dedicated traceability management tools are mostly satisfied with the compliance enforcement, accuracy of information, and access to relevant information their tools provide. The three top areas respondents are dissatisfied with include:
- Communication and collaboration (35%)
- Automatic change notifications (35%)
- Bi-directional linking and traceability (33%)
When asked for their recommendations for improving their traceability management, the leading answer was “better tools.”
The 2016 Life Sciences Product Development Survey Report is now available. Download your free copy here.
To learn more about traceability’s benefits, as well as how to improve your traceability, explore our library of white papers and guides.
As a test manager, what drives my energy is finding problems and reporting them in the best possible way. Honestly, seeing more problems in the application drives me and my team. We get excited if more bugs are discovered. We celebrate every new find, and I can see the shining faces of my team members. Any news of erratic behavior, an application crash, unstable code, an environment being down – it makes us feel happy. Often I think: are we testers sadists?
Sitting next to me is my friend and colleague, the PM. He is a worried man. Every time someone in my team stands up and asks for some clarification, his heart rate goes up, and he must be thinking – oh no... one more bug!!! During our bug triage meetings, I proudly announce "40 new bugs today, which takes this week's overall tally to 370, 80 of them critical". My PM friend, after regaining his calm, says "OK – how many fixed bugs have been retested? Which areas of the application are relatively stable? What positive news can we take to our stakeholders?"
See the clear change in perspective? The PM wants to see what is working, what is working fine, and what positive news we can report; the test manager wants to boast about the new problems the testing team has found. It makes sense for testers and test managers to get into the shoes of PMs or the Dev team once in a while to understand what these folks think.
While testers should not lose sight of finding problems and making sure they are reported well, collaborating with the PM, Dev and stakeholders to achieve a convergence of the code towards the release/go-live date can often be very useful from an overall project standpoint.
More often than not – due to changing requirements, unstable code and challenging deadlines – everyone in the team except the testers loses sight of go-live. It is like being in a tunnel with no light from the other end. PMs and the Dev team will be watching with clenched fists for the end of the testing cycle.
The friction between Dev, Test and PM is often due to these differences in perspective and motive, and a lack of communication about the big picture of the go-live date.
Dear testers – when you find yourself in such situations, show empathy towards your fellow team members. Pause sometimes and ask: can I see the project through their eyes? What are their worries, and how can I help?
This will go a long way towards good team bonding, and you will earn the label of a "mature tester".
AWS CodePipeline is a more recent addition to Amazon Web Services – allowing development teams to push code changes from source check-in all the way into production in a very automated way. While code pipelines like that are not new (e.g: XebiaLabs, Electric Cloud, Jenkins Pipeline), Amazon provides seamless integration options for AWS CodeCommit, S3, […]
The post Scaling DevOps Deployments with AWS CodePipeline & Dynatrace appeared first on about:performance.
Today we are proud to announce that HPE StormRunner Load is now available in the AWS Marketplace! Keep reading to find out more about this availability and how to maximize these capabilities for yourself.
The worst interview I ever had lasted 8 hours. It started with a morning quiz on development methods and technology stacks. After that I went to a lunch with 15 strangers and was asked what I do outside of work for fun, and why I wanted to work there. The second half of the day was spent testing their software while my interviewers seemed to be working on their own projects. My last interaction that day was an hour-long introduction to their API and some questioning about how I might test it. After the full day, I received a “thanks, but no thanks” email with no feedback on their decision.
I think interviews should be better than that, and can be much better. I want to talk a little about the failings of average interviews for software testers, how I like to interview, and offer a little strategy to help put each candidate in the best light.
The Typical Tester & QA Interview
Not very many people study software testing (especially not deeply), so interviews tend to go after the same superficial topics. Nearly every interview I have been to started with a series of questions about words – what is regression testing, what is the difference between bug severity and priority, what is a smoke test, what is black box or white box testing. Quizzing a candidate on these surface definitions is damaging. There are no standard definitions in software testing, not really, which means the interview becomes a guessing game or a false-rejection/false-acceptance game. Answering those questions in a way that doesn’t align with what the interviewer thinks has got me into trouble more than once.
When I have to answer those questions now, I usually say something like “I have heard people use that term in different ways, but when I say it I mean … What do you mean when you use the term?” This changes the conversation from a quiz, to an effort to understand each other and come to some sort of shared understanding.
The other pattern I see is interviewers focusing on things that don’t give much insight about how the person they are talking with will fit in a testing role. Several years ago, I interviewed at a company for the role of a senior tester (also see the related article Do We Still Need Dedicated Testers?). The company was trying to transition to agile, and most of the questions they asked me were themed around agile development and process. We spent about an hour talking about different scenarios – what would I do if I got push-back from developers on a bug, what was my view of testers in an agile context, when did I think testers should first be involved in a new feature.
Those scenario questions are certainly better than the definition game, and might be an interesting aside in a technical interview. They might be a good litmus test for culture fit/context fit. They can also be a distraction from discovering how good someone is at testing software. I think of these types of questions as interesting starting points, not the real meat of an interview. I want to go deeper and see how people actually test real software.
Using Testing Challenges
My favorite way to interview testers is to have them test software. That might be whatever product I am working on right now, but that can be difficult because there is a lot of background and context an outsider will be missing. Simple challenges, something like the palindrome test challenge I co-wrote with Paul Harju and Matt Heusser, tend to work well for interviews. I like to pose this as an open-ended challenge, and start the exercise by saying “test this!”. You can characterize the skill of a tester by how they respond to the challenge.
A junior tester might enter a few values at boundaries – something that is definitely a palindrome, something that is definitely not a palindrome, and maybe a couple of strings that have special and Unicode characters. They might find a bug in the webpage or offer some design advice to help make it more usable. When you ask if the product is ready to ship, they’ll give you a confident yes or no.
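Those first junior-level probes can be sketched as a handful of boundary checks. The `is_palindrome` function below is a hypothetical stand-in for whatever the challenge application exposes - its name and behaviour are assumptions for illustration, not part of the original challenge:

```python
# Stand-in implementation of the system under test. In a real interview
# the candidate would be probing someone else's code or UI; this naive
# version simply compares the string to its reverse.
def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards."""
    return text == text[::-1]

# Boundary-style checks a junior tester might start with.
cases = [
    ("racecar", True),      # an obvious palindrome
    ("palindrome", False),  # an obvious non-palindrome
    ("", True),             # empty string: is this a palindrome? worth asking
    ("a", True),            # single character
    ("Racecar", False),     # case sensitivity: naive reversal says no
    ("né en", False),       # whitespace and non-ASCII characters
]

for text, expected in cases:
    actual = is_palindrome(text)
    print(f"{text!r}: got {actual}, expected {expected}")
```

Even this tiny list surfaces questions a junior tester might not think to ask out loud: should the comparison be case-insensitive? Should whitespace count? Writing the cases down makes those ambiguities visible.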
A senior tester should take the exercise meta and start asking deeper questions that look past the web browser. How much time do I have to test? Who is the customer of this product and what do they value? What are the development team and product manager concerned about? Are there any lingering aspects of this product that are still in development? This line of questioning shows that they are capable of framing and directing their work.
Problem zero in testing is always the question of what I should be working on at this minute, and these answers will help them zero in on that. If you ask this person whether we should ship this version of the product, you might get some hesitation. Rather than a confident yes or no, they will probably share their feelings on the product along with the problems they discovered, and ask more questions about what the customer needs. A senior tester usually doesn’t want to be the gatekeeper, but they can help the team fill out missing information around the release decision.
Interview Style Bias
I have interviewed people that did great during the interview, and then either did not fit in or struggled with the work when it came to the job. I have also interviewed other people who struggled during the interview - they couldn’t answer some questions and gave incomplete answers to others - but ended up being the backbone of a team after they were hired.
The most popular style of interviewing, the round-table where several people from the company sit around one person interviewing for the job, is also the worst for software people, who are often introverted. Introverted people need time and space to think, and this style can often turn into rapid-fire questioning or coding on a whiteboard as if it were performance art. Introverts that are fully capable of doing the work might get a less than shining review because there are too many people in the room asking too many questions in a short period of time.
Another pattern I have noticed in the past few years is the take-home challenge. Companies create some sort of testing or coding challenge for a person to do before the interview. The results are reviewed in person when the person gets to the interview. Companies that use this often end up with a staff of young people, probably recently graduated from university. Older people with responsibilities during the weekends and evenings, such as caring for family and children, will see this as a barrier to entry. They will either not apply to the position, or attempt the challenge and not do well. The people that do well on these challenges sacrificed time to do so.
Each style of interviewing is biased to shine a favorable light on specific types of candidates. Creating a custom interview for every candidate might not be the solution. That would be a time consuming venture and make assessment very difficult. My solution is to be kind. If a candidate is struggling in a group, maybe have a couple of interviewers leave and come back later. If they aren’t doing well with a white board testing challenge, maybe try pairing on real software.
Learning how to design better interviews and see past the bias will help create longer lasting teams (also see the related article 3 Critical Factors To Retain Your Best Software Testers).
Interviewing In Practice
Interviewing testers is challenging work; we don’t make tangible things that can easily be judged. I like a strategy that includes questions designed to see how the tester thinks, exercises that show how well they can test actual software, and a contingency plan based around personality types to help everyone show what they do best. Not everyone should be a tester, but testers can come from anywhere. This strategy can help find testers in the rough.
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.
We are excited to announce the availability of CloudBees Jenkins Platform 18.104.22.168. This release delivers stability and usability by bumping the Jenkins core to 2.19.x and includes a key security fix. This is also the second “rolling release,” the output from a process we are using to provide the latest functionality to users on a more frequent release cadence. All enhancements and fixes are for the rolling release only. Fixed releases have diverged from rolling releases (locked to 2.7.X) and will follow a separate schedule.
Release Highlights
Jenkins Core Bumped to 2.19.x LTS Line
This is the first LTS upgrade on the rolling release and adds key fixes, such as improved dependency management for plugins. With improved dependency management, administrators are warned when dependent plugins are absent during install time. Thus administrators can catch and fix the problem before run time and provide a smooth experience to their users.
Security-360 Fix Incorporated
All customers were sent the fix for Security-360 on Nov 16, 2016. This vulnerability allowed attackers to transfer a serialized Java object to the Jenkins CLI, making Jenkins connect to an attacker-controlled LDAP server, which in turn can send a serialized payload leading to code execution, bypassing existing protection mechanisms. If you have not installed the fix, we strongly urge you to upgrade to incorporate the security fix in your production environment.
Support for CloudBees Assurance Program in Custom Update Centers
CloudBees Assurance Program (CAP) provides a Jenkins binary and plugins that have been verified for stability and interoperability. Jenkins administrators can easily promote this distribution to their teams by setting CAP as an upstream source in their custom update centers. This reduces the operational burden by allowing admins to use CloudBees-recommended plugins for all their masters, ensuring compliance and facilitating governance.
CloudBees Assurance Program Plugin (CAP) Updates
These CloudBees verified plugins have been updated for this release of the CloudBees Jenkins Platform:
- Mailer version 1.18
- LDAP version 1.13
- JUnit version 1.19
- Email-ext version 2.51
- Token-macro version 2.0
- GitHub version 1.22.3
This release features many reliability improvements for the CloudBees Jenkins Platform, including many stability improvements to CloudBees Jenkins Operations Center connections to client masters.
Improvements & Fixes
Jenkins core upgraded to 2.19.3 LTS (release notes)
Improved dependency management - Warns admins when plugin dependencies are missing; Jenkins will not load plugins with absent dependencies, reducing errors when initializing. Creates a smoother startup through smarter scanning of plugins.
Jobs with lots of history no longer hang the UI - Improved performance from the UI for jobs with lots of build history. Lazy loading renders faster because build history will not automatically load on startup.
Reduce configuration errors caused by invalid form submissions - Browsers will not autocomplete forms in Jenkins, reducing configuration problems due to invalid data in form submissions resulting from using the browser back button. Only select form fields (e.g. job name) will offer autocompletion. For admins, Jenkins users who use the browser back button will no longer corrupt the Jenkins configuration.
CloudBees Assurance Program (CAP)
Support for Custom Update Centers - CAP is now available as an upstream source in Custom Update Centers, enabling admins to use CloudBees-recommended plugins for all their masters.
Mailer has been upgraded to version 1.18, includes a minor improvement to rendering page links and now supports the BlueOcean project.
JUnit has been upgraded to version 1.19 and includes usability improvements around unsafe characters in the URI, as well as highlighted test results.
Email-ext has been upgraded to version 2.51 and contains improved pipeline support for expanding the tokens FAILED_TESTS, TEST_COUNTS and TRIGGER_NAME in pipeline email notifications.
Token-macro has been upgraded to 2.0 and contains improved pipeline support, allowing token macro to be used in a pipeline context, polish such as autocomplete when referencing a token name, support for variable expansion, and some performance improvements when scanning large Jenkins instances.
Pipeline usability improvements
Environment variables in Pipeline jobs are now available as global Groovy variables - simplifies tracking variable scope in a pipeline.
Build and job parameters are available as environment variables and thus accessible as if they were global Groovy variables - parameters are injected directly into the Pipeline script and are no longer available in ‘bindings.’
Makes job parameters, environment variables and Groovy variables much more interchangeable, simplifying pipeline creation and making variable references much more predictable.
Skip Next Build plugin
Adds the capability to skip all the jobs of a folder and its sub-folders, or to skip all the jobs belonging to a “Skip Jobs Group.” A Skip Jobs Group is intended to group together jobs that should be skipped simultaneously but are located in different folders.
Support bundle
Adds the logs of the client master connectivity to the support bundle.
Fixes
CloudBees Jenkins Platform core
- Possible livelock in CloudBees Jenkins Operations Center communication service.
- Possible unbounded creation of threads in CloudBees Jenkins Operations Center communication service.
- Fix NullPointerException in client master communication service when creating big CloudBees Jenkins Platform clusters.
- Fix deadlock on client master when updating number of executors in CloudBees Jenkins Operations Center cloud.
- Replace the term “slave” with “agent” in the CloudBees Jenkins Operations Center UI.
- Unable to log into client master if a remember me cookie has been set during an authentication on the client master while CloudBees Jenkins Operations Center was unavailable.
- “Check Now” on Manage Plugins doesn’t work when a client master is using a Custom Update Center.
- Technical properties appear on the configuration screen of the CloudBees Jenkins Operations Center shared cloud when they should be hidden.
- Move/copy fails in case client master is not connected to CloudBees Jenkins Operations Center.
- Move/copy screen broken with infinite loop when the browse.js `fetchFolders` function goes to error.
- Under heavy load, multiple CloudBeesMetricsSubmitter run obtaining threadInfos and slow down the application.
- The number of available nodes in a cloud is now exposed as a metric.
Role-Based Access Control plugin
The Role-based Access Control REST API ignores the requirement for POST requests (allows GET), thereby eliminating 404 HTTP errors when accessing groups from a nested client master folder.
GitHub Organization Folder plugin
GitHub Organization Folder scanning issue when using custom marker files.
CloudBees Assurance Program
LDAP upgraded to version 1.13, includes a major configuration bug fix.
GitHub has been upgraded to version 1.22.3 and contains a major bug fix for an issue that could crash Jenkins instances using LDAP for authentication.
Frequently Asked Questions
What is the CloudBees Assurance Program (CAP)?
The CloudBees Assurance Program (CAP) eliminates the risk of Jenkins upgrades by ensuring that various plugins work well together. CAP brings an unprecedented level of testing to ensure upgrades are no-risk events. The program bundles an ever-growing number of plugins in an envelope that is tested and certified together. The envelope installation/upgrade is an atomic operation - all certified versions are upgraded in lockstep, reducing the cognitive load on administrators in managing plugins.
Who is the CloudBees Assurance Program designed for?
The program is designed for Jenkins administrators who manage Jenkins for their engineering organizations.
When was the CloudBees Assurance Program launched?
The program was launched in September 2016.
What is a rolling release?
The CAP program delivers a CloudBees Jenkins Platform on a regular cadence, and this is called the “rolling” release model. A new release typically lands every 4-6 weeks.
Do I have to upgrade on every release?
You are encouraged to, but aren’t required. You can skip a release or two, and the assurance program ensures your upgrades will be smooth.
What release am I on?
You can tell which version you are running by checking the footer of your CJE or CJOC instance.
How to Upgrade
- Identify which CloudBees Jenkins Enterprise release line (rolling vs. fixed) you are currently running.
- Visit go.cloudbees.com to download the latest release for your release line. (You must be logged in to see available downloads).
- If you are running CloudBees Jenkins Operations Center, you must upgrade it first, because you cannot connect a new CloudBees Jenkins Enterprise instance to an older version of CloudBees Jenkins Operations Center.
- Install the CloudBees Jenkins Platform as appropriate for your environment, and start the CloudBees Jenkins Platform instance.
- If the instance needs additional input during upgrade, the setup wizard prompts for additional input when you first access the instance.
- What plugins are installed on a fresh install of CloudBees Jenkins Platform 2.x?
- What plugins are upgraded when upgrading from CloudBees Jenkins Platform 1.x to 2.x?
- What do the options in Beekeeper Upgrade Assistant mean?
- What plugins are upgraded when I upgrade an instance from CloudBees Jenkins Platform 2.x to a newer 2.x version?
- CloudBees Jenkins Enterprise 22.214.171.124 release notes
- CloudBees Jenkins Operations Center 126.96.36.199 release notes
- CloudBees Jenkins Operations Center User Guide
- CloudBees Assurance Program
Blog Categories: Jenkins, Developer Zone, Company News
A few weeks ago I put out an appeal for resources for testers who are pulled into live support situations:
Looking for blogs, books, videos or other advice for testers pulled into real-time customer support, e.g. helping diagnose issues #testing — James Thomas (@qahiccupps) October 28, 2016
One suggestion I received was The Mom Test by Rob Fitzpatrick, a book intended to help entrepreneurs or sales folk to efficiently validate ideas by engagement with an appropriate target market segment. And perhaps that doesn't sound directly relevant to testers?
But it's front-loaded with advice for framing information-gathering questions in a way which attempts not to bias the answers ("This book is specifically about how to properly talk to customers and learn from them"). And that might be useful, right?
The conceit of the name, I'm pleased to say, is not that mums are stupid and have to be talked down to. Rather, the insight is that "Your mom will lie to you the most (just ‘cuz she loves you)" but, in fact, if you frame your questions the wrong way, pretty much anyone will lie to you and the result of your conversation will be non-data, non-committal, and non-actionable. So, if you can find ways to ask your mum questions that she finds it easy to be truthful about, the same techniques should work with others.
The content is readable, and seems reasonable, and feels like real life informed it. The advice is - hurrah! - not in the form of some arbitrary number of magic steps to enlightenment, but examples, summarised as rules of thumb. Here's a few of the latter that I found relevant to customer support engagements, with a bit of commentary:
- Opinions are worthless ... go for data instead
- You're shooting blind until you understand their goals ... or their idea of what the problem is
- Watching someone do a task will show you where the problems and inefficiencies really are, not where the customer thinks they are ... again, understand the real problem, gather real data
- People want to help you. Give them an excuse to do so ... offer opportunities for the customer to talk; and then listen to them
- The more you’re talking, the worse you’re doing ... again, listen
These are useful, general, heuristics for talking to anyone about a problem and can be applied with internal stakeholders at your leisure as well as with customers when the clock is ticking. (But simply remembering Weinberg's definition of a problem and the Relative Rule has served me well, too.)
Given the nature of the book, you'll need to pick out the advice that's relevant to you - hiding your ideas so as not to seem like you're needily asking for validation is less often useful to a tester, in my experience - but as someone who hasn't been much involved in sales engagements I found the rest interesting background too.
Now Live on DevOps Radio: Picture-Perfect CD, Featuring Dean Yu, Director, Release Engineering, Shutterfly
Jenkins World 2016 was buzzing with the latest in DevOps, CI/CD, automation and more. DevOps Radio wanted to capture some of that energy so we enlisted the help of Sacha Labourey, CEO at CloudBees, to host a series of episodes live at the event. We’re excited to present a new three-part series, DevOps Radio: Live at Jenkins World. This is episode two in the series.
Dean Yu, director of release engineering at Shutterfly, has been with the Jenkins community since before Jenkins was called Jenkins. Today, he’s a member of the Jenkins governance board and an expert in all things Jenkins and CI. He attended Jenkins World 2016 to catch up with the community, check out some sessions and sit down with Sacha Labourey for a special episode of DevOps Radio.
Sacha had a lot of questions for Dean, but the very first question he asked was, “What is new at Shutterfly?” Dean revealed how his team is using Jenkins, working on CI/CD and keeping pace with business during Shutterfly’s busiest season, the holidays. If you’re interested in learning CI/CD best practices or hearing what one Jenkins leader thinks about the future of software development and delivery, then you need to tune in today!
You don’t have to stop making your holiday card or photo book on Shutterfly.com - just plug in your headphones and tune into DevOps Radio. The latest DevOps Radio episode is available now on the CloudBees website and on iTunes.
Join the conversation about the episode on Twitter by tweeting to @CloudBees and including #DevOpsRadio in your post. After you listen, we want to know your thoughts. What did you think of this episode? What do you want to hear on DevOps Radio next? And, what’s on your holiday DevOps wishlist?
Sacha Labourey and Dean Yu talk about CD at Shutterfly, during Jenkins World 2016 (below).
P.S. Check out Dean’s massive coffee cup. It displays several pictures of his daughter and was created - naturally - on the Shutterfly website.