Now Live on DevOps Radio: Picture-Perfect CD, Featuring Dean Yu, Director, Release Engineering, Shutterfly
Jenkins World 2016 was buzzing with the latest in DevOps, CI/CD, automation and more. DevOps Radio wanted to capture some of that energy so we enlisted the help of Sacha Labourey, CEO at CloudBees, to host a series of episodes live at the event. We’re excited to present a new three-part series, DevOps Radio: Live at Jenkins World. This is episode two in the series.
Dean Yu, director of release engineering at Shutterfly, has been with the Jenkins community since before Jenkins was called Jenkins. Today, he’s a member of the Jenkins governance board and an expert in all things Jenkins and CI. He attended Jenkins World 2016 to catch up with the community, check out some sessions and sit down with Sacha Labourey for a special episode of DevOps Radio.
Sacha had a lot of questions for Dean, but the very first question he asked was, “What is new at Shutterfly?” Dean revealed how his team is using Jenkins, working on CI/CD and keeping pace with business during Shutterfly’s busiest season, the holidays. If you’re interested in learning CI/CD best practices or hearing what one Jenkins leader thinks about the future of software development and delivery, then you need to tune in today!
You don’t have to stop making your holiday card or photo book on Shutterfly.com; just plug in your headphones and tune in to DevOps Radio. The latest DevOps Radio episode is available now on the CloudBees website and on iTunes.
Join the conversation about the episode on Twitter by tweeting to @CloudBees and including #DevOpsRadio in your post. After you listen, we want to know your thoughts. What did you think of this episode? What do you want to hear on DevOps Radio next? And, what’s on your holiday DevOps wishlist?
Sacha Labourey and Dean Yu talk about CD at Shutterfly, during Jenkins World 2016 (below).
P.S. Check out Dean’s massive coffee cup. It displays several pictures of his daughter and was created - naturally - on the Shutterfly website.
- Acting as a leader
- Being a developer
- Having a systems focus
- Thinking like an entrepreneur
- Balancing strategic with tactical thinking
- Communicating well
Good software architects understand that their role as a leader is not necessarily to tell developers what to do. Rather, good architects act like a guide, shepherding a team of developers towards a shared technical vision and drawing upon leadership skills such as storytelling, influencing, navigating conflict and building trust with individuals to turn their architectural vision into reality.
A good leader, and thus a good architect, will listen carefully to the opinions of each contributor, fine-tuning their vision with feedback from the team. This leads well into the next point.

Being a developer
Making good architectural choices is a function of balancing an ideal target architectural state with the current state of a software system. As an example, there is no sense in adding a document database to a system if the problem domain is better suited for a relational database, even if that’s boring. An architect may feel tempted to impose technologies or architectural choices without considering the fit for the problem space – AKA behaviours of the “ivory tower architect.”
The best way an architect can mitigate this is by spending time with developers and time in the code. Understanding how the system has been built up, and the constraints of the system as it stands today, will give the architect more information about the right choices for today’s environment.

Having a systems focus
Seasoned developers know that code is only one aspect of working software. They understand that other important quality attributes are necessary for code to run well in its production environment, and they consider aspects like deployment processes, automated testing, performance, security, and supportability. Where developers may approach these quality attributes ad hoc, an architect will focus on understanding not just the code but also the quality attributes necessary to meet the many needs of different stakeholders such as support, security, and operations staff.
The good architect focuses on finding solutions that can satisfy as many of these different stakeholder needs as possible, instead of choosing a tool or approach optimised for the preferences or style of a single contributor.

Thinking like an entrepreneur
All technology choices have costs and benefits, and a good architect will consider new technology choices from both perspectives. Successful entrepreneurs are willing to take risks, but seek ways to learn quickly and fail fast. Architects can approach technology choices in a similar way, seeking real-world information about short- and long-term costs and the likely benefits they will realise.
A good example is when the architect avoids committing to a new tool based on reading an article or hearing about it at a conference. Instead they seek to understand how relevant the tool is in their environment by running an architectural spike to gather more information. They don’t pick a tool based on how good the sales pitch is, but on what value it offers given what they need for their system. They also look for the hidden costs of tools, such as how well a tool is supported (e.g. level of documentation, community adoption), how much lock-in it brings, or the extra risks it introduces over the long term.

Balancing strategic with tactical thinking
A lot of teams build their software reactively with individual developers choosing tools and technologies that they are most comfortable with, or have the most experience with.
The good architect keeps an eye out for newer technologies, tools or approaches that might be useful, but does not necessarily draw upon them immediately. Technology adoption requires a considered approach looking at a long-term horizon. Architects will seek a good balance between agility (allowing the team to move fast) and alignment (keeping enough consistency) at both a team and organisational level.
An exercise like Build Your Own Tech Radar is a useful way to explore technologies with strategy in mind.

Communicating well
Architects know that effective communication is a key skill for building trust and influencing people outside of the team. They know that different groups of people use different vocabulary and that using the technical terms and descriptions with business people makes communication more difficult. Instead of talking about patterns, tools and programming concepts, the architect uses words their audience will be familiar with. Communicating technical choices to business people with words like risk, return, costs, and benefits will serve an architect better than the words they use with their development team.
An architect also realises that communicating within the team is just as important as communicating outside it, and will use diagrams and group discussions to establish and refine the technical vision, and a written log such as an Architectural Decision Log or a wiki to provide a historical trail for future generations.

Conclusion
Doing the job of a well-rounded architect is not easy. There are so many elements to focus on, each drawing upon skills that a developer often doesn’t practice. What is most important is not necessarily exceptional ability in one area, but having enough expertise in each of these different areas to be effective. An architect who is skillful in only one of the six areas described above will not be as effective as an architect who has a good level of expertise in all of them.
How does what Stuart is saying about PMP and project management apply to testing? I believe that, more than certification, the testing profession is hurt by the way we poorly define testing and adopt a model of testing that eliminates the need for skill and focuses on mindless repetition of documented procedures.
Time to reflect. If we define and accept a definition of testing that systematically undermines the skill element and focuses on process, tools, metrics and so on, there is no doubt that we will become laborers.
Is testing rule based?
How much of good testing is rule based?
Wikipedia’s article on equivalence class partitioning (ECP) is a great example of the poor thinking and teaching and writing that often passes for wisdom in the testing field. It’s narrow and misleading, serving to imply that testing is some little game we play with our software, rather than an open investigation of a complex phenomenon.
(No, I’m not going to edit that article. I don’t find it fun or rewarding to offer my expertise in return for arguments with anonymous amateurs. Wikipedia is important because it serves as a nearly universal reference point when criticizing popular knowledge, but just like popular knowledge itself, it is not fixable. The populus will always prevail, and the populus is not very thoughtful.)
In this article I will comment on the Wikipedia post. In a subsequent post I will describe ECP my way, and you can decide for yourself if that is better than Wikipedia.
“Equivalence partitioning or equivalence class partitioning (ECP) is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived.”
Not exactly. There’s no reason why ECP should be limited to “input data” as such. The ECP thought process may be applied to output, or even versions of products, test environments, or test cases themselves. ECP applies to anything you might consider doing that involves any variations that may influence the outcome of a test.
Yes, ECP is a technique, but a better word for it is “heuristic.” A heuristic is a fallible method of solving a problem. ECP is extremely fallible, and yet useful.
“In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.”
This text is pretty good. Note the phrase “In principle” and the use of the word “tries.” These are softening words, which are important because ECP is a heuristic, not an algorithm.
Speaking in terms of “test cases that must be developed,” however, is a misleading way to discuss testing. Testing is not about creating test cases. It is for damn sure not about the number of test cases you create. Testing is about performing experiments. And the totality of experimentation goes far beyond such questions as “what test case should I develop next?” The text should instead say “reducing test effort.”
“An advantage of this approach is reduction in the time required for testing a software due to lesser number of test cases.”
Sorry, no. The advantage of ECP is not in reducing the number of test cases. Nor is it even about reducing test effort, as such (even though it is true that ECP is “trying” to reduce test effort). ECP is just a way to systematically guess where the bigger bugs probably are, which helps you focus your efforts. ECP is a prioritization technique. It also helps you explain and defend those choices. Better prioritization does not, by itself, allow you to test with less effort, but we do want to stumble into the big bugs sooner rather than later. And we want to stumble into them with more purpose and less stumbling. And if we do that well, we will feel comfortable spending less effort on the testing. Reducing effort is really a side effect of ECP.
“Equivalence partitioning is typically applied to the inputs of a tested component, but may be applied to the outputs in rare cases. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object.”
Typically? Usually? Has this writer done any sort of research that would substantiate that? No.
ECP is a process that we all do informally, not only in testing but in our daily lives. When you push open a door, do you consciously decide to push on a specific square centimeter of the metal push plate? No, you don’t. You know that for most doors it doesn’t matter where you push. All pushable places are more or less equivalent. That is ECP! We apply ECP to anything that we interact with.
Yes, we apply it to output. And yes, we can think of equivalence classes based on specifications, but we also think of them based on all other learning we do about the software. We perform ECP based on all that we know. If what we know is wrong (for instance if there are unexpected bugs) then our equivalence classes will also be wrong. But that’s okay, if you understand that ECP is a heuristic and not a golden ticket to perfect testing.
“The fundamental concept of ECP comes from equivalence class which in turn comes from equivalence relation. A software system is in effect a computable function implemented as an algorithm in some implementation programming language. Given an input test vector some instructions of that algorithm get covered, ( see code coverage for details ) others do not…”
At this point the article becomes Computer Science propaganda. This is why we can’t have nice things in testing: as soon as the CS people get hold of it, they turn it into a little logic game for gifted kids, rather than a pursuit worthy of adults charged with discovering important problems in technology before it’s too late.
The fundamental concept of ECP has nothing to do with computer science or computability. It has to do with logic. Logic predates computers. An equivalence class is simply a set. It is a set of things that share some property. The property of interest in ECP is utility for exploring a particular product risk. In other words, an equivalence class in testing is an assertion that any member of that particular group of things would be more or less equally able to reveal a particular kind of bug if it were employed in a particular kind of test.
If I define a “test condition” as something about a product or its environment that could be examined in a test, then I can define equivalence classes like this: An equivalence class is a set of tests or test conditions that are equivalent with respect to a particular product risk, in a particular context.
This implies that two inputs which are not equivalent for the purposes of one kind of bug may be equivalent for finding another kind of bug. It also implies that if we model a product incorrectly, we will also be unable to know the true equivalence classes. Actually, considering that bugs come in all shapes and sizes, to have the perfectly correct set of equivalence classes would be the same as knowing, without having tested, where all the bugs in the product are. This is because ECP is based on guessing what kind of bugs are in the product.
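To make the guessing concrete, here is a minimal sketch in Python. It is not from the article; the function, the input domain (an age field), and the class names are all invented for illustration. It groups candidate inputs into guessed equivalence classes with respect to particular bug risks, then picks one representative per class, which is the essence of ECP:

```python
def classify_age_input(value):
    """Guess which equivalence class an age input falls into,
    with respect to the risk of mishandling invalid or boundary values.
    These classes are a fallible guess, not a proven partition."""
    if not isinstance(value, int):
        return "non-integer"          # type-handling bugs
    if value < 0:
        return "negative"             # invalid-range bugs
    if value <= 120:
        return "plausible"            # the happy-path class
    return "implausibly large"        # sanity-check / overflow bugs

def representatives(candidates):
    """Keep one representative per class: if the classes are guessed
    well, any member stands for the rest of its class."""
    chosen = {}
    for candidate in candidates:
        chosen.setdefault(classify_age_input(candidate), candidate)
    return chosen

print(representatives([25, 30, -1, -99, 200, "abc", 7]))
# → {'plausible': 25, 'negative': -1, 'implausibly large': 200, 'non-integer': 'abc'}
```

Note that the classes encode a guess about where bugs live: if the product actually mishandles, say, age 120 specifically, this partition is wrong and the representative will miss it, which is exactly the fallibility described above.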
If you read the technical stuff about Computer Science in the Wikipedia article, you will see that the author has decided that two inputs which cover the same code are therefore equivalent for bug finding purposes. But this is not remotely true! This is a fantasy propagated by people who I suspect have never tested anything that mattered. Off the top of my head, code-coverage-as-gold-standard ignores performance bugs, requirements bugs, usability bugs, data type bugs, security bugs, and integration bugs. Imagine two tests that cover the same code, and both involve input that is displayed on the screen, except that one includes an input which is so long that when it prints it goes off the edge of the screen. This is a bug that the short input didn’t find, even though both inputs are “valid” and “do the same thing” functionally.

The Fundamental Problem With Most Testing Advice Is…
The problem with most testing advice is that it is either uncritical folklore that falls apart as soon as you examine it, or else it is misplaced formalism that doesn’t apply to realistic open-ended problems. Testing advice is better when it is grounded in a general systems perspective as well as a social science perspective. Both of these perspectives understand and use heuristics. ECP is a powerful, ubiquitous, and rather simple heuristic, whose utility comes from and is limited by your mental model of the product. In my next post, I will walk through an example of how I use it in real life.
I am very excited that we will be hosting CITCON in New York City on December 9 & 10, 2016.
Registrations are still open: http://citconf.com/newyork2016/
I am proud that my company, Intent Media https://intentmedia.com/, has signed on as the Venue Sponsor. As Chief Technology Officer, I am excited to showcase some of the great things we have been doing at Intent like
* mob programming
* serverless architectures
* employee growth based management
* continuous delivery
* polyglot programming
Should be tons of fun! Join us!
This month's Lean Coffee was hosted by Abcam. Here are some brief, aggregated comments and questions on topics covered by the group I was in.
Suggest techniques for identifying and managing risk on an integration project.
- Consider the risk in your product, risk in third-party products, risk in the integration
- Consider what kinds of risk your stakeholders care about, and to whom (e.g. risk to the bottom line, customer data, sales, team morale ...)
- ... your risk-assessment and mitigation strategies may be different for each
- Consider mitigating risk in your own product, or in those you are integrating with
- Consider hazards and harms
- Hazards are things that pose some kind of risk (objects and behaviours, e.g. a delete button, or corruption of a database)
- Harms are the effects those hazards might have (e.g. deleting unexpected content, and serving incomplete results)
- Consider probabilities and impacts of each harm, to provide a way to compare them
- Advocate for the resources that you think you need
- ... and explain what you won't (be able to) do without them
- Take a bigger view than a single tester alone can provide
- ... perhaps something like the Three Amigos (and other stakeholders)
- Consider what you can do in future to mitigate these kinds of risks earlier
- Categorise the issues you've found already; they are evidence for areas of the product that may be riskier
- ... or might show that your test strategy is biased
- Remember that the stuff you don't know you don't know is a potential risk too: should you ask for time to investigate that?
Didn't get time to discuss some of my own interests: How-abouts and What-ifs, and Not Sure About Uncertainty.
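The "probabilities and impacts of each harm" point above can be sketched in a few lines of Python. The hazards, the numbers, and the 1-5 scale are all invented for illustration; the idea is only that a simple probability-times-impact score gives a way to rank harms for discussion:

```python
# Each entry: (harm, probability 1-5, impact 1-5) -- all values invented.
harms = [
    ("deleting unexpected content", 2, 5),
    ("serving incomplete results", 4, 3),
    ("corrupting customer data", 1, 5),
]

# Rank harms by a simple risk score so they can be compared and discussed.
ranked = sorted(harms, key=lambda h: h[1] * h[2], reverse=True)
for harm, probability, impact in ranked:
    print(f"{harm}: score {probability * impact}")
# → serving incomplete results: score 12
# → deleting unexpected content: score 10
# → corrupting customer data: score 5
```

A crude score like this is a conversation starter, not a verdict: the value is in arguing with stakeholders about why a number is a 2 rather than a 4.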
Can templates be used to generate tests?
- Some programming languages have templates for generating code
- ... can the same idea apply to tests?
- The aim is to code tests faster; there is a lot of boilerplate code (in the project being discussed)
- How would a template know what the inputs and expectations are?
- Automation is checking rather than testing
- Consider data-driven testing and QuickCheck
- Consider asking for testability in the product to make writing test code easier (if you are spending time reverse-engineering the product in order to test it)
- ... e.g. ask for consistent Ids of objects in and across web pages
- Could this (perceived) problem be alleviated by factoring out the boilerplate code?
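One way to get the "template" effect discussed above without generating code is plain data-driven testing: a table of cases plus one generic checker, so the boilerplate is written once. A minimal sketch, with an invented unit under test (`count_words`) standing in for whatever the project actually tests:

```python
# Table of cases: (description, input, expected) -- the "template" data.
CASES = [
    ("empty string", "", 0),
    ("single word", "hello", 1),
    ("two words", "hello world", 2),
    ("extra whitespace", "  hello   world  ", 2),
]

def count_words(text):
    # The (hypothetical) unit under test.
    return len(text.split())

def run_cases():
    # One generic checker drives every row of the table.
    failures = []
    for description, given, expected in CASES:
        actual = count_words(given)
        if actual != expected:
            failures.append((description, actual, expected))
    return failures

print(run_cases())  # → [] (no failures)
```

Adding a test is then a one-line table edit rather than new boilerplate; property-based tools like QuickCheck go one step further by generating the table's inputs too.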
How can the coverage of manual and automated testing be compared?
- Code coverage tools could, in principle, give some idea of coverage
- ... but they have known drawbacks
- ... and it might be hard to tie particular tester activity to particular paths through the code to understand where overlap exists
- Tagging test cases with e.g. story identifiers can help to track where coverage has been added (but not what the coverage is)
- What do we really mean by coverage?
- What's the purpose of the exercise? To retire manual tests?
- One participant is trying to switch to test automation for regression testing
- ... but finding it hard to have confidence in the automation
- ... because of the things that testers can naturally see around whatever they are looking at, that the automation does not give
What are the pros and cons of being the sole tester on a project?
- Chance to take responsibility, build experience ... but can be challenging if the tester is not ready for that
- Chance to make processes etc. that work for you ... but perhaps there are efficiencies in sharing process too
- Chance to own your work ... but miss out on other perspectives
- Chance to express yourself ... but can feel lonely
- Could try all testers on all projects (e.g. to help when people are on holiday or sick)
- ... but this is potentially expensive and people complain about being thinly sliced
- Could try sharing testing across the project team (if an issue is that there's insufficient resource for the testing planned)
- Could set up sharing structures, e.g. team standup, peer reviews/debriefs, or pair testing across projects
What do (these) testers want from a test manager?
- Clear product strategy
- As much certainty as possible
- Allow and encourage learning
- Allow and encourage contact with testers from outside the organisation
- Recognition that testers are different and have different needs
- Be approachable
- Give advice based on experience
- Work with the tester
- ... e.g. coaching, debriefing, pointing out potential efficiency, productivity, testing improvements
- Show appreciation
- Must have been a tester
Thanks to everyone who attended the TestTrack 2016.1 Sneak Peek webinar last week. The webinar recording is now available if you weren’t able to attend or if you would like to watch it again. The Q&A from the webinar follows.
When will the release be available?
TestTrack 2016.1 is expected to be ready to ship December 19, 2016.
Can attachments be transferred to JIRA?
Not in the 2016.1 release. But once you’ve created a JIRA issue in TestTrack, it’s easy to open the issue in JIRA and add additional information.
Is an extra license required to use the JIRA integration?
No. There is no additional TestTrack license required to use the JIRA integration. Your users will, of course, need a JIRA license.
Can I attach JIRA issues to requirements?
Yes. JIRA issues can be attached to any type of TestTrack item.
Is there a JIRA add-on?
Not yet. Look for this in a future release!
Market disruption can spark innovation and radical change, and DevOps — as a set of best practices — has emerged from software industry disruptions. Why? Because, over the years, delivering software in many organizations has become harder, slower and more error prone. Outdated technology became a disadvantage for older, established companies competing against startups without years of […]
The post From 0 To DevOps in 80 Days: The Dynatrace Transformation Story! appeared first on about:performance.
AWS re:Invent will be heating up Las Vegas next week. Keep reading to learn how to connect with our team and have some fun.
There is an ongoing theme around Signal vs Noise in organizations. It originates from the signal-to-noise ratio in radio engineering, and describes the problem of too much noise hampering the effectiveness of the signal.
I think many testers should try to understand this principle (the organizational one, and not the radio properties one) and apply it to the way they manage and communicate their testing.
In a nutshell, you need to separate the important SIGNAL from the irrelevant NOISE: understand where to focus your efforts and, more importantly, what to communicate to the rest of the world.
The problem is that many times, when we communicate with the people in our company, we do not stop to think what information is important and relevant to them (the SIGNAL) and what is not (the NOISE).
Then we write a large report, or come to a meeting and start reciting everything in a big blob of information that surely includes some gold nuggets, but they are completely lost in the flow of irrelevant stuff.

Some things to keep in mind that will make a big difference
Always understand what is important to the people you are talking to. This will depend on who you are talking to, what part of the project we are in, and what has been happening around the project at this time.
When you realize there is something important to communicate, don’t start screaming in every direction and on all channels. Look for the most appropriate person who needs to take action. If you don’t have access to this person, look for someone who can help you pass the information along.
Always focus on the quality of the information and not on the quantity.
When you feel you need to write a large report please do it, but put the important stuff at the beginning and make it clear when you start talking or writing about less important stuff.
If there is nothing to say, keep quiet. If there is something to say, say it Sharply.
Whenever you want to propose something, go to the person who will be interested in hearing your proposal.

Bottom line
Many starting testers and some starting team leads feel they need to prove their value by generating tons of data and large reports. This may be true in some organizations, but the opposite holds in most (and definitely in those I like to work in).
Don’t try to look professional with tons of Noise; be the one who provides the Intelligent Signal and helps sail the ship through this environment of foggy noise and uncertainty.
For more on Communication skills and best practices for testers I recommend also reading: