
Markus Gaertner (shino.de)
Software Testing, Craftsmanship, Leadership and beyond

Testing inside one sprint’s time

Wed, 02/10/2016 - 23:41

Recently I was reminded of a blog entry Kent Beck wrote back in 2008. He named the method he had discovered during pairing the Saff Squeeze, after his pair partner David Saff. The general idea is this: write a failing test at whatever level you can, then inline all the code it calls into the test, and remove everything you don’t need to set up the test. Repeat this cycle until you have a minimal error-reproducing test. I realized that this approach may be used in a more general way to enable faster feedback within a Sprint’s worth of time. I sensed a pattern there, which is why I wanted to get my thoughts down while they were still fresh – in a pattern format.
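The squeeze cycle is easiest to see in code. Below is a minimal sketch in Python; the parsing function, its bug, and the test names are all invented for illustration and are not from Beck’s original post:

```python
# Hypothetical example: squeezing a failing high-level test down to a
# minimal reproduction. All names here are invented for illustration.

def parse_order(line):
    # Production code with a subtle bug: a quantity of "0" falls back to 1,
    # because `0 or 1` evaluates to 1.
    fields = line.split(";")
    quantity = int(fields[1]) or 1
    return {"item": fields[0], "quantity": quantity}

# Step 1: a failing test at the highest level we can write one.
def test_zero_quantity_order_high_level():
    order = parse_order("apples;0")
    return order["quantity"] == 0    # False: the bug turns 0 into 1

# Step 2: inline the production code into the test, then delete every
# line that is not needed to reproduce the failure.
def test_zero_quantity_squeezed():
    # After squeezing, only the faulty expression remains:
    quantity = int("0") or 1         # minimal reproduction of the bug
    return quantity == 0             # still False: `0 or 1` yields 1
```

The squeezed test points straight at the faulty expression, so the fix (and the regression test guarding it) can live at the smallest level of the design.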

Testing inside one Sprint’s time

As a development team makes progress during the Sprint, the developed code needs to be tested to give the overall team the confidence to move forward. Testing helps to identify hidden risks in the product increment. If the team does not address these risks, the product might not be ready to ship for production use, or customers might shy away from it because too many problems make it hard to use.

With every new Sprint, the development team implements more and more features. With every feature, the test demand – the number of tests that should be executed to avoid new problems with the product – rises quickly.

As more and more features pile up in the product increment, executing all the tests takes longer and longer up to a point where not all tests can be executed within the time available.

One usual way to deal with the ever-increasing test demand is to create a separate test team that executes all the tests in its own Sprint. This test team works separately from new feature development, working on the previous Sprint’s product increment to make it potentially shippable. This might help to overcome the testing demand in the short run. In the long run, however, that same test demand will pile further up to a point where the separate test team will no longer be able to execute all the tests within its own separate Sprint. Usually, at that point, the test team will ask for longer Sprints, thereby widening the gap between the time new features are developed and the time their risks are addressed.

The separate test team will also create a hand-off between the team that implements the features and the team that addresses risks. It will lengthen the feedback loop between introducing a bug and finding it, causing context-switching overhead for the people fixing the bugs.

In regulated environments, there are many standards the product must adhere to. The additional tests these require often take a long time to execute. Executing them on every Sprint’s product increment is therefore not a viable option. Still, to make the product increment potentially shippable, the development team needs to fulfill these standards.

Therefore:
Execute tests on the smallest level possible.

Especially when following object-oriented architecture and design, the product decomposes into smaller pieces that can be tested on their own. Smaller components usually lead to faster test execution times since fewer sub-modules are involved. In a large software system consisting of an application server with a graphical user interface and a database, the business logic of the application may be tested without involving the database at all. In hardware development, the side-impact system of a car may be tested without driving the car into an obstacle, by using physical simulations.
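As a sketch of the software case, a business rule can be exercised against an in-memory stand-in instead of the real database. The repository interface and the overdraft rule below are invented for illustration:

```python
# Hypothetical sketch: testing business logic without the database.
# The repository interface and the domain rule are invented examples.

class InMemoryAccountRepository:
    """Stands in for the database-backed repository during tests."""
    def __init__(self, balances):
        self._balances = dict(balances)

    def balance_of(self, account_id):
        return self._balances[account_id]

def may_withdraw(repository, account_id, amount):
    # Business rule under test: no overdrafts allowed.
    return amount <= repository.balance_of(account_id)

# The test runs in microseconds because no database is involved.
repo = InMemoryAccountRepository({"alice": 100})
assert may_withdraw(repo, "alice", 50)
assert not may_withdraw(repo, "alice", 150)
```

Because the rule depends only on the repository’s interface, the same test double can back hundreds of such tests without any database setup or teardown.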

One way to develop tests and move them to lower levels in the design and architecture starts with a test at the highest level possible. After verifying that this test fails for the right reasons, move it further down the design and architecture. In software, this may be achieved by inlining all production code into the test and then throwing out the unnecessary pieces. Programmers can repeat this process until they reach the smallest level possible. For hardware products, similarly focused tests may be achieved by breaking the hardware apart into sub-modules with defined interfaces, and executing tests at the module level rather than the whole-product level.

By applying this approach, regulatory requirements can be broken down to individual pieces of the whole product and can therefore be verified faster. Taking the requirements from the standards, expressing them as tests, and executing them at least on a Sprint cadence helps the development team receive quick feedback about where they currently stand.

In addition, these tests will give the team the confidence to change individual sub-modules while making sure the functionality does not change.

This solution still introduces an additional risk. By executing each test at the smallest level possible and making sure that each individual module works correctly, the development team sub-optimizes the testing approach. Even though each individual module works correctly according to its interface definition, the different pieces may not interact correctly with each other, or may work against diverging interface definitions. This risk should be addressed by carrying out additional tests focused on the interfaces between the individual modules, to avoid sub-optimization and non-working products. Fewer tests will be necessary for the integration of the different modules, though, so the resulting tests will still fit into a Sprint’s length of time.
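One common shape for such interface-focused tests is a shared contract test: the same expectations run against both the fast in-memory double and, less frequently, the real implementation, so the two cannot silently drift apart. The names below are invented for illustration:

```python
# Hypothetical sketch of a contract test. Every implementation of the
# repository interface must pass the same shared expectations.

def check_repository_contract(repository):
    """Shared expectations for any repository implementation."""
    repository.save("order-1", {"item": "apples"})
    assert repository.load("order-1") == {"item": "apples"}
    assert repository.load("missing") is None   # agreed-upon behaviour

class InMemoryRepository:
    """Fast test double used in the everyday unit tests."""
    def __init__(self):
        self._store = {}

    def save(self, key, value):
        self._store[key] = value

    def load(self, key):
        return self._store.get(key)

# Run the shared contract against the test double; the same function
# would be run (less often) against the database-backed implementation.
check_repository_contract(InMemoryRepository())
```

Since both implementations answer to one contract, only a handful of these tests are needed per interface, which keeps the remaining integration suite small enough for a Sprint.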



Interview with Jerry Weinberg

Sun, 01/10/2016 - 18:21

Last year, I interviewed Jerry Weinberg on Agile software development for the magazine that we produce at it-agile, the agile review. Since I had translated it to German for the print edition, I thought: why not publish the English original here as well? Enjoy.

Markus:
Jerry, you have been around software development for roughly the past 60 years. That’s a long time, and you have certainly seen one trend or another pass by in all these years. Recently you reflected on your personal impressions of Agile in a book you called Agile Impressions. What are your thoughts about the recent rise of so-called Agile methodologies?

Jerry:
My gut reaction is “Another software development fad.” Then, after about ten seconds, my brain gets in gear, and I think, “Well, these periodic fads seem to be the way we advance the practice of software development, so let’s see what Agile has to offer.” Then I study the contents of the Agile approach and realize that most of it is good stuff I’ve been preaching about for those 60 years. I should pitch in and help spread the word.

As I observe teams that call themselves “Agile,” I see the same problems that other fads have experienced: people miss the point that Agile is a system. They adopt the practices selectively, omitting the ones that aren’t obvious to them. For instance, the team has a bit of trouble keeping in contact with their customer surrogate, so they slip back to the practice of guessing what the customers want. Or, they “save time” by not reviewing all parts of the product they’re building. Little by little, they slip into what they probably call “Agile-like” or “modified-Agile.” Then they report that “Agile doesn’t make all that much difference.”

Markus:
I remember an interview you gave to Michael Bolton a while ago in which you stated that you learned from Bernie Dimsdale how John von Neumann programmed. The description appeared to me to be pretty close to what we now call test-driven development (TDD). In fact, Kent Beck has always claimed that he simply re-discovered TDD. That made me wonder: what happened in our industry between the 1960s and the 2000s that made us forget the ways of smart people? As a contemporary witness of those days, what are your insights?

Jerry:
It’s perfectly natural human behavior to forget lessons from the past. It happens in politics, medicine, conflicts—everywhere that human beings try to improve the future. Jefferson once said, “The price of liberty is eternal vigilance,” and that’s good advice for any sophisticated human activity.

If we don’t explicitly bolster and teach the costly lessons of the past, we’ll keep forgetting those lessons—and generally we don’t. Partly that’s because the software world has grown so fast that we never have enough experienced managers and teachers to bring those past lessons to the present. And partly it’s because we don’t adequately value what those lessons might do for us, so we skip them to make development “fast and efficient.” So, in the end, our development efforts are slower and more costly than they need to be.

Markus:
The industry currently talks a lot about how to bring lighter methods to larger companies. Since you worked on Project Mercury – the predecessor of Project Apollo at NASA – you probably also worked on larger teams and in larger companies. In your experience, what are the crucial factors for success in these endeavors, and what are the things to watch out for that may do more harm than good?

Jerry:
In the first place, don’t make the mistake of thinking that bigger is somehow automatically more efficient than smaller. You have to be much more careful with communications, and one small error can cause much more trouble than in a small project.

For one thing, when there are many people, there are many ways for new or revised requirements to leak into the project, so you need to be extra explicit about requirements. Otherwise, the project grows and grows, and troubles magnify.

It is very difficult to find managers who know how to manage a large project. Managers must know or learn how to control the big picture and avoid all sorts of micromanagement temptations.

Markus:
A current trend in the industry appears to revolve around new ways of working and different forms of running an organization. One piece of it appears to be the learning organization. For me, this deeply connects to Systems Thinking. Given that you published your first book on Systems Thinking in 1975, what have you seen as crucial for organizations to establish a learning culture?

Jerry:
First of all, management must avoid building or encouraging a blaming culture. Blame kills learning.

Second, allow plenty of time and other resources for individual learning. That’s not just classes, but includes time for reflecting on what happens, visiting other organizations, and reading.

Third, design projects so there’s time and money to correct mistakes, because if you’re going to try new things, you will make mistakes.

Fourth, there’s no such thing as “quick and dirty.” If you want to be quick, be clean. Be sure each project has sufficient slack time to process and socialize lessons learned.

Finally, provide some change artists to ensure that the organization actually applies what it learns.

Markus:
What would you like to tell to the next generation(s) of people in the field of software development?

Jerry:
Study the past. Read everything you can get your hands on, talk to experienced professionals, study existing systems that are doing a good job, and take in the valuable lessons from these sources.

Then set all those lessons aside and decide for yourself what is valuable to know and practice.

Markus:
Thank you, Jerry.

