Blogs

Chrome OS Test Automation Lab

Testing TV - Wed, 09/28/2016 - 17:24
Chrome OS is currently shipping on 60+ different Chromebooks and Chromeboxes, each running its own software. In the field, customers get a fresh system every 6 weeks. This would not be possible without a robust Continuous Integration system vetting check-ins from our 200+ developers. This talk describes the overall architecture with specific emphasis on our test automation […]
Categories: Blogs

It's Complicated

Hiccupps - James Thomas - Wed, 09/28/2016 - 06:59
In a recent episode of Rationally Speaking, Samuel Arbesman, a complexity scientist, talks about complexity in technology. Here are a few quotes I particularly enjoyed.

On levels of understanding of systems:
Technology very broadly is becoming more and more complicated ... actually so complex that no one, whether you're an expert or otherwise, fully understands these things ... They have enormous number of parts that are all interacting in highly nonlinear ways that are subject to emerging phenomena. We're going to have bugs and glitches and failures. And if we think we understand these things well and we don’t, there's going to be tons of gap between how we think we understand the system and how it actually does behave.

On modelling reality with a system and then creating a model of that system:
... the world is messy and complex. Therefore, often, in order to capture all that messiness and complexity, you need a system that effectively is often of equal level of messiness and complexity ... whether or not it's explicitly including all the rules and exceptions and kind of the edge cases, or a system that learns these kinds of things in some sort of probabilistic, counterintuitive manner. It might be hard to understand all the logic in [a] machine learning system, but it still captures a lot of that messiness. I think you can see the situation where in machine learning, the learning algorithm might be fairly understandable. But then the end result ... You might be able to say, theoretically, I can step through the mathematical logic in each individual piece of the resulting system, but effectively there's no way to really understand what's going on.

On "physics" and "biological" thinking:
[Physics:] A simple set of equations explains a whole host of phenomena. So you write some equations to explain gravity, and it can explain everything from the orbits, the planets, the nature of the tides ... It has this incredibly explanatory power. It might not explain every detail, but it maybe it could explain the vast majority of what's going on within a system. That's the physics. The physics thinking approach, abstracting away details, deals with some very powerful insights. [Biology:] Which is the recognition that oftentimes ... the details not only are fun and enjoyable to focus on, but they're also extremely important. They might even actually make up the majority of the kinds of behavior that the system can exhibit. Therefore, if you sweep away the details and you try to create this abstracted notion of the system, you're actually missing the majority of what is going on. Oftentimes I think people in their haste to understand technology ... because technologies are engineered things ... think of them as perhaps being more the physics thinking side of the spectrum.

On robustness:
There's this idea within complexity science ... "robust yet fragile," and the idea behind this is that a lot of these very complex systems are highly robust. They've been tested thoroughly. They had a lot of edge cases and exceptions built in and baked into the system. They're robust to an enormously large set of things but oftentimes, they're only the set of things that have been anticipated by the engineers. However, they're actually quite fragile to the unanticipated situations.

Side note: I don't think there's an attempt in this discussion to draw a distinction between complex and complicated, which some do.
Image: https://flic.kr/p/Q2tMz
Categories: Blogs

Five Tricky Things With Testing

Thoughts from The Test Eye - Tue, 09/27/2016 - 20:39
Ideas

I went to SAST Väst Gothenburg today to give a presentation whose title can be translated to something like “Five Tricky Things With Testing”. It was a very nice day, and I met old and new friends. It was also an opportunity to write the first blog post in a long time, so here is a very condensed version:

1. People don’t understand testing, but still have opinions. They see it as a cost, without considering the value.
Remedy: Discuss information needs, important stuff testing can help you know.

2. Psychologically hard. The more problems you find, the longer it will take to get finished.
Remedy: Stress the long-term, for yourself and for others.

3. You are never finished. There is always more to test, but you have to stop.
Remedy: Talk more to colleagues, perform richer testing.

4. Tacit knowledge. It is extremely rare that you can write down how to test, and good testing will follow.
Remedy: More contact of the third degree.

5. There are needs, but less money.
Remedy: Talk about testing’s value with the right words, and deliver value with small effort, not only with bugs.

Summary: Make sure you provide value with your testing, also for the sake of the testing community.

 

There were very good questions, including one very difficult:
How do you make sure the information reaches the ones who should get it?

Answer: For people close to you, it is not so difficult; talk about which information to report and how from the beginning. I don’t like templates, so I usually make a new template for each project, and ask if it has the right information in it.

But I guess you mean people further away, and especially if they are higher in the hierarchy this can be very difficult. It might be people you aren’t “allowed” to talk to, and you are not invited to the meetings.
One trick I have tried is to report in a spread-worthy format, meaning that it is very easy to copy and paste the essence so your words reach participants you don’t talk to.

Better answers are up to you to find for your context.

Categories: Blogs

The Forgotten Agile Role – the Customer


Many Agile implementations tend to focus on the roles inside an organization – the Scrum Master, Product Owner, Business Owner, Agile Team, Development Team, etc.  These are certainly important roles in identifying and creating a valuable product or service.  However, what has happened to the Customer role?  I contend the Customer is the most important role in the Agile world.  Does it seem to be missing from many of the discussions?
While not always obvious, the Customer role should be front-and-center in all Agile methods and when working in an Agile context.  You must embrace them as your business partner with the goal of building strong customer relationships and gathering their valuable feedback.  Within an Agile enterprise, while customers should be invited to Sprint Reviews or demonstrations and provide feedback, they should really be asked to provide feedback all along the product development journey from identification of an idea to delivery of customer value.
Let's remind ourselves of the importance of the customer.  A customer is someone who has a choice on what to buy and where to buy it. By purchasing your product, a customer pays you with money to help your company stay in business.  For these reasons, engaging the customer is of utmost importance.  Customers are external to the company and can provide the initial ideas and the feedback to validate those ideas as they become working products.  Or if your customer is internal, are you treating them as part of your team and are you collecting their feedback regularly?
As you look across your Agile context, are customers one of your major Agile roles within your organization?  Are they front and center?  Are customers an integral part of your Agile practice?  Are you collecting their valuable feedback regularly?  If not, it may be time to do so.  
Categories: Blogs

Magic Buttons and Code Coverage

Sustainable Test-Driven Development - Fri, 09/23/2016 - 19:29
This will be a quickie.  But sometimes good things come in small packages. This idea came to us from Amir's good friend Eran Pe'er, when he was visiting Net Objectives from his home in Israel. I'd like you to imagine something, then I'm going to ask you a question.  Once I ask the question you'll see a horizontal line of dashes.  Stop reading at that point and really try to answer the question.
Categories: Blogs

Pair Testing

Agile Testing with Lisa Crispin - Fri, 09/23/2016 - 04:52

Ernest and Chester, strong-style pairing

I’ve been meaning to write about pair testing for ages. It’s something I still don’t do enough of. Today I listened to an Agile Amped podcast about strong style pairing with Maaret Pyhäjärvi & Llewellyn Falco. I’ve learned about strong style pairing from Maaret and Llewellyn before, and even tried mob programming with them at various conferences. The podcast motivated me to try strong style pairing at work.

I’m fortunate that the other tester in our office, Chad Wagner, has amazing exploratory testing skills. We pair test a lot. Chad says that pair testing is like getting to ride shotgun versus having to drive the car. You have so much more chance to look around. He readily agreed to experiment with strong style pairing.

I’m going to oversimplify, I am sure, but in strong style pairing, if you have an idea, you give the keyboard to your pair and explain what you want to do. Chad and I worked from an exploratory testing charter using a template style from Elisabeth Hendrickson’s Explore It! We used a pairing station that has two monitors, two keyboards and two mice. It took a lot of conscious effort to not just take control, start typing and testing with our idea. Rather, if I had an idea, I would explain it to Chad and ask him to try it, and vice versa.

Since Chad is pretty new to our team, when we pair, I have a tendency to just take control and do stuff. But he has the better testing ideas. Strong style pairing was much more engaging than what we had been doing. Chad would tell me his great idea for something to try and I’d do it. An idea would spring to my head and I’d explain it to him.

One interesting outcome is we discovered we had different ways of hard refreshing a page, and neither of us knew the other way. I use shortcut keys, and Chad uses a menu that reveals itself only when you have developer tools open in Chrome. That in itself made the strong style pairing worthwhile to me!

We ended up finding four issues worthy of showing to the developers and product owner, and writing up as stories. Not a bad outcome for a couple of hours of pairing. More fun and more bugs than I would have found on my own.

Now, if only I could get my team to mob program…


Categories: Blogs

Giving 'Back

Hiccupps - James Thomas - Thu, 09/22/2016 - 06:51

The Test team book club at Linguamatics is currently reading What Did You Say? The Art of Giving and Receiving Feedback. Here are a few quotes that I picked out for our last session:
  • If you’re really interested in helping people, you’ll do well to start your feedback by opening your own motives to inspection.
  • Even when it’s given at the receiver’s request, feedback describes the giver more than the receiver.
  • When the data and their model don’t match, most people discard the data.

I recall an instance when, engaged in discussion with a colleague I'll call Russell about the data analysis he was presenting, I spotted an opportunity to offer feedback. It was about something that I knew Russell wanted to change. It was about something that I knew was open to me to give feedback on, because we had talked about it. It was about something that I thought would be beneficial for Russell in multiple ways and, I hoped, would provide some insight into a particular behaviour pattern that he had.

However, it was also the first time that I had seen this particular thing. A data set of size one. I had no evidence, yet, that it would lead to the end point that Russell desired to alter. A data set of size zero.

Against this: my instinct, my gut, and my experience. And a sense of goodwill, built up over time, over repeated interactions, over sometimes difficult sessions where I had tried to demonstrate that I do care to assist and support and advise because I want to help Russell to be the best he can be, in the respects that matter to him and for his work.

But I was still cautious. I have unwittingly burned and been burned enough times over the years to know that each of these conversations carries with it risks. Risks of misreading the context, risks of misreading the agreements, risks of misreading the mood, risks, risks, risks, ...

But I went ahead anyway. The potential benefit and the goodwill in the bank outweighed the risks, I calculated, on this occasion. And I gave my feedback. And Russell agreed with me. And I breathed a deep internal sigh of relief.

Comparing this anecdote to the quotes I pulled from the book:
  • My motives, I think, were good: I wanted to help Russell achieve a personal goal.
  • But the feedback does reflect something about me: an interest in reducing unnecessary complexity, an interest in making presentation clear, the ego that is required to believe that my colleagues will want to listen to any advice from me, ...
  • In this case, it turned out my suggestion didn't contradict Russell's model but exposed it, and in any case I had little concrete data to present.

I use this episode as an example not because it ended well, particularly, but because it's an illustration for me of how much I have been influenced by What Did You Say? in the couple of years since I first read it. I consciously go about my day-to-day business, doing my best to be careful about when I choose to offer feedback, about when I deliberately choose not to, and about picking up and picking up on any feedback that's coming my way in return.

I try to treat this as a testing task where I can, in the sense that I try hard to observe my own actions and the responses they generate, and I think about ways in which they might be related and how I might approach things differently in the next exchange, or at another time, with this person, or someone else.

Easier said than done, of course, so I'll finish with another quote from the book, another quote that I've taken to heart and act on, that regularly helps guide me with pretty much everything that I've said above:
Don’t concentrate on giving feedback; concentrate on being congruent–responding to the other person, to yourself, and to the here-and-now situation. Don’t go around hunting for opportunities to give feedback, because feedback is effective only when the need arises naturally out of congruent interactions.

Some details have been changed.
Image: Leanpub

Categories: Blogs

Testing on the Toilet: What Makes a Good End-to-End Test?

Google Testing Blog - Thu, 09/22/2016 - 00:59
by Adam Bender

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

An end-to-end test tests your entire system from one end to the other, treating everything in between as a black box. End-to-end tests can catch bugs that manifest across your entire system. In addition to unit and integration tests, they are a critical part of a balanced testing diet, providing confidence about the health of your system in a near production state. Unfortunately, end-to-end tests are slower, more flaky, and more expensive to maintain than unit or integration tests. Consider carefully whether an end-to-end test is warranted, and if so, how best to write one.

Let's consider how an end-to-end test might work for the following "login flow":



In order to be cost effective, an end-to-end test should focus on aspects of your system that cannot be reliably evaluated with smaller tests, such as resource allocation, concurrency issues and API compatibility. More specifically:
  • For each important use case, there should be one corresponding end-to-end test. This should include one test for each important class of error. The goal is to keep your total end-to-end count low.
  • Be prepared to allocate at least one week a quarter per test to keep your end-to-end tests stable in the face of issues like slow and flaky dependencies or minor UI changes.
  • Focus your efforts on verifying overall system behavior instead of specific implementation details; for example, when testing login behavior, verify that the process succeeds independent of the exact messages or visual layouts, which may change frequently.
  • Make your end-to-end test easy to debug by providing an overview-level log file, documenting common test failure modes, and preserving all relevant system state information (e.g.: screenshots, database snapshots, etc.).
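As a rough illustration of these points (this sketch is not from the original TotT episode): a minimal, API-level end-to-end check for a hypothetical login flow might look like the shell script below. The base URL, endpoints and JSON field names are invented for the example, and a real end-to-end test would more often drive the system through its UI.

#!/bin/bash
# Minimal sketch of an end-to-end check for a hypothetical login flow.
# BASE_URL, the endpoints and the JSON field names are illustrative assumptions.
set -e

BASE_URL="https://staging.example.com"

# Log in with a known test account and capture the session token.
token=$(curl -s -X POST "$BASE_URL/api/login" \
  -d 'username=e2e_user&password=e2e_password' | jq -r '.token')

# Verify overall behaviour (we got a usable session), not exact messages or layout.
if [ -z "$token" ] || [ "$token" = "null" ]; then
  echo "FAIL: login did not return a session token" >&2
  exit 1
fi

# A follow-up request confirms the whole flow works, end to end.
curl -s -f -H "Authorization: Bearer $token" "$BASE_URL/api/profile" > /dev/null \
  && echo "PASS: login flow works end to end"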
End-to-end tests also come with some important caveats:
  • System components that are owned by other teams may change unexpectedly, and break your tests. This increases overall maintenance cost, but can highlight incompatible changes.
  • It may be more difficult to make an end-to-end test fully hermetic; leftover test data may alter future tests and/or production systems. Where possible, keep your test data ephemeral.
  • An end-to-end test often necessitates multiple test doubles (fakes or stubs) for underlying dependencies; they can, however, have a high maintenance burden as they drift from the real implementations over time.
Categories: Blogs

From jUnit to Mutation-Testing

Testing TV - Wed, 09/21/2016 - 17:36
JUnit is a well-known tool for Java developers in the area of TDD, where it is accepted that code coverage can be measured. In this case we distinguish between coverage at the level of classes, methods and lines. The goal is to get the code coverage as high as possible at the line level, but […]
Categories: Blogs

Selenium: State of the Union

Testing TV - Wed, 09/14/2016 - 16:23
Simon Stewart from Facebook discusses the past, present, and future of Selenium WebDriver at the 2015 Selenium conference in Portland. Video producer: http://seleniumconf.org/
Categories: Blogs

Reporting, a Novel Approach

Hiccupps - James Thomas - Tue, 09/13/2016 - 07:41
There's a girl in the park playing with an enormous bunch of balloons. She's running around, clearly very happy to have such a pretty and fun toy. She seems entranced by the way the balloons have a life of their own: they hold themselves up, needing no support from her, and animatedly jostle one another as she moves. Her grip on the strings, twined together in her fist, is quite loose, and she's in danger of losing them if she's not careful. And, of course, she isn't and she does. The balloons float up and up and up from her released grasp, past a tall tree in which two nude men are arm wrestling. On their wrists each sports a watch showing ten minutes past ten, despite the time being 12:57. With their free arms they reach out and catch the balloons as they bobble by, bursting every last one of them, and smiling.

In the second exercise of Mira Nair's Storytelling workshop, which ran at last night's Cambridge Tester Meetup, we were asked to write a story that included three items chosen at random from a selection. I had balloons, entwined arms with hands clasped, and a clock showing 10:10. Prior to that we'd been provided with a story structure and asked to fill it with content. Later, to take existing stories and compress them to tweet length.

The workshop's practical aspects focused mostly on structure, including on the STAR mnemonic which is intended to help interviewees give good answers to behaviour questions such as "Give me an example of when you ...". The letters stand for Situation, Task, Action, and Result and a story according to them should run like this:

  • Situation: define the background
  • Task: explain the mission
  • Action: describe the work done
  • Result: enumerate the outcomes 

The first exercise in the workshop gave us that skeleton and asked us to fit a recent episode to it. Some found it liberating ("The structure draws the story out of whatever I put in") while others struggled ("I couldn't find something that I thought would make a good story"). At work, Mira suggests, we'll more often have the problem of something to present and needing a structure to bring the best out of it than the reverse.

This was reinforced by the second exercise which provided content but, interestingly, didn't require any structure of us. More people found this more straightforward, the content anchoring everything else.

In the story I gave at the beginning I experimented with another narrative device Mira talked about, the False Start, in which an apparently predictable beginning leads to an unexpected ending and can result (with judicious use) in increased audience engagement.

The final exercise cut the content problem a different way: take an existing description (a couple of software bugs were provided) and summarise it in 140 characters or less. As testers, we author reports of potential issues regularly, and part of the skill of transmitting those reports is finding a way to quickly engage our audience, which will often mean extracting the essentials and conveying them efficiently.

Different approaches were taken here: I boiled the descriptions down as I might at work; others took the Twitter aspects and used hashtags as shorthand, effectively importing context cheaply, and others used humour to convey a sense of violated expectation.

One of the things I look for in these kinds of events is the questions they generate, the trains of thought I can follow at my leisure, the connections I can make, ... Here's a few:

  • what really distinguishes a story from any other prose, if that's possible? 
  • is the "storyness" in the eye of the author or the audience?
  • what techniques are there for picking out the relevant content for a story?
  • what aspects of storytelling are important beyond structure?
  • even with strong structure, stories can be poor, boring, uninformative.
  • readers assume much when reading a story, filling in missing details, assuming causation, intent and so on.
  • what about the inadvertent stories we tell all the time; that bored expression, that casual gesture, that throwaway remark?
  • stories don't need to be true, but my reports as a tester generally need to be true (to what I understand the situation I'm describing to be).
  • stories can be input to and output from testing.
  • don't forget the relative rule: the audience and the time are important to the effect a story will have.
  • there was discussion about tailoring stories for an audience ("stories should not be the same each time you tell them") but once written down, a story is static. 
  • I find that writing helps me to generate and understand the content. I'll often start writing before I know the story myself.
  • finding a perspective can help to make the story compelling, and that perspective can be the author's, the readers', or that of a third party.
  • I like the testing story heuristic I took from RST: status, how you tested, value/risks. But this is a content heuristic more than a delivery heuristic, although I find that order to generally be useful.

I have written before about the C's I look for in communication: conciseness, completeness, correctness, clarity, context. I realised in this workshop that I can add another: compelling.
Image: https://flic.kr/p/97ba7K
Categories: Blogs

What Test Engineers do at Google

Google Testing Blog - Mon, 09/12/2016 - 17:00
by Matt Lowrie, Manjusha Parvathaneni, Benjamin Pick, and Jochen Wuttke

Test engineers (TEs) at Google are a dedicated group of engineers who use proven testing practices to foster excellence in our products. We orchestrate the rapid testing and releasing of products and features our users rely on. Achieving this velocity requires creative and diverse engineering skills that allow us to advocate for our users. By building testable user journeys into the process, we ensure reliable products. TEs are also the glue that bring together feature stakeholders (product managers, development teams, UX designers, release engineers, beta testers, end users, etc.) to confirm successful product launches. Essentially, every day we ask ourselves, “How can we make our software development process more efficient to deliver products that make our users happy?”.

The TE role grew out of the desire to make Google’s early free products, like Search, Gmail and Docs, better than similar paid products on the market at the time. Early on in Google’s history, a small group of engineers believed that the company’s “launch and iterate” approach to software deployment could be improved with continuous automated testing. They took it upon themselves to promote good testing practices to every team throughout the company, via some programs you may have heard about: Testing on the Toilet, the Test Certified Program, and the Google Test Automation Conference (GTAC). These efforts resulted in every project taking ownership of all aspects of testing, such as code coverage and performance testing. Testing practices quickly became commonplace throughout the company and engineers writing tests for their own code became the standard. Today, TEs carry on this tradition of setting the standard of quality which all products should achieve.

Historically, Google has sustained two separate job titles related to product testing and test infrastructure, which has caused confusion. We often get asked what the difference is between the two. The rebranding of the Software engineer, tools and infrastructure (SETI) role, which now concentrates on engineering productivity, has been addressed in a previous blog post. What this means for test engineers at Google is an enhanced responsibility of being the authority on product excellence. We are expected to uphold testing standards company-wide, both programmatically and persuasively.

Test engineer is a unique role at Google. As TEs, we define and organize our own engineering projects, bridging gaps between engineering output and end-user satisfaction. To give you an idea of what TEs do, here are some examples of challenges we need to solve on any particular day:
  • Automate a manual verification process for product release candidates so developers have more time to respond to potential release-blocking issues.
  • Design and implement an automated way to track and surface Android battery usage to developers, so that they know immediately when a new feature will drain users' batteries.
  • Quantify whether a regenerated data set used by a product, which contains a billion entities, is of better quality than the data set currently live in production.
  • Write an automated test suite that validates if content presented to a user is of an acceptable quality level based on their interests.
  • Read an engineering design proposal for a new feature and provide suggestions about how and where to build in testability.
  • Investigate correlated stack traces submitted by users through our feedback tracking system, and search the code base to find the correct owner for escalation.
  • Collaborate on determining the root cause of a production outage, then pinpoint tests that need to be added to prevent similar outages in the future.
  • Organize a task force to advise teams across the company about best practices when testing for accessibility.
Over the next few weeks leading up to GTAC, we will also post vignettes of actual TEs working on different projects at Google, to showcase the diversity of the Google Test Engineer role. Stay tuned!
Categories: Blogs

When the whole team owns testing: Building testing skills

Agile Testing with Lisa Crispin - Wed, 09/07/2016 - 21:09

The Twitterverse and other social media continue to host many discussions on topics such as “Do testers need to code?” As Pete Walen points out in his recent post, the “whole team” approach to delivering software, popularized with agile development, is often misunderstood as “everyone must write production code”.

The “Whole Team Approach” in practice

Janet Gregory and I, along with our many collaborators, explain a healthy whole team approach to testing and quality in our books, Agile Testing and More Agile Testing. You can find many stories of teams working together to build in quality in our books. Here’s a recent example from my own team.

Improving our exploratory testing

My team works together to build quality into our product. I recently wrote about how we use example mapping to build shared understanding even before we start coding. Exploratory testing is another practice that our team (and indeed our whole company) values highly and which everyone, regardless of role, does at least occasionally. But most people are not experienced in ET techniques.

We have very few testers on our team compared to the numbers in other roles, but everyone does testing. We’ve been trying experiments to help designers, PMs and programmers improve their exploratory testing skills.

Building skills

Last December, I did a short exploratory testing workshop to help team members learn to write and use charters and apply their imagination and critical thinking powers as they test. We have occasional “group hugs” where team members in all roles pair up, assign themselves charters and test while sharing what they learn.

To help new team members learn about ET and give everyone more practice, we did a new hour-long ET workshop (split into two sessions, because we had to accommodate up to 40 people). One session included a Zoom call so that remote team members could participate.

How the workshop worked

Exploratory testing charter by a programmer pair.

We started with a 10 minute overview of why we do exploratory testing, how to create and use personas, charters (based on Elisabeth Hendrickson’s template from Explore It!), various ET techniques and pointers to more resources. (Contact me if you’d like a copy of the slides). We gave the group a recently delivered feature to explore. Then everyone paired up or formed a group of three, wrote a charter and dove in.

As a tester it was funny to hear team members in other roles say things like “There’s no information in this epic, I don’t know how the feature should work!” Welcome to my world! They jumped right in, though. Some wrote charters in Tracker stories (yes, we use our own product), some mind mapped them or wrote them on paper. They had fun exploring, getting ideas from each other and calling us over to show us bugs or ask questions.

Some pairs used mind maps.

Outcomes

Each workshop session tested a different feature set, and each found several issues, including a serious one. The feedback they gave resulted in some design changes as well. Both features are now much improved.

More importantly, team members have some new testing tools in their toolbox. One of the designers commented that working through more examples would have helped him know what to do in his own testing. I’m working on planning testing dojo sessions so people have an opportunity to practice their exploratory testing skills.

I’d love to hear how your team is growing their whole team testing skills!


Categories: Blogs

Testing Microservices with Citrus

Testing TV - Wed, 09/07/2016 - 17:15
Citrus is a powerful open source integration testing tool for automated service API tests. The framework concentrates on the interfaces to boundary applications and services with the ability to exchange messages on client and server side with different message transports (HTTP, JMS, TCP/IP, FTP, …) and formats (XML, JSON). The primary goal is an automated […]
Categories: Blogs

How Michael Bolton and I Collaborate on Articles

James Bach's Blog - Mon, 09/05/2016 - 08:28

(Someone posted a question on Quora asking how Michael and I write articles together. This is the answer I gave, there.)

It begins with time. We take our time. We rarely write on a deadline, except for fun, self-imposed deadlines that we can change if we really want to. For Michael and me, the quality of our writing always dominates over any other consideration.

Next is our commitment to each other. Neither one of us can contemplate releasing an article that the other of us is not proud of and happy with. Each of us gets to “stop ship” at any time, for any reason. We develop a lot of our work through debate, and sometimes the debate gets heated. I have had many colleagues over the years who tired of my need to debate even small issues. Michael understands that. When our debating gets too hot, as it occasionally does, we know how to stop, take a break if necessary, and remember our friendship.

Then comes passion for the subject. We don’t even try to write articles about things we don’t care about. Otherwise, we couldn’t summon the energy for the debate and the study that we put into our work. Michael and I are not journalists. We don’t function like reporters talking about what other people do. You will rarely find us quoting other people in our work. We speak from our own experiences, which gives us a sort of confidence and authority that comes through in our writing.

Our review process also helps a lot. Most of the work we do is reviewed by other colleagues. For our articles, we use more reviewers. The reviewers sometimes give us annoying responses, and they generally aren’t as committed to debating as we are. But we listen to each one and do what we can to answer their concerns without sacrificing our own vision. The responses can be annoying when a reviewer reads something into our article that we didn’t put there; some assumption that may make sense according to someone else’s methodology but not for our way of thinking. But after taking some time to cool off, we usually add more to the article to build a better bridge to the reader. This is especially true when more than one reviewer has a similar concern. Ultimately, of course, pleasing people is not our mission. Our mission is to say something true, useful, important, and compassionate (in that order of priority, at least in my case). Note that “amiable” and “easy to understand” or “popular” are not on that short list of highest priorities.

As far as the mechanisms of collaboration go, it depends on who “owns” it. There are three categories of written work: my blog, Michael’s blog, and jointly authored standalone articles. For the latter, we use Google Docs until we have a good first draft. Sometimes we write simultaneously on the same paragraph; more normally we work on different parts of it. If one of us is working on it alone he might decide to re-architect the whole thing, subject, of course, to the approval of the other.

After the first full draft (our recent automation article went through 28 revisions, according to Google Docs, over 14 weeks, before we reached that point), one of us will put it into Word and format it. At some point one of us will become the “article boss” and manage most of the actual editing to get it done, while the other one reviews each draft and comments. One heuristic of reviewing we frequently use is to turn change-tracking off for the first re-read, if there have been many changes. That way whichever of us is reviewing is less likely to object to a change based purely on attachment to the previous text, rather than having an actual problem with the new text.

For the blogs, usually we have a conversation, then the guy who’s going to publish it on his blog writes a draft and does all the editing while getting comments from the other guy. The publishing party decides when to “ship” but will not do so over the other party’s objections.

I hope that makes it reasonably clear.

(Thanks to Michael Bolton for his review.)

Categories: Blogs

Happy Labor Day and The Software Quality Perspective

I hope you are having a great week. Me? I'm looking forward to a weekend Labor Day holiday with family and friends. To kick it off, I'm getting a cavity filled tomorrow!

For those of us in the USA, the Labor Day holiday is to commemorate the contributions of working people and labor unions. Since I have worked in IT most of my working years, I have never belonged to a union, so I'll just speak briefly here to the work ethic in software quality. However, I think much of this could apply to other fields as well.

Like you, perhaps, I have been on projects that required extreme effort and commitment to complete. Even then, some of the projects failed.

Over my 25+ years in software testing consulting, I have heard people complain about how difficult some tasks can become. My reply is something along the lines of, "Yes, that's why we call it work."

I have also worked for managers that were totally clueless when it came to how to treat people. These managers expected 100% availability, no allowance for sickness or family emergencies, provided no training or encouragement to the team, and generally created a work environment that was de-motivating in nature. That is the dark side of work, in my opinion.

If I had to capsulize what a person should bring to a project, the list would include:
  • Motivation - Passion for the job
  • Skills - Knowing how to do the job, and continuously learning new skills
  • Creativity - Being able to do things differently and better
  • Problem solving - So that the team lead doesn't have to do everything
  • Integrity - Doing the right thing when no one is looking
  • Caring - For the quality of work performed, and for the welfare of others
  • Vision - To see the big picture of what they are doing
  • Calling - To know why they are doing what they are doing
  • Respect - For others, for other people's ideas, for leaders

You might have other things that would fit well on the list. By the way, my two favorite books on this topic are "Peopleware" by DeMarco and Lister, and "The Mythical Man-Month" by Fred Brooks.

So, relax this weekend and enjoy the fruit of your labor. Ironically, some people will not be able to do that. They will be working. This normally includes law enforcement, military, medical professionals, broadcasters, and people working tech support, food service and retail. I salute those fine people and wish them safety in what they do.
Categories: Blogs

TDD and Design: Frameworks

Sustainable Test-Driven Development - Tue, 08/30/2016 - 20:52
Increasingly, in modern software development, we create software using components that are provided as part of a language, framework, or other element of an overall development ecosystem.  In test-driven development this can potentially cause difficulties because our code becomes dependent on components that we did not create, that may not be amenable to our testing approaches, and that the test
Categories: Blogs

Code Quality in Practice

Testing TV - Tue, 08/30/2016 - 10:23
We started Code Climate with a simple hypothesis: static analysis can help developers ship better code, faster. Five years later, we analyze over one trillion lines of code each day spanning a wide variety of programming languages, and along the way we’ve learned a lot about code quality itself: what it means, why you want […]
Categories: Blogs

Tools: Take Your Pick Part 4

Hiccupps - James Thomas - Tue, 08/30/2016 - 06:56

Back in Part 1 I started this series of posts one Sunday morning with a very small thought on tooling. (Thinking is a tool.) I let my mind wander over the topic and found that I had opinions, knowledge, ideas, and connections that I hadn't made explicit before, or perhaps only in part. (Mind-wandering is a tool.)

I wrote the stuff that I was thinking down. (Writing is a tool.) Actually, I typed the stuff that I was thinking up.1 I have recently been teaching myself to touch type in a more traditional style to (a) stop the twinges I was feeling in my right hand from over-extension for some key combinations and (b) become a faster, more consistent, typist so that my thoughts are less mediated in their transmission to a file. (Typing is a tool.)

I reviewed and organised my thoughts and notes. With each review, each categorisation, each classification, each arrangement, each period of reflection away from the piece of writing, I found more thoughts. (Reviewing and rationalisation and reflection are tools.) I challenged myself to explore ideas further, to tease out my intuitions, to try to understand my motivations, to dig deeper into whatever point it was I thought I was making. (Exploration is a tool.)

The boundaries between some of these tools are not clear some of the time. And that doesn't matter, some of the time. For me, in this work, it doesn't matter at all. My goal is to get my thoughts in order and hopefully learn something about myself, for me, and perhaps also something more general that I can share with my team, my colleagues, the wider readership of this blog.

That the boundaries are not clear is an interesting observation, I find, and it came about only because I was trying to list a set of tools used somewhat implicitly here. (Lists are tools.) Not knowing which tool we are using suggests that we can use tools without needing to know that we are using them at all. Part 2 started off with this:

   When all you have is a hammer, everything looks like a nail.

And I discussed how this does not necessarily mean that every use of the hammer is mindless. But I now also wonder whether it's possible to proceed without even realising that you have a hammer. With non-physical tools - such as those I've listed above - this seems to me a distinct risk. Side-effects of it might include that you don't know how you arrived at solutions and so can't easily generalise them, you don't realise that you are missing out large parts of the search space and so can't consider it; you don't provide yourself with the opportunity to look for improvement, ...

I think that reflection and introspection help me to mitigate those risks, to some extent. Although, of course, some of the risks themselves will be unknown unknowns and so less amenable to mitigation without additional effort being made to make them known. (Another problem to be solved. But which tool to use?)

The more I practise that kind of reflection the more practised I become at recognising simultaneously the problem and meta problems around it, or the problem space and the context of which it is a part, or the specific problem instance and the general problem class. I have had my fingers burned trying to verbalise these things to others, and I probably now over-compensate through clunky conversational tactics to try to make clear that I'm shifting modes of thinking.

Another thought on sub-conscious problem-solving approaches: perhaps the recognition (correct or not) of an instance of a nail-like problem triggers a particular hammer (tacit or explicit):
When it looks enough like a nail, I hit it.

But every decision to use a tool is also an opportunity to make a different decision. Deciding to think about how and why a decision was made gives insight that can be useful next time a decision is made, including the realisation that a decision is being made.

Part 1 of this essay was in the domain of cleaning,  Part 2 more general and theoretical, and Part 3 focused back on work. They are grouped that way and written in that order because I found it helpful and convenient, but the way the thoughts came to me was messier, non-linear, fragmentary. I used tools to note down the thoughts: my phone, scraps of paper, my notebook, text files on the computer, ... Tools to preserve the output of tools to provide input to tools that will themselves generate output on the topic of tools.

I find myself chuckling to and at myself as I write this last part of the summary. While attempting to pull together the threads (attempting to pull together threads is a tool) I realise that in this piece which is ostensibly about hammers and nails - and having observed that perhaps we sometimes don't recognise the hammer - there may actually be no nail.

When I started out I had no particular problem to solve - beyond my own interest in exploring a thought - and so no particular need for a tool, although I have deployed several, deliberately and not. But, ironies aside, that's fine with me: quite apart from any benefits that may accrue (and there may be none, to me or you, y'know) the process itself is enjoyable, intellectually challenging, and satisfying for me. And exercising with my tools keeps me and them in some kind of working order too.
Image: https://flic.kr/p/qjYUm

Footnote 1. Writing down but typing up? There's a thought for another day.
Categories: Blogs

Tools: Take Your Pick Part 3

Hiccupps - James Thomas - Tue, 08/30/2016 - 05:47


In Part 1 of this series I observed my behaviour in identifying problems, choosing tools, and finding profitable ways to use them when cleaning my bathroom at home. The introspection broadened out in Part 2 to consider tool selection more generally. I speculated that, although we may see someone apparently thoughtlessly viewing every problem as a nail and hence treatable with the same hammer, that simple action can hide deeper conscious and unconscious thought processes. In Part 3 I find myself with these things in mind, reflecting on the tools I use in my day-to-day work.

One class of problems that I apply tools to involves a route to the solution being understood and a desire to get there quickly. I think of these as essentially productivity or efficiency problems and one of the tools I deploy to resolve them is a programming or scripting language.

Programming languages are tools, for sure, but they are also tool factories. When I have some kind of task which is repetitive or tiresome, or which is substantially the same in a bunch of different cases, I'll look for an opportunity to write a script - or fabricate a tool - which does those things for me. For instance, I frequently clone repositories from different branches of our source code using Mercurial. I could type this every time:

$ hg clone -r branch_that_I_want https://our.localrepo.com/repo_that_I_want

... and swear a lot when I forget that this is secure HTTP or mistype localrepo again. Or I could write a simple bash script, like this one, and call it hgclone:

#!/bin/bash
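# Usage: hgclone <branch> <repo>
#   $1 - the branch to clone (passed to hg's -r option)
#   $2 - the repository name under https://our.localrepo.com/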

hg clone -r $1 https://our.localrepo.com/$2

and then call it like this whenever I need to clone:

$ hgclone branch_that_I_want repo_that_I_want

Now I'm left dealing with the logic of my need but not the implementation details. This keeps me in flow (if you're a believer in that kind of thing) or just makes me less likely to make a mistake (you're certainly a believer in mistakes, right?) and, in the aggregate, saves me significant time, effort and pain.

Your infrastructure will often provide hooks for what I sometimes think of as micro tools too. An example of this might be aliases and environment variables. In Linux, because that's what I use most often, I have set things up so that:
  • commands I like to run a particular way are aliased to always run that way.
  • some commands I run a lot are aliased to single characters.
  • some directory paths that I need to use frequently are stored as environment variables.
  • I can search forwards and backwards in my bash history to reuse commands easily.
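For illustration, entries along the following lines in ~/.bashrc would cover those four points (the alias names, paths and key bindings here are just examples, not my actual setup):

# Commands I like to run a particular way, aliased to always run that way:
alias grep='grep --color=auto'
alias ls='ls -F --color=auto'

# Commands I run a lot, aliased to single characters:
alias g='hg status'
alias l='ls -ltr'

# Directory paths I need frequently, stored as environment variables:
export REPOS="$HOME/work/repos"        # e.g. cd "$REPOS"/some_repo

# Prefix search of bash history with the arrow keys (Ctrl-R already gives
# incremental search backwards by default):
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'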

One of the reasons that I find writing (and blogging, although I don't blog anything like as much as I write) such a productive activity is that the act of doing it - for me - provokes further thoughts and connections and questions. In this case, writing about micro tools I realise that I have another kind of helper, one that I could call a skeleton tool.

Those scripts that you return to again and again as starting points for some other piece of work, they're probably useful because of some specific piece of functionality within them. You hack out the rest and replace them in each usage, but keep that generally useful bit. That bit is the skeleton. I have one in particular that is so useful I've made a copy of it with only the bits that I was reusing to make it easier to hack.

Another class of problem I bump into is more open-ended. Often I'll have some idea of the kind of thing I'd like to be able to do because I'm chasing an issue. I may already have a tool but its shortcomings, or my shortcomings as a user, are getting in the way. I proceed here in a variety of ways, including:
  • analogy: sometimes I can think of a domain where I know of an answer, as I did with folders in Thunderbird.
  • background knowledge: I keep myself open for tool ideas even when I don't need tools for a task. 
  • asking colleagues: because often someone has been there before me.
  • research: that frustrated lament "if only I could ..." is a great starting point for a search. Choosing sufficient context to make the search useful is a skill. 
  • reading the manual: I know, old-fashioned, but still sometimes pays off.

On one project, getting the data I needed was possible but frustratingly tiresome. I  had tried to research solutions myself, had failed to get anything I was happy with, and so asked for help:
#Testers: what tools for monitoring raw HTTP? I'm using tcpdump/Wireshark and Fiddler. I got networks of servers, including proxies #testing — James Thomas (@qahiccupps) March 26, 2016

This led to a couple of useful, practical findings: that Fiddler will read pcap files, and that chaosreader can provide raw HTTP in a form that can be grepped. I logged these findings in another tool - our company wiki - categorised so that others stand a chance of finding them later.
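As a rough sketch of how that second finding can be put to use (the interface, port filter and header name below are only examples):

# Capture raw traffic with tcpdump, then use chaosreader to turn it into
# greppable HTTP. Interface, port and search term are illustrative.
tcpdump -i eth0 -w capture.pcap 'tcp port 80'

mkdir http_dump && cd http_dump
chaosreader ../capture.pcap        # writes per-session files (plus an index) here
grep -rl 'X-Our-Header' .          # search the extracted sessions as plain text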

Re-reading this now, I notice that in that Twitter thread I am casting the problem in terms of the solution that I am pursuing:
I would like a way to dump all HTTP out of .pcap. Wireshark cuts it up into TCP streams.

Later, I recast the problem (for myself) in a different way:

I would like something like tcpdump for HTTP.

The former presupposes that I have used tcpdump to capture raw comms and now want to inspect the HTTP contained within it, because that was the kind of solution I was already using. The latter is agnostic about the method, but uses analogy to describe the shape of the solution I'm looking for. More recently still, I have refined this further:

I would like to be able to inspect raw HTTP in real time, and simultaneously dump it to a file, and possibly modify it on the fly, and not have to configure my application to use an external proxy (because that can change its behaviour).

Having this need in mind means that when I happen across a tool like mitmproxy (as I did recently) I can associate it with the background problem I have. Looking into mitmproxy, I bumped into HTTPolice, which can be deployed alongside it and used to lint my product's HTTP. Without the background thinking I might not have picked up on mitmproxy when it floated past me; without picking up on mitmproxy I would not have found HTTPolice or, at least, not found it so interesting at that time.

Changing to a new tool can give you possibilities that you didn't know were there before. Or expose a part of the space of possible solutions that you hadn't considered, or change your perspective so that you see the problem differently and a different class of solutions becomes available.

Sometimes the problem is that you know of multiple tools that you could start a task in, but you're unsure of the extent of the task, the time that you'll need to spend on it, whether you'll need to work and rework or whether this is a one-shot effort, and other meta problems of the problem itself. I wondered about this a while ago on Twitter:
With experience I become more interested in - where other constraints permit - setting up tooling to facilitate work before starting work.— James Thomas (@qahiccupps) December 5, 2015
And where that's not possible (e.g. JFDI) doing in a way that I hope will be conducive to later retrospective tooling.— James Thomas (@qahiccupps) December 5, 2015
And I mean "tooling" in a very generic sense. Not just programming.— James Thomas (@qahiccupps) December 5, 2015
And when I say "where other constraints permit" I include contextual factors, project expectations, mission, length etc not just budget— James Thomas (@qahiccupps) December 5, 2015
Gah. I should've started this at https://t.co/DWcsnKiSfS. Perhaps tomorrow.— James Thomas (@qahiccupps) December 5, 2015
I wonder if this is irony.— James Thomas (@qahiccupps) December 5, 2015

A common scenario for me at a small scale is, when gathering data, whether I should start in a text file, or Excel, or an Excel table. Within Excel, these days, I usually expect to switch to tables as soon as it becomes apparent I'm doing something more than inspecting data.

Most of my writing starts as plain text. Blog posts usually start in Notepad++ because I like the ease of editing in a real editor, because I save drafts to disk, because I work offline. (I'm writing this in Notepad++ now, offline because the internet connection where I am is flaky.) Evil Tester wrote about his workflow for blogging and his reasons for using offline editors too.

When writing in text files I also have heuristics about switching to a richer format. For instance, if I find that I'm using a set of multiply-indented bullets that are essentially representing two-dimensional data it's a sign that the data I am describing is richer than the format I'm using. I might switch to tabulated formatting in the document (if the data is small and likely to remain that way), I might switch to wiki table markup (if the document is destined for the wiki), or I might switch to a different tool altogether (either just for the data or for everything.)

At the command line I'll often start in the shell, then move to a bash script, then move to a more sophisticated scripting language. If I think I might later add what I'm writing to a test suite I might make a different set of decisions to writing a one-off script. If I know I'm searching for repro steps I'll generally work in a shell script, recording various attempts as I go and commenting them out each time so that I can easily see what I did that led to what. But if I think I'm going to be doing a lot of exploration in an area I have little idea about I might be more interactive but use script to log my attempts.
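A sketch of what such a repro-hunting script can end up looking like (the server and client commands here are invented purely for illustration):

#!/bin/bash
# Scratch script for hunting a reproduction; earlier attempts stay in the
# file, commented out, so it is easy to see what led to what.

# Attempt 1: default config - did not reproduce
#./start_server --config default.conf && ./send_requests --count 100

# Attempt 2: smaller cache - reproduced intermittently
#./start_server --config small_cache.conf && ./send_requests --count 100

# Attempt 3: smaller cache plus parallel clients - reproduces reliably
./start_server --config small_cache.conf && ./send_requests --count 100 --parallel 10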

At a larger scale, I will try to think through workflows for data in the project: what will we collect, how will we want to analyse it, who will want to receive it, how will they want to use it? Data includes reports: who are we reporting to, how would they like to receive reports, who else might be interested? I have a set of defaults here: use existing tooling, use existing conventions, be open about everything.

Migration between tools is also interesting to me, not least because it's not always a conscious decision. I find I've begun to use Notepad++ more on Windows whereas for years I was an Emacs user on that platform. In part this is because my colleagues began to move that way and I wanted to be conversant in the same kinds of tools as them in order to share knowledge and experience. On the Linux command line I'll still use Emacs as my starting point, although I've begun to teach myself vi over the last two or three years. I don't want to become dependent on a tool to the point where I can't work in common, if spartan, environments. Using different tools for the same task has the added benefit of opening my mind to different possibilities and seeing how different concepts repeat across tools, and what doesn't, or what differs.

But some migrations take much longer, or never complete at all: I used to use find and grep together to identify files with certain characteristics and search them. Now I often use ack. But I'll continue to use find when I want to run a command on the results of the search, because I find its -exec option a more convenient tool than the standalone xargs.
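For example (the patterns, file names and ages here are illustrative):

# ack: quick content search, listing the files that contain a pattern
ack -l 'connection reset'

# find: -exec runs a follow-up command directly on each matching file
find . -name '*.log' -mtime -1 -exec grep -l 'connection reset' {} \;
find . -name '*.log' -mtime -1 -exec gzip {} \;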

Similarly I used to use grep and sed to search and filter JSON files. Now I often use jq when I need to filter cleanly, but I'll continue with grep as a kind of gross "landscaping" tool, because I find that the syntax is easier to remember even if the output is frequently dirtier.
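Something like this, say (the field names are invented for the example):

# jq: clean filtering when the JSON structure matters
jq '.results[] | select(.status == "error") | .id' responses.json

# grep: gross landscaping; dirtier output, but the syntax is easy to remember
grep -o '"status": *"error"' responses.json | sort | uniq -c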

On the other hand, there are sometimes tools that change the game instantly. In the past I used Emacs as a way to provide multiple command lines inside a single connection to a Linux server. (Aside: PuTTY is the tool I use to connect to Linux servers from Windows.) When I discovered screen I immediately ditched the Emacs approach. Screen gives me something that Emacs could not: persistence across sessions. That single attribute is enough for me to swap tools. I didn't even know that kind of persistence was possible until I happened to be moaning about it to one of our Ops team. Why didn't I look for a solution to a problem that was causing me pain?

I don't know the answer to that.

I do know about Remote Desktop so I could have made an analogy and begun to look for the possibility of command line session persistence. I suspect that I just never considered it to be a possibility. I should know better. I am not omniscient. (No, really.) I don't have to imagine a solution in order to find one. I just have to know that I perceive a problem.

That's a lesson that, even now, I learn over and over. And here's another: even if there's not a full solution to my problem there may be partial solutions that are improvements on the situation I have.

In Part 4 I'll try to tie together the themes from this and the preceding two posts.
Image: https://flic.kr/p/5mPY4G
Syntax highlighting: http://markup.su/highlighter
Categories: Blogs