
Blogs

New Book: The Agile Enterprise: Building and Running your Agile Enterprise

Imagine an enterprise where everyone focuses on the highest customer value. Where strategy through tasks is visible, so everyone knows whether their work is aligned with the highest-value work. Imagine an enterprise where a discovery mindset wins over certainty thinking. Where experimentation with increments and feedback helps define the way toward customer value. Imagine a company where employees use 100% of their brain power to self-organize around the work and are trusted to think of better ways to work. Where leaders encourage employees to put customer value first. Imagine an enterprise where customers embrace the products and services being built because they are engaged in the building of the work all along the way. If you can imagine it, it can be yours! In this unique and cutting-edge Agile book, veteran Enterprise Agile Coach Mario Moreira will guide you to build and run an Agile enterprise at every level and at every point from idea to delivery. Learn how Agile-mature organizations adapt nimbly to micro changes in market conditions and customer needs and continuously deliver optimal value to customers. Learn cutting-edge practices and concepts as you extend your implementation of Agile pervasively and harmoniously through the whole enterprise for greater customer value and business success. Readers of The Agile Enterprise will learn how to:
  • Establish a Customer Value Driven engine with an enterprise idea pipeline to process an enterprise’s portfolio of ideas more quickly and productively toward customer value and through all levels of the enterprise
  • Incorporate the Discovery Mindset; experimental, incremental, design, and divergent thinking; and fast feedback loops to increase the odds that what you build aligns more closely with what customers want.
  • Leverage Lean Canvas, Personas, Story Mapping, Cost of Delay, Discovery Mindset, Servant leadership, Self-organization, and more to deliver optimum value to customers
  • Use continuous Agile Budgeting and an enterprise idea pipeline at the senior levels of the enterprise to enable you to adapt to the speed of the market.
  • Reinvent Human Resources, Portfolio Management, Finance, and many areas of leadership toward new roles in the enablement of customer value. 
  • Establish a holistic view of the state of your Agile Galaxy from top to bottom and end to end, allowing you to understand where you are today and where you’d like to go in your Agile future.
  • Be truly Agile throughout the enterprise, focusing on customer value and employees over all else.
This book is geared for: Sponsors of Agile Transformations; Executives and Senior Management; Agile Coaches, Consultants, and Champions; Portfolio Management; Project Management Offices (PMOs); Business and Finance; Human Resources (HR); Investors and Entrepreneurs; Scrum Masters, Agile Project Managers, and Product Owners.  The book concludes with an “Adventuring through an Agile Enterprise” story that shows how an enterprise may transform to Agile in an incremental manner using the materials in this book.  Contributors to this book include JP Beaudry (on Story Mapping) and David Grabel (on Cost of Delay).  Let the material in The Agile Enterprise help you achieve your successful customer-value-driven enterprise.
Categories: Blogs

Rodent Controls

Hiccupps - James Thomas - Sun, 03/26/2017 - 12:01

So I wasn't intending to blog again about The Design of Everyday Things by Don Norman, but last night I was reading the final few pages and got to a section titled Easy Looking is Not Necessarily Easy to Use. From that:

How many controls does a device need? The fewer the controls the easier it looks to use and the easier it is to find the relevant controls. As the number of controls increases, specific controls can be tailored for specific functions. The device may look more and more complex but will be easier to use. We studied this relationship in our laboratory ... We found that to make something easy to use, match the number of controls to the number of functions and organize the panels according to function. To make something look like it is easy, minimize the number of controls.

How can these conflicting requirements be met simultaneously? Hide the controls not being used at the moment. By using a panel on which only the relevant controls are visible, you minimize the appearance of complexity. By having a separate control for each function, you minimize complexity of use. It is possible to eat your cake and have it, too.

Whether with cake in hand, mouth, or both, I would note that easy saying is not necessarily easy doing. There's still a considerable amount of art in making that heuristic work for any specific situation.

One aspect of that art is deciding what functions it makes sense to expose at all. Fewer functions means fewer controls and less apparent complexity. Catherine Powell's Customer-Driven Knob was revelatory for me on this:

Someone said, "Let's just let the customer set this. We can make it a knob." Okay, yes, we could do that. But how on earth is the customer going to know what value to choose?

As in my first post about The Design of Everyday Things, I find myself drawn to comparisons with The Shape of Actions. In this case, it's the concept of RAT, or Repair, Attribution and all That: the tendency of users to adapt themselves to accommodate the flaws in their technology.

When I wrote about it in The RAT Trap I didn't use the word design once, although I was clearly thinking about it:
A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.

Sometimes I feel like I'm going round in circles with my learning. But so long as I pick up something interesting - a connection, a reinforcement, a new piece of information, an idea - frequently enough, I'm happy to invest the time.
Image: https://flic.kr/p/dewUvv
Categories: Blogs

Implication of emphasis on automation in CI

Thoughts from The Test Eye - Sat, 03/25/2017 - 16:49
Ideas

Introduction
I would believe, without any evidence, that the majority of the test community and product development companies have matured in their view of testing. At conferences you less frequently see the argument that testing is not needed. From my own experience and from observing the local market, there are often new assignments for testers. Many companies try to hire testers or bring in new consulting testers. At least looking back a few years and up until now.

At many companies there is an ever-increasing focus on and interest in Continuous Deployment. Sadly, I see troublesome strategies for testing in many organisations. Some companies intend to focus fully on automation, even letting go of their so-called manual testers. Other companies focus on automation by not allowing testers to actually test and explore. This troubles me. Haven’t testers been involved in the test strategy? Here are a few of my pointers, arguments, and reasoning.

Automation Snake oil
In 1999 James Bach wrote the article Test Automation Snake Oil [see reference 1], where he brings up a thoughtful list of arguments and traps to be avoided. Close to 17 years later, we see the same problems. In many cases they have increased because of the Continuous Deployment ideas, but also because of those from Agile development. That is, if you ignore all the new ideas gained in the test domain as well as all the research done.
The miracle status of automation is not a new phenomenon; together with the lure of saving time and cost, it is seductive. In some cases the promise will probably hold true, but automation is not a replacement for thinking people. Instead it can be an enabler of speed and quality.

Testing vs. Checking
In 2009, Michael Bolton wrote an article that clarified a distinction between Testing and Checking. Since then the definition has evolved; Testing and Checking Refined [see reference 2] is the latest article in the series. Most of the testers I know and debate with are aware of this concept and agree with, or at least acknowledge, the distinction.

If you produce test strategies in a CI environment that put an emphasis on automation, and if that means mostly doing checking and almost no testing (as in exploration), then you won’t find the unexpected. Good testing includes both.

Furthermore, when developing a new feature, are you focusing on automating checks that fulfil the acceptance criteria, or do you try to find things that have not been considered by the team? If you define the acceptance criteria and then only check that they are fulfilled, you will only get a small part of the way toward good quality. You might be really happy with how fast it goes to develop and check (not test) the functionality. You might even be happy that you can repeat the same tests over and over. But I would guess you failed to run that one little test that would have identified the most valuable thing.

Many years ago a tester came to me with a problem. He said, “We have 16000 automated tests, still our customers have problems and we do not find their problems”. I told him that he might need to change strategy and focus more on exploration. Several years later another tester came to me with the same problem, from the same product and projects. He said, “We have 24000 automated tests, still our customers have problems and we do not find their problems!”. I was a bit surprised at the persistence in following the same automation strategy while at the same time expecting a different outcome.

I recently argued with a development manager and Continuous Deployment enthusiast. They explained their strategy and emphasis on automation. They put little focus on testing and exploration, mostly hiring developers who were expected to automate tests (or rather checks). I asked how they did their test design, and how they knew what they needed to test. One of my arguments was that they limited their test effort to what could be automated.

We know that there is an infinite number of possible tests. If you have done some research, you have an idea what different stakeholders value and what they are afraid will happen. If that is so, then you have an idea which tests would be valuable to do or which areas you wish to explore. Out of all those tests, there is a part you probably want to run only once: where you want to investigate something that might be a problem, learn more about the system’s behavior, or try a specific, very difficult setup or configuration of the system. This is not something you would want to automate, because it is too costly and it is enough to learn about it just once, as far as you know. There are probably other tests that you want to repeat, and do more often, most probably with variation in new dimensions. These could be tests that focus on specific risks or on functionality that must work at all times. Out of all those that you actually want to run several times, you plan to automate a part. Out of those that you have planned to automate, only a fraction can be automated. And since automation takes a long time and is difficult, you have probably only automated a small part of that fraction.

If you are a stakeholder, how can you consider this to be ok?

Rikard Edgren visualized the concept of what is important and what should be in focus in a blog post called “In search of the potato” [see reference 3].

His main point is that the valuable and important are not only in the specification or requirements; you need to go beyond them.

Another explanation around the same concept of the potato is that of mapping the information space by knowns and unknowns.

The majority of test automation focuses on checking an aspect of the system. You probably want to make repeatable tests of things that you know, or think you know: the Known Knowns. By making this checking repeatable you will probably save time in finding regressions in things you thought you knew but that might change as the system evolves: the Unknown Knowns. In this area you can specify what you expect and what a correct result would be, subject to the limitations of the Oracle problem; more on that below.

If you look beyond the specification and the explicit, you will identify things that you want to explore and learn more about: areas for exploration, specific risks, or just an idea you wish to understand. These are the Known Unknowns. You cannot clearly state your expectations before investigating here, and you cannot, for the most part, automate the Known Unknowns.

While exploring/testing, while checking, or while doing anything with the system, you will find new things that no one so far had thought of: things that fall into the Unknown Unknowns. Through serendipity you find something surprisingly valuable. You rarely automate serendipity.

You most probably dwell in the known areas for test automation. Would it be ok to ignore things that are valuable that you do not know of until you have spent enough time testing or exploring?

The Oracle Problem
A problem that is probably unsolvable is that there are no (or at least very few) perfect or true oracles [see references 4, 5, 6].

A “True oracle” faithfully reproduces all relevant results for a SUT using independent platform, algorithms, processes, compilers, code, etc. The same values are fed to the SUT and the Oracle for results comparison. The Oracle for an algorithm or subroutine can be straightforward enough for this type of oracle to be considered. The sin() function, for example, can be implemented separately using different algorithms and the results compared to exhaustively test the results (assuming the availability of sufficient machine cycles). For a given test case all values input to the SUT are verified to be “correct” using the Oracle’s separate algorithm. The less the SUT has in common with the Oracle, the more confidence in the correctness of the results (since common hardware, compilers, operating systems, algorithms, etc., may inject errors that effect both the SUT and Oracle the same way). Test cases employing a true oracle are usually limited by available machine time and system resources.
Quote from Douglas Hoffman in A Taxonomy for Test Oracles [see reference 6].
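Hoffman's sin() example can be made concrete. The sketch below (my illustration, not from his paper) uses an independent Taylor-series implementation of sine as the oracle, and treats the standard library's math.sin as the system under test; the function and variable names are mine.

```python
import math

def taylor_sin(x, terms=15):
    """Independent implementation of sine (the oracle), using a Taylor series."""
    # Reduce x into [-pi, pi] so the series converges quickly.
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x < -math.pi:
        x += 2 * math.pi
    result, term = 0.0, x
    for n in range(terms):
        result += term
        # Next term of sin(x) = x - x^3/3! + x^5/5! - ...
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return result

def disagreements(sut, oracle, inputs, tolerance=1e-9):
    """Inputs where the system under test and the oracle differ beyond tolerance."""
    return [x for x in inputs if abs(sut(x) - oracle(x)) > tolerance]

# Compare the SUT (math.sin) against the independent oracle over many inputs.
failures = disagreements(math.sin, taylor_sin, [i / 100 for i in range(-1000, 1001)])
```

The less the oracle has in common with the SUT (here, a completely different algorithm), the more a match is worth; and as the quote notes, even this kind of exhaustive comparison is limited by available machine time and by platform errors shared between oracle and SUT.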

The traditional view of a system under test is like figure 1 below.

In reality, the situation is much more complex, see figure 2 below.

This means that we might have a rough idea about the initial state and the test inputs, but not full control of all surrounding states and inputs. The result of a test can therefore only give an indication that something is somewhat right or correct. The thing we check can be correct, while everything around it that we do not check or verify can be utterly wrong.
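A toy illustration of that last point (hypothetical names, not from the article): an automated check can pass on the one value it verifies while the surrounding state it never looks at is wrong.

```python
# Hypothetical system under test: it returns the correct value, but also
# touches surrounding state (an audit log) that the check never verifies.
audit_log = []

def apply_discount(price, percent):
    # Bug: the log should record the price, but records the percent twice.
    audit_log.append(f"discount {percent}% on {percent}")
    return price * (100 - percent) / 100

# The automated check only looks at the returned value, so it passes...
assert apply_discount(200.0, 10) == 180.0

# ...while the unchecked state around it is wrong: the price was never logged.
assert "200.0" not in audit_log[-1]
```

A human exploring the same feature might glance at the log and spot the problem immediately; the check, by design, never will.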

So when we say that we want to automate everything, we are also saying that we put our trust in something that lacks perfect oracles.

With this in mind, do we want our end-users to get a system that could work sometimes?

Spec Checking and Bug Blindness

In an article from 2011 [see reference 7], Ian McCowatt expresses his view of a universe of behaviour partitioned into Needed, Implemented, and Specified, based on the book Software Testing: A Craftsman’s Approach by Paul Jorgensen.

For automation, I would expect the focus to be on areas 5 and 6. But what about unimplemented specifications in areas 2 and 3? Or unfulfilled needs in areas 1 and 2? Or unexpected behaviours in areas 4 and 7? Undesired behaviours will be partly covered in areas 6 and 7, but is that enough?

As a stakeholder, do you think it is ok to limit the overall test effort to where automation is possible?

Concluding thoughts

It seems like we have been repeating the same things for a long time. This article is for those of you who are still fighting battles against test strategies that say “automate everything”.

References

  1. Test Automation Snake Oil, by James Bach – http://www.satisfice.com/articles/test_automation_snake_oil.pdf
  2. Testing and Checking Refined, by James Bach & Michael Bolton – http://www.satisfice.com/blog/archives/856
  3. In search of the potato, by Rikard Edgren – http://thetesteye.com/blog/2009/12/in-search-of-the-potato/
  4. The Oracle Problem and the Teaching of Software Testing, by Cem Kaner - http://kaner.com/?p=190
  5. On testing nontestable programs, by Elaine J. Weyuker – http://www.testingeducation.org/BBST/foundations/Weyuker_ontestingnontestable.pdf
  6. A Taxonomy for Test Oracles, by Douglass Hoffman – http://www.softwarequalitymethods.com/Papers/OracleTax.pdf
  7. Spec Checking and Bug Blindness, by Ian McCowatt – http://exploringuncertainty.com/blog/archives/253

Categories: Blogs

Quality Software Australia Conference 2017

My Load Test - Fri, 03/24/2017 - 06:51
I will be speaking at the upcoming Quality Software Australia conference in Melbourne on May 11, 2017. Those who plan to attend the conference can look forward to presentations from local and international thought-leaders on devops/QAops, CI, testing microservices, test automation, and other topics of interest for people who care about software quality. My presentation […]
Categories: Blogs

New Swag: MyLoadTest USB sticks

My Load Test - Fri, 03/24/2017 - 05:52
I have just placed an order for a large number of 8GB USB 3 memory sticks with a MyLoadTest logo on them. The USB sticks are available for clients, and for performance testers who help us out by finding bugs, by contributing to the web performance community, or by referring work to MyLoadTest. Don’t forget […]
Categories: Blogs

Final Posting

Rico Mariani's Performance Tidbits - Thu, 03/23/2017 - 22:54

My last day at Microsoft will be tomorrow, 3/24/2017.

I really did want to get to some of the old comments that had been neglected but alas, there’s no time.

I’m not sure I will actually lose access to this blog because of the way authorization happens but I think it would be a bad idea for me to keep using it because of its strong affiliation with MS.

Best wishes to all and thanks for reading all these years.

-Rico

@ricomariani

Categories: Blogs

Test Data: Food for Test Automation Framework

Testing TV - Thu, 03/23/2017 - 20:57
Building a Test Automation Framework is easy – there are so many resources / guides / blogs / etc. available to help you get started and help solve the issues you get along the journey. Teams already building 1000s of tests of various types – UI, web service-based, integration, unit, etc. is a proof of […]
Categories: Blogs

Can You Afford Me?

Hiccupps - James Thomas - Wed, 03/22/2017 - 23:56

I'm reading The Design of Everyday Things by Donald Norman on the recommendation of the Dev manager, and borrowed from our UX specialist. (I have great team mates.)

There's much to like in this book, including
  • a taxonomy of error types: at the top level this distinguishes slips from mistakes. Slips are unconscious and generally due to dedicating insufficient attention to a task that is well-known and practised. Mistakes are conscious and reflect factors such as bad decision-making, bias, or disregard of evidence.
  • discussion of affordances: an affordance is the possibility of an action that something provides, and that is perceived by the user of that thing. An affordance of a chair is that you can stand on it. The chair affords (in some sense is for) supporting, and standing on it utilises that support.
  • focus on mappings: the idea that the layout and appearance of the functional elements significantly impacts on how a user relates them to their outcome. For example, light switch panels that mimic the layout of lights in a room are easier to use.
  • consideration of the various actors: the role of the designer is to satisfy their client; the client may or may not be the user; the designer may view themselves as a proxy user; the designer is almost never a proxy user; the users are users; there is rarely a single user (type) to be considered.

But the two things I've found particularly striking are the parallels with Harry Collins' thoughts in a couple of areas:
  • tacit and explicit knowledge: or knowledge in the head and knowledge in the world, as Norman has it. When you are new to some task, some object, you have only knowledge that is available in the world about it: those things that you can see or otherwise sense. It is on the designer to consider how the affordances suggested by an object affect its usability. This might mean - for example - following convention, e.g. the push side of doors shouldn't have handles and the plate to push on should be at a point where pushing is efficient.
  • action hierarchies: actions can be viewed at various granularities. In Norman's model they have seven stages and he gives an example of several academics trying to thread an unfamiliar projector. In The Shape of Actions, Collins talks about an experiment attempting to operate a laboratory air pump. Both authors deconstruct the high-level task (operate the apparatus) into sub-tasks, some of which are familiar to some extent - perhaps by analogy, or by theoretical knowledge, or by having seen someone else doing it - and some of which are completely unfamiliar and require explicit experience of that specific task on that specific object.

I love finding connections like this, even if I don't know quite what they can afford me, just yet.

Categories: Blogs

Happy 10th Birthday Google Testing Blog!

Google Testing Blog - Wed, 03/22/2017 - 23:22
by Anthony Vallone

Ten years ago today, the first Google Testing Blog article was posted (official announcement 2 days later). Over the years, Google engineers have used this blog to help advance the test engineering discipline. We have shared information about our testing technologies, strategies, and theories; discussed what code quality really means; described how our teams are organized for optimal productivity; announced new tooling; and invited readers to speak at and attend the annual Google Test Automation Conference.

Google Testing Blog banner in 2007

The blog has enjoyed excellent readership. There have been over 10 million page views of the blog since it was created, and there are currently about 100 to 200 thousand views per month.

This blog is made possible by many Google engineers who have volunteered time to author and review content on a regular basis in the interest of sharing. Thank you to all the contributors and our readers!

Please leave a comment if you have a story to share about how this blog has helped you.

Categories: Blogs

The Gift of Feedback (in a Booklet)

thekua.com@work - Sun, 03/19/2017 - 20:00

Receiving timely relevant feedback is an important element of how people grow. Sports coaches do not wait until the new year starts to start giving feedback to sportspeople, so why should people working in organisations wait until their annual review to receive feedback? Leaders are responsible for creating the right atmosphere for feedback, and to ensure that individuals receive useful feedback that helps them amplify their effectiveness.

I have given many talks and written a number of articles on this topic to help you.

However, today I want to share some brilliant work from some colleagues of mine, Karen Willis and Sara Michelazzo (@saramichelazzo), who have put together a printable guide to help people collect feedback and to help structure writing effective feedback for others.

Feedback Booklet

The booklet is intended to be printed in an A4 format, and I personally love the hand-drawn style. You can download the current version of the booklet here. Use this booklet to collect effective feedback more often, and share this booklet to help others benefit too.

Categories: Blogs

A Field of My Stone

Hiccupps - James Thomas - Sat, 03/18/2017 - 09:04

The Fieldstone Method is Jerry Weinberg's way of gathering material to write about, using that material effectively, and using the time spent working the material efficiently. Although I've read much of Weinberg's work, I'd never got round to Weinberg on Writing until last month, after several prompts from one of my colleagues.

In the book, Weinberg describes his process in terms of an extended analogy between writing and building dry stone walls which - to do it no justice at all - goes something like this:
  • Do not wait until you start writing to start thinking about writing.
  • Gather your stones (interesting thoughts, suggestions, stories, pictures, quotes, connections, ideas) as you come across them. 
  • Always have multiple projects on the go at once. 
  • Maintain a pile of stones (a list of your gathered ideas) that you think will suit each project.
  • As you gather a stone, drop it onto the most suitable pile.
  • Also maintain a pile for stones you find attractive but have no project for at the moment.
  • When you come to write on a project, cast your eyes over the stones you have selected for it.
  • Be inspired by the stones, by their variety and their similarities.
  • Handle the stones, play with them, organise them, reorganise them.
  • Really feel the stones.
  • Use stones (and in a second metaphor they are also periods of time) opportunistically.
  • When you get stuck on one part of a project move to another part.
  • When you get stuck on one project move to another project.

The approach felt extremely familiar to me. Here's the start of an email I sent just over a year ago, spawned out of a Twitter conversation about organising work:
I like to have text files around [for each topic] so that as soon as I have a thought I can drop it into the file and get it out of my head. When I have time to work on whatever the thing is, I have the collected material in one place. Often I find that getting material together is a hard part of writing, so having a bunch of stuff that I can play with, re-order etc helps to spur the writing process.

For my blogging I have a ton of open text files:


You can see this one, Fieldstoning_notes.txt and, to the right of it, another called notes.txt which is collected thoughts about how I take notes (duh!) that came out of a recent workshop on note-taking (DUH!) at our local meetup.

I've got enough in that file now to write about it next, but first here's a few of the stones I took from Weinberg on Writing itself:

Never attempt to write what you don’t care about.

Real professional writers seldom write one thing at a time.

The broader the audience, the more difficult the writer’s job.

Most often [people] stop writing because they do not understand the essential randomness involved in the creative process.

... it’s not the number of ideas that blocks you, it’s your reaction to the number of ideas.

Fieldstoning is about always doing something that’s advancing your writing projects.

The key to effective writing is the human emotional response to the stone.

If I’ve been looking for snug fits while gathering, I have much less mortaring to do when I’m finishing

Don’t get it right; get it written.

"Sloppy work" is not the opposite of "perfection." Sloppy work is the opposite of the best you can do at the time.
Categories: Blogs

Pairing For Learning – Across the Team

Agile Testing with Lisa Crispin - Fri, 03/17/2017 - 02:32

I’ve written a lot about pairing over the years, most recently about strong-style pairing with others on my team. Pairing is an excellent way to transfer skills, it offers a lot of advantages for overcoming cognitive biases when testing, and it’s just plain fun.

For most of the 4.5 years I’ve been on my current team, I haven’t been able to pair with developers as much as I would like. For one thing, the developers pair with each other 100% of the time. Also, I suspect that the dev managers worried that testers would slow their developers down, even when a developer was soloing because the pod was odd that day.

On the pairing journey

Fortunately, the development managers also understand the value of exploratory testing, and want developers to improve their ET skills. I’ve written about ways we have helped non-testing team members learn exploratory testing skills. The workshops and other efforts helped, but developers felt they needed to learn more. Our team is moving towards continuous delivery, and the managers feel that developers need to step up their exploratory testing at the story level to mitigate the risk of bad issues getting out in production. Our team embarked on an experiment: each tester should pair with a developer at least one day a week.

Experiment underway

Squeal! I get to pair not only with other testers, product managers, and designers, but also with developers. Our team is divided into several vertical “pods”, and as we are so few testers, each of us has to help on two or more pods. I was pleasantly surprised that “my” pods embraced this experiment from the start. It also happened that, for various reasons, my pods were “odd”: there was one developer each day who would have to solo. Instead of solo-ing, they paired with me! Not only were they ok with this, they were actually eager to do it. One day recently, two pods were vying to have me pair with a dev!

The main intent of the experiment was to help devs learn how to write exploratory testing charters and execute exploratory testing. In practice, this has meant everything from writing charters and executing ET charters to simply working on stories.

Doing exploratory testing activities together has the expected benefits. The devs learn good techniques for writing charters (we use Elisabeth Hendrickson’s template from her book Explore It!), useful exploring resources such as personas and heuristics, and the importance of reporting what testing they did and what they learned. I think that we testers help the devs think beyond the happy path.

Pairing on “production code” too!

Do uncomfortable things in pairs. Great advice from Mike Sutton. See https://www.slideshare.net/mike.sutton/the-power-of-communities-of-practice-in-testing for more!

I’ve found pairing on story work surprisingly valuable. For one thing, I have new insight into what the developers’ job is like – it’s not easy! I get to watch them test-drive their code (they do offer to let me drive, but they are so freaking fast in their IDEs, I don’t know all those shortcuts! But I will work up the nerve eventually!) and they explain their thought process as they go. I’m learning a lot about our app’s architecture and reasons behind behavior I observe in testing, such as performance issues. As they write unit tests, I might suggest another test case, and hear “Oh, good, I didn’t think of that!” Or I might ask why a particular test is using double negatives (one of my pet peeves), and that turns out to be a helpful suggestion.

My patient teammates transfer lots of their skills to me. I’ve learned some new git parameters, I’ve learned a lot about using browser dev tools to debug CSS and other valuable activities, I’ve learned a little about BEM. I’m being exposed to lots of new things that help me understand our coding standards and process better, which I think will help me do a better job of testing.

Our app is mostly Rails and JS, but the team is also starting to code some pages in Elm. Pairing with a dev writing Elm code was rather mind-bending. Elm code prevents runtime exceptions by detecting issues during compilation and giving friendly hints on how to correct them.

We are fortunate that our developers pair at least 7 hours a day. We have pair workstations, each fitted out with an iMac, a 27″ Thunderbolt monitor, with mirrored displays, two keyboards and two mice. People move around every day, so if they have their own favorite keyboard and/or mouse they just carry it with them. This makes pairing comfortable – no craning your neck to see what the other person is doing, no weird personal space issues. If your work area doesn’t have comfortable pair workstations, see if you can set up at least one pairing area that pairs can use.

Good for what ails you

I’ve always been a fan of pairing, though I also am subject to the same impediments I hear other people cite. “We have too much work to do, we should divide and conquer”. “I will slow that person down too much because she’ll have to explain everything to me.” “My true nature of being an imposter will be exposed.” When I can overcome those excuses, I find pairing powerful for so many purposes.

Pairing is caring

Are you a tester on a team where the programmers are using poor coding practices and throwing code over the wall, expecting you to find all the bugs? Find the friendliest programmer, work up your courage, and go ask her if she will pair with you for an hour to test a feature, or write automated tests. Whatever success that brings, keep building on it. You will start building relationships that will let you get your whole delivery team engaged in solving testing problems.

If, like me, you find it hard to stop the merry-go-round of daily routine and make time to pair, put it on the calendar. Pick one day a week to pair with a tester, coder, designer, PO, BA, whomever. Once when another tester and I were finding it hard to make time for pairing, we added a daily one-hour meeting to our calendars and stuck to it as much as possible (I blogged about that experience too). I’ve also paired with total strangers who volunteered to pair with me via Twitter, with terrific results!

Take a baby step: pair with someone for an hour today. At the end of your hour, do a mini-retrospective together to discuss the benefits and disadvantages you experienced. Keep iterating and see if the benefits outweigh the downsides. (When I really do pair, I find no downsides.) It’s a great way to learn!

The post Pairing For Learning – Across the Team appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Deeper Testing (1): Verify and Challenge

DevelopSense Blog - Thu, 03/16/2017 - 21:13
What does it mean to do deeper testing? In Rapid Software Testing, James Bach and I say: Testing is deep to the degree that it has a probability of finding rare, subtle, or hidden problems that matter. Deep testing requires substantial skill, effort, preparation, time, or tooling, and reliably and comprehensively fulfills its mission. By […]
Categories: Blogs

Well, That was a Bad Idea

Hiccupps - James Thomas - Fri, 03/10/2017 - 11:02

I was listening to Giselle Aldridge and Paul Merrill on the Reflection as a Service podcast one morning this week as I walked to work. They were talking about ideas in entrepreneurship, assessing their value, when and how and with whom they should be discussed, and how to protect them when you do open them up to others' scrutiny.

I was thinking, while listening, that as an entrepreneur you need to be able to filter the ideas in front of you, seeking to find one that has a prospect of returning sufficiently well on an investment. Sometimes, you'll have none that fit the bill and so, in some sense, they are bad ideas (for you, at that time, for the opportunity you had in mind, at least). In that situation one approach is to junk what you have and look for new ideas.  But an alternative is to make a bad idea better.

I was speculating, as I was thinking, and listening, that there might be heuristics for turning those bad ideas into good ideas. So I went looking, and I found an interesting piece by Alan Dix, a lecturer at Birmingham University, titled Silly Ideas:
Thinking about bad ideas is part brainstorming, but more important about learning to think about any idea, new good ideas you have yourself, other people's existing ideas and products.

Dix suggests that deliberately (stating that you are) starting with bad ideas is itself a useful heuristic. You are naturally less attached to bad ideas; they can provoke you into trains of thought that you might not otherwise have encountered; you will have more confidence that you can improve them; they will likely generate more questions and challenge your assumptions.

He gives a set of questions for interrogating an idea, something like a SWOT analysis:

  • what is good about it? in what contexts? why?
  • what is bad about it? in what contexts? why?
  • in what contexts is it optimal?
  • how would you sell it? how would you defend it?

For me, a key aspect of this analysis is the focus on context. An idea is not necessarily unequivocally good or bad. Aspects of it might be good, or bad, or better or worse, in different scenarios, for different purposes. Dix invites you to discover which aspects are which, and in which contexts. To draw another parallel, this feels akin to factoring.

Armed with data about the idea, you can now look to change it in ways that keep the good and lose the bad, and maybe change the context or manner in which it's used. Or throw it away completely and use what you've learned about the domain to make a fresh start with a new idea.

The new idea I like best here is that of starting from a point that you assert is bad. I've encountered similar suggestions before: that functional fixedness can be reduced by starting a familiar process from an unfamiliar situation, that in brainstorming you shouldn't reject ideas as you come up with them, and that of not evaluating until you have options in the rule of three.

I enjoy ideas simply for the sake of having them. I am fascinated by the way in which ideas spawn ideas and by the way that connections are made between them. I celebrate the fact that multiple perspectives on the same idea can differ enormously. I particularly like exploring the ambiguity that can result from those perspectives at work, where the task is often to tease out and then squeeze out ambiguity, or for fun, making up corny puns. And corny puns are never a bad idea.
Image: ITV News
Categories: Blogs

ISTQB Advanced Security Tester Certification Public Course May 16 - 19, 2017 - Salt Lake City Area

I am excited to announce one of the first public courses in the USA (and perhaps the world) for the ISTQB Advanced Security Tester Certification. This course will be held May 16 - 19, 2017 in Sandy, UT.

Cyber attacks occur daily, and most businesses and government agencies are under constant attack. Unfortunately, many organizations are not doing enough to defend their physical and digital assets. Even more concerning, while some organizations have firewalls, intrusion detection systems and other defenses, few regularly test those defenses to determine their effectiveness.

In this course, you will learn a complete framework for testing security, regardless of the technology involved. This course and certification covers much more than just penetration testing. Certainly, penetration testing is an important part of security testing, but there are many other threats and vulnerabilities that require other security testing approaches.

Who Should Attend?

This course is for:
  • Software testers who hold the ISTQB Certified Tester, Foundation Level (CTFL) and want to expand their knowledge of security testing, 
  • Security testers who hold the CTFL and wish to obtain an advanced certification to solidify their knowledge, 
  • Security administrators who want to learn more about how to test the security defenses in their organization, and 
  • Anyone who wants to learn more about security testing but does not necessarily want to take the CTAL-SEC exam.

What You Need to Know:

1. This course follows the ISTQB Advanced Security Tester Syllabus and is written and presented by Randall W. Rice, chair of the ISTQB Advanced Security Tester Syllabus Working Group and holder of the CTAL-SEC, as well as all three ISTQB Core Advanced Certifications.

2. Anyone may attend this training, but to sit for the ISTQB Advanced Security Tester exam, you must hold the ISTQB Certified Tester, Foundation Level (CTFL) designation (or equivalent) and have 3+ years of software testing and related experience. Basic security and security testing concepts are assumed knowledge.

3. The course is four full days in length. No exam will be administered during the class, but attendees who meet the prerequisites and select the exam add-on option will receive a voucher to take the exam at a Kryterion Exam Center: http://www.kryteriononline.com/Locate-Test-Center

4. This is an intense, advanced level course with 28 exercises that cover all K3 and K4 learning objectives.

5. The venue will be announced soon. It will be in the Sandy, UT area. It is your responsibility to book your own hotel room.

6. Light breakfast and lunches are included.

7. A remote attendee option is available.

8. The cost is $2,495 (exam not included) for in-person attendees and $1,995 for remote attendees. There is a 10% discount for groups of 3 or more people.

9. The course program and details can be seen here: http://www.riceconsulting.com/home/index.php/ISTQB-Training-for-Software-Tester-Certification/istqb-advanced-security-tester-course.html

10. To register, please visit https://www.mysoftwaretesting.com/ISTQB_Adv_Security_Tester_Certification_Course_p/istqbsecpub.htm


If you have any questions, please contact me at 405-691-8075 or from the contact form at http://www.riceconsulting.com.

I hope to see you at this event!

Thanks,

Randy
Categories: Blogs

100 Day Deep Work - Day 6: Checking Multiple Links using WebDriver

Yet another bloody blog - Mark Crowther - Thu, 03/09/2017 - 00:11


Here we are at Day 6, which is pretty much a continuation of yesterday – I suppose I should say a refinement or refactoring. Compared to the approach taken in yesterday’s session, I scaled this right back to something simpler. Why? Here comes today’s (and yesterday’s) real learning.

I was breaking the golden rule of taking it one step at a time. Instead I was trying to work out what the overall solution looked like then code it up. That was doomed to failure so I went back to baby steps and simplified things.
First off, recall the (cut down) testing problem from yesterday:

You have a website with 2 navigation links. The expectation is more links will be added in the future. The test must check the known set of text links are present and if any new ones have been added.
That’s the first part so let’s work out some code for that.

Here’s the list of expected links:

public enum links
{
    News,
    Sport
}
We need to a) locate the links section and b) count how many links there are on the page:

var locateTheLinkSection = Driver.Instance.FindElement(By.XPath("//ul[@class='nav-tabs']"));
var actuaLinkSetCount = locateTheLinkSection.FindElements(By.TagName("a")).Count();

Good, we have a count of what is on the page, but is that what was expected? An enum has no Count() method, but Enum.GetNames gives us the number of entries:

var expectedLinkSetCount = Enum.GetNames(typeof(links)).Length;

Console.WriteLine($"Expected link set count is {expectedLinkSetCount}.");
Console.WriteLine($"Actual link count is: {actuaLinkSetCount}");
We could wrap this in an if-else or a try-catch to actually DO something given the outcome, but that’s a way to get our check done.
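For instance, a minimal if-else wrap-up might look something like this – just a sketch, reusing the TakeScreenshot helper from the Day 5 post, with an illustrative message:

```csharp
if (actuaLinkSetCount == expectedLinkSetCount)
{
    Console.WriteLine("Link counts match - no links added or removed.");
}
else
{
    // Capture evidence before failing, then stop the test
    TakeScreenshot.SaveScreenshot();
    var linkCountMismatch = $"Expected {expectedLinkSetCount} links but found {actuaLinkSetCount}.";
    throw new Exception(linkCountMismatch);
}
```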

I’d still like to report what was new, what actual links got found, etc. not just a count - but that’s something to look at later.

Good, onto day 7!

Mark


(Be sure to have a look at the book - Deep Work)
-------------------------------------------------------------------------------------------------------------
Day 5: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-day-5-iterating-over.html
Day 4: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-day-4-configuration.html
Day 3: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-c-enumerations.html
Day 2: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-day-2-comparing.html
Day 1: http://cyreath.blogspot.co.uk/2017/02/100-day-deep-work-day-1-c-namespaces.html
Day 0: http://cyreath.blogspot.co.uk/2017/02/100-day-deep-work-day-0-learning-plan.html



Categories: Blogs

Becoming a great product owner – book review

Agile Testing with Lisa Crispin - Wed, 03/08/2017 - 05:00

The product owner role in agile delivery teams has been one of the least understood and most criticized. How can one person represent all the business stakeholders, customers and end users? But if we don’t have a product owner, who gets agreement from all the different perspectives and brings the clarity we need, in advance, to know how a new feature should behave?

Those are some of the reasons I was keen to read Geoff Watts’ new book, Product Mastery: From Good to Great Product Ownership. Another reason is that I’m a long-time fan of Geoff’s work, and have learned a lot from his other books like the Coach’s Casebook.

As a tester, I work so closely with product owners, we depend on each other. Learning more about the skills POs need helps me improve how I work. This book is a valuable read for anyone on a software delivery team. If you’re currently a PO or aspire to grow into that role, it’s a must-read.

My Review: Product Mastery

Being a product owner is in some ways an impossible job, but it IS a job, and people need to know how to do it. Geoff applies his “DRIVEN” model to explore what makes POs good – and great.

As I started reading this book, I was a bit taken aback by Geoff’s “DRIVEN” model for POs: Decisive, Ruthless, Informed, Versatile, Empowering, Negotiable. “Ruthless” is not a term I associate with Geoff and his work. It strikes me as a negative term. But as I read on, the concepts all came together and made sense. Geoff presents a bold vision of what a PO can be, and what she can contribute.

Learning by example

The fictionalized, but still real-life, stories provide a great learning experience. Like many people, I learn best by hearing from people who have had the same problem I have – and how they solved it. The stories illustrate the problems and cognitive biases that trap us.

I especially like the powerful questions sprinkled throughout the book. This book will make you question and think about how you can best serve your customers. I appreciate that it delves into common problems such as imposter syndrome. Geoff encourages POs to be brave and believe in themselves. He explains ways to approach difficult conversations.

Models for POs

As a fan of models, I found the ones presented here useful, such as the decision-making matrix and the matrix of influence. The book also introduces techniques I’ve found extremely helpful, such as the Perfection Game from Jim McCarthy’s Core Protocols, and user story mapping from Jeff Patton.

An especially interesting part of the book is on the difference between customer feedback and customer data, and how to use each. We need to sit up and take notice of the analytical data available to us now. Geoff also provides help with leadership skills, explaining which styles are helpful in which situations.

Experiment to learn

Readers will get ideas for experiments to help their delivery and customer teams achieve shared understanding of features and stories, and find innovative ways to deliver value to the business in a timely way.

The post Becoming a great product owner – book review appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

100 Day Deep Work - Day 5: Iterating Over Links

Yet another bloody blog - Mark Crowther - Wed, 03/08/2017 - 02:03


Day 5 of the 100 Day Deep Work challenge!
Today’s session was a bit all over the place. I spent more of the time imagining what the solution to my current problem could be than actually solving the problem in code. Such is the way sometimes, I guess, but I certainly look forward to having mental models of code patterns in my mind to apply more readily.
Here’s the problem:

You have a website with 4 navigation links: 3 are text and 1 is a link under a company logo. The expectation is more links will be added in the future; they might be text or images. The test must check the known set of text links or linked images are present and if any new ones have been added. If new ones have been added, this should simply be reported and the known set tested anyway. The classes/methods for this must be reusable, to cover other sets of links across the website.
To do this I started by splitting out the list of known links into a Dictionary:

        public static IDictionary<string, string> knownTextLinkAndURLList
            = new Dictionary<string, string>
            {
                { "News", "http://www.bbc.co.uk/news/" },
                { "Sport", "http://www.bbc.co.uk/sport" }
            };

Then I created a method to loop over the links by looking for the link text:

        public static void TopNavigationLinksCheck()
        {
            foreach (var navigationLink in knownTextLinkAndURLList.Keys)
            {
                if (IsElementPresent(By.LinkText(navigationLink)))
                {
                    Console.WriteLine($"As expected, I saw: '{navigationLink}' link text");
                }
                else
                {
                    Console.WriteLine($"I could NOT locate the link: {navigationLink}");
                    TakeScreenshot.SaveScreenshot();
                    var pageNavigationElementException = $"I expected {navigationLink} but it was not located";
                    throw new Exception(pageNavigationElementException);
                }
            }
        }

This uses a reusable helper in the if statement, which I copied from Stack Overflow:

        public static bool IsElementPresent(By by)
        {
            try
            {
                Driver.Instance.FindElement(by);
                return true;
            }
            catch (NoSuchElementException)
            {
                return false;
            }
        }
This seems fine for checking text links, but we then need to check that the actual link target is correct. This assumes we're concerned that link text may be written incorrectly or against an agreed style, and that the URL applied might vary – say news.bbc instead of bbc.co.uk/news, for example.
Two things to investigate further, then: 1) how to check both the link text and the link target, and 2) how to confirm there are no new links added (we can already check if any are removed or their links changed).
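For point 1, one possible shape – just a sketch, reusing the knownTextLinkAndURLList dictionary and Driver.Instance from above, and assuming each known link is actually present on the page – is to find each anchor by its text and compare its href attribute:

```csharp
public static void TopNavigationLinkTargetsCheck()
{
    foreach (var knownLink in knownTextLinkAndURLList)
    {
        // Locate the anchor by its visible text, then read its href attribute
        var linkElement = Driver.Instance.FindElement(By.LinkText(knownLink.Key));
        var actualUrl = linkElement.GetAttribute("href");

        if (actualUrl == knownLink.Value)
        {
            Console.WriteLine($"As expected, '{knownLink.Key}' links to {knownLink.Value}");
        }
        else
        {
            Console.WriteLine($"'{knownLink.Key}' links to {actualUrl}, not the expected {knownLink.Value}");
        }
    }
}
```

One caveat: WebDriver's GetAttribute("href") may return the resolved absolute URL even when the page markup uses a relative one, so a plain string comparison might need adjusting in practice.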
Day 6 here we come!
Mark.
(Be sure to have a look at the book - Deep Work)
-------------------------------------------------------------------------------------------------------------
Day 4: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-day-4-configuration.html
Day 3: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-c-enumerations.html
Day 2: http://cyreath.blogspot.co.uk/2017/03/100-day-deep-work-day-2-comparing.html
Day 1: http://cyreath.blogspot.co.uk/2017/02/100-day-deep-work-day-1-c-namespaces.html
Day 0: http://cyreath.blogspot.co.uk/2017/02/100-day-deep-work-day-0-learning-plan.html
Categories: Blogs

Automated Testing with Drupal & PHPSpec

Testing TV - Tue, 03/07/2017 - 18:52
Like many Drupal teams, the CSRA/New Target web team at the Administrative Office of the US Courts is working to bring automated testing into our development and maintenance workflow. In this talk, I’ll share our current thinking on the following issues: 1. Why Behavior-Driven Development (BDD) is essential 2. Limits of using the Behat Drupal […]
Categories: Blogs

commit -m "My idea is ..."

Hiccupps - James Thomas - Tue, 03/07/2017 - 07:30

One of the many things I've learned over the years is that (for me) getting an idea out - on paper, on screen, on a whiteboard, into the air; in words, or pictures, or verbally, ... - is a strong heuristic for making it testable, for helping me to understand it, and for provoking new ideas.

Once out, and once I've forced myself to represent the idea in prose or some other kind of model, I usually find that I've teased out detail in some areas that were previously woolly. I can begin to challenge the idea, to see patterns and gaps between it and the other ideas, to search the space around it and see further ideas, perhaps better ideas.

Once out, I feel like I have freed up some brain room for more thoughts. I don't have to maintain the cloud of things that the idea was when it was only in mind and I was repeatedly running over it to keep it alive, to remember it.

Once out, once I've nailed it down that first time, I have a better idea of how to explain it to someone else. So I can choose to share the idea and get the benefits of others' challenges to it.

Don't get me wrong, I do a lot of thinking in my head. But pulling an idea out, even to somewhere only visible to me, is a commitment to the idea of the idea - which doesn't mean that I think it's a good idea; just that it's worth exploring.
Image: https://flic.kr/p/7nxbXj
Categories: Blogs