

Improving Specification by Example, BDD & ATDD

Testing TV - Wed, 06/21/2017 - 19:12
To get the most out of Behaviour Driven Development (BDD), Specification by Example (SBE) or Acceptance Test-Driven Development (ATDD), you need much more than a tool. You need high value specifications. How do we get the most out of our specification and test writing effort? How do we write testable scenarios that business people will […]
Categories: Blogs

Floating Point Quality: Less Floaty, More Pointed

James Bach's Blog - Tue, 06/20/2017 - 20:14

Years ago I sat next to the Numerics Test Team at Apple Computer. I teased them one day about how they had it easy: no user interface to worry about; a stateless world; perfectly predictable outcomes. The test lead just heaved a sigh and launched into a rant about how numerics testing is actually rather complicated and brimming with unexpected ambiguities. Apparently, there are many ways to interpret the IEEE floating point standard and learned people are not in agreement about how to do it. Implementing floating point arithmetic on a digital platform is a matter of tradeoffs between accuracy and performance. And don’t get them started about HP… apparently HP calculators had certain calculation bugs that the scientific community had grown used to. So the Apple guys had to duplicate the bugs in order to be considered “correct.”

Among the reasons why floating point is a problem for digital systems is that digital arithmetic is discrete and finite, whereas real numbers often are not. As my colleague Alan Jorgensen says “This problem arises because computers do not represent some real numbers accurately. Just as we need a special notation to record one divided by three as a decimal fraction: 0.33333…., computers do not accurately represent one divided by ten. This has caused serious financial problems and, in at least one documented instance, death.”
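Alan's one-divided-by-ten example is easy to see for yourself. A quick sketch in Python (any language using IEEE 754 doubles behaves the same way) shows the drift and the value actually stored:

```python
# The literal 0.1 cannot be represented exactly in binary floating
# point, so ten additions of it do not sum to exactly 1.0.
from decimal import Decimal

total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)   # False
print(Decimal(0.1))   # the exact binary value stored for the literal 0.1
```

The second print line shows that the stored "0.1" is a slightly different real number, which is exactly the kind of discrepancy that compounds through successive calculations.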

Anyway, Alan just patented a process that addresses this problem “by computing two limits (bounds) containing the represented real number that are carried through successive calculations.  When the result is no longer sufficiently accurate the result is so marked, as are further calculations using that value.  It is fail-safe and performs in real time.  It can operate in conjunction with existing hardware and software.  Conversion between existing standardized floating point and this new bounded floating point format are simple operations.”
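I haven't studied the patent's details, so the following is only a toy sketch of the general bound-pair idea (plain interval arithmetic, not the patented bounded floating point format): carry a [lo, hi] pair through each operation, nudging the bounds outward by one ulp so the true real result always stays inside, and inspect the width to see how much accuracy remains. In Python (3.9+ for math.nextafter):

```python
# Toy interval arithmetic: each add widens the bounds outward so the
# true real-number result is always contained in [lo, hi].
import math

def iadd(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return (math.nextafter(alo + blo, -math.inf),
            math.nextafter(ahi + bhi, math.inf))

# The double nearest to 0.1 lies slightly above the real 0.1,
# so bracket the real value between adjacent doubles.
tenth = (math.nextafter(0.1, 0.0), 0.1)

acc = (0.0, 0.0)
for _ in range(10):
    acc = iadd(acc, tenth)

lo, hi = acc
print(lo < 1.0 < hi)   # True: the true sum 1.0 lies inside the bounds
print(hi - lo)         # the interval width shows the remaining accuracy
```

A real implementation would mark results whose interval grows too wide, which is roughly the fail-safe behaviour the patent abstract describes.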

If you are working with systems that must do extremely accurate and safe floating point calculations, you might want to check out the patent.

Categories: Blogs

Code Health: Too Many Comments on Your Code Reviews?

Google Testing Blog - Tue, 06/20/2017 - 19:20
This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Tom O'Neill

Code reviews can slow down an individual code change, but they’re also an opportunity to improve your code and learn from another intelligent, experienced engineer. How can you get the most out of them?

Aim to get most of your changes approved in the first round of review, with only minor comments. If your code reviews frequently require multiple rounds of comments, these tips can save you time.

Spend your reviewers’ time wisely—it’s a limited resource. If they’re catching issues that you could easily have caught yourself, you’re lowering the overall productivity of your team.

Before you send out the code review:
  • Re-evaluate your code: Don’t just send the review out as soon as the tests pass. Step back and try to rethink the whole thing—can the design be cleaned up? Especially if it’s late in the day, see if a better approach occurs to you the next morning. Although this step might slow down an individual code change, it will result in greater average throughput over the long term.
  • Consider an informal design discussion: If there’s something you’re not sure about, pair program, talk face-to-face, or send an early diff and ask for a “pre-review” of the overall design.
  • Self-review the change: Try to look at the code as critically as possible from the standpoint of someone who doesn’t know anything about it. Your code review tool can give you a radically different view of your code than the IDE. This can easily save you a round trip.
  • Make the diff easy to understand: Multiple changes at once make the code harder to review. When you self-review, look for simple changes that reduce the size of the diff. For example, save significant refactoring or formatting changes for another code review.
  • Don’t hide important info in the submit message: Put it in the code as well. Someone reading the code later is unlikely to look at the submit message.
When you’re addressing code review comments:
  • Re-evaluate your code after addressing non-trivial comments: Take a step back and really look at the code with fresh eyes. Once you’ve made one set of changes, you can often find additional improvements that are enabled or suggested by those changes. Just as with any refactoring, it may take several steps to reach the best design.
  • Understand why the reviewer made each comment: If you don’t understand the reasoning behind a comment, don’t just make the change—seek out the reviewer and learn something new.
  • Answer the reviewer’s questions in the code: Don’t just reply—make the code easier to understand (e.g., improve a variable name, change a boolean to an enum) or add a comment. Someone else is going to have the same question later on.
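The “change a boolean to an enum” suggestion is worth a tiny example. A hypothetical sketch in Python (the names are invented for illustration, not from the post):

```python
# "Change a boolean to an enum": the call site then documents itself,
# answering the reviewer's "what does True mean here?" before it's asked.
from enum import Enum

class Layout(Enum):
    COMPACT = 1
    EXPANDED = 2

def render(layout: Layout) -> str:
    return "dense" if layout is Layout.COMPACT else "spacious"

print(render(Layout.COMPACT))   # "dense"
```

Compare `render(Layout.COMPACT)` with `render(True)`: the former needs no comment and no trip to the function definition.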
Categories: Blogs

Become an ISTQB Advanced Level Certified Tester - Live Virtual Class Forming for July 31 - Aug 3, 2017

I hope you can take advantage of a unique opportunity to attend live virtual training for the ISTQB Advanced Security Tester certification this Summer.

The course runs from 9 a.m. to 5:30 p.m. EDT, July 31 - August 3, 2017.

I will be the instructor of the course. As chair of the ISTQB Advanced Security Tester Working Group, I can bring a unique perspective to the training and prepare you to take the exam.

Here's what you need to know:

1. This is a live virtual class that you can take from your desk or home. You will be able to interact with me, ask questions, make comments, etc.

2. This will be an intensive course with over 20 exercises. I will present some material, then we will have exercise time. At the completion of each exercise, I give my perspective about the solutions.

3. We will go over every question in the ASTQB Sample Exam after each major section in the syllabus. There are nine sections in the syllabus.

4. If you can't make all the sessions, I am also including the e-learning version at no extra cost so you can make up any sessions you miss.

5. The exam is not included in the price of the course. However, the exam can be added for $200. You can use the exam voucher at any Kryterion exam center. Please note that while anyone may take the course and gain a lot from it, in order to take the exam you must first hold the ISTQB Foundation Certification (CTFL) and have 3 or more years of relevant experience in software testing or a related field.

6.  The course also includes a printed workbook. Please allow 5 - 7 days for printing and shipping the book to you. If you live outside of the USA, allow 14 days to receive the book.

7. After July 15, the registration price increases by $200. So, it's best to register soon.

8. Before registering for the class, please review the course outline and ISTQB Advanced Security Tester syllabus so you will be aware of the topics we will cover. While we do cover penetration testing, this is not a class on penetration testing. This certification and course covers many aspects of cybersecurity and the testing of security defenses.

9.  You will leave the class with an increased knowledge of how to help protect your organization by testing your security defenses to ensure they are working effectively.

10.  This course is fully accredited by the ASTQB.

11.  You can register at

If you have any other questions, please feel free to contact me by phone (405-691-8075) or through the contact form.

I hope to see you in the course!


Categories: Blogs

AutoMapper 6.1.0 released

Jimmy Bogard - Wed, 06/14/2017 - 13:25

See the release notes:


The 6.0 release broke some APIs; as with all of our dot releases, 6.1.0 adds a number of new features without breaking changes. The big features for 6.1.0 include reverse-mapping support. First, we detect cycles in mapping classes to automatically preserve references.

Much larger, however, is unflattening. For reverse mapping, we can now unflatten into a richer model:

public class Order {
  public decimal Total { get; set; }
  public Customer Customer { get; set; }
}

public class Customer {
  public string Name { get; set; }
}

We can flatten this into a DTO:

public class OrderDto {
  public decimal Total { get; set; }
  public string CustomerName { get; set; }
}

We can map both directions, including unflattening:

Mapper.Initialize(cfg => {
  cfg.CreateMap<Order, OrderDto>().ReverseMap();
});

By calling ReverseMap, AutoMapper creates a reverse mapping configuration that includes unflattening:

var customer = new Customer {
  Name = "Bob"
};
var order = new Order {
  Customer = customer,
  Total = 15.8m
};

var orderDto = Mapper.Map<Order, OrderDto>(order);

orderDto.CustomerName = "Joe";

Mapper.Map(orderDto, order);


Dogs and cats living together! We now have unflattening.


Categories: Blogs

A Test Manager?

Hiccupps - James Thomas - Mon, 06/12/2017 - 09:31

CEWT #4 was about test management and test managers. One of the things that became apparent during the day was how much of a moveable feast the role associated with this title is. And that reflects my own experience.

A few months ago, when discussing courses for the line managers in the Test team, a trainer outlined what his course would cover and asked whether I'd got any heuristics for management. I gave him these, none of which were included in his synopsis:
  • Clear and present. (Say what you think and why, and what you are committed to; encourage and answer any question; be approachable, available and responsive, or say when you can be)
  • It’s all about MOI. (Motivation: explain why we are doing what we’re doing; Organisation: set things up to facilitate work, opportunities; Innovation: be ready with ideas when they’re needed)
  • Congruency in all decisions. (Consider the other person, the context, yourself)

In advance of CEWT, one of my team asked me what I felt my responsibilities as a Test Manager are. Off the top of my head, I suggested they included the following:
  • Provide appropriate testing resource to the business.
  • Assist in the personal development of my staff.
  • Develop relationships in my teams, with my teams, across teams.

At the pub after CEWT last night I was asked what I did as a Test Manager. I replied that it's changed a lot over time, but has encompassed situations where:
  • I was the sole tester. (And also learning how to be a tester.)
  • I was planning and scheduling the testing for a small test team, working on a single product. (And also learning about planning and scheduling for others.)
  • I was planning assignments of testers to projects and teams across products. (And also learning about how to work without knowing so much about some of the work my team are doing.)
  • I was managing larger and larger teams. (And learning how to be a better manager.)
  • I was delegating larger and larger projects to other testers. (And learning how to help others to manage larger projects.)
  • I was keeping track of more and more projects across the company, as we grew. (And learning about finding ways to get the right information at the right costs.)
  • I was delegating line management responsibility to other testers. (And learning about how to help others find and express themselves in line management roles.)

Ask a slightly different question, or a different test manager, or in a different context, or about a different time ...

Get a different answer.
Categories: Blogs

Does certification have value or not?

I read a blogpost in Dutch named “Does certification have value or not?” by Jan Jaap Cannegieter. I wanted to reply, but there was no option to reply, so I decided to turn my comments into a blogpost. Since the original blogpost is in Dutch I have translated it here.

“The proponents claim that you prove to have a foundation in testing with certification, you possess certain knowledge and it supports education.” (text in blue is from the blogpost, translated by me).

Three things are said here:

  1. prove to have foundation
    Foundation? What foundation? You learn a few terms/definitions and an over-simplified “standard” process? And how important is this anyway? Also, the argument of a common language is nicely debunked by Michael Bolton here: “Common languages aint so common”
  2. possess certain knowledge
    When passing an exam, you show that you are able to remember certain things. It doesn’t prove you can apply that knowledge. And is that knowledge really important in our craft? I think knowledge is over-appreciated and skills are undervalued. I’d rather have someone who has the skills to play football well than somebody who merely knows the rules. From a foundation training, wouldn’t you at least expect to learn basic testing skills? In no ISTQB training do students use a computer. Imagine giving someone a driver’s license without them ever having sat in a car …
  3. supports education
    Really? Can you tell me how? I think the opposite is true! As an experienced teacher (I also did my share of certification training in the past), my experience is that there is too much focus on passing the exam rather than learning useful skills. Unfortunately, preparing the students for the exam takes a lot of time and focus away from the stuff that really matters. Time I would rather use differently.

Learning & tacit knowledge

So how do people learn skills? There are many resources I could point to. Try these:

In his wonderful book “The psychology of software testing” John Stevenson talks about learning on page 49:

The “sit back and listen” approach can be effective in acquiring information but appears to be very poor in the development of thinking skills or acquiring the necessary knowledge to apply what has been explained. The majority of trainers have come to realise the importance of hands on training “Learn by doing” or “experiential learning”.

John points to resources like the book “Experiential learning: experience as the source of learning and development” by David Kolb. Jerry Weinberg has also written books on experiential learning.

The resources on learning skills I mentioned earlier will tell you that experienced people know what is relevant and how things are related. Practice, experimentation and reflection are also important parts of learning. Learning a skill depends heavily on tacit knowledge. On page 50 of his book John Stevenson writes:

Päivi Tynjälä makes an interesting comment in the International Journal of Educational Research: “The key to professional development is making explicit that which has earlier been tacit and implicit, and thus opening it to critical reflection and transformation” – This means that what we learn may not be something we can explain easily (tacit) and that as we learn we try to find ways to make it explicit. This is the key to understanding and knowledge when we take something which is implicit and make it explicit. Therefore, able to reflect on what is learned and explaining our understanding.

And since testing is collecting information or learning about a product, the importance of tacit knowledge also applies to testing: John writes in his book on page 197:

“However testing is about testing the information we do not know or cannot explain (the hidden stuff). To do this we have to use tacit knowledge (skills, experience, thinking) and we need to experience it to be able to work it out. This is what is meant by tacit knowledge.”

Back to the blogpost:

The opponents say certification only shows that you’ve learned a particular book well, it says nothing about the tester’s ability and can be counterproductive because the tester is trained to a standard tester.

  • Learned a particular book
    Agree, see arguments 1 and 2 above.
  • it says nothing about the tester’s ability
    Agree, see my argumentation on skills in point 2 above: “knowledge is over appreciated and skills are undervalued”. To learn we need practice and reflection. Also tacit knowledge is an important part of learning.
  • Trained to a standard tester
    Agree. No testing that I know of is standard. Testing is driven by context. And testers with excellent skills have the ability to work in any context without using standards or templates. Have a look at the TED Talk by Dr. Derek Cabrera, “How Thinking Works”. He explains that critical thinking is an extremely important skill. Schools (and training providers) nowadays are over-engineering the curriculum: students do not learn to think, they learn to memorize stuff. Students are taught to follow instructions, like painting by numbers or filling in templates. To fix this, we need to learn how to think better! Learning to paint by numbers is exactly what knowledge-based certification does to testers! Read more about learning, thinking and how to become an excellent tester in one of my earlier blog posts: “a road to awesomeness”.

Comparison with driving license
Does a driving license show anything? Well, at least you have studied the traffic rules well and know them. And, while driving, it is quite useful if we all use the same rules. If you doubt that, you should drive a couple of rounds in Mumbai.

In testing we should NEVER use the same rules as a starting point. “The value depends on the context!” . Driving in Mumbai or anywhere by strictly adhering to the rules, will result in accidents and will get you killed. You need skills to drive a car and be able to anticipate, observe, respond to unexpected behaviour of others, etc. This is what will keep you out of trouble while driving.

As I explained earlier on the TestNet website, this comparison is wrong in many ways. For a driver’s license, you must pass a practical exam. And to pass the practical exam, most people take lessons! You will have driven for at least 20 hours before your exam. And the exam is not a laboratory: you go on the (real) road in a real car. A multiple-choice exam does not even remotely resemble a real situation. That’s how pointless ISTQB or TMap certificates are. Nowhere in the training or the exam does the student use software, nor does the student have to test anything!

This is the heart of the problem! People do not learn how to test, but they learn to memorize outdated theory about testing. Unfortunately in many companies new and inexperienced testers are left unattended in complex environments without the right supervision and support!

So what would you prefer in your project: someone who can drive a car (someone who has the basic skills to test software), or someone who knows the rules (someone who knows all the process steps and definitions by heart)? In addition, ISTQB states that the training is intended for people with 6 months of experience. So how are new testers going to learn during those first 6 months?

The foundation for a tester?

The argument that the ISTQB foundation training provides a basis for a beginner to start is nonsense! It teaches the students a number of terms and a practically unusable standard process. In addition, there is a lot of theory about test techniques and approaches, but the practical implementation is lacking. There are many better alternatives as described in the resources earlier in this blogpost: learning by doing! Of course with the right guidance, support and supervision. Teach beginners the skills to do their work, as we learn the skills to drive a car in driving lessons. In a safe environment with an experienced driver next to us. Until we are skilled enough to do it without supervision. Sure, theory and explicit knowledge are important, but skills are much more important! And we need tacit knowledge to apply the explicit knowledge in our work.

So please stop stating that foundation training like TMap and ISTQB is a good start for people to learn about testing. It isn’t. Learning to drive a car starts with practicing actually driving the car.

Jan Jaap states he thinks a tester should be certified: “And what about testers? I think that they should also be certified. From someone who calls himself a professional tester we may expect some basic knowledge and knowledge about certain methods?“.
I think we may expect professional testers to have expertise in different methods. They should be able to do their job, which demands skills and knowledge. We may expect a bit more from professional testers than only some basic knowledge and knowledge about methods.

“Many of the well-known certification programs originated when IT projects looked very different and, in my view, these programs did not grow with the developments. So they train for the old world”

Absolutely true.

“Another point where the opponents have a point is the value purchasing departments or intermediaries attach to certificates. In many of the purchasing departments and intermediaries, the attitude seems that if someone has a certificate, it is also a good tester. And to say that, more is needed.”

It is indeed very sad that this is the main reason why certificates are popular. Many people get certified because of the demand from organisations that do not recognise the true value of these certificates. Organisations are often not able (or do not want to spend the time needed) to recognise real professional testers, and so they rely on certificates. On how to solve this problem, I did a webinar “Tips, Tricks & Lessons Learned for Hiring Professional Testers” and wrote an article about it for Testing Circus.

Learning goals & value

On the ISTQB website I found the Foundation Level learning goals. Let’s have a look at them. Quotes from the website are in purple.

Foundation Level professionals should be able to:

  • Use a common language for efficient and effective communication with other testers and project stakeholders.
    Okay, we can check with an exam whether the student knows how ISTQB defines stuff. However, understanding what it means or how to deal with it in daily practice is very different. Also, again, a common language is a myth.
  • Understand established testing concepts, the fundamental test process, test approaches, and principles to support test objectives.
    Concepts and test process: okay, you can check whether a student remembers these. However, the content is old, outdated and in many places incorrect! I think understanding of approaches cannot be checked in a (multiple-choice) exam. Maybe some definitions, but how to apply them? No way.
  • Design and prioritize tests by using established techniques; analyze both functional and non-functional specifications (such as performance and usability) at all test levels for systems with a low to medium level of complexity.
    Design and prioritize tests? Interesting. Where is this trained? Or tested in the exam? Analyse specifications? That is not even part of the training. Applying some techniques is, but there is a lot more to designing and prioritizing tests and analysing specifications.
  • Execute tests according to agreed test plans, and analyze and report on the results of tests.
    Neither execution of tests nor analysis and reporting of test results is part of the exam. In class only the theory of test reporting is discussed, never practiced.
  • Write clear and understandable incident reports.
    How do you check this with a multiple-choice exam? And how do you train this skill without actually testing software in class? There are no exercises in class that actually ask you to write such reports.
  • Effectively participate in reviews of small to medium-sized projects.
    The theory about reviews is part of the class. To effectively participate in reviews, you need to do it and learn from experience.
  • Be familiar with different types of testing tools and their uses; assist in the selection and implementation process.
    Some tools and their goals and uses are mentioned in class. So I will agree with the first part. But to assist in selection and implementation, again you need skills.

So looking at the learning goals above, I doubt that the current classes teach this. The exam certainly doesn’t prove that a foundation level professional is able to do these things. A lot of promises that are just wrong! Certification training like ISTQB-F and TMap, as it stands now, is simply not worth the money! The training and exam typically take 3 days and cost around 1,700 euros in the Netherlands. I think that is a crazy investment for what you get in return… There are better ways to invest that money, time and effort!

I think a more valuable 3-day foundation training is doable, but surely not the way it is done now by TMap or ISTQB. I wrote a blog post about it years ago: “What they teach us in TMap Class and why it is wrong!”.

More blogs / presentations about certification:

Categories: Blogs

How to update app.config file using PowerShell?

Testing tools Blog - Mayank Srivastava - Fri, 06/09/2017 - 12:49
The code below will help to update the app.config file with the given data:

# It connects to the database and gets the data.
$HostName = $env:computername
$connectionString = 'Server=XX.XX.XX.XX;Database=TestDataBase;User Id=VM;Password=VMTest;MultipleActiveResultSets=True'
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "SELECT [Params].value('(/root//Version/node())[1]', 'nvarchar(max)') as FirstName from Request where [Params].value('(/root//Name/node())[1]', 'nvarchar(max)') = '" + $HostName + "'"
$command = $connection.CreateCommand()
$command.CommandText = $query…
Categories: Blogs

Code Health: Reduce Nesting, Reduce Complexity

Google Testing Blog - Thu, 06/08/2017 - 20:24
This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Elliott Karpilovsky

Deeply nested code hurts readability and is error-prone. Try spotting the bug in the two versions of this code:

Code with too much nesting:

response = server.Call(request)

if response.GetStatus() == RPC.OK:
  if response.GetAuthorizedUser():
    if response.GetEnc() == 'utf-8':
      if response.GetRows():
        vals = [ParseRow(r) for r in
                response.GetRows()]
        avg = sum(vals) / len(vals)
        return avg, vals
      else:
        raise EmptyError()
    else:
      raise AuthError('unauthorized')
  else:
    raise ValueError('wrong encoding')
else:
  raise RpcError(response.GetStatus())

Code with less nesting:

response = server.Call(request)

if response.GetStatus() != RPC.OK:
  raise RpcError(response.GetStatus())

if not response.GetAuthorizedUser():
  raise ValueError('wrong encoding')

if response.GetEnc() != 'utf-8':
  raise AuthError('unauthorized')

if not response.GetRows():
  raise EmptyError()

vals = [ParseRow(r) for r in
        response.GetRows()]
avg = sum(vals) / len(vals)
return avg, vals

Answer: the "wrong encoding" and "unauthorized" errors are swapped. This bug is easier to see in the refactored version, since the checks occur right as the errors are handled.

The refactoring technique shown above is known as guard clauses. A guard clause checks a criterion and fails fast if it is not met. It decouples the computational logic from the error logic. By removing the cognitive gap between error checking and handling, it frees up mental processing power. As a result, the refactored version is much easier to read and maintain.

Here are some rules of thumb for reducing nesting in your code:
  • Keep conditional blocks short. It increases readability by keeping things local.
  • Consider refactoring when your loops and branches are more than 2 levels deep.
  • Think about moving nested logic into separate functions. For example, if you need to loop through a list of objects that each contain a list (such as a protocol buffer with repeated fields), you can define a function to process each object instead of using a double nested loop.
Reducing nesting results in more readable code, which leads to discoverable bugs, faster developer iteration, and increased stability. When you can, simplify!
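The third rule of thumb, moving nested logic into a separate function, might look like this sketch (the names are illustrative, not from the post):

```python
# Before: a double nested loop buried in one function.
# After: the inner loop lives in a helper, cutting one nesting level
# at every call site.

def total_field_length(records):
    """Sum the lengths of every field across all records."""
    return sum(record_field_length(record) for record in records)

def record_field_length(record):
    """Process one record; callers no longer need a nested loop."""
    return sum(len(field) for field in record["fields"])

data = [{"fields": ["ab", "c"]}, {"fields": ["defg"]}]
print(total_field_length(data))   # 7
```

Each function now reads at a single level of abstraction, which is the readability gain the post is after.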
Categories: Blogs

But is it Automation?

Hiccupps - James Thomas - Thu, 06/08/2017 - 10:27

Recently, I needed to quickly explore an aspect of the behaviour of an application that takes a couple of text file inputs and produces standard output.

To get a feel for the task I set up one console with an editor open on two files (1.txt and 2.txt) and another console in which I ran the application this way:
$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b c
d e f
< a b c d e f
> a b c
> d e f

$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b c d e f

$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b cd e f
< a b c d e f
> a b cd e f
As you can see I have a single command line that dumps both the inputs and the outputs. (And diff was not the actual application I was testing!)

After each run I changed some aspect of the inputs in the first console, pressed up and enter in the second console.

What am I achieving here? I have a simple runner and record of my experiments and an easy visual comparison across the whole set. It's quick to set up and in each iteration I'm in the experiment rather than the infrastructure of the experiment.

I could have, for example, created a ton of files and run them in some kind of scripted harness or laboriously by hand. But I was short of time and I wanted to spend the time I had on exploring - on responding to what I'd observed - and not on managing data or investing in stuff I wasn't sure would be valuable yet.

I still hear and see too much about manual and automated testing for my comfort. Is what I did here manual testing? Is it automation? Could a "manual tester" really not get their head around something like this? Could an "automation tester" really not stoop so low as to use something this unsophisticated?

Bottom line for me: there's a tool that is at my disposal to serve my needs at appropriate cost, with appropriate trade-offs, and in appropriate situations. Why wouldn't I use it?
Categories: Blogs

Testing in Future Space with ScalaTest

Testing TV - Wed, 06/07/2017 - 18:28
ScalaTest is a popular open source testing tool in the Scala ecosystem. In ScalaTest 3.0’s new async testing styles, tests have a result type of Future[Assertion]. Instead of blocking until a future completes, then performing assertions on the result, you map assertions onto the future and return the resulting Future[Assertion] to ScalaTest. The test will […]
Categories: Blogs

Book Review: Scaling Teams - Mon, 06/05/2017 - 23:31

This weekend I finished reading Scaling Teams by Alexander Grosse & David Loftesness.

I know Grosse personally and was looking forward to reading the book, knowing his own personal take on dealing with organisations and the structure.

tl;dr Summary

A concise book offering plenty of practical tips and ideas of what to watch out for and do when an organisation grows.

Detailed summary

The authors have done extensive reading and research, talking to lots of people in different organisations to understand their take on how they have grown. They have grouped their findings and opinions into five areas:

  • Hiring
  • People Management
  • Organisational Structure
  • Culture
  • Communication

In each of these different areas, they describe the different challenges that organisations experience when growing, sharing a number of war stories, warning signs to look out for, and different approaches to dealing with them.

I like the pragmatic "there's no single answer" approach to a lot of their advice: they acknowledge in each section the different factors that might make you favour one option over another, and that there are always trade-offs to think about. In doing so, they make some of these trade-offs a lot more explicit, and equip new managers with different examples of how companies have handled some of these situations.

There are a lot of links to reading materials which, in my opinion, were heavily web-centric. The articles were definitely relevant and up to date in the context of the topics being discussed, but I would expect that from a freshly published book. A small improvement would have been to group them all together in a references section at the end, or perhaps (hint hint) the authors might publish all the links on their website.

What I really liked about this book is its wide-reaching, practical advice. Although the book is aimed at rapidly growing start-ups, I find the advice useful for many of the companies we consult for, who are often already very successful businesses.

I’ll be adding it to my list of recommended reading for leaders looking to improve their technology organisations. I suggest you get a copy too.

Categories: Blogs

Taking Note

Hiccupps - James Thomas - Sat, 06/03/2017 - 07:40

In Something of Note, a post about Karo Stoltzenburg and Neil Younger's recent workshop on note-taking, I wrote:
I am especially inspired to see whether I can distil any conventions from my own note-taking ... I favour plain text for note-taking on the computer and I have established conventions that suit me for that. I wonder are any conventions present in multiple of the approaches that I use?

Since then I've been collecting fieldstones as I observe myself at work, talking to colleagues about how they see my note-taking and how it differs from theirs, and looking for patterns and lack of patterns in that data.

Conventions

I already knew that I'd been revising and refining how I take notes on the computer for years. Looking back I can see that I first blogged about it in The Power of Fancy Plain Text in 2011 but I'd long since been crafting my conventions and had settled on something close to Mediawiki markup for pretty much everything. And Mediawiki's format still forms the basis for much of my note-taking, although that's strongly influenced by my work context.

These are my current conventions for typed notes:
  • * bullet lists. Lots of my notes are bullets because (I find) it forces me to get to "the thing"
  • ... as a way to carry on thoughts across bullets while preserving the structure
  • > for my side of a conversation (where that is the context), or commentary (in other contexts)
  • / emphasis
  • " for direct quotes
  • ---- at start line and end line for longer quoted examples, code snippets, command line trace etc
  • ==, ====, ==== etc for section headers
  • +,-,? as variant bullet points for positive, negative, questionable
  • !,? as annotations for important and need-to-ask

These are quick to enter, being single characters or repeated single characters. They favour readability in the editor over strict adherence to Mediawiki, e.g. I use a slash rather than repeated single quotes for emphasis because it looks better in email and can be search-replaced easily.

I am less likely to force a particular convention on paper and I realise that I haven't put much time into thinking about the way I want to take notes in that medium. Here's what I've come up with by observation:
  • whole sentences or at least phrases
  • quotation marks around actual quotes
  • questions to me and others annotated with a name
  • starring for emphasis
  • arrows to link thoughts, with writing on the arrows sometimes
  • boxes and circles (for emphasis, but no obvious rhyme or reason to them)
  • structure diagrams; occasional mind map
  • to-do lists - I rarely keep these in files
  • ... and I cross out what I've done
  • ... and I put a big star next to things I crossed out that I didn't mean to

Why don't I care to think so hard about hand-written notes? Good question. I think it's a combination of these factors: I don't need to, I write less on paper these days, the conventions I've evolved intuitively serve me well enough, it is a free-form medium and so inventing on the fly is natural, information lodges on paper for a short time - I'll type up anything I want to keep later.

Similarities and Differences

I want to get something of that natural, intuitive spirit when typing too, although I'm not expecting the same kind of freedom as a pen on paper. What I can aim for is less mediation between my brain and the content I'm creating. To facilitate this I have, for example:
  • practised typing faster and more accurately, and without looking at my fingers
  • learned more keyboard shortcuts, e.g. for navigating between applications, managing tabs within applications, placing the cursor in the URL bar in browsers, and moving around within documents
  • pinned a set of convenient applications to the Windows taskbar in the same order on all of the computers I use regularly
  • set up the Quick Access Toolbar in Office products, and made it the same across all Office products that I use
  • made more use of MRU (most recently used) lists in applications, including increasing their size and pinning files where I can

With these, for example, I can type Windows-7, Alt-5 to open Excel and show a list of recently-used and pinned files. Jerry Weinberg aims to record his fieldstones within five seconds of thinking of them. I don't have such strict goals for myself, but I do want to make entering my data as convenient as possible, and as much like simply picking up a notepad and turning to the page I was last working on as I can.

That's one way I'm trying to bring my hand and typed note-taking closer together in spirit, at least. There are also some content similarities. For instance, I tend to write whole sentences, or at least phrases. Interestingly, I now see that I didn't record that in my list of conventions for typed notes above. Those conventions concentrate solely on syntax and I wonder if that is significant.

I don't recall an experiment where I tried hard not to write in sentences. The closest I can think of is my various attempts to use mind maps, where I find myself frustrated at the lack of verbal resolution that the size of the nodes encourages - single words for the most part. Again, I wonder whether I don't trust myself enough to remember the points that I had in mind from the shorter cues.

In both hand and typed notes, I overload some of the conventions and trust context to distinguish them. For example, on paper I can use stars for emphasis or specifically to note that something needs to be considered undeleted. On screen I'll use ? for questions and also uncertainty. I also find that I rarely start numbered lists because I don't want the overhead of going back and renumbering if I want to insert an item into the list.

Something else that I do in both cases is "layering". In Something of Note I mentioned that I'd shown my notes to another tester and we'd observed that I take what I've written and add "layers" of emphasis, connections, sub-thoughts, and new ideas on top of them. (Usually I'll do this with annotations, or perhaps sidebars linked to content with arrows.)

Similarly, one of my colleagues watched me taking notes on the computer during a phone call and commented on how I will (mostly unconsciously) take down points and then go back and refine or add to them as more information is delivered, or I have commentary on the points I've recorded.

There are some differences between the two modes of note-taking. One thing that I notice immediately is that there is no equivalent to doodling in my computer-based notes where my hand-written notes are covered in doodles. I don't know what to conclude from that.

Also, I will use different textual orientations in my written notes, to squeeze material into spaces which mean it is physically co-located with text that is related to it in some way. I don't have that freedom on screen and so any relationships have to be flagged in other ways, or rely on e.g. dynamically resizing lists to add data - something that's less easy on paper.

Where I am aggregating content into a single file over time - as I do with my notes in 1-1 meetings - I almost always work top-down so that the latest material is at the bottom and I can quickly scroll up to get recent context. (I find this intuitive, but I know others prefer latest material at the top.)

Because I don't aggregate content over time in the same way on paper, I don't have quite the same option. I write all of my notes into the same notebook, regardless of context (though I may start a new page for a new topic) so I don't have lots of places to look for a particular note that I made.

Within a notebook, I can flick back through pages to look for related material. I date-stamp my notebooks with a sticker on the front so that I can in principle go back to earlier books, but I rarely do either over periods anything longer than a handful of days.

One other major difference - a side-effect, but a significant one - is that I can easily search my computer notes.

Choosing

I found that there are situations where I'll tend to use one or other of the note-taking techniques, given free choice. I prefer hand-written notes for:
  • technical meetings
  • meetings where it's less important that I maintain a record
  • meetings where typing would be intrusive or colleagues have said they find it distracting
  • informal presentations, our Team Eating brown bag lunches, local meetups
  • face-to-face job interviews
  • team meetings
  • to-do lists
  • when I need to make diagrams
  • when I don't have access to my computer

Whereas computer-based notes tend to be used for:
  • 1-1 (whether I'm the manager or the report)
  • writing reports
  • writing testing notes (including during sessions)
  • writing blogs
  • where I'm trying to think through an idea
  • when I want to copy-paste data from elsewhere or use hyperlinks
  • when I want to not have to write up later
  • when I want to be able to continue adding content over an extended period of time 

And there are occasions where I use both in tandem. For example, when engaged in testing I'll often record evidence in screenshots and drop the file location into my notes.

I might sketch a mind map on paper to help me to explore a space, then write it up in an editor because that helps me to explore the nature of the relationships.  This is probably a special case of a more general approach where I'll start on paper and switch to screen when I feel I have enough idea - or sometimes when I don't - because editing is cheaper on the computer. From Tools: Take Your Pick:
Most of my writing starts as plain text. Blog posts usually start in Notepad++ because I like the ease of editing in a real editor, because I save drafts to disk, because I work offline ... When writing in text files I also have heuristics about switching to a richer format. For instance, if I find that I'm using a set of multiply-indented bullets that are essentially representing two-dimensional data it's a sign that the data I am describing is richer than the format I'm using. In particular, I will aggressively move to Excel for tabular data. (And I have been refining the way I use Excel for quick one-off projects too; I love tables.)

Reflections

I am an inveterate note-taker and I think I'll always prefer to record more rather than less. But when it comes to the formatting, I'll always prefer less over more. For me, the form should serve the content and nothing else, and a simpler format is (all other things being equal) a more portable format.

It appears that I'm happy to exploit differences where it serves me well, or doesn't disadvantage me too much - I clearly am not trying to go to only hand-written or only computer-based notes. But I do want to reduce variation where it doesn't have value because it means I can switch contexts without having to switch technique and that means a lower cost of switching, because I might already be switching domain, task, type of reasoning etc. In a similar spirit, I am interested in consolidating content. I want related notes in the same place by default.

But I'm not a slave to my formatting conventions: something recorded somehow now is better than nothing recorded perfectly later. I will tend to do the expedient over the consistent, and then go back and fix it if that's merited. I very deliberately default to sticking to my conventions but notice when I find myself regularly going against them, because that indicates that I probably need to change something.

Right now I am in the process of considering whether to change from ---- at the start and end of blocks to using three dashes and four dashes at start and end respectively. Why? Because sometimes I need to replace the blocks with <pre> and </pre> tags for the wiki. Marking up the start and end with the same syntax doesn't aid me in search-replacing.
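To make that concrete, here is a minimal sketch (in Python, purely illustrative; the function name and the exact regular expressions are my own, not part of the notes described above) of why asymmetric open and close markers make the conversion a stateless search-replace:

```python
import re

def blocks_to_pre(text):
    """Convert note block markers to wiki tags.

    Assumes the asymmetric convention discussed above:
    a line of three dashes opens a block, four dashes close it.
    """
    text = re.sub(r"^---$", "<pre>", text, flags=re.MULTILINE)
    text = re.sub(r"^----$", "</pre>", text, flags=re.MULTILINE)
    return text

print(blocks_to_pre("---\nsome command line trace\n----"))
```

With the same marker at both ends you would instead need to track open/close state while scanning; distinct markers let each replacement stand alone.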

When I am trying to introduce some new behaviour, I will force myself to do it. If I fail, I'll go back and redo it to help to build up muscle memory. I think of this as very loosely like a kata. For example, I was slower at typing for a while when I started to type in a more traditional way, but I put up with that cost in the belief that I would ultimately end up in a better place. (And I did.)

I think that my computer note-taking is influencing the way that I write non-note content. To give one illustration: over the years I have evolved my written communications (particularly email) to have a more note-like structure. I am now likely to write multiple one-sentence paragraphs, pared back to the minimum I think is necessary to get across the point or chain of reasoning that I want to deliver.

Likewise, I try to write more, shorter paragraphs in my blog, because research I've read, and my own experience, suggests that this is a more consumable format on screen. (After seeing how much content I'd aggregated for this blog post, I considered splitting it too.)

I use text files as repositories of related information, but I also sometimes have a level of organisation above the file I'm working in. I'm recruiting as I write this. If, after I review a CV, I want to talk to the candidate, I start a text file in the folder I'm maintaining for this round of recruitment. My notes on the CV go there, as do questions I'll ask when we speak. On the phone I'll type directly into the file, recording their answers, my thoughts on their answers, new questions I want to ask and so on. At the end of the interview, I'll briefly review and note down my conclusions in the file too.

The same technique applies to my team. I have weekly 1-1 with my team and an annual review cycle. I make a folder per person, inside that a folder per cycle and, inside that I have a text file, called Notes.txt. In 1-1 I will enter notes while we talk. Outside of 1-1 I'll drop thoughts, questions, suggestions and so on into the file in preparation for our next meeting. Over time, this becomes an historical record too, so I can provide longitudinal context to discussions.

This stuff works for me - or at least, is working for me right now better than anything else I've tried recently and given the kinds of assessments I've made of it - but none of it is set in stone. My overarching goal is to be efficient and effective and I'm always interested in other people's conventions in case I can learn something that helps me to improve my own.
Categories: Blogs

Agile Testing Essentials – LiveLessons video course

Agile Testing with Lisa Crispin - Fri, 06/02/2017 - 15:04
Agile Testing Essentials video course

Janet Gregory and I offer our new five-hour introduction to agile testing, based on our books Agile Testing: A Practical Guide for Testers and Agile Teams and More Agile Testing: Learning Journeys for the Whole Team. "Agile Testing Essentials" is for anyone working on or with a software delivery team who wants to learn the basic principles and practices for building quality into your software product.

In the course, we spend 5-10 minutes explaining an agile testing concept, technique or practice, then give you an exercise to help you practice it yourself. Then we discuss and show you how we would approach the exercise. Janet and I share our personal experiences and give lots of examples to help you learn.

Read Lisi Hocke’s review and Mike Talks’ review to learn more about the course and whether it will fit your needs. Please email me with any questions.

The post Agile Testing Essentials – LiveLessons video course appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

How to convert PowerShell Object to a String?

Testing tools Blog - Mayank Srivastava - Wed, 05/31/2017 - 09:42
Like any other language, PowerShell also supports conversion of an object to a string for many kinds of manipulation. In the course of my work, I have come across two ways which help to convert an object to a string very easily. Out-String $string = Get-CimInstance Win32_OperatingSystem | Select-Object {$_.Version} | Out-String %{$_.Version} $string = Get-CimInstance…
Categories: Blogs

Being Agile in HR with Peer Recruiting

A collaboration by Alexa Fuhren and Mario Moreira
Does a manager know better than a team who fits best to a role? How can we recruit the right people that fit best to our Agile organization? The answer is, by being Agile ourselves, particularly in the recruiting process!
In a more traditional working environment, if there is a vacancy in a team, the manager approaches the recruiter, shares the requirements of the role, hands over responsibility for the recruiting process to the HR department, and is involved again when interviewing and selecting candidates. The recruiter is responsible for creating a job ad, posting it in appropriate recruiting channels, pre-selecting candidates, inviting the manager to interviews and making an offer to the selected candidate. The team usually plays a minor role in selecting the candidate.

Many teams in Agile operate with a self-organizing model. This model includes much more team ownership, autonomy, and responsibility and accountability for all team members than traditionally operating teams. In self-organizing models, the concept of peer recruiting can be applied, where the team plays a much stronger role in selecting the right candidate who fits best into the team. Due to a better person-team fit, a reduction in early employee turnover could be a desired outcome.
If teams are responsible for selecting new team members, this will change the role of the recruiter from owning the recruiting process to supporting the process and coaching the team. Depending on the knowledge and experience of the team, the recruiter will be more or rather less involved in selecting the right candidate.
Self-organizing teams can be responsible for the whole recruiting process and accountable for hiring the right candidate. It starts with creating a (new) job profile for the vacancy. The Recruitment Coach will challenge the team to figure out which profile is needed to increase their current and future team performance. When creating a job ad, the Recruitment Coach can give advice on how to make it compelling and will provide templates that are in line with corporate design.
Team members can post the job ad on job boards and in their social media channels (LinkedIn, Xing, Facebook, chatrooms, private networks). After pre-selecting the candidates based on previously defined criteria, the team invites the selected candidates for interviews, role plays, presentations etc. They can choose to ask the manager or recruiter to interview the candidates. The recruiter's role will be to train the team on interview techniques and how to avoid evaluation errors like stereotyping, the halo effect or the Pygmalion effect.
Implementing peer recruiting means moving the decision to the people who know best who fits to their teams. It helps to speed up the recruiting process by reducing long decision making processes with managers and HR.
What is in it for the company?
  • Faster decisions due to less interactions with HR and the manager
  • Higher team commitment
  • Less turnover in the first 6 months of employment due to a better company-person fit
  • Recruiter can focus on strategic work, e.g. employer branding, building networks etc., and become a valuable coach for the recruiting processes
What is in it for the candidate?
  • Candidate experiences an Agile culture right from the first contact with the company
  • Candidate gets to know the colleagues he will closely work with
  • Job interview at eye level with team members instead of the potential manager
Peer recruiting shifts the recruiter's role to that of a coach who supports the business in making hiring decisions faster, selecting candidates that fit best to the company, and lowering the early turnover rate. Enabling the team to select new team members increases their autonomy, which can lead to higher team commitment and higher team performance.

Learn more about Alexa Fuhren at:

Mario Moreira writes more about Agile and HR in his book "The Agile Enterprise" in Chapter 21, "Reinventing HR for Agile".
Categories: Blogs

Best Practise

Hiccupps - James Thomas - Sun, 05/28/2017 - 18:26
I've said many times on here that writing for me is a kind of internal dialogue: compose a position, propose it in writing, challenge it in thought, and repeat.

I get enormous value from this approach, and have done for a long time. But in two discussions last week (Lean Coffee and a testing workshop with one of the other teams at Linguamatics) I found additional nuances that I hadn't considered previously.

First: in some sense, the approach I take is like pairing with myself. Externalising my ideas sets up, for me, the opportunity to take an alternative perspective that doesn't exist to the same extent when I'm only working in my head. It's often about the way I'm thinking as much as the content of my thoughts, and I speculate that this is a good grounding for being criticised by others when we're working together.

Second: writing and re-reading makes my position clear to me, and forces me to work out a way in which I can put it across. Since I started blogging there are numerous times in discussions that I've realised I am paraphrasing from something I've written. In the past I've tended to be a bit embarrassed by that but now I can see that, in fact, it's largely because I spent the time working it out before that I have it available to me now.

These are both things that are useful to me and that I want to get more benefit from. And, while I might agree that outside of a specific context there are no best practices, I also know that if I want to get those outcomes from my writing, I'd best practise.
Categories: Blogs

Three ways to handle CFRs - Sun, 05/28/2017 - 17:00

Cross-Functional Requirements (CFRs) are some of the key system characteristics that it is important to design and account for. Internally we refer to these as CFRs, although classically they might be called Non-Functional Requirements (NFRs) or System Quality Attributes. However, their cross-cutting nature means you always need to consider the impact of CFRs on new or existing functionality.

In the Tech Lead courses that I run, we discuss how important it is that the Tech Lead ensures the relevant CFRs are identified and accounted for, either in design or in development. Here are three ways I have seen CFRs handled.

1. CFRs satisfied via user stories and acceptance criteria

Security, authentication and authorisation stories are CFRs that naturally lend themselves to building out testable functionality. It's important to weigh the effort against the risk and, in my experience, to start implementing these early to make sure they meet the needs and can evolve.

For these sorts of CFRs, it’s useful to identify these as natural user stories, and once implemented become acceptance criteria on future user stories that touch that area of the system.

As an example, authorisation can be dealt with by introducing a new persona role and describing what its members can do (or not do) that others cannot:

As an administrator, I would like to change the email server settings via a user interface, so that I do not need to raise an IT change request for it.

If this is the first time that this user story is implemented, then some acceptance criteria might look like:

  • Only a user with an administrator role can access this page
  • Only a user with an administrator role can successfully update the email setting (check the API)
  • Users with no administrator access receive a 403 or equivalent

Adding this new role often means considering new acceptance criteria for every story going forward (whether it should be accessible only by administrators or by all users).
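As a sketch of how those criteria might drive an implementation (in Python, with hypothetical names; a real check would live in your web framework's request pipeline):

```python
FORBIDDEN = 403  # HTTP status: authenticated but not allowed
OK = 200

def update_email_settings(user_roles, new_settings):
    """Only users with the administrator role may update the settings."""
    if "administrator" not in user_roles:
        return FORBIDDEN, None  # non-admins get a 403, per the criteria
    return OK, {"email_settings": new_settings}
```

Each acceptance criterion above then maps directly to a test against this behaviour, e.g. asserting that a user without the administrator role receives the 403.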

2. CFRs satisfied through architectural design

Scalability and durability are often CFRs that require upfront thinking about the architectural design, and perhaps planning for redundancy in the form of additional hardware, network, or bandwidth capacity. A web-based solution that needs to be scalable might draw upon the 12-factor application principles, as well as considering the underlying hardware. Starting to code without thinking about the architectural patterns that enable scalability will lead to additional rework later, or may even make it impossible to scale.

3. CFRs satisfied via the development process

User experience is a CFR that often requires people to evaluate it, making automated testing much more difficult. An application that demands a high level of user experience is best served by ensuring that a person with a background in UX is involved and that certain activities and feedback cycles are planned into the software development process to continually fine-tune the user experience as the application evolves.

Changes to the development process might include explicit user research activities, continuous user testing activities, the addition of an A/B capability and some training for product people and the development team to ensure that the developed software meets the desired level of user experience.


Every system has its own set of Cross-Functional Requirements (CFRs), and it is essential that teams focus on identifying the relevant and important CFRs and find ways to ensure they are met. In this article, I shared three typical ways that CFRs might be met.

How else have you seen these handled?

Categories: Blogs

Cambridge Lean Coffee

Hiccupps - James Thomas - Wed, 05/24/2017 - 21:48

This month's Lean Coffee was hosted by Redgate. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

What benefit would pair testing give me?
  • I want to get my team away from scripted test cases and I think that pairing could help.
  • What do testers get out of it? How does it improve the product?
  • It encourages a different approach.
  • It lets your mind run free.
  • It can bring your team closer together.
  • It can increase the skills across the test group.
  • It can spread knowledge between teams.
  • You could use the cases as jumping-off points.
  • I am currently pairing with a senior tester on two approaches at the same time: functional and performance.
  • For pairing to work well, you need to know each other, to have a relationship.
  • There are different pairing approaches.
  • How long should you pair for?
  • We turned three hour solo sessions into 40 minute pair sessions.
  • You can learn a lot, e.g. new perspectives, short-cuts, tips.
  • Why not pair with developers?

Do you have a default first test? What it is? Why?
  • Ask what's in the build, ask what the expectation is.
  • A meta test: check that what you have in front of you is the right thing to test.
  • It changes over time; often you might be biased by recent bugs, events, reading etc to do a particular thing.
  • Make a mind map.
  • A meta test: inspect the context; what does it make sense to do here?
  • A pathetic test: just explore the software without challenging it. Allow it to demonstrate itself to you.
  • Check that the problem that is fixed in this build can be reproduced in an earlier build.

How do you tell your testing story to your team?
  • Is it a report, at the whiteboard, slides, a diagram, ...?
  • Great to hear it called a story, many people talk about a report, an output etc.
  • Some people just want a yes or no; a ship or not.
  • I like the RST approach to the content: what you did, what you found, the values and risks.
  • Start writing your story early; it helps to keep you on track and review what you've done
  • Writing is like pairing with yourself!
  • In TDD, the tests are the story.

One thing that would turn you off a job advert? One thing that would make you interested?
  • Off: a list of skills (I prefer a story around the role).
  • Off: needing a degree.
  • Interested: the impression that there's challenge in the role and unknowns in the tasks.
  • The advert is never like the job!
  • Interested: describes what you would be working on.
  • Off: "you will help guarantee quality".
  • Interested: learning opportunities.
  • Interested: that it's just outside of my comfort zone.
Categories: Blogs

Dealing With Optimistic Concurrency Control Collisions

Jimmy Bogard - Wed, 05/24/2017 - 00:06

Optimistic Concurrency Control (OCC) is a well-established solution for a rather old problem - handling two (or more) concurrent writes to a single object/resource/entity without losing writes. OCC works (typically) by including a timestamp as part of the record, and during a write, we read the timestamp:

  1. Begin: Record timestamp
  2. Modify: Read data and make tentative changes
  3. Validate: Check to see if the timestamp has changed
  4. Commit/Rollback: Atomically commit or rollback transaction

Ideally, steps 3 and 4 happen together to avoid a dirty read. Most applications don't need to implement OCC by hand, and you can rely either on the database (through snapshot isolation) or on an ORM (Entity Framework's concurrency control). In either case, we're dealing with concurrent writes to a single record by chucking one of the writes out the window.
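The four steps can be sketched in a few lines (an in-memory illustration only, using a version counter in place of the timestamp; a real database or ORM does the validate-and-commit atomically):

```python
class StaleWriteError(Exception):
    """The record changed between our read and our write."""

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0            # stands in for the timestamp

def occ_write(record, read_version, new_value):
    # Step 3: validate that nobody wrote since we read
    if record.version != read_version:
        raise StaleWriteError()
    # Step 4: commit the tentative change
    record.value = new_value
    record.version += 1

r = Record("a")
v = r.version                       # step 1: record the version
occ_write(r, v, "b")                # steps 2-4: modify, validate, commit
try:
    occ_write(r, v, "c")            # stale: the version has moved to 1
except StaleWriteError:
    print("collision detected")
```

The second write fails because it still holds the version it read before the first write committed, which is exactly the collision case the rest of this post is about.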

But OCC doesn't tell us what to do when we encounter a collision. Typically this is surfaced through an error (from the database) or an exception (from infrastructure). If we simply do nothing, the easiest option, we return the error to the client. Done!

However, in systems where OCC collisions are more likely, we'll likely need some sort of strategy to provide a better experience to end users. In this area, we have a number of options available (and some we can combine):

  • Locking
  • Retry
  • Error out (with a targeted message)

My least favorite is the first option - locking, but it can be valuable at times.

Locking to avoid collisions

In this pattern, we'll have the user explicitly "check out" an object for editing. You've probably seen this with older CMS's, where you'll look at a list of documents and some might say "Checked out by Jane Doe", preventing you from editing. You might be able to view, but that's about it.

While this flow can work, it's a bit hostile to the user: how do we know when the original user is done editing? Typically we'd implement some sort of timeout. You see this with finite resources, like buying a ticket to a movie or sporting event. When you "check out" a seat, the browser tells you "You have 15:00 to complete the transaction", and the timer ticks down while you scramble to enter your payment information.

This kind of flow makes more sense in that scenario, where our payment depends on choosing the seat we want. We're explicit with the user holding the lock via a visible countdown, and explicit with other users by simply not showing those seats as available. That's a good UX.
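As a rough sketch of that check-out-with-timeout pattern, here's a hypothetical in-memory lock table in Python; all the names and the timeout value are illustrative, and a real system would persist the locks:

```python
import time

LOCK_TIMEOUT_SECONDS = 15 * 60  # the "You have 15:00" countdown

locks = {}  # resource_id -> (user, expires_at)

def try_check_out(resource_id, user, now=None):
    """Grant the lock if the resource is free, held by us, or its lock expired."""
    now = time.monotonic() if now is None else now
    holder = locks.get(resource_id)
    if holder is not None and holder[1] > now and holder[0] != user:
        return False  # someone else holds a live lock
    locks[resource_id] = (user, now + LOCK_TIMEOUT_SECONDS)
    return True

assert try_check_out("seat-12A", "jane")        # Jane checks out the seat
assert not try_check_out("seat-12A", "roger")   # Roger is refused while it's live
```

Once the expiry passes, the lock is treated as free again, so an abandoned check-out doesn't hold the seat forever.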

I've also had the OTHER kind of UX, where I yell across the cube farm "Roger are you done editing that presentation yet?!?"

Retrying the transaction

Another popular option is to retry the transaction, steps 1-4 above. If someone has edited the record out from under us, we just re-read the record, including the timestamp, and try again. If we can detect this kind of exception, from a broad category of transient faults, we can safely retry. If it's a more permanent exception, a validation error or the like, we fall back to our normal error handling logic.
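A minimal sketch of that transient-versus-permanent distinction might look like this in Python. The exception types are hypothetical; in practice you'd catch your database driver's or ORM's concurrency exception:

```python
class ConcurrencyError(Exception):
    """Raised when the optimistic version check fails (hypothetical)."""

class ValidationError(Exception):
    """A permanent, non-retryable failure (hypothetical)."""

def with_retries(operation, max_attempts=3):
    """Re-run the whole read-modify-write (steps 1-4) on collisions only."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConcurrencyError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the collision to the caller
        # ValidationError and everything else propagate immediately;
        # retrying won't make invalid input valid.
```

Note that `operation` must re-read the record each time it runs, so every retry sees the freshest timestamp.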

But how much should we retry? One time? Twice? Ten times? Until the eventual heat death of the universe? Well, probably not that last one. And will an immediate retry result in a higher likelihood of success? And in the meantime, what is the user doing? Waiting?

If we give up and return an immediate error to the user, we leave it up to them to decide what to do. Ideally we've combined this with option number 3 and give them a targeted "please try again" message.

That still leaves the question - if we retry, what should be our strategy?

It should probably be no surprise here that we have a lot of options on retries, and also a lot of literature on how to handle them.

Before we look at retry options, we should go back to our user - a retry should be transparent to them, but we do need to set some bounds here. Assuming that this retry is happening as the result of a direct user interaction where they're expecting a success or failure as the result of the interaction, we can't just retry forever.

Regardless of our retry decision, we must return some sort of result to our user. A logical timeout makes sense here: just make sure the user gets something back within time T. Maybe that's 2 seconds, 5 seconds, or 10 seconds; it will be highly dependent on your end users' expectations. If they're already dealing with a highly contended resource, waiting might be okay for them.
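One way to honor that bound, sketched in Python: retry with jittered exponential backoff, but stop as soon as the next sleep would blow past the overall deadline. The delay constants here are placeholders, not recommendations:

```python
import random
import time

def retry_until_deadline(operation, is_transient, deadline=2.0, base_delay=0.05):
    """Retry transient failures, but always answer within `deadline` seconds."""
    start = time.monotonic()
    attempt = 0
    while True:
        try:
            return operation()
        except Exception as exc:
            if not is_transient(exc):
                raise  # permanent failure: normal error handling
            attempt += 1
            # Full jitter: sleep a random slice of an exponentially growing cap.
            delay = random.uniform(0, base_delay * (2 ** attempt))
            if time.monotonic() - start + delay > deadline:
                raise TimeoutError("could not commit before the deadline") from exc
            time.sleep(delay)
```

The caller supplies `is_transient` to decide which exceptions are worth retrying, and the user is guaranteed either a success, their original error, or a timeout within the agreed window.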

The elephant

One option I won't discuss, but is worth considering, is to design your entity so that you don't need concurrency control. This could include looking at eventually consistent data structures like CRDTs, naturally idempotent structures like ledgers, and more. For my purposes, I'm going to assume that you've exhausted these options and really just need OCC.
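For a flavor of the ledger idea, here's a toy append-only ledger in Python. Writers only ever append uniquely identified entries, so there is no shared record to collide on, replays are harmless, and the balance is derived by folding over the entries; the names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    entry_id: str  # unique per logical operation, so replays are no-ops
    amount: int

ledger = []
seen = set()

def append_entry(entry):
    if entry.entry_id in seen:  # idempotent: re-applying the same entry is safe
        return
    seen.add(entry.entry_id)
    ledger.append(entry)

def balance():
    # Current state is a pure function of the append-only history.
    return sum(e.amount for e in ledger)

append_entry(Entry("deposit-1", 100))
append_entry(Entry("withdraw-1", -30))
append_entry(Entry("deposit-1", 100))  # duplicate retry, ignored
```

Because nothing is ever updated in place, two concurrent writers can both succeed without any version check at all.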

In the next post, I'll take a look at a few retry patterns and some ways we can incorporate them into a simple web app.

Categories: Blogs