Rice Consulting Announces Accreditation of New Certification Training Course for Testing Cyber Security

Press Release: For Immediate Release

Oklahoma City, OK, February 24, 2017: Randall Rice, internationally recognized author, consultant and trainer in software testing and cyber security testing, is excited to announce the accreditation of his newest course, the ISTQB Advanced Security Tester Certification Course.

This is a course designed for software testers and companies who are looking for effective ways to test the security measures in place in their organization. This course teaches people in-depth ways to find security flaws in their systems and organizations before they are discovered by hackers.

The course is based on the Advanced Security Tester Syllabus from the International Software Testing Qualifications Board (ISTQB), of which Randall Rice is chair of the Advanced Security Tester Syllabus working party.  The American Software Testing Qualifications Board (ASTQB) granted accreditation on Tuesday, February 21, 2017. Accreditation verifies that the course content covers the certification syllabus and glossary. In addition, the reviewers ensure that the course covers the materials at the levels indicated in the syllabus.

“With thousands of cyber attacks occurring on a daily basis against many businesses and corporations, it is urgent that companies have some way to know if their security defenses are actually working effectively. One reason we keep hearing about large data breaches is because companies are trusting too much in technology and are failing to test the defenses that are in place. Simply having firewalls and other defenses installed does not ensure security,” explained Randall Rice. “This course provides a holistic framework that people can use to find vulnerabilities in their systems and organizations. This framework addresses technology, people and processes used to achieve security.”

This course is currently available on-site, as a public course, and in an online format. For further details, visit http://www.riceconsulting.com/home/index.php/ISTQB-Training-for-Software-Tester-Certification/istqb-advanced-security-tester-course.html. To schedule a course to be presented in your company, contact Randall Rice at 405-691-8075 or by e-mail.

Randall W. Rice, author and trainer of the course, is a Certified Tester, Advanced Level and is on the Board of Directors of the ASTQB. He is the co-author, with William E. Perry, of two books, "Surviving the Top Ten Challenges of Software Testing" and "Testing Dirty Systems."

Categories: Blogs

100 Day Deep Work - Mastering Automation

Yet another bloody blog - Mark Crowther - Fri, 02/24/2017 - 14:25
Hi All,
I recently caught a tweet linking to a blog post by James Willett (Twitter / Blog) where he mentioned the idea of doing a 100 day Deep Work Challenge. The basic idea is that over 100 days you do a 90 minute focused session each day to achieve a defined learning or productivity goal. It's such a great idea that I've decided to take up the challenge!
Now I haven't read the book that James refers to, but hey, grab it via my Amazon link. Instead, I've read the very informative blog post he created. Make sure to read it and have a look at the infographic he produced. While I recognise that reading the book would probably be wise, I'm going to say I don't need to, as I already know what I want to study and, having done similar challenges in the past, James' post is a good enough guide.
Seriously, go read it http://james-willett.com/2017/02/the-100-day-deep-work-challenge/
So what’s my challenge?
A New Year's Resolution
At the start of 2017 I made a commitment to transforming my technical capability with automation – by the end of the year. Yes, I've been doing automation as an element of my delivery toolkit for about 5 years, but I've never felt I have the deep expertise that I have around testing. I'm happy that 90% of the time I am the best tester in the room. I'm not being arrogant, it's just that I've studied, written, presented, mentored, taught and applied what I do for the last 15+ years. I'd better be pretty good by now!
With automation, however, I've always felt there's a huge body of knowledge I have yet to acquire and a depth of expertise that I have a duty to possess when delivering automation to clients, but don't yet. That troubles me. My wife disagrees, saying I am probably better than I think. She may be right, but I know what level I want to achieve and how that looks in terms of delivery, and I'm not there yet.
#100DayDeepWork
So, to the Challenge. In summary, I'm going to focus on the deep learning and subsequent practical use of C#, Selenium WebDriver, SpecFlow (and so BDD) and Git. As I'm not paying for the SpecFlow+ Runner I'm going to generate reports using Pickles.
Let's look in detail at the 6 Rules James outlines in his blog post:
1) 90 Minutes every day
That's actually fine; I easily spend that each day studying anyway, and though it's a longish session, the idea is that I accelerate the learning.
Caveat – There's a catch here: I am NOT doing this at weekends. Simply because we have a family agreement that I can work and study as hard as I like in the week, but weekends are for family. Laptop shut, 100% attention to family. No exceptions.
2) No distractions
As Rule 3 stipulates doing the Deep Work first, that's fine, as I'll be locked in a room on my own.
3) Deep Work first
The Deep Work will be done first thing in the morning, so that's also just fine. It means getting up a notable amount of time earlier, but that just means I need to get to bed earlier. Not a bad thing, as it'll stop me 'ghosting' around through the small hours as I often do. I need to be out to work by 8.00am, so my start time is going to be 6am. Ugh, let's see if I can keep that up!
4) Set an Overall Goal
The Goal is reasonably simple to prove, as a friend and I have set up a new site called www.TheSeleniumGuys.com, where the goal is to provide a real back-to-basics and step-by-step series of posts and pages that allow newcomers to automation to get set up and running with Selenium-based automation. If that site isn't content heavy by mid-year, you know I didn't complete the challenge.
5) Summarise every session
Every session will be summarised on this blog, using the tag #100DayDeepWork, and I'll post a link on Twitter each day and sometimes on LinkedIn. Yep, no hiding if I succeed or fail. I'll not only post the update about what I'm learning, I'll share how the challenge is going generally.
6) Chart your Progress
I'm going to make a Calendar / Chart with the days showing, then publish it each day on this blog and link it via Twitter too. As per the Caveat in Step 1, that means I'll achieve the 100 days in roughly 5 months. Feels like a long haul already.
There it is; 100 days of Deep Work, 100 Tweets, 100 Blog posts. Let’s see how this goes!
As a last thought – Let's add a Good Cause into the mix
Blog views and advert clicks off those posts generate revenue. My ad revenue is minimal, about £1 a week on average. If you take the time to view the posts daily, you'll generate ad revenue. If you see an ad you like then click it and there'll be a bit extra generated. At the footer of each post I'll add any affiliate links I have. Use them to generate affiliate revenue.
At the end of the 100 days I’ll add up all revenue generated from this crazy project and donate it to a Charity you suggest + 50% from my own pocket :)
OK, onto the Deep Work!
Mark


Categories: Blogs

The Testing Kraftwerk

Hiccupps - James Thomas - Fri, 02/24/2017 - 10:20

If you're around testers or reading about testing it won't be long before someone mentions models. (Probably after context but some time before tacit knowledge.)

As a new tester in particular, you may find yourself asking what they are exactly, these models. It can be daunting when, having asked to see someone else's model, you are shown a complex flowchart, or a state diagram, or a stack of UML, a multi-coloured mindmap, or a barrage of blocked-out architectural components linked by complex arrangements of arrows with various degrees of dottedness.

But stay strong, my friend, because - while those things and many others can be models and can be useful - models are really just a way of describing a system, typically to aid understanding and often to permit predictions about how the system will behave under given conditions. What's more, the "system" need not be the entirety of whatever you're looking at nor all of the attributes of it.

It's part of the craft of testing to be able to build a model that suits the situation you are in at the time. For some web app, say, you could make a model of a text field, the dialog box it is in, the client application that launched it, the client-server architecture, or the hardware, software and comms stacks that support the client and server.

You can model different bits of the same system at the same time in different ways. And that can be powerful, for example when you realise that your models are inconsistent, because if that's the case, perhaps the system is inconsistent too ...

I'm a simple kind of chap and I like simple models, if I can get away with them. Here's a bunch of my favourite simple model structures and some simple ideas about when I might try to use them, rendered simply.

Horizontal Line
You're looking at some software in which events are triggered by other events. The order of the events is important to the correct functioning of the system. You could try to model this in numerous ways, but a simple way, a foothold, a first approximation, might be to simply draw a horizontal line and mark down the order you think things are happening in.


Well done. There's your model, of the temporal relationship between events. It's not sophisticated, but it represents what you think you know. Now test it by interacting with the system. Ah, you found out that you can alter the order. Bingo, your model was wrong, but now you can improve it. Add some additional horizontal lines to show relationships. Boom!

Vertical Pile
So horizontal lines are great, sure, but let's not leave the vertical out of it. While horizontal seems reasonably natural for temporal data, vertical fits nicely with stacks. That might be technology stacks, or call sequences, process phases, or something else.

Here's an example showing how some calls to a web server go through different libraries, and which might be a way in to understanding why some responses conform to HTTP standards and some don't. (Clue: the ones that don't are the ones you hacked up yourself.)


Scatter Plot
Combine your horizontal and vertical and you've got a plane on which to plot a couple of variables. Imagine that you're wondering how responsiveness of your application varies with the number of objects created in its database. You run the experiments and you plot the results.


If you have a couple of different builds you might use different symbols to plot them both on the same chart, effectively increasing its dimensionality. Shape, size, annotations, and more can add additional dimensions.

Now you have your chart you can see where you have data and you can begin to wonder about the behaviour in those areas where you have no data. You can then arrange experiments to fill them, or use your developing understanding of the application to predict them. (And then consider testing your prediction, right?)

Just two lines and a few dots, a biro and a scrap of paper. This is your model, ladies and gentlemen.

Table
A picture is worth a thousand words, they say. A table can hold its own in that company. When confronted with a mass of text describing how similar things behave in different ways under similar conditions I will often reach for a table so that I can compare like with like, and see the whole space in one view. This kind of approach fits well when there are several things that you want to compare in several dimensions.

In this picture, I'm imagining that I've taken written reports about the work that was done to test some versions of a piece of software against successive versions of the same specification. As large blocks of text, the comparisons are hard to make. Laid out as a table I have visibility of the data and I have the makings of a model of the test coverage.


The patterns that this exposes might be interesting. Also, the places where there are gaps might be interesting. Sometimes those gaps highlight things that were missed in the description, sometimes they're disallowed data points, sometimes they were missed in the analysis. And sometimes they point to an error in the labels. Who knows, this time? Well, you will soon. Because you've seen that the gaps are there you can go and find out, can't you?

I could have increased the data density of this table in various ways. I could have put traffic lights in each populated cell to give some idea of the risk highlighted by the testing done, for example. But I didn't. Because I didn't need to yet and didn't think I'd want to and it'd take more time.

Sometimes that's the right decision and sometimes not. You rarely know for sure. Models themselves, and the act of model building, are part of your exploratory toolkit and subject to the same kinds of cost/value trade-offs as everything else.

A special mention here for Truth tables which I frequently find myself using to model inputs and corresponding outcomes, and which tester isn't fascinated by those two little blighters?

Circle
The simple circle. Once drawn you have a bipartition, two classes. Inside and outside. Which of the users of our system run vi and Emacs? What's that? Johnny is in both camps? Houston, we have a problem.


This is essentially a two variable model, so why wouldn't we use a scatter plot? Good question. In this case, to start with I wasn't so interested in understanding the extent of vi use against Emacs use for a given user base. My starting assumption was that our users are members of one editor religion or another and I want to see who belongs in each set. The circle gives me that. (I also used a circle model for separating work I will do from work I won't do in Put a Ring on It.)

But it also brings Johnny into the open. The model has exposed my incorrect assumption. If Johnny had happened not to be in my data set, then my model would fit my assumptions and I might happily continue to predict that new users would fall into one of the two camps.

Implicit in that last paragraph are other assumptions, for example that the data is good, and that it is plotted accurately. It's important to remember that models are not the thing that they model. When you see something that looks unexpected in your model, you will usefully ask yourself these kinds of questions:

  • is the system wrong?
  • is the data wrong?
  • is the model wrong?
  • is my interpretation wrong?
  • ...
Venn Diagram
The circle's elder sister. Where the circle makes two sets, the Venn makes arbitrarily many. I used a Venn diagram only this week - the spur for this post, as it happens - to model a collection of text filters whose functionality overlaps. I wanted to understand which filters overlapped with each other. This is where I got to:


In this case I also used the size of the circles as an additional visual aid. I think filter A has more scope than any of the others so I made it much larger. (I also used a kind of Venn diagram model of my testing space in Your Testing is a Joke.)

And now I have something that I can pass on to others on my team - which I did - and perhaps we can treat each of the areas on the diagram as an initial stab at a set of equivalence classes that might prove useful when testing this component.

In this post, I've given a small set of model types that I use frequently. I don't think that any of the examples I've given couldn't be modelled another way and on any given day I might have modelled them other ways. In fact, I will often hop between attempts to model a system using different types as a way to provoke thought, to provide another perspective, to find a way in to the problem I'm looking at.

And having written that last sentence I now see that this blog post is the beginnings of a model of how I use models. But sometimes that's the way it works too - the model is an emergent property of the investigation and then feeds back into the investigation. It's all part of the craft.
Image: In Deep Music Archive


Categories: Blogs

Are your Mocks Mocking at You?

Testing TV - Wed, 02/22/2017 - 18:41
Ever since J.B. Rainsberger’s ‘integrated tests are a scam’, many developers try to get rid of their massively integrated tests and test their units in isolation. Co-operation of units is tested with mocks and stubs. But – depending on the language used – this mocking can be more or less trustworthy. I present a prototypical […]
Categories: Blogs

Use "Golden Image" to test Big Ball Of Mud software systems

Chris McMahon's Blog - Tue, 02/21/2017 - 01:42

So I had a brief conversation on Twitter with Noah Sussman about testing a software system designed as a "Big Ball Of Mud" (BBOM).

We could talk about the technical definition of BBOM, but in practical terms a BBOM is a system where we understand and expect that changing one part of the system is likely to cause unknown and unexpected results in other, unrelated parts of the system. Such systems are notoriously difficult to test, but I tested them long ago in my career, and I was surprised that Noah hadn't encountered this approach of using a "Golden Image" to accomplish that.

Let's assume that we're creating an automated system here. Every part of the procedure I describe can be automated.

First you need some tests. And you'll need a test environment. BBOM systems come in many different flavors, so I won't specify a test environment too closely. It might be a clone of the production system, or a version of prod with less data. It might be something different from that.

Then you need to be able to make a more-or-less exact copy of your test environment. This may mean putting your system on a VM or a Docker image, or it may be a matter of simply copying files. However you accomplish it, you need to be able to make faithful "Golden Image" copies of your test environment at a particular point in time.

Now you are ready to do some serious testing of a BBOM system using Golden Images:

Step One: Your test environment right now is your Golden Image. Make a copy of your Golden Image.

Step Two: Install the software to be tested on the copy of your Golden Image. Run your tests. If your tests pass, deploy the changes to production. Check to make sure that you don't have to roll back any of the production changes. If your tests fail or if your changes to production get rolled back, go back to Step One.

Step Three: the copy of your first Golden Image with the successful changes is your new Golden Image. You may or may not want to discard the now obsolete original Golden Image, see Step Five below.

Step Four: Add more tests for the system. Repeat the procedure at Step One.

Step Five (optional) You may want to be able to compare aspects of a current Golden Image test environment with previous versions of the Golden Image. Differences in things like test output behavior, file sizes, etc. may be useful information in your testing practice.
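To make the "every part of this can be automated" point concrete, here is a minimal sketch of the loop driven from C#, shelling out to Docker. The image names, Dockerfile and test command are assumptions for illustration only, not part of the procedure above.

using System;
using System.Diagnostics;

class GoldenImageLoop {
    static int Docker(string args) {
        // Shell out to the docker CLI and wait for it to finish.
        var process = Process.Start(new ProcessStartInfo("docker", args) {
            UseShellExecute = false
        });
        process.WaitForExit();
        return process.ExitCode;
    }

    static void Main() {
        // Steps One and Two: build a copy of the Golden Image with the new
        // software installed (Dockerfile.candidate starts FROM bbom:golden),
        // then run the test suite against that copy.
        Docker("build -t bbom:candidate -f Dockerfile.candidate .");
        var testsPassed = Docker("run --rm bbom:candidate ./run-tests.sh") == 0;

        // Step Three: if the tests pass (and the deploy sticks), the copy
        // becomes the new Golden Image; otherwise keep the old one.
        if (testsPassed)
            Docker("tag bbom:candidate bbom:golden");
        else
            Console.WriteLine("Tests failed; keeping the previous Golden Image.");
    }
}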





Categories: Blogs

Help Linnea

“There is a saying that it takes a whole village to raise a child. Now we need a whole village to save our Linnea”

Linnea, Kristoffer Nordström's daughter, is five and a half years old and comes from Karlskrona in Sweden. Up until recently her world revolved around My Little Ponies, riding her bicycle and popcorn… lots of popcorn. She has one best friend: her beloved big brother Kristian.
That was her world – until a few months ago, when she suddenly and shockingly fell ill and had emergency surgery for a brain tumor.
After the operation, we hoped that the bad news would end. But now the family lives in the hospital and has been told that the tumor is an aggressive variety called DIPG (Diffuse Intrinsic Pontine Glioma). The short story is that there is a heart-breakingly minimal chance of survival using established treatments.

There is a possible treatment that we are now aiming for: one in which the tumor is treated through catheters implanted directly into it. Studies and reports show that such a direct treatment gives Linnea the best chance of one day becoming healthy. The cost of the treatment and the journeys is very high. Higher than the average person can pay: £65,000 for the first operation and then £6,500 for each treatment thereafter. In the current situation, it is unclear how many of these Linnea will need.

Please help Kristoffer and his family!

Categories: Blogs

Before Testing

Hiccupps - James Thomas - Mon, 02/20/2017 - 07:25

I happened across Why testers? by Joel Spolsky at the weekend. Written back in 2010, and - if we're being sceptical - perhaps a kind of honeytrap for Fog Creek's tester recruitment process, it has some memorable lines, including:
what testers are supposed to do ... is evaluate new code, find the good things, find the bad things, and give positive and negative reinforcement to the developers.

Otherwise it's depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares.

you really need very smart people as testers, even if they don't have relevant experience. Many of the best testers I've worked with didn't even realize they wanted to be testers until someone offered them the job.

The job advert that the post points at is still there and reinforces the focus on testing as a service to developers and the sentiments about feedback, although it looks like, these days, they do require test experience.

It's common to hear testers say that they "fell into testing" and I've offered jobs to, and actually managed to recruit from, non-tester roles. On the back of reading Spolsky's blog I tweeted this:
#Testers, one tweet please. What did you do before testing? What's the most significant difference (in any respect) between that and now? — James Thomas (@qahiccupps) February 18, 2017

And, while it's a biased and self-selected sample (those who happen to be close enough to me in the Twitter network, those who happened to see it in their timeline, and those who cared to respond) with no statistical validity, I enjoyed reading the responses and wondering about patterns.

Please feel free to add your own story about the years BT (Before Testing) to either the thread or the comments here.
Image: https://flic.kr/p/rgXeNz
Categories: Blogs

Refactoring Towards Resilience: Async Workflow Options

Jimmy Bogard - Fri, 02/17/2017 - 23:44

Other posts in this series:

In the last post, we looked at the coupling to the 3rd-party resources we use as part of "button-click" place order, and whether we truly needed that coupling. As a reminder, coupling is neither good nor bad; it's the side effects of coupling on the business that we need to evaluate as desirable or undesirable, based on tradeoffs. We concluded that in terms of what we needed to couple:

  • Stripe: minimize checkout fallout rate; process offline
  • Sendgrid: Send offline
  • RabbitMQ: Send offline

Basically, we determined that none of our actions needs to happen right at button click. This doesn't hold true for every checkout page, but we can make that assumption for this example.

Sidenote - in the real-life version of this, we opted for Stripe - undo, SendGrid - ignore, RabbitMQ - ignore, with offline manual re-sending based on alerts.

With this in mind, we can design a process that manages the side effects of an order placement separately from placing the order itself. This is going to make our process more complicated, but distributed systems tend to be more complicated if we decide we don't want to blissfully ignore failures.

Starting the workflow

Now that we've decided we can process our three resources out-of-band, the next question becomes "how do I signal to that back-end processing to do its work?" This largely depends on what your backend includes, whether you're on-prem or in the cloud, and so on. Azure, for example, offers a number of options for "async processing", including:

  • Azure WebJobs
  • Azure Service Bus
  • Azure Scheduler

In my situation, we weren't deploying to Azure so that wasn't an option for us. For on-prem, we can look at:

From these three, I'm inclined towards Hangfire as it's easy to integrate into my Web API/MVC/ASP.NET Core app. A single background job executing say, once a minute, can check for any pending messages and send them along:

// Recurring Hangfire job: once a minute, forward any pending outbox messages.
RecurringJob.AddOrUpdate(() => SendPendingOutboxMessages(), "*/1 * * * *");

public void SendPendingOutboxMessages() {
    using (var db = new CartContext()) {
        var unsent = db.OutboxMessages.ToList();
        foreach (var msg in unsent) {
            Bus.Send(msg);
            db.OutboxMessages.Remove(msg); // drop the message once it's been sent
        }
        db.SaveChanges();
    }
}

Not too complicated, and this will catch any unsent messages from our API that we tried to send after the DB transaction. Once a minute should be quick enough to catch unsent messages and still not make it seem to the end user that they're missing emails.

Now that we've got a way to kick off our workflow, let's look at our workflow options themselves.

Workflow options

There's still some ordering I need to enforce on my external resource operations, as I don't want emails to be sent without payment success. Additionally, because of the resilience options we saw earlier, I don't really want to couple each operation together. Because of this, I really want to break my workflow into multiple steps:

In our case, we can look at three major workflows: Routing Slip, Saga, and Process Manager. The Process Manager pattern can be further broken down into more detailed patterns: from the Microservices book, "Choreography" and "Orchestration", or, as I detailed them a few years back, "Controller" and "Observer".

With these options in mind, let's look at each in turn to see if they would be appropriate to use for our workflow.

Routing Slip

Routing slip is an interesting pattern that allows each individual step in the process to be decoupled from the overall process flow. With a routing slip, our process would look like:

We start by creating a message that includes a routing slip, and include a mechanism to forward it along:

Bus.Route(msg, new [] {"Stripe", "SendGrid", "RabbitMQ"});

Where "RabbitMQ" is really just an endpoint at this point, used to publish an "OrderComplete" message.
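To make the forwarding mechanics concrete, here's a rough sketch of how one step might do its work and pass the slip along. The RoutingSlip shape and IBus interface below are made up for illustration; they're not from any particular framework.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class RoutingSlip {
    // The itinerary: endpoints still to visit, in order.
    public Queue<string> RemainingSteps { get; set; } = new Queue<string>();
    public Guid OrderId { get; set; }
}

public interface IBus {
    Task Send(string destination, object message);
}

public class StripeStepHandler {
    private readonly IBus _bus;
    public StripeStepHandler(IBus bus) { _bus = bus; }

    public async Task Handle(RoutingSlip slip) {
        // Do this step's work (charge the payment), then pass the slip on.
        await ChargePaymentAsync(slip.OrderId);

        if (slip.RemainingSteps.Count > 0) {
            var next = slip.RemainingSteps.Dequeue(); // e.g. "SendGrid"
            await _bus.Send(next, slip);
        }
    }

    private Task ChargePaymentAsync(Guid orderId) => Task.CompletedTask; // stand-in
}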

From the business perspective, does this flow make sense? Going back to our coordination options for Stripe, under what conditions should we flow to the next step? Always? Only on successful payment? How do we handle failures?

The downside of the Routing Slip pattern is that it's quite difficult to introduce logic into our workflow, handle failures, retries, etc. We've used it in the past successfully, and I even built an extension to NServiceBus for it, but it tends to fall down in our scenario, especially around Stripe, where I might need to do a refund. Additionally, it's not entirely clear if we publish our "Order Complete" message when SendGrid is down. Right now, it doesn't look good.

Saga

In the saga pattern, we have a series of actions and compensations, the canonical example being a travel booking. I'm booking together a flight, car, and hotel, and only if I can book all 3 do I call my travel booking "complete":

//vasters.com/archive/Sagas.html
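As a rough illustration of the shape (hypothetical method names, not from the original article), each step pairs an action with a compensation, and a failure part-way through unwinds whatever already succeeded:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class TravelBookingSaga {
    private class SagaStep {
        public Func<Task> Action { get; set; }
        public Func<Task> Compensate { get; set; }
    }

    public async Task BookTripAsync() {
        var steps = new List<SagaStep> {
            new SagaStep { Action = BookFlightAsync, Compensate = CancelFlightAsync },
            new SagaStep { Action = BookCarAsync,    Compensate = CancelCarAsync },
            new SagaStep { Action = BookHotelAsync,  Compensate = CancelHotelAsync },
        };

        var completed = new Stack<SagaStep>();
        try {
            foreach (var step in steps) {
                await step.Action();
                completed.Push(step);
            }
            // All three booked: the travel booking is "complete".
        }
        catch {
            // A later step failed: compensate the earlier ones in reverse order.
            while (completed.Count > 0)
                await completed.Pop().Compensate();
            throw;
        }
    }

    // Stand-ins for the real booking calls.
    private Task BookFlightAsync()   => Task.CompletedTask;
    private Task CancelFlightAsync() => Task.CompletedTask;
    private Task BookCarAsync()      => Task.CompletedTask;
    private Task CancelCarAsync()    => Task.CompletedTask;
    private Task BookHotelAsync()    => Task.CompletedTask;
    private Task CancelHotelAsync()  => Task.CompletedTask;
}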

Does a Saga make sense in my case? From our coordination examination, we found that only the Stripe resource had a capability of "Undo". I can't "Undo" a SendGrid call, nor can I "Undo" a message published.

For this reason, a Saga doesn't make much sense. Additionally, I don't really need to couple my process together like this. Sagas are great when I have an overall business transaction that I need to decompose into smaller, compensate-friendly transactions. That's clearly not what I have here.

Process Manager - Orchestration/Controller

Our third option is a process manager that acts as a controller, orchestrating a process/workflow from a central point:

Now, this still doesn't make much sense because we're coupling together several of our operations, making an assumption that our actions need to be coordinated in the first place.

So let's take a step back and look at our process again, and examine what actually needs to be coordinated with what!

Process Examination

So far we've looked at process patterns and tried to bolt them onto our steps. Let's flip that, and go back to our original flow. We said our process had 4 main parts:

  1. Save order
  2. Process payment
  3. Email customer
  4. Notify downstream

From a coupling perspective, we said that "Process Payment" must happen only after I save the order. We also said that "Email customer" must happen only if "Process Payment" was successful. Additionally, our "notify downstream" step must only happen if our order successfully processed payment.

Taking a step back, isn't the "email customer" step a form of notifying downstream systems that the order was created? Can we just make the email an additional consumer of the OrderCreatedEvent? I think so!

But we still have the issue of payment failure, so our process manager can handle those cases as well. And since we've already made payments asynchronous, we need some way to signal to the help team that an order is in a failed payment state.

With that in mind, our process will be a little of both, orchestration AND choreography:

We treat the email as just another subscriber of our event, and our process manager now is only really concerned about completing the order.
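As a rough sketch of that split (the handler shapes and message names below are hypothetical; the real NServiceBus implementation comes in the final post):

using System;
using System.Threading.Tasks;

public class OrderCreatedEvent { public Guid Id { get; set; } }
public class PaymentSucceeded  { public Guid OrderId { get; set; } }
public class PaymentFailed     { public Guid OrderId { get; set; } }

// Choreography: the confirmation email is just another subscriber of the event.
public class SendConfirmationEmailHandler {
    public Task Handle(OrderCreatedEvent message) {
        // Look up the order and send the "thank you for your order" email.
        return Task.CompletedTask;
    }
}

// Orchestration: the process manager only cares about completing the order.
public class OrderCompletionProcessManager {
    public Task Handle(PaymentSucceeded message) {
        // Mark the order complete and publish OrderComplete downstream.
        return Task.CompletedTask;
    }

    public Task Handle(PaymentFailed message) {
        // Flag the order so the help team can follow up with the customer.
        return Task.CompletedTask;
    }
}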

In our final post, we'll look at implementing our process manager using NServiceBus.

Categories: Blogs

Transferring testing skills

Agile Testing with Lisa Crispin - Fri, 02/17/2017 - 22:46
Transferring Agile Testing Skills webinar

Recently I did a short webinar in the Agile Testing Days "AgileTD Mondays" series on "Transferring Testing Skills to the Whole Team". It's about 20 minutes long; I hope it will inspire you to spread some testing love around your agile team!

How do you help non-testers on the team learn testing skills? I’d love to hear more ideas.

There are several other excellent webinars in the series – please check them all out!

The post Transferring testing skills appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

The Test Case Is Not The Test

DevelopSense Blog - Thu, 02/16/2017 - 20:17
A test case is not a test. A recipe is not cooking. An itinerary is not a trip. A score is not a musical performance, and a file of PowerPoint slides is not a conference talk. All of the former things are artifacts; explicit representations. The latter things are human performances. When the former things […]
Categories: Blogs

Security Testing Toolbox

Testing TV - Wed, 02/15/2017 - 14:54
Kali, Veil, Metasploit, BeEF. All tools in an arsenal that exist to break through security barriers of software. This talk introduces the tools available and shows how they are used to get through your defense. It is more a massive demo than a talk and is an exploration of the tools and what they do. […]
Categories: Blogs

Refactoring Towards Resilience: Evaluating Coupling

Jimmy Bogard - Tue, 02/14/2017 - 23:25

Other posts in this series:

So far, we've been looking at our options on how to coordinate various services, using Hohpe as our guide:

  • Ignore
  • Retry
  • Undo
  • Coordinate

These options, valid as they are, make an assumption that we need to coordinate our actions at a single point in time. One thing we haven't looked at is breaking the coupling of our actions, which greatly widens our ability to deal with failures. The types of coupling I encounter in distributed systems (though not limited to them) include:

  • Behavioral
  • Temporal
  • Platform
  • Location
  • Process

In our code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Of the coupling types we see here, the biggest offender is Temporal coupling. As part of placing the order for the customer's cart, we also tie together several other actions at the same time. But do we really need to? Let's look at the three external services we interact with and see if we really need to have these actions happen immediately.

Stripe Temporal Coupling

First up is our call to Stripe. This is a bit of a difficult decision - when the customer places their order, are we expected to process their payment immediately?

This is a tough question, and one that really needs to be answered by the business. When I worked on the cart/checkout team of a Fortune 50 company, we never charged the customer immediately. In fact, we did very little validation beyond basic required fields. Why? Because if anything failed validation, it increased the chance that the customer would abandon the checkout process (we called this the fallout rate). For our team, it made far more sense to process payments offline, and if anything went wrong, we'd just call the customer.

We don't necessarily have to have a black-and-white choice here, either. We could try the payment, and if it fails, mark the order as needing manual processing:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    try {
        var payment = await stripeService.PostPaymentAsync(order);
    } catch (Exception e) {
        Logger.Exception(e, $"Payment failed for order {order.Id}");
        order.MarkAsPaymentFailed();
    }
    if (!order.PaymentFailed) {
        await sendGridService.SendPaymentSuccessEmailAsync(order);
    }
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

There may also be business reasons why we can't process payment immediately. With orders that ship physical goods, we don't charge the customer until we've procured the product and it's ready to ship. Otherwise we might have to deal with refunds if we can't procure the product.

There are also valid business reasons why we'd want to process payments immediately, especially if what you're purchasing is digital (like a software license) or if what you're purchasing is a finite resource, like movie tickets. It's still not a hard and fast rule, we can always build business rules around the boundaries (treat them as reservations, and confirm when payment is complete).

Regardless of which direction we go, it's imperative we involve the business in our discussions. We don't have to make things technical, but each option involves a tradeoff that directly affects the business. For our purposes, let's assume we want to process payments offline, and just record the information (naturally doing whatever we need to secure data at rest).

SendGrid Temporal Coupling

Our question now is, when we place an order, do we need to send the confirmation email immediately? Or sometime later?

From the user's perspective, email is already an asynchronous messaging system, so there's already an expectation that the email won't arrive synchronously. We do expect the email to arrive "soon", but typically, there's some sort of delay. How much delay can we handle? That again depends on the transaction, but within a minute or two is my own personal expectation. I've had situations where we intentionally delay the email, as to not inundate the customer with emails.

We also need to consider what the email needs to be in response to. Does the email get sent as a result of successfully placing an order? Or posting the payment? If it's for posting the payment, we might be able to use Stripe Webhooks to send emails on successful payments. In our case, however, we really want to send the email on successful order placement not order payment.

Again, this is a business decision about exactly when our email goes out (and how many, for what trigger). The wording of the message depends on the condition, as we might have a message for "thank you for your order" and "there was a problem with your payment".

But regardless, we can decouple our email from our button click.

RabbitMQ Coupling

RabbitMQ is a bit of a more difficult question to answer. Typically, I generally assume that my broker is up. Just the fact that I'm using messaging here means that I'm temporally decoupled from recipients of the message. And since I'm using an event, I'm behaviorally decoupled from consumers.

However, not all is well and good in our world, because if my database transaction fails, I can't un-send my message. In an on-premise world with high availability, I might opt for 2PC and coordinate, but we've already seen that RabbitMQ doesn't support 2PC. And if I ever go to the cloud, there are all sorts of reasons why I wouldn't want to coordinate in the cloud.

If we can't coordinate, what then? It turns out there's already a well-established pattern for this - the outbox pattern.

In this pattern, instead of sending our messages immediately, we simply record our messages in the same database as our business data, in an "outbox" table:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    dbContext.SaveMessage(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Internally, we'll serialize our message into a simple outbox table:

public class Message {  
    public Guid Id { get; set; }
    public string Destination { get; set; }
    public byte[] Body { get; set; }
}

We'll serialize our message and store it in our outbox, along with the destination. From there, we'll create some offline process that polls our table, sends our message, and deletes the original.

while (true) {
    var unsentMessages = await dbContext.Messages.ToListAsync();
    foreach (var msg in unsentMessages) {
        // Send each message, then remove it from the outbox so it isn't re-sent.
        await bus.SendAsync(msg);
        dbContext.Messages.Remove(msg);
    }
    // Persist the removals before the next polling pass.
    await dbContext.SaveChangesAsync();

    // Poll rather than spin.
    await Task.Delay(TimeSpan.FromSeconds(30));
}

With an outbox in place, we'd still want to de-duplicate our messages, or at the very least, ensure our handlers are idempotent. And if we're using NServiceBus, we can quite simply turn on Outbox as a feature.
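Assuming an NServiceBus 6-style endpoint configuration (the endpoint name below is made up; check the documentation for the version in use), turning the feature on is roughly a one-liner:

var endpointConfiguration = new EndpointConfiguration("Checkout");
// Outbox: de-duplicate incoming messages and hold outgoing ones until the
// local database transaction commits, approximating the 2PC behavior above.
endpointConfiguration.EnableOutbox();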

The outbox pattern lets us nearly mimic the 2PC coordination of messages and our database, and since this message is a critical one to send, it warrants serious consideration.

With all these options considered, we're now able to design a solution that properly decouples our different distributed resources while still satisfying the business goals at hand. Our next post - workflow options!

Categories: Blogs

People are Strange

Hiccupps - James Thomas - Tue, 02/14/2017 - 19:01

Managers. They're the light in the fridge: when the door is open their value can be seen. But when the door is closed ... well, who knows?

Johanna Rothman and Esther Derby reckon they have a good idea. And they aim to show, in the form of an extended story following one manager as he takes over an existing team with problems, the kinds of things that managers can do and do do and - if they're after a decent default starting point - should consider doing.

What their book, Behind Closed Doors, isn't - and doesn't claim to be - is the answer to every management problem. The cast of characters in the story represent some of the kinds of personalities you'll find yourself dealing with as a manager, but the depth of the scenarios covered is limited, the set of outcomes covered is generally positive, and the timescales covered are reasonably short.

Michael Lopp, in Managing Humans, implores managers to remember that their staff are chaotic beautiful snowflakes. Unique. Individual. Special. Jim Morrison just says, simply, brusquely, that people are strange. (And don't forget that managers are people, despite evidence to the contrary.)

Either way, it's on the manager to care to look and listen carefully and find ways to help those they manage to be the best that they can be in ways that suit them. Management books necessarily use archetypes as a practical way to give suggestions and share experiences, but those new to management especially should be wary of misinterpreting the stories as a how-to guide to be naively applied without consideration of the context.

What Behind Closed Doors also isn't, unlike so much writing on management, is dry, or full of heroistic aphorisms, or preachy. In fact, I found it an extremely easy read for several reasons: it's well-written; it's short; the story format helps the reader along; following a consistent story gives context to situations as the book progresses; sidebars and an appendix keep detail aside for later consumption; I'm familiar with work by both of these authors already; I'm a fan of Jerry Weinberg's writing on management and interpersonal relationships and this book owes much to his insights (he wrote the foreword here); I agree with much of the advice.

What I found myself wanting - and I'd buy Rothman and Derby's version of this like a shot - is more detailed versions of some of the dialogues in this book with commentary in the form of the internal monologues of the participants. I'd like to hear Sam, the manager, thinking through the options he has when trying to help Kevin to learn to delegate and understand how he chose the approach that he took. I'd like to hear Kevin trying to work out what he thinks Sam's motives are and perhaps rejecting some of Sam's premises. I'd also like to see a deeper focus on a specific relationship over an extended period of time, with failures, and techniques for rebuilding trust in the face of them.

But while I wait for that, here's a few quotes that I enjoyed, loosely grouped.

On the contexts in which management takes place:
Generally speaking, you can observe only the public behaviors of managers and how your managers interact with you.

Sometimes people who have never been in a management role believe that managers can simply tell other people what to do and that's that.

The higher you are in the organization, the more other people magnify your reactions.

Because managers amplify the work of others, the human costs of bad management can be even higher than the economic costs.

Chaos hides problems—both with people and projects. When chaos recedes, problems emerge.

The moral of this fable is: Focus on the funded work.

On making a technical contribution as a manager:
Some first-level managers still do some technical work, but they cannot assign themselves to the critical path.

It’s easier to know when technical work is complete than to know when management work is complete.

The more people you have in your group, the harder it is to make a technical contribution.

The payoff for delegation isn’t always immediate.

It takes courage to delegate.

On coaching:
You always have the option not to coach. You can choose to give your team member feedback (information about the past), without providing advice on options for future behavior.

Coaching doesn’t mean you rush in to solve the problem. Coaching helps the other person see more options and choose from them.

Coaching helps another person develop new capability with support.

And it goes without saying, but if you offer help, you need to follow through and provide the help requested, or people will be disinclined to ask again.

Helping someone think through the implications is the meat of coaching.

On team-building:
Jelled teams don’t happen by accident; teams jell when someone pays attention to building trust and commitment

Over time they build trust by exchanging and honoring commitments to each other.

Evaluations are different from feedback.

A one-on-one meeting is a great place to give appreciations.

[people] care whether the sincere appreciation is public or private ... It’s always appropriate to give appreciation for their contribution in a private meeting.

Each person on your team is unique. Some will need feedback on personal behaviors. Some will need help defining career development goals. Some will need coaching on how to influence across the organization.

Make sure the career development plans are integrated into the person’s day-to-day work. Otherwise, career development won’t happen.

"Career development" that happens only once a year is a sham.

On problem solving:
Our rule of thumb is to generate at least three reasonable options for solving any problem.

Even if you do choose the first option, you’ll understand the issue better after considering several options.

If you’re in a position to know a problem exists, consider this guideline for problem solving: the people who perform the work need to be part of the solution.

We often assume that deadlines are immutable, that a process is unchangeable, or that we have to solve something alone. Use thought experiments to remove artificial constraints,

It’s tempting to stop with the first reasonable option that pops into your head. But with any messy problem, generating multiple options leads to a richer understanding of the problem and potential solutions

Before you jump to solutions, collect some data. Data collection doesn’t have to be formal. Look for quantitative and qualitative data.

If you hear yourself saying, “We’ll just do blah, blah, blah,” Stop! “Just” is a keyword that lets you know it just won’t work.

When the root cause points to the original issue, it's likely a system problem.

On managing:
Some people think management is all about the people, and some people think management is all about the tasks. But great management is about leading and developing people and managing tasks.

When managers are self-aware, they can respond to events rather than react in emotional outbursts.

And consider how your language affects your perspective and your ability to do your job.

Spending time with people is management work.

Part of being good at [Managing By Walking Around and Listening] is cultivating a curious mind, always observing, and questioning the meaning of what you see.

Great managers actively learn the craft of management.

Image: http://www.45cat.com/record/j45762
Categories: Blogs

Open Letter about Agile Testing Days cancelling US conference

Chris McMahon's Blog - Tue, 02/14/2017 - 03:21
I sent the following via the email contact pages of Senator John McCain, Senator Jeff Flake, and Representative Martha McSally of Arizona, in regard to Agile Testing Days cancelling their US conference on 13 February.



Agile Testing Days is a top-tier tech conference about software testing and Quality Assurance in Europe. They had planned their first conference in the USA to be held in Boston MA, with a speaker lineup from around the world. They cancelled the entire conference on 13 February because of the "current political situation" in the USA. Here is their statement: https://agiletestingdays.us/

Although I was not scheduled to attend or to speak at this particular conference, it is conferences such as Agile Testing Days where the best ideas in my field are presented, and it is from conferences such as Agile Testing Days that many of my peers get those ideas, and I rely on conversations from those who do speak and attend in order to stay current in my field.

As a resident of Arizona, cancelling such conferences affects me directly. I have enough expertise and skill to live anywhere I choose. I choose to live in Arizona, but my work absolutely depends on the free flow of people and information across national and state borders.

It is shameful that such a prestigious and respected multi-national software organization finds it necessary to cancel their first ever conference in the USA because of the outrageous policies of the current administration. I urge you to take measures to make organizations such as Agile Testing Days and their attendees and speakers feel safe and welcome, as they should be.

Chris McMahon
Senior Member of Technical Staff, Quality Assurance
Salesforce.org
Tucson, AZ
Categories: Blogs

Discomfort as a Tool for Change

Google Testing Blog - Mon, 02/13/2017 - 18:53
by Dave Gladfelter (SETI, Google Drive)
Introduction
The SETI (Software Engineer, Tools and Infrastructure) role at Google is a strange one in that there's no obvious reason why it should exist. The SWEs (Software Engineers) on a project understand its problems best, and understanding a problem is most of the way to fixing it. How can SETIs bring unique value to a project when SWEs have more on-the-ground experience with their impediments?

The answer is scope. A SWE is rewarded for being an expert in their particular area and domain and is highly motivated to make optimizations to their carved-out space. SETIs (and Test Engineers and EngProd in general) identify and solve product-wide problems.

Product-wide problems frequently arise because local optimizations don't necessarily add up to product-wide optimizations. The reason may be the limits of attention, blind spots, or mis-aligned incentives, but a group of SWEs each optimizing for their own sub-projects will not achieve product-wide maxima.

Often SETIs and Test Engineers (TEs) know what behavior they'd like to see, such as more integration tests. We may even have management's ear and convince them to mandate such tests. However, in the absence of incentives, it's unlikely that the decisions SWEs make in response to such mandates will add up to the behavior we desire. Mandates around methods/practices are often ineffective. For example, a mandate of documentation for each public method on an interface often results in "method foo does foo."

The best way to create product-wide efficiencies is to change the way the team or process works in ways that will (initially) be uncomfortable for the engineering team, but that pay dividends that can't be achieved any other way. SETIs and TEs must work to identify the blind spots and negative interactions between engineering teams and change the environment in ways that align engineering teams' incentives. When properly incentivized, SWEs will make optimal decisions enhanced by product-wide vision rather than micro-management.
Common Product-Wide Problems

Hard-to-use APIs
One common example of local optimizations resulting in cross-team de-optimization is documentation and ease-of-use of internal APIs. The team that implements an internal API is not rewarded for making it easy to use except in the most oblique ways. Clients are compelled to use the internal APIs provided to them, so the API owner has a monopoly and will set the price of using it at "you must read all the code and debug it yourself" in the absence of incentives or (rare) heroes.
Big, slow releases
Another example is large and slow releases. Without EngProd help or external pressure, teams will gravitate to the slowest, biggest release possible.

This makes sense from the position of any individual SWE: releases are painful, you have to ensure that there are no UI and API regressions, watch traffic and error rates for some time, and re-learn and use tools and processes that are complex and specific to releases.

Multiple teams will naturally gravitate to having one big release so that all of these costs can be bundled into one operation for "efficiency." The result is that engineers don't get feedback on features for weeks and versioning of APIs and data stores is ignored (since all the parts of the system are bundled together into one big release). This greatly slows down developer and feature velocity and greatly increases risks of cascading failures when the release fails.
How EngProd fixes product-wide problems
SETIs can nibble around the edges of these kinds of problems by writing tools and automation. TEs can create easy-to-use test environments that facilitate isolating and debugging faults in integration and ambiguities in APIs. We can use fancy technologies to sample live traffic and ensure that new versions of systems behave the same as previous versions. We can review design docs to ensure that they have an appropriate test plan. Often these actions do have real value. However, these are not the best way to align incentives to create a product-wide solution. Facilitating engineering teams' fruitful collaboration (and dis-incentivizing negative interactions) gives EngProd a multiplier that is hard to achieve with only tooling and automation.

Heroes are few and far between so we must turn to incentives, which is where discomfort comes in. Continuity is comfortable and change is painful. EngProd looks at how to change the problem so that teams are incentivized to work together fruitfully and disincentivized (discomforted) to pursue local optimizations exclusively.

So how does EngProd align incentives? Certainly there is a place for optimizing for optimal behaviors, such as easy-to-use integration environments. However, incentivizing optimal behaviors via negative feedback should not be overlooked. Each problem is different, so let's look at how to address the two examples above:
Incentivizing easy-to-use APIs
Engineers will make the things they're incentivized to make. For APIs, make teams incentivized to provide integration help in the form of fakes. EngProd works with team leads to ensure there are explicit objectives to provide Fakes for their APIs as part of the rollout.

Fakes are as-simple-as-possible implementations of a service that can still be used to do pre-submit testing of client interactions with the system. They don't replace integration tests, but they reduce the likelihood of finding errors in subsequent integration test runs by an order of magnitude.
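For illustration only, a fake for a hypothetical inventory service might be as small as this (the interface and names are made up, not a real API):

using System;
using System.Collections.Generic;

public interface IInventoryService {
    void Reserve(string sku, int quantity);
    int AvailableQuantity(string sku);
}

// As-simple-as-possible in-memory stand-in for the real service, good enough
// for clients to exercise their integration logic in presubmit tests.
public class FakeInventoryService : IInventoryService {
    private readonly Dictionary<string, int> _stock = new Dictionary<string, int>();

    // Test setup hook: seed the fake with whatever stock the scenario needs.
    public void AddStock(string sku, int quantity) {
        _stock[sku] = AvailableQuantity(sku) + quantity;
    }

    public void Reserve(string sku, int quantity) {
        if (AvailableQuantity(sku) < quantity)
            throw new InvalidOperationException("Insufficient stock for " + sku);
        _stock[sku] -= quantity;
    }

    public int AvailableQuantity(string sku) {
        int quantity;
        return _stock.TryGetValue(sku, out quantity) ? quantity : 0;
    }
}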
Furthermore, have some subset of the same client-owned and server-owned tests run against the fakes (for quick presubmit testing) as well as the real implementation (for continuous integration testing) and work with management to make it the responsibility of the Fake owner to debug any discrepancies for either the client- or the server-owned tests.

This reverses the pain! API owners, who are in a position to make APIs better, are now the ones experiencing negative incentives when APIs are not easy to use. Previously, when clients felt the pain, they had no recourse other than to file easily-ignored bugs ("Closed: working as intended") or contribute changes to the API owners' codebase, hurting their own performance with distractions.

This will incentivize API owners to design APIs to be as simple as possible with as few side-effects as possible, and to provide high-quality fakes that make it easy for clients to integrate with the API. Some teams will certainly not like this change at first, but I have seen API teams come to the realization that this is the best choice for the larger effort and implement these practices despite their cost to the team in the short run.

Helping management set engineering team objectives may not seem like a typical SETI responsibility, but although management is responsible for setting performance incentives and objectives, they are not well-positioned to understand how the low-level decisions of different teams create harmful interactions and lower cross-team performance, so they need SETI and TE guidance to create an environment that encourages optimal behaviors.
Fast, small releases
Being forced to release more frequently than feature deployment requires has many beneficial side effects that make release velocity a goal unto itself. SETIs and TEs faced with big, slow releases work with management to mandate a move to a set of smaller, more frequent releases. As release velocity is ratcheted up, negative behaviors such as too much manual testing or too much internal coupling become more painful, and many optimal behaviors are incentivized.
Less coupling between systems
When software is released together, it is easy to treat the seams between different components as implementation details. The resulting systems become so intertwined (coupled) that responsibilities between them are completely and randomly mixed and their interactions are too complex for any one person to understand. When two components are released separately and at different times, different versions of them must be compatible with one another. Engineers who were previously complacent about this fragility will become fearful of failed releases due to implicit contract changes. They will change their behavior in beneficial ways, such as defining the contract between components explicitly and creating regression testing for it. The result is a system composed of robust, self-contained, more easily understood components.
Better/more automated testing

Manual testing becomes more painful as release velocity is ramped up. This incentivizes automated regression, UI and performance tests, which make the team more agile and able to catch defects sooner and more cheaply.
Faster feedback

When incremental feature changes can be released to dogfood or other beta channels more frequently, user interaction designers and product managers get much faster feedback about which paths lead to better user engagement and experience than they do with big, slow releases where an entire feature is deployed at once. The result is a better product.
Conclusion

The SETIs and TEs optimize interactions between teams and create fixes for product-wide, cross-team problems in order to improve engineering productivity and velocity. There are many worthwhile projects that EngProd can do using broad knowledge of the system and expertise in refactoring, automation and testing, such as creating test fixtures that enable continuous integration testing or identifying and combining duplicative tests or tools.

That said, the biggest problem that EngProd is positioned to solve is to break the chain of local optimizations resulting in cross-team de-optimizations. To that end, discomfort is a tool that can incentivize engineers to find solutions that are optimal for the entire product. We should look for and advocate for these transformative changes.
Categories: Blogs

The Bug in Lessons Learned

Hiccupps - James Thomas - Fri, 02/10/2017 - 21:52

The Test team book club read Lessons Learned in Software Testing the other week. I couldn't find my copy at the time but Karo came across it today, on Rog's desk, and was delighted to tell me that she'd discovered a bug in it...
Categories: Blogs

Refactoring Towards Resilience: Evaluating RabbitMQ Options

Jimmy Bogard - Fri, 02/10/2017 - 19:50

Other posts in this series:

In the last post, we looked at dealing with an API in SendGrid that basically only allows at-most-once calls. We can't undo anything, and we can't retry anything. We're going to find some similar issues with RabbitMQ (although it's not much different than other messaging systems).

RabbitMQ, like every queuing system I can think of, offers a wide variety of reliability modes. In general, I try to make my message handlers idempotent, as it enables so many more options upstream. I also don't really trust anyone sending me messages, so anything I can do to ensure MY system stays consistent, despite what I might get sent, is in my best interest.

Looking back at our original code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

We can see that if anything fails after the "bus.Publish" line, we don't really know what happened to our message. Did it get sent? Did it not? It's hard to tell. Going back to the picture of our transaction model:

Transaction flow

And, as a reminder, the options we have to consider:

Coordination Options

Let's take a look at our options dealing with failures.

Ignore

Similar to our SendGrid solution, we could just ignore any failures with connecting to our broker:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    try {
        await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    } catch (Exception e) {
        Logger.Exception(e, $"Failed to send order created event for order {order.Id}");
    }
    return RedirectToAction("Success");
}

This approach would shield us from connectivity failures with RabbitMQ, but we'd still need some sort of process to detect these failures and retry those sends later on. One way to do this would be simply to flag our orders:

} catch (Exception e) {
    order.NeedsOrderCreatedEventRaised = true;
    Logger.Exception(e, $"Failed to send order created event for order {order.Id}");
}    
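For completeness, the detect-and-retry process over those flags might look roughly like the sketch below. It reuses the dbContext, bus, Logger and OrderCreatedEvent from the snippets above; the OrderCreatedEventResender name, the Orders set and the scheduling are my own assumptions rather than anything from this post.

public class OrderCreatedEventResender {
    // Assumes the same EF-style dbContext and bus abstraction as above.
    private readonly AppDbContext dbContext;
    private readonly IBus bus;

    public OrderCreatedEventResender(AppDbContext dbContext, IBus bus) {
        this.dbContext = dbContext;
        this.bus = bus;
    }

    // Run periodically (timer, scheduled job, hosted service - whatever fits).
    public async Task ResendPendingAsync() {
        var pending = dbContext.Orders
            .Where(o => o.NeedsOrderCreatedEventRaised)
            .ToList();

        foreach (var order in pending) {
            try {
                await bus.Publish(new OrderCreatedEvent { Id = order.Id });
                // Only clear the flag once the broker accepted the publish.
                order.NeedsOrderCreatedEventRaised = false;
            } catch (Exception e) {
                // Leave the flag set; we'll try again on the next sweep.
                Logger.Exception(e, $"Retry of order created event failed for order {order.Id}");
            }
        }

        await dbContext.SaveChangesAsync();
    }
}

Note that this can still publish the same event twice - if the save fails after a successful publish, the flag stays set - which is one more reason to lean on idempotent consumers, as discussed under Retry below.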

It's not a very elegant solution, as I'd have to create flags for every single kind of message I send. Additionally, it ignores the opposite failure: if the database transaction rolls back after the message has already been published, consumers could get events for things that didn't actually happen! There are other ways to fix this - but for now, let's cover our other options.

Retry

Retries are interesting in RabbitMQ because, although it's fairly easy to retry a message on my side, there's no guarantee that consumers can handle a message that arrives twice. However, in my applications I try as much as possible to make my message consumers idempotent. It makes life so much easier, and allows so many more options, if I can retry my message.

Since my original message includes the unique order ID, a natural correlation identifier, consumers can have an easy way of ensuring their operations are idempotent as well.

The mechanics of a retry could be similar to our above example - mark the order as needing a retry of the event to be raised at some later point in time, or retry in the same block, or include a resiliency layer on top of sending.
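On the consumer side, that idempotency can be as simple as remembering which order IDs have already been handled. Here is a rough sketch; the OrderCreatedEventHandler, the ProcessedMessages set and its MessageId key are illustrative assumptions, not part of the original design.

public class OrderCreatedEventHandler {
    // Assumes an EF-style context with a ProcessedMessages set whose
    // primary key is MessageId; ProcessedMessage is a tiny entity holding
    // just that string.
    private readonly AppDbContext dbContext;

    public OrderCreatedEventHandler(AppDbContext dbContext) {
        this.dbContext = dbContext;
    }

    public async Task Handle(OrderCreatedEvent message) {
        // The order ID doubles as a natural correlation/deduplication key.
        var key = message.Id.ToString();

        if (await dbContext.ProcessedMessages.FindAsync(key) != null) {
            // Duplicate delivery (e.g. a publisher retry) - safe to ignore.
            return;
        }

        // ... do the actual work for this order here ...

        // Persist the marker alongside the work's own changes so a
        // redelivered copy of the same event becomes a no-op.
        dbContext.ProcessedMessages.Add(new ProcessedMessage { MessageId = key });
        await dbContext.SaveChangesAsync();
    }
}

With something like that in place, the publisher is free to retry aggressively without worrying about double side-effects downstream.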

Undo

RabbitMQ doesn't support any sort of "undo" natively, so if we wanted to do this ourselves, we'd have to have some sort of compensating event published. Perhaps an event, "OrderNotActuallyCreatedJustKiddingAboutBefore"?

Perhaps not.

Coordinate

RabbitMQ does not natively support any sort of two-phase commit, so coordination is out.

Next steps

Now that we've examined all of our options around the various services our application integrates with, I want to evaluate each service in terms of the coupling we have today, and determine if we truly need that level of coupling.

Categories: Blogs

Webinar Slides and Recording - Security Testing: The Missing Link in Information Security

Thanks to everyone who participated in today's webinar. I really enjoyed the time together, even if I did experience a complete system failure and restart in the latter part of the webinar. Just to let you know how the rest of the day went: I was checking out this evening at Wal-mart (not self-checkout) and, after I scanned my debit card, the PIN pad displayed the message "System shutdown in progress". I don't know what it is about me, but I swear, systems fail in my presence. It has been that way for over 20 years now! Oh, the joys of being a tester!

OK, here we go...

Here is the recording link. I have edited the video so that all slides are shown and discussed.

Here is a PDF with the slides in 2-up format.

Here is a PDF with the slides in full color format.

I hope you find the information helpful. Feel free to share it, and I hope it helps you build awareness of the need for security testing in your organization.

Thanks!

Randy

Categories: Blogs

Testing maturity in an agile/CDT environment

One day during a team meeting at Joep's previous job at a bank, the Team Manager of Testing listed a number of topics his testers could work on in the coming months. One of those topics was "testing maturity". It was on the list not because this manager was a fan of maturity models, but because the other team managers (Business Analysis and Development) had produced one for their own teams, and higher management wanted one for testing as well. And although Joep saw little value in a classic five-tiered maturity model either, he was intrigued by the question: what can you do with respect to maturity models that is of value?

Joep asked Huib to help him think of a way to create a valuable, context-driven way to work on maturity. Since Huib had been working for the same bank, they met and discussed the possibilities. Soon they found out that the criteria should be variable since maturity depends on context. They started experimenting with stack ranking and quite soon they had the first version of their “maturity model”.

After a first try-out at the bank where Joep worked, we let it rest for a while. After a couple of months we wrote this article. It is a first version and it needs to be refined and polished. The heuristic lists are probably too long and need to be trimmed. We think of this model as a card game that can be played with teams.

Currently we are also working on an agile version of this model: a card game that agile teams can use to assess their "maturity" and find possible areas for improvement. More about that later.

We are curious about your thoughts. What do you think? Feel free to try out the game, and please share your experiences with us.

Article (pdf) – Card game (pdf)

 

Categories: Blogs

Y2K

Hiccupps - James Thomas - Sun, 02/05/2017 - 06:36

What Really Happened in Y2K? That's the question Professor Martyn Thomas is asking in a forthcoming lecture and in a recent Chips With Everything podcast, from which I picked a few quotes that I particularly enjoyed.

On why choosing to use two digits for years was arguably a reasonable choice, in its time and context:
The problem arose originally because when most of the systems were being programmed, before the 1990s, computer power was extremely expensive and storage was extremely expensive. It's quite hard to recall that back in 1960 and 1970 a computer would occupy a room the size of a football pitch and be run 24 hours a day and still only support a single organisation. It was because those things were so expensive, because processing was expensive and in particular because storage was so expensive, that full dates weren't stored. Only the year digits were stored in the data.

On the lack of appreciation that, despite the eventual understated outcome, Y2K exposed major issues:

I regard it as a signal event. One of these near-misses that it's very important that you learn from, and I don't think we've learned from it yet. I don't think we've taken the right lessons out of the year 2000 problem. And all the people who say it was all a myth prevent those lessons being learned.

On what bothers him today:

I'm [worried about] cyber security. I think that is a threat that's not yet being addressed strategically. We have to fix it at the root, which is by making the software far less vulnerable to cyber attack ... Driverless cars scare the hell out of me, viewed through the lens of cyber security.

We seem to feel that the right solution to the cyber security problem is to train as many people as we can to really understand how to look for cyber security vulnerabilities and then just send them out into companies ... without recognising that all we're doing is training a bunch of people to find all the loopholes in the systems and then encourage companies to let them in and discover all their secrets.

Similarly, training lots of school students to write bad software, which is essentially what we're doing by encouraging app development in schools, is just increasing the mountain of bad software in the world, which is a problem. It's not the solution.

On building software:

People don't approach building software with the same degree of rigour that engineers approach building other artefacts that are equally important. The consequence of that is that most software contains a lot of errors. And most software is not managed very well.

One of the big problems in the run-up to Y2K was that most major companies could not find the source code for their big systems, for their key business systems. And could not therefore recreate the software even in the form that it was currently running on their computers. The lack of professionalism around managing software development and software was revealed by Y2K ... but we still build software on the assumption that you can test it to show that it's fit for purpose.

On the frequency of errors in software:

A typical programmer makes a mistake in, if they're good, every 30 lines of program. If they're very, very good they make a mistake in every 100 lines. If they're typical it's in about 10 lines of code. And you don't find all of those by testing.

On his prescription:

The people who make the money out of selling us computer systems don't carry the cost of those systems failing. We could fix that. We could say that in a decade's time - to give the industry a chance to shape up - we were going to introduce strict liability in the way that we have strict liability in the safety of children's toys, for example.

Image: https://flic.kr/p/7wbBSu
Categories: Blogs