
Blogs

Works on my machine

The Social Tester - 18 hours 17 min ago
I’m proud to announce a new range of merchandise from The Social Tester. I’m promoting my new online store at Zazzle. My first design, available in two flavors, is the classic “Works on my machine”, with the alternative “Doesn’t work on my machine”. Each month I’m hoping to put a new design online. The service […]
Categories: Blogs

Even better – communicating while drawing!

Agile Testing with Lisa Crispin - Thu, 09/18/2014 - 22:39
Drawings explaining the business rules for a feature

In my previous post I wrote about a great experience collaborating with teammates on a difficult new feature. The conversations around this continued this week when we were all in the office. Many of our remaining questions were answered when the programmers walked us through a whiteboard drawing of how the new algorithm should work. As we talked, we updated and improved the drawings. We took pictures to attach to the pertinent story.

Not only is a picture worth a thousand words, but talking with a marker in hand and a whiteboard nearby (or their virtual equivalents) greatly enhances communication.

Categories: Blogs

Container Usage Guidelines

Jimmy Bogard - Wed, 09/17/2014 - 21:25

Over the years, I’ve used and abused IoC containers. While the different tools have come and gone, I’ve settled on a set of guidelines on using containers effectively. As a big fan of the Framework Design Guidelines book and its style of “DO/CONSIDER/AVOID/DON’T”, I tried to capture what has made me successful with containers over the years in a series of guidelines below.

Container configuration

Container configuration typically occurs once at the beginning of the lifecycle of an AppDomain: creating an instance of a container as the composition root of the application, and configuring any framework-specific service locators. StructureMap combines scanning for convention-based registration with Registries for component-specific configuration.

X AVOID scanning an assembly more than once.

Scanning is somewhat expensive, as it passes each type in an assembly through each convention. A typical use of scanning is to target one or more assemblies, find all custom Registries, and apply conventions, such as generics rules, common naming conventions (IFoo to Foo) and custom conventions. A typical root configuration would be:

var container = new Container(cfg =>
    cfg.Scan(scan => {
        scan.AssemblyContainingType<IMediator>();
        scan.AssemblyContainingType<UiRegistry>();
 
        scan.AddAllTypesOf(typeof(IValidator<>));
        scan.AddAllTypesOf(typeof(IRequestHandler<,>));
 
        scan.WithDefaultConventions();
        scan.LookForRegistries();
    }));

Component-specific configuration is then separated out into individual Registry objects rather than mixed in with scanning. Although it is possible to perform both scanning and component configuration in one step, keeping component-specific registration in individual Registries provides a better separation of conventions and configuration.

√ DO separate configuration concerning different components or concerns into different Registry classes.

Individual Registry classes contain component-specific registration. Prefer smaller, targeted Registries, organized around function, scope, component, etc. Organizing all container configuration for a single 3rd-party component into one Registry makes it easy to view and modify all configuration for that component:

public class NHibernateRegistry : Registry {
    public NHibernateRegistry() {
        For<Configuration>().Singleton().Use(c => new ConfigurationFactory().CreateConfiguration());
        For<ISessionFactory>().Singleton().Use(c => c.GetInstance<Configuration>().BuildSessionFactory());
 
        For<ISession>().Use(c => {
            var sessionFactory = c.GetInstance<ISessionFactory>();
            var orgInterceptor = new OrganizationInterceptor(c.GetInstance<IUserContext>());
            return sessionFactory.OpenSession(orgInterceptor);
        });
    }
}

X DO NOT use the static API for configuring or resolving.

Although StructureMap exposes a static API in the ObjectFactory class, it is considered obsolete. If a static instance of a composition root is needed for 3rd-party libraries, create a static instance of the composition root Container in application code.

√ DO use the instance-based API for configuring.

Instead of using ObjectFactory.Initialize and exposing ObjectFactory.Instance, create a Container instance directly. The consuming application is responsible for determining the lifecycle and configuration timing, and exposing container creation/configuration as an explicit function lets each consuming runtime decide these for itself (for example, a web application vs. integration tests).
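
A minimal sketch of this, reusing the scanning configuration from above (the static wrapper exists only for 3rd-party libraries that demand a static instance):

public static class CompositionRoot {
    private static readonly Lazy<IContainer> _container = new Lazy<IContainer>(Build);

    public static IContainer Container { get { return _container.Value; } }

    private static IContainer Build() {
        return new Container(cfg => cfg.Scan(scan => {
            scan.AssemblyContainingType<IMediator>();
            scan.WithDefaultConventions();
            scan.LookForRegistries();
        }));
    }
}

A web application can reference CompositionRoot.Container at startup, while integration tests can call into the same configuration on their own schedule.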

X DO NOT create a separate project solely for dependency resolution and configuration.

Container configuration belongs in applications requiring those dependencies. Avoid convoluted project reference hierarchies (i.e., a “DependencyResolution” project). Instead, organize container configuration inside the projects needing them, and defer additional project creation until multiple deployed applications need shared, common configuration.

√ DO include a Registry in each assembly that needs dependencies configured.

In the case where multiple deployed applications share a common project, include inside that project container configuration for components specific to that project. If the shared project requires convention scanning, then a single Registry local to that project should perform the scanning of itself and any dependent assemblies.
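
A sketch of such a local Registry (the name is hypothetical):

public class SharedKernelRegistry : Registry {
    public SharedKernelRegistry() {
        Scan(scan => {
            // Scan only this shared assembly; consuming applications pull in
            // this Registry rather than re-scanning the assembly themselves.
            scan.TheCallingAssembly();
            scan.WithDefaultConventions();
        });
    }
}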

X AVOID loading assemblies by name to configure.

Scanning allows adding assemblies by name, e.g. scan.Assembly(“MyAssembly”). Since assembly names can change, reference a specific type in the assembly instead (scan.AssemblyContainingType<T>()).
Lifecycle configuration

Most containers allow defining the lifecycle of components, and StructureMap is no exception. Lifecycles determine how StructureMap manages instances of components. By default, instances for a single request are shared. Ideally, only singleton instances and per-request instances should be needed, but there are cases where a custom lifecycle is necessary, such as scoping a component to a given HTTP request (HttpContext).

√ DO use the container to configure component lifecycle.

Avoid creating custom factories or builder methods for component lifecycles. Your custom factory for building a singleton component is probably broken, and lifecycles in containers have undergone extensive testing and usage over many years. Additionally, building factories solely for controlling lifecycles leaks implementation and environment concerns to services consuming lifecycle-controlled components. In the case where instantiation needs to be deferred or lifecycle needs to be explicitly managed (for example, instantiating in a using block), depending on a Func<IService> or an abstract factory is appropriate.

√ CONSIDER using child containers for per-request instances instead of HttpContext or similar scopes.

Child/nested containers inherit configuration from a root container, and many modern application frameworks include the concept of creating scopes for requests. Web API in particular creates a dependency scope for each request. Instead of using a lifecycle, individual components can be configured for an individual instance of a child container:

public IDependencyScope BeginScope() {
    IContainer child = this.Container.GetNestedContainer();
    var session = new ApiContext(child.GetInstance<IDomainEventDispatcher>());
    var resolver = new StructureMapDependencyResolver(child);
    var provider = new ServiceLocatorProvider(() => resolver);
    child.Configure(cfg =>
    {
        cfg.For<DbContext>().Use(session);
        cfg.For<ApiContext>().Use(session);
        cfg.For<ServiceLocatorProvider>().Use(provider);
    });
             
    return resolver;
}

Since components configured for a child container are transient for that container, child containers provide a mechanism to create explicit lifecycle scopes configured for that one child container instance. Common applications include creating child containers per integration test, MVVM command handler, web request etc.

√ DO dispose of child containers.

Containers expose a Dispose method, so if the underlying service locator extensions do not dispose directly, dispose of the container yourself. When disposed, a container calls Dispose on any non-singleton component that implements IDisposable. This ensures that any resources consumed by components are released properly.
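
For example, a per-operation scope can be as simple as a using block (the request and handler types here are hypothetical):

using (IContainer child = container.GetNestedContainer()) {
    var handler = child.GetInstance<IRequestHandler<PlaceOrder, OrderResult>>();
    handler.Handle(new PlaceOrder());
    // On leaving the block, Dispose is called on any IDisposable,
    // non-singleton components the child container built.
}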
Component design and naming

Much of the negativity around DI containers arises from their encapsulation of building object graphs. A large, complicated object graph is resolved with a single line of code, hiding potentially dozens of disparate underlying services. Common among those new to Domain-Driven Design is the habit of creating interfaces for every small behavior, leading to overly complex designs. These design smells are easy to spot without a container, since building complex object graphs by hand is tedious. DI containers hide this pain, so it is up to the developer to recognize these design smells up front, or avoid them entirely.

X AVOID deeply nested object graphs.

Large object graphs are difficult to understand, but easy to create with DI containers. Instead of a strict top-down design, identify cross-cutting concerns and build generic abstractions around them. Procedural code is perfectly acceptable, and many design patterns and refactoring techniques exist to address complicated procedural code. The behavioral design patterns, combined with refactorings that deal with long or complicated code, can be especially helpful. Starting with the Transaction Script pattern keeps the number of structures low until the code exhibits enough design smells to warrant refactoring.

√ CONSIDER building generic abstractions around concepts, such as IRequestHandler<T>, IValidator<T>.

When designs do become unwieldy, breaking down components into multiple services often leads to service-itis, where a system contains numerous services, each used in only one context or execution path. Instead, behavioral patterns such as Mediator, Command, Chain of Responsibility and Strategy are especially helpful for creating abstractions around concepts. Common concepts include:

  • Queries
  • Commands
  • Validators
  • Notifications
  • Model binders
  • Filters
  • Search providers
  • PDF document generators
  • REST document readers/writers

Each of these patterns begins with a common interface:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse> {
    TResponse Handle(TRequest request);
}
 
public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}
 
public interface ISearcher {
    bool IsMatch(string query);
    IEnumerable<Person> Search(string query);
}

Registration for these components involves adding all implementations of an interface, and code using these components requests an instance based on a generic parameter, or all instances in the case of the Chain of Responsibility pattern.

One exception to this rule is for third-party components and external, volatile dependencies.

√ CONSIDER encapsulating 3rd-party libraries behind adapters or facades.

While using a 3rd-party dependency does not necessitate building an abstraction for that component, if the component is difficult or impossible to fake/mock in a test, it is appropriate to create a facade around it. Web services, email, queues, and anything else that touches the file system or network are prime targets for abstraction.
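
As a hedged sketch, a narrow facade over the file system (names are illustrative) keeps the volatile dependency behind a fakeable seam:

public interface IFileStore {
    bool Exists(string path);
    string ReadAllText(string path);
    void WriteAllText(string path, string contents);
}

public class DiskFileStore : IFileStore {
    // Thin pass-through to System.IO; tests substitute an in-memory IFileStore.
    public bool Exists(string path) { return File.Exists(path); }
    public string ReadAllText(string path) { return File.ReadAllText(path); }
    public void WriteAllText(string path, string contents) { File.WriteAllText(path, contents); }
}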

The database layer is a little more subtle, as requests to the database often need to be optimized in isolation from any other request. Switching database/ORM strategies is fairly straightforward even without an abstraction, since most ORMs already use a common query language (LINQ); where they differ is in the subtleties of optimizing calls. Any abstraction layer would therefore buy little, while limiting the use of any one ORM to the lowest common denominator.

X DO NOT create interfaces for every service.

Another common misconception of SOLID design is that every component deserves an interface. DI containers can resolve concrete types without an issue, so there is no technical limitation to depending directly on a concrete type. In the book Growing Object-Oriented Software, Guided by Tests, these components are referred to as Peers, and in Hexagonal Architecture terms, interfaces are reserved for Ports.

√ DO depend on concrete types when those dependencies are in the same logical layer/tier.

A side effect of depending directly on concrete types is that it becomes very difficult to over-specify tests. Interfaces are appropriate when there is truly an abstraction to a concept, but if there is no abstraction, no interface is needed.
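
A minimal sketch with hypothetical types: the container happily resolves the concrete PriceCalculator, no interface required:

public class OrderProcessor {
    private readonly PriceCalculator _calculator; // concrete peer in the same layer

    public OrderProcessor(PriceCalculator calculator) {
        _calculator = calculator;
    }

    public decimal Process(Order order) {
        return _calculator.CalculateTotal(order);
    }
}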

X AVOID implementation class names that are just the implemented interface name without the “I”.

StructureMap’s default conventions do match IFoo with Foo, and this can be a convenient default behavior. But when an implementation’s name is just its interface name without the “I”, that is a symptom of arbitrarily creating an interface for every service when resolving the concrete service type would be sufficient. In other words, the mere ability to resolve a service type by an interface is not sufficient justification for introducing an interface.

√ DO name implementation classes based on details of the implementation (AspNetUserContext : IUserContext).

An easy way to detect excessive abstraction is when class names are directly the interface name without the prefix “I”. An implementation of an interface should describe the implementation. For concept-based interfaces, class names describe the representation of the concept (ChangeNameValidator, NameSearcher etc.) Environment/context-specific implementations are named after that context (WebApiUserContext : IUserContext).
Dynamic resolution

While most component resolution occurs at the very top level of a request (controller/presenter), there are occasions when dynamic resolution of a component is necessary. For example, model binding in MVC occurs after a controller is created, making it slightly more difficult to know the model type at controller construction time, unless it is assumed from the action parameters. For many extension points in MVC, it is impossible to avoid service location.

X AVOID using the container for service location directly.

Ideally, component resolution occurs once in a request, but in the cases where this is not possible, use a framework’s built-in resolution capabilities. In Web API for example, dynamically resolved dependencies should be resolved from the current dependency scope:

var validationProvider = actionContext
    .Request
    .GetDependencyScope()
    .GetService(typeof(IValidatorProvider));

Web API creates a child container per request and caches this scoped container within the request message. If the framework does not provide a scoped instance, store the current container in an appropriately scoped object, such as HttpContext.Items for web requests. Occasionally, you might need to depend on a service but need to explicitly decouple or control its lifecycle. In those cases, containers support depending directly on a Func.
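
Going back to storing the container in HttpContext.Items, a hedged sketch of that storage (the helper class and key are hypothetical):

public static class RequestContainer {
    private const string Key = "structuremap-child-container";

    public static void Set(HttpContextBase context, IContainer child) {
        context.Items[Key] = child; // scoped to this web request only
    }

    public static IContainer Get(HttpContextBase context) {
        return (IContainer)context.Items[Key];
    }
}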

√ CONSIDER depending on a Func<IService> for late-bound services.

For cases where known types need to be resolved dynamically, instead of trying to build special caching/resolution services, you can instead depend on a constructor function in the form of a Func. This separates wiring of dependencies from instantiation, allowing client code to have explicit construction without depending directly on a container.

private readonly Func<IEmailService> _emailServiceProvider;
 
public EmailController(Func<IEmailService> emailServiceProvider) {
    _emailServiceProvider = emailServiceProvider;
}
 
public ActionResult SendEmail(string to, string subject, string body) {
    using (var emailService = _emailServiceProvider()) {
        emailService.Send(to, subject, body);
    }
 
    return new EmptyResult(); // or a redirect/view, as appropriate
}

In cases where this becomes complicated, or reflection code is needed, a factory method or delegate type explicitly captures this intent.

√ DO encapsulate container usage with factory classes when invoking a container is required.

The Patterns and Practices Common Service Locator defines a delegate type representing the creation of a service locator instance:

public delegate IServiceLocator ServiceLocatorProvider();

For code needing dynamic instantiation of a service locator, configuration code creates a dependency definition for this delegate type:

public IDependencyScope BeginScope()
{
    IContainer child = this.Container.GetNestedContainer();
    var resolver = new StructureMapWebApiDependencyResolver(child);
    var provider = new ServiceLocatorProvider(() => resolver);
    child.Configure(cfg =>
    {
        cfg.For<ServiceLocatorProvider>().Use(provider);
    });
    return resolver;
}

This pattern is especially useful if an outer dependency has a longer configured lifecycle (static/singleton) but you need a window of shorter lifecycles. For simple instances of reflection-based component resolution, some containers include automatic facilities for creating factories.

√ CONSIDER using auto-factory capabilities of the container, if available.

Auto-factories in StructureMap are available as a separate package, and allow you to create an interface with an automatic implementation:

public interface IPluginFactory {
    IList<IPlugin> GetPlugins();
}
 
For<IPluginFactory>().CreateFactory();

The AutoFactories feature will dynamically create an implementation that defers to the container for instantiating the list of plugins.


Categories: Blogs

How to get current date in CodedUI (C#)?

Testing tools Blog - Mayank Srivastava - Wed, 09/17/2014 - 16:33
The code below helps to get the current date:

DateTime currentDate = DateTime.Today;
String DateValue = currentDate.ToString("M/d/yyyy");

If we want to add or subtract days from the current date, we can use:

DateTime currentDate = DateTime.Today.AddDays(1);
String DateValue = currentDate.ToString("M/d/yyyy");

or

DateTime currentDate = DateTime.Today.AddDays(-1);
String DateValue = currentDate.ToString("M/d/yyyy");
Categories: Blogs

Den Lilla Svarta om Teststrategi

Thoughts from The Test Eye - Wed, 09/17/2014 - 11:14
Ideas

I am quite proud to announce a new free e-book about test strategy.

It contains ideas Henrik Emilsson and I have discussed for years.

It is not a textbook, but it contains many examples and material that hopefully will inspire your own test strategies (the careful reader will recognize stuff from this blog and inspiration from James Bach’s Heuristic Test Strategy Model.)

Reader requirement: Understand Swedish.

Download Den Lilla Svarta om Teststrategi


Categories: Blogs

Unit Testing Client Side Code

Testing TV - Tue, 09/16/2014 - 12:51
This video demonstrates how to use tools like Mocha and PhantomJS to build rigorous client tests. It also discusses programming techniques to make JavaScript client code more easily testable. Video source: http://blog.jerryorr.com/2014/08/yes-you-can-unit-test-client-side-code.html Event organizer: http://webconference.psu.edu/
Categories: Blogs

How to do data driven testing in CodedUI (C#) using Excel.

Testing tools Blog - Mayank Srivastava - Tue, 09/16/2014 - 10:30
Data-driven testing is where we drive our test script with different sets of data to validate different functionalities of the application. The CodedUI tool has built-in support for data-driven tests from different sources; however, I am going to shed some light on the NPOI libraries, which help you write your own code to develop data-driven […]
Categories: Blogs

Teamwork for building in quality

Agile Testing with Lisa Crispin - Mon, 09/15/2014 - 01:19

I can’t say it enough: it takes the whole team to build quality into a software product, and testing is an integral part of software development along with coding, design and many other activities. I’d like to illustrate this yet again with my experience a couple days ago.

Some of my teammates at our summer picnic – we have fun working AND playing!

Recently our team has been brainstorming and experimenting with ways to make our production site more reliable and robust. Part of this effort involved controlling the rate at which the front end client “pings” the server when the server is under a heavy load or there is a problem with our hosting service. Friday morning, after our standup, the front end team decided on a new algorithm for “backing off” the ping interval when necessary and then restoring it to normal when appropriate.

Naturally I had questions about how we would test this, not only to ensure the correct new behavior, but to make sure other scenarios weren’t adversely affected. For example, the client should behave differently when it detects that the user’s connection has gone offline. There are many risks, and the impact of incorrect behavior can be severe.

I was working from home that day. Later in the morning, the programmer pair working on the story asked if we could meet via Skype. They also had asked another tester in the office to join us. The programmers explained three conditions in which the new pinger behavior should kick in, and how we could simulate each one.

The other tester asked if there were a way we could control the pinger interval for testing. For example, sometimes the ping interval should go up to five minutes. That’s a long time to wait when testing. We discussed some ideas how to do this. The programmers started typing examples of commands we might be able to do in the browser developer tools console to control the pinger interval, as well as to get other useful information while testing. We came up with a good strategy.

Note that this conversation took place before they started writing the actual code. While they started writing the code using TDD, the other tester and I started creating given-when-then style test scenarios on a wiki page. We started with the happy path, then thought of more scenarios and questions to discuss with the programmers. By anticipating edge cases and discussing them, we’ll end up with more robust code much more quickly than if we waited until the code was complete to start thinking about testing it end to end.

There are so many stories in this vein in Janet Gregory‘s and my new book, More Agile Testing: Learning Journeys for the Whole Team. We’ll have more information about the book on our book website soon, including introductions to our more than 40 contributors. The book will be available October 1. It takes a village to write a useful book on agile testing, and it takes a whole team to build quality into a software product!


Categories: Blogs

How to find if object is displaying in web page with CodedUI (C#)

Testing tools Blog - Mayank Srivastava - Sun, 09/14/2014 - 19:13
The code below helps to determine whether an object is displayed:

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://www.google.com"));
UITestControl nextButton = new UITestControl(browser);
nextButton.SearchProperties.Add("Id", "Object_ID");
if (nextButton.Exists) {
    Console.WriteLine("Next button is available");
} else {
    Console.WriteLine("Next button is not available");
}
Categories: Blogs

How to read html table cell data with CodedUI (C#)

Testing tools Blog - Mayank Srivastava - Sat, 09/13/2014 - 19:03
The code below helps to get data from a table cell:

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://YourWebApplicationURL.com"));
HtmlTable table = new HtmlTable(browser);
// The line below identifies the table object.
table.SearchProperties.Add("Id", "Table_ID");
for (int i = 1; i <= 1; i++) {
    for (int j = 1; j <= 8; j++) {
        HtmlCell cell = new HtmlCell(table);
[…]
Categories: Blogs

The Software Tester's Greatest Asset

I interact with thousands of testers each year. In some cases, it's in a classroom setting, in others, it may be over a cup of coffee. Sometimes, people dialog with me through this blog, my website or my Facebook page.

The thing I sense most from testers that are "stuck" in their career or just in their ability to solve problems is that they have closed minds to other ways of doing things. Perhaps they have bought into a certain philosophy of testing, or learned testing from someone who really wasn't that good at testing.

In my observation, the current testing field is fragmented into a variety of camps, such as those that like structure, or those that reject any form of structure. There are those that insist their way is the only way to perform testing. That's unfortunate - not the debate, but the ideology.

The reality is there are many ways to perform testing. It's also easy to use the wrong approach on a particular project or task. It's the old Maslow "law of the instrument" that says, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."

Let me digress for a moment...

I enjoy working on cars, even though it can be a very time-consuming, dirty and frustrating experience. I've been working on my own cars for over 40 years now. I've learned little tricks along the way to remove rusted and frozen bolts. I have a lot of tools - wrenches, sockets, hammers...you name it. The most helpful tool I own is a 2-foot piece of pipe. No, I don't hit the car with it! I use it for leverage. (I can also use it for self defense, but that's another story.) It cost me five dollars, but has saved me many hours of time. Yeah, a lowly piece of pipe slipped over the end of a wrench can do wonders.

The funny thing is that I worked on cars for many years without knowing that old mechanic's trick. It makes me wonder how many other things I don't know.

Here's the challenge...

Are you open to other ways of doing things, even if you personally don't like it?

For example, if you needed to follow a testing standard, would that make you storm out of the room in a huff?

Or, if you had to do exploratory testing, would that cause you to break out in hives?

Or, if your employer mandated that the entire test team (including you) get a certification, would you quit?

I'm not suggesting you abandon your principles or beliefs about testing. I am suggesting that in the things we reject out of hand, there could be just the solution you are looking for.

The best thing a tester does is to look at things objectively, with an open mind. When we jump to conclusions too soon, we may very well find ourselves in a position where we have lost our objectivity.

As a tester, your greatest asset is an open mind. Look at the problem from various angles. Consider the pros and cons of things, realizing that even your list of pros and cons can be skewed. Then, you can work in many contexts and also enjoy the journey.


Categories: Blogs

How to get current page URL with CodedUI (C#).

Testing tools Blog - Mayank Srivastava - Fri, 09/12/2014 - 10:02
The code below helps to get the current page URL using C# (CodedUI):

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://YourAppURL.com"));
Microsoft.VisualStudio.TestTools.UITesting.HtmlControls.HtmlDocument PageObject = new Microsoft.VisualStudio.TestTools.UITesting.HtmlControls.HtmlDocument(browser);
String URL = PageObject.PageUrl.ToString();
System.Console.WriteLine(URL);
Categories: Blogs

Zone of control vs Sphere of influence

Gojko Adzic - Fri, 09/12/2014 - 09:22

In The Logical Thinking Process, H. William Dettmer talks about three different areas of systems:

  • The Zone of control (or span of control) includes all those things in a system that we can change on our own.
  • The Sphere of influence includes activities that we can impact to some degree, but can’t exercise full control over.
  • The External environment includes the elements over which we have no influence.

These three system areas, and the boundaries between them, provide a very useful perspective on what a delivery team can hope to achieve with user stories. Evaluating which system area a user story falls into is an excellent way to quickly spot ideas that require significant refinement.

This is an excerpt from my upcoming book 50 Quick Ideas to Improve your User Stories. Grab the book draft from LeanPub and you’ll get all future updates automatically.

A good guideline is that the user need of a story (‘In order to…’) should ideally be in the sphere of influence of the delivery team, and the deliverable (‘I want…’) should ideally be in their zone of control. This is not a 100% rule and there are valid exceptions, but if a story does not fit into this pattern it should be investigated – often it won’t describe a real user need and rephrasing can help us identify root causes of problems and work on them, instead of just dealing with the symptoms.


When the user need of a story is in the zone of control of the delivery group, the story is effectively a task without risk, which should raise alarm bells. There are three common scenarios: the story might be fake, a micro-story, or misleading.

Micro-stories are what you get when a large business story is broken down into very small pieces, so that some small parts no longer carry any risk – they are effectively stepping stones to something larger. Such stories are OK, but it’s important to track the whole hierarchy and measure the success of the micro-stories based on the success of the larger piece. If the combination of all those smaller pieces still fails to achieve the business objective, it might be worth taking the whole hierarchy out or revisiting the larger piece. Good strategies for tracking higher level objectives are user story mapping and impact mapping.

Fake stories are those about the needs of delivery team members. For example, ‘As a QA, in order to test faster, I want the database server restarts to be automated’. This isn’t really about delivering value to users, but a task that someone on the team needs, and such stories are often put into product backlogs because of misguided product owners who want to micromanage. For ideas on how to deal with these stories, see the chapter Don’t push everything into stories in the 50 Quick Ideas book.

Misleading stories describe a solution and not the real user need. One case we came across recently was ‘As a back-office operator, in order to run reports faster, I want the customer reporting database queries to be optimised’. At first glance, this seemed like a nice user story – it even included a potentially measurable change in someone’s behaviour. However, the speed of report execution is pretty much in the zone of control of the delivery team, which prompted us to investigate further.

We discovered that the operator asking for the change was looking for discrepancies in customer information. He ran several different reports just to compare them manually. Because of the volume of data and the systems involved, he had to wait around for 20 to 30 minutes for the reports, and then spend another 10 to 20 minutes loading the different files into Excel and comparing them. We could probably have decreased the time needed for the first part of that job significantly, but the operator would still have had to spend time comparing information. Then we traced the request to something outside our zone of control. Running reports faster helped the operator to compare customer information, which helped him to identify discrepancies (still within our control potentially), and then to resolve them by calling the customers and cleaning up their data. Cleaning up customer data was outside our zone of control; we could just influence it by providing information quickly.

This was a nice place to start discussing the story and its deliverables. We rephrased the story to ‘In order to resolve customer data discrepancies faster…’ and implemented a web page that quickly compared different data sources and almost instantly displayed only the differences. There was no need to run the lengthy reports – the database software was more than capable of zeroing in on the differences very quickly. The operator could then call the customers and verify the information.

When the deliverable of a story is outside the zone of control of the delivery team, there are two common situations: the expectation is completely unrealistic, or the story is not completely actionable by the delivery group. The first case is easy to deal with – just politely reject it. The second case is more interesting. Such stories might need the involvement of an external specialist, or a different part of the organisation. For example, one of our clients was a team in a large financial organisation where configuration changes to message formats had to be executed by a specialist central team. This, of course, took a lot of time and coordination. By doing the zone of control/sphere of influence triage on stories, we quickly identified those that were at risk of being delayed. The team started on them quickly, so that everything would be ready for the specialists as soon as possible.

How to make it work

The system boundaries vary depending on viewpoint, so consider them from the perspective of the delivery team.

If a story does not fit into the expected pattern, raise the alarm early and consider re-writing it. Throw out or replace fake and misleading stories. Micro-stories aren’t necessarily bad, but going into such detail is probably overkill for anything apart from short-term plans. If you discover micro-stories on mid-term or long-term plans, it’s probably better to replace a whole group of related stories with one larger item.

If you discover stories that are only partially actionable by your team, consider splitting them into a part that is actionable by the delivery group, and a part that needs management intervention or coordination.

To take this approach even further, consider drawing up a Current reality tree (outside the scope of this post, but well explained in The Logical Thinking Process), which will help you further to identify the root causes of undesirable effects.

Categories: Blogs

But What do I Know?

Hiccupps - James Thomas - Fri, 09/12/2014 - 06:59


The novelty of hypertext over traditional text is the direct linking of references. This allows the reader to navigate immediately from one text to another, or to another part of the same text, or expose more detail of some aspect of that text in place. This kind of hyperlinking is now ubiquitous through the World Wide Web and most of us don't give it a second thought.

I was looking up hypermedia for the blog post I wanted to write today when I discovered that there's another meaning of the term hypertext in the study of semiotics and, further, that the term has a counterpart, hypotext. These two are defined in relation to one another, credited to Gérard Genette: "Hypertextuality refers to any relationship uniting a text B (which I shall call the hypertext) to an earlier text A (I shall, of course, call it the hypotext), upon which it is grafted in a manner that is not that of commentary."

In a somewhat meta diversion, following a path through the pages describing these terms realised a notion that I'd had floating around partially formed for a while: quite apart from the convenience, an aspect of hypertext that I find particularly valuable is the potential for maintaining and developing the momentum of a thought by chasing it through a chain of references. I frequently find that this process, and the speed of it, is itself a spur to further ideas and new connections. For example, when I'm stuck on a problem and searching hasn't got me to the answer, I will sometimes resort to following links through sets of web pages in the area, guided by the sense that they might be applicable, by them appearing to be about stuff I am not familiar with, by my own interest, by my gut.

I don't imagine that I would have thought that just now had I not followed hypertext to its alternative definition and then to hypotext, and then made the connection from the links between pages to the chain of thoughts which parallels, or perhaps entwines, or maybe leaps off from them.

And that itself is pleasing because the thing I wanted to capture today grew from the act of clicking through links (I so wish that could be a single verb and at least one other person thinks so too: clinking anyone?). I started at Adam Knight's The Facebook Effect, clinked through to a Twitter thread from which Adam obtained the image he used and then on to Overcoming Impostor Syndrome which contained the original image.

The image that unites these three is the one I'm using at the top here and what it solidified for me was the way that we can be inhibited from sharing information because we feel that everyone around us will already know it or will have remembered it because we know we told them it once before. I've seen it, done it and still do it myself in loads of contexts including circulating interesting links to the team, running our standups and reporting the results of investigations to colleagues.

As testers it can be particularly dangerous, not necessarily because of impostor or Facebook effects, but because we need to be aware that when we choose not to share, or acknowledge, or reacknowledge some significant issue with the thing we're testing we may be inadvertently hiding it (although context should guide the extent to which we need to temper the temptation to over-report and be accepting of others reminding us of existing information). It's one of the reasons I favour open notebook testing.

Note to self: I don't know what you know, you know?
Image: Overcoming Impostor Syndrome
Categories: Blogs

Chrome - Firefox WebRTC Interop Test - Pt 2

Google Testing Blog - Tue, 09/09/2014 - 22:09
by Patrik Höglund

This is the second in a series of articles about Chrome’s WebRTC Interop Test. See the first.

In the previous blog post we managed to write an automated test which got a WebRTC call between Firefox and Chrome to run. But how do we verify that the call actually worked?

Verifying the Call

Now we can launch the two browsers, but how do we figure out whether the call actually worked? If you try opening two apprtc.appspot.com tabs in the same room, you will notice the video feeds flip over using a CSS transform, your local video is relegated to a small frame and a new big video feed with the remote video shows up. For the first version of the test, I just looked at the page in the Chrome debugger and looked for some reliable signal. As it turns out, the remoteVideo.style.opacity property will go from 0 to 1 when the call goes up and from 1 to 0 when it goes down. Since we can execute arbitrary JavaScript in the Chrome tab from the test, we can simply implement the check like this:

bool WaitForCallToComeUp(content::WebContents* tab_contents) {
  // Apprtc will set remoteVideo.style.opacity to 1 when the call comes up.
  std::string javascript =
      "window.domAutomationController.send(remoteVideo.style.opacity)";
  return test::PollingWaitUntil(javascript, "1", tab_contents);
}

Verifying Video is Playing

So getting a call up is good, but what if there is a bug where Firefox and Chrome cannot send correct video streams to each other? To check that, we needed to step up our game a bit. We decided to use our existing video detector, which looks at a video element and determines if the pixels are changing. This is a very basic check, but it’s better than nothing. To do this, we simply evaluate the .js file’s JavaScript in the context of the Chrome tab, making the functions in the file available to us. The implementation then becomes:

bool DetectRemoteVideoPlaying(content::WebContents* tab_contents) {
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
      FILE_PATH_LITERAL("chrome/test/data/webrtc/test_functions.js"))))
    return false;
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
      FILE_PATH_LITERAL("chrome/test/data/webrtc/video_detector.js"))))
    return false;

  // The remote video tag is called remoteVideo in the AppRTC code.
  StartDetectingVideo(tab_contents, "remoteVideo");
  WaitForVideoToPlay(tab_contents);
  return true;
}

where StartDetectingVideo and WaitForVideoToPlay call the corresponding JavaScript methods in video_detector.js. If the video feed is frozen and unchanging, the test will time out and fail.

What to Send in the Call

Now we can get a call up between the browsers and detect if video is playing. But what video should we send? For Chrome, we have a convenient --use-fake-device-for-media-stream flag that will make Chrome pretend there’s a webcam and present a generated video feed (which is a spinning green ball with a timestamp). This turned out to be useful since Firefox and Chrome cannot acquire the same camera at the same time, so if we didn’t use the fake device we would have to have two webcams plugged into the bots executing the tests!

Bots running in Chrome’s regular test infrastructure do not have either software or hardware webcams plugged into them, so this test must run on bots with webcams for Firefox to be able to acquire a camera. Fortunately, we have that in the WebRTC waterfalls in order to test that we can actually acquire hardware webcams on all platforms. We also added a check to just succeed the test when there’s no real webcam on the system since we don’t want it to fail when a dev runs it on a machine without a webcam:

if (!HasWebcamOnSystem())
  return;

It would of course be better if Firefox had a similar fake device, but to my knowledge it doesn’t.

Downloading all Code and Components

Now we have all we need to run the test and have it verify something useful. We just have the hard part left: how do we actually download all the resources we need to run this test? Recall that this is actually a three-way integration test between Chrome, Firefox and AppRTC, which requires the following:

  • The AppEngine SDK in order to bring up the local AppRTC instance, 
  • The AppRTC code itself, 
  • Chrome (already present in the checkout), and 
  • Firefox nightly.

While developing the test, I initially just hand-downloaded these and installed and hard-coded the paths. This is a very bad idea in the long run. Recall that the Chromium infrastructure is comprised of thousands and thousands of machines, and while this test will only run on perhaps 5 at a time due to its webcam requirements, we don’t want manual maintenance work whenever we replace a machine. And for that matter, we definitely don’t want to download a new Firefox by hand every night and put it on the right location on the bots! So how do we automate this?

Downloading the AppEngine SDK
First, let’s start with the easy part. We don’t really care if the AppEngine SDK is up to date, so a relatively stale version is fine. We could have the test download it from the authoritative source, but that’s a bad idea for a couple of reasons. First, it updates outside our control. Second, there could be anti-robot measures on the page. Third, the download will likely be unreliable and fail the test occasionally.

The way we solved this was to upload a copy of the SDK to a Google storage bucket under our control and download it using the depot_tools script download_from_google_storage.py. This is a lot more reliable than an external website and will not download the SDK if we already have the right version on the bot.

Downloading the AppRTC Code
This code is on GitHub. Experience has shown that git clone commands run against GitHub will fail every now and then, and fail the test. We could write some retry mechanism, but we have found it’s better to simply mirror the git repository in Chromium’s internal mirrors, which are closer to our bots and thereby more reliable from our perspective. The pull is done by a Chromium DEPS file (which is Chromium’s dependency provisioning framework).

Downloading Firefox
It turns out that Firefox supplies handy libraries for this task. We’re using mozdownload in this script in order to download the Firefox nightly build. Unfortunately this fails every now and then so we would like to have some retry mechanism, or we could write some mechanism to “mirror” the Firefox nightly build in some location we control.

Putting it Together

With that, we have everything we need to deploy the test. You can see the final code here.

The provisioning code above was put into a separate “.gclient solution” so that regular Chrome devs and bots are not burdened with downloading hundreds of megs of SDKs and code that they will not use. When this test runs, you will first see a Chrome browser pop up, which will ensure the local apprtc instance is up. Then a Firefox browser will pop up. They will each acquire the fake device and real camera, respectively, and after a short delay the AppRTC call will come up, proving that video interop is working.

This is a complicated and expensive test, but we believe it is worth it to keep the main interop case under automation this way, especially as the spec evolves and the browsers are in varying states of implementation.

Future Work

  • Also run on Windows/Mac. 
  • Also test Opera. 
  • Interop between Chrome/Firefox mobile and desktop browsers. 
  • Also ensure audio is playing. 
  • Measure bandwidth stats, video quality, etc.


Categories: Blogs

Tackling cross-cutting concerns with a mediator pipeline

Jimmy Bogard - Tue, 09/09/2014 - 18:17

Originally posted on the Skills Matter website

In most of the projects I’ve worked on in the last several years, I’ve put in place a mediator to manage the delivery of messages to handlers. I’ve covered the motivation behind such a pattern in the past, where it works well and where it doesn’t.

One of the advantages behind the mediator pattern is that it allows the application code to define a pipeline of activities for requests, as opposed to embedding this pipeline in other frameworks such as Rails, node.js, ASP.NET Web API and so on. These frameworks have many other concerns going on besides the very simple “one model in, one model out” pattern that so greatly simplifies conceptualizing the system and realizing more powerful patterns.

As a review, a mediator encapsulates how a series of objects interact. Our mediator looks like:

public interface IMediator
{
    TResponse Send<TResponse>(IRequest<TResponse> request);
    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);
    void Publish<TNotification>(TNotification notification) where TNotification : INotification;
    Task PublishAsync<TNotification>(TNotification notification) where TNotification : IAsyncNotification;
}

This is from a simple library (MediatR) I created (and borrowed heavily from others) that enables basic message passing. It facilitates loose coupling between how a series of objects interact. And like many OO patterns, it exists because of missing features in the language. In more functional languages, passing messages to handlers is accomplished with features like pattern matching.

Our handler interface represents the ability to take an input, perform work, and return some output:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest message);
}

With this simple pattern, we encapsulate the work being done to transform input to output in a single method. Any complexities around this work are encapsulated, and any refactorings are isolated to this one method. As systems become more complex, isolating side-effects becomes critical for maintaining overall speed of delivery and minimizing risk.

We still have the need for cross-cutting concerns, and we’d rather not pollute our handlers with this work.

These surrounding behaviors become implementations of the decorator pattern. Since we have a uniform interface of inputs and outputs, building decorators around cross-cutting concerns becomes trivial.

Pre- and post-request handlers

One common request I see is to do work on the requests coming in, or post-process the request on the way out. We can define some interfaces around this:

public interface IPreRequestHandler<in TRequest> {
    void Handle(TRequest request);
}

public interface IPostRequestHandler<in TRequest, in TResponse> {
    void Handle(TRequest request, TResponse response);
}

With this, we can modify inputs before they arrive to the main handler or modify responses on the way out.
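
For example, a concrete post-processor is just another small class; here is a hedged sketch of an auditing handler, where IAuditLog is a hypothetical abstraction:

public class AuditPostRequestHandler<TRequest, TResponse>
    : IPostRequestHandler<TRequest, TResponse> {

    private readonly IAuditLog _auditLog; // hypothetical auditing abstraction

    public AuditPostRequestHandler(IAuditLog auditLog) {
        _auditLog = auditLog;
    }

    public void Handle(TRequest request, TResponse response) {
        // Record what came in and what went out, after the main handler ran.
        _auditLog.Record(typeof(TRequest).Name, request, response);
    }
}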

In order to execute these handlers, we just need to define a decorator around our main handler:

public class MediatorPipeline<TRequest, TResponse> 
    : IRequestHandler<TRequest, TResponse> 
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IPreRequestHandler<TRequest>[] _preRequestHandlers;
    private readonly IPostRequestHandler<TRequest, TResponse>[] _postRequestHandlers;

    public MediatorPipeline(
        IRequestHandler<TRequest, TResponse> inner,
        IPreRequestHandler<TRequest>[] preRequestHandlers,
        IPostRequestHandler<TRequest, TResponse>[] postRequestHandlers
        ) {
        _inner = inner;
        _preRequestHandlers = preRequestHandlers;
        _postRequestHandlers = postRequestHandlers;
    }

    public TResponse Handle(TRequest message) {

        foreach (var preRequestHandler in _preRequestHandlers) {
            preRequestHandler.Handle(message);
        }

        var result = _inner.Handle(message);

        foreach (var postRequestHandler in _postRequestHandlers) {
            postRequestHandler.Handle(message, result);
        }

        return result;
    }
}

And if we’re using a modern IoC container (StructureMap in this case), registering our decorator is as simple as:

cfg.For(typeof (IRequestHandler<,>))
   .DecorateAllWith(typeof (MediatorPipeline<,>));

When our mediator builds out the handler, it delegates to our container to do so. Our container builds the inner handler, then surrounds the handler with additional work. If this seems familiar, many modern web frameworks like koa include a similar construct using continuation passing to define a pipeline for requests. However, since our pipeline is defined in our application layer, we don’t have to deal with things like HTTP headers, content negotiation and so on.

Validation

Most validation frameworks I use validate against a type, whether it’s validation with attributes or delegated validation to a handler. With Fluent Validation, we get a very simple interface representing validating an input:

public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}

Fluent Validation defines base classes for validators for a variety of scenarios:

public class CreateCustomerValidator: AbstractValidator<CreateCustomer> {
  public CreateCustomerValidator() {
    RuleFor(customer => customer.Surname).NotEmpty();
    RuleFor(customer => customer.Forename).NotEmpty().WithMessage("Please specify a first name");
    RuleFor(customer => customer.Discount).NotEqual(0).When(customer => customer.HasDiscount);
    RuleFor(customer => customer.Address).Length(20, 250);
    RuleFor(customer => customer.Postcode).Must(BeAValidPostcode).WithMessage("Please specify a valid postcode");
  }

  private bool BeAValidPostcode(string postcode) {
    // custom postcode validating logic goes here
    return true; // placeholder so the sample compiles
  }
}

We can then plug our validation to the pipeline as occurring before the main work to be done:

public class ValidatorHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IValidator<TRequest>[] _validators;
    
    public ValidatorHandler(IRequestHandler<TRequest, TResponse> inner,
        IValidator<TRequest>[] validators) {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest request) {
        var context = new ValidationContext(request);

        var failures = _validators
            .Select(v => v.Validate(context))
            .SelectMany(result => result.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Any()) 
            throw new ValidationException(failures);

        return _inner.Handle(request);
   }
}

In our validation handler, we perform validation against Fluent Validation by loading up all of the matching validators. Because we have generic variance in C#, we can rely on the container to inject all validators for all matching types (base classes and interfaces). Having validators around messages means we can remove validation from our entities and move it into contextual actions driven by a task-oriented UI.

Framework-less pipeline

We can now push a number of concerns into our application code instead of embedded as framework extensions. This includes things like:

  • Validation
  • Pre/post processing
  • Authorization
  • Logging
  • Auditing
  • Event dispatching
  • Notifications
  • Unit of work/transactions

Pretty much anything you’d consider to use a Filter in ASP.NET or Rails that’s more concerned with application-level behavior and not framework/transport specific concerns would work as a decorator in our handlers.

Once we have this approach set up, we can define our application pipeline as a series of decorators around handlers:

var handlerType = cfg.For(typeof (IRequestHandler<,>));

handlerType.DecorateAllWith(typeof (LoggingHandler<,>));
handlerType.DecorateAllWith(typeof (AuthorizationHandler<,>));
handlerType.DecorateAllWith(typeof (ValidatorHandler<,>));
handlerType.DecorateAllWith(typeof (PipelineHandler<,>));
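
As a sketch of what one of these decorators looks like (ILog here is a hypothetical logging abstraction), LoggingHandler wraps the inner handler exactly as MediatorPipeline does:

public class LoggingHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly ILog _log; // hypothetical logging abstraction

    public LoggingHandler(IRequestHandler<TRequest, TResponse> inner, ILog log) {
        _inner = inner;
        _log = log;
    }

    public TResponse Handle(TRequest message) {
        _log.Info("Handling " + typeof(TRequest).Name);
        var response = _inner.Handle(message);
        _log.Info("Handled " + typeof(TRequest).Name);
        return response;
    }
}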

Since this code is not dependent on frameworks or HTTP requests, it’s easy for us to build up a request, send it through the pipeline, and verify a response:

var handler = container.GetInstance<IRequestHandler<CreateCustomer, CreateCustomerResult>>(); // response type name illustrative

var request = new CreateCustomer {
    Name = "Bob"
};

var response = handler.Handle(request);

response.CreatedCustomer.Name.ShouldBe(request.Name);

Or if we just want one handler, we can test that one implementation in isolation; it’s really up to us.

By focusing on a uniform interface of one model in, one model out, we can define a series of patterns on top of that single interface for a variety of cross-cutting concerns. Our behaviors become less coupled on a framework and more focused on the real work being done.

All of this would be a bit easier if the underlying language supported this behavior. Since many don’t, we rely instead on translating these functional paradigms into OO patterns, with IoC containers providing our glue.


Categories: Blogs

Introduction to Karma

Testing TV - Mon, 09/08/2014 - 18:38
This is a quick tutorial covering installing Karma and running tests from the command line and WebStorm. Karma is a test runner for JavaScript that runs on Node.js. It is very well suited to testing AngularJS or any other JavaScript projects. Using Karma to run tests using one of many popular JavaScript testing suites (Jasmine, […]
Categories: Blogs

Are you Ready for your Agile Journey?

The common pattern in approaching an Agile deployment is to begin by conducting Agile practices training, typically on Scrum or another Agile method.  While this will allow the team to begin mechanically applying Agile practices, it doesn’t address the culture shift that must occur: a shift that informs the mind and shapes behaviors, a shift toward "being Agile".  I term this approach of focusing on the cultural aspects of Agile “readiness”. 
Readiness is the beginning of the process of acclimatizing the mind toward Agile values and principles and what they really mean.  It includes making decisions on the elements for your implementation. Although it is important to lead with readiness, this framework may be used iteratively depending on whether you plan a more holistic deployment or an iterative deployment of certain elements. It starts with the premise that Agile is a culture change.  The implication is that Agile is more than a change in procedure or learning a new skill.  A culture change is a transformation in belief and behavior.  It requires change not by just one person, but by a number of people within your organization.  As you can guess, this takes time. 
Over the years, I've established what I term the Ready, Implement, Coach, and Hone (RICH) deployment framework. It specifically focuses on readiness activities that help you prepare not only to adopt the mechanical aspects of Agile practices but, more importantly, to begin a meaningful transformation of behavior toward an Agile mindset.
Readiness starts the moment someone asks the question, "Is Agile right for me?" The goal is to work through this question, understand the context, and figure out how Agile might be deployed. Readiness can start weeks or even months before you get serious about moving down the Agile path. However, it can also begin when you are ready to commit.
What are some of the "readiness" activities? These activities can help you shape the implementation according to the context and needs of an organization. Readiness provides us with an opportunity to:
  • Assess the current environment and current state of agility
  • Lay the educational groundwork of agile values and principles
  • Understand and adapt to self-organizing teams and away from command and control
  • Shift the focus to delivering customer value and away from an iron triangle mentality
  • Discuss the agile business benefits
  • Gauge the team and management willingness

Readying the mind shouldn't be taken lightly. It is important to understand the 'what' and 'why' prior to discussing the 'how' and 'when'. It is important that the teams understand and really embrace the Agile values and principles. Does senior management believe in the principles? Do the teams feel they can operate in an Agile manner that aligns with the values and principles? In fact, I dare say that if a team acts in a manner that expresses the Agile values and principles, even while forgoing the mechanical application of Agile practices, there is a greater chance that Agile will survive and thrive within the company.
Since there is already an overwhelming amount of material that focuses on "how to implement Agile" from a "doing" perspective, may I suggest a different approach: take the time to prepare the mind for the Agile mindset, and then incorporate this mindset into the culture, education, and decision-making process for your proposed implementation. With that goal in mind, let the readiness games begin! How ready are you?

To read more about the importance of readiness and additional readiness activities in detail, consider reading the book Being Agile.
Categories: Blogs

The ISO29119 debate

On LinkedIn, David Morgan started an interesting discussion titled "ISO/IEC/IEEE 29119 – why the fear and opprobrium". In this discussion Cor van Rijn asked me this question:

@Huib,
your comment gives the impression that you do not believe in standards,
Please enlighten me and let me know what are the DISadvantages to standards.
Personally I am a strong believer in standards, given that they are applied in a manner that is suitable to the environment and the problem (so with regards to complexity, size and risk) and that they should be used as a guideline and that issues that are not relevant should be omitted, due to your consideration and specific situation.
In that respect I would like to refer to the IEEE standards for software test documentation where this idea is phrased explicitly in the text of the standard.

Much has been said about ISO 29119 over the last few weeks. For some background, please have a look at the many things said online in my collection of resources on the controversy.

So what is wrong with this ISO 29119 standard?

  • The standard is not publicly available. How can I comply with, or even discuss, a standard that is not publicly available?
  • ISO is a commercial organisation. The standard is a form of "rent-seeking". One form of rent-seeking is using regulations or standards to create or manipulate a market for consulting, training, and certification.
  • The standard embodies a document-heavy test process which is unnecessary and therefore, in many situations, waste. Hasn't history shown us that documentation and processes are important, but that there are more important things we should consider?
  • The standard doesn't speak about the most important thing in testing: skills! The word skill is used 8 times in the first three parts of the standard (270 pages!), and not once is it made clear WHICH skills are needed. Only that you need to get the right skills to do the job.
  • There is much wrong with the content. For instance: the writers don't understand exploratory testing AT ALL. I wish I could quote the standard, but it is copyrighted. [Edit: it turns out I can quote the standard, so I edited this blog post and added some quotes (in blue) from the ISO 29119 standard, parts 1-3]. Here are some examples (HS marks my comments on the quotes):
    Example 1: The definition on page 7 in part 1: “exploratory testing experience-based testing in which the tester spontaneously designs and executes tests based on the tester’s existing relevant knowledge, prior exploration of the test item (including the results of previous tests), and heuristic “rules of thumb” regarding common software behaviours and types of failure. Note 1 to entry: Exploratory testing hunts for hidden properties (including hidden behaviours) that, while quite possibly benign by themselves, could interfere with other properties of the software under test, and so constitute a risk that the software will fail.
    HS: Spontaneously? Like magic? Or maybe using skills? There are many, many more heuristics I use while testing. And most importantly: learning is missing from this definition. Testing is all about learning. ET doesn't only hunt for hidden properties; it is about learning about the product under test.
    Example 2: The advantages and disadvantages of scripted and unscripted testing on page 33 in part 1:
    “Disadvantages Unscripted Testing
    Tests are not generally repeatable.”
    HS: Why are the tests not repeatable? I take notes. If needed, I can repeat ANY test I do. Whether tests are repeatable is not an interesting question to me, although I do not understand the constant pursuit of repeatable tests. To me that is old-school thinking. More interesting is to teach testers about reasons to repeat tests.
    “The tester must be able to apply a wide variety of test design techniques as required, so more experienced testers are generally more capable of finding defects than less experienced testers.”
    HS: Duh! Isn’t that the case in ANY testing?
    “Unscripted tests provide little or no record of what test execution was completed. It can thus be difficult to measure the dynamic test execution process, unless tools are used to capture test execution.”
    HS: Bollocks! I take notes of what has been tested, and I dare say that my notes are more valuable than a pile of test cases marked passed or failed. It is not about test execution completed; it is about coverage achieved. In my experience, exploratory testers are far better at reporting their REAL coverage and telling a good story about their testing. Even if tools are used to capture test execution, how would you measure the execution process? Count the minutes on the video?
    Example 3: Test execution on page 37 in part 2:
    8.4.4.1 Execute Test Procedure(s) (TE1) This activity consists of the following tasks:
    a) One or more test procedures shall be executed in the prepared test environment.
    NOTE 1 The test procedures could have been scripted for automated execution, or could have been recorded in a test specification for manual test execution, or could be executed immediately they are designed as in the case of exploratory testing.
    b) The actual results for each test case in the test procedure shall be observed.
    c) The actual results shall be recorded.
    NOTE 2 This could be in a test tool or manually, as indicated in the test case specification.
    NOTE 3 Where exploratory testing is performed, actual results can be observed, and not recorded.
    HS: Why should I record every actual result? That’s a lot of work and administration. But wait, if I do exploratory testing, I don’t have to do that? *sigh*
  • I think there is no need for this standard. I have gone through the arguments used in favour of this standard in the slides of a talk by Stuart Reid (convener of ISO JTC1/SC7 WG26, the working group developing the new ISO 29119 software testing standard), given at SIGIST in 2013 and at the Belgium Testing Days in 2014:

[Slide 1 from Stuart Reid's talk]

“Confidence in products”? Sure, with a product standard, maybe. But testing is not a product or a manufacturing process! “Safety from liability”? So this standard exists to cover my ass? Remember that a well-designed process badly executed will still result in bad products. Guidelines and no “best practice”? I wish that were true, but practice shows that these kinds of standards become mandatory and “best practice” very soon…

[Slide 2 from Stuart Reid's talk]

Common terminology is dangerous. Read Michael Bolton's posts about it here and here. To truly understand each other, we need to ask questions and discuss in depth. Shallow agreement on a definition will result in problems. Professional qualifications and certification schemes? We have those, and they didn't help, did they? Benchmarks of “good industry practice” are context-dependent. The purpose of a standard is to describe things context-free, so how can a standard be used as a benchmark? Ah! Best practice after all?

[Slide 3 from Stuart Reid's talk]

Who is demanding this standard? And please tell me why they want it. There will always be conflicts in definitions and processes. We NEED different processes to do our job well in the many different contexts we work in. A baseline for the testing discipline? Really? Without mentioning any context? What current industry practice lacks is skills! We need more excellent testers. The only way to become excellent is to learn, practice, and get feedback from mentors and peers. That is how it works. Buyers are unclear what good test practice is? How does that work when selecting a doctor or a professional soccer player? Would you look at their certifications and the standards they follow, or is there something else you would do?

I do believe in standards. I am very happy that there are standards: mostly standards for products, not processes. Testing is a performance, not a pile of documents and a process you can standardise. I think a very different process is needed to test a space shuttle, a website, and a machine that produces computer chips.

I wish that standards were merely guidelines, but reality shows that they often become mandatory. This post by Pradeep Soundararajan gives you some examples. That is why I think this standard should be stopped.

Finally, let's have a look at what ISO claims on the http://www.softwaretestingstandard.org/ website:

“ISO/IEC/IEEE 29119 Software Testing is an internationally agreed set of standards for software testing that can be used within any software development life cycle or organisation. By implementing these standards, you will be adopting the only internationally-recognised and agreed standards for software testing, which will provide your organisation with a high-quality approach to testing that can be communicated throughout the world.”

Really? I think it is simply not true. First of all, since the petition is signed by hundreds of people from all over the world and more than 30 people have blogged about it, I guess the standard is not really internationally agreed. And second: how will it provide my clients' organisations with a high-quality approach? Again: the quality of any approach lies in the skills and the mindset of the people doing the actual work.

I think this standard is wrong and I signed the petition and the manifesto. I urge you to do the same.

This post was edited after Esko Arajärvi told me I can quote text from the standard. ISO is governed by the law of Switzerland, whose Federal Act of October 9, 1992 on Copyright and Related Rights (status as of January 1, 2011) says in article 25 (Quotations): “1 Published works may be quoted if the quotation serves as an explanation, a reference or an illustration, and the extent of the quotation is justified for such purpose. 2 The quotation must be designated as such and the source given. Where the source indicates the name of the author, the name must also be cited.”
Categories: Blogs

SIGIST – It’s better, but is it enough?

The Social Tester - Fri, 09/05/2014 - 17:30
The BCS SIGIST (Special Interest Group In Software Testing) was the first conference event I ever attended. It was about 6 years ago and I remember being amazed that people actually got together to talk about testing. SIGIST was where I first saw Michael Bolton, James Whittaker, Dot Graham, James Lyndsay and Julian Harty. There […]
Categories: Blogs