
Feed aggregator

Tools for thinking about context – Agile sliders reimagined

Philosophically, I’m aligned to the context-driven testing view of the world. Largely, this is influenced by a very early awareness of the contextual factors behind success in my first job, and the wild difference between the games testing and corporate testing roles that I had. Since 2003, the work of the context-driven school founders has been a significant influence on how I speak about testing.

In 2004, when I worked on my first agile project at ANZ, I was lucky to fall in with a group of developers and analysts who were skilled and keen to solve some of the problems we had in enterprise intranet projects. A huge piece of that was using agile ideas to solve *our* most pressing problems, not all of the problems that enterprise had. To do that we aligned the corporate project management practices and rules to more general principles, and then set about satisfying the principles in a way that met our other objectives (the main one being to make it cost less than $30,000 to put a static page on the intranet).

Another question that came up was how we might move away from a rule-based governance framework to one oriented to principles and context. The meat of this blog post is the result of how that initial idea connected to my context-driven approach, and then turned into a model for project context. It was also spurred on slightly by the over-simplification I perceived in the commonly used agile project sliders – Cost, Time, Scope, Quality (though Mike Cohn has a somewhat improved version, and reminds me I should finally read that Rob Thomsett book Steve Hayes recommended).

This has hidden in my blog drafts for a good seven or eight years. It is intended to support my test/delivery strategy mnemonic, though both are useful independently. I’ve recently started sharing it with my testers and colleagues, so I feel it’s time to open it up to the world for review.

The usual caveat applies, that this is a model that works for me. If you find it helpful, I would love to hear from you. If you improve it or change it, I’d love to know about that too. Here are a few ways I hope it might help:

- To help us and other stakeholders consider elements of context that require us to assess the suitability of our standard approaches.
- To help ensure stakeholders understand that each piece of work they undertake is different in subtle but important ways.
- To help ensure that the test/delivery strategy/approach is reasonable.
- It may help us to create a record of project characteristics that we could search for stories about projects similar to the one we’re undertaking now.

This is rough, but given my observations of the context-driven community and software development in general, getting this out is more important than polishing it. So here is a model for context, intended to be put up somewhere visible with associated sliders:

Time to Market/Time constrained/Time criticality
A scale that indicates how time critical this piece of work is. That is, how bad is it if it takes longer than expected?

Business Risk
A scale that indicates the likelihood of failing for business reasons.

Technical Risk
A scale that indicates the likelihood of failing for technical/technology reasons.

Is this inherently complex?

Similar to, but different from, complexity: a big system with simple functions brings its own challenges.

How well understood is this problem? Have others solved similar problems before?

Value ($)
What is the size of the benefit?

Team Size
How big is the team?

Number of external stakeholders
How many of the stakeholders are not within the same management structures (e.g. subject to shared KPIs)?

Interfaces (external, internal)
Are there lots of interfaces to this product?

Cost/Budget/$ Constrained
How significant is the impact of spending more than planned?

Criticality (failure impact)
How bad will it be if this fails in production? (Max is life/safety critical)

How important is this relative to other projects in the organisation?

Scope/Feature constrained
How much opportunity is there to vary the scope of what is delivered? Fixed scope translates to risk if other things are constrained (especially time and budget).

Is everything required to solve the problem within the team? What things/people/knowledge needed to deliver are shared or external to the group?

Feedback cycle time
How quickly can you get feedback on questions regarding the product? This includes how quickly and how often you can test, as well as how long it takes for questions regarding the direction of the solution to be answered (e.g. availability of product stakeholders).

Communication bandwidth
When you are able to communicate as a team, what is the quality of that communication? Is it limited by technology or language? Offshore teams frequently suffer low-feedback, low-bandwidth communication.

Communication frequency
How often are you able to communicate with the team? Subtle difference to feedback cycle, in that someone may be able to quickly provide answers when available, but not very often.

Time constrained?
How fixed is the schedule? What is the impact of overrunning the planned completion date?

Team cohesion/familiarity
How long has the team worked together?

Team experience/maturity/skill
Has the team worked on this domain for a long time? Is there broad experience of different ways of working? Is the team strong technically?

Compliance requirement? (Note that this is not a slider, it’s a checkbox)
This can arguably be modelled using other properties, but may be worth flagging separately when a project is something that must be done.
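As a rough illustration of one of the hoped-for uses above, the slider positions could be captured as a searchable record of project characteristics, so that stories about similar past projects can be found. This is a sketch only: the 1–5 scale, the slider keys and the similarity measure are my assumptions, not part of the model itself.

```python
# Capture slider positions (1 = low, 5 = high) for a piece of work as plain
# data, and compare projects by how far their sliders differ.

def context_record(**sliders):
    """Record the slider positions for one project."""
    return dict(sliders)

def similarity(a, b):
    """Crude closeness score: smaller is more similar.
    Only compares sliders recorded in both projects."""
    shared = set(a) & set(b)
    return sum(abs(a[k] - b[k]) for k in shared)

intranet_2004 = context_record(
    time_criticality=4, business_risk=2, technical_risk=2,
    team_size=1, criticality=2, scope_constrained=3,
)
new_project = context_record(
    time_criticality=4, business_risk=2, technical_risk=3,
    team_size=1, criticality=2, scope_constrained=3,
)
print(similarity(intranet_2004, new_project))  # → 1
```

With a shelf of such records, the lowest-scoring past projects are the ones whose stories are most likely to be relevant.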

Additional contributors:
Thanks to Shane Clauson for prompting an addition to this model last week. Thanks to Vito Trifilo for cooking the barbecued ribs that brought me and Shane together!

Categories: Blogs

Experiment with Example Mapping

Agile Testing with Lisa Crispin - Fri, 06/03/2016 - 02:04

At Agile 2015, I learned about example mapping from Matt Wynne. Linda Rising’s session reinforced my enthusiasm to continue doing small, frugal experiments. I came back to work the following week feeling like it would be pretty easy to try an experiment with example mapping.

Example Example Map from JoEllen Carter

You can use example mapping in your specification workshops, Three Amigos meetings (more on that below), or whatever format your team uses to discuss upcoming stories with your product owner and/or business stakeholders. Write the story on a yellow index card. Write business rules or acceptance criteria on blue index cards. For each business rule, write examples of desired and undesired behavior on green index cards. Questions are going to come up that nobody in the room can answer right now – write those on red cards. That’s all there is to it!
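As a rough sketch, the four card colours map naturally onto plain data. The story, rule, example and question text below are invented for illustration:

```python
# One example map as plain data, mirroring the card colours:
# a yellow story card, blue rule cards, green example cards per rule,
# and red question cards for the things nobody in the room can answer.

example_map = {
    "story": "Approve an invoice",                      # yellow card
    "rules": [                                          # blue cards
        {
            "rule": "Only pending invoices can be approved",
            "examples": [                               # green cards
                "Pending invoice -> approved",
                "Already-approved invoice -> error shown",
            ],
        },
    ],
    "questions": [                                      # red cards
        "Can an approver approve their own invoice?",
    ],
}

# One possible readiness check: every rule has at least one example.
ready = all(rule["examples"] for rule in example_map["rules"])
print(ready)  # → True
```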

The problem

My team was experiencing a high rate of stories being rejected because of missing capabilities, and our cycle time was longer than we’d like. I asked our product owner (PO) if we could experiment with a new approach to our pre-Iteration Planning Meetings (IPM).

Up to this point, our pre-IPM meetings were a bit slapdash and hurried. The PO, the development anchor and I met shortly before the IPM and went quickly through the stories that would be discussed. There wasn’t a lot of time to think of questions to ask.

Our experiment

For our new experiment, we decided to try a “Three Amigos” (coined by George Dinwiddie) or what Janet Gregory and I call “Power of Three” approach. This is also similar to Gojko Adzic’s specification workshops. In our case it was Four Amigos. We decided to time-box our pre-IPM meeting to one hour, and hold it two business days before the IPM. The PO, designer, tester and developer anchor gathered to discuss the stories that would be presented and estimated in the IPM. Our goal was to build a base of shared understanding that we could build on in the IPM, so that when testing and coding starts on a story, everyone knows what capabilities are needed.

We tried out example mapping as a way to learn more about each story ahead of the IPM. Now, I’ve practiced example-driven development since I learned about it from Brian Marick back around 2003. So, I was surprised how effective it is to add rules along with the examples. The truth is, you can’t write adequate tests and code just from a few examples – you need the business rules too.

Since we have remote team members, using Matt Wynne’s color-coded index cards wouldn’t work for us. We tried using CardboardIt with its virtual index cards, but it proved a bit slow and awkward for our purpose. Since our product is Pivotal Tracker, a SaaS project tracking tool, we decided to try using it for the examples, rules and questions. For our planning meetings, we share the Tracker project screen in the Zoom video meeting for remote participants, so putting our example maps in text in our Tracker stories is a natural enough fit for us.

We’ve iterated on our Amigos meeting techniques over several months now. We don’t example map every story. For example, design stories may be covered well enough with the Invision design doc. What’s important is that we are chipping away at the problem. Feedback from my teammates is that they have a much better understanding of each story before we even start talking about it in the IPM. There may still be questions and conversations about the story, but it’s going deeper into the story capabilities because the basics are already there. And, our story rejection rate has gone down, as has our cycle time!

A template for structuring a conversation

During the pre-IPM “amigos” meeting, each story gets a goal/purpose, rules, examples, and maybe a scenario or two. We found that the purpose or goal is crucial – what value will this story deliver? The combination of rules and examples that illustrate them provides the right information to write business-facing tests that guide development. Here’s an example of our example mapping outcomes in a Tracker story:


Example mapping outcomes captured in a Tracker story


Devs use the info to help them write tests to guide dev. Ideally (at least in my opinion), those would be Cucumber BDD tests. The example map provides personas and scenarios along with the rules. However, sometimes it makes more sense to leverage existing functional rspec tests or Jasmine unit tests for the JS.

As a developer pair starts working on a story, they have a conversation with one or more testers, so that we’re all on the same page. When questions come up, we reconvene the Amigos to discuss them.

User stories act as a placeholder for a conversation. Techniques like example mapping help structure those conversations and ensure that everyone on the delivery team shares the customer’s understanding of the capabilities that story should provide. Since we’re a distributed team, we want a place to keep detailed-enough results of those conversations. Putting example maps in stories is working really well for that.

Example mapping is straightforward and easy to try out. Ask your team to try it with your Amigos a day or two before your next iteration planning meeting. If it doesn’t work well for you, there are lots of other techniques you can try! I hope to cover some of those here in the coming weeks.


Categories: Blogs

Move to the cloud with confidence: 4 ‘must do’ steps to migrate applications to the cloud

HP LoadRunner and Performance Center Blog - Thu, 06/02/2016 - 23:05


What are 4 ‘must do’ steps you should follow in migrating applications to the cloud? Learn more now.

Categories: Companies


DevelopSense Blog - Thu, 06/02/2016 - 19:03
Several years ago in one of his early insightful blog posts, Pradeep Soundarajan said this: “The test doesn’t find the bug. A human finds the bug, and the test plays a role in helping the human find it.” More recently, Pradeep said this: Instead of saying, “It is programmed”, we say, “It is automated”. A […]
Categories: Blogs

5 tips from Dynatrace 2016 Digital Experience Report

Dynatrace recently released its annual Digital Experience Report comparing the digital experiences offered by leaders in seven industries. You can watch the webinar or read the report to find out who the leaders are in your industry. Beyond identifying the leaders, the report dives deep into performance analysis to give you some ideas on how to improve.  In this post, […]

The post 5 tips from Dynatrace 2016 Digital Experience Report appeared first on about:performance.

Categories: Companies

Earlier Test Automation without Culture and Process Change Drama

Telerik TestStudio - Thu, 06/02/2016 - 17:46
Many organizations are seeing tremendous benefits in moving to one of the various test-first methodologies such as Test Driven Development (TDD), Behavior Driven Development (BDD) or Acceptance Test Driven Development (ATDD). Test-first approaches improve team communication effectiveness, dramatically shorten feedback cycles and get testing activities working in parallel with development versus happening after development is complete.

While we believe in the value of these methodologies, sometimes teams and organizations aren’t able to fully dive into them for a number of reasons. Regardless, teams can still see tremendous improvements even without wholesale adoption of “formalized” test-first methodologies. This blog post will help you formulate ideas to improve your testing activities and push them earlier, even without adopting a full-up methodology.

All testing activities benefit from earlier collaboration with the team; however, functional user interface automation has some specific benefits that come out of early, effective collaboration.

Jim Holmes
Categories: Companies

Announcing the 2016 Life Sciences Product Development Survey

The Seapine View - Thu, 06/02/2016 - 17:30

The 2016 Life Sciences Product Development Survey is now live! Last year, more than 900 industry professionals shared their insights on managing development artifacts, proving compliance to the FDA, and fostering innovation.

We invite you to add your voice in this year’s survey—whether you’re a hardware engineer or oversee product development. From your feedback, we’ll be able to assess and share how life sciences organizations manage their core product development artifacts, compliance, and traceability in the R&D phases.

Take the 2016 Life Sciences Product Development Survey now!

Categories: Companies

Early Bird Started for SEETEST 2016

Software Testing Magazine - Thu, 06/02/2016 - 16:34
The South East European Testing Board (SEETB), together with Quality House and ANIS, would like to cordially invite you to take part in the South East European Software Testing (SEETEST) Conference 2016. If you register before July 15th you will get a 20% Early Bird discount! SEETEST is a conference focused on Software Testing and Software Quality Management in South East Europe that will take place in Bucharest, Romania, on September 15 and 16, 2016. The conference program will have one day of tutorials (September 15), followed by one day of keynotes, presentations and an exhibition (September 16). You can register on
Categories: Communities

Workshop outputs from “How Architects nurture Technical Excellence” - Thu, 06/02/2016 - 15:45
Workshop background

Earlier this week, I ran a workshop at the first ever Agile Europe conference organised by the Agile Alliance in Gdansk, Poland. As described in the abstract:

Architects and architecture are often considered dirty words in the agile world, yet the Architect role and architectural thinking are essential amplifiers for technical excellence, which enable software agility.

In this workshop, we will explore different ways that teams achieve Technical Excellence and explore different tools and approaches that Architects use to successfully influence Technical Excellence.

During the workshop, the participants explored:

  • What are some examples of Technical Excellence?
  • How does one define Technical Excellence?
  • The role of the Architect in agile environments
  • The broader responsibilities of an Architect working in agile environments
  • The specific behaviours and responsibilities of an Architect that help or hinder Technical Excellence

What follows are the results of the collective experiences of the workshop participants during Agile Europe 2016.

How Architects nurture Technical Excellence from Patrick Kua

Examples of Technical Excellence

  • A set of coding conventions & standards that are shared, discussed and abided by within the team
  • Introducing more formal code reviews worked wonders, code quality enabled by code reviews, user testing and coding standards, Peer code review process
  • Software modeling with UML
  • First time we’ve used an in-memory search index to solve severe RDBMS performance problems
  • If scrum is used, a good technical Definition of Done (DoD) is visible and applied
  • Shared APIs for internal and external consumers
  • Introducing ‘no estimates’ approach and delivering software/features well enough to be allowed to continue with it
  • Microservice architecture with docker
  • Team spirit
  • Listening to others (not! my idea is the best)
  • Keeping a project/software alive and used in prod through excellent customer support (most exclusively)
  • “The art must not suffer” as attitude in the team
  • Thinking wide!
  • Dev engineering into requirements
  • Problems clearly and explicitly reported (e.g. Toyota)
  • Using most recent libraries and ability to upgrade
  • Right tools for the job
  • Frequent availability of “something” working (like a daily build that may be incomplete functionality, but in principle works)
  • Specification by example
  • Setting up technical environment for new software, new team members quickly introduced to the project (clean, straightforward set up)
  • Conscious pursuit of Technical Excellence by the team through this being discussed in retros and elsewhere
  • Driver for a device executed on the device
  • Continuous learning (discover new tech), methodologies
  • Automatic deployment, DevOps tools use CI, CD, UT with TDD methodology, First implementation of CD in 2011 in the project I worked on, Multi-layered CI grid, CI env for all services, Continuous Integration and Delivery (daily use tools to support them), Continuous Integration, great CI
  • Measure quality (static analysis, test coverage), static code analysis integrated into IDE
  • Fail fast approach, feedback loop
  • Shader stats (statistical approach to compiler efficiency)
  • Lock less multithreaded scheduling algorithm
  • Heuristic algorithm for multi threaded attributes deduction
  • It is easy to extend the product without modifying everything, modularity of codebase
  • Learn how to use something complex (in depth)
  • Reuse over reinvention/reengineering
  • Ability to predict how a given solution will work/consequences
  • Good work with small effort (efficiency)
  • Simple design over all in one, it’s simple to understand what that technology really does, architecture of the product fits on whiteboard
Categories: Blogs

Should all testers have OCD?

PractiTest - Thu, 06/02/2016 - 14:42


Vocational Psychology is a field in which psychologists, when selecting the right job for a person, search for a match between the person’s personality and the job requirements. There are a number of additional factors, such as the required skills and abilities, the work environment, the person’s family conditions and many more.


In the Testing Community it is frequently said that the best testers suffer from OCD, obsessive compulsive disorder, and that testing is, in fact, an adaptive activity that takes advantage of this phenomenon, ‘turning lemons into lemonade’.


OCD prevalence is about 1%-2% of the population, so if this is indeed the case, our testing community must have a much higher percentage.

Some of the main characteristics of OCD include:

  • Excessive double-checking of things, such as locks, appliances, and switches.
  • Repeatedly checking in on loved ones to make sure they’re safe.
  • Counting, tapping, repeating certain words, or doing other senseless things to reduce anxiety.


A real match between Testing and OCD?


The O*NET website is an online tool that assists users in selecting the right job for them, and it describes the various elements of pretty much any job title.


When looking at the interest elements that are included for the software testing position, we can find:

  • Investigative — Investigative occupations frequently involve working with ideas, and require an extensive amount of thinking. These occupations can involve searching for facts and figuring out problems mentally.
  • Realistic — Realistic occupations frequently involve work activities that include practical, hands-on problems and solutions. They often deal with plants, animals, and real-world materials like wood, tools, and machinery. Many of the occupations require working outside, and do not involve a lot of paperwork or working closely with others.
  • Conventional — Conventional occupations frequently involve following set procedures and routines. These occupations can include working with data and details more than with ideas. Usually there is a clear line of authority to follow.

At a quick glance it seems as if there is indeed some match between the software tester job interest elements and OCD characteristics, although it is not a 100% match. So even if you don’t think you fit the OCD definition, you can definitely excel at your job as a software tester.

Categories: Companies

Bugs and Vulnerabilities are 1st Class Citizens in SonarQube Quality Model along with Code Smells

Sonar - Thu, 06/02/2016 - 12:46

In SonarQube 5.5 we adopted an evolved quality model, the SonarQube Quality Model, that takes the best from SQALE and adds what was missing. In doing so, we’ve highlighted project risks while retaining technical debt.

Why? Well, SQALE is good as far as it goes, but it’s primarily about maintainability, with no concept of risk. For instance, if a new, blocker security issue cropped up in your application tomorrow, under a strict adherence to the SQALE methodology you’d have to ignore it until you fixed all the Testability, Reliability, Changeability, etc. issues. In reality, new issues (i.e. leak period issues) of any type are more important than time-tested ones, and new bugs and security vulnerabilities are the most important of all.

Further, while SQALE is primarily about maintainability, the SQALE quality model also encompasses bugs and vulnerabilities. So those important issues get lost in the crowd. The result is that a project can have blocker-level bugs, but still get an A SQALE rating. For us, that was kinda like seeing a green light at the intersection while cross-traffic is still flowing. Yes, it’s recoverable if you’re paying attention, but still dangerous.

So for the SonarQube Quality Model, we took a step back to re-evaluate what’s important. For us it was these things:

  1. The quality model should be dead simple to use
  2. Bugs and security vulnerabilities shouldn’t be lost in the crowd of maintainability issues
  3. The presence of serious bugs or vulnerabilities in a project should raise a red flag
  4. Maintainability issues are still important and shouldn’t be ignored
  5. The calculation of remediation cost (the use of the SQALE analysis model) is still important and should still be done

To meet those criteria, we started by pulling Reliability and Security issues (bugs and vulnerabilities) out into their own categories. They’ll never be lost in the crowd again. Then we consolidated what was left into Maintainability issues, a.k.a. code smells. Now there are three simple categories, and prioritization is easy.

We gave bugs and vulnerabilities their own risk-based ratings, so the presence of a serious Security or Reliability issue in a project will raise that red flag we wanted. Then we renamed the SQALE rating to the Maintainability rating. It’s calculated based on the SQALE analysis model (technical debt) the same way it always was, except that it no longer includes the remediation time for bugs and vulnerabilities.

To help enforce the new quality model, we updated the default Quality Gate:

  • 0 New Bugs
  • 0 New Vulnerabilities
  • New Code Maintainability rating = A
  • Coverage on New Code >= 80%
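As an illustration, the default Quality Gate above can be read as a simple predicate over leak-period metrics. This is a hypothetical sketch: the metric names are illustrative, not SonarQube’s actual web-service fields.

```python
# Evaluate the four default Quality Gate conditions against a project's
# leak-period (new code) metrics.

def passes_default_gate(metrics):
    return (
        metrics["new_bugs"] == 0
        and metrics["new_vulnerabilities"] == 0
        and metrics["new_maintainability_rating"] == "A"
        and metrics["new_coverage"] >= 80.0
    )

print(passes_default_gate({
    "new_bugs": 0,
    "new_vulnerabilities": 0,
    "new_maintainability_rating": "A",
    "new_coverage": 85.0,
}))  # → True
```

A single new bug or vulnerability fails the gate outright, which is exactly the red-flag behaviour described above.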

The end result is an understandable, actionable quality model you can master out of the box; quality model 2.0, if you will. Because managing code quality should be fun and simple.

Categories: Open Source

Your complete guide to Application Delivery Management at HPE Discover

HP LoadRunner and Performance Center Blog - Thu, 06/02/2016 - 04:33


I can't believe it, Hewlett Packard Enterprise Discover is next week! I know that the event can be overwhelming, so the team has created an easy-to-understand graphic guide to the Las Vegas event. Keep reading to see it for yourself.


Categories: Companies

CQRS and REST: the perfect match

Jimmy Bogard - Wed, 06/01/2016 - 22:02

In many of my applications, the UI and API gravitate towards task-oriented UIs. Instead of “editing an invoice”, I “approve an invoice”, with specialized models, behaviors and screens just for accomplishing that task. But what happens when we move from a server-side application to one more distributed, to be accessed via an API?

In a previous post, I talked about the difference between entities, resources, and representations. It turns out that by removing the constraint around entities and resources, it opens the door to REST APIs that more closely match how we’d build the UI if it were a completely server-side application.

With a server side application, taking the example of invoices, I’d likely have a page to view invoices:

GET /invoices

This page would return the table of invoices, with links to view invoice details (or perhaps buttons to approve them). If I viewed invoice details, I’d click a link to view a page of invoice details:

GET /invoices/684

Because I prefer task-based UIs, this page would include links to specific activities you could request to perform. You might have an Approve link, a Deny link, comments, modifications etc. All of these are different actions one could take with an invoice. To approve an invoice, I’d click the link to see a page or modal:

GET /invoices/684/approve

The URLs aren’t important here, I could be on some crazy CMS that makes my URLs “GET /fizzbuzzcms/action.aspx?actionName=approve&entityId=684”, the important thing is it’s a distinct URL, therefore a distinct resource and a specific representation.

To actually approve the invoice, I fill in some information (perhaps some comments or something) and click “Approve” to submit the form:

POST /invoices/684/approve

The server will examine my form post, validate it, authorize the action, and if successful, will return a 3xx response:

HTTP/1.1 303 See Other
Location: /invoices/684

The POST, instead of creating a new resource, returned back with a response of “yeah I got it, see this other resource over here”. This is called the “Post-Redirect-Get” pattern. And it’s REST.
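The Post-Redirect-Get flow can be sketched as a plain function that returns a status and headers; the handler, validation rule and form field here are invented for illustration.

```python
# Minimal Post-Redirect-Get sketch for the approve form: validate the POST,
# apply the change, then redirect the client to GET the invoice resource so
# a browser refresh re-fetches the page instead of re-posting the form.

def post_approve(invoice_id, form):
    if not form.get("approved_by"):      # validate the submission
        return 400, {}, "missing approver"
    # ...apply the approval to the invoice here...
    # 303 See Other: "yeah I got it, see this other resource over here"
    return 303, {"Location": f"/invoices/{invoice_id}"}, ""

status, headers, _ = post_approve(684, {"approved_by": "jon"})
print(status, headers["Location"])  # → 303 /invoices/684
```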


Not surprisingly, we can model our REST API exactly as we did our HTML-based web app. Though technically, our web app was already RESTful, it just served HTML as its representation.

Back to our API, let’s design a CQRS-centric set of resources. First, the collection resource:

GET /invoices

HTTP/1.1 200 OK
[
  {
    "id": 684,
    "invoiceNumber": "38042-L-275-684",
    "customerName": "Jon Smith",
    "orderTotal": 58.85,
    "href": "/invoices/684"
  },
  {
    "id": 688,
    "invoiceNumber": "33453-L-275-688",
    "customerName": "Maggie Smith",
    "orderTotal": 863.88,
    "href": "/invoices/688"
  }
]

I’m intentionally not using any established media type, just to illustrate the basics. No HAL or Siren or JSON-API etc.

Just like the HTML page, my collection resource could join in 20 tables to build out this representation, since we’ve already established there’s no connection between entities/tables and resources.

In my client, I can then follow the link to see more details about the invoice (or, alternatively, included links directly to actions). Following the details link:

GET /invoices/684

HTTP/1.1 200 OK
{
  "id": 684,
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "shippingAddress": "123 Anywhere",
  "lineItems": [ ],
  "href": "/invoices/684",
  "links": [
    { "rel": "approve", "prompt": "Approve", "href": "invoices/684/approve" },
    { "rel": "reject", "prompt": "Reject", "href": "invoices/684/reject" }
  ]
}

I now include links to additional resources, which in the CQRS world, those additional resources are commands. And just like our HTML version of things, these resources can return hypermedia controls, or, in the case of a modal dialog, I could have embedded the hypermedia controls inside the original response. Let’s go with the non-modal example:

GET /invoices/684/approve

HTTP/1.1 200 OK
{
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "href": "/invoices/684/approve",
  "fields": [
    { "type": "textarea", "optional": true, "name": "comments" }
  ],
  "prompt": "Approve"
}

In my command resource, I include enough information to instruct clients how to build a response (given they have SOME knowledge of our protocol). I even include some display information, as I would have in my HTML version. I have an array of fields, only one in my case, with enough information to instruct something to render it if necessary. I could then POST information up, perhaps with my JSON structure or form encoded if I liked, then get a response:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 303 See Other
Location: /invoices/684
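Because the command resource carries enough information to build a response, a generic client could render its fields array into an HTML form. A hypothetical sketch, with invented markup conventions:

```python
# Render a command resource's "fields" array into an HTML form. The client
# only needs to understand the field types our protocol defines.

def render_form(resource):
    rows = []
    for f in resource["fields"]:
        required = "" if f.get("optional") else " required"
        if f["type"] == "textarea":
            rows.append(f'<textarea name="{f["name"]}"{required}></textarea>')
        else:
            rows.append(f'<input type="{f["type"]}" name="{f["name"]}"{required}>')
    body = "\n".join(rows)
    return (f'<form method="post" action="{resource["href"]}">\n'
            f'{body}\n<button>{resource["prompt"]}</button>\n</form>')

approve = {
    "href": "/invoices/684/approve",
    "prompt": "Approve",
    "fields": [{"type": "textarea", "optional": True, "name": "comments"}],
}
html = render_form(approve)
print('<textarea name="comments">' in html)  # → True
```

The point is that the server, not the client, decides what the approve action looks like, just as it did when it served HTML.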

Or, I could have my command return an immediate response and have its own data, because maybe approving an invoice kicks off its own workflow:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve/3506
{
  "id": 3506,
  "href": "/invoices/684/approve/3506",
  "status": "pending"
}

In that example I could follow the location or the body to the approve resource. Or maybe this is an asynchronous command, and approval acceptance doesn’t happen immediately and I want to model that explicitly:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 202 Accepted
Location: /invoices/684/approve/3506
Retry-After: 120

I’ve received your approval request, and I’ve accepted it, but it’s not created yet so try this URL after 2 minutes. Or maybe approval is its own dedicated resource under an invoice, therefore I can only have one approval at a time, and my operation is idempotent. Then I can use PUT:

PUT /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve

If I do this, my resource is stored in that URL so I can then do a GET on that URL to see the status of the approval, and an invoice only gets one approval. Remember, PUT is idempotent and I’m operating under the resource identified by the URL. So PUT is only reserved for when the client can apply the request to that resource, not to some other one.
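A minimal in-memory sketch of that idempotent PUT: the first request creates the approval at the fixed URL (201 Created), and repeating the same request simply replaces it in place (200 OK). The store and handler are invented for illustration.

```python
# An invoice gets exactly one approval, stored at a fixed URL. PUT is
# replace-in-place, so repeating the request is safe (idempotent).

approvals = {}  # url -> approval body

def put_approval(invoice_id, body):
    url = f"/invoices/{invoice_id}/approve"
    created = url not in approvals
    approvals[url] = body          # same state no matter how often repeated
    return (201 if created else 200), {"Location": url}

print(put_approval(684, {"comments": "I love lamp"})[0])  # → 201
print(put_approval(684, {"comments": "I love lamp"})[0])  # → 200
```

Contrast this with POST, where repeating the request could create a second approval resource.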

In a nutshell, because I can create a CQRS application with plain HTML, it’s trivial to create a CQRS-based REST API. All I need to do is follow the same design guidelines on responses, pay attention to the HTTP protocol semantics, and I’ve created an API that’s both RESTful and CQRSful.


Categories: Blogs

Q & A Ranorex 6.0 Webinar

Ranorex - Wed, 06/01/2016 - 12:59

A major software update with a ton of new features led to a great Q&A session in our Ranorex 6.0 webinar! It was a pleasure presenting the new Ranorex 6.0 features to all of you, and I received some excellent questions. As I didn’t get to all of them during the webinar, I’ll cover the most popular ones here. But before I continue, I’d like to take this opportunity to thank all 640 attendees for their valuable input – you truly made this webinar a success!

Updating to Ranorex 6.0

How much does it cost to update to Ranorex 6.0?
All major software updates, including Ranorex 6.0, are included in our maintenance services at no additional cost. If your maintenance services have expired (maintenance must be renewed annually), please visit our Renewal page or contact us for further information.

Can I install and use both Ranorex 5.4 and Ranorex 6.0?
You can only install and use one Ranorex version at a time. As Ranorex automatically backs up your entire solution when you update to Ranorex 6.0, you don’t have to worry about your projects being corrupted.

Remote Testing with Ranorex Remote

What license do I need to set up a Ranorex Agent and use a Ranorex Remote?
You need a Ranorex Runtime Floating License to set up a Ranorex Agent and use Ranorex Remote. Remember that you can save 30% on Ranorex Runtime Floating Licenses until June 30, 2016, so be quick! The agent acquires the license at startup and holds it until the agent is shut down, so a Runtime License is blocked as long as the Ranorex Agent is active.

How do I deploy settings to a Ranorex Agent?
Settings from your local machine can easily be deployed to a Ranorex Agent. Please consult our dedicated User Guide section for detailed information on how to do so.

Do I need an active user session to run a remote test?
Yes, you need to make sure the Ranorex Agent is running in an active user session. Find out how to keep your remote machine unlocked even if you close the RDP session in our dedicated User Guide section.

How can I start a remote test?
You can start a remote test directly out of Ranorex Studio using the Remote Pad. By pressing the ‘Run’ button next to an agent’s name, the currently selected Run Configuration will be executed on this agent. Please find further information in our User Guide.

Can I send tests to an “Agent Pool”?
You have to specifically select the Ranorex Agent you want to execute your test on.

Which tests can I run on Ranorex Agents?
You can only run test suites on Ranorex Agents. If you want to find out more about how to execute remote tests with Ranorex Remote, you can find detailed instructions here.

Can I debug remote tests?
Debugging is only possible for locally executed tests. As Ranorex Remote only enables remote test execution, debugging is not possible at the moment.

Can I continue working on my local machine during remote test execution?
Yes, this is a main purpose of Ranorex Remote. Your local machine is not blocked during remote test execution.

Are remote tests executed sequentially or in parallel?
A Ranorex Agent can only execute one test at a time. If multiple tests are sent to one Ranorex Agent, they are queued at the agent and executed in order of arrival. If you want to execute multiple tests in parallel, you need to send each test to a different Ranorex Agent. As an example: to execute three tests at the same time, you have to send them to three different Ranorex Agents, which requires a total of three Ranorex Runtime Floating Licenses. You can find further information on remote test execution in our dedicated User Guide section.
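The dispatch rule described above – one running test per agent, queued in arrival order, parallelism only across agents – can be modelled as a per-agent FIFO. This is a simulation of the behaviour, not Ranorex code; all names are illustrative:

```python
from collections import deque

class Agent:
    """Simulated agent: runs one test at a time, queues the rest FIFO."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()
        self.executed = []

    def submit(self, test):
        self.queue.append(test)        # tests queue in arrival order

    def run_next(self):
        if self.queue:
            self.executed.append(self.queue.popleft())

# Three tests sent to ONE agent: they run sequentially, in arrival order.
agent = Agent("agent-1")
for t in ["smoke", "regression", "pricing"]:
    agent.submit(t)
while agent.queue:
    agent.run_next()
print(agent.executed)   # ['smoke', 'regression', 'pricing']

# True parallelism needs one agent (and one runtime license) per test.
agents = [Agent(f"agent-{i}") for i in range(3)]
for a, t in zip(agents, ["smoke", "regression", "pricing"]):
    a.submit(t)
    a.run_next()        # each agent runs its single test concurrently
```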

Can I integrate Ranorex Agents into a CI system? Can I schedule tests with Ranorex Remote?
At the moment, you can only start remote tests from Ranorex Studio and scheduling is not possible. Please check out our Product Roadmap to find what’s planned for Ranorex in the near future.

Learn more about Ranorex Remote

Ranorex Code Editor and Time Saving Features

Can I create custom code templates in Ranorex Studio?
Yes. In Ranorex Studio, simply select Tools > Options > Code Templates. Type your custom code template in the last row of the table and confirm by pressing ‘OK’.

Code templates

Which programming languages are supported in the Ranorex Code Editor?
The Ranorex Code Editor supports VB.NET and C#. Ranorex 6.0 is not based on the latest version of SharpDevelop, because that version no longer supports VB.NET.

Can I auto-create variables?
Yes, you can not only auto-create variables in Ranorex Studio, but also auto-create parameters. You can find instructions on how to auto-create variables in our dedicated User Guide section here, and on how to auto-create parameters here.

Learn more about Ranorex 6.0 Download 6.0 Trial

The post Q & A Ranorex 6.0 Webinar appeared first on Ranorex Blog.

Categories: Companies

Why record-playback (almost) never works and why almost nobody is using it anyway

Alister Scott once again calls out a number of spot-on technical points regarding the use of automation tools. In this case, he discusses record/playback automation tools.

Technical reasons aside, we also need to look at the non-technical reasons.

I’ve only once encountered someone trying to rely on the record-playback feature of an automation tool (my boss, working as a consultant and earning a commission on the tool’s licence). Record-playback exists primarily as a marketing tool. When we say ‘record-playback fails’, I generally take that to mean the product was purchased based on the dream of programmerless programming (I’m looking at you too, ‘Business Process Modelling’) and quickly fell into disuse when the maintenance cost exceeded the benefit of the automation.

The other common failing, of course, is that the most developed record-playback tools are (were?) expensive. I’m not sure what the per-licence cost of QTP/UFT is these days, but it used to be about 30% of a junior tester’s salary. Calculating the costs for even small teams, I could never defend the cost of the tool for regression automation over the value of an extra thinking person doing non-rote testing activities.

So if we can get through the bogus value proposition, especially relative to the abundance of licence-free options, there is a very limited space in which record-playback might add value:

- Generating code as a starting point for legacy projects. I’ve used record-playback to show testers how to start growing tools organically. That is, begin with a long, procedural record of something you want to automate. Factor out common steps, then activities, then business outcomes. Factor out data. Factor out environment, and so forth as you determine which parts of your system are stable in the face of change.
- If your test approach is highly data-driven, and the interface is fairly stable with common fields across test variations, you could quite feasibly get sufficient benefit from record-playback if your testers are mostly business SMEs and there is little technical expertise available. For example, when testing a lending product you might have input files for different kinds of loans with corresponding amortisation and loan repayment schedules. When testing a toll road, we had a pretty simple interface with lots of input variations to test pricing. In these situations, the cost of the test execution piece relative to the cost of identifying and maintaining test data is relatively small.
- When we have some execution that we want to repeat a lot, in a short space of time with an expectation that it will be thrown away, quickly recording a test can be beneficial. In this case, we still have free macro-recording tools as alternatives to expensive ‘testing’ tools.
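The organic-growth path in the first bullet – record something long and procedural, then factor out steps, then data – can be sketched with plain functions. This is a simulation (the `ui` helper just logs actions rather than driving a real interface, and all names are mine):

```python
log = []

def ui(action, *args):
    """Stand-in for a recorded UI action (click, type, etc.)."""
    log.append((action, *args))

# Step 1: the recording, verbatim and procedural.
def recorded_login_and_create_loan():
    ui("type", "username", "jdoe")
    ui("type", "password", "secret")
    ui("click", "login")
    ui("click", "new_loan")
    ui("type", "amount", "250000")
    ui("click", "submit")

# Step 2: factor out common steps, then factor out the data.
def login(user, password):
    ui("type", "username", user)
    ui("type", "password", password)
    ui("click", "login")

def create_loan(amount):
    ui("click", "new_loan")
    ui("type", "amount", str(amount))
    ui("click", "submit")

recorded_login_and_create_loan()
raw = list(log)
log.clear()
login("jdoe", "secret")
create_loan(250000)

# The factored version performs exactly the same actions,
# but steps and data can now vary independently.
print(raw == log)   # True
```

The point of the exercise is the direction of travel: each factoring pass isolates a part of the system that is stable in the face of change, which is where hand-written automation earns its keep.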

If we think of record-playback as tool assistance, rather than a complete test approach, there are some in-principle opportunities. In practice, they don’t usually stack up economically and we have other strategic options to achieve similar ends.

Categories: Blogs

GSOC Project Intro: Automatic Plugin Documentation

About me

I am Cynthia Anyango from Nairobi, Kenya. I am a second-year student at Maseno University. I am currently specializing in Ruby on Rails and trying to learn Python. I recently started contributing to open source projects. My major contribution was at Mozilla, where I worked with the QA team for Cloud Services. I wrote manual and automated tests for various cloud services, and documentation too. Above all that, I am competent and always passionate about what I get my hands on.

Project summary

Currently, Jenkins plugin documentation is stored in Confluence. Sometimes the documentation is scattered and outdated. In order to improve the situation we would like...
Categories: Open Source

Tibco Business Events Memory leak analysis in live production

As a performance architect, I get called into various production performance issues. One of our recent issues involved a Tibco Business Events (BE) service constantly violating our Service Level Agreements (SLAs) after running 10-15 hours since the last restart. If we kept the services running longer, we would see them crash due to an “out of […]

The post Tibco Business Events Memory leak analysis in live production appeared first on about:performance.

Categories: Companies

Ranorex 6.0 Released

Software Testing Magazine - Tue, 05/31/2016 - 15:30
Ranorex 6.0 has finally been released. Ranorex is easy-to-use test automation software for developing and managing projects in teams made up of both testers and developers. The new features provided by this release are: Ranorex Remote for remote testing, Git integration, faster test execution, and code editor enhancements.

Ranorex Remote

Directly from Ranorex Studio, you can now deploy your tests to multiple Ranorex Agents for remote test execution. Continue working on your computer during remote test execution, and receive an automatic notification once the test has been executed and the report is ready. Your colleagues can view which tests ran on a specific agent and have full access to all test reports.

Git Meets Ranorex

For the first time, Ranorex supports a decentralized version control system: Git. The benefits? Full access to all Git functionality within Ranorex Studio and enhanced collaboration between teams.

About Ranorex

Ranorex is a software development company that provides innovative software testing solutions to hundreds of companies and educational institutions around the world. Ranorex provides a comprehensive range of tools for software test automation. Visit
Categories: Communities

Shaving Time with Sauce Labs – Announcing our #TestDaddy Contest!

Sauce Labs - Tue, 05/31/2016 - 11:00

In honor of Father’s Day, taking place Sunday, June 19th, Sauce Labs is running a three-week-long Twitter contest. Enter now for your chance to win a deluxe Dollar Shave Club Father’s Day shave kit and a three-month subscription – for you, or as a gift for Dad!

#TestDaddy Contest Rules

Participating is simple: tell us how Sauce Labs helps you save time, or “shave” time, off your busy day via a tweet to @saucelabs, using the hashtag #TestDaddy. Keep your response to 140 characters or less – no long, shaggy-dog stories!

The contest runs now through Wednesday, June 15th. On Friday, June 17th, one winner will be announced via the @saucelabs Twitter handle. The winner will be selected based on the creativity of their response, and will receive a deluxe Dollar Shave Club Father’s Day shave kit and a three-month subscription!


So tell us how you’ve shaved time off your software development and testing process by using Sauce, and get in the running for a deluxe Dollar Shave Club Father’s Day shave kit and three-month subscription. Don’t forget to use the required hashtag, #TestDaddy, to be included – and hey, if you want to follow @saucelabs, Bob’s your uncle!

The contest is open to dads, moms, little and big shavers, and anyone using Sauce Labs of course!

Categories: Companies

New display of Pipeline’s "snippet generator"

Those of you updating the Pipeline Groovy plugin to 2.3 or later will notice a change to the appearance of the configuration form. The Snippet Generator tool is no longer a checkbox enabled inside the configuration page. Rather, there is a link Pipeline Syntax which opens a separate page with several options. (The link appears in the project’s sidebar; Jenkins 2 users will not see the sidebar from the configuration screen, so as of 2.4 there is also a link beneath the Pipeline definition.) Snippet Generator continues to be available for learning the available Pipeline steps and creating sample calls given various configuration options. The new page also...
Categories: Open Source
