
Feed aggregator

European Testing Conference, Helsinki, Finland, February 8-10 2017

Software Testing Magazine - Mon, 10/31/2016 - 08:00
The European Testing Conference is a two-day event that explores advanced new methods for making software more effective. The 2017 event will take place in Helsinki, Finland. In the agenda of the European Testing Conference you can find topics like “The Error of Our Ways”, “Taking inspiration from unlikely sources”, “How to Get Automation Included In Your Definition of Done”, “Testing the Modern JS Stack”, “Introduction to Approval Testing with TextTest”, “JUnit 5 – Next Generation Testing on the JVM”, “My Experiences with Testing and Checking”, “Testing in a Continuous Delivery World”, “Symbiosis of Mobile Analytics and Software Testing”, “Dealing with Device Fragmentation in Mobile Games Testing” and “Testing Responsive Websites”. Location for the European Testing Conference: Wanha Satama, Helsinki, Finland.
Categories: Communities

Building an Agile Culture of Learning

Does your Agile education begin and end with barely a touch of training?  A number of colleagues have told me that in their companies, Agile training ranged from 1 hour to 1 day.  Some people received 2 days of Scrum Master training. With this limited training, they were expected to implement and master the topic.  Agile isn’t simply a process or skill that can be memorized and applied. It is a culture shift. Will this suffice for a transformation to Agile?
Education is an investment in your people.  A shift in culture requires an incremental learning approach that spans time.  What works in one company doesn’t work in another. A learning culture should be an intrinsic part of your Agile transformation that includes skills, roles, process, culture and behavior education with room to experience and experiment.
An Agile transformation requires a shift toward a continuous learning culture which will give you wings to soar!  You need a combination of training, mentoring, coaching, experimenting, reflecting, and giving back. These education elements can help you become a learning enterprise.  Let's take a closer look at each:
Training is applied when an enterprise wants to build employee skills, educate employees in their role, or roll out a process. It is often event driven and a one-way transfer of knowledge. What was learned can be undone when you move back into your existing culture.
Coaching helps a team put the knowledge into action and lays the groundwork for transforming the culture. Coaching provides a two-way communication process so that questions can be asked along the way. A coach can help you course-correct and promote right behaviors for the culture you want.
Mentoring focuses on relationships and building confidence and self-awareness. The mentee invests time by proposing topics to be discussed with the mentor in the relationship. In this two-way communication, deep learning can occur.
Experimenting focuses on trying out the new skills, roles, and mindset in a real-world setting.  This gives you first-hand knowledge of what you’ve learned and allows for a better understanding of Agile.
Reflecting focuses on taking the time to consider what you learned whether it is a skill, process, role, or culture, and determine what you can do better and what else you need on your learning journey. 
Giving back occurs when the employee has gained enough knowledge, skills and experience to start giving back to their community, making the learning circle complete. Helping others highlights a feeling of ownership of the transformation and the learning journey.
It takes a repertoire of educational elements to achieve an Agile culture and become a learning enterprise. When you have people willing to give back, the learning circle is complete and your enterprise can soar.


For more Agile-related learning and education articles, consider reading:

Categories: Blogs

xUnit and Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. The JUnit plugin is the go-to test result reporter for many Jenkins projects, but it is not the only one available. The xUnit plugin is a viable alternative that supports JUnit and many other test result file formats. Introduction No matter the project, you need to gather and report test results. JUnit is one of the most widely supported formats for recording test results. For scenarios where your tests are stable and your framework can produce JUnit output, this makes the JUnit plugin ideal for reporting results in Jenkins. It will consume results from a specified file or...
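As a minimal sketch of the idea (the Maven layout and report path are illustrative, not from the original post), recording JUnit-format results in a declarative Pipeline looks like this:

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // Run the tests; Surefire writes JUnit-format XML reports
                sh 'mvn test'
            }
            post {
                always {
                    // JUnit plugin step: collect and publish the test results
                    junit '**/target/surefire-reports/*.xml'
                }
            }
        }
    }
}
```

The xUnit plugin can consume the same reports alongside many other formats, which is what makes it a viable alternative.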
Categories: Open Source

All Day DevOps: Practitioner-to-Practitioner

Sonatype Blog - Sun, 10/30/2016 - 15:25
Over the past year, I have traveled to and delivered presentations at 18 DevOps events. I’ve also heard that over the past several years, John Willis has participated in more than 200 DevOps events. But not all of us have the time or budget to get ourselves and our teams out to these events where...

To read more, visit our blog at
Categories: Companies

Enterprise DevOps

ISerializable - Roy Osherove's Blog - Sat, 10/29/2016 - 19:01

As part of my work in the past few years I’ve been more and more deeply involved in large scale DevOps implementations in very large companies. I’m now writing about my experiences here:

Some of the latest posts include:
And about DevOps metrics:

Categories: Blogs

Announcing the Telerik DevCraft R3 2016 Release Webinar

Telerik TestStudio - Fri, 10/28/2016 - 17:06
Join us for a webinar to learn about the latest upgrades to our DevCraft development tooling suite—all designed to make your life as a developer easier. Sam Basu
Categories: Companies

DevOps in an ITIL environment

James Betteley's Release Management Blog - Fri, 10/28/2016 - 16:52

At IPExpo in London a couple of weeks ago, I was asked if it was possible to “Do DevOps in an ITIL environment”.

My simple answer is “yes”.

ITIL and DevOps are two different things, but they both attempt to provide a set of “best practices”: ITIL for Service Delivery and Maintenance, DevOps for Software Delivery and Support.

DevOps is mostly concerned with a couple of things:

  • The mechanics of building and delivering software changes (we’re talking about Continuous Delivery, deployment automation, Configuration automation and so on).
  • The behaviours, interactions and collaboration between the different functions involved in delivering software (Business, Dev, Test, Ops etc)

ITIL largely stays away from anything to do with the mechanics, and doesn’t touch on culture and collaboration – preferring instead to focus more on the tangible concepts of IT service support. It’s essentially a collection of procedures and processes for delivering and supporting IT services. Most of those procedures and practices are just common sense good ideas.

DevOps isn’t a prescriptive framework; it’s more like a philosophy (in the same way that Agile isn’t a framework). Because it’s not prescriptive, it can work with any framework (such as Scrum) provided that framework isn’t at odds with the DevOps philosophy (as waterfall is).

ITIL provides a set of concepts which you then implement in your own way. For example, ITIL promotes the concepts of Incident and Problem Management. It doesn’t tell you exactly HOW you should do them, it simply suggests that these are good processes to have. There are recommendations around actions such as trend analysis and root-cause analysis, but it doesn’t prescribe how you should implement these.

Change Control

Probably the area with the greatest amount of cross-over is change management. ITIL explicitly mentions it as a procedure for the efficient handling of all changes, and goes on to talk about Change Advisory Boards, Types of Change, Change Scheduling and a bunch of other “things to do with deploying changes to an environment”.

DevOps also advocates smooth and efficient processes for deploying changes through environments – so there’s no conflict here. The only slight misalignment is that in ITIL, change management is seen as an activity that happens during the Service Transition phase, while in DevOps we tend to advocate the identification and promotion of pre-authorised changes (standard change), which means the change management process effectively starts prior to service transition. But that’s about it really.

Some people get a bit carried away with the role of the Change Advisory Board in ITIL, and insist that every change must pass through some sort of CAB process (usually involving a monthly CAB meeting, where a bunch of stakeholders review all changes queued up for a production deployment, which usually only serves to cause a delay in your software delivery process and add very little value). ITIL doesn’t explicitly say it has to happen this way – it’s not that prescriptive!

Similarly, DevOps doesn’t say you can’t have a CAB process. If you’ve got a highly complex and unstable environment that’s receiving some sporadic high-risk changes, then CAB review is probably a good idea. The only difference here is that DevOps would encourage these Change Advisory Board reviews to happen earlier in the process to ensure risk is mitigated right from the start, rather than right at the end.


So, in summary, ITIL and DevOps are not having a fight in the schoolyard at home time, there’s nothing to see here, go about your business.

Categories: Blogs

SonarQube Embraces the .NET Ecosystem

Sonar - Fri, 10/28/2016 - 15:05

In the last couple months, we have worked on further improving our already-good support for the .NET ecosystem. In this blog post, I’ll summarize the changes and the product updates that you’re about to see.

C# plugin version 5.4

We moved all functionalities previously based on our own tokenizer/parser to Roslyn. This lets us do the colorization more accurately and will allow future improvements with less effort. Also, we’re happy to announce the following new features:

  • Added symbol reference highlighting, which has been available for Java source code for a long time.
  • Improved issue reporting with exact issue locations.
  • Added the missing complexity metrics: “complexity in classes” and “complexity in functions”
  • Finally, we also updated the rule engine (C# analyzer) to the latest version, so you can benefit from the rules already available through SonarLint for Visual Studio.

With these changes you should have the same great user experience in SonarQube for C# that is already available for Java.

VB.NET plugin version 3.0

The VB.NET plugin 2.4 also relied on our own parser implementation, which meant that it didn’t support the VB.NET language features added by the Roslyn team, such as string interpolation, and null-conditional operators. The deficit resulted in parsing errors on all new constructs, and on some already existing ones too, such as async await, and labels that are followed by statements on the same line. The obvious solution to all these problems was to use Roslyn internally. In the last couple months, we made the necessary changes, and now the VB.NET plugin uses the same architecture as the C# plugin. This has many additional benefits above and beyond eliminating the parsing errors, such as enabling the following new features in this version of the VB.NET plugin:

  • Exact issue location
  • Symbol reference highlighting
  • Colorization based on Roslyn
  • Copy-paste detection based on Roslyn
  • Missing complexity metrics are also computed
  • Support all the coverage and testing tools already available for C#

Additionally, we removed the dependency between the VB.NET and C# plugins, so if you only do VB.NET development, you don’t have to install the C# plugin any more.

While we were at it, we added a few useful new rules to the plugin: S1764, S1871, S1656, S1862. Here’s an issue we found with these rules in Roslyn itself:

Scanner for MsBuild version 2.2

Some of the features mentioned above couldn’t be added just by modifying the plugins. We had to improve the Scanner for MSBuild to make the changes possible. At the same time, we fixed many of the small annoyances and a few bugs. Finally, we upgraded the embedded SonarQube Scanner to the latest version, 2.8, so you’ll benefit from all changes made there too (v2.7 changelog, v2.8 changelog).

Additionally, when you use MSBuild14 to build your solution, we no longer need to compute metrics, copy-paste token information, code colorization information, etc. in the Scanner for MSBuild “end step”, so you’ll see a performance improvement there. These computations were moved to the build phase where they can be done more efficiently, so that step will be a little slower, but the overall performance should still be better.
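For context, the “end step” mentioned above is the last of the scanner’s usual three-command pattern (the project key, name and version below are placeholders):

```
MSBuild.SonarQube.Runner.exe begin /k:"project-key" /n:"Project Name" /v:"1.0"
msbuild /t:Rebuild
MSBuild.SonarQube.Runner.exe end
```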

FxCop plugin version 1.0

A final change worth mentioning is that we extracted FxCop analysis from the C# plugin into a dedicated community plugin. This move seems to align with what Microsoft is doing: not developing FxCop any longer. Microsoft’s replacement tool will come in the form of Roslyn analyzers.

Note that we not only extracted the functionality to a dedicated plugin, but fixed a problem with issues being reported on excluded files (see here).


That’s it. Huge architectural changes with many new features driven by our main goal to support .NET languages to the same extent as we support Java, JavaScript, and C/C++.

Categories: Open Source

The test automation silver bullet – Myth or Reality?

Well, obviously it’s a myth.  But that said, why do many still seek it?  It is still perceived to be the Holy Grail of testing and keeps millions of people in gainful employment every day – and to an extent that includes me.

Part of the problem is a lack of understanding in senior positions, as they learn of automation successes and believe these can be applied directly to all their software testing problems and beyond.  I am one of the first to talk about our clients’ successes, and why not – they are successes and they should be proud of them.  But they need to be considered in context, as that context is all-important.

What is Test Automation Success?

Firstly, what type of organisation had the success?  The testing needs of HSBC Bank are quite different from those of Costco or Pfizer.  The industry demands mean different things have higher importance in one compared to the other, and a major success for one might be of minor value to another.  Just because one company automatically tests the whole of its website content every night without a single script, and has a complete audit trail of the content and changes to satisfy a corporate compliance program, does not mean that will solve any of another’s needs.

Then there is the technology.  Different enterprises have different systems: some built in-house, some purchased, some consumed in the cloud.  Many are going through an application modernisation programme to transition from one approach to another.  Their needs will depend on the nature of the programme.  So each organisation must decide what is relevant to them.  Looking to the future, it looks bright for ERPs such as SAP HANA, Oracle Applications, Infor M3 and Dynamics, to name a few that our clients test, but especially for those available in the cloud.  Who wants to write when you can purchase or subscribe?  Testing a cloud application is a totally different proposition to testing an API or a piece of SOA code in an agile development.  So unless you are one of the companies creating these ERP and enterprise applications, you will want to think about how you will test what you can move to the cloud, and who will test it.

The “Where are we?” question.

Just like in the old joke about asking for directions, which earns the response “If I was going there, I wouldn’t start from here”.  If you don’t know where you are – how you do things now; what, why and how you test – you are never going to make a success of automation.  If your approach to testing is poor now, the jump to test automation will probably just make it worse.  You have to gain mastery of what you have and do now, and do proper, organised Application Quality Management.  But that doesn’t mean you can’t tackle test automation at all; it just means you need to choose the software testing tools carefully to be relevant to your need and to set out on that journey.  Much is made of having fully documented test cases, but that does not mean you have to write all these up before taking any step. For example, you can automate the capture of business processes to create reusable test cases and build a library in your Application Quality Management solution. One of our clients, Marston’s, took a joined-up approach to the whole problem, from business process capture to test cases to test automation and a regression pack for SAP.  But there were different successes along the way and different testing tools used in the process.

Whose job is it?

It must be the QA team – the responsibility of the QA manager, surely?  Mind you, the QA team are not usually responsible for bug creation, so the developers must play a role. And the Shift Left argument proposes that if we did enough testing early on we would not need a QA team, so maybe we should just make developers responsible for quality.  But in the end, the users have to use it, to know how to use it, to have a good user experience (UX) and to be able to do their jobs well.  So maybe they are the most important group; they certainly will be in a cloud testing scenario. In this case, they will need testing tools designed to support UAT.  That might include automation, or a process of getting to automation by making manual testing and documentation easier in the first instance.  This is much more Shift Right than Shift Left.  We created TestDrive-UAT with this need in mind, and because this area often consumes more resource than any other whilst having had, until now, hardly any technology aimed at helping solve the problem – a growing problem.  If you take a bigger view of the problem you might see areas of synergy and mutual gain.  For example, for a change or a new system to be rolled out successfully, users need to understand how to perform tasks.  This is why TestDrive-UAT and other parts of the solution, such as TestDrive-Assist, create documentation and ‘how to’ videos as a by-product of the testing process.  Is it part of software testing?  Perhaps traditionally not, but it is part of getting a successful deployment.

Test Data.  Testing is all about data.

Good testing and a good testing strategy need to include test data.  It’s an area many have successfully applied automation to, and to Original Software it typically means data extraction, manipulation, protection and validation. In the past, I have seen the only successful use of legacy test automation tools to be in the creation of new records for test data.  I’m not saying that’s a bad thing, but I have heard vendors and users talk about the number of automated scripts they run successfully every week, only to find these scripts don’t actually do any testing – they just create test data. The whole concept of reusable protected test data thankfully provides a different strategy.  White Paper.
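As an illustrative sketch only (the function and field names are hypothetical, not tied to any product mentioned here), deterministic masking is one way to turn production records into protected, reusable test data:

```python
import hashlib

def mask_email(email: str) -> str:
    # Deterministic masking: the same input always maps to the same test value,
    # so relationships between records survive the scramble while the real
    # address is protected.
    digest = hashlib.sha256(email.encode()).hexdigest()[:8]
    return f"user_{digest}@example.test"

mask_email("alice@corp.com") == mask_email("alice@corp.com")  # True: repeatable
```

Because the masking is repeatable, the same protected data set can be rebuilt and validated on every run.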

Living with Test Automation

If there is one thing that kills test automation it is script maintenance. It is essential to minimise this for longer-term success.  It is massively important in Agile methodologies, as there is simply not time for a long-winded approach to maintenance.  A technical approach, perhaps involving code, may be acceptable when testing SOA components if the pieces are small and their functions and output do not change much. But when components are combined, or if testing the User Interface (UI), then resilience is massively important.  A coded approach to automation here will probably fail; it just creates too big a millstone on the rest of the project unless it is very well and expensively resourced.  We have seen this so many times, and it is the main reason why test automation has historically had a bad reputation. If you are developing code changes you have enough of a problem as it is without initiating a parallel test automation coding project with similar challenges.  TestDrive reflects the pinnacle of code-free UI automation. There is no code to change when the application changes.  The way the solution is architected, with data, checking, logic, linking and interfaces abstracted from scripts (process and steps), reduces the impact of change from the start.  The patented features, such as automatic UI annotation, mean that changes to object technical parameters, names or locations are handled automatically.  This is especially important where you don’t have any control of them in your vendor-supplied applications.  The busy-sense, self-healing, drag-and-drop interface, data drive and automatic picture-taking capabilities all add to the minimised maintenance and maximised usability goals.

Reuse is a key goal that drove TestDrive’s architecture, and also the approach to responsive web testing on mobile and other devices.  This has meant that a single script can test any mobile device that the browser can emulate, such as a Samsung phone or tablet, a Nokia phone, an iPad or iPhone, an HTC and so on.  A single solution supporting eCommerce channels.

Lessons in seeking the silver bullet

In order to make test automation successful, there will be different tools deployed in different places by different people; one tool will not do it all. Each will have its own successes and criteria for success, ideally within an integrated strategy for Software Quality Assurance.  This is great, because you don’t have to find a single silver bullet.  There are many, and they can be selected to fit the needs of each case and the people involved in the different phases, all the way from unit testing to UAT.

  1. There is not only one, there may be many.
  2. One person’s or company’s success may not map to your needs.
  3. Get good at testing first so you don’t try and automate a bad approach.
  4. Choose tools that match the need and the skills.
  5. Do not build an on-going maintenance burden.
  6. Don’t do it all at once.
  7. Do something, a small step if you cannot take a big one.
  8. Look at the big picture and all the areas that are involved.
  9. Look for actions that bring multiple benefits.
Categories: Companies

CQRS/MediatR implementation patterns

Jimmy Bogard - Thu, 10/27/2016 - 18:36

Early on in the CQRS/ES days, I saw a lot of questions on modeling problems with event sourcing. Specifically, trying to fit every square modeling problem into the round hole of event sourcing. This isn’t anything against event sourcing, but more that I see teams try to apply a single modeling and usage strategy across the board for their entire application.

Usually, these questions were answered a little derisively – “you shouldn’t use event sourcing if your app is a simple CRUD app”. But that belied the truth – no app I’ve worked with is JUST a DDD app, or JUST a CRUD app, or JUST an event sourcing app. There are pockets of complexity with varying degrees along varying axes. Some areas have query complexity, some have modeling complexity, some have data complexity, some have behavior complexity and so on. We try to choose a single modeling strategy for the entire application, and it doesn’t work. When teams realize this, I typically see people break things out into bounded contexts or microservices:


With this approach, you break your system into individual bounded contexts or microservices, based on the need to choose a single modeling strategy for the entire context/app.

This is completely unnecessary, and counter-productive!

A major aspect of CQRS and MediatR is modeling your application into a series of requests and responses. Commands and queries make up the requests, and results and data are the responses. Just to review, MediatR provides a single interface to send requests to, and routes those requests to in-process handlers. It removes the need for a myriad of service/repository objects for single-purpose request handlers (F# people model these just as functions).

Breaking down our handlers

Usage of MediatR with CQRS is straightforward. You build distinct request classes for every request in your system (these are almost always mapped to user actions), and build a distinct handler for each:
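MediatR itself is a C# library; purely as a hedged sketch of the request/handler shape described here (the names below are illustrative, not the real MediatR API), the same idea can be expressed in Python:

```python
from dataclasses import dataclass

class Mediator:
    """Single interface to send requests to; routes each to its in-process handler."""

    def __init__(self):
        self._handlers = {}  # request type -> handler function

    def register(self, request_type, handler):
        self._handlers[request_type] = handler

    def send(self, request):
        return self._handlers[type(request)](request)

@dataclass
class ApproveOrder:  # one distinct request class per user action
    order_id: int

def handle_approve_order(request: ApproveOrder) -> bool:
    # Each handler is free to choose its own strategy (CRUD, domain model,
    # raw SQL, ...) without affecting any other handler.
    return request.order_id > 0

mediator = Mediator()
mediator.register(ApproveOrder, handle_approve_order)
mediator.send(ApproveOrder(order_id=42))  # -> True
```

The caller only ever talks to the mediator, so each request/handler pair stays isolated.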


Each request and response is distinct, and I generally discourage reuse since my requests route to front-end activities. If the front-end activities are reused (e.g. an approve button on the order details and the orders list), then I can reuse the requests. Otherwise, I don’t reuse.

Since I’ve built isolation between individual requests and responses, I can choose different patterns based on each request:


Each request handler can determine the appropriate strategy based on *that request*, isolated from decisions in other handlers. I avoid abstractions that stretch across layers, like repositories and services, as these tend to lock me in to a single strategy for the entire application.

In a single application, your handlers can execute against:

It’s entirely up to you! From the application’s view, everything is still modeled in terms of requests and responses:


The application simply doesn’t care about the implementation details of a handler – nor the modeling that went into whatever generated the response. It only cares about the shape of the request and the shape (and implications and guarantees of behavior) of the response.

Now obviously there is some understanding of the behavior of the handler – we expect the side effects of the handler based on the direct or indirect outputs to function correctly. But how they got there is immaterial. It’s how we get to a design that truly focuses on behaviors and not implementation details. Our final picture looks a bit more reasonable:


Instead of forcing ourselves to rely on a single pattern across the entire application, we choose the right approach for the context.

Keeping it honest

One last note – it’s easy in this sort of system to devolve into ugly handlers:


Driving all our requests through a single mediator pinch point doesn’t mean we absolve ourselves of the responsibility of thinking about our modeling approach. We shouldn’t just pick transaction script for every handler just because it’s easy. We still need that “Refactor” step in TDD, so it’s important to think about our model before we write our handler and pay close attention to code smells after we write it.

Listen to the code in the handler – if you’ve chosen a bad approach, refactor! You’ve got a test that verifies the behavior from the outermost shell – request in, response out – so you have an implementation-agnostic test providing a safety net for refactoring. If there’s too much going on in the handler, push it down into the domain. If it’s better served with a different model altogether, refactor in that direction. If the query is gnarly and would be better expressed in SQL, rewrite it!

Like any architecture, one built on CQRS and MediatR can be easy to abuse. No architecture prevents bad design. We’ll never escape the need for pull requests and peer reviews and just standard refactoring techniques to improve our designs.

With CQRS and MediatR, the handler isolation supplies the enablement we need to change direction as needed based on each individual context and situation.

Categories: Blogs

Level-Up Your Automation Game with the New DevOps Radio Episode, Featuring Joshua Nixdorf, Technical Director at Electronic Arts

Imagine you spent all day working on video games. Seems like a dream, right? For Joshua Nixdorf, Technical Director at EA Games, working with video games all day is his reality. However, even a company that focuses on fun entertainment can be confronted with challenges to deliver that entertainment. In fact, when EA Games started to grow development operations, that’s exactly what happened.

Josh Nixdorf started at EA Games during a time when automation and continuous integration (CI) were just starting to spread. In the past, EA Games had dozens of CI jobs; today, they have hundreds. Where the company had a handful of QA disk builds, they now have hundreds for each of the world’s regions. In order to keep up with demand and growth, the company had to expand their CI/CD practices. The first question that comes to mind… How? Well, we’ll let Josh tell you that part.

In episode 8 of DevOps Radio, Josh and DevOps Radio host Andre Pino talk about what it’s like working for EA Games and how Josh has grown the company’s automation practices. Josh provides insight into how developing for video games is different from business software and how EA Games has worked to automate testing and more. Finally, Josh lets slip some of the games he’s had a hand in…

Now, plug in your headphones, turn on your gaming console, and get ready to listen to the latest episode of DevOps Radio. The episode is available now on the CloudBees website and on iTunes. Join the conversation about the episode on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post.


Josh Nixdorf Presenting at Jenkins World 2016
Categories: Companies

Holiday Readiness Testing: what to expect when expecting peak traffic

HP LoadRunner and Performance Center Blog - Wed, 10/26/2016 - 22:51


Your busiest and most profitable season is expected in the next few weeks. You expect heavier than usual traffic, and you have performed many adjustments to ensure your site is mobile optimized—what else do you need to accomplish? Keep reading to find out.

Categories: Companies

Holiday Readiness and what your load testing tool should be able to do

HP LoadRunner and Performance Center Blog - Wed, 10/26/2016 - 22:51


The performance testing world is filled with vital information about why great web and app performance is so important. Keep reading to learn how to avoid becoming a poor performance statistic.

Categories: Companies

Cambridge Lean Coffee

Hiccupps - James Thomas - Wed, 10/26/2016 - 22:30
This month's Lean Coffee was hosted by us at Linguamatics. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

How important is exploratory testing?
  • When interviewing tester candidates, many have never heard of it.
  • Is exploratory testing a discrete thing? Is it something that you are always doing?
  • For one participant, exploratory testing is done in-house; test cases/regression testing are outsourced to China.
  • Some people are prohibited from doing it by the company they work for.
  • Surely everybody goes outside the test scripts?
  • Is what goes on in an all-hands "bug bash" exploratory testing? 
  • Exploratory testing is testing that only humans can do.

How do you deal with a flaky legacy automation suite?
  • The suite described was complex in terms of coverage and environment and failures in a given run are hard to diagnose as product or infrastructure or test suite issues
  • "Kill it with fire!"
  • Do you know whether it covers important cases? (It does.)
  • Are you getting value for the effort expended? (Yes, so far, in terms of personal understanding of the product and infrastructure.)
  • Flaky suites are not just bad because they fail, and we naturally want the suites to "be green"
  • ... flaky suites are bad because they destroy confidence in the test infrastructure. They have negative value.

What starting strategies do you have for testing?
  • Isn't "now" always the best time to start?
  • But can you think of any scenarios in which "now" is not the best time to start? (We could.)
  • You have to think of the opportunity cost.
  • How well you know the thing under test already can be a factor.
  • You can start researching before there is a product to test.
  • Do you look back over previous test efforts to review whether testing started at an appropriate time or in an appropriate way? (Occasionally. Usually we just move on to the next business priority.)
  • Shift testing as far left as you can, as a general rule
  • ... but in practice most people haven't got very far left of some software being already made.
  • Getting into design meetings can be highly valuable
  • ... because questions about ideas can be more efficient when they provoke change. (Compared to having to change software.)
  • When you question ideas you may need to provide stronger arguments because you have less (or no) tangible evidence
  • ... because there's no product yet.
  • Challenging ideas can shut thinking down. (So use softer approaches: "what might happen if ..." rather than "That will never work if ...")
  • Start testing by looking for the value proposition.
  • Value to who?
  • Value to the customer, but also other stakeholders
  • ... then look to see what risks there might be to that value, and explore them.

Death to Bug Advocacy
  • Andrew wrote a blog, Death to Bug Advocacy, which generated a lot of heat on Twitter this week.
  • The thrust is that testers should not be in the business of aggressively persuading decision makers to take certain decisions and, for him, that is overstepping the mark.
  • Bug advocacy isn't universally considered to be that, however. (See e.g. the BBST Bug Advocacy course.) 
  • Sometimes people in other roles are passionate too
  • ... and two passionate debaters can help to provide perspectives for decision makers.
  • Product owners (and others on the business side) have a different perspective.
  • We've all seen the reverse of Andrew's criticism: a product owner or other key stakeholder prioritising the issue they've just found. ("I found it, so it must be important.")

Categories: Blogs

We can do better - Wed, 10/26/2016 - 18:27

I’m proud that many people are actively addressing diversity issues. Research shows that diversity leads to better problem solving and often, more creative solutions. Unfortunately the results of history lead us to where we are today, but we can always do better. I’m proud to be part of ThoughtWorks, where we are also trying to do our part to address diversity issues, and our work was recently recognised as a great company for Women in Tech. And yes, I do realise that diversity goes beyond just gender diversity.

As a fairly regular conference speaker this year, I have been disappointed by some of the actions of both conference organisers and speakers that have been, in my opinion, rather unhelpful.

At a conference speakers’ dinner earlier in the year, the topic of diversity came up and someone calculated that only 4 out of almost 60 speakers were women. I was truly disappointed when one of the conference organisers responded with, “That’s just the industry ratio, isn’t it? It’s just too hard to find women speakers.” Of course not all conference organisers have this attitude: The Lead Dev conference ended up with a 50:50 women:men speaker ratio, and Flowcon achieved a women:men ratio above 40% as well. Jez Humble writes about his experiences achieving this goal (recommended reading for conference organisers).

At another conference, I saw a slide tweeted from a talk that looked like the one below. (Note: I’ve found the original and applied my own label to the slide.)

Bad slide of stereotypes

My first thoughts went something like: “Why do all the developers look like men and why do all the testers look like women?” I was glad to see some other tweets mention this, which I’m hoping that the speaker saw.

We all have responsibilities when we speak

I believe that if you give talks at a conference, you have a responsibility to stop reinforcing stereotypes and to start doing something, even if it’s a little thing like removing gendered stereotypes. Be aware of the imagery you use, and avoid words that might make minority groups feel even more like a minority in tech. If you don’t know where to start, consider taking some training on what the key issues are.

What you can do if you’re a speaker

As a speaker you can:

  • Review your slides for stereotypes and see if you can use alternative imagery to get your message across.
  • Find someone who can give you feedback on words you say (I am still trying to train myself out of using the “guys” word when I mean people and everyone).
  • Give your time (mentoring, advice and encouragement) to people who stand out as different so they can act like role models in the future.
  • Give feedback to conferences and other speakers when you see something that’s inappropriate. More likely than not, people are simply unaware of what other messages their audience might see or hear, and a good presenter will care about getting their real message across more effectively.
What to do if you’re a conference organiser

I’ve seen many great practices that conferences use to support diversity.

One thing that I have yet to experience, but would like as a speaker, is a review service where I could send some version of my slides/notes (there is always tweaking) and get feedback on whether the imagery, words, or message I intend to use might make minorities feel even more like a minority.

Categories: Blogs

Dealing With Test Log Data

Sauce Labs - Wed, 10/26/2016 - 18:00

Test logs. What are they good for? What can you do with them? What should you do with them? These aren’t always easy questions to answer, but in this post, we’ll take a look at what’s possible and what’s advisable when it comes to testing log data.

What are Test Logs Good For?

What are test logs good for? Or are they good for anything at all?

Let’s start with an even more basic question: What is (and what isn’t) a testing log? A testing log is not simply test output. Minimal pass/fail output may log the results of testing, but a true testing log should do more than that. At the very least, it should log the basics of the testing process itself, including test files used, specific test steps as they are performed, and any output messages or flags, with timestamps for each of these items. Ideally, it should also log key processes and variables indicating the state of the system before, during, and after the test.
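The shape described above can be sketched in code. This is a hypothetical, minimal example (the names `log_step` and `system_state` are illustrative, not from any particular framework) of a structured, JSON-lines test log that records each test step with a timestamp plus a snapshot of system state:

```python
import json
import platform
import sys
import time

def system_state():
    """Snapshot of system state to record before/during/after the test."""
    return {"python": platform.python_version(), "platform": sys.platform}

def log_step(entries, test_file, step, message, status="info"):
    """Append one timestamped entry covering a single test step."""
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "test_file": test_file,
        "step": step,
        "status": status,
        "message": message,
        "state": system_state(),
    }
    entries.append(entry)
    return entry

entries = []
log_step(entries, "test_login.py", 1, "loading fixtures")
log_step(entries, "test_login.py", 2, "submit credentials", status="pass")

# Each entry serializes to one JSON line, easy to grep and post-process.
for e in entries:
    print(json.dumps(e))
```

Writing one self-describing JSON object per line keeps the log both human-readable and trivially machine-parseable later.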

Logging is Important

How important is this information? There are plenty of circumstances under which you probably won’t need test logging – for example, when a change to the software consistently passes all tests, or if it fails as the result of an easy-to-identify error in the code. Testing logs can make a difference, however, under a variety of circumstances:

  • Identifying problems with the test process itself. Tests aren’t perfect, and you need to be able to monitor the test process for errors and potential problems. This is particularly true with parallel testing, where concurrency is important. (See “Troubleshooting Parallel Tests.”) Even with individual tests, however, testing logs can help to identify problems with test data, basic testing assumptions, or initial test conditions.
  • As an adjunct to standard debugging tools. Sophisticated (or even basic) debugging tools are indispensable when it comes to such things as stepping through processes, tracing execution, and monitoring data values at key points during execution. They may, however, miss such relatively simple factors as the initial state of the system, data values at the beginning of execution, or environmental conditions while the test is running. These are all things which a good testing log can record.
  • Tracking down intermittent or hard-to-trace errors. Nobody likes intermittent bugs. Most developers would rather deal with consistently occurring catastrophic system crashes than with problems that pop up unpredictably yet still need to be addressed. Standard debugging tools may provide little or no help in such cases. A testing log that includes a sufficient level of detail may, however, allow you to identify the conditions which lead to an intermittent error. This information is often the key to tracing such problems down to their roots.
  • Identifying regressions and tracking the history of closely related errors. We’ve all seen it happen—a bug that was fixed several builds previously suddenly turns up again. How can you be sure that it’s the same bug, though, and not a new problem that simply looks like the earlier error? A detailed testing log may allow you to spot key similarities and differences in the system state during and after the test, making it possible to distinguish between an old bug and a new but similar one. Testing log data may also be helpful in identifying a related group of errors, based on how they affect the state of the system.
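The last point — distinguishing an old bug from a new but similar one — comes down to comparing the logged system state across runs. A rough sketch (the data and field names here are invented for illustration, not from any specific tool):

```python
# An old failing run and a new failure with the same error text,
# each carrying the system state captured in its test log.
old_failure = {
    "error": "TimeoutError in checkout",
    "state": {"db_conns": 48, "cache": "cold", "build": "1.4.2"},
}
new_failure = {
    "error": "TimeoutError in checkout",
    "state": {"db_conns": 3, "cache": "warm", "build": "1.7.0"},
}

def state_diff(a, b):
    """Return the keys whose logged values differ between two runs."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

diff = state_diff(old_failure["state"], new_failure["state"])

# Same error text, but the surrounding state differs substantially,
# which suggests a new problem rather than the old regression.
print(sorted(diff))
```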
Strategies For Dealing With Log Data

What’s the best overall strategy for dealing with testing log data?

Ultimately, you need to use the logging tools (and the configurations for those tools) which best suit your test environment and testing software, as well as your organization’s specific testing needs. The best place to start is by taking full advantage of your testing system’s built-in logging features, which, depending on the test system itself, may provide all of the functionality, flexibility, and integration that you need. When this is the case, you can simply configure the test system’s logging features as required. If you find that you need logging capabilities which the test system does not provide, you may want to consider integrating third-party test logging tools or services with your testing system to provide the required features.

Full Integration

You may also want to consider setting up a system for full integration of all of your logs—not just in testing, but along the entire length of your development and operations delivery chain. Full log integration provides a number of advantages. It can allow you to easily compare your test environment with your software’s actual operating conditions, as well as check system values logged during testing against those logged during operation, for example.

The Dashboard Light

If you use a log integration tool that includes an overall logging dashboard, you can generally get a quick overview of log data from a selection of sources, often with drill-down capabilities for focusing on individual incidents, results, or types of data (test logs, combined with specific operations logs, for example). You can use a logging dashboard not just to organize, view, and search logs, but also to identify relationships between log data (by way of charts, graphs, and other means of visually representing information) which might not otherwise be apparent.

Make It Useful

Adding testing logs to an overall system of integrated logging helps to keep them from becoming just another near-useless mess of raw data cluttering up your system. This is a serious consideration, since one of the most frequently expressed objections to test logging is that the process of searching through testing logs for useful information can consume considerable time and resources without producing any useful results.

Therefore, even if you do not integrate your testing logs with other logs, or make use of a logging dashboard, it is important to set up some kind of system for extracting useful data from testing logs quickly, efficiently, and accurately. A variety of scripted log analysis tools (both open-source and proprietary) are available. You can also create custom, in-house log analysis scripts.
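A custom in-house analysis script of the kind mentioned above can be very small. This sketch scans raw test-log lines for failures and counts them per test file; the log format here is invented purely for illustration:

```python
import re
from collections import Counter

raw_log = """\
2016-10-26T18:00:01 test_api.py step=3 FAIL timeout waiting for response
2016-10-26T18:00:04 test_ui.py step=1 PASS
2016-10-26T18:00:09 test_api.py step=5 FAIL connection reset
2016-10-26T18:00:12 test_db.py step=2 PASS
"""

# timestamp, test file, step number, result, trailing message
LINE = re.compile(r"^(\S+) (\S+) step=(\d+) (PASS|FAIL)(.*)$")

failures = Counter()
for line in raw_log.splitlines():
    m = LINE.match(line)
    if m and m.group(4) == "FAIL":
        failures[m.group(2)] += 1

# Most failure-prone test files first.
print(failures.most_common())
```

Even this much — a regular expression and a counter — turns a wall of raw log lines into an answer to a concrete question ("which tests fail most?") in seconds.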

You should not allow testing log data (or any other kind of log data) to sit in storage doing nothing. Most log data has some kind of value. If you analyze it and act on it, you can use it to increase the value and the quality of your software, and of your entire delivery chain.

Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ’90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.

Categories: Companies

Using Git in Software Testing. Definition and Basic Set of Functions

Test And Try - Wed, 10/26/2016 - 17:13
I first encountered a version control system while building my own website. New ideas kept appearing throughout development, and I had to keep each new version in a different folder on my hard disk, which soon led to utter confusion, especially when I came back to the code two or three days later. An enormous amount of time was spent working out which version was the latest (active) one, along with all the related minor but important issues. That is how my search for a solution, and my familiarization with Git, began.
Categories: Blogs

When Scrum and DevOps go Bad

James Betteley's Release Management Blog - Wed, 10/26/2016 - 00:42

We all know a good agile organisation, or at least we’ve all heard about them, where everyone just *gets it*, they’re agile through-and-through, from the top down, bottom up, agile in the middle, and everyone’s a mini Martin Fowler. Yay for them.

We’ve also heard about these DevOps companies, who are leveraging automation in every step of their delivery pipeline. And they’re deploying to production 8,000 times a day with zero downtime and they rebuild their live VMs every 12 seconds. Great work.

Unfortunately the rest of the world sits somewhere between those two extremes (recall Rogers’ Diffusion of Innovations curve, principally the early and late majority). A lot of organisations simply don’t know what Agile and DevOps are, where they came from, what the point is, and most importantly, how to do them.

So here’s what happens:

  • To become agile they “go scrum” and hire a scrum master or ten
  • To be “DevOps” they automate their environments and deployments

Why do they do this? I suspect it’s a number of reasons, but largely it’s because there’s a shit tonne of material out there that supports the view that Scrum is the best agile framework and DevOps means automating stuff.

The results are fairly predictable:

If you “do scrum” instead of understanding agile, you get what’s called Agile Cargo Cult. That basically ends up with people doing all these great scrum practices and ceremonies, but things don’t actually improve, and eventually they start to get worse, so to rectify the situation, teams apply the scrum ceremonies and practices with even greater rigour. Obviously this gets them nowhere, and eventually people within the organisation start to believe “Agile doesn’t work here”, blissfully unaware that they were never actually “agile” in the first place.

Organisations who think DevOps is about automating the Ops tasks just end up “slinging shit quicker”. If you don’t sort out the real problems in your system, you’re basically just making localised optimisations. There’s just no point. If your problem is that your software is hard to run, scale, operate and maintain – don’t try to automate your deployments.

Also, many DevOps initiatives, in my experience, are either driven by Dev, or Ops, but not usually both. And that says it all really.

So, for a lot of organisations who are new to this whole Agile and DevOps thing, there’s clearly an easy path sucking a lot of people in. And that’s a shame, because it results in a lot of frustration. It would be easy to laugh at these organisations, but it’s not their fault. Scrum has become a self-serving framework, seemingly more interested in its own popularity than its effectiveness, and DevOps is anything to anyone.

So, in summary, don’t do scrum, be agile. And don’t confuse DevOps with automating the Ops work.

Categories: Blogs

SmartBear Updates Support for REST API Security Testing

Software Testing Magazine - Tue, 10/25/2016 - 21:25
SmartBear Software has released a major update to its API readiness platform, Ready! API, focusing on the security of APIs. SmartBear’s Ready! API is a unified set of graphical and code-based testing tools that includes Secure Pro for dynamic API security testing, SoapUI NG Pro for functional testing, LoadUI NG Pro for load testing, ServiceV Pro for API service virtualization and TestServer for Continuous Integration environments. In addition, SmartBear also made major productivity improvements to SoapUI NG Pro. Groovy scripting capabilities have been enhanced to include script debugging, a feature request from many customers. With Ready! API 1.9, organizations committed to API enhancement and API quality can now integrate Virts fully with Continuous Integration processes in ServiceV Pro through automatic deploy, start, stop and un-deploy. They can also now measure the impact of load tests on Oracle DB servers by monitoring server parameters in LoadUI NG Pro.
Categories: Communities

Publishing HTML Reports in Pipeline

Most projects need more than just JUnit result reporting. Rather than writing a custom plugin for each type of report, we can use the HTML Publisher Plugin.

Let's Make this Quick

I've found a Ruby project, hermann, I'd like to build using Jenkins Pipeline. I'd also like to have the code coverage results published with each build job. I could write a plugin to publish this data, but I'm in a bit of a hurry and the build already creates an HTML report file using SimpleCov when the unit tests run.

Simple Build

I'm going to use the HTML Publisher Plugin to add the HTML-formatted code coverage report to my builds. Here's a simple pipeline for building the hermann project.

stage 'Build'

node {
  // Checkout
  checkout scm

  // install required bundles
  sh 'bundle install'

  // build and run tests with coverage
  sh 'bundle exec rake build spec'

  // Archive the built artifacts
  archive (includes: 'pkg/*.gem')
}

NOTE: This pipeline expects to be run from a Jenkinsfile in SCM. To copy and paste it directly into a Jenkins Pipeline job, replace the checkout scm step with git ''.

Simple enough, it builds, runs tests and archives the package.

Job Run Without Report Link

Now I just need to add the step to publish the code coverage report. I know that rake spec creates an index.html file in the coverage directory. I've already installed the HTML Publisher Plugin. How do I add the HTML publishing step to the pipeline? The plugin page doesn't say anything about it.

Snippet Generator to the Rescue

Documentation is hard to maintain and easy to miss, even more so in a system like Jenkins with hundreds of plugins, each potentially having one or more Groovy fixtures to add to the Pipeline. The Pipeline Syntax "Snippet Generator" helps users navigate this jungle by providing a way to generate a code snippet for any step using provided inputs.

It offers a dynamically generated list of steps, based on the installed plugins. From that list I select the publishHTML step:

Snippet Generator Menu

Then it shows me a UI similar to the one used in job configuration. I fill in the fields, click "Generate", and it shows me a snippet of Groovy generated from that input.

Snippet Generator Output

HTML Published

I can use that snippet directly or as a template for further customization. In this case, I'll just reformat and copy it in at the end of my pipeline. (I ran into a minor bug in the snippet generated for this plugin step. Typing the error string into my search bar immediately turned up the bug and a workaround.)

  /* ...unchanged... */

  // Archive the built artifacts
  archive (includes: 'pkg/*.gem')

  // publish html
  // snippet generator doesn't include "target:"
  publishHTML (target: [
      allowMissing: false,
      alwaysLinkToLastBuild: false,
      keepAll: true,
      reportDir: 'coverage',
      reportFiles: 'index.html',
      reportName: "RCov Report"
  ])
}

When I run this new pipeline I am rewarded with an RCov Report link on left side, which I can follow to show the HTML report.

Job Run With Report Link

RCov Report

I even added the keepAll setting to let me go back and look at reports on old jobs as more come in. As I said to begin with, this is not as slick as what I could do with a custom plugin, but it is much easier and works with any static HTML.

Blog Categories: Jenkins, Developer Zone
Categories: Companies
