
Markus Gaertner
Software Testing, Craftsmanship, Leadership and beyond

Data-driven tests in JUnit 5.0.0-SNAPSHOT

Sat, 07/02/2016 - 19:58

It’s been a while since I last wrote code. Back in late April, however, I found myself in a Coding Dojo at the Düsseldorf Softwerkskammer meet-up, working on the Mars Rover Kata. I have a story to share from that meeting. When I tried to reproduce the code we ended up with that night, though – and decided to give JUnit5 (and Java 8) a go while doing so – I ended up with a struggle.

Back in the JUnit4 days I used the Parameterized runner quite often for data-driven tests, though I could never remember the signature of the @Parameters method. The Mars Rover Kata also includes some behavior that I wanted to run through a data-driven test – but how do you do that in JUnit5? I couldn’t find good answers on the Internet, so I decided to put my solution up here – probably inviting lots of critique.

Please note that I used JUnit 5.0.0-SNAPSHOT, which is a later version than the alpha but probably not the final one.

Besides its Java 8 capabilities, JUnit5 offers some interesting new things. It now comes with Extension capabilities that let you influence the test lifecycle, ways to resolve parameters for your test methods and test class constructors, and TestFactories for DynamicTests. Whoa, quite a lot of new stuff.

First I tried parameter resolvers. But then I would have needed to keep track of the parameters myself, and the resolver would have been called more than once. So, maybe combining it with an extension would work? No, I couldn’t make that work either. Dynamic tests, then, are the way to go.

So, here is an example of what I ended up with. We have a Direction class with a method called turnLeft(). The idea: if the Rover is headed NORTH and turns left (by 90 degrees), it will then be facing WEST.
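The original listing did not survive here, so the following is only my rough reconstruction of such a Direction type – the enum constants and the turnLeft() name come from the prose above, everything else is a guess:

```java
// Reconstruction of a Direction type for the Mars Rover Kata.
// Turning left by 90 degrees cycles NORTH -> WEST -> SOUTH -> EAST -> NORTH,
// which the enum declaration order encodes directly.
enum Direction {
    NORTH, WEST, SOUTH, EAST;

    Direction turnLeft() {
        Direction[] values = values();
        return values[(ordinal() + 1) % values.length];
    }
}
```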

Some notes:

  • I kept a collection of test data in a field. This is somewhat similar to the old way of annotating a method with @Parameters in JUnit4, except that you can now get rid of the Object[] and use a private test data class per test class instead. That, at least, is the solution I preferred.
  • For the @TestFactory method you have several possible return types. I decided to use Stream. As I haven’t programmed too much in Java 8, I am not sure whether my usage is appropriate, but I found the conversion from the testData collection quite straightforward.
  • For each operation I wrapped the assertion in a small helper method to avoid making the call to dynamicTest more convoluted than necessary. I also decided to generate a descriptive string for each test in a method of its own; I am sure you can come up with better ways to generate the test descriptions. Wrapping the assertion seemed unavoidable, though. What I especially didn’t like: the lambda expression together with the aggregate expression makes the line with dynamicTest less readable than I would like. I think there is more improvement possible.
  • Note that you can have several @TestFactory methods on your test class. So when writing a test for turning right, you can provide another TestFactory and reuse the test data for it. I’ll leave that as an exercise for the inspired reader of my blog.
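Putting the bullet points together, the shape described above would look roughly like this – assuming the Direction enum from the kata; the names TestDatum, displayName and assertTurnLeft are my own invention, not necessarily what we used that night:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.DynamicTest.dynamicTest;

class DirectionTest {

    // Test data kept in a field, using a private data class per test class
    // instead of the old JUnit4 Object[] arrays.
    private static class TestDatum {
        final Direction heading;
        final Direction expectedAfterTurnLeft;

        TestDatum(Direction heading, Direction expectedAfterTurnLeft) {
            this.heading = heading;
            this.expectedAfterTurnLeft = expectedAfterTurnLeft;
        }
    }

    private final Collection<TestDatum> testData = Arrays.asList(
            new TestDatum(Direction.NORTH, Direction.WEST),
            new TestDatum(Direction.WEST, Direction.SOUTH),
            new TestDatum(Direction.SOUTH, Direction.EAST),
            new TestDatum(Direction.EAST, Direction.NORTH));

    // One dynamic test per test datum; the Stream return type is one of
    // several that @TestFactory accepts.
    @TestFactory
    Stream<DynamicTest> turnLeft() {
        return testData.stream()
                .map(datum -> dynamicTest(displayName(datum),
                        () -> assertTurnLeft(datum)));
    }

    // Generates a descriptive name for each dynamic test.
    private String displayName(TestDatum datum) {
        return "turning left from " + datum.heading
                + " should yield " + datum.expectedAfterTurnLeft;
    }

    // Wraps the assertion so the dynamicTest call itself stays readable.
    private void assertTurnLeft(TestDatum datum) {
        assertEquals(datum.expectedAfterTurnLeft, datum.heading.turnLeft());
    }
}
```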

So, this is what I ended up with. I think there is still room for improvement, especially when you compare the result with stuff you might write in tools like Spock.

P.S.: I ran this through Marc Philipp – one of the JUnit5 originators – in an earlier version, and he told me that they will be working on a more elegant solution for data-driven tests, probably for one of the next releases of JUnit5.


Categories: Blogs

State of Testing 2016 – My view

Thu, 06/16/2016 - 20:44

Usually I don’t write many promotions of others’ content on this blog, as I try to keep it personal and focused on my own views. Recently I was contacted about the international 2016 State of Testing report and asked whether I would like to do a write-up on it. I asked whether it would be ok to post a personal view, so here it is.

Demographics – and what do they tell me?

The top regions in the report are Europe (& Russia), the USA, and India. I think these are also the biggest regions when it comes to software testing. The demographics tell me that, going by my impressions, the data is not very biased but well spread.

About a third of the respondents work across four different locations, while another third work in a single location. To me that is a good mix of testers working in one place and testers spread across several sites. I think this might stem from out-sourcing companies as well as from companies working across different sites for various reasons – even though that usually makes the formation of real teams hard, at least in my experience.

Most of the respondents have five or more years of working experience. I think testers new to the field usually don’t turn their attention to this kind of survey right away. That is tragic, as in the long run we should be working on integrating people new to the field more easily.

Many test managers also appear in the survey data. This seems quite unusual to me, as there are certainly far more testers than test managers – I hope. It raises the question of why so few testers are passionate enough about their craft to respond. In some ways this is tragic, but it resembles the state of the industry.

Interestingly, when it comes to time management, most of the testers’ time seems to be spent on documentation (51%) and on dealing with environments (49%). That’s sort of weird, but it also resembles my experience with more and more open source tools, and more and more programmers not really caring how their stuff can be tested or even brought to production. On the other hand, I notice many problems with test-data-centric automation approaches, where handling test data appears to be the biggest worry in many organizations. I usually attribute that to bad automation: an automated test should be easy to deal with and should create its own test data set to operate on – a problem well addressed in the xUnit Test Patterns book, in my opinion, though few people appear to know about that book.
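The point about tests creating their own data is the Fresh Fixture idea from xUnit Test Patterns, and it needs no framework at all. A minimal sketch, with the Order/OrderBook names made up purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain classes, invented for this sketch.
class Order {
    final String customer;
    final int amountCents;

    Order(String customer, int amountCents) {
        this.customer = customer;
        this.amountCents = amountCents;
    }
}

class OrderBook {
    private final List<Order> orders = new ArrayList<>();

    void add(Order order) {
        orders.add(order);
    }

    int totalFor(String customer) {
        return orders.stream()
                .filter(o -> o.customer.equals(customer))
                .mapToInt(o -> o.amountCents)
                .sum();
    }
}

class FreshFixtureSketch {
    public static void main(String[] args) {
        // Fresh fixture: the test builds exactly the data it operates on,
        // instead of depending on rows that happen to exist in a shared
        // environment or database.
        OrderBook book = new OrderBook();
        book.add(new Order("alice", 1500));
        book.add(new Order("alice", 500));
        book.add(new Order("bob", 900));

        if (book.totalFor("alice") != 2000) {
            throw new AssertionError("expected 2000");
        }
    }
}
```

Because the fixture is built inside the test, there is nothing to clean up afterwards and no worry about which data already exists in the environment.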

Skills – and what should you look out for?

That transitions nicely into the skills section. Testers appear to use a couple of approaches, foremost Exploratory Testing at 87%, while 60% mention that they use scripted testing. This matches my experience, since testing is rarely purely exploratory or purely scripted. The majority of testers claiming they use Exploratory Testing is either a signal of the rise of context-driven testing in general, or a bias in the data. I think it’s more of the former.

I liked that test documentation is getting leaner. With 51% of testers’ time spent on documentation, that is certainly a good thing. At the conferences I attend I see more and more sessions on using mindmaps for this, and about a third of the respondents said they already use mindmaps. I think that’s a good signal.

Even though the authors claim that formal training is on the rise when it comes to testers’ skills and education, many testers are still trained on the job and through mentoring, as well as by learning from books and online resources. I think this is a good trend, since I doubt formal training can keep up with transferring skills in the long run. It can inspire testers to dive deeper into certain topics, but on-the-job training and mentoring, together with active reflection on the material you read, is way more powerful.

Unsurprisingly, communication skills are the number one necessary skill for testers (78%). The next skills a tester needs, according to the survey, are functional testing and automation, web technologies, and general testing methodologies. That resembles my own past as a tester and the skills I put effort into. Also unsurprisingly, 86% of the respondents claimed that they have test automation in place.

More Agile – less concerned

It seems that waterfall approaches are on the decline, even in the testing world. In 2015, 42% mentioned they used Waterfall; in 2016 it was only 39%. 82% responded that they use Agile – maybe every once in a while.

Even though the testing community, given its history, is usually concerned about job safety, the rise of Agile methodologies didn’t lead to more testers being concerned. Compared to 2015, when 42% were not concerned about their jobs, in 2016 53% were unconcerned. That might be related to context-driven approaches becoming more widespread.

This is just a summary with a few picks of my own. I encourage you to dive into the State of Testing survey report yourself to get more details.

