
Feed aggregator

Panel for Tech Leads: “Navigating Difficult Situations”

thekua.com@work - Mon, 08/01/2016 - 08:45

I recently moderated a panel in our London ThoughtWorks office aimed at developers leading technical teams, as a follow-up to the Lead Developer conference.

Leading development teams can be a challenging prospect. Balancing the needs of the business with those of your team requires a number of different skills and these situations are often very difficult to prepare for.

The panel session provided a platform for a group of tech leads to come together and share their experiences, insights and advice around managing conflict and overcoming difficult moments within teams.

Our panelists, all at various stages of their own leadership journeys, offered a range of perspectives and viewpoints.

Tech Lead Panelists

The panelists shared their experiences around situations like:

  • Having a tough conversation with a team member or customer;
  • How they have dealt with overtime (weekends, working late);
  • How they resolved a technical disagreement within a team; and
  • Handling a particularly aggressive person, or being aggressively threatened.

The audience also threw in a few questions like:

  • Dealing with office politics;
  • Finding access to key influencers/stakeholders;
  • Where you draw the line with a person on a team; and
  • Dealing with a technical stakeholder who is too involved, because they seem to have too much time.

We also had some great sound bites in relation to the topics being discussed.

To deal with angry people:

Be the adult – Laura Paterson

or just:

Let them vent – Jon Barber

Managing stakeholders is hard, and you sometimes need to take a stance:

It’s easy to say no – Priya Samuel

People in teams need feedback to both strengthen confidence and improve effectiveness. However:

Frank feedback is really hard. Give the person a chance. – Mike Gardiner

Lastly when thinking about people and teams:

Have empathy. Pairing is scary & exhausting – Kornelis (Korny) Sietsma

I’d like to thank Amy Lynch for organising the panel, Laura Jenkins and Adriana Katrandzhieva for helping with the logistics, all the panelists who contributed their experiences and shared their stories (Priya Samuel, Kornelis (Korny) Sietsma, Mike Gardiner, Laura Paterson and Jon Barber) and all the people who turned up for the evening.

Categories: Blogs

New to Network Virtualization emulation in TruClient? – Here’s an example to get you running…

HP LoadRunner and Performance Center Blog - Sun, 07/31/2016 - 13:16


In this article we will demonstrate how to activate Hewlett Packard Enterprise Network Virtualization (NV) emulation on a TruClient script. The TruClient script was recorded in IE, and NV emulation was activated on LoadRunner 12.53 load testing software.

Categories: Companies

Seven Sees

Hiccupps - James Thomas - Sat, 07/30/2016 - 06:23
Here's the column I contributed this month to my company's internal newsletter, Indefinite Articles. (Yeah, you're right, we're a bit geeky and into linguistics. As it happens I wanted to call the thing My Ding-A-Ling but nobody else was having it.) 
When I was asked to write a Seven Things You Didn't Know About ...  article ("any subject would be fine" they said) I didn't know what to write about. As a tester, being in a position of not knowing something is an occupational hazard. In fact, it's pretty much a perpetual state since our work is predominantly about asking questions. And why would we ask questions if we already knew? (Please don't send me answers to this.)
Often, the teams in Linguamatics are asking questions because there's some data we need to obtain. Other times we're asking more open-ended, discovery-generating questions because, say, we're interested in understanding more about why we're doing something, exploring the ramifications of doing something, wondering what might make sense to do next, and you can think of many others I'm sure.
We ask these kinds of questions of others and of ourselves. And plenty of times we will get answers. But I've found that it helps me to remember that the answers - even when delivered in good faith - can be partial, be biased, be flawed, and even be wrong. And, however little I might think it or like it, the same applies to my questions.
We are all subject to any number of biases, susceptible to any number of logical fallacies, influenced by any number of subtle social factors, and are better or worse at expressing the concepts in our heads in ways that the people we're talking to can understand. And so even when you think you know something about something, there's likely to be something you don't know about the something you think you know about that something.
To help with that, here's a list of seven common biases, oversights, logical fallacies and reasoning errors that I've seen and see in action, and have perpetrated myself:
Further reading: Thou Shalt Not Commit Logical Fallacies, Mental Models I Find Repeatedly Useful, Satir Interaction Model.
Image: https://flic.kr/p/9uHWvp
Categories: Blogs

Government Asks: What’s in Your Software?

Sonatype Blog - Fri, 07/29/2016 - 22:56
U.S. Government pays closer attention to software components

Multiple agencies across the U.S. government are paying closer attention to the software they are buying. More specifically, they want to know what open source and third-party components were used to build the software...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Breaking the Barriers to Agile Adoption Webinar – Q&A and Video

The Seapine View - Fri, 07/29/2016 - 15:30

Thanks to everyone who attended the Breaking the Barriers to Agile Adoption webinar. Video and Gordon’s Q&A follow.

Q&A

Q: Is there any documented evidence that Agile accelerates medical device projects?

Great question! I’m not aware of any studies that prove this. Perhaps someone else has a link to studies about Agile and medical device projects…please comment if you do.

Each year, we survey medical device development professionals. In 2015, nearly half of the respondents said they were doing some form of Agile. You can download the survey report to learn more. While this doesn’t provide evidence about Agile accelerating medical device projects, it does support the idea that Agile can be adopted in safety-critical environments.

Q: What do you mean by automating change management?

I simply meant using things like impact analysis and suspect flagging. Basically, because you are managing each item as an identifiable record, you can link them. Linking allows the tool to automate lots of incredibly time-consuming tasks, such as maintaining a traceability matrix. If your tool doesn’t help automate these types of tasks, you won’t be able to keep up with the level of change that you need to be successful with Agile.

Q: Does TestTrack have a checklist function?

No, but you can use custom fields to create a checklist. For example, let’s say you have a Done event that requires an electronic signature. You can create custom checkbox fields and require users to select each checkbox before they can enter their electronic signature.

Q: Can you link bugs to user stories in TestTrack?

Yes, you can link anything to anything in TestTrack. You can also create a defect at the click of a button directly from a user story. This copies any info from the user story and automatically links the story and defect together.

Q: Is the “Linked Items” field new in TestTrack 2016?

Yes it is! Wondering what else you may have missed? Review the release notes.

Q: Will you be providing a step-by-step instruction guide on how to configure TestTrack for this Agile approach?

Please email Gordon Alexander for help with this.

Q: Is there a way to show changes in document view vs. history (like Word Compare)?

Yes, with snapshots. Check out the help to learn more.

Q: Can you “snapshot” test cases?

The snapshot functionality is only for documents. However, a test run is a copy of a test case at a point in time; you could think of it as a snapshot of an individual test case. Also, here’s a tip—use the “regenerate” option to create a test run from an existing one if you ever want to run an old version of a test case.

Categories: Companies

Announcing Selenium 3.0-beta1

Selenium - Fri, 07/29/2016 - 03:20

At SeleniumConf in 2013, we announced that a new major version of Selenium would be released “by Christmas”. Fortunately, we never said which Christmas, as it has taken us a while to make all the changes we wanted to make! We’re excited to announce the release of the first beta — Selenium 3.0.0-beta1.

We’d love you to try it out on your projects and give us feedback on where the rough edges are before we ship 3.0 itself! Please remember that this is a beta release; your reports are incredibly helpful in letting us smooth those edges.

For the last six years we’ve been advising users to switch to the newer WebDriver APIs and to stop using the original RC APIs. With Selenium 3.0, the original implementation of RC has been removed, replaced by one that sits on top of WebDriver. For many users, this change will go completely unnoticed, as they’re no longer using the RC APIs. For those of you who still are, we’ve done our best to make the change as smooth as possible, but we welcome high quality bug reports to help us fix any problems that occur. Maven users will need to add a dependency on the new “leg-rc” package to access the old RC APIs.

There are some other changes that you might need to be aware of:

  • You’ll need to be running Java 8 to use the Java pieces of Selenium. This is the oldest version of Java officially supported by Oracle, so hopefully you’re using it already!
  • Support for Firefox is via Mozilla’s geckodriver.
  • Support for Safari is provided on macOS (Sierra or later) via Apple’s own safaridriver.
  • Support for Edge is provided by MS through their webdriver server.
  • Only IE versions 9 and above are supported. Earlier versions may work, but they are no longer supported because MS no longer supports them.

We’ll be posting more information about Selenium 3.0 to this blog soon, but until then, if you’re interested in learning more, a recent webinar by Simon is a great place to start.


Categories: Open Source

It's Great When You're Negate... Yeah

Hiccupps - James Thomas - Thu, 07/28/2016 - 22:54
I'm testing. I can see a potential problem and I have an investigative approach in mind. (Actually, I generally challenge myself to have more than one.) Before I proceed, I'd like to get some confidence that the direction I'm about to take is plausible. Like this:

I have seen the system under test fail. I look in the logs at about the time of the failure. I see an error message that looks interesting.  I could - I could - regard that error message as significant and pursue a line of investigation that assumes it is implicated in the failure I observed.

Or - or -  I could take a second to grep the logs to see whether the error message is, say, occurring frequently and just happens to have occurred coincident with the problem I'm chasing on this occasion.

And that's what I'll do, I think.

James Lyndsay's excellent paper, A Positive View of Negative Testing, describes one of the aims of negative testing as the "prompt exposure of significant faults". That's what I'm after here. If my assumption is clearly wrong, I want to find out quickly and cheaply.

Checking myself and checking my ideas has saved me much time and grief over the years. Which is not to say I always remember to do it. But I feel great when I do, yeah.
Image: Black Grape (Wikipedia)
Categories: Blogs

10-minute digital experience health check of our new Dynatrace website

Yesterday we completely refreshed and relaunched our website. I’m in Asia this week, so I woke up early to check everything was OK overnight. This blog is a simple overview of what I looked at, step by step, as I drank my morning coffee. I’m not in the development team, I’m just one of the stakeholders […]

The post 10-minute digital experience health check of our new Dynatrace website appeared first on about:performance.

Categories: Companies

The DevOps 2.0 Toolkit

When agile appeared, it solved (some of) the problems we were facing at that time. It changed the idea that months-long iterations were the way to go. We learned that delivering often provides numerous benefits. It taught us to organize teams around all the skills required to deliver iterations, as opposed to horizontal departments organized around technical expertise (developers, testers, managers and so on). It taught us that automated testing and continuous integration are the best way to move fast and deliver often. Test-driven development, pair-programming, daily stand-ups and so on. A lot has changed since the waterfall days.

As a result, agile changed the way we develop software, but it failed to change how we deliver it.

Now we know that what we learned through agile is not enough. The problems we are facing today are not the same as those we were facing back then. Hence, the DevOps movement emerged. It taught us that operations are as important as any other skill and that teams need to be able not only to develop but also to deploy software. And by deploy, I mean reliably deploy often, at scale and without downtime. In today's fast-paced industry that operates at scale, operations require development and development requires operations. DevOps is, in a way, the continuation of agile principles that, this time, include operations into the mix.

What is DevOps? It is a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly changing, resilient systems at scale. It is as much a cultural as a technological change in the way we deliver software, from requirements all the way to production.

Let's explore technological changes introduced by DevOps that, later on, evolved into DevOps 2.0.

By adding operations into existing (agile) practices and teams, DevOps united previously excluded parts of organizations and taught us that most (if not all) of what we do after committing code to a repository can be automated. However, it failed to introduce a real technological change. With it we got, more or less, the same as we had before, but automated. Software architecture stayed the same, but we were able to deliver automatically. Tools remained the same, but were used to their fullest. Processes stayed the same, but with less human involvement.

DevOps 2.0 is a reset. It tries to redefine (almost) everything we do and to deliver the benefits that modern tools and processes make possible. It introduces changes to processes, tools and architecture. It enables continuous deployment at scale and self-healing systems.

In this blog series, I'll focus on tools which, consequently, influence processes and architecture. Or is it the other way around? It's hard to say. Most likely each has an equal impact on the others. Nevertheless, today's focus is tools. Stay tuned.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, the design of self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx and so on. We'll go through many practices and even more tools.

The book is available from Amazon (Amazon.com and other worldwide sites) and LeanPub.

Blog Categories: Developer Zone
Categories: Companies

Environment-Agnostic Testing and Test Data Management for End-to-End Test Stability

Sauce Labs - Thu, 07/28/2016 - 15:30

In the Design Patterns for Scalable Test Automation webinar we discussed the importance of adopting proper patterns for scaling and maintaining end-to-end (E-E) tests. A couple of additional important aspects of E-E test stability are:

  • Environment-agnostic tests – Tests should be independent, self-contained units, and should run against any environment without code change, and with no dependency on anything else (apart from the runner)
  • Test data – How to prevent tests failing because expected data wasn’t available in the system

In the context of a web app (not legacy, thick-client applications), let’s take a look at how to deal with these challenges.

Environment-agnostic Tests

E-E tests need environment-specific configuration information such as the URL, role, user name, password, etc. Needless to say, hardcoding these values in the test is not a good practice; it makes updates and maintenance difficult. A better solution is to tokenize: keep the key/value pairs separate from the code and use them as part of the test flow. Different technologies offer different tactics to handle this need.

For example, in the case of C# (and .NET), app.config is a good choice for carrying all the configuration tokens and values. The challenge, however, shows up when you want to update the app.config seamlessly before execution; the URL for DEV, for example, is different from the one for TEST. How do you find and replace the values in the config? There are a couple of ways to handle the situation:

  1. Create one app.config per environment (dev.app.config, test.app.config, ...)
  2. Maintain one repository and one app.config, and update the app.config just before test execution

 

Both approaches work in practice. However, I prefer the second approach, because it eliminates multiple sources of truth (app.config). A single source of truth is always better.

Tooling

We could roll our own quick utility to find and replace values in app.config. However, I want to introduce a utility that can help with this: xmlpreprocess.exe. It's a Windows-only tool, but it is self-contained, easy to use, and its CLI is easy to integrate with CI/CD systems.

Usage is simple:

  1. Create the app.config
  2. Create an XML file to supply values for each environment
  3. The xmlpreprocess CLI will take the values from your XML input and update the app.config

 

Explore more documentation and examples here.

In the case of WebdriverIO or other JS frameworks, we could keep them in a simple .json file and import them for use.
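For instance, here's a minimal sketch (not from the sample repo; the file names and keys are hypothetical) of picking up a per-environment JSON file at runtime:

// Pick a per-environment JSON file at runtime; assumes hypothetical files
// such as config/test.json containing { "baseUrl": "https://test.example.com" }
const env = process.env.TEST_ENV || 'local';      // environment name from CI or the shell
const settings = require(`./config/${env}.json`); // key/value pairs for that environment

console.log(`Running tests against ${settings.baseUrl}`);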

For example, in my sample repo, I’ve followed a composition model, which looks like this:

  1. master.conf.js – Carries all common config across environments
  2. local/wdio.conf.js – Carries local test execution configurations, which will be merged with the master config before execution
  3. saucelabs/wdio.conf.sauce.js – Carries Sauce Labs-specific configurations which will be merged with the master config before execution

 

Similarly, we could create separate configs per environment inside the Sauce Labs folder. I've yet to find an xmlpreprocess.exe equivalent for the JS world.
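To make the composition model concrete, here is a rough sketch of what a Sauce Labs config could look like, assuming a master.conf.js that exports config and the deepmerge package (the override values shown are hypothetical):

// saucelabs/wdio.conf.sauce.js - a sketch only, not the exact repo contents
const merge = require('deepmerge');
const master = require('../master.conf.js');

exports.config = merge(master.config, {
  // Sauce Labs-specific overrides; credentials come from the environment
  user: process.env.SAUCE_USERNAME,
  key: process.env.SAUCE_ACCESS_KEY,
  baseUrl: 'https://test.example.com' // hypothetical per-environment URL
});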


These are just a few tactics I've used, and I'm sure there are many other ways to achieve the goal. The advantages these approaches offer are a clean separation of concerns between test config and code, easy maintenance, and the ability to update the config dynamically as needed. So, give it a try.

Test Data Management

Here again, I'm talking about web application E-E testing, not thick-client applications. There are some heavyweight tools that offer professional test data management: restoring data from production, scrubbing and masking sensitive data, setting up data on the target environment for non-prod testing, etc. However, there are a few ways to tackle the problem within the test automation code itself. They include:

CRUD flow – Try to combine scenarios into meaningful end-user behaviors. For example, let's assume you are testing a WordPress blogging application (create a blog post, view the blog post, verify visitors, view by geography, delete the post, etc.). If we logically group the end-user actions for a persona (i.e., author) into a Create, Read, Update and Delete flow, the data needed for each step (e.g., Read) is created by the step before it (Create). We end up testing a larger flow while the necessary data is generated by the application as part of the process. We don't run into stale data sitting in the system, or the data management issue where a code refactoring expected a new field in the dataset but we never got a chance to update the test data generation script. In addition, if each of the actions (Create, Read, Update and Delete) were an independent scenario, some steps would be repeated (launching the browser, navigating to the website, logging in, navigating to the posts page, etc.). By forming a CRUD flow, repeated steps are optimized and, as a result, tests complete faster.
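A rough sketch of such a flow in a mocha-style suite (the blog helper below is hypothetical) could look like this:

// Group one persona's actions into a single CRUD flow; each step feeds the next.
// `blog` is a hypothetical helper wrapping the application's UI or API.
const assert = require('assert');

describe('author blog post lifecycle', () => {
  let postId;

  it('creates a post', async () => {
    postId = await blog.createPost('hello'); // Create seeds the data
  });

  it('reads the post back', async () => {
    assert.equal((await blog.getPost(postId)).title, 'hello'); // Read consumes Create's data
  });

  it('updates the post', async () => {
    await blog.updatePost(postId, { title: 'hello again' });
  });

  it('deletes the post', async () => {
    await blog.deletePost(postId); // leaves the system clean for the next run
  });
});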

Generate and clean up the test data as part of the test – Another approach is to call your backend REST endpoints to generate the necessary data as part of test setup, similar to how we approach unit testing with setup and teardown. Depending on the language of choice, there are libraries that can help in making calls to the backend and setting up the data. At the end, clean up the data as necessary. Yet another option is to use tools like JMeter to input the necessary data before the E-E suite executes.
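For instance, here is a minimal sketch (assuming a hypothetical /api/posts endpoint and the axios HTTP client) of a suite seeding and removing its own data:

// Seed the data the tests expect during setup, and remove it during teardown.
const axios = require('axios');

let post; // the record the E-E tests rely on

before(async () => {
  const res = await axios.post('https://test.example.com/api/posts', {
    title: 'seed post',
    body: 'created by test setup'
  });
  post = res.data; // keep the created record so we can clean it up later
});

after(async () => {
  await axios.delete(`https://test.example.com/api/posts/${post.id}`);
});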

Service virtualization – This is another approach we leverage, typically when we need to interact with third-party services and every hit costs something, or when a third party simply can't stand up matching environments for all of our non-prod environments. We use tools like wiremock.org or mountebank, or some commercial tools, to create the virtual services. These work much like record and replay if the dependent service is available at least once, or we can handcraft the request/responses. Once these stubs are created, we run the virtual services in our data center and configure the UI to go through them. (There is an interesting walk-through here.) This offers some stability, but we need to be cognizant about keeping the stubs up to date; otherwise we might run into issues.
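As an illustration of handcrafting a stub, here is a sketch that registers a canned third-party response through WireMock's admin API (it assumes a WireMock instance listening on localhost:8080 and the axios client):

// Register a canned response for GET /api/rates on a running WireMock instance.
const axios = require('axios');

axios.post('http://localhost:8080/__admin/mappings', {
  request: { method: 'GET', url: '/api/rates' }, // what the app under test will call
  response: {
    status: 200,
    jsonBody: { currency: 'USD', rate: 1.0 },    // the handcrafted third-party reply
    headers: { 'Content-Type': 'application/json' }
  }
}).then(() => console.log('stub registered'));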

While the commercial TDM tools might offer much more sophisticated features, it’s worth trying these approaches as a first step.

Sahas Subramanian (@sahaswaranamam) is a passionate engineer with experience spanning across DevOps, quality engineering, web development and consulting. He is currently working as a Continuous Delivery and Quality Architect @CDK Global, and shares his thoughts on tech via https://cdinsight.wordpress.com

Categories: Companies

Pokemon Go Performance Issues – Gotta Catch Them All

Pokemon Go has stormed onto the scene this past week, and the buzz about it is everywhere! However, being so successful in such a brief period of time can have a downside. Reports of players being unable to access services started appearing in relatively short order. Amazon’s CTO jumped into the fray offering their cloud […]

The post Pokemon Go Performance Issues – Gotta Catch Them All appeared first on about:performance.

Categories: Companies

Get enterprise-ready performance testing with StormRunner Load 2.0

HP LoadRunner and Performance Center Blog - Wed, 07/27/2016 - 21:08

 


StormRunner Load allows you to easily test to see if your app can handle the load from potential users. Keep reading to learn about the enhancements made with the latest version.

Categories: Companies

Protractor: An Angular JS Testing Framework

Testing TV - Wed, 07/27/2016 - 15:00
Protractor, the testing framework for Angular built upon WebDriverJS, provides additional WebDriver-like functionality for testing Angular-based websites. But since it is written in JavaScript, it has been limited to users of WebDriverJS. In this talk I will outline my efforts, as well as the efforts of others, to bring Protractor-like functionality to Python, […]
Categories: Blogs

A Glass Half Fool

Hiccupps - James Thomas - Wed, 07/27/2016 - 07:48
While there's much to dislike about Twitter, one of the things I do enjoy is the cheap and frequent opportunities it provides for happy happenstance.
@noahsussman Only computers?

It's easy to put people in incongruous situations. The art is in not doing it accidentally. – James Thomas (@qahiccupps), July 27, 2016

Without seeing Noah Sussman's tweet, I wouldn't have had my own thought, a useful thought for me, a handy reminder to myself of what I'm trying to do in my interactions with others, captured in a way I had never considered it before.
Image: https://flic.kr/p/a341bn 
Categories: Blogs

3 solutions, multiple apps, different users, 1 goal: Unleashing the power of performance engineering

HP LoadRunner and Performance Center Blog - Tue, 07/26/2016 - 22:23


StormRunner is a cloud performance testing solution that enhances our performance testing tool set. This blog discusses the three HPE performance engineering solutions and when to use them.

Categories: Companies

Now Streaming on DevOps Radio: Jacob Tomaw Talks About How Orbitz Transformed Software Delivery

Blog co-authored by Sarah Grucza, PAN Communications

DevOps Radio - Interview with Jacob Tomaw, Orbitz

Have you ever booked a trip through the Orbitz website? Orbitz Worldwide, now a part of the Expedia family, is a leading global online travel company that sells tens of billions of dollars in travel annually. The Orbitz and Expedia brands use software to transform the way consumers around the world plan and purchase travel. The practice of booking travel directly online has disrupted the travel industry, and that innovation attracted Jacob Tomaw, principal engineer, to the company in 2006.

Jacob was in search of a company where technology was the business, and he found that in Orbitz. When Jacob joined the Orbitz team, he not only knew how important software was to Orbitz’s business, but also quickly saw ways to improve its software delivery practices. Through a series of project and group transformations, Jacob began to implement agile, continuous delivery (CD) and DevOps practices throughout Orbitz. Since he joined the company, the software delivery teams have achieved impressive results, including reducing release cycles by more than 75 percent, learning to value a team-oriented culture and enhancing the user experience.

DevOps Radio host Andre Pino wanted to learn more about Jacob and find out what it was like navigating through this transformation, so they sat down to talk. You can listen in on Jacob and Andre’s conversation in the latest episode of DevOps Radio.

In this latest DevOps Radio episode, Jacob covers how he got his start in software delivery and his experiences at Orbitz. He then talks through the transformation the software delivery teams at Orbitz went through. You’ll also get a look into the mind of a technology expert; Jacob explores thoughts on the future and on open source software.

Plug in your headphones and tune into the latest episode of DevOps Radio. Available on the CloudBees website and on iTunes. Join the conversation about the episode on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post!

Listen to the podcast. If you still want to learn more about the Orbitz transformation, read the case study or watch the video (below), featuring Jacob and his team.

Categories: Companies

CC to Everyone

James Bach's Blog - Tue, 07/26/2016 - 19:23
I sent this to someone who’s angry with me due to some professional matter we debated. A colleague thought it would be worth showing you, too. So, for whatever it’s worth:

I will say this. I don’t want anyone to feel bad about me, or about my behavior, or about themselves. I can live with that, but I don’t want it.

So, if there is something simple I can do to help people feel better, and it does not require me to tell a lie, then I am willing to do so.

I want people to excel at their craft and be happy. That’s actually what is motivating me, underneath all my arguing.

Categories: Blogs

Don’t just let Node.js take the blame

No matter how well-built your applications are, countless issues can cause performance problems, putting the platforms they are running on under scrutiny. If you’ve moved to Node.js to power your applications, you may be at risk of these issues calling your choice into question. How do you identify vulnerabilities and mitigate risk to take the […]

The post Don’t just let Node.js take the blame appeared first on about:performance.

Categories: Companies

JUnit Testing: Getting Started and Getting the Most out of It

Sauce Labs - Tue, 07/26/2016 - 16:30

If you’re a Java developer, you probably know and love JUnit. It’s the go-to tool of choice for unit testing (and, as we will see below, other types of testing as well) for Java apps.

In fact, JUnit is so popular that it’s the most commonly included external library on Java projects on GitHub, according to a 2013 analysis. No other Java testing framework comes close in popularity to JUnit.

But while JUnit is widely used, are all of the projects that deploy it getting the most out of it? Probably not. Here’s a look at what you should be doing to use JUnit to maximal effect.

JUnit Basics

First, though, let’s go over the basics of JUnit, just in case you haven’t used it before.

Installation

JUnit supports any platform on which Java runs, and it’s pretty simple to install. Simply grab the junit.jar and hamcrest-core.jar files from GitHub and place them in your test class path.

Next, add a dependency on junit:junit in the test scope, like the following:

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.12</version>
  <scope>test</scope>
</dependency>

Basic Usage

With JUnit installed, you can begin writing tests. This process has three main steps.

First, create a class, which should look something like this:

package junitfaq;
          
import org.junit.*;
import static org.junit.Assert.*;
     
import java.util.*;
     
public class SimpleTest {

Second, write a test method, such as:

@Test
public void testEmptyCollection() {
    Collection collection = new ArrayList();
    assertTrue(collection.isEmpty());
}

… and third, run the test! You can do that from the console with:

java org.junit.runner.JUnitCore junitfaq.SimpleTest

There’s lots more you can do, of course. For all the nitty-gritty details of writing JUnit tests, check out the API documentation.

Getting the Most out of JUnit

Now you know the basics of JUnit. But if you want to run it in production, there are some pointers to keep in mind in order to maximize testing performance and flexibility. Here are the two big ones:

  • Use parallel testing, which speeds up your testing enormously. Unfortunately, JUnit doesn’t have a parallel testing option built-in. However, there’s a Sauce Labs article dedicated to JUnit parallel testing, which explains how to do it using the Sauce OnDemand plugin.
  • Despite the tool’s name, JUnit’s functionality is not strictly limited to unit testing. You can also do integration and acceptance tests using JUnit, as explained here.

If you use Eclipse for your Java development, you may also want to check out Denis Golovin’s tips for making JUnit tests run faster in Eclipse. Most of his ideas involve tweaks to the Eclipse environment rather than JUnit-specific changes, but anything that makes Eclipse faster is a win in my book.

And of course, don’t forget Sauce Labs’ guide to testing best practices. They’re also not JUnit-specific, but they apply to JUnit testing, and they’re good to know whether you use JUnit or not.

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, Agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.

Categories: Companies

Jenkins World Speaker Highlight: Using Jenkins for Disparate Feedback on GitHub

This is a guest blog, authored by Ben Patterson, engineering manager, edX, and a speaker at Jenkins World

Picking a pear from a basket is straightforward when you can hold it in your hand, feel its weight, perhaps give a gentle squeeze, observe its color and look more closely at any bruises. If the only information we had was a photograph from one angle, we’d have to do some educated guessing.

As developers, we don’t get a photograph; we get a green checkmark or a red x. We use that to decide whether or not we need to switch gears and go back to a pull request we submitted recently. At edX, we take advantage of some Jenkins features that could give us more granularity on GitHub pull requests, and make that decision less of a guessing game.

Multiple contexts reporting back when they’re available

Pull requests on our platform are evaluated from several angles: static code analysis (including linting and security audits), JavaScript unit tests, Python unit tests, acceptance tests and accessibility tests. Using an elixir of plugins, including the GitHub Pull Request Builder Plugin, we put more direct feedback into the hands of the contributor so s/he can quickly decide how much digging is going to be needed.

For example, if I made adjustments to my branch and know more requirements are coming, then I may not be as worried about passing the linter; however, if my unit tests have failed, I likely have a problem I need to address regardless of when the new requirements arrive. Timing is important as well. Splitting out the contexts means we can run tests in parallel and report results faster.

Developers can re-run specific contexts

Occasionally the feedback mechanism fails. It is oftentimes a flaky condition in a test or in test setup. (Solving flakiness is a different discussion I’m sidestepping. Accept the fact that the system fails for purposes of this blog entry.) Engineers are armed with the power of re-running specific contexts, also available through the PR plugin. A developer can say “jenkins run bokchoy” to re-run the acceptance tests, for example. A developer can also re-run everything with “jenkins run all”. These phrases are set through the GitHub Pull Request Builder configuration.

More granular data is easier to find for our Tools team

Splitting the contexts has also given us important data points for our Tools team to help in highlighting things like flaky tests, time to feedback and other metrics that help the org prioritize what’s important. We use this with a log aggregator (in our case, Splunk) to produce valuable reports such as this one.

I could go on! The short answer here is we have an intuitive way of divvying up our tests, not only for optimizing the overall amount of time it takes to get build results, but also to make the experience more user-friendly to developers.

I’ll be presenting more of this concept and expanding on the edX configuration details at Jenkins World in September.

Ben Patterson 
Engineering Manager 
 edX

This is a guest post written by Jenkins World 2016 speaker Ben Patterson. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies
