
What JVM versions are running Jenkins? 2016 Update!

As with last year’s article on the same subject, yet another recent discussion about finally requiring Java 8 to run future versions of Jenkins pushed me to gather some more factual data around it. What follows contains some opinions or statements which may not be seen as purely factual or neutral. Note that this by no means represents the general position of the Jenkins governance board. This is solely my opinion as a contributor, based on the data I gathered and what I sense from the feedback of the community at large. Java 8 is now the most used version, and growing. If we look...
Categories: Open Source

Tuning Jenkins GC For Responsiveness and Stability with Large Instances

This is a cross post by Sam Van Oort, Software Engineer at CloudBees and contributor to the Jenkins project. Today I’m going to show you how easy it is to tune Jenkins Java settings to make your masters more responsive and stable, especially with large heap sizes. The magic settings:

Basics: -server -XX:+AlwaysPreTouch

GC logging: -Xloggc:$JENKINS_HOME/gc-%t.log -XX:NumberOfGCLogFiles=5 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCCause -XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy

G1 GC settings: -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1

Heap settings: set your minimum heap size (-Xms) to at least 1/2 of your maximum size (-Xmx).

Now, let’s look at where those came from! We’re going to focus on garbage collection (GC)...
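For reference, here is a minimal sketch of how these flags might be wired up on a Debian-style package install, where the service reads its JVM options from /etc/default/jenkins; the file location, variable name, and heap sizes are illustrative assumptions, not prescriptions from the article:

# /etc/default/jenkins -- illustrative only; location and variable vary by platform
JAVA_ARGS="-server -XX:+AlwaysPreTouch \
  -Xms4g -Xmx8g \
  -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled \
  -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 \
  -Xloggc:/var/lib/jenkins/gc-%t.log -XX:+UseGCLogFileRotation \
  -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20m -XX:+PrintGCDetails"

Note that -Xms here is at least half of -Xmx, per the heap-sizing advice above.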
Categories: Open Source

Security updates addressing zero day vulnerability

A zero-day vulnerability in Jenkins was published on Friday, November 11. Last week we provided an immediate mitigation and today we are releasing updates to Jenkins which fix the vulnerability. We strongly recommend you update Jenkins to 2.32 (main line) or 2.19.3 (LTS) as soon as possible. Today’s security advisory contains more information on the exploit, affected versions, and fixed versions, but in short: An unauthenticated remote code execution vulnerability allowed attackers to transfer a serialized Java object to the Jenkins CLI, making Jenkins connect to an attacker-controlled LDAP server, which in turn can send a serialized payload leading to code execution, bypassing existing protection mechanisms. Moving forward, the Jenkins security team is...
Categories: Open Source

Upcoming November Jenkins Events

November is packed full of meetups and events. If you are in any of the areas below please stop by to say "Hi" and talk Jenkins over beer. North America November 15 | SF JAM: Let’s Talk CI/CD and DevOps with ClusterHQ and Jenkins November 15 | DC JAM: Jenkins and Fannie Mae November 30 | Albuquerque JAM: Learn About Blue Ocean November 30 | Guadalajara JAM: Jenkins Install and Setup Europe November 10 | Amsterdam JAM: Jenkins and Docker - Multiple Uses for Containers and Jenkins November 10 | Milano JAM: Meet and Greet Australia November 15 | Melbourne JAM: Blue Ocean - A New User Experience Asia November 17 | Singapore...
Categories: Open Source

Addressing recently disclosed vulnerabilities in the Jenkins CLI

The Jenkins security team has been made aware of a new attack vector for a remote code execution vulnerability in the Jenkins CLI, according to this advisory by Daniel Beck: We have received a report of a possible unauthenticated remote code execution vulnerability in Jenkins (all versions). We strongly advise anyone running a Jenkins instance on a public network to disable the CLI for now. As this uses the same attack vector as SECURITY-218, you can reuse the script and instructions published in this repository: https://github.com/jenkinsci-cert/SECURITY-218 We have since been able to confirm the vulnerability and strongly recommend that everyone follow the instructions in the linked repository. As Daniel mentions in the security advisory, the advised mitigation strategy...
Categories: Open Source

Monthly JAM Recap - October 2016

October has proven to be a busy month within the Jenkins Area Meetup groups. Below is a recap of topics discussed at various JAMs in the month of October. Dallas Fort Worth, Texas (DFW) JAM James Dumay took time out of his vacation to present Blue Ocean, a project that rethinks the user experience of Jenkins, modeling and presenting the process of software delivery by surfacing information that is important to development teams with as few clicks as possible, while still staying true to the extensibility that Jenkins has always had as a core value. See recording HERE. San Francisco, CA JAM Andrey Falko from Salesforce shared how he and his...
Categories: Open Source

SonarQube 6.x series: Focused and Efficient

Sonar - Thu, 11/03/2016 - 15:09

At the beginning of the summer, we announced the long-awaited new “Long Term Support” version, SonarQube 5.6. It comes packed with great features to highlight and help developers manage the leak, and to ensure the security and scalability of large instances.

Now we’re concentrating on the main themes for the 6.x series, and based on the discussions we have had during our City Tour 2016, we’re sure that you’ll be as excited by these new features as you were with the ones in 5.6 LTS.

Better leak management


Support of file move and renaming

SonarQube 5.6 LTS provides all the built-in features you need to monitor and fix the leak on your source code: a project home page that highlights activity on code that was added or modified recently, and a quality gate that turns red whenever bugs or vulnerabilities make their way into new code.

Unfortunately, SonarQube 5.6 doesn’t understand moving or renaming files. That means that if an old file is moved, all its existing issues are closed (including the ones marked False Positive or Won’t Fix), and new ones are (re)opened on the file at its new location. An ugly side effect is that old issues end up in the leak period even though the file wasn’t edited. The end result is noise for development teams who refactor frequently.

This limitation is fixed in SonarQube 6.0, and development teams at SonarSource have been enjoying it for a couple of months already.

Better understanding of bugs and vulnerabilities

Over the past 2 years, SonarSource’s analyzers have reached maturity levels that allow them to detect not only “simple” maintenance issues, but also trickier issues that can be found only by exploring the code in depth, using “symbolic execution” to explore multiple execution paths through the code. That’s why in the 5.x series, Bugs and Vulnerabilities debuted as part of the new SonarQube Quality Model. As you can imagine, it can be very complex to detect a bug when lots of different execution paths have to be explored, and correspondingly hard for a developer to understand why SonarQube is reporting this or that bug without more help. A glance at “SonarAnalyzer for Java: Tricky Bugs are Running Scared” shows that we must print arrows and explanations on the screenshots to help users understand how we discovered a bug.

The next LTS of SonarQube will provide this information out of the box in the web application. Not only will developers see where each bug is, but they’ll be able to display the execution paths (with explanations) that lead to it. This will be a nice improvement to help you fix the leak more easily!

Project Activity Stream

You’re already applying the right process to fix the leak, but sometimes it is hard to know exactly what causes the tiny drops that end up being the leak. The next LTS will keep track of the low-level activities in your project to help you find the source of your leak. For instance, are you facing unexpected new issues in the leak period? You will be able to see that they are due to the activation of a new rule in your quality profile. You want to find which exact commit(s) were not sufficiently tested and caused the quality gate to turn red because of insufficient coverage? You will see commit hashes to more easily link the problem with what happened in the source code repository.

Branching as a first class citizen

While SonarQube provides a feature to handle short-lived (feature) branches through its pull request analysis plugin, its support for long-lived (maintenance) branches is currently minimal at best, even though we all know that maintenance is a huge part of software development. The sonar.branch analysis parameter allows you to analyze a branch alongside the trunk version of the code, but under the hood SonarQube treats the branch as a separate, completely unrelated project: configuration isn’t shared, metrics are artificially doubled (for instance the number of lines of code), issues are duplicated in each copy of the code with no link between them, and it’s impossible to know at what point in time a maintenance branch diverged from the main one. In the end, you manage the branch as a totally different project, even though it is really the same application.
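To make the limitation concrete, this is roughly how a maintenance branch gets analyzed today (the project key and branch name here are hypothetical):

sonar-scanner -Dsonar.projectKey=my-app -Dsonar.branch=release-1.x

Under the hood this effectively registers a second project keyed my-app:release-1.x, entirely disconnected from my-app – which is exactly the duplication problem described above.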

The next LTS will address all those issues, making it simple to create maintenance branches on existing projects to track activity on the branches and make sure that even in branches, there’s no leak on the new code.

See what’s important to you!


User-needs oriented spaces

In the early days, SonarQube offered the possibility to inject and display any kind of information, mostly thanks to customizable dashboards and widgets. This led to widespread adoption, but at the cost of SonarQube being seen as a multi-purpose aggregation and reporting tool: one plugin would add information from a bug tracking system, another would add documentation information, and so on. The consequence was that the global and project dashboards became a crazy quilt of both useless and useful information, everything mixed in together in a big mess.

In the 5.x series, project dashboards were replaced by hardcoded pages designed to fit the use cases that SonarQube is meant for: seeing the overall quality of a project on its home page, quickly identifying whether the leak is fixed and the reasons why it might not be, and digging into the details to know more about what’s going wrong. Following the same logic, the next LTS of SonarQube will get rid of global dashboards and widgets to provide pages designed to answer the needs of developers, technical leaders, project managers and executives – all this out of the box, without having to wonder what to configure.

Powerful project exploration with tags

When focusing on a given project, SonarQube offers everything you need to both get the big picture and dig into the details. When it comes to exploring the whole set of projects available on a SonarQube instance, however, the only entry point is the ageing “Measures” page. This page currently goes into too much detail (allowing you to query for files, for instance), with difficult-to-use filtering criteria.

The next LTS will replace this page with a brand-new “Projects” page to query projects using advanced filtering similar to what’s on the Issues page. Ultimately, it will support tags on projects. It should help answer questions like: What’s the distribution of “strategic” projects regarding security and reliability ratings? How do “offshore” projects perform in terms of maintainability?

Always up-to-date portfolios

The Governance product allows you to manage application portfolios, usually by mapping the organisational structure of a company. The executive-oriented high level indicators produced by Governance are currently updated once in a while, when a refresh is triggered by some external system (usually a CI job), independent of project analyses. The consequence is that, depending on the frequency of this externally-triggered refresh task, those high-level indicators are imprecisely synchronized with the current status of the relevant projects.

The version of Governance compatible with the next LTS will get rid of the need to trigger this refresh, and update portfolio indicators as soon as one of the underlying projects has been updated. This way, there is no need to set up an external process to trigger portfolio calculation, and no wondering if what you are seeing in SonarQube is up to date or not.

Excellent support of huge instances


Horizontal scalability

One of the targets of the 5.x series was making sure SonarQube would scale vertically to house more projects on a single instance if given more space, more CPU, and more RAM. This was achieved thanks to the architectural changes which led to removing the DB connection from the Scanner side, and to adding Elasticsearch in front of the database. But vertical scalability necessarily has limits – namely those of the underlying hardware.

The next LTS will allow you to deploy SonarQube as a cluster of SonarQube nodes. You’ll be able to configure each node for one or more components of SonarQube (web server, compute engine and Elasticsearch node), based on your load. The first instance to benefit from this capability will be SonarQube.com, the SonarQube-based service operated by SonarSource.

Organizations

When talking about large instances, one topic that often comes up is how to efficiently and correctly handle permissions for large numbers of users and projects. Let’s take the example of an IT department serving several independent business units: the business units might not share the same quality profiles (because they’re working with different technologies), and each one probably wants to define its own user groups, or make specific configurations to suit its needs. There’s currently no good way to manage this scenario, but in the next LTS, organizations will provide a way to define umbrellas that isolate sets of users and projects to achieve these goals. As with the ability to set up a cluster, SonarQube.com will be the first instance to benefit from this, so that users can group their projects together and customize settings or quality profiles for them.

Webhooks for DevOps

Not related only to big instances, but still in the hands of DevOps teams who operate complex ALM setups: webhooks will increase your ability to integrate SonarQube with existing infrastructure. For instance, freshly built binaries shouldn’t be deployed to production if they don’t pass the quality gate, right? With webhooks, you’ll be able to have SonarQube notify the build system of a project’s quality gate status so it can cancel or continue the delivery pipeline as appropriate.
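To illustrate the intent, here is a minimal sketch of what a receiving endpoint on the build-system side could look like; since the feature is still being designed, the URL and the payload shape (a JSON body carrying the quality gate status) are assumptions, not the final contract:

using System;
using System.IO;
using System.Net;

// Minimal sketch of a build-system endpoint receiving SonarQube webhook calls.
// The URL and the payload shape are assumptions; the real contract ships with the LTS.
class QualityGateWebhookListener
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8090/sonarqube-webhook/");
        listener.Start();
        Console.WriteLine("Waiting for quality gate notifications...");
        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            string payload;
            using (var reader = new StreamReader(context.Request.InputStream))
                payload = reader.ReadToEnd();

            // Naive status check for the sketch; a real handler would parse the JSON.
            bool gatePassed = !payload.Contains("\"ERROR\"");
            Console.WriteLine(gatePassed
                ? "Quality gate passed - let the delivery pipeline continue."
                : "Quality gate failed - cancel the deployment.");

            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}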

Target is mid-2017!

That’s all folks! The estimated time of arrival for the next SonarQube 6.x LTS is mid-2017. Expect other small but useful features to make their way along those big themes!

Categories: Open Source

xUnit and Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. The JUnit plugin is the go-to test result reporter for many Jenkins projects, but it is not the only one available. The xUnit plugin is a viable alternative that supports JUnit and many other test result file formats. Introduction No matter the project, you need to gather and report test results. JUnit is one of the most widely supported formats for recording test results. For scenarios where your tests are stable and your framework can produce JUnit output, this makes the JUnit plugin ideal for reporting results in Jenkins. It will consume results from a specified file or...
Categories: Open Source

SonarQube Embraces the .NET Ecosystem

Sonar - Fri, 10/28/2016 - 15:05

In the last couple months, we have worked on further improving our already-good support for the .NET ecosystem. In this blog post, I’ll summarize the changes and the product updates that you’re about to see.

C# plugin version 5.4

We moved all functionality previously based on our own tokenizer/parser over to Roslyn. This lets us do colorization more accurately and will allow future improvements with less effort. Also, we’re happy to announce the following new features:

  • Added symbol reference highlighting, which has been available for Java source code for a long time.
  • Improved issue reporting with exact issue locations.
  • Added the missing complexity metrics: “complexity in classes” and “complexity in functions”.
  • Finally, we also updated the rule engine (C# analyzer) to the latest version, so you can benefit from the rules already available through SonarLint for Visual Studio.

With these changes you should have the same great user experience in SonarQube for C# that is already available for Java.

VB.NET plugin version 3.0

The VB.NET plugin 2.4 also relied on our own parser implementation, which meant that it didn’t support the VB.NET language features added by the Roslyn team, such as string interpolation and null-conditional operators. This deficit resulted in parsing errors on all new constructs, and on some already-existing ones too, such as async/await, and labels that are followed by statements on the same line. The obvious solution to all these problems was to use Roslyn internally. In the last couple months, we made the necessary changes, and now the VB.NET plugin uses the same architecture as the C# plugin. This has many additional benefits above and beyond eliminating the parsing errors, such as enabling the following new features in this version of the VB.NET plugin:

  • Exact issue location
  • Symbol reference highlighting
  • Colorization based on Roslyn
  • Copy-paste detection based on Roslyn
  • Missing complexity metrics are also computed
  • Support for all the coverage and testing tools already available for C#

Additionally, we removed the dependency between the VB.NET and C# plugins, so if you only do VB.NET development, you don’t have to install the C# plugin any more.

While we were at it, we added a few useful new rules to the plugin: S1764, S1871, S1656, S1862. We even found an issue with these rules in Roslyn itself.

Scanner for MSBuild version 2.2

Some of the features mentioned above couldn’t be added just by modifying the plugins. We had to improve the Scanner for MSBuild to make the changes possible. At the same time, we fixed many of the small annoyances and a few bugs. Finally, we upgraded the embedded SonarQube Scanner to the latest version, 2.8, so you’ll benefit from all changes made there too (v2.7 changelog, v2.8 changelog).

Additionally, when you use MSBuild 14 to build your solution, we no longer need to compute metrics, copy-paste token information, code colorization information, etc. in the Scanner for MSBuild “end step”, so you’ll see a performance improvement there. These computations were moved to the build phase, where they can be done more efficiently; that step will be a little slower, but the overall performance should still be better.

FxCop plugin version 1.0

A final change worth mentioning is that we extracted FxCop analysis from the C# plugin into a dedicated community plugin. This move seems to align with what Microsoft is doing: not developing FxCop any longer. Microsoft’s replacement tool will come in the form of Roslyn analyzers.

Note that we not only extracted the functionality to a dedicated plugin, but fixed a problem with issues being reported on excluded files (see here).

Summary

That’s it. Huge architectural changes with many new features driven by our main goal to support .NET languages to the same extent as we support Java, JavaScript, and C/C++.

Categories: Open Source

SonarQube 6.1 in Screenshots

Sonar - Tue, 10/25/2016 - 14:40

The SonarSource team is proud to announce the release of SonarQube 6.1, which brings an improved interface and the first baby steps toward SonarQube clusters.

  • More Actionable Project Page
  • Redesigned Settings Pages
  • First Steps Toward Clustering

More Actionable Project Page

SonarQube 6.1 enhances the project front page to make duplications in the leak period useful and actionable.

Previously, we only tracked the change in the duplication percentage against the global code base. So a very large project with only 100 new lines – all of them duplicated – still had a very small duplication percentage in the leak period. In other words, the true magnitude of new duplications was lost in the crowd. Now we calculate new duplications over the code touched in the leak period, so those 100 new duplicated lines get the attention they deserve:
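A quick back-of-the-envelope illustration of the difference; the 500,000-line project size is an assumed figure for the sake of the example:

class LeakDuplicationExample
{
    static void Main()
    {
        int totalLines = 500000;        // assumed size of the whole code base
        int newLines = 100;             // lines touched in the leak period
        int newDuplicatedLines = 100;   // all of them duplicated

        // Old calculation: duplications diluted by the global code base.
        double oldPercent = 100.0 * newDuplicatedLines / totalLines;  // 0.02% - lost in the crowd

        // New calculation: duplications over code touched in the leak period.
        double newPercent = 100.0 * newDuplicatedLines / newLines;    // 100% - impossible to miss

        System.Console.WriteLine("old: " + oldPercent + "%  new: " + newPercent + "%");
    }
}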

Redesigned Settings Pages

The global and project settings pages have been redesigned for better clarity and ease of use in the new version:

Among the improvements the new pages bring is a clearer presentation of just what the default settings are:

First Steps Toward Clustering

There’s not a lot to show here, but it’s still worth mentioning that 6.1 takes the first steps down the road to a fully clusterizable architecture. You can still run everything on a single node if you want, but folks with large instances will be glad to know that we’re on the way to letting them distribute the load. Nothing’s configurable yet, but the planned capabilities are already starting to show up in the System Info portion of the UI:

That’s all, folks!

It’s time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

Jenkins World 2016 Session Videos

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. The videos of the sessions from Jenkins World 2016 are up! I’ve updated the wrap-up posts with links to each of the sessions mentioned: Jenkins Pipeline Scaling Jenkins Ask the Experts & Demos You can also find video from all the sessions here. Enjoy!...
Categories: Open Source

Controlling the Flow with Stage, Lock, and Milestone

This is a guest post by Patrick Wolf, Director of Product Management at CloudBees. Recently the Pipeline team began making several changes to improve the stage step and increase control of concurrent builds in Pipeline. Until now the stage step has been the catch-all for functionality related to the flow of builds through the Pipeline: grouping build steps into visualized stages, limiting concurrent builds, and discarding stale builds. In order to improve upon each of these areas independently we decided to break this functionality into discrete steps rather than push more and more features into an already packed stage step. stage - the stage...
Categories: Open Source

Selenium 3.0: Out Now!

Selenium - Thu, 10/13/2016 - 20:39

We are very pleased to announce the release of Selenium 3.0. If you’ve been waiting for a stable release since 2.53.1, now’s your chance to update. And if you do, here is what you’ll find:

As we’ve said before, for users of the WebDriver APIs this is a drop-in replacement. You’ll find that modern browsers, such as Chrome and Edge, will continue to work just as before, and we’ve taken the opportunity to fix some bugs and improve stability. Selenium Grid users may require updates to their configuration, as the JSON config file format has been updated, as have some of the command-line parameter options, but the upgrade should also be smooth.
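To make “drop-in” concrete, here is a minimal sketch of an existing WebDriver test that should behave the same under 3.0 (shown with the C# bindings; the URL is a placeholder):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class DropInReplacementCheck
{
    static void Main()
    {
        // Code written against the WebDriver APIs needs no changes for Selenium 3.0.
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://example.com/");
            System.Console.WriteLine(driver.Title);
        }
        finally
        {
            driver.Quit();
        }
    }
}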

The major change in Selenium 3.0 is that we’re removing the original Selenium Core implementation and replacing it with one backed by WebDriver. This will affect all users of the Selenium RC APIs. For more information, please see the previous post.

A lot has changed in the 5 years between versions 2 and 3. When we shipped Selenium 2, the Selenium project was responsible for providing the driver for each browser. Now, we are happy to say that all the major browser vendors ship their own implementations (Apple, Google, Microsoft, and Mozilla). Because the browser vendors know their browsers better than anyone, their WebDriver implementations can be tightly coupled to the browser, leading to a better testing experience for you.

The other notable change has been that there is now a W3C specification for browser automation, based on the Open Source WebDriver. This has yet to reach “recommendation” status, but the people working on it (including members of the Selenium project!) are now focusing on finishing the text and writing the implementations.

Mozilla has been a front-runner in implementing the W3C WebDriver protocol. On the plus side, this has exposed problems with the spec as it has evolved, but it also means that Firefox support is hard to track as their engineering efforts have been forward looking, rather than on supporting the current wire protocol used by Selenium WebDriver. For now, the best advice we can offer is for you to try the latest release of geckodriver and Selenium together.

These are exciting times for browser automation! Selenium 3.0 is a major release and we’re looking forward to improving things further, as well as tracking the ongoing work of the W3C spec. Our goal is to keep the changes your tests need to deal with to an absolute minimum, to continue preserving the hard work that’s gone into writing your existing tests. 

As a personal note, I’d like to say thank you to each of the many people that have worked so hard to make Selenium 3 possible. That’s not just the developers and contributors to the Open Source project (past and present), but also the engineers from Google, Microsoft, Mozilla, and Apple, and everyone involved with the W3C spec. I’d also like to say thank you to everyone who’s taken the time to report bugs, our users and our community. The project is great fun to work on and you’re the reason for that. A final thank you is due to the Software Freedom Conservancy, who have provided invaluable help with the logistics of running a large OSS project.

 
Happy hacking, everyone! May your tests run fast and true!


Categories: Open Source

The Tweets You Missed in September

Sonar - Wed, 10/05/2016 - 10:21

Here are the tweets you likely missed last month!

No barriers on https://t.co/DvXKhNM443! Sign up & start analyzing your OSS projects today! https://t.co/0QqXk1EAVO pic.twitter.com/Cl54pquct4

— SonarQube (@SonarQube) September 1, 2016

SonarLint for @VisualStudio 2.7 Released: with 30 new rules targeting https://t.co/nJ3w5PQ9Xy https://t.co/h60GhyaZEp pic.twitter.com/1s9WCTeaax

— SonarLint (@SonarLint) September 23, 2016

SonarQube #JavaScript plugin 2.16 Released : https://t.co/VbXwmrgd5n pic.twitter.com/oNLvSvHBiX

— SonarQube (@SonarQube) September 9, 2016

SonarQube ABAP 3.3 Released: Five new rules https://t.co/ucoO6OM0rj @SAPdevs #abap @SAP pic.twitter.com/Xbvcgb3oXF

— SonarQube (@SonarQube) September 7, 2016

SonarQube Scanner for Gradle 2.1 natively supports Android projects, and brings other improvements. https://t.co/xCQ9NHBYUn pic.twitter.com/QHMthAwHBX

— SonarQube (@SonarQube) September 26, 2016

Categories: Open Source

NUnit-Summary Becoming an “Official” NUnit Application

NUnit.org - Thu, 09/22/2016 - 23:39

NUnit-Summary is an “extra” that I’ve maintained personally for some time. It uses built-in or user-supplied transforms to produce summary reports based on the results of NUnit tests.

I have contributed it to the NUnit project and we’re working on updating it to recognize NUnit 3 test results. The program has never had a 1.0 release, but we expect to produce one soon.

This old post talks about the original nunit-summary program.

Categories: Open Source

An Engine Extension for Running Failed Tests – Part 1: Creating the Extension

NUnit.org - Thu, 09/22/2016 - 20:47

In a recent online discussion, one of our users talked about needing to re-run the NUnit console runner, executing just the failed tests from the previous run. This isn’t a feature in NUnit but it could be useful to some people. So… can we do this by creating an Engine Extension? Let’s give it a try!

The NUnit Test Engine supports extensions. In this case, we’re talking about a Result Writer extension, one that will take the output of a test run from NUnit and create an output file in a particular format. Here, we want the output to be a text file with each line holding the full name of a failed test case. Why that format? Because it’s exactly the format that the console runner already recognizes for the --testlist option. We can use the file that is created as input to a subsequent test run.

Information about how to write an extension can be found on the Writing Engine Extensions page of the NUnit documentation. Details of creating a ResultWriter extension can be found on the Result Writers page.

To get started, I created a new class library project called failed-tests-writer. I made sure that it targeted .NET 2.0, because that allows it to be run under the widest range of runtime versions, and I added a package reference to the NUnit.Engine.Api package. That package will be published on nuget.org with the release of NUnit 3.5. Since that’s not out yet, I used the latest pre-release version from the NUnit project MyGet feed by adding https://www.myget.org/F/nunit/api/v2 to my NuGet package sources.
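If you prefer doing that from the command line, the feed can be registered with the NuGet CLI; the source name here is arbitrary:

nuget sources Add -Name "nunit-myget" -Source https://www.myget.org/F/nunit/api/v2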

Next, I created a class to implement the extension. I called it FailedTestsWriter. I added using statements for NUnit.Engine and NUnit.Engine.Extensibility and implemented the IResultWriter interface. I gave my class Extension and ExtensionProperty attributes. Here is what it looked like when I was done.

using System;
using System.IO;
using System.Text;
using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

namespace EngineExtensions
{
    [Extension, ExtensionProperty("Format", "failedtests")]
    public class FailedTestsWriter : IResultWriter
    {
        public void CheckWritability(string outputPath)
        {
            // Opening and immediately closing a writer throws if the path is not writable.
            using (new StreamWriter(outputPath, false, Encoding.UTF8)) { }
        }

        public void WriteResultFile(XmlNode resultNode, string outputPath)
        {
            using (var writer = new StreamWriter(outputPath, false, Encoding.UTF8))
            {
                WriteResultFile(resultNode, writer);
            }
        }

        public void WriteResultFile(XmlNode resultNode, TextWriter writer)
        {
            // Write the full name of each failed test case, one per line.
            foreach (XmlNode node in resultNode.SelectNodes("//test-case[@result='Failed']"))
                writer.WriteLine(node.Attributes["fullname"].Value);
        }
    }
}

The ExtensionAttribute marks the class as an extension. In this case, as in most cases, it’s not necessary to add any arguments. The Engine can deduce how the extension should be used from the fact that it implements IResultWriter.

As explained on the Result Writers page, this type of extension requires use of the ExtensionPropertyAttribute so that NUnit knows the name of the format it implements. In this case, I chose to use “failedtests” as the format name.

The CheckWritability method is required to throw an exception if the provided output path is not writable. We do that very simply by trying to create a StreamWriter. The empty using statement is merely an easy way to ensure that the writer is closed.

The main point of the extension is accomplished in the second WriteResultFile method. A foreach statement selects each failing test, which is then written to the output file.

Testing the Extension

That explains how to write the extension. In Part 2, I’ll explain how to deploy it. Meanwhile, I’ll tell you how I tested my extension in its own solution, using nunit3-console.

First, I installed the package NUnit.ConsoleRunner from nuget.org. I used version 3.4.1. Next, I created a fake package subdirectory in my packages folder, so it ended up looking like this:

packages
    NUnit.ConsoleRunner.3.4.1
    NUnit.Engine.Api.3.5.0-dev-03211
    NUnit.Extension.FailedTestsWriter
        tools
            failed-tests-writer.dll

Note that the new extension “package” directory name must start with “NUnit.Extension.” in order to trick the console-runner and engine into using it.

With this structure in place, I was able to run the console with the --list-extensions option to see that my extension was installed and I could use a command like

nunit3-console mytests.dll --result:FailedTests.lst;format=failedtests

to actually produce the required output.
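The generated file can then feed a follow-up run that executes only the previous failures, with something like

nunit3-console mytests.dll --testlist:FailedTests.lst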

Categories: Open Source

Back to Blogging!

NUnit.org - Thu, 09/22/2016 - 02:50

My blog has been offline for a long time, as you can see. The last prior post was in 2009!

Recently, I found a backup copy of the old blog and was able to re-establish it. Watch for some new posts in the near future.

Categories: Open Source

Software Testing Latest Training Courses for 2012

The Cohen Blog — PushToTest - Mon, 02/20/2012 - 05:34
Free Workshops, Webinars, Screencasts on Open Source Testing

Need to learn Selenium, soapUI, or any of a dozen other Open Source Test (OST) tools? Join us for a free Webinar Workshop on OST. We just updated the calendar to include the following Workshops:
And if you are not available for the above Workshops, try watching a screencast recording.

Watch The Screencast

Categories: Companies, Open Source

Selenium Tutorial For Beginners

The Cohen Blog — PushToTest - Thu, 02/02/2012 - 08:45
Selenium Tutorial for Beginners

Selenium is an open source technology for automating browser-based applications. Selenium is easy to get started with for simple functional testing of a Web application. I can usually take a beginner with some light testing experience and teach them Selenium in a 2-day course. A few years ago I wrote a fast and easy tutorial for beginners, Building Selenium Tests For Web Applications.

Read the Selenium Tutorial For Beginners Tutorial

The Selenium Tutorial for Beginners has the following chapters:
  • Selenium Tutorial 1: Write Your First Functional Selenium Test
  • Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
  • Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
  • Selenium Tutorial 4: Install and Configure Selenium RC, Grid
  • Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
  • Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
  • Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
  • Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
  • Selenium Tutorial 9: Debugging Selenium Tests
  • Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
  • Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
  • Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
  • Selenium Tutorial 13: Alternative To Selenium
A community of supporting open source projects - including my own PushToTest TestMaker - enables you to repurpose your Selenium tests as functional tests for smoke, regression, and integration testing, as load and performance tests, and as production service monitors. These techniques and tools make it easy to run Selenium tests from test management platforms, including HP Quality Center, HP Test Director, Zephyr, TestLink, and QMetry, and from automated Continuous Integration (CI) systems, including Hudson, Jenkins, Cruise Control, and Bamboo.

I wrote a Selenium tutorial for beginners to make it easy to get started and take advantage of the advanced topics. Download TestMaker Community to get the Selenium tutorial for beginners and immediately build and run your first Selenium tests. It is entirely open source and free!

Read the Selenium Tutorial For Beginners Tutorial

Categories: Companies, Open Source

5 Services To Improve SOA Software Development Life Cycle

The Cohen Blog — PushToTest - Fri, 01/27/2012 - 00:25
SOA Testing with Open Source Test Tools

PushToTest helps organizations with large scale Service Oriented Architecture (SOA) applications achieve high performance and functional service delivery. But it does not happen at the end of SOA application development. Success with SOA at Best Buy requires an Agile approach to software development and testing, on-site coaching, test management, and great SOA-oriented test tools.

Distributing the work of performance testing across an Agile epic, its stories, and its sprints reduces the overall testing effort and informs the organization's business managers about the service's performance. The biggest problem I see is keeping the testing transparent, so that anyone - tester, developer, IT Ops, business manager, architect - can follow a requirement down to the actual test results.

With the right tools, methodology, and coaching an organization gets the following:
  • Process identification and re-engineering for Test Driven Development (TDD)
  • Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
  • Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
  • Construction of the organization's end-to-end tests by a team from PushToTest Global Professional Services, using this system, and training of the organization's existing testers, Subject Matter Experts, and Developers to build and operate tests
  • On-going technical support
Download the Free SOA Performance Kit

On-Site Coaching Leads To Certification
The key to high quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization the coaches accomplish the following:
  • Test architects and test developers work with the existing Testing Team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test coding/scripting
  • Technical coaching on test automation to ensure that team members follow defined management processes
Cumulatively this effort is referred to as "Certification". When the development team produces a quality product, as demonstrated by simple functional tests, the partner QA teams take these projects and employ "best practice" test automation techniques. The resulting automated tests integrate with the requirements system (for example, Rally), the continuous integration system, and the governance systems (for example, HP Systinet).
Agile, Test Management, and Roles in SOA
An Agile software development process normally focuses first on functional testing - smoke tests, regression tests, and integration tests. Applied to SOA service development, the Agile deliverables support the overall vision and business model for the new software. At a minimum we should expect:
  1. Product Owner defines User Stories
  2. Test Developer defines Test Cases
  3. Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
  4. Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated with the test management service
  5. Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
  6. As tests execute, the test management system creates "Test Execution Records" showing the test results
Learn how PushToTest improves your SOA software development life cycle.


Download the Free SOA Performance Kit

Categories: Companies, Open Source