
Feed aggregator

Getting Started with Blue Ocean's Activity View

Blue Ocean is a new user experience for Jenkins, and version 1.0 is now live! Blue Ocean makes Jenkins, and continuous delivery, approachable to all team members. In my previous post, I showed how easy it is to create and edit Declarative Pipelines using the Blue Ocean Visual Pipeline Editor. In this video, I’ll use the Blue Ocean Activity View to track the state of branches and Pull Requests in one project. Blue Ocean makes it so much easier to find the logs I need to triage failures.

Please Enjoy!  In my next video, I’ll switch from looking at a single project to monitoring multiple projects with the Blue Ocean Dashboard.



Blog Categories: Jenkins
Categories: Companies

Nexus 3.3 Delivers Free Next-Gen Repository Health Check and Git LFS Support

Sonatype Blog - Thu, 04/20/2017 - 14:00
Sonatype is excited to announce the immediate availability of Nexus Repository 3.3 in OSS and Pro editions.  What’s in this latest release?  We’re glad you asked:   Next-Generation Repository Health Check We first introduced Repository Health Check (RHC) in 2012.  Now, every...

To read more, visit our blog at
Categories: Companies

New Vim Course Online - Vim Hates You

ISerializable - Roy Osherove's Blog - Thu, 04/20/2017 - 08:08

I’ve added a new course to - This time it’s all about Vim, which I’ve been using for a good few years now.


You can check out my online Vim course at 

Or you can go directly to the course page at

Categories: Blogs

Securing a Jenkins instance on Azure

This is a guest post by Claudiu Guiman and Eric Jizba, Software Engineers on the Azure DevOps team at Microsoft. If you have any questions, please email us at

One of the most frequently asked questions about managing a Jenkins instance is "How do I make it secure?" Like any other web application, these issues must be solved: How do I securely pass secrets between the browser and the server? How do I hide certain parts from unauthorized users and show other parts to anonymous users? This blog post details how to securely connect to a Jenkins instance and how to set up a read-only public dashboard. ...
Categories: Open Source


Podcasts!

Agile Testing with Lisa Crispin - Wed, 04/19/2017 - 21:12
Listen to some podcasts!

I’ve been honored recently to participate in two different podcasts.

I talked with Ryan Ripley and Amitai on Ryan’s Agile for Humans podcast. We talked about pairing for various things, including presenting and writing, and discussed common challenges we see today related to testing. I’m a big fan of the Agile for Humans podcast; I learn a lot from each episode, so please give some of them a listen.

I also joined Amitai along with Johanna Rothman and Tim Ottinger on John LeDrew’s AgilePath podcast. John’s podcasts set a whole new standard for podcasts in the software world. His first topic is safety on teams. I hope you will join us in exploring that important area.

The post Podcasts! appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Docker container monitoring in a hypercomplex world: notes from DockerCon 2017

We’ve just wrapped up at DockerCon, and I wanted to share some of the key highlights from such an electric, high-energy event. Our booth was absolutely jammed, driven by a word-of-mouth surge, as we showcased our unique approach to monitoring Docker environments.

Here’s my take:

#1. Nearly everyone at the event is moving to Docker in a big way for its flexibility and scalability.

#2. The complexity of container environments and how to monitor them effectively (given that containers can spin up and expire in a matter of seconds) is a point of concern.

I recently read some good research by RightScale that addresses these points around cloud adoption and complexity, specifically:

  • Docker is the preferred DevOps tool today, and adoption has surged to 35%, up from 27% last year
  • Cloud users are running apps in multiple clouds – an average of 1.8 public clouds and 2.3 private clouds.

But rather than being left feeling overwhelmed, the attendees who came by our booth were blown away by Dynatrace’s ability to tackle all the performance challenges associated with these hugely complex cloud environments.

Why DockerCon attendees were blown away by Dynatrace

Let me give you an example of some of the conversations we are having down here today:

Two guys from a very large Midwestern university came by the booth after visiting DataDog, sysdig, and other vendors (unfortunately, New Relic and AppDynamics weren’t here, despite touting their microservice and Docker container monitoring capabilities).

As I walked them through the demo and showed them our capabilities, they were amazed at what we can do versus our competitors:

“Nobody can do that. That is a game changer”

This was their reaction to the fact that, with zero configuration, Dynatrace auto-injects into Docker containers as they spin up without touching the image, and automatically provides metrics on the app or service running in the container.

“We didn’t know that was even possible”

This was what they said when I told them about our agent updates, and the fact that developers don’t have to modify their image or restart their production environment.

“How are you able to do that?”

This was their reaction to Dynatrace’s AI-based problem analysis, which raises a single alert identifying the root cause, straight out of the box.

It’s such an exciting time to be with Dynatrace. And I love being at these events to see the reactions people have when they experience the full power of our solution.

We want your feedback

For those that have experienced Dynatrace, especially any DockerCon attendees who came by the booth, what do you think sets us apart?  We love feedback – it’s crucial to our ability to address the complexity challenges our customers face today and in the future. So please let us know your thoughts via the comments section below.

The post Docker container monitoring in a hypercomplex world: notes from DockerCon 2017 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Zephyr Releases New Quality Management Platform

Software Testing Magazine - Wed, 04/19/2017 - 18:30
Zephyr has announced the release of a new platform for complete control, customization and usability. Zephyr’s new open platform enables test management and automation, while delivering a rich...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

New Static Analysis Features for Veracode

Software Testing Magazine - Wed, 04/19/2017 - 17:27
Veracode, a leader in securing the world’s software and recently acquired by CA Technologies, has announced four new features in its industry-leading Veracode Application Security Platform:...

Categories: Communities

Recommended Testing Conferences you can’t afford to miss

PractiTest - Wed, 04/19/2017 - 16:16
Are you part of the 38%?

Last month, the 2017 State of Testing report was released, showing, among many other interesting facts, that 38% of respondents attended a testing conference in the past year. This is the 4th year we have been running this survey, and it keeps growing year after year, allowing us to paint an increasingly accurate picture of the worldwide testing community. (If for any reason you haven’t heard about it, now’s your chance to catch up.)

Testing Conferences – The advantages

Attending a conference has some great advantages: it keeps you up to date on the latest technologies and methodologies, and it strengthens your relationships with fellow members of the worldwide testing community.

To make your life easier, we have created a calendar showing all testing-related events. Feel free to take it and embed it anywhere you wish.

Here is a list of my recommended events to attend in the upcoming conference season:

  1. STAREAST – Orlando, Florida. This is one of the most important testing events worldwide, and it will be held on May 7-12. PractiTest’s Joel Montvelisky will be one of the speakers, and if you are there, be sure to come and say “Hi” at our Booth #48. You are welcome to use our code and get a $200 discount.
  2. TestBash – Belfast. After successful past events in Brighton, Manchester, Philadelphia and the Netherlands, this event is now on its way to Belfast, and we are one of the supporters.
  3. Agile Testing Days – Besides being a great professional conference, the speaker lineup for this event shows what our world should look like, with a strong female presence.
Categories: Companies

Testing What Matters with VeST

Software Testing Magazine - Wed, 04/19/2017 - 16:13
Tired of projects where the wrong thing is tested? Finding TDD too time-consuming or too hard? High test coverage and your users still find bugs? Don’t understand if your tests are unit, integration,...

Categories: Communities

Breaking the SonarQube Analysis with Jenkins Pipelines

Sonar - Wed, 04/19/2017 - 15:14

One of the most requested features regarding SonarQube scanners is the ability to fail the build when the quality level is not at the expected level. We have the built-in concept of a quality gate in SonarQube, and we used to have a BuildBreaker plugin for this exact use case. But starting from version 5.2, aggregation of metrics is done asynchronously on the SonarQube server side. This means the build/scanner process finishes successfully just after publishing raw data to the SonarQube server, without waiting for the aggregation to complete.

Some people tried to resurrect the BuildBreaker feature by implementing active polling at the end of the scanner execution. We never supported this solution, since it defeats one of the benefits of having asynchronous aggregation on the SonarQube server side. Indeed, it means your CI executors/agents will be occupied “just” for a wait.

The cleanest pattern to achieve this is to release the CI executor and have the SonarQube server send a notification when aggregation is complete. The CI job would then be resumed and take the appropriate actions (not only marking the job as failed; it could also send email notifications, for example).

All of this is now possible thanks to the webhook feature introduced in SonarQube 6.2. We also take advantage of the Jenkins Pipeline feature, which allows parts of a job's logic to be executed without occupying an executor.

Let’s see it in action.

First, you need SonarQube server 6.2+. In your Jenkins instance, install the latest version of the SonarQube Scanner for Jenkins plugin (2.6.1+). You should, of course, configure the credentials to connect to the SonarQube server in the Jenkins administration section.

In your SonarQube server administration page, add a webhook entry:

https://<your Jenkins instance>/sonarqube-webhook/

Now you can configure a pipeline job using the two SonarQube keywords ‘withSonarQubeEnv’ and ‘waitForQualityGate’.

The first one should wrap the execution of the scanner (which will occupy an executor), and the second one will ‘pause’ the pipeline in a very lightweight way while waiting for the webhook payload.

node {
  stage('SCM') {
    git ''
  }
  stage('build & SonarQube Scan') {
    withSonarQubeEnv('My SonarQube Server') {
      sh 'mvn clean package sonar:sonar'
    } // SonarQube taskId is automatically attached to the pipeline context
  }
}
// No need to occupy a node
stage("Quality Gate") {
  timeout(time: 1, unit: 'HOURS') { // Just in case something goes wrong, pipeline will be killed after a timeout
    def qg = waitForQualityGate() // Reuse taskId previously collected by withSonarQubeEnv
    if (qg.status != 'OK') {
      error "Pipeline aborted due to quality gate failure: ${qg.status}"
    }
  }
}

Here you are:

That’s all Folks!

Categories: Open Source

Nexus Firewall Grows with Support for PyPI

Sonatype Blog - Wed, 04/19/2017 - 07:00
All Parts Are Not Created Equal According to the recent DevSecOps Community survey, 80 - 90% of a modern application is assembled using open source and third party components.  This is true whether you develop in Java, .NET, Ruby, Python or any other language.  While these components...

Categories: Companies

Getting Started with Blue Ocean's Visual Pipeline Editor

Blue Ocean is a new user experience for Jenkins and version 1.0 is now live!

Blue Ocean makes Jenkins and continuous delivery approachable to all team members. In my previous post, I explained how to install Blue Ocean on your local Jenkins instance and switch to using Blue Ocean. As promised, here’s a screencast that picks up where that post left off. Starting from a clean Jenkins install, the video below will guide you through creating and running your first pipeline in Blue Ocean with the Visual Pipeline Editor.

Please Enjoy! In my next video, I’ll go over the Blue Ocean Pipeline Activity View.



Blog Categories: Jenkins
Categories: Companies

Domain Command Patterns - Validation

Jimmy Bogard - Tue, 04/18/2017 - 21:36

While I don't normally like to debate domain modeling patterns (your project won't succeed or fail because of what you pick), I do still like to have a catalog of available patterns. And one thing that comes up often is "how should I model commands?":

In general, apps I build follow CQRS, where I split my application architecture into distinct commands and queries. However, no two applications are identical in terms of how they've applied CQRS. There always seem to be some variations here and there.

My applications also tend to have explicit objects for external "requests", which are the types bound to the HTTP request variables. This might be a form POST, or it might be a JSON POST, but in either case, there's a request object.

The real question is - how does that request object finally affect my domain model?

Request to Domain

Before I get into different patterns, I like to make sure I understand the problem I'm trying to solve. In the above picture, from the external request perspective, I need a few questions answered:

  • Was my request accepted or rejected?
  • If rejected, why?
  • If accepted, what happened?

In real life, there aren't fire-and-forget requests; you want some sort of acknowledgement. I'll keep this in mind when looking at my options.

Validation Types

First up is to consider validation. I tend to look at validation with at least a couple different levels:

  • Request validation
  • Domain validation

Think of request validation as "have I filled out the form correctly?". These rules are easily translatable to client-side validation. If it were 100 years ago, this would be a desk clerk just making sure you've filled in all the boxes appropriately. This sort of validation can be returned to the client immediately and does not require any domain-specific knowledge.

A next-level validation is domain validation, or, as I've often seen it referred to, "business rule validation". This is more of a system state validation: "can I affect the change to my system based on the current state of my system?" I might be checking the state of a single entity, a group of entities, an entire collection of entities, or the entire system. The key here is that I'm not checking the request against itself, but against the system state.
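To make the split concrete, here is a minimal, hypothetical sketch (in Java, purely for illustration; the request shape, the frozen-account set, and the balance are all made-up names): request validation checks the request against itself, while domain validation checks it against current system state.

```java
import java.util.Set;

// Hypothetical example: a money transfer between accounts.
class TransferRequest {
    String fromAccount;
    String toAccount;
    double amount;
}

class TransferValidator {
    // Request validation: "have I filled out the form correctly?"
    // Checks only the request itself -- no domain knowledge needed,
    // so the same rules could run client-side.
    static boolean isRequestValid(TransferRequest r) {
        return r.fromAccount != null && r.toAccount != null && r.amount > 0;
    }

    // Domain validation: "can I affect this change given the current
    // state of the system?" Here the state is the set of frozen
    // accounts and the source account's balance.
    static boolean isDomainValid(TransferRequest r, Set<String> frozenAccounts, double balance) {
        return !frozenAccounts.contains(r.fromAccount) && balance >= r.amount;
    }
}
```

A request can pass the first check and still fail the second, which is exactly why the two levels surface their errors differently.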

While you can mix request validation and domain validation together, it's not always pretty. Validation frameworks don't mix the two together well, and these days I generally recommend against using validation frameworks for domain validation. I've done it a lot in the past and the results...just aren't great.

As a side note, I avoid as much as possible any kind of validation that changes the state of the system and THEN validates. My validation should take place before I attempt to change state, not after. This means no validation attributes on entities, for example.

Validation concerns

Next, I need to concern myself with how validation errors bubble up. For request validation, that's rather simple: I can immediately return a 400 Bad Request with a descriptive body explaining exactly what is off with the request. Typically, request validation happens in the UI layer of my application, built into the MVC framework I'm using. Request validation doesn't really affect the design of my domain validation.

Domain Validation

Now that we've split our validation concerns into request validation and domain validation, I need to decide how I want to validate the domain side, and how that information will bubble up. Remember - it's important to know not only that my request has failed, but why it failed.

On the domain side, understanding the design of the "why" is important. Can I have one reason, or multiple reasons, for failure? Does the reason need to include contextual data? Do I need to connect a failure reason to a particular input, or is the contextual data in the reason enough?

Next, how are the failures surfaced? When I pass the request (or command) to the domain, how does it tell me the command is not valid? Does it just return a failure, or does it use some indirect means, like an exception?

public void DoSomething(SomethingRequest request) {
    if (stateInvalid) {
        throw new DomainValidationException(reason);
    }
}

public bool DoSomething(SomethingRequest request) {
    if (stateInvalid) {
        return false;
    }
    return true;
}

In either case, I have some method that is responsible for affecting change. Where this method lives we can look at in the next post, but it's somewhere. I've gotten past the request-level validation and now need domain-level validation - can I affect this change based on the current state of the system? Two ways I can surface this back out - directly via a return value, or indirectly via an exception.

Exceptional domain validation

At first glance, it might seem that exceptions are a bad choice for surfacing validation errors. Exceptions should be exceptional, not part of a normal operation. But exceptions would let me adhere to the CQS principle, where methods either perform an action or return data, but not both.

Personally, I'm not that hung up on CQS for these outer portions of my application; it's more of an OOP concern. Maybe if I were trying to follow OOP to the letter it would be important. But I'm far more concerned with clean code than with OOP.

If I expect the exceptional case to be frequent, that is, the user frequently tries to do something that my domain validation disallows, then this wouldn't be a good choice. I shouldn't use exceptions just to get around the CQS guideline.

However, I do try to design my UX so that the user cannot get into an invalid state. Even for validations, my UX should guide the user so that they don't put in invalid data. The HTML5 placeholder attribute or explanatory text helps there.

But what about domain state? This is a bit more complex - but ideally, if a user isn't allowed to perform a state change, for whatever reason, then they are not presented with an option to do so! This can be communicated either with a disabled link/button, or simply removing the button/link altogether. In the case of REST, we just wouldn't return links and forms that were not valid state transitions.
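Here is a hypothetical sketch (in Java; the resource, statuses, and link names are all illustrative, not any real framework's API) of the idea that invalid transitions simply aren't advertised:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a REST resource only returns links for state
// transitions that are valid given the current domain state.
class OrderResource {
    enum Status { PENDING, SHIPPED, CANCELLED }

    static List<String> linksFor(Status status) {
        List<String> links = new ArrayList<>();
        links.add("self"); // always present
        if (status == Status.PENDING) {
            // Cancelling is only a valid transition before shipping,
            // so the link is omitted in every other state.
            links.add("cancel");
        }
        return links;
    }
}
```

A client that only renders the links it receives can never offer the user a transition the domain would reject.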

If we're up-front designing our UX to not let the user get into a bad state, then exceptions would truly be exceptional, and then I believe it's OK to use them.

Returning success/failure

If we don't want to use exceptions, but directly return the success/failure of our operation, then at this point we need to decide:

  • Can I have one or multiple reasons for failure?
  • Do I need contextual information in my message?
  • Do I need to correlate my message to input fields?

I don't really have a go-to answer for any of these; it really depends on the nature of the application. But if I just needed a single reason, then I can have a very simple CommandResult:

public class CommandResult
{
    private CommandResult() { }

    private CommandResult(string failureReason)
    {
        FailureReason = failureReason;
    }

    public string FailureReason { get; }
    public bool IsSuccess => string.IsNullOrEmpty(FailureReason);

    public static CommandResult Success { get; } = new CommandResult();

    public static CommandResult Fail(string reason)
    {
        return new CommandResult(reason);
    }

    public static implicit operator bool(CommandResult result)
    {
        return result.IsSuccess;
    }
}

In the above example, we just allow a single failure reason. And for simplicity's sake, there's an implicit operator to bool so that we can do things like:

public IActionResult DoSomething(SomethingRequest request) {
    CommandResult result = service.DoSomething(request);
    return result ? Ok() : BadRequest(result.FailureReason);
}

We can of course make our CommandResult as complex as we need to represent the result of our command, but I like to start simple.
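For instance, if the answers to the three questions above call for multiple failure reasons, each correlated to an input field, the result type might grow along these lines (a hypothetical sketch in Java; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a result that can carry several failure
// reasons, each optionally tied to the input field it concerns.
class MultiCommandResult {
    static class Failure {
        final String field;  // may be null when not tied to one input
        final String reason;
        Failure(String field, String reason) {
            this.field = field;
            this.reason = reason;
        }
    }

    private final List<Failure> failures = new ArrayList<>();

    boolean isSuccess() {
        return failures.isEmpty();
    }

    List<Failure> failures() {
        return failures;
    }

    // Fluent helper: accumulate a failure and return the result itself.
    MultiCommandResult withFailure(String field, String reason) {
        failures.add(new Failure(field, reason));
        return this;
    }
}
```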

Between these two options, which should you use? I've gone back and forth between the two, and they both have benefits and drawbacks. At some point it comes down to what your team is more comfortable with and what best fits their preferences.

With request and command validation covered, let's next turn to handling the command itself inside our domain.

Categories: Blogs

Unit Testing Angular 2 with Jasmine

Software Testing Magazine - Tue, 04/18/2017 - 18:06
Jasmine is an open source behavior-driven development (BDD) framework for testing JavaScript code. Angular is a development platform for building mobile and desktop web applications using...

Categories: Communities

Monitoring .NET Core Applications with Dynatrace

This post was coauthored with Georg Schausberger.

If you follow the development of the .NET Framework, you probably know that there are some tremendous changes in the Microsoft world: the cross-platform, open-source implementation of .NET, .NET Core, was released, and with Visual Studio 2017 the development tooling has finally reached RTM state.

So, the framework is RTM, the tooling for development is also RTM, and we believe that the next thing you need is an APM solution to monitor your applications in production. And this is exactly what we are working on.

This post gives you an overview of our work on .NET Core, and we try to give you an outlook on this topic for the future.

1. ASP.NET Core on Full Framework

As you may know, Microsoft not only implemented a new cross-platform CLR, they also created a cross-platform web stack: ASP.NET Core, which can run both on the CoreCLR and on the Full Framework. As a first step, we extended our .NET agent with ASP.NET Core capabilities. This has shipped in Dynatrace since September 2016 and in AppMon 6.5, and we blogged about it back in December. Of course, at that point it was clear to us that we wanted to go further and provide the same support for the CoreCLR, so we did not stop there.

2. .NET Core on Windows (including ASP.NET Core on .NET Core)

As you may guess, the bigger challenge was supporting the CoreCLR. What we introduced in point 1, “ASP.NET Core on Full Framework”, is a specialized sensor for ASP.NET Core based on the existing .NET agent. Making this agent compatible with the new CoreCLR on Windows was the next step. The big news is that this work will ship in AppMon 7.0 and in Dynatrace OneAgent version 117.

If you have ever ported an existing application from the Full Framework to the CoreCLR, you know that it is quite a big challenge. The same applied to our agent, so this was a big effort for us too. Profiling .NET Core is something completely new, therefore in the first phase we are releasing it with a Beta label.

This will be publicly available as Beta starting at the end of April 2017 in Dynatrace SaaS. Dynatrace Managed will receive Beta support for .NET Core with version 118. We warmly welcome customers to our AppMon 7.0 EAP (early access program) to test out .NET Core monitoring (details in the FAQ section below).

So, let’s see how it looks:

This is a PurePath showing all DB statements involved in showing all tablets on SimplCommerce.

3. .NET Core on Linux and other non-Windows systems

Linux and other platforms offer many new opportunities for hosting .NET Core services. However, the Profiling API of the CoreCLR on Linux is not yet officially supported and not completely tested by Microsoft. Note: We are investing time into upgrading our testing infrastructure for .NET applications on Linux. We are committed to releasing this as quickly as possible, but we cannot promise a release date because the underlying Profiler API is still under construction, as you can see on GitHub.

4. FAQ

Q: Is the .NET Core Agent production ready?

A: Profiling .NET Core is something completely new, therefore in the first phase we release it with a Beta label.

Q: What about Azure?

A: There are Dynatrace Azure Extensions which automatically install everything needed for monitoring your .NET Core App inside Azure. Just search on the Azure Portal for Dynatrace under extensions. Or you can find it here.

Q: How is this related to OneAgent?

A: OneAgent is the new agent platform, which supports new technologies like ASP.NET Core, .NET Core, and OWIN/Katana. Everything we announce in this post applies to OneAgent.

Q: I am a Dynatrace SaaS customer, can I use this already?

A: The .NET Core beta support requires the OneAgent version 117. This will be available to you by the end of April.

Q: I am a Dynatrace Managed customer, can I use this already?

A: The .NET Core beta support requires the OneAgent version 117 and Cluster version 118.

Q: We would like to participate in an EAP program and work with you on this. How can we contact you?
A: Just register for EAP here

The post Monitoring .NET Core Applications with Dynatrace appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Web Performance and Load Test - the Web Test Recorder plugin loads but controls are grayed out in Windows 10 with IE 11

I had an issue with IE 11 where the Web Test Recorder was loading but all controls were grayed out.

I verified the “Web Test Recorder 14.0” and “Microsoft Web Test Recorder 14.0 Helper” add-ons were enabled in IE.
Then I reset and restarted IE (Tools > Internet Options > Advanced > Reset). After that, all IE add-ons were disabled, so I re-enabled the Web Test Recorder add-ons when prompted.
That resolved the issue with the controls being grayed out, but then I got a missing DLL exception when clicking Pause or Stop during the recording: "System.DllNotFoundException: Unable to load DLL 'Microsoft.VisualStudio.QualityTools.RecorderBarBHO100.x64.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)"
This site said to copy the DLL to the IE folder to resolve the missing DLL exception: copy Microsoft.VisualStudio.QualityTools.RecorderBarBHO100.dll (for older versions, RecorderBarBHO90.dll, etc.), located under C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\PrivateAssemblies, to C:\Program Files\Internet Explorer. On 64-bit machines, also copy it to C:\Program Files (x86)\Internet Explorer.

I copied the DLL to the IE folders and the issue is now resolved.
Categories: Blogs

Sonatype Nexus 3 launches into Mesosphere DC/OS

Sonatype Blog - Tue, 04/18/2017 - 12:00
Today we are excited to announce the availability of the incredibly popular repository manager and private container registry, Nexus Repository, on DC/OS. Among its many benefits, Nexus Repository will deliver the first free, enterprise-scale private Docker registry to the Mesosphere DC/OS...

Categories: Companies

5 tips to solve common problems related to Oracle NCA protocol in LoadRunner and Performance Center

HP LoadRunner and Performance Center Blog - Tue, 04/18/2017 - 09:42


The Oracle NCA protocol in LoadRunner and Performance Center provides a load testing solution for applications based on Oracle Forms technologies. Read on for some handy tips on how to solve common questions.

Categories: Companies

Delivery Pipelines, with Jenkins 2, SonarQube, and Artifactory

This is a guest post by Michael Hüttermann. Michael is an expert in Continuous Delivery, DevOps and SCM/ALM. More information about him at, or follow him on Twitter: @huettermann. Continuous Delivery and DevOps are well known and widely adopted practices nowadays. It is commonly accepted that it is crucial to first form great teams and define shared goals, and then choose and integrate the tools that best fit the given tasks. Often it is a mashup of lightweight tools that are integrated to build up Continuous Delivery pipelines and underpin DevOps initiatives. In this blog post, we zoom in on an important part of the overall...
Categories: Open Source
