
Blogs

Writing cleaner JavaScript code with gulp and eslint

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

With the new ASP.NET Core 1.0 RC2 right around the corner and its deep integration with the node.js workflow, I thought about putting out some examples of what I use for my own workflow.

In this scenario, we're going to see how we can improve the JavaScript code that we are writing.

Gulp

This example uses gulp.

I'm not saying that gulp is the best tool for the job. I just find that gulp works really well for our team, and you guys should seriously consider it.

Base file

Let's get things started. We'll start off with the base gulpfile that is shipped with the RC1 template.

The first thing we are going to do is check what is being done and what is missing.

/// <binding Clean='clean' />
"use strict";

var gulp = require("gulp"),
    rimraf = require("rimraf"),
    concat = require("gulp-concat"),
    cssmin = require("gulp-cssmin"),
    uglify = require("gulp-uglify");

var paths = {
    webroot: "./wwwroot/"
};

paths.js = paths.webroot + "js/**/*.js";
paths.minJs = paths.webroot + "js/**/*.min.js";
paths.css = paths.webroot + "css/**/*.css";
paths.minCss = paths.webroot + "css/**/*.min.css";
paths.concatJsDest = paths.webroot + "js/site.min.js";
paths.concatCssDest = paths.webroot + "css/site.min.css";

gulp.task("clean:js", function (cb) {
    rimraf(paths.concatJsDest, cb);
});

gulp.task("clean:css", function (cb) {
    rimraf(paths.concatCssDest, cb);
});

gulp.task("clean", ["clean:js", "clean:css"]);

gulp.task("min:js", function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(concat(paths.concatJsDest))
        .pipe(uglify())
        .pipe(gulp.dest("."));
});

gulp.task("min:css", function () {
    return gulp.src([paths.css, "!" + paths.minCss])
        .pipe(concat(paths.concatCssDest))
        .pipe(cssmin())
        .pipe(gulp.dest("."));
});

gulp.task("min", ["min:js", "min:css"]);

As you can see, we basically have 4 tasks and 2 aggregate tasks.

  • Clean JavaScript files
  • Clean CSS files
  • Minify JavaScript files
  • Minify CSS files

The aggregate tasks are basically just to do all the cleaning or the minifying at the same time.

Getting more out of it

Well, that brings us to feature parity with what was available in MVC 5 for JavaScript and CSS minification. However, why not go a step further?

Linting our JavaScript

One of the most common things we need to do is make sure we do not write horrible code. Linting is a code analysis technique that detects problems or stylistic issues early.

How do we get this working with gulp?

First, we install gulp-eslint by running npm install gulp-eslint --save-dev in the web application project folder. This will install the required dependencies, and we can start writing some code.

Let's start by getting the dependency:

var eslint = require('gulp-eslint');

Then, in your default ASP.NET Core 1.0 project, open up site.js and copy in the following code:

function something() {
}

var test = new something();

Let's run the min:js task with gulp like this: gulp min:js. This will show that our file is minified, but... there's something wrong with the style of this code. The something function should be Pascal-cased since we new it up as a constructor, and we want this to be reflected in our code.

Let's integrate the linter in our pipeline.

First let's create our linting task:

gulp.task("lint", function() {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(eslint({
            rules : {
                'new-cap': 1 // functions need to begin with a capital letter when newed up
            }
        }))
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

Then, we need to integrate it in our minify task.

gulp.task("min:js" , ["lint"], function () { ... });
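
For reference, here is the full merged task once the lint dependency is wired in (same body as the min:js task shown earlier):

gulp.task("min:js", ["lint"], function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(concat(paths.concatJsDest))
        .pipe(uglify())
        .pipe(gulp.dest("."));
});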

Then we can either run gulp lint or gulp min and see the result.

C:\_Prototypes\WebApplication1\src\WebApplication1\wwwroot\js\site.js 6:16 warning A constructor name should not start with a lowercase letter new-cap

And that's it! You can pretty much build your own configuration from the available rule set and make clean JavaScript part of your build flow!

Many more plugins available

More gulp plugins are available on the registry. Whether you want to lint, transpile JavaScript (TypeScript, CoffeeScript), compile CSS (Less, Sass), or minify images... everything can be included in the pipeline.

Look up the registry and start hacking away!

Categories: Blogs

Creating a simple ASP.NET 5 Markdown TagHelper

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

I've been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I've created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code
using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);
            output.Content.SetContentEncoded(html);
        }
    }
}
Inspecting the code

Let's start with the HtmlTargetElementAttribute. This will wire the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from actually having more than one target.

You could for example target element <md></md> by just adding [HtmlTargetElement("md")] and it would support both tags without any other changes.
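
Stacked, the attributes would look like this (body unchanged from the version above):

[HtmlTargetElement("markdown")]
[HtmlTargetElement("md")]
public class MarkdownTagHelper : TagHelper
{
    // same implementation as above
}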

The Content property will allow you to write code like this:

@model MyClass

<markdown content='@ViewData["markdown"]'></markdown>
<markdown content="Markdown"></markdown>

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing will force the HTML to use a self-closing tag rather than having content inside (which we're not going to use anyway). So now we have this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML. Setting output.TagName to null just makes sure that we do not render the actual markdown tag.

And... that's it. Our code is complete.

Activating it

Now you can't just go and create TagHelpers and have them automatically picked up without wiring up one thing.

In your ASP.NET 5 project, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your assembly name, like below.
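
For the code above, that might look like this (assuming the TagHelper lives in an assembly named My.TagHelpers, matching the namespace; adjust to your real assembly name):

@addTagHelper "*, My.TagHelpers"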

Then in your Razor code you can have the code below:

public class MyClass
{
    public string Markdown { get; set; }
}
@model MyClass
@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now whether you load your markdown from files, a database or anywhere else... you can have your users write rich text in any text box and have your application generate safe HTML.

Components used: CommonMark.NET
Categories: Blogs

Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course, simpler examples like blogs could literally be all static. If you need comments, then you could go with a system like Disqus. This is quite literally one of the only parts of your system that is dynamic.

RSS feed? Generated from posts. Posts themselves? Could be automatically generated from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot: products. OK, they may change, but do you need to have your site updated right this second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be “approved” anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it’s done.

There’s 3 things that I see that would require to be dynamic in this scenario.

Search, Checkout and Reviews. Search because as your products scales up, so does your data. Doing the search client side won’t scale at any level. Checkout because we are now handling an actual order and it needs a server components. Reviews because we’ll need to approve and publish them.

In this scenario, only the search is an actual “read” component that is now server-side. Everything else? Pre-generated. Even if the search is bringing you the list of products dynamically, it can still end up on a static page.

All the other write components? Queued server-side to be processed by the business itself with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).

Question

So… do we need a dynamic front-end built with the latest server framework? On the public-facing side too, or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.

Categories: Blogs

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

Have you read about WebComponents? It sounds like something that we have all tried to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, let's not touch this yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay, so this is out of the window too.

HTML Imports

Specification

This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the author/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously PM on the Internet Explorer team).

What about browser support?

Chrome has all the specs already implemented.

Firefox implemented them but put them behind a flag (about:config, search for the property dom.webcomponents.enabled).

In Internet Explorer, they are all Under Consideration.
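
If you want to check what a given browser actually exposes, here is a quick feature-detection sketch (assuming the v0-era API names Chrome shipped at the time):

// each check targets one of the four specs, using the v0-era names
var supportsTemplates = 'content' in document.createElement('template');
var supportsImports = 'import' in document.createElement('link');
var supportsCustomElements = 'registerElement' in document;
var supportsShadowDom = 'createShadowRoot' in Element.prototype;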

What that tells us

Google is pushing for a standard. Hard. They built the spec, and they are pushing it very hard since all of this is available in Chrome STABLE right now. No other vendor has contributed to the spec itself. Polymer is also a project that is built around WebComponents and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know that's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past. That's not something to stop us.

Second, the current specification is being implemented in Chrome as if it were recommended by the W3C (it is not). This may lead to changes in the specification, which may render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as "production" is concerned, do not use WebComponents directly. Also, avoid Polymer as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, frameworks like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

Have you ever had this error while switching between projects that use Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If there is none defined in your web.config, an auto-generated one is used. And if the key used to encrypt isn't the same as the one used to decrypt?

Boom goes the dynamite.
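
If you want the cookie to survive across projects and machines, one option is to pin an explicit machineKey in web.config. A minimal sketch, with placeholder values you would replace with keys you generate yourself:

<system.web>
  <!-- placeholder keys: generate your own real values -->
  <machineKey validationKey="YOUR-VALIDATION-KEY-HERE"
              decryptionKey="YOUR-DECRYPTION-KEY-HERE"
              validation="HMACSHA256" decryption="AES" />
</system.web>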

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

Well, there it goes again. It was just confirmed that I have been renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plans, giving this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, along with the exact explanation as to why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it will not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}
      
    
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

So I had to make some changes to the App Service Plan for one of my clients. The first thing I looked for was a way to do it in the portal. A few clicks and I'm done!

But before I get into why I needed to move one of them, I'll need to tell you about why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale, and especially the size we deployed on. So we came up with a list of the different hosting plans we needed, the apps that would need to be moved, and the hosting plan each was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what I used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic, but since I was only working with apps in East US, well... I didn't need more. A sample file is below.
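
Something like this (hypothetical app and plan names):

App,ServicePlanSource,ServicePlanDestination
mywebapp1,Default1,Standard-EastUS
mywebapp2,Default2,Standard-EastUS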

This script should be considered "works on my machine". I haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
    
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 25 min 30 sec ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While IT was working to fix the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it, please"?

Temporary solution

This is only acceptable because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
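
If you only need it for a single command rather than a whole session, git can also take the setting inline, scoped to one invocation:

git -c http.sslVerify=false fetch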

Permanent solution

Fix your certificates. Oh… you mean it’s self signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.

Categories: Blogs

Flaky Tests at Google and How We Mitigate Them

Google Testing Blog - Sat, 05/28/2016 - 02:34
by John Micco

At Google, we run a very large corpus of tests continuously to validate our code submissions. Everyone from developers to project managers relies on the results of these tests to make decisions about whether the system is ready for deployment or whether code changes are OK to submit. Productivity for developers at Google relies on the ability of the tests to find real problems with the code being changed or developed in a timely and reliable fashion.

Tests are run before submission (pre-submit testing) which gates submission and verifies that changes are acceptable, and again after submission (post-submit testing) to decide whether the project is ready to be released. In both cases, all of the tests for a particular project must report a passing result before submitting code or releasing a project.

Unfortunately, across our entire corpus of tests, we see a continual rate of about 1.5% of all test runs reporting a "flaky" result. We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code. There are many root causes why tests return flaky results, including concurrency, relying on non-deterministic or undefined behaviors, flaky third party code, infrastructure problems, etc. We have invested a lot of effort in removing flakiness from tests, but overall the insertion rate is about the same as the fix rate, meaning we are stuck with a certain rate of tests that provide value, but occasionally produce a flaky result. Almost 16% of our tests have some level of flakiness associated with them! This is a staggering number; it means that more than 1 in 7 of the tests written by our world-class engineers occasionally fail in a way not caused by changes to the code or tests.

When doing post-submit testing, our Continuous Integration (CI) system identifies when a passing test transitions to failing, so that we can investigate the code submission that caused the failure. What we find in practice is that about 84% of the transitions we observe from pass to fail involve a flaky test! This causes extra repetitive work to determine whether a new failure is a flaky result or a legitimate failure. It is quite common to ignore legitimate failures in flaky tests due to the high number of false positives. At the very least, build monitors typically wait for additional CI cycles to run the test again to determine whether or not it has been broken by a submission, adding to the delay in identifying real problems and increasing the pool of changes that could have contributed.

In addition to the cost of build monitoring, consider that the average project contains 1000 or so individual tests. To release a project, we require that all these tests pass with the latest code changes. If 1.5% of test results are flaky, 15 tests will likely fail, requiring expensive investigation by a build cop or developer. In some cases, developers dismiss a failing result as flaky only to later realize that it was a legitimate failure caused by the code. It is human nature to ignore alarms when there is a history of false signals coming from a system. For example, see this article about airline pilots ignoring an alarm on 737s. The same phenomenon occurs with pre-submit testing. The same 15 or so failing tests block submission and introduce costly delays into the core development process. Ignoring legitimate failures at this stage results in the submission of broken code.

We have several mitigation strategies for flaky tests during pre-submit testing, including the ability to re-run only failing tests, and an option to re-run tests automatically when they fail. We even have a way to denote a test as flaky - causing it to report a failure only if it fails 3 times in a row. This reduces false positives, but encourages developers to ignore flakiness in their own tests unless those tests start failing 3 times in a row, which is hardly a perfect solution.

Imagine a 15-minute integration test marked as flaky that is broken by my code submission. The breakage will not be discovered until 3 executions of the test complete - 45 minutes - after which it will need to be determined whether the test is broken (and needs to be fixed) or whether it just flaked three times in a row.
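
As a rough illustration of that "fails 3 times in a row" policy (a sketch, not Google's actual infrastructure; the names are made up):

# Report FAIL only after N consecutive failing runs of the same test.
def run_with_flaky_policy(run_test, max_consecutive_failures=3):
    for _ in range(max_consecutive_failures):
        if run_test():
            return "PASS"  # any pass short-circuits the remaining retries
    return "FAIL"  # failed every consecutive attempt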

Other mitigation strategies include:
  • A tool that monitors the flakiness of tests and if the flakiness is too high, it automatically quarantines the test. Quarantining removes the test from the critical path and files a bug for developers to reduce the flakiness. This prevents it from becoming a problem for developers, but could easily mask a real race condition or some other bug in the code being tested.
  • Another tool detects changes in the flakiness level of tests and works to identify the change that caused the test to change the level of flakiness.

In summary, test flakiness is an important problem, and Google is continuing to invest in detecting, mitigating, tracking, and fixing test flakiness throughout our code base. For example:
  • We have a new team dedicated to providing accurate and timely information about test flakiness to help developers and build monitors so that they know whether they are being harmed by test flakiness.
  • As we analyze the data from flaky test executions, we are seeing promising correlations with features that should enable us to identify a flaky result accurately without re-running the test.


By continually advancing the state of the art for teams at Google, we aim to remove the friction caused by test flakiness from the core developer workflows.

Categories: Blogs

Test automation as an orchard

Dorothy Graham Blog - Thu, 05/26/2016 - 13:46
At StarEast in May 2016, I was kindly invited to give a lightning keynote, which I did on this analogy. Hope you find it interesting and useful!

-----------------------------------------------------------------------

Automation is SO easy.
Let me rephrase that - automation often seems to be very easy. When you see your first demo, or run your first automated test, it’s like magic - wow, that’s good, wish I could type that fast.
But good automation is very different to that first test.
If you go into the garden and see a lovely juicy fruit hanging on a low branch, and you reach out and pick it, you think, "Wow, that was easy - isn’t it good, lovely and tasty".
But good test automation is more like building an orchard to grow enough fruit to feed a small town.
Where do you start?
First you need to know what kind of fruit you want to grow - apples? oranges? (oranges would not be a good choice for the UK). You need to consider what kind of soil you have, what kind of climate, and also what the market will be - you don’t want to grow fruit that no one wants to buy or eat.
In automation, first you need to know what kind of tests you want to automate, and why. You need to consider the company culture, other tools, what the context is, and what will bring lasting value to your business.
Growing pains?
Then you need to grow your trees. Fortunately automation can grow a lot quicker than trees, but it still takes time - it’s not instant.
While the trees are growing, you need to prune them and prune them hard especially in the first few years. Maybe you don’t allow them to fruit at all for the first 3 years - this way you are building a strong infrastructure for the trees so that they will be stronger and healthier and will produce much more fruit later on. You may also want to train them to grow into the structure that you want from the trees when they are mature.
In automation, you need to prune your tests - don’t just let them grow and grow and get all straggly. You need to make sure that each test has earned its place in your test suite, otherwise get rid of it. This way you will build a strong infrastructure of worthwhile tests that will make your automation stronger and healthier over the years, and it will bring good benefits to your organisation. You need to structure your automation (a good testware architecture) so that it will give lasting benefits.
Feeding, pests and diseases
Over time, you need to fertilise the ground, so that the trees have the nourishment they need to grow to be strong and healthy.
In automation, you need to nourish the people who are working on the automation, so that they will continue to improve and build stronger and healthier automation. They need to keep learning, experimenting, and be encouraged to make mistakes - in order to learn from them.
You need to deal with pests - bugs - that might attack your trees and damage your fruit.
Is this anything to do with automation? Are there bugs in automated scripts? In testing tools? Of course there are, and you need to deal with them - be prepared to look for them and eradicate them.
What about diseases? What if one of your trees gets infected with some kind of blight, or suddenly stops producing good fruit? You may need to chop down that infected tree and burn it, because if you don’t, this blight might spread to your whole orchard.
Does automation get sick? Actually, a lot of automation efforts seem to decay over time - they take more and more effort to maintain. Technical debt builds up, and often the automation dies. If you want your automation to live and produce good results, you might need to take drastic action and re-factor the architecture if it is causing problems. Because if you don’t, your whole automation may die.
Picking and packing
What about picking the fruit? I have seen machines that shake the trees so the fruit can be scooped up - that might be ok if you are making cider or applesauce, but I wouldn’t want fruit picked in that way to be in my fruit bowl on the table. Manual effort is still needed. The machines can help but not do everything (and someone is driving the machines).
Test execution tools don’t do testing, they just run stuff. The tools can help and can very usefully do some things, but there are tests that should not be automated and should be run manually. The tools don’t replace testers, they support them.
We need to pack the fruit so it will survive the journey to market, perhaps building a structure to hold the fruit so it can be transported without damage.
Automation needs to survive too - it needs to survive more than one release of the application, more than one version of the tool, and may need to run on new platforms. The structure of the automation, the testware architecture, is what determines whether or not the automated tests survive these changes well.
Marketing, selling, roles and expectations
It is important to do marketing and selling for our fruit - if no one buys it, we will have a glut of rotting fruit on our hands.
Automation needs to be marketed and sold as well - we need to make sure that our managers and stakeholders are aware of the value that automation brings, so that they want to keep buying it and supporting it over time.
By the way, the people who are good at marketing and selling are probably not the same people who are good at picking or packing or pruning - different roles are needed. Of course the same is true for automation - different roles are needed: tester, automator, automation architect, champion (who sells the benefits to stakeholders and managers).
Finally, it is important to set realistic expectations. If your local supermarket buyers have heard that eating your fruit will enable them to leap tall buildings at a single bound, you will have a very easy sell for the first shipment of fruit, but when they find out that it doesn’t meet those expectations, even if the fruit is very good, it may be seen as worthless.
Setting realistic expectations for automation is critical for long-term success and for gaining long-term support; otherwise if the expectations aren’t met, the automation may be seen as worthless, even if it is actually providing useful benefits.
Summary
So if you are growing your own automation, remember these things:
  • it takes time to do it well
  • prepare the ground
  • choose the right tests to grow
  • be prepared to prune / re-factor
  • deal with pests and diseases (see previous point)
  • make sure you have a good structure so the automation will survive change
  • different roles are needed
  • sell and market the automation and set realistic expectations
  • you can achieve great results


I hope that all of your automation efforts are very fruitful!


Categories: Blogs

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 05/26/2016 - 06:40

This month's Lean Coffee was hosted by Cambridge Consultants. Here are some brief, aggregated comments on topics covered by the group I was in.

What is your biggest problem right now? How are you addressing it?
  • A common answer was managing multi-site test teams (in-house and/or off-shore)
  • Issues: sharing information, context, emergent specialisations in the teams, communication
  • Weinberg says all problems are people problems
  • ... but the core people problem is communication
  • Examples: Chinese whispers, lack of information flow, expertise silos, lack of visual cues (e.g. in IM or email)
  • Exacerbated by time zone and cultural differences; lack/difficulty of the ability to sit down together, ...
  • Trying to set up communities of practice (e.g. Spotify Guilds) to help communication, iron out issues
  • Team splits tend to be imposed by management
  • But note that most of the problems can exist in a colocated team too

  • Another issue was adoption of Agile
  • Issues: lack of desire to undo silos, too many parallel projects, too little breaking down of tasks, insufficient catering for uncertainty, resources maxed out
  • People often expect Agile approaches to "speed things up" immediately
  • On the way to this Lean Coffee I was listening to Lisa Crispin on Test Talks: "you’re going to slow down for quite a long time, but you’re going to build a platform ... that, in the future, will enable you to go faster"

How do you get developers to be open about bugs?
  • Some developers know about bugs in the codebase but aren't sharing that information. 
  • Example: code reviewer doesn't flag up side-effects of a change in another developer's code
  • Example: developers get bored of working in an area so move on to something else, leaving unfinished functionality
  • Example: requirements are poorly defined and there's no appetite to clarify them so code has ambiguous aims
  • Example: code is built incrementally over time with no common design motivation and becomes shaky
  • Is there a checklist for code review that both sides can see?
  • Does bug triage include a risk assessment?
  • Do we know why the developers aren't motivated to share the information?
  • Talking to developers, asking to be shown code and talked through algorithms can help
  • Watching commits go through; looking at the speed of peer review can suggest places where effort was low

Testers should code; coders should test
  • Discussion was largely about testers in production code
  • Writing production code (even under guidance in non-critical areas) gives insight into the production
  • ... but perhaps it takes testers away from core skills; those where they add value to the team?
  • ... but perhaps testers need to be wary of not simply reinforcing skills/biases we already have?
  • Coders do test! Even static code review is testing
  • Why is coding special? Why shouldn't testers do UX, BA, Marketing, architecting, documentation, ...
  • Testing is doing other people's jobs
  • ... or is it?
  • These kinds of discussions seem to be predicated on the idea that manual testing is devalued
  • Some discussion about whether test code can get worse when developers work on it
  • ... some say that they have never seen that happen
  • ... some say that developers have been seen to deliberately over-complicate such code in order to make it an interesting coding task
  • ... some have seen developers add very poor test data to frameworks 
  • ... but surely the same is true of some testers?
  • We should consider automation as a tool, rather than all (writing product code) or nothing (manual tester). Use it when it makes sense, e.g. to generate test data

Ways to convince others that testing is adding value
  • Difference between being seen as personally valuable against the test team adding value
  • Overheard: "Testing is necessary waste"
  • Find issues that your stakeholders care about
  • ... these needn't be in the product, they can be e.g. holes in requirements
  • ... but the stakeholders need to see what the impact of proceeding without addressing the issues could be
  • Be humble and efficient and professional and consistent and show respect to your colleagues and the project
  • Make your reporting really solid - what we did (and didn't); what we found; what the value of that work was (and why)
  • ... even when you find no issues
Image: https://flic.kr/p/5LH9o

Categories: Blogs

My OSS CI/CD Pipeline

Jimmy Bogard - Tue, 05/24/2016 - 23:39

As far back as I’ve been doing open source, I’ve borrowed other project’s build scripts. Because build scripts are almost always committed with source control, you get to see not only other projects’ code, but how they build, test and package their code as well.

And with any long-lived project, I’ve changed the build process for my projects more times than I care to count. AutoMapper, as far as I can remember, started off on NAnt (yes, it’s that old).

These days, I try to make the pipeline as simple as possible, and AppVeyor has been a big help with that goal.

The CI Pipeline

For my OSS projects, all work, including my own, goes through a branch and pull request. Some source control hosts allow you to enforce this behavior, including GitHub. I tend to leave this off on OSS, since it’s usually only me that has commit rights to the main project.

All of my OSS projects are now on the soon-to-be-defunct project.json, and all are either strictly project.json or a mix of main project being project.json and others being regular .csproj. Taking MediatR as the example, it’s entirely project.json, while AutoMapper has a mix for testing purposes.

Regardless, I still rely on a build script to execute a build that happens both on the local dev machine and the server. For MediatR, I opted for just a plain PowerShell script that I borrowed from projects online. The build script really represents my build pipeline in its entirety, and it’s important to me that this build script actually lives as part of my source code and is not tied up in a build server. Its steps are:

  • Clean
  • Initialize
  • Build
  • Test
  • Package

Not very exciting, and similar to many other pipelines I’ve seen (in fact, I borrow a lot of ideas from Maven, which has a predefined pipeline).

The script for me then looks pretty straightforward:

# Clean
if(Test-Path .\artifacts) { Remove-Item .\artifacts -Force -Recurse }

# Initialize
EnsurePsbuildInstalled

exec { & dotnet restore }

# Build
Invoke-MSBuild

# Test
exec { & dotnet test .\test\MediatR.Tests -c Release }

# Package
$revision = @{ $true = $env:APPVEYOR_BUILD_NUMBER; $false = 1 }[$env:APPVEYOR_BUILD_NUMBER -ne $NULL];
$revision = "{0:D4}" -f [convert]::ToInt32($revision, 10)

exec { & dotnet pack .\src\MediatR -c Release -o .\artifacts --version-suffix=$revision }

Cleaning is just removing an artifacts folder, where I put completed packages. Initialization is installing required PowerShell modules and running a “dotnet restore” on the root solution.

Building is just msbuild on the solution, executed through a PS module, and msbuild defers to the dotnet CLI as needed. Testing is the “dotnet test” command against xUnit. Finally, packaging is “dotnet pack”, passing in a special version number I get from AppVeyor.

As part of my builds, I include the incremental build number in my packages. Because of how SemVer pre-release ordering works, I need to make sure the build number sorts alphabetically, so I pad it with leading zeroes, and “57” becomes “0057”.

In my project.json file, I’ve set the version up so that the version from the build gets substituted at package time. But my project.json file determines the major/minor/revision:

{
  "version": "4.0.0-beta-*",
  "authors": [ "Jeremy D. Miller", "Joshua Flanagan", "Josh Arnold" ],
  "packOptions": {
    "owners": [ "Jeremy D. Miller", "Jimmy Bogard" ],
    "licenseUrl": "https://github.com/HtmlTags/htmltags/raw/master/license.txt",
    "projectUrl": "https://github.com/HtmlTags/htmltags",
    "iconUrl": "https://raw.githubusercontent.com/HtmlTags/htmltags/master/logo/FubuHtml_256.png",
    "tags": [ "html", "ASP.NET MVC" ]
  },
}

This build script is used not just locally, but on the server as well. That way I can ensure I’m running the exact same build process reproducibly in both places.

The CD Pipeline

The next part is interesting, as I’m using AppVeyor to be my CI/CD pipeline, depending on how it detects changes. My goals with my CI/CD pipeline are:

  • Pull requests each get built, and I can see their status inside GitHub
  • Pull requests do not push a package (but can create a package)
  • Merges to master push packages to MyGet
  • Tags to master push packages to NuGet

I used to push *all* packages to NuGet, but what wound up happening is I would have to move slower with changes, because things would just “show up” for people before I had a chance to fully think through the changes I was making public.

I still have pre-release packages, but these are a bit more thought-out than I have had in the past.

Finally, because I’m using AppVeyor, my entire build configuration lives in an “appveyor.yml” file that lives with my source control. Here’s MediatR’s:

version: '{build}'
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
  - master
nuget:
  disable_publish_on_pr: true
build_script:
- ps: .\Build.ps1
test: off
artifacts:
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:
- provider: NuGet
  server: https://www.myget.org/F/mediatr-ci/api/v2/package
  api_key:
    secure: zKeQZmIv9PaozHQRJmrlRHN+jMCI64Uvzmb/vwePdXFR5CUNEHalnZdOCg0vrh8t
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: t3blEIQiDIYjjWhOSTTtrcAnzJkmSi+0zYPxC1v4RDzm6oI/gIpD6ZtrOGsYu2jE
  on:
    branch: master
    appveyor_repo_tag: true

First, the build version I set to be just the build number. Because my project.json file drives the package/assembly version, I don’t need anything more complicated here. I also don’t want any other branches built, just master and pull requests. This makes sure that I can still create branches/PRs inside the same repository without having to be forced to use a second repository.

The build/test/artifacts sections should be self-explanatory; I want everything flowing through the build script, so I don’t want AppVeyor discovering things and trying to figure them out itself. Explicit is better.

Finally, the deployments. I want every package to go to MyGet, but only tagged commits to go to NuGet. The first deploy configuration is the MyGet configuration, that deploys only on master to my MyGet configuration (with user-specific encrypted API key). The second is the NuGet configuration, to only deploy if AppVeyor sees a tag.

For public releases, I:

  • Update the project.json as necessary, potentially removing the asterisk from the version
  • Commit, tag the commit, and push both the commit and the tag (roughly the commands below).
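
In git terms, that last step is roughly this (hypothetical version number):

git commit -am "Release 4.0.0-beta"
git tag v4.0.0-beta
git push origin master --tags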

With this setup, the MyGet feed contains *all* packages, not just the CI ones. The NuGet feed is then just a “curated” feed of packages of more official packages.

The last part of a release, publicizing it, is a little bit more work. I still like GitHub releases, but I haven’t yet found a great way of automating a “tag the commit and create a release” process. Instead, I use the GitHubReleaseNotes tool to create the markdown of a release based on the tags I apply to my issues for a release. Finally, I make sure that I update any documentation in the wiki for the release.

I like where I’ve ended up so far, and there’s always room for improvement, but it’s a far cry from when I used to have to manually package and push to NuGet.


Categories: Blogs

TestOps: Chasing the White Whale

Testing TV - Tue, 05/24/2016 - 18:12
TestOps is the reclusive member of the DevOps family. Reports of sightings have been few and far between. Although we have heard stories, most of us don’t know anyone who’s actually seen it in the wild. This presentation draws on my personal experience as a tester, searching for clues about what TestOps is. It […]
Categories: Blogs

Testing does not prevent defects

There seems to be a bunch of discussion regarding whether testers prevent defects or not.

The main source of confusion that I see is confusing ‘testers’ with ‘testing’. Clearing this up seems pretty simple.

Testing does not prevent defects. Testers may. I do. But I don’t call that part of my work ‘testing’, even if testing and experiment design is a part of that work. Clarifying which aspects of my work are ‘testing’ and which are not is important for at least two reasons.

The first is that I can discuss those skills clearly and keep that knowledge in one place. This allows me to make shortcut references to those skills when I apply them in other roles I may be playing.

Secondly, time I spend playing some other role that may be about defect prevention (or team/product alignment, shared focus, or anything else) usually involves (sometimes irreconcilable) tradeoffs in the quality of my testing work, or less available time for test effort. If I’m not making this clear to the stakeholders who matter, or if I am unaware of the potential for conflict, I expose myself to a number of potential problems.

Categories: Blogs

Round Table Presentation 1: Mobile Cross-Platform Testing

Testing TV - Thu, 05/19/2016 - 16:47
This video presents the results of a round table about mobile cross-platform testing held at the Google Test Automation Conference (GTAC). Video producer: https://developers.google.com/google-test-automation-conference/
Categories: Blogs

AutoMapper 5.0 Beta released

Jimmy Bogard - Wed, 05/18/2016 - 20:21

This week marks a huge milestone in AutoMapper-land, the beta release of the 5.0 work we’ve been doing over the last many, many months.

In the previous release, 4.2.1, I obsoleted much of the dynamic configuration API in favor of an explicit configuration step. That means you only get to use “Mapper.Initialize” or “new MapperConfiguration”. You can still use a static Mapper.Map call, or create a new Mapper object with “new Mapper(configuration)”. The last 4.x release really paved the way to have static and instance APIs that were in lockstep with each other.

In previous versions of AutoMapper, you could call “Mapper.CreateMap” anywhere in your code. This made two things difficult: performance optimization and dependent configuration. You could get all sorts of weird bugs if you called the configuration in the “wrong” order.

But that’s gone. In AutoMapper 5.0, the configuration is broken into two steps:

  1. Gather configuration
  2. Apply configuration in the correct order

By applying the configuration in the correct order, we can ensure that there’s no order dependency in your configuration, we handle all of that for you. It seems silly in hindsight, but at this point the API inside of AutoMapper is strictly segregated between “DSL API”, “Configuration” and “Execution”. By separating all of these into individual steps, we were able to do something rather interesting.

With AutoMapper 5.0, we are able to build execution plans based on the type map configuration, mapping based on exactly your configuration. In previous versions, we would have to re-assess decisions every single time we mapped - things like “do you have a condition configured?” and so on - resulting in huge performance hits.

A strict separation meant we could overhaul the entire execution engine, so that each map is a precisely built expression tree only containing the mapping logic you’ve configured. The end result is a 10X performance boost in speed, but without sacrificing all of the runtime exception logic that makes AutoMapper so useful.

One problem with raw expression trees is that if there’s an exception, you’re left with no stack trace. When we built up our execution plan in an expression tree, we made sure to keep those good parts of capturing context when there’s a problem so that you know exactly which property in exactly which point in the mapping had a problem.

Along with the performance work, we tightened up quite a bit of the API, making configuration consistent throughout. Additionally, a couple of added benefits of moving to expressions:

  • ITypeConverter and IValueResolver are both generic, making it very straightforward to build custom resolvers (see the sketch after this list)
  • Supporting open generic type converters with one or two parameters
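
For instance, a custom resolver might look something like this (a sketch assuming the 5.0 generic interface shape; Person and PersonDto are made-up types):

public class AgeResolver : IValueResolver<Person, PersonDto, int>
{
    // Resolve receives the source, destination, current destination member
    // value and the resolution context, and returns the resolved value.
    public int Resolve(Person source, PersonDto destination, int destMember, ResolutionContext context)
    {
        return DateTime.Today.Year - source.BirthYear;
    }
}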

Overall, it’s been a release full of things I’ve wanted to tackle for years but never quite got the design right for. Special thanks to TylerCarlson1 and lbargaoanu, both of whom passed the 100-commit mark on AutoMapper.


Categories: Blogs

The Power of Agile is in your Customers and Employees

I’m Agile, you’re Agile, everyone is Agile. Or folks think they are. But are they really? If Agile is only a process to you, Agile will fail. If Agile is pretending certainty without validating with customers, then Agile will fail. If Agile is commanded from above with little ownership from the team, Agile will fail. More importantly, not only will Agile fail, so will your business. Agile is a move to a lean culture focusing on customers and what they find valuable, and on employees, who are the engine that can create that value. Agile is effectively about creating a thriving business.
I believe there are two primary success factors in creating a thriving business: a culture where customers matter and employees matter. I’m not talking about the lip service that is prevalent today. In some cases, we see quite the opposite, where employees are disenfranchised and customers are rarely engaged. Instead, the goal is to have a culture and practices in place that truly gain the benefits of engaging with customers and employees. Through the customer and the employee, a company draws its power within an agile culture and, I contend, within any thriving company. When you have a riveting focus on the customer and you believe that an engaged customer matters, then you have the basis for a relationship where you can truly understand what the customer wants. When you have a sharp focus on employees and provide them the space to make decisions and own their work, then you will begin to understand the value an engaged employee base can provide.
If the values are sincerely translated into organizational objectives and agile approaches are applied, then this can act as a differentiator between the success of your organization and that of others. Of course, every company likes to say that employees and customers matter, but are their objectives and actions really aligned with these values? Upon closer inspection, the values should translate into objectives focusing on customer engagement and employee engagement.
  • Customer engagement focuses on establishing meaningful and honest customer relationships with the goal of initiating continuous customer feedback to truly identify what is valuable to the customer. This includes establishing all of the activities involved in a Customer Feedback Vision.
  • Employee engagement focuses on empowering employees so they can self-organize into teams and can own and be a part of the decision-making process at their own level. When employees have ownership, they have more passion for their work. When they have more passion, they give 110%.

Then we add the “secret ingredient” of applying a continuous and adaptive approach (a.k.a. agile culture, processes, methods, practices, and techniques). If done properly, with the ability to adapt, this can lead to an increase in customer sales and an increase in team productivity. That finally leads to your incentive: an increase in company profits.
Now is the time to take a moment. Are your employees disenfranchised or fully engaged? Are your customers rarely engaged, or is their feedback continuously sought? Is Agile just a trend that others should do, or are you serious about Agile and the culture shift it requires? Keep in mind, the combination of customer and employee engagement within an Agile context isn’t just a good idea; it is great for business.
PS - to read more about the importance of customers and employees, consider reading Chapter 3 of the book entitled Being Agile.  
Categories: Blogs

SATURN 2016, San Diego

thekua.com@work - Sun, 05/15/2016 - 16:54

I first heard of SATURN via social media through Michael Keeling, an IBM employee who spoke at XP2009 many years ago. Sam Newman spoke last year about microservices. Although the conference has a Call For Papers (CFP) each year, they also have a small number of invited speakers, and Michael organised for me to go along to talk about Evolutionary Architecture.

SATURN is a conference organised by the SEI (Software Engineering Institute). The SEI is probably best known for CMM, although there is now another institute responsible for it. The conference has been running since 2005 and has a long history of focusing on architectural concerns.

Architecting the Unknown keynote by Grady Booch (@Grady_Booch)

What I really liked about the conference

I believe that one of SATURN’s greatest strengths is its focus on architectural thinking over the application of the latest tools. Unlike some other conferences, I felt the programme assembled a good collection of experience reports and presentations that emphasised architectural principles, rather than focusing on current tools.

Although this may be less popular for people wanting to learn how to use a specific tool, I feel this emphasis is much more valuable and the lessons longer lasting.

What I also really enjoyed about the conference was its relatively small size - just over 200 participants from various parts of the world. Having a good location and a small size allowed for better quality conversations, both with other speakers and with attendees. One good example of the conference facilitating this was an office hours session where attendees could easily ask questions of speakers. I shared my office hours with Eoin Woods (author of Software Systems Architecture) and Grady Booch (IBM Fellow, and one of the Three Amigos best known for developing the Unified Modeling Language).

Conference dinner at a baseball game for SATURN 2016

Another example to reinforce that was connecting with Len Bass (one of the authors of Software Architecture in Practice) at the conference dinner (at a baseball game).

Other observations

During the conference I spoke with many people carrying various architect titles. Interestingly, many came from larger organisations and government agencies, which shouldn’t have been any surprise given the history of the conference. I remember meeting and speaking with one particular Enterprise Architect (EA), who seemed to be one of the most pragmatic EAs I’d met, and who was trying at all costs to stop his board from randomly signing large contracts with product vendors like Oracle and IBM without the proper diligence about understanding what they were actually building.

At the same time, I met a number of architects struggling to help their organisations see the value of architecture and the role of the architect.

My talk

I spoke after Booch’s initial keynote, “Architecting for the Unknown”, which led really well into the topic I was speaking on, “Evolutionary Architecture”. I had a packed room, as it was a topic that resonated with people. At one point the conference organisers brought more chairs into the room, and there was still only standing room for the 90-minute talk! I had some great questions and conversations throughout, and found out later that I won the award for the best conference talk in the “Architecture in Practice” category!

SATURN Evolutionary Architecture talk audience

I’d definitely recommend attending SATURN if you’re interested in building architectural thinking and want an opportunity to connect with architects across the industry. The size is great for conversations and sharing experiences, and with next year’s edition scheduled for Colorado, it will be really interesting too.

Final thanks

I’d like to thank the organising group - calling out Michele Falce (Technical Event Coordinator), Bill Pollak (General Chair), and Jørn Ølmheim and Amine Chigani (Technical Co-Chairs) - who did a great job putting together a fantastic conference; John Klein for hosting Office Hours; and Michael Keeling for organising an invite for me.

Categories: Blogs