
To preload or not to preload...

Rico Mariani's Performance Tidbits - Fri, 08/29/2014 - 21:25

Q:

My application starts slowly, so I want to preload it to avoid that problem.  Should I be worried?

A:

Well, in short, there are lots of concerns.  Preloading things you may or may not need is a great way to waste a ton of memory and generally make the system less usable overall.

I’m often told that the answer to a performance problem is to simply preload the slow stuff… unfortunately that doesn’t work as a general solution if everyone does it.  It’s classic “improve the benchmark” thinking.

When developing for Windows you have to think about all kinds of scenarios, such as the case where several hundred users are trying to share a server, each with their own user session.  Your application might also need to run in a very memory-constrained environment like a small tablet or some such – you do not want to be loading extra stuff in those situations.
 
The way to make a system responsive is to KEEP IT SIMPLE.  If you don’t do that, then it won’t matter that you’ve preloaded it -- when the user actually gets around to starting the thing in a real world situation, you will find that it has already been swapped out to try to reclaim some of the memory that was consumed by preloading it.  So you will pay for all the page faults to bring it back, which is probably as slow as starting the thing in the first place.  In short, you will have accomplished nothing other than using a bunch of memory you didn’t really need.

Preloading in a general purpose environment is pretty much a terrible practice.  Instead, pay for what you need when you need it and keep your needs modest.  You only have to look at the tray at the bottom right of your screen, full of software that was so sure it was vitally important to you that it insisted on loading at boot time, to see how badly early loading scales up.

Adding fuel to this already bonfire-sized problem is this simple truth: any application preloading itself competes with the system trying to do the very same thing.  Windows has long included powerful features to detect the things you actually use and get them into the disk cache before you actually use them, whether they are code or data.  Forcing your code and data to be loaded is just as likely to create more work evicting the unnecessary bits from memory to make room for something immediately necessary; doing nothing would have left commonly used bits ready to go, with no effort on your part.

See: http://en.wikipedia.org/wiki/Windows_Vista_I/O_technologies

Bottom line, preloading is often a cop out.  Better to un-bloat.


On adopting high end perf tools to study micro-architectural phenomena

Rico Mariani's Performance Tidbits - Fri, 08/29/2014 - 20:05

Huge words of caution: you can bury yourself in this kind of stuff forever, and for my money it is rarely the way to go.  It’s helpful to know where you stand on CPI (cycles per instruction), for instance, but it’s much more typical to get results by observing that you (e.g.) have a ton of cache misses and therefore should use less memory.  Using less memory is always a good thing.

You could do meaningful analysis for a very long time without resorting to micro-architectural phenomena simply by studying where your CPU goes.

It is not only the case that (e.g.) ARM does things differently than (e.g.) x86 products, it is also the case that every x86 processor family you have ever heard of does it differently than every other one you have ever heard of.  But that turns out to be not that important for the most part.  Because the chief observations like “we branch too much” are true universally.  Just as “we use too much memory” is basically universally true.

The stock observations that you should:

1. Use less memory
2. Use fewer pointers and denser data structures
3. Not jump around so much

Are essentially universally true (a small sketch of point 2 follows below).  The question really comes down to what you can get away with on any given processor, because its systems will save the day for you.  But even that is a bit of a lie, because the next question is “what else could you be doing and your program would still run well?” because the fact is there is always other stuff going on, and if you minimize your use of CPU resources generally you will be a better citizen overall.
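
To make point 2 concrete, here is a minimal sketch, not from the original post, of trading an array of pointer-rich objects for one dense typed array.  JavaScript is used purely for illustration; the locality principle is the same in any language.

// Array-of-objects: a million tiny heap allocations; iterating
// chases a pointer per element, which is cache-unfriendly.
var points = [];
for (var i = 0; i < 1000000; i++) {
    points.push({ x: i, y: i * 2 });
}

// Dense layout: one contiguous buffer with interleaved x/y values.
// Same information, far fewer allocations, far better locality.
var dense = new Float64Array(2 * 1000000);
for (var j = 0; j < 1000000; j++) {
    dense[2 * j] = j;         // x
    dense[2 * j + 1] = j * 2; // y
}

// Summing the y values walks memory sequentially in the dense version.
function sumY(arr) {
    var total = 0;
    for (var k = 1; k < arr.length; k += 2) {
        total += arr[k];
    }
    return total;
}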

In short, the top-level metrics (CPU, disk, memory, network) will get you very far indeed without resorting to mispredicts and the like.  If you want to use the tools effectively, with broad results, I strongly recommend that you target the most important metrics, like L2 cache misses, and reduce them.  That’s always good.  Pay much less attention to the specific wall-clock consequence in lab scenarios and instead focus on reducing your overall consumption.

And naturally this advice must be tempered with focus on your customers’ actual problems, and forgive me for being only approximately correct in 400 words or less.

 


The Value of Checklists

For many years, I have included checklists on my Top 10 list of test tools (I also include "your brain"). Some people think this is ridiculous and inappropriate, but I have my reasons. I'm also not the only one who values checklists.

Atul Gawande makes a compelling case for checklists, especially in critical life-or-death situations, in his book "The Checklist Manifesto." In reviewing the book on Amazon.com, Malcolm Gladwell writes, "Gawande begins by making a distinction between errors of ignorance (mistakes we make because we don't know enough), and errors of ineptitude (mistakes we make because we don't make proper use of what we know). Failure in the modern world, he writes, is really about the second of these errors, and he walks us through a series of examples from medicine showing how the routine tasks of surgeons have now become so incredibly complicated that mistakes of one kind or another are virtually inevitable: it's just too easy for an otherwise competent doctor to miss a step, or forget to ask a key question or, in the stress and pressure of the moment, to fail to plan properly for every eventuality."

Gladwell also makes another good point, "Experts need checklists--literally--written guides that walk them through the key steps in any complex procedure. In the last section of the book, Gawande shows how his research team has taken this idea, developed a safe surgery checklist, and applied it around the world, with staggering success."

In testing, we face similar challenges in testing all types of applications - from basic web sites to safety-critical systems. It is very easy to miss a critical detail in many of the things we do - from setting up a test environment to performing and evaluating a test.

I have a tried and true set of checklists that also help me think of good tests to document and perform. It is important to note that a checklist leads to tests, but it is not the same as the test cases or the tests they represent.

I have been in some organizations where just a simple set of checklists would transform their test effectiveness from zero to over 80%! I even offer them my checklists, but there has to be the motivation (and humility) to use them correctly.

Humility? Yes, that's right. We miss things because we get too sure of ourselves and think we don't need something as lowly, simple and repetitive as a checklist.

Checklists cost little to produce, but they have a high yield in value. By preventing just one production software defect, you save thousands of dollars in rework.

And...your checklists can grow as you learn new things to include. (This is especially true for my travel checklist!) So they are a great vehicle for process improvement.

Checklists can be great drivers for reviews as well. However, many people also skip the reviews. This is also unfortunate because reviews have been proven to be more effective than dynamic testing. Even lightweight peer reviews are very effective as pointed out in the e-book from Smartbear, Best Kept Secrets of Peer Code Reviews.

Now, there is a downside to checklists. That is, the tendency just to "check the box" without actually performing the action. So, from the QA perspective, I always spot check to get some sense of whether or not this is happening.

Just as my way of saying "thanks" for reading this, here is a link to one of my most popular checklists for common error conditions in software.

I would love to hear your comments about your experiences with checklists.

Toxic Repo

The Build Doctor - Fri, 08/29/2014 - 02:51
If you can’t dispose of toxic waste (say, by burning it or launching it into space using surplus ICBM’s), then you probably need to contain it: stop innocents from stumbling across it, or stop the...

Visit The Build Doctor for the full article.

Are you offering a career in testing or just a job?

The Social Tester - Thu, 08/28/2014 - 13:26
Many companies are offering a job in testing. Many companies are offering a career in testing. A career is a series of experiences. These experiences may come from many  jobs at many companies. Or they may come from a single place of work with a varied set of experiences. A job is what some companies […]

How to display a country map with SVG and D3js

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

I’ve been babbling recently with charts, and most of them were done with DimpleJS.

However, what sits behind DimpleJS is d3.js, which is an amazing tool for drawing anything in SVG.

So to babble some more, I’ve decided to do something simple. Draw Canada.

The Data

I’ve taken the data from this repository, which contains every line that forms our Maple Syrup Country. Ours is called “CAN.geo.json”. This is a GeoJSON file, a format that allows you to easily parse geolocation data without a hitch.

The Code
var svg = d3.select("#chartContainer")
    .append("svg")
    .attr("style", "border: solid 1px black") // "border:" was missing; without it the style is a no-op
    .attr("width", "100%")
    .attr("height", "350px");

// Mercator projection; center() takes [longitude, latitude]
var projection = d3.geo.mercator().center([45, 55]);
var path = d3.geo.path().projection(projection);

var g = svg.append("g");

// Load the GeoJSON and append one SVG path per feature
d3.json("/data/CAN.geo.json", function (error, json) {
    g.selectAll("path")
        .data(json.features)
        .enter()
        .append("path")
        .attr("d", path)
        .style("fill", "red");
});
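
If the map comes out off-center or too small, the projection is the knob to turn. Here is a minimal sketch (d3 v3 API; the numbers are illustrative guesses for Canada, not values from the original post):

// Roughly center on Canada and zoom in; tweak scale/translate to taste
var projection = d3.geo.mercator()
    .center([-100, 60])     // [longitude, latitude] to center on
    .scale(300)             // default is 150; bigger means closer
    .translate([400, 175]); // pixel where the center lands (half of 800x350)
var path = d3.geo.path().projection(projection);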
The Result

[Rendered SVG map of Canada]

Conclusion

Of course this is nothing very amazing. It’s only a shape. But it could be the building block necessary to create the next eCommerce world-wide sales revenue report.

Who knows… it’s just an idea.


Animating your charts with Storyboard charts from DimpleJS and d3js

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17


Storyboards are charts/graphs that tell a story.

To have a graph, you need a timeline. Whether it’s days, weeks, months or years… you need a timeline of what happens. Then to have a chart, you need two axes: one tells one version of the story, the other relates to it. Then you move things forward in time and you move the data points. For each of those points, you also need to be able to label it.

So let’s make a list of what we need.

  1. Data on a timeline.
  2. One numerical series.
  3. Another numerical series that correlates to the first in some way.
  4. A label to identify each point on the graph.

I’ve taken the time to think about it, and there’s one type of data that’s easy to come up with (I’m just writing a technical blog post after all).

Introducing the DataSet

I’ve taken the GDP and population per country for the last 30 years from World Economics and merged them into one single file.

Note: World Economics is very keen to share data with you in formats that are more readable than what is on their website. Contact them through their Twitter account if you need their data!

Sounds simple, but it took me over an hour to actually merge all that data. So contact them to get a proper format that is more developer friendly.
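
For context, the merged file ends up shaped roughly like this (the column layout is what matters; the value placeholders are mine, and the real figures come from World Economics):

Year,Country,GDP,Population
2000,Canada,<gdp>,<population>
2000,France,<gdp>,<population>
2001,Canada,<gdp>,<population>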

Here’s the final result:

[Animated storyboard: GDP vs. population bubbles, one frame per year]

The Code

That’s the most bonkers thing ever. Once you have the data properly set up, this doesn’t require much code. Here’s the code to generate the same graph on your end:

$.ajax("/GDP.csv", {
    success: function (data) {
        var csv = d3.csv.parse(data);

        var post3 = function () {
            var svg = dimple.newSvg("#storyboardGraph", 800, 600);
            var chart = new dimple.chart(svg, csv);

            csv = dimple.filterData(csv, "Year", ["2000", "2001", "2002", "2003",
                "2004", "2005", "2006", "2007", "2008", "2009", "2010", "2011",
                "2012", "2013", ]);
            
            var frame = 2000;
            chart.addMeasureAxis("x", "GDP");
            chart.addMeasureAxis("y", "Population");
            chart.addSeries(["Country"], dimple.plot.bubble);
            var story = chart.setStoryboard("Year");
            story.frameDuration = frame;
            story.addOrderRule("Date");
            chart.draw();
        };
        post3();
    }
});
Conclusion

Stop using weird graphing libraries that will cost you an arm and a leg. Your browser (both desktop and mobile) can handle this kind of technology. Start using it now.

See DimpleJS for more examples and fun scenarios to work with. Don’t forget to also follow John Kiernander on Twitter.

As usual, the source is available on GitHub.

Enjoy!


Slow Cheetah is going in maintenance mode

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

Just a quick blog post to let you know that it has been announced that Slow Cheetah is going into maintenance mode. I don’t have alternatives or a scoop.

I’m just trying to get the word out as much as possible.

What is Slow Cheetah?

It’s a tool to apply transforms to XML files such as App.config, in the same way the built-in Web.config transforms work (those built-in Web.config transforms will not be affected).

What does that mean for me?

It means that it won’t be supported in the next release of Visual Studio. No new features will be added. No fixes for future regressions will be applied.

What does it really mean?

Stop using it. It will still work for your current project, but if you are expecting a quick migration when you upgrade Visual Studio, think again.

It might work but nothing is guaranteed.

What if I don’t want to change?

The code is open source. You can start maintaining it yourself, but Sayed won’t be doing any more work on it.


NuGet–Upgrading your packages like a boss

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

How often do you get onto a project and, just to assess where things are, open “Manage NuGet Packages for Solution…” and go to the Updates tab?

Then… you see this.

[Screenshot: the Updates tab listing pending updates for nearly every package in the solution]

I mean… there’s everything in there. From JavaScript dependencies to ORMs. You know that you are in for a world of trouble.

Problem

You see the “Update All” button and it’s very tempting. However, you know you would be applying all kinds of upgrades at once. This could be fine when starting a project, but when maintaining an existing one… you are literally pulling in new features and bug fixes for all of those libraries.

A lot can go wrong.

Solution A: Update All a.k.a. Hulk Smash

So you say… screw it. My client and I will live with the consequences. You press Update All and… everything still compiles.

Congratulations! You are among the very few!

Usual case? Compile errors everywhere that you will need to fix ASAP before committing.

Worst case? Something breaks in production.


Solution B: Update safely a.k.a The Boss Approach

Alright… so you don’t want to go Hulk Smash on your libraries and on your code. And more importantly, you don’t want to be forced to wear the cowboy hat for a week.

So what is a developer to do in this case? You do it like a boss.

First, you open up “View > Other Windows > Package Manager Console”. Yes, it’s hidden, but it’s for the pros. The kings. People like you who don’t use a tank to kill a fly.

It will look like this:

[Screenshot: the Package Manager Console window]

What is this? This beauty is PowerShell. Yes. It’s awesome. There’s even a song about it.

So now that we have PowerShell… what can we do? Let me show you to your scalpel, boss.

Update-Package is your best friend for this scenario. Here is what you are going to do:

Update-Package -Safe

That’s it.

What was done

This little “-Safe” switch will only upgrade revisions and will not touch Major or Minor versions. To quote the documentation:

The `-Safe` flag constrains upgrades to only versions with the same Major and Minor version component.

That’s it. Now you can recompile your app and most of your app should have all bug fixes for current Major+Minor versions applied.
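
If you want to be even more surgical, two variations are worth knowing (both are documented NuGet console switches; the package name below is just an example):

# Preview what -Safe would do, without changing anything
Update-Package -Safe -WhatIf

# Apply safe updates to a single package only
Update-Package Newtonsoft.Json -Safe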


If you want to read more about Semantic Versioning (which is what NuGet uses), go read Alexandre Brisebois’ post on it. Very informative and straight to the point.


Adding color to your Javascript charts with Dimple and d3js (Part 2)

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

So we started by doing some graphs from basic data. But having all the colors the same, or only ever showing bars, is not enough.

Here are a few other tricks to make the graph a little bit nicer. Mind you, there is nothing revolutionary here… it’s all in the documentation. The point of this blog post is only to show you how easy it is to customize the look of your charts.

First things first, here are the sources we are working with.

Showing lines instead of bars

Ahhh that is quite easy.

It’s actually as simple as changing the addSeries function parameter.

Here’s what the code looks like now:

var post2 = function() {
    // blog post #2 chart
    var svg = dimple.newSvg("#lineGraph", 800, 600);
    var chart = new dimple.chart(svg, csv);
    chart.addCategoryAxis("x", "Country");
    chart.addMeasureAxis("y", "Total");
    // Passing dimple.plot.line draws lines instead of bars
    chart.addSeries(null, dimple.plot.line);
    chart.draw();
};
post2();
post2();

And the graph looks like this:

[Line chart: medal totals by country]

Simple enough?

Of course, this isn’t the type of data that suits lines, so let’s go back to our first graph with bars and try to add colors.

Adding a color per country

So adding a color per country is about defining the series properly. In this case… on “Country”.

Changing the code isn’t too hard:

var post1 = function() {
    var svg = dimple.newSvg("#graphDestination", 800, 600);
    var chart = new dimple.chart(svg, csv);
    chart.addCategoryAxis("x", "Country");
    chart.addMeasureAxis("y", "Total");
    // Keying the series on "Country" gives each country its own color
    chart.addSeries("Country", dimple.plot.bar);
    chart.draw();
};
post1();

And here is how it looks now!

[Bar chart: medal totals, one color per country]

Much prettier!!
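
As an aside (not covered in the original post): if you would rather pin a specific color to a specific country than take Dimple’s defaults, chart.assignColor does that when called before draw. A small sketch; the color values are arbitrary:

// Pin explicit colors to particular series values
chart.assignColor("Canada", "#d62728");          // fill only
chart.assignColor("Germany", "#ffbf00", "#333"); // fill and stroke
chart.draw();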

Next blog post, what about adding some legends? Special requests?


Easy Charting in JavaScript with d3js and Dimple from CSV data

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17


Before I go further, let me give you a link to the source for this blog post, available on GitHub.

When we talk about doing charts, most people will think about Excel.

Excel does provide some very rich charting, but the problem is that you need a license for Excel. Second, you need to share a file that often has over 30 MB of data just to display a simple chart about your monthly sales or whatnot.

While it is a good way to explore your data, once you know what you want… you want to be able to share it easily. Then you use the first tool available to a Microsoft developer… SSRS.

But what if… you don’t need the huge machine that is SSRS and just want to display a simple graph in a web dashboard? That’s where simple charting with JavaScript comes in.

So let’s start with d3js.

What is d3.js?

d3.js is a JavaScript library for manipulating documents based on data. It will help you create the HTML, CSS and SVG that allow you to better display your data.

However… it’s extremely low level. You will have to create your axes, your popups, your hovers, your maps and whatnot.

But since it’s only a building block, other libraries exist that leverage d3js…

Dimple

Dimple is a super simple charting library built on top of d3js. It’s what we’re going to use for this demo. But we need data…

Let’s start with a simple data set.

Sample problem: Medal per country for the 2010 Winter Olympics

Original data can be found here: http://www.statcrunch.com/app/index.php?dataid=418469

I’m going to just copy this into Excel (Google Spreadsheets) to clean the data a bit. We’ll remove all the “Country of ” prefixes, which would only pollute our data, as well as the Bins, which could be dynamic but are otherwise useless.

The first step will be to start a simple MVC project so that we can leverage basic MVC minification, layouts and whatnot.

In our _Layout.cshtml, we’ll add the following to the “head”:

<script src="http://d3js.org/d3.v3.min.js"></script>
<script src="http://dimplejs.org/dist/dimple.v2.1.0.min.js"></script>

This will allow us to start charting almost right away!

Step one: Retrieving the CSV data and parsing it

Here’s some code that will take a CSV that is on disk (or generated by an API) and parse it as an object:

$.ajax("/2010-winter-olympics.csv", {
    success: function(data) {
        var csv = d3.csv.parse(data);
        console.table(csv);
    }
});    

This code is super simple and will display something along those lines:

[Console output: the parsed rows displayed by console.table]

Wow. So we are almost ready to go?

Step two: Using Dimple to chart our data

As mentioned before, Dimple is a super simple tool to create charts. Let’s see how far we can go with the least amount of code.

Let’s add the following to our “success” handler:

// Create the SVG to draw into first (the container id here is my
// assumption; use whatever element your page actually has)
var svg = dimple.newSvg("#chartContainer", 800, 600);

var chart = new dimple.chart(svg, csv);
chart.addCategoryAxis("x", "Country");
chart.addMeasureAxis("y", "Total");
chart.addSeries(null, dimple.plot.bar);
chart.draw();

Once we refresh the page, it creates this:

[Bar chart: every country in the raw file, including those without medals]

Okay… not super pretty, lots of crappy data but… wow. We already have a minimum viable data source. To help us see it better… let’s clean the CSV file. We’ll remove all countries that didn’t win medals.

For our data set, that means from row 28 (Albania).
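
If you would rather not hand-edit the file, you could equally drop those rows in code right after parsing. A small sketch, assuming the parsed rows expose the Total column:

// Keep only the countries that actually won something
csv = csv.filter(function (row) {
    return +row.Total > 0; // "+" coerces the CSV string to a number
});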

Let’s refresh.

[Bar chart: medal-winning countries only]

And that’s it. We now have a super basic bar graph.

Conclusion

It is now super easy to create graphs in JavaScript. If you feel the need to create graphs for your users, you should consider using d3.js with a charting library that is readily available, like Dimple.

Do not use d3.js as a standalone way of creating graphs. You will find it harder than it needs to be.

If you want to know more about charting, please let me know on Twitter: @MaximRouiller


Networking is important–or what we are really not good at

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

Many of us software developers work with computers to avoid contact with people. To be fair, we have all had our fair share of clients who would not understand why we couldn’t draw red lines with green ink. I understand why we would rather stay away from people who don’t understand what we do.

However… (there’s always a however) as I recently started my own business, I’ve really started to understand the meaning of building your network and staying in contact with people. While being an MVP has always led me to meet great people all around Montreal, I saw the real value when a very good contact of mine introduced me to one of my first clients. He knew they needed someone with my skills and introduced me directly, skipping all the queues.

You can’t really ask for more. My first client was a big company. You can’t get in there without either being a big company that won a bid, be someone that is renowned or have the right contacts.

You can’t be the big company, and you might never be renowned, but you can definitely work on your contacts and expand the number of people you know.

So what can you do to expand your contacts and grow your network?

Go to user groups

This kills two birds with one stone. First, you learn something new. It might be boring if you already know everything, but let me give you a nice trick.

Arrive early and chat with people. If you are new, ask them if they are new too, ask them about their favourite presentation (if any), where they work, whether they like it, etc. Boom. First contact is done. You can stop sweating.

If this person has been there more than once, s/he probably knows other people you can be introduced to.

Always have business cards

I’m a business owner now. I need to have cards. You might think of yourself as a low-importance developer, but if you meet people and impress them with your skills… they will want to know where you hang out.

If your business doesn’t have $50 to put on you, make your own! VistaPrint makes those “networking cards” where you can just input your name, email, position, social networks, whatever, and you can get 500 for less than $50.

Everyone in the business should have business cards. Especially those who make the company money.

Don’t expect anything

I know… giving out your card sounds like you want to sell something to people or that you want them to call you back.

When I give my card, it’s in the hope that when they get home later that night and see my card, they will think “Oh yeah, it’s that guy I had a great conversation with!”. I don’t want them to think I’m there to sell them something.

My go-to phrase when I give it to them is “If you have any question or need a second advice, call me or email me! I’m always available for people like you!”

And I am.

Follow-up after giving out your card

When you give your card and receive another in exchange (you should!), send them a personal email. Tell them about something you liked from the conversation you had, and ask whether you can add them on LinkedIn (always good). It seems obvious to salespeople, but we developers often forget that an email the day after has a very good impact.

People will remember you for writing to them personally with specific details from the conversation.

Yes. That means no “copy/paste” email. Got to make it personal.

If the other person doesn’t have a business card, take the time to note their email and full name (bring a pad!).

Rinse and repeat

If you keep on doing this, you should start to build a very strong network of developers in your city. If you have a good profile, recruiters should also start to notice you. Especially if you added all those people on LinkedIn.

It’s all about incremental growth. You won’t be a superstar tomorrow (and neither am I), but by working at it, you might end up finding your next job through weird contacts you only met once but who were impressed by who you are.

Conclusion

So here’s the Too Long; Didn’t Read version. Go out. Get business cards. Give them to everyone you meet. Your intention is to help them, not to sell them anything. Repeat often.

But in the long run, it’s all about getting out there. If you want a more detailed read of what real networking is about, you should definitely read Work the Pond by Darcy Rezac. It’s a very good read.


Massive Community Update 2014-07-04

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

So here I go again! We have Phil Haack explaining how he handles tasks in his life with GitHub, James Chambers’ series on MVC and Bootstrap, Visual Studio 2013 Update 3, a new MVC+WebAPI release and more!

In particular, don’t miss this awesome series by Tomas Jansson about CQRS. He did a great job and I think you guys need to read it!

So beyond this, I’m hoping you guys have a great day!

Must Read

GitHub Saved My Marriage - You've Been Haacked (haacked.com)

James Chamber’s Series

Day 21: Cleaning Up Filtering, the Layout & the Menu | They Call Me Mister James (jameschambers.com)

Day 22: Sprucing up Identity for Logged In Users | They Call Me Mister James (jameschambers.com)

Day 23: Choosing Your Own Look-And-Feel | They Call Me Mister James (jameschambers.com)

Day 24: Storing User Profile Information | They Call Me Mister James (jameschambers.com)

Day 25: Personalizing Notifications, Bootstrap Tables | They Call Me Mister James (jameschambers.com)

Day 26: Bootstrap Tabs for Managing Accounts | They Call Me Mister James (jameschambers.com)

Day 27: Rendering Data in a Bootstrap Table | They Call Me Mister James (jameschambers.com)

NodeJS

Nodemon vs Grunt-Contrib-Watch: What’s The Difference? (derickbailey.com)

.NET

Update 3 Release Candidate for Visual Studio 2013 (blogs.msdn.com)

Test-Driven Development with Entity Framework 6 -- Visual Studio Magazine (visualstudiomagazine.com)

ASP.NET

Announcing the Release of ASP.NET MVC 5.2, Web API 2.2 and Web Pages 3.2 (blogs.msdn.com)

Using Discovery and Katana Middleware to write an OpenID Connect Web Client | leastprivilege.com on WordPress.com (leastprivilege.com)

Project Navigation and File Nesting in ASP.NET MVC Projects - Rick Strahl's Web Log (weblog.west-wind.com)

ASP.NET Session State using SQL Server In-Memory (blogs.msdn.com)

CQRS Series (code on GitHub)

CQRS the simple way with eventstore and elasticsearch: Implementing the first features (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Implementing the rest of the features (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Time for reflection (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Build the API with simple.web (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Integrating Elasticsearch (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Let us throw neo4j into the mix (blog.tomasjansson.com)

Ending discussion to my blog series about CQRS and event sourcing (blog.tomasjansson.com)

Architecture

Michael Feathers - Microservices Until Macro Complexity (michaelfeathers.silvrback.com)

Windows Azure

Azure Cloud Services and Elasticsearch / NoSQL cluster (PAAS) | I'm Pedro Alonso (www.pedroalonso.net)

NuGet

Monitoring nuget.org (blog.nuget.org)

Search Engines (ElasticSearch, Solr, etc.)

Fast Search and Analytics on Hadoop with Elasticsearch | Hortonworks (hortonworks.com)

Elasticsearch.org This Week In Elasticsearch | Blog | Elasticsearch (www.elasticsearch.org)

Solr vs. ElasticSearch: Part 1 – Overview | Sematext Blog on WordPress.com (blog.sematext.com)


Community Update 2014-06-25

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

So not everything is brand new, since I did my last community update only 8 days ago. What I suggest most highly is the combination of EventStore and ElasticSearch in a great article by Tomas Jansson.

It’s definitely a must read and I highly recommend it. Of course, don’t miss the series by James Chambers on Bootstrap and MVC.

Enjoy all the reading!

Must Read

Be more effective with your data - ElasticSearch | Raygun Blog (raygun.io)

Your Editor should Encourage You - You've Been Haacked (haacked.com)

Exploring cross-browser math equations using MathML or LaTeX with MathJax - Scott Hanselman (www.hanselman.com)

CQRSShop - Tomas Jansson (blog.tomasjansson.com) – Link to a tag that contains 3 blog post that are must read.

James Chambers Series

Day 18: Customizing and Rendering Bootstrap Badges | They Call Me Mister James (jameschambers.com)

Day 19: Long-Running Notifications Using Badges and Entity Framework Code First | They Call Me Mister James (jameschambers.com)

Day 20: An ActionFilter to Inject Notifications | They Call Me Mister James (jameschambers.com)

Web Development

Testing Browserify Modules In A (Headless) Browser (derickbailey.com)

ASP.NET

Fredrik Normén - Using Razor together with ASP.NET Web API (weblogs.asp.net)

A dynamic RequireSsl Attribute for ASP.NET MVC - Rick Strahl's Web Log (weblog.west-wind.com)

Versioning RESTful Services | Howard Dierking (codebetter.com)

ASP.NET vNext Routing Overview (blogs.msdn.com)

.NET

Exceptions exist for a reason – use them! | John V. Petersen (codebetter.com)

Nuget Dependencies and latest Versions - Rick Strahl's Web Log (weblog.west-wind.com)

Trying Redis Caching as a Service on Windows Azure - Scott Hanselman (www.hanselman.com)


Massive Community Update 2014-06-17

Decaying Code - Maxime Rouiller - Wed, 08/27/2014 - 06:17

So as usual, here’s what’s new since a week ago.

Ever had problems downloading SQL Server Express? Too many links, download managers, version selection, etc.? Fear not, Hanselman to the rescue. I’m also sharing the IE Developer Channel, which you should definitely take a look at.

We also continue to follow the series by James Chambers.

Enjoy your reading!

Must Read

Download SQL Server Express - Scott Hanselman (www.hanselman.com)

Announcing Internet Explorer Developer Channel (blogs.msdn.com)

Thinktecture.IdentityManager as a replacement for the ASP.NET WebSite Administration tool - Scott Hanselman (www.hanselman.com)

NodeJS

Why Use Node.js? A Comprehensive Introduction and Examples | Toptal (www.toptal.com)

Building With Gulp | Smashing Magazine (www.smashingmagazine.com)

James Chambers Series

Day 12: | They Call Me Mister James (jameschambers.com)

Day 13: Standard Styling and Horizontal Forms | They Call Me Mister James (jameschambers.com)

Day 14: Bootstrap Alerts and MVC Framework TempData | They Call Me Mister James (jameschambers.com)

Day 15: Some Bootstrap Basics | They Call Me Mister James (jameschambers.com)

Day 16: Conceptual Organization of the Bootstrap Library | They Call Me Mister James (jameschambers.com)

ASP.NET vNext

Owin middleware (blog.tomasjansson.com)

Imran Baloch's Blog - K, KVM, KPM, KLR, KRE in ASP.NET vNext (weblogs.asp.net)

Jonathan Channon Blog - Nancy, ASP.Net vNext, VS2014 & Azure (blog.jonathanchannon.com)

Back To the Future: Windows Batch Scripting & ASP.NET vNext | A developer's blog (blog.tpcware.com)

Dependency Injection in ASP.NET vNext (blogs.msdn.com)

.NET

Here Come the .NET Containers | Wintellect (wintellect.com)

Architecture and Methodology

BoundedContext (martinfowler.com)

UnitTest (martinfowler.com)

Individuals, Not Groups | 8th Light (blog.8thlight.com)

Open Source

Download Emojis With Octokit.NET - You've Been Haacked (haacked.com)

ElasticSearch

Elasticsearch migrations with C# and NEST | Thomas Ardal (thomasardal.com)


Chrome - Firefox WebRTC Interop Test - Pt 1

Google Testing Blog - Tue, 08/26/2014 - 23:09
by Patrik Höglund

WebRTC enables real time peer-to-peer video and voice transfer in the browser, making it possible to build, among other things, a working video chat with a small amount of Python and JavaScript. As a web standard, it has several unusual properties which make it hard to test. A regular web standard generally accepts HTML text and yields a bitmap as output (what you see in the browser). For WebRTC, we have real-time RTP media streams on one side being sent to another WebRTC-enabled endpoint. These RTP packets have been jumping across NAT, through firewalls and perhaps through TURN servers to deliver hopefully stutter-free and low-latency media.

WebRTC is probably the only web standard in which we need to test direct communication between Chrome and other browsers. Remember, WebRTC builds on peer-to-peer technology, which means we talk directly between browsers rather than through a server. Chrome, Firefox and Opera have announced support for WebRTC so far. To test interoperability, we set out to build an automated test to ensure that Chrome and Firefox can get a call up. This article describes how we implemented such a test and the tradeoffs we made along the way.

Calling in WebRTC

Setting up a WebRTC call requires passing SDP blobs over a signaling connection. These blobs contain information on the capabilities of the endpoint, such as what media formats it supports and what preferences it has (for instance, perhaps the endpoint has VP8 decoding hardware, which means the endpoint will handle VP8 more efficiently than, say, H.264). By sending these blobs the endpoints can agree on what media format they will be sending between themselves and how to traverse the network between them. Once that is done, the browsers will talk directly to each other, and nothing gets sent over the signaling connection.

Figure 1. Signaling and media connections.
How these blobs are sent is up to the application. Usually the browsers connect to some server which mediates the connection between the browsers, for instance by using a contact list or a room number. The AppRTC reference application uses room numbers to pair up browsers and sends the SDP blobs from the browsers through the AppRTC server.

Test Design

Instead of designing a new signaling solution from scratch, we chose to use the AppRTC application we already had. This has the additional benefit of testing the AppRTC code, which we are also maintaining. We could also have used the small peerconnection_server binary and some JavaScript, which would give us additional flexibility in what to test. We chose to go with AppRTC since it effectively implements the signaling for us, leading to much less test code.

We assumed we would be able to get hold of the latest nightly Firefox and be able to launch that with a given URL. For the Chrome side, we assumed we would be running in a browser test, i.e. on a complete Chrome with some test scaffolding around it. For the first sketch of the test, we imagined just connecting the browsers to the live apprtc.appspot.com with some random room number. If the call got established, we would be able to look at the remote video feed on the Chrome side and verify that video was playing (for instance using the video+canvas grab trick). Furthermore, we could verify that audio was playing, for instance by using WebRTC getStats to measure the audio track energy level.

Figure 2. Basic test design.
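
As an aside, the “video+canvas grab trick” mentioned above boils down to drawing the remote <video> element into a canvas and inspecting the pixels. A rough JavaScript sketch (the names are illustrative, not taken from the actual test):

// Draw the remote video into a canvas and look for non-black pixels
function videoIsPlaying(remoteVideo) {
  var canvas = document.createElement('canvas');
  canvas.width = remoteVideo.videoWidth;
  canvas.height = remoteVideo.videoHeight;
  var context = canvas.getContext('2d');
  context.drawImage(remoteVideo, 0, 0, canvas.width, canvas.height);
  var pixels = context.getImageData(0, 0, canvas.width, canvas.height).data;
  for (var i = 0; i < pixels.length; i += 4) {
    // Pixels are RGBA; any non-zero color channel means a frame arrived
    if (pixels[i] || pixels[i + 1] || pixels[i + 2]) {
      return true;
    }
  }
  return false;
}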
However, since we like tests to be hermetic, this isn’t a good design. I can see several problems. For example, the network between us and AppRTC could be unreliable. Also, what if someone has occupied myroomid? If that were the case, the test would fail and we would be none the wiser. So to make this thing work, we would have to find some way to bring up the AppRTC instance on localhost to make our test hermetic.

Bringing up AppRTC on localhost

AppRTC is a Google App Engine application. As this hello world example demonstrates, one can test applications locally with
google_appengine/dev_appserver.py apprtc_code/

So why not just call this from our test? It turns out we need to solve some complicated problems first, like how to ensure the AppEngine SDK and the AppRTC code is actually available on the executing machine, but we’ll get to that later. Let’s assume for now that stuff is just available. We can now write the browser test code to launch the local instance:
bool LaunchApprtcInstanceOnLocalhost() {
  // ... Figure out locations of SDK and apprtc code ...
  CommandLine command_line(CommandLine::NO_PROGRAM);
  EXPECT_TRUE(GetPythonCommand(&command_line));

  command_line.AppendArgPath(appengine_dev_appserver);
  command_line.AppendArgPath(apprtc_dir);
  command_line.AppendArg("--port=9999");
  command_line.AppendArg("--admin_port=9998");
  command_line.AppendArg("--skip_sdk_update_check");

  VLOG(1) << "Running " << command_line.GetCommandLineString();
  return base::LaunchProcess(command_line, base::LaunchOptions(),
                             &dev_appserver_);
}

That’s pretty straightforward [1].

Figuring out Whether the Local Server is Up

Then we ran into a very typical test problem. So we have the code to get the server up, and launching the two browsers to connect to http://localhost:9999?r=some_room is easy. But how do we know when to connect? When I first ran the test, it would work sometimes and sometimes not, depending on whether the server had time to get up.

It’s tempting in these situations to just add a sleep to give the server time to get up. Don’t do that. That will result in a test that is flaky and/or slow. In these situations we need to identify what we’re really waiting for. We could probably monitor the stdout of the dev_appserver.py and look for some message that says “Server is up!” or equivalent. However, we’re really waiting for the server to be able to serve web pages, and since we have two browsers that are really good at connecting to servers, why not use them? Consider this code.
bool LocalApprtcInstanceIsUp() {
  // Load the admin page and see if we manage to load it right.
  ui_test_utils::NavigateToURL(browser(), GURL("localhost:9998"));
  content::WebContents* tab_contents =
      browser()->tab_strip_model()->GetActiveWebContents();
  std::string javascript =
      "window.domAutomationController.send(document.title)";
  std::string result;
  if (!content::ExecuteScriptAndExtractString(tab_contents,
                                              javascript,
                                              &result))
    return false;

  return result == kTitlePageOfAppEngineAdminPage;
}

Here we ask Chrome to load the AppEngine admin page for the local server (we set the admin port to 9998 earlier, remember?) and ask it what its title is. If that title is “Instances”, the admin page has been displayed, and the server must be up. If the server isn’t up, Chrome will fail to load the page and the title will be something like “localhost:9999 is not available”.

Then, we can just do this from the test:
while (!LocalApprtcInstanceIsUp())
  VLOG(1) << "Waiting for AppRTC to come up...";

If the server never comes up, for whatever reason, the test will just time out in that loop. If it comes up we can safely proceed with the rest of test.

Launching the Browsers

A browser window launches itself as a part of every Chromium browser test. It’s also easy for the test to control the command line switches the browser will run under.

We have less control over the Firefox browser since it is the “foreign” browser in this test, but we can still pass command-line options to it when we invoke the Firefox process. To make this easier, Mozilla provides a Python library called mozrunner. Using that we can set up a launcher Python script we can invoke from the test:
from mozprofile import profile
from mozrunner import runner

WEBRTC_PREFERENCES = {
    'media.navigator.permission.disabled': True,
}

def main():
    # Set up flags, handle SIGTERM, etc
    # ...
    firefox_profile = profile.FirefoxProfile(preferences=WEBRTC_PREFERENCES)
    firefox_runner = runner.FirefoxRunner(
        profile=firefox_profile, binary=options.binary,
        cmdargs=[options.webpage])

    firefox_runner.start()

Notice that we need to pass special preferences to make Firefox accept the getUserMedia prompt. Otherwise, the test would get stuck on the prompt and we would be unable to set up a call. Alternatively, we could employ some kind of clickbot to click “Allow” on the prompt when it pops up, but that is way harder to set up.

Without going into too much detail, the code for launching the browsers becomes
GURL room_url =
    GURL(base::StringPrintf("http://localhost:9999?r=room_%d",
                            base::RandInt(0, 65536)));
content::WebContents* chrome_tab =
    OpenPageAndAcceptUserMedia(room_url);
ASSERT_TRUE(LaunchFirefoxWithUrl(room_url));

Where LaunchFirefoxWithUrl essentially runs this:
run_firefox_webrtc.py --binary /path/to/firefox --webpage http://localhost:9999?r=my_room

Now we can launch the two browsers. Next time we will look at how we actually verify that the call worked, and how we actually download all resources needed by the test in a maintainable and automated manner. Stay tuned!

[1] The explicit ports are there because the default ports collided on the bots we were running on, and --skip_sdk_update_check was added because the SDK would stop and ask us something if there was an update.


Announcing Austin Code Camp 2014

Jimmy Bogard - Tue, 08/26/2014 - 22:35

It’s that time of year again to hold our annual Austin Code Camp, hosted by the Austin .NET User Group:

Austin 2014 Code Camp

We’re at a new location this year, New Horizons Computer Learning Center Austin, as our previous host can no longer host events. Big thanks to St. Edward’s PEC for hosting us in the past!

Register for Austin Code Camp

We’ve got links on the site for schedule, registration, sponsorship, location, speaker submissions and more.

Hope to see you there!

And because I know I’m going to get emails…

Charging for Austin Code Camp? Get the pitchforks and torches!

In the past, Austin Code Camp has been a free event with no effective cap on registrations. We could do this because the PEC had a ridiculous amount of space and could accommodate hundreds of people. With free registration, we would see a 50% drop-off between registration and attendance. Not very fun to plan food with such uncertainty!

This year we have a good amount of space, but not infinite space. We can accommodate the typical number of people that come to our Code Camp (150-175), but for safety reasons we can’t put an unlimited cap on registrations as we’ve done in the past.

Because of this, we’re charging a small fee to reserve a spot. It’s not even enough to cover lunch or a t-shirt or anything, but it is a small enough fee to ensure that we’re fair to those who truly want to come.

Don’t worry though, if you can’t afford the fee, send me an email, and we can work it out.



On auditing, standards, and ISO 29119

Markus Gaertner (shino.de) - Tue, 08/26/2014 - 13:07

Disclaimer:
Since I am publishing this on my personal blog, this is my personal view, the view of Markus Gärtner as an individual.

I think the first time I came across the ISO 29119 discussion was during the Agile Testing Days 2010, and probably also during Stuart Reid’s keynote at EuroSTAR 2010. Remembering back to that particular keynote, I think he was visibly nervous during his whole talk, eventually delivering nothing worthy of a keynote. Yeah, I am still disappointed by that keynote four years later.

Recently, ISO 29119 started to be heavily debated in one of the communities I am involved in. Since I think that others have expressed their thoughts on the matter more eloquently and in more depth than I am going to, make sure to look further than my blog for a complete picture of the whole discussion. I am going to share my current state of thoughts here.

Audits

In my past I have been part of a few audits. I think it was ISO 9000 or ISO 9001, I can’t tell, since people keep on confusing the two.

These audits usually had a story before the audit. Usually one or two weeks up-front I was approached by someone asking whether I could show something during the audit that had something to do with our daily work. I was briefed on what that auditor wanted to see. Usually we also prepared a presentation of some sort.

Then came the auditing. Usually I sat together with the auditor and a developer in a meeting room, and we showed what we did. Then we answered some questions from the auditor. That was it.

Usually a week later we received some final evaluation. Mostly there were points like “this new development method needs to be described in the tool where you put your processes in.” and so on. It didn’t affect my work.

More interestingly, what we showed usually didn’t have anything to do with the work we did once the auditor left the room. Mostly, we ignored most of the process in the process tool that floated around. At least I wasn’t sure how to read that stuff anyway. And of course, on every project there was someone willing to convince you that deviating from whatever process was described was fruitful in this particular situation and context.

Most interestingly, based upon the auditing process, people made claims about what was in the process description and what the auditor might want to see. No one ever talked to the auditors up-front (probably because it wasn’t allowed, was the belief). Oh, and of course, if you audit something to improve it, and that isn’t the thing you’re doing when you’re not audited, then you’re auditing bogus. Auditing didn’t prevent us from running into this trap. Remember: if there is an incentive, the target will be hit. Yeah, that sounds like what we did. We hit the auditing target without changing anything real.

Skip forward a few years, and I see the same problems repeated within organizations that adopt CMMI, SPICE, you name it. Inherently, the fact that an organization has been standardized seems to lead to betrayal, misinformation, and ignorance when it comes to the processes that are described. To me, this seems to be a pattern among the companies I have seen that adopted a particular standard for their work. (I might be biased.)

Standards

How come, you ask, we adopt standards to start with? Well, there are a bunch of standards out there. For example, USB is standardized. So were PS/2, VGA, and serial and parallel ports. These standards solve the problem of two different vendors producing two pieces of hardware that need to work together. The standard defines their commonly used interface on a particular system.

This seems to work reasonably well for hardware. Hardware is, well, hard. You can make hard decisions about hardware. Software, on the other hand, is more soft. It reacts flexibly, can be configured in certain ways, and usually involves a more creative process to get started with. When it comes to interfaces between two different systems, you can document these, but usually a particular interface between software components delivers some sort of competitive advantage for a particular vendor. Still, when working on the .NET platform, you have to adhere to certain standards. The same goes for stuff like JBoss, and whatever programming language you may use. There are things that you can work around; there are others which you can’t.

Soft-skill-ware, i.e. humans, is even more flexible, and will react in sometimes unpredictable ways when challenged in difficult work situations. That said, people tend to diverge from anything formal to add their personal note, to achieve something, and to show their flexibility. With interfaces between humans, as in behavioral models, humans tend to trick the system and make it look like they adhere to the behavior described when they don’t.

ISO 29119

ISO 29119 tries to pull together some of the knowledge that is floating around. Based upon my experiences, I doubt that high-quality work stems from a good process description. In my experience, humans can outperform any mediocre process that is around, and perform dramatically better.

That said, good process descriptions appear to be one indicator of a good process, but I doubt that our field is old enough for us to stop looking for better ways. There certainly are better ways. And we certainly haven’t understood enough about software delivery to come up with any behavioral interfaces for two companies working on the same product.

Indeed, I have seen companies suffer from outsourcing parts of a process, like testing, to another vendor, or offshoring to other countries and/or timezones. Most of the clients I have been involved with suffered enough that they ended up insourcing the efforts they had previously outsourced. The burden of the additional coordination was simply too high to warrant the results. (Yeah, there are exceptions where this was possible. But they appear to be exceptions as of now.)

In fact, I believe that we are currently exploring alternatives to the traditional split between programmers and testers. One of the reasons we started with that split was Cognitive Dissonance. In the belief that only a split between programmers and testers overcomes Cognitive Dissonance, we created a separate profession a couple of decades ago. Right now, with the rise of cross-functional teams in agile software development, we are finding out that the split wasn’t necessary to overcome Cognitive Dissonance. In short, you can keep an independent view if you can maintain a professional mind-set, while still helping your team to develop better products.

The question I am asking: will a standard like ISO 29119 keep us from exploring further such alternatives? Should we give up exploring other models of delivering working software to our customers? I don’t think so.

So, what should I do tomorrow?

Over the years, I have made a conscious effort not to put myself into places where standards dominate. Simply speaking, I put myself into a position where I don’t need to care, and can still help deliver good software. Open source software is such an environment.

Of course, that won’t help you in the long run if the industry gets flooded with standards. ISO 29119 claims it is based upon internationally agreed viewpoints. Yet it claims that it tries to integrate Agile methods into the older standards that it’s going to replace. I don’t know which specialists they talked to in the German Agile community. It certainly wasn’t me. So I doubt much good will come out of this.

And yet, I don’t see this as my battle. A while ago I realized that I probably put too much on my shoulders, so I try to decide which battles to pick. I certainly see the problems with ISO 29119, but it’s not a thing that I want to put active effort into.

Currently I am working on putting myself in a position where I don’t need to care about ISO 29119 at all, whatever comes out of it. However, I think it’s important that the people who want to fight ISO 29119 more actively than me are able to do so. That is why they have my support from afar.

– Markus Gärtner



On ISO 29119 Content

Thoughts from The Test Eye - Mon, 08/25/2014 - 18:58
Background

The first three parts of ISO 29119 were released in 2013. I was very skeptical, but also interested, so I grabbed an opportunity to teach the basics of the standard, which would cover the cost of buying it.

I read it properly, and although I am biased against the standard I made a benevolent start, and blogged about it a year ago: http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/

I have not used the standard for real; I think that would be irresponsible, and the reasons should be apparent from the following critique. But I have done exercises using the standard, had discussions about the content, and used most of what is included at one time or another.

Here are some scattered thoughts on the content.

 

World view

I don’t believe the content of the standard matches software testing in reality. It suffers from the same main problem as the ISTQB syllabus: it seems to view testing as a manufacturing discipline, without any focus on the skills and judgment involved in figuring out what is important, observing carefully in diverse ways, and reporting results appropriately. It puts the focus on planning, monitoring and control, and not on what is being tested and how the provided information brings value. It gives the impression that testing follows a straight line, but the reality I have been in is much more complicated and messy.

Examples: The test strategy and test plan are so chopped up that it is difficult to do something good with them. Using the document templates will probably give the same tendency as following IEEE 829 documentation: you get a document with many sections that looks good to non-testers, but doesn’t say anything about the most important things (what you are trying to test, and how).

For such an important area as “test basis” – the information sources you use – they only include specifications and “undocumented understanding”, where they could have mentioned things like capabilities, failure modes, models, data, surroundings, white box, product history, rumors, actual software, technologies, competitors, purpose, business objectives, product image, business knowledge, legal aspects, creative ideas, internal collections, you, project background, information objectives, project risks, test artifacts, debt, conversations, context analysis, many deliverables, tools, quality characteristics, product fears, usage scenarios, field information, users, public collections, standards, references, searching.

 

Waste

The standard includes many documentation requirements and rules that are reasonable in some situations, but that will often be just a waste of time. Good, useful documentation is good and useful, but following the standard will lead to documentation for its own sake.

Examples: If you realize you want to change your test strategy or plan, you need to go back in the process chain and redo all the steps, including approvals. (I hope most testers adjust often to reality, and only communicate major changes in conversation.)

It is not enough to have a Test Design Specification and Test Cases; they have also added a Test Procedure step, where you write down in advance the order in which you will run the test cases. I wonder which organizations really want to read and approve all of these… (They do allow exploratory testing, but beware that the charter should be documented and approved first.)

 

Good testing?

A purpose of the standard is that testing should become better. I can’t really say whether this is the case or not, but with all the paperwork there is a lot of opportunity cost, time that could have been spent on testing. On the other hand, this might be somewhat accounted for by approvals from stakeholders.

At the same time, I can imagine a more flexible standard that would have a much better chance of encouraging better testing. A standard that could ask questions like “Have you really not changed your test strategy as the project evolved?” A standard that would encourage the skills and judgment involved in testing.

The biggest risk with the standard is that it will lead to less testing, because you don’t want to go through all the required steps.

 

Agile

It is apparent that they really tried to bend Agile into the standard. The sequentiality of the standard makes this very unrealistic in practice.

But they do allow bug reports not to be documented, which probably is covered by allowing partial compliance with ISO 29119. (This is unclear, though; together with students I could not be certain what actually was needed in order to follow the standard with regard to incident reporting.)

The whole aura of the standard doesn’t fit the agile mindset.

 

Finale

There is momentum right now against the standard, including a petition to stop it, http://www.ipetitions.com/petition/stop29119, which I have signed.

I think you should make up your own mind and consider signing it; it might help if the standard starts being used.

 

References

Stuart Reid, ISO/IEC/IEEE 29119 The New International Software Testing Standards, http://www.bcs.org/upload/pdf/sreid-120913.pdf

Rikard Edgren, ISO 29119 – a benevolent start, http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/

ISO 29119 web site, http://www.softwaretestingstandard.org/


My Tests are a Mess

Testing TV - Mon, 08/25/2014 - 17:25
Is your test suite comprehensible to someone new to the project? Can you find where you tested that last feature? Do you have to wade through dozens of files to deal with updated code? Organizing tests is hard. It is easy to make things overly elaborate and complicated. Learn an approach to grouping the tests […]