
Feed aggregator

Easy Charting in JavaScript with d3js and Dimple from CSV data

Decaying Code - Maxime Rouiller - Thu, 09/04/2014 - 04:04


Before I go further, let me give you a link to the source for this blog post available on Github

When we talk about doing charts, most people will think about Excel.

Excel does provide some very rich charting, but the problem is that you need a licence for Excel. Second, you need to share a file that often has over 30 MB of data just to display a simple chart about your monthly sales or what not.

While it is a good way to explore your data, once you know what you want… you want to be able to share it easily. Then you use the first tool available to a Microsoft developer… SSRS.

But what if you don’t need the huge machine that is SSRS and just want to display a simple graph in a web dashboard? That’s where simple charting with JavaScript comes in.

So let’s start with d3js.

What is d3.js?

d3js is a JavaScript library for manipulating documents based on data. It will help you create the HTML, CSS and SVG that will allow you to better display your data.

However… it’s extremely low level. You will have to create your axes, your popups, your hovers, your maps and what not.

But since it’s only a building block, other libraries exist that leverage d3js…


Dimple is a super simple charting library built on top of d3js. It’s what we’re going to use for this demo. But we need data…

Let’s start with a simple data set.

Sample problem: Medals per country for the 2010 Winter Olympics

Original data can be found here:

I’m going to just copy this into Excel (or Google Spreadsheets) to clean the data a bit. We’ll remove all the “Country of ” prefixes, which would only pollute our data, as well as the Bins, which could be dynamic but are otherwise useless.

First step will be to start a simple MVC project so that we can leverage basic MVC minification, layouts and what not.

In our _Layout.cshtml, we’ll add the following thing to the “head”:

<script src=""></script>
<script src=""></script>

This will allow us to start charting almost right away!

Step one: Retrieving the CSV data and parsing it

Here’s some code that will take a CSV that is on disk (or generated by an API) and parse it into an object:

$.ajax("/2010-winter-olympics.csv", {
    success: function(data) {
        var csv = d3.csv.parse(data);
        console.log(csv);
    }
});

This code is super simple and will display something along those lines:


Wow. So we are almost ready to go?
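If you’re curious about the shape of what `d3.csv.parse` returns, it’s an array of objects, one per row, keyed by the header row. Here’s a minimal hand-rolled stand-in for it (plain JavaScript, no quoting/escaping handled, purely for illustration):

```javascript
// A simplified stand-in for d3.csv.parse: split lines, use the
// first line as the keys, and turn every other line into an object.
function parseCsv(text) {
  var lines = text.trim().split("\n");
  var headers = lines[0].split(",");
  return lines.slice(1).map(function (line) {
    var cells = line.split(",");
    var row = {};
    headers.forEach(function (h, i) { row[h] = cells[i]; });
    return row;
  });
}

// A tiny made-up sample in the shape of our medals data.
var sample = "Country,Gold,Silver,Bronze,Total\n" +
             "Canada,14,7,5,26\n" +
             "Germany,10,13,7,30";
console.log(parseCsv(sample)); // → two row objects keyed by the header names
```

Note that every value comes back as a string, which is why the measure axis later has to do the numeric interpretation for us.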

Step two: Using Dimple to chart our data.

As mentioned before, Dimple is a super simple tool for creating charts. Let’s see how far we can go with the least amount of code.

Let’s add the following to our “success” handler:

var svg = dimple.newSvg("#chartContainer", 800, 600); // any container element works here
var chart = new dimple.chart(svg, csv);
chart.addCategoryAxis("x", "Country");
chart.addMeasureAxis("y", "Total");
chart.draw();

Once we refresh the page, it creates this:


Okay… not super pretty, lots of noisy data but… wow. We already have a minimum viable chart. To help us see it better… let’s clean the CSV file. We’ll remove all countries that didn’t win medals.

For our data set, that means from row 28 (Albania).
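Instead of editing the CSV by hand, you could also drop the medal-less rows in code before handing the data to Dimple. A small sketch, assuming the parsed rows carry the same `Total` column used above:

```javascript
// Keep only the countries that won at least one medal.
// CSV values arrive as strings, so coerce to a number with the unary +.
function medalWinners(rows) {
  return rows.filter(function (row) {
    return +row.Total > 0;
  });
}

// Hypothetical rows in the shape of our parsed CSV.
var rows = [
  { Country: "Canada", Total: "26" },
  { Country: "Germany", Total: "30" },
  { Country: "Albania", Total: "0" }
];
console.log(medalWinners(rows)); // Albania is filtered out
```

You would then pass `medalWinners(csv)` to `new dimple.chart(...)` instead of the raw `csv`.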

Let’s refresh.


And that’s it. We now have a super basic bar graph.


It is now super easy to create graphs in JavaScript. If you feel the need to create graphs for your users, you should consider using d3.js with the charting libraries that are readily available, like Dimple.

Do not use d3.js as a standalone way of creating graphs. You will find it harder than it needs to be.

If you want to know more about charting, please let me know on Twitter: @MaximRouiller

Categories: Blogs

Networking is important–or what we are really not good at

Decaying Code - Maxime Rouiller - Thu, 09/04/2014 - 04:04

Many of us software developers work with computers to avoid contact with people. To be fair, we all had our fair share of clients who would not understand why we couldn’t draw red lines with green ink. I understand why we would rather stay away from people who don’t understand what we do.

However… (there’s always a however) as I recently started my own business, I’ve really started to understand the meaning of building your network and staying in contact with people. While being an MVP has always led me to meet great people all around Montreal, the real value I saw was when a very good contact of mine introduced me to one of my first clients. He knew they needed someone with my skills and introduced me directly, skipping all the queues.

You can’t really ask for more. My first client was a big company. You can’t get in there without being a big company that won a bid, being someone renowned, or having the right contacts.

You can’t be the big company, and you might never be renowned, but you can definitely work on your contacts and expand the number of people you know.

So what can you do to expand your contacts and grow your network?

Go to user groups

This is killing two birds with one stone. First, you learn something new. It might be boring if you already know everything, but let me give you a nice trick.

Arrive early and chat with people. If you are new, ask them if they are new too, ask them about their favourite presentation (if any), where they work, whether they like it, etc. Boom. First contact is done. You can stop sweating.

If this person has been here more than once, s/he probably knows other people you can be introduced to.

Always have business cards

I’m a business owner now. I need to have cards. You might think of yourself as a low-importance developer, but if you meet people and impress them with your skills… they will want to know where you hang out.

If your business doesn’t have $50 to put on you, make your own! VistaPrint makes those “Networking cards” where you can just input your name, email, position, social networks, whatever, and you can get 500 for less than $50.

Everyone in the business should have business cards. Especially those who make the company money.

Don’t expect anything

I know… giving out your card sounds like you want to sell something to people or that you want them to call you back.

When I give my card, it’s in the hope that when they come back later that night and see my card they will think “Oh yeah it’s that guy I had a great conversation with!”. I don’t want them to think I’m there to sell them something.

My go-to phrase when I give it to them is “If you have any question or need a second advice, call me or email me! I’m always available for people like you!”

And I am.

Follow-up after giving out your card

When you give your card and receive another in exchange (you should!), send them a personal email. Tell them about something you liked from the conversation you had and ask them if you can add them on LinkedIn (always good). This may seem simple to a salesman, but we developers often forget that an email the day after has a very good impact.

People will remember you for writing to them personally with specific details from the conversation.

Yes. That means no “copy/paste” email. Got to make it personal.

If the other person doesn’t have a business card, take the time to note their email and full name (bring a pad!).

Rinse and repeat

If you keep on doing this, you should start to build a very strong network of developers in your city. If you have a good profile, recruiters should also start to notice you. Especially if you added all those people on LinkedIn.

It’s all about incremental growth. You won’t be a superstar tomorrow (and neither am I), but by working at it, you might end up finding your next job through contacts you only met once but who were impressed by who you are.


So here’s the Too Long; Didn’t Read version. Go out. Get business cards. Give them to everyone you meet. Your intention is to help them, not sell them anything. Repeat often.

But in the long run, it’s all about getting out there. If you want a more detailed read of what real networking is about, you should definitely read Work the Pond by Darcy Rezac. It’s a very good read.

Categories: Blogs

Massive Community Update 2014-07-04

Decaying Code - Maxime Rouiller - Thu, 09/04/2014 - 04:04

So here I go again! We have Phil Haack explaining how he handles tasks in his life with GitHub, James Chambers’ series on MVC and Bootstrap, the Visual Studio 2013 Update 3 Release Candidate, a new MVC+WebAPI release and more!

Especially, don’t miss this awesome series by Tomas Jansson about CQRS. He did an awesome job and I think you guys need to read it!

So beyond this, I’m hoping you guys have a great day!

Must Read

GitHub Saved My Marriage - You've Been Haacked

James Chamber’s Series

Day 21: Cleaning Up Filtering, the Layout & the Menu | They Call Me Mister James

Day 22: Sprucing up Identity for Logged In Users | They Call Me Mister James

Day 23: Choosing Your Own Look-And-Feel | They Call Me Mister James

Day 24: Storing User Profile Information | They Call Me Mister James

Day 25: Personalizing Notifications, Bootstrap Tables | They Call Me Mister James

Day 26: Bootstrap Tabs for Managing Accounts | They Call Me Mister James

Day 27: Rendering Data in a Bootstrap Table | They Call Me Mister James


Nodemon vs Grunt-Contrib-Watch: What’s The Difference?


Update 3 Release Candidate for Visual Studio 2013

Test-Driven Development with Entity Framework 6 -- Visual Studio Magazine


Announcing the Release of ASP.NET MVC 5.2, Web API 2.2 and Web Pages 3.2

Using Discovery and Katana Middleware to write an OpenID Connect Web Client

Project Navigation and File Nesting in ASP.NET MVC Projects - Rick Strahl's Web Log

ASP.NET Session State using SQL Server In-Memory

CQRS Series (code on GitHub)

CQRS the simple way with eventstore and elasticsearch: Implementing the first features

CQRS the simple way with eventstore and elasticsearch: Implementing the rest of the features

CQRS the simple way with eventstore and elasticsearch: Time for reflection

CQRS the simple way with eventstore and elasticsearch: Build the API with simple.web

CQRS the simple way with eventstore and elasticsearch: Integrating Elasticsearch

CQRS the simple way with eventstore and elasticsearch: Let us throw neo4j into the mix

Ending discussion to my blog series about CQRS and event sourcing


Michael Feathers - Microservices Until Macro Complexity

Windows Azure

Azure Cloud Services and Elasticsearch / NoSQL cluster (PAAS) | I'm Pedro Alonso


Monitoring

Search Engines (ElasticSearch, Solr, etc.)

Fast Search and Analytics on Hadoop with Elasticsearch | Hortonworks

This Week In Elasticsearch | Blog | Elasticsearch

Solr vs. ElasticSearch: Part 1 – Overview | Sematext Blog

Categories: Blogs

Community Update 2014-06-25

Decaying Code - Maxime Rouiller - Thu, 09/04/2014 - 04:04

So not everything is brand new since I did my last community update 8 days ago. What I suggest highly is the combination of EventStore and ElasticSearch in a great article by Tomas Jansson.

It’s definitely a must read and I highly recommend it. Of course, don’t miss the series by James Chambers on Bootstrap and MVC.

Enjoy all the reading!

Must Read

Be more effective with your data - ElasticSearch | Raygun Blog

Your Editor should Encourage You - You've Been Haacked

Exploring cross-browser math equations using MathML or LaTeX with MathJax - Scott Hanselman

CQRSShop - Tomas Jansson – Link to a tag that contains 3 blog posts that are must reads.

James Chambers Series

Day 18: Customizing and Rendering Bootstrap Badges | They Call Me Mister James

Day 19: Long-Running Notifications Using Badges and Entity Framework Code First | They Call Me Mister James

Day 20: An ActionFilter to Inject Notifications | They Call Me Mister James

Web Development

Testing Browserify Modules In A (Headless) Browser


Fredrik Normén - Using Razor together with ASP.NET Web API

A dynamic RequireSsl Attribute for ASP.NET MVC - Rick Strahl's Web Log

Versioning RESTful Services | Howard Dierking

ASP.NET vNext Routing Overview


Exceptions exist for a reason – use them! | John V. Petersen

Nuget Dependencies and latest Versions - Rick Strahl's Web Log

Trying Redis Caching as a Service on Windows Azure - Scott Hanselman

Categories: Blogs

My Blog is moving

Brian's House of Bilz - Wed, 09/03/2014 - 20:26

I am in the process of moving my blog over to using Jekyll with GitHub Pages. Eventually, as I move all of the content over to Markdown, this site will point to the new site. For the time being, all of my content is here at:

Thanks for reading!

Categories: Blogs

Forrester & Seapine Webinar on the Internet of Things

The Seapine View - Wed, 09/03/2014 - 20:04

It’s time for companies to acknowledge the central role software has in their future success and to better understand the challenges it poses to their product development practices.

Software is the key differentiator in today’s products. When done right, smart products enable premium pricing and drive increased customer retention and loyalty. To realize these benefits, organizations need to rethink the way they develop and deliver products to market.

Software extends and complicates the product lifecycle. Customers expect frequent updates and longer life out of smart products. Meeting those expectations requires longer maintenance lifecycles within a continuous delivery environment, and often results in a proliferation of product variants out in the field.

Software increases the risk and pain of product failure. Software is a double-edged sword, adding a layer of complexity to your product but also making the product more indispensable to customers. In this environment, meeting quality expectations is more challenging while the backlash from customers for failing to do so is harsher.


Guest Speaker John McCarthy, Vice President, Principal Analyst at Forrester Research, Inc., will share his insights into the challenges companies face when integrating software into existing product development and manufacturing processes. Matt Harp, Seapine’s Product Marketing Director, will also discuss how the right product development solution can improve your development practices and help your company innovate with software.

During this live, 60-minute event, you’ll learn about:

  • Specific benefits companies across a variety of industries are realizing from integrating software into their existing physical products.
  • Hurdles and common pitfalls companies face as they learn to manage the added complexity of integrating software into their physical products.
  • An integrated product development solution that can help you overcome those complexities.



Categories: Companies

TestLink Introduction

Testing TV - Wed, 09/03/2014 - 19:01
A brief introduction to some of the capabilities of TestLink. TestLink is an open source, web-based test management tool. The application provides test specification, test plans and execution, reporting, requirements specification, and collaboration with well-known bug trackers. Some of the TestLink features: * Requirements Management – Define your requirements and do not lose track of […]
Categories: Blogs

Tour de uTest: Community Member Tours Famous Cycling Stages

uTest - Wed, 09/03/2014 - 17:56

While it’s sometimes a challenge for me to even get up the stairs of our uTest/Applause headquarters each morning, some of our global community of testers are climbing mountains or cycling around Europe.

Put uTester Silvano Parodi into that latter category as an avid cyclist who managed to tour two stages of the Tour de France this Summer. Silvano hails from Genova, Italy and is a Silver-rated tester on paid projects here at uTest, and a 10-year development vet in his day job.

Beyond uTest, cycling is one other area that Silvano has always taken to in his spare time, riding since the age of 13, when his dream was to win the Tour de France. Silvano may not have realized that part of his dream, but this summer, he certainly got a little bit closer to the stage he idolized as a kid.

Silvano and his wife put their bikes, a tent, and their uTest shirts on the car, and made the long trek of about 400 km (about 248 miles) to check out two alpine stages of the Tour de France.

They arrived a day early before the first stage they wanted to see, and at the summit of the final climb, pitched the tent they had brought along for the trip. According to Silvano, crowds gather from all over Europe, and even from as far as the United States and Australia, with their campers, caravans, and tents placed along the climb, each with a flag of the attendees’ home country. Silvano, as you’ll notice in the pictures here, served as a sort of flag/banner himself, sporting the spiffy uTest attire. Lookin’ good, sir!

Silvano and his wife got on their bikes and toured stages 13 and 14 of the Tour de France, finding a good spot to see the final stage of the big race.

So why all of this travel and effort for a relatively short trip to France? Beyond the fact that cycling is one of Silvano’s passions, he mentioned that cycling draws a parallel to his life as a tester and developer:

“Cycling tells me that hard work always leads to good results, and I try to apply this rule in my daily job. Also, road cycling, despite what most people think, is a team sport. For the victory of one rider, other teammates do a lot of work, bringing to him a bottle, staying ahead of him to protect from headwind, etc. Each rider has his or her own role and tasks to achieve the goal. But when the captain wins, the joy for all teammates is big. I think that this attitude is also good in a work environment — do your own job without jealousies against colleagues to reach the company objectives.”

A noble parallel to testing and development, at that. Keep riding, Silvano, and thanks for sharing the story (and pictures) with us!

Categories: Companies

Does Performance in the Cloud Matter?

Note: Scott Turner and his team from Verizon Terremark performed the tests on the Terremark Cloud Platform and other public clouds. Scott can be reached at One of the most appealing benefits of cloud deployment is the ease of use and the flexibility of adding or removing compute capacity. You can dynamically allocate resources […]

The post Does Performance in the Cloud Matter? appeared first on Compuware APM Blog.

Categories: Companies

Ontogeny, phylogeny and virtual methods

The Kalistick Blog - Wed, 09/03/2014 - 16:20

Today on Ask The Bug Guys, a question I get occasionally, particularly from C++ programmers learning C#:

I’ve heard that it’s a bad idea to call a virtual method from a constructor in C++ (or C#). Does the same advice apply to C# (or C++)?

Great question. Calling a virtual method from a constructor in an unsealed class is a bad practice in both C++ and C# (and also in Java, though I won’t discuss the Java details today) but the reasons why are subtly different.

There is a now-discredited hypothesis of biology that the development of an individual organism is a replay in miniature of the large-scale evolutionary history of the species. That is, in utero the foetus first resembles a single-celled organism, then a fish, and so on. This hypothesis is usually summarized as “ontogeny recapitulates phylogeny“, and it turns out to be false for biological systems. However it is true in C++! Typically when writing a C++ program you write a base class first, then a derived class, then a more derived class, and so on; this is the phylogeny of the class hierarchy. That process is then repeated in miniature when an instance of the most derived class is created; the ontogeny is: while the base constructor is running, the object behaves like an instance of the base class, then when the derived constructor is running the object behaves like an instance of the derived class. And when the constructors are all finished the object behaves like an instance of the most derived class.

Let’s look at an example:

#include <iostream>
class B {
public:
  B() { M(); }
  virtual void M() { std::cout << "B" << std::endl; }
};
class D : public B {
public:
  virtual void M() { std::cout << "D" << std::endl; }
  D() : B() { M(); }
};
int main(int argc, char* argv[]) {
  new D(); // B - D
  return 0;
}

The constructor for D first invokes the constructor for B, and when it runs the virtual method slot for M is still referring to B::M! The ontogeny of the object is recapitulating the phylogeny of the class hierarchy. When the B constructor completes, the virtual slot is rewritten to refer to D::M, which is then invoked by the D constructor.

By contrast, C# does not have this idea that the object progressively takes on different types during its construction. In C# an object is of its most-derived type from the moment the memory allocator creates storage for an instance of a class. And thus the seemingly-equivalent program in C# has different behavior:

using System;
class B
{
  public virtual void M() { Console.WriteLine("B"); }
  public B() { M(); }
}
class D : B
{
  public override void M() { Console.WriteLine("D"); }
  public D() : base() { M(); }
}
class P
{
  static void Main()
  {
    new D(); // D - D
  }
}

When the B constructor calls M the virtual slot is already referring to D.M; it never refers to B.M in an instance of D.

Clearly this is a bit of a “gotcha” for new C# programmers who are used to the way C++ does it (or vice versa), but why is this a bad practice in both languages?

Because it is confusing, surprising and dangerous. In C# programs we have a situation where code in B is calling a method of D before D’s constructor has run! That method might depend for its correctness or safety on some initialization that is performed in D’s constructor. One imagines something like:

class D : B
{
  int foo, bar;
  public override void M() { Console.WriteLine(foo + bar); }
}

where foo and bar are initialized in D’s constructor. If the initialization code has not run yet then the fields will still have their default values, which could be completely wrong. Such a bug could go unnoticed for a long time. C++ doesn’t have this problem because a method in D is not called before D’s constructor runs, but that is hardly better. Many C++ developers are unaware of this unusual feature of C++, and it could be surprising to them. One might reasonably expect that an overridden virtual method will always call the most-overriding method, not the least-overriding method. If the most-overriding version of a method performs a security check that the base class does not, then there could again be a serious, hard-to-spot bug in the program.

Either way, it’s a good idea to avoid this pattern in both languages. In particular, remember that in C# methods ToString, GetHashCode and Equals are virtual; try to avoid calling them in constructors of unsealed classes.

The post Ontogeny, phylogeny and virtual methods appeared first on Software Testing Blog.

Categories: Companies

Federal government, insurance firms encourage corporate health care data mining

Kloctalk - Klocwork - Wed, 09/03/2014 - 15:50

The federal government and major insurance companies are encouraging businesses to leverage data mining to improve employee health outcomes, Politico reported.

The source explained that the Affordable Care Act incentivizes businesses to offer wellness programs to their employees. Similarly, insurance companies will often provide discounts or other benefits to companies that take steps to improve the overall quality of employee health. Data mining is playing a key role in such efforts, as these strategies enable organizations to identify those individuals who are most at risk of developing medical complications and health problems down the road.

These efforts can prove extremely successful. The news source noted that data scientists recently used claims data and electronic health records to examine 37,000 employees at a major corporation. These scientists were able to predict which employees would develop diabetes within a year with a nearly perfect track record.

By applying data mining techniques to their own employees, companies can encourage workers to take proactive steps to reduce health risks.

The influence of Electronic Health Records
The rise in data mining in this field is due to a number of factors, but perhaps none has proved as important as the adoption of Electronic Health Records (EHR).

"The adoption of electronic health records has increased the amount of information available to employers and providers and health plans," explained Kulleni Gebreyes, director of the Health Industries Advisory group at PricewaterhouseCoopers, the news source reported. "You can mine the data to get the answers and make correlations you couldn't make before. And with the correlations, you address the root causes."

The federal government is also focusing on health care data mining more directly. A recent survey from MeriTalk and EMC found that 63 percent of federal executives in the health care field believe data analytics will improve their ability to track and manage population health.

Categories: Companies

Ranorex Customer Experience Survey

Ranorex - Wed, 09/03/2014 - 09:27
The main goal of this short survey is to gather some detailed information about how you use Ranorex and find out your opinion of the software – in order to make the Ranorex tools even better.

Please take this opportunity to let us know how you use Ranorex in your test projects by participating in this Ranorex Test Automation Survey.

You can rest assured that the information submitted will be kept confidential.

The survey should only take about 4 minutes to complete – and your participation may save you time in the future when working with Ranorex!

Give us your feedback...

Categories: Companies

Jenkins User Meet-up in Paris

My apologies for the last-minute announcement, but there will be a Jenkins user meet-up in Paris at 7:00pm on Sep 10th, which is just next week. The event is hosted by Zenika. You'll hear from Gregory Boissinot and Adrien Lecharpentier about plugin development, and I'll be talking about workflow.

It's been a while since we did a meet-up in Paris. Looking forward to seeing as many of you as possible. The event is free, but please RSVP so that we know what to expect.

Categories: Open Source

2 Ways Fast IT Helps Agencies and Development Shops

Assembla - Tue, 09/02/2014 - 20:27

Service providers use the Fast IT pitch to win business and increase quality and profitability.  

1. They are specialists.

2. Service providers can use the Fast IT model to build recurring revenue.  If they have a consistent stream of work, their practice will be easier to manage, more profitable, and have higher productivity and quality.  They often do a lot of work to specify and sell individual projects for people that often have annual budgets.  This is a big mismatch.  We can think about budgeting as a “Core” function that happens on a pretty slow time scale.  Service providers have an opportunity to say:  Give us a recurring budget, and we will allocate it to the Fast IT projects that are important in any month.  AND, we will maintain all of those projects over time.  A system like Assembla helps you track those projects so you can make upgrades on demand.

Categories: Companies

Writing Better Feature Files for BDD

Software Testing Magazine - Tue, 09/02/2014 - 19:14
Behavior Driven Development (BDD) is a software development technique that uses a specific format to both describe the system requirements (the features) and feed a functional software testing tool that verifies the software product. In this blog post, Shashikant Jagtap explains how to write better feature files. This approach is similar to the Test-Driven Development (TDD) approach, but applied at the requirements level. A feature is a template for a requirement that has two components: a feature title and a narrative. The article describes all ...
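For readers who have not seen one, a feature file in the usual Gherkin style pairs that title-plus-narrative header with concrete scenarios. An illustrative sketch (invented for this note, not taken from the article):

```gherkin
Feature: Withdraw cash from an ATM
  As an account holder
  I want to withdraw cash from an ATM
  So that I can get money when the bank is closed

  Scenario: Successful withdrawal from an account in credit
    Given my account balance is $100
    When I withdraw $20
    Then $20 should be dispensed
    And my account balance should be $80
```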
Categories: Communities

DevOps – From Practice to Doing

IBM UrbanCode - Release And Deploy - Tue, 09/02/2014 - 17:53

Here’s a recent presentation looking at what it takes to get DevOps going. Focus on architecture, testing and deployment.

DevOps in Practice: When does “Practice” Become “Doing”? from Michael Elder


Categories: Companies

ISO 29119: Why it is Dangerous to the Software Testing Community

uTest - Tue, 09/02/2014 - 16:41

Two weeks ago, I gave a talk at CAST 2014 (the conference of the Association for Software Testing) in New York, titled “Standards: Promoting quality or restricting competition?”

It was mainly about the new ISO 29119 software testing standard (according to ISO, “an internationally agreed set of standards for software testing that can be used within any software development life cycle or organization”), though I also wove in arguments about ISTQB certification.

My argument was based on an economic analysis of how ISO (the International Organization for Standardization) has gone about developing and promoting the standard. ISO’s behavior is consistent with the economic concept of rent seeking. This is where factions use power and influence to acquire wealth by taking it from others — rigging the market — rather than by creating new wealth.

I argued that ISO has not achieved consensus, or has even attempted to gain consensus, from the whole testing profession. Those who disagree with the need for ISO 29119 and its underlying approach have been ignored. The opponents have been defined as irrelevant.

If ISO 29119 were expanding the market, and if it merely provided another alternative — a fresh option for testers, their employers and the buyers of testing services — then there could be little objection to it. However, it is being pushed as the responsible, professional way to test — it is an ISO standard, and therefore, by implication, the only responsible and professional way.

What is Wrong With ISO 29119?

Well, it embodies a dated, flawed and discredited approach to testing. It requires a commitment to heavy, up-front documentation. In practice, this documentation effort is largely wasted and serves as a distraction from useful preparation for testing.

Such an approach blithely ignores developments in both testing and management thinking over the last couple of decades. ISO 29119 attempts to update a mid-20th century worldview by smothering it in a veneer of 21st century terminology. It pays lip service to iteration, context and Agile, but the beast beneath is unchanged.

The danger is that buyers and lawyers will insist on compliance as a contractual requirement. Companies that would otherwise have ignored the standard will feel compelled to comply in order to win business. If the contract requires compliance, then the whole development process could be shaped by a damaging testing standard. ISO 29119 could affect anyone involved in software development, and not just testers.

Testing will be forced down to a common, low standard, a service that can be easily bought and sold as a commodity. It will be low quality, low status work. Good testers will continue to do excellent testing. But it will be non-compliant, and the testers who insist on doing the best work that they can will be excluded from many companies and many opportunities. Poor testers who are content to follow a dysfunctional standard and unhelpful processes will have better career opportunities. That is a deeply worrying vision of the future for testing.

I was astonished at the response to my talk. I was hoping that it would provoke some interest and discussion. It certainly did that, but it was immediately clear that there was a mood for action. Two petitions were launched. One was targeted at ISO to call for the withdrawal of ISO 29119 on the grounds that it lacked consensus. This was launched by the International Society for Software Testing.

The other petition was a more general manifesto that Karen Johnson organized for professional testers to sign. It allows testers to register their opposition to ISTQB certification and attempts to standardize testing.

A group of us also started to set up a special interest group within the Association for Software Testing so that we could review the standard, monitor progress, raise awareness and campaign.

Since CAST 2014, there has been a blizzard of activity on social media that has caught the attention of many serious commentators on testing. Nobody pretends that a flurry of Tweets will change the world and persuade ISO to change course. However, this publicity will alert people to the dangers of ISO 29119 and, I hope, persuade them to join the campaign.

This is not a problem that testers can simply ignore in the hope that it will go away. It is important that everyone who will be affected knows about the problem and speaks out. We must ensure that the rest of the world understands that ISO is not speaking for the whole testing profession, and that ISO 29119 does not enjoy the support of the profession.

James Christie has 30 years’ experience in IT, covering testing, development, IT auditing, information security management and project management. He is now a self-employed testing consultant, based in Scotland. You can learn more about James and his work at his blog, and follow him on Twitter @james_christie.

Categories: Companies

Ranorex 5.1.2 Released

Ranorex - Tue, 09/02/2014 - 10:41
We are proud to announce that Ranorex 5.1.2 has been released and is now available for download.

General changes/Features
  • Added support for Firefox 32

Please check out the release notes for more details about the changes in this release.

Download latest Ranorex version here.
(You can find a direct download link for the latest Ranorex version on the Ranorex Studio start page.) 

Categories: Companies

Heartbleed raises more open source security challenges for federal government

Kloctalk - Klocwork - Mon, 09/01/2014 - 15:00

The discovery of the Heartbleed OpenSSL security vulnerability in April seems like old news at this point, but its impact continues to reverberate. Countless firms have been affected by this revelation, and few have fully put the open source flaw behind them.

One organization that has been particularly strongly affected by Heartbleed is the U.S. government. As NextGov contributor Jason Thompson recently discussed, OpenSSL is an incredibly important resource for the federal government, but Heartbleed raises questions about the viability of this and other open source solutions. To continue to utilize these offerings, a renewed focus on open source security may be necessary.

Government IT issues
While the expansive degree to which OpenSSL is used by organizations around the world has been widely discussed, few have noted how important this solution is for the U.S. government in particular. Thompson pointed out that OpenSSL, first released in 1998, was essential for the development of Internet-based government services.

OpenSSL remains critical for providing encryption for U.S. government IT to this day, making Heartbleed a serious security risk. Thompson reported that four hackers recently accepted a challenge from website security company Cloudflare and successfully managed to steal private Secure Shell security keys by exploiting Heartbleed. Considering the fact that Secure Shell protocol operates in the background of most government networks, encrypting connections, these hackers' actions raise serious concerns.

Federal agencies regularly use identity and access management solutions to control authorization for cloud infrastructure use, along with access to applications, servers and data. And as Thompson pointed out, the IAM tools within Secure Shell implementations are at risk when hackers exploit Heartbleed. This is particularly problematic when it comes to machine-to-machine data transfers and other non-human identity management, he explained.

Open source implications
However, despite all of these issues, Thompson maintained that open source solutions can still remain an invaluable resource for government agencies. The discovery of the Heartbleed vulnerability should not dissuade agencies from leveraging this technology, but rather cause departments to reconsider their approach to open source tools.

These issues should encourage "technology leaders to take another look at the critical but oft-forgotten infrastructure their agencies are riding on, especially when it is something as ubiquitous and critical as encryption technologies like SSL or Secure Shell," Thompson explained.

In particular, the writer emphasized the need for agency decision-makers to consider who creates keys within the agency, who monitors open source technology and who delivers support for open source tools, along with a variety of related IT issues.

Open source tools
This may also be the ideal time for agency leaders to consider whether their current open source tools are sufficient for an evolving IT realm. As Thompson explained, no software is safe from the threat of external attackers – sooner or later, someone is bound to discover a vulnerability. The best that organizations, including the federal government, can do to protect themselves is to invest in the best tools and strategies to defend against these risks.

For example, agencies should make sure that they have high-quality scanning solutions in place. These tools should be specifically designed to work with open source software code, identifying where this code is in use. Without such resources in hand, IT personnel cannot effectively identify where open source is in operation within the department, and therefore cannot ensure that open source best practices are being followed. 
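To make the scanning idea concrete: once an inventory of OpenSSL versions is available, flagging the Heartbleed-affected ones is a simple range check, since the bug was present in OpenSSL 1.0.1 through 1.0.1f and fixed in 1.0.1g. A minimal sketch (the function name is illustrative; real scanning tools do far more than match version strings):

```python
import re

# Heartbleed (CVE-2014-0160) affected OpenSSL 1.0.1 through 1.0.1f;
# it was fixed in 1.0.1g. Other branches (0.9.8, 1.0.0) were not affected.
VULNERABLE_LETTERS = set("abcdef")

def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if an OpenSSL version string falls in the affected range."""
    m = re.fullmatch(r"1\.0\.1([a-z]?)", version.strip())
    if not m:
        return False  # not a 1.0.1 release, so not in the affected range
    letter = m.group(1)
    # "1.0.1" (no letter) and "1.0.1a".."1.0.1f" are vulnerable.
    return letter == "" or letter in VULNERABLE_LETTERS

if __name__ == "__main__":
    for v in ["1.0.1", "1.0.1f", "1.0.1g", "0.9.8y"]:
        print(v, is_heartbleed_vulnerable(v))
```

A check like this only covers version metadata; it cannot find statically linked or vendored copies of the library, which is exactly why the article argues for dedicated scanning solutions.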

Additionally, agencies should implement governance and provisioning solutions to guarantee compliance and protect open source usage against security and functional risks. Only with such tools in place can the U.S. government continue to leverage open source resources for maximum utility. 

Categories: Companies

To preload or not to preload...

Rico Mariani's Performance Tidbits - Fri, 08/29/2014 - 21:25


My application starts slowly, I want to preload it to avoid that problem.   Should I be worried?


Well, in short, there are lots of concerns.  Preloading things you may or may not need is a great way to waste a ton of memory and generally make the system less usable overall.

I’m often told that the answer to a performance problem is to simply preload the slow stuff… unfortunately that doesn’t work as a general solution if everyone does it.  It’s classic “improve the benchmark” thinking.

When developing for Windows you have to think about all kinds of scenarios, such as the case where there are several hundred users trying to share a server, each with their own user session.  Your application might also need to run in a very memory-constrained environment like a small tablet – you do not want to be loading extra stuff in those situations. 
The way to make a system responsive is to KEEP IT SIMPLE.  If you don’t do that, then it won’t matter that you’ve preloaded it -- when the user actually gets around to starting the thing in a real world situation, you will find that it has already been swapped out to try to reclaim some of the memory that was consumed by preloading it.  So you will pay for all the page faults to bring it back, which is probably as slow as starting the thing in the first place.  In short, you will have accomplished nothing other than using a bunch of memory you didn’t really need.

Preloading in a general-purpose environment is pretty much a terrible practice.  Instead, pay for what you need when you need it and keep your needs modest.  You only have to look at the tray at the bottom right of your screen, full of software that was so sure it was vitally important to you that it insisted on loading at boot time, to see how badly early loading scales up.
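The "pay for what you need when you need it" advice usually comes down to lazy initialization: defer the expensive load until the first real use, then reuse the result. A minimal sketch (the function name and payload are hypothetical):

```python
import functools

# Lazy initialization: nothing is loaded at startup. The first call pays
# the construction cost; every later call reuses the cached result.
@functools.lru_cache(maxsize=None)
def get_dictionary():
    # Stand-in for an expensive step, e.g. parsing a large data file.
    return {i: str(i) for i in range(1000)}

# At this point no work has been done. The cost is only paid if and when
# some code path actually calls get_dictionary().
```

If the resource is never needed in a given session, its cost is never paid at all, which is exactly the property preloading gives up.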

Adding fuel to this already bonfire-sized problem is this simple truth: any application preloading itself competes with the system trying to do the very same thing.  Windows has long included powerful features to detect the things you actually use and get them into the disk cache before you actually use them, whether they are code or data.  Forcing your code and data to be loaded is just as likely to create more work evicting the unnecessary bits from memory to make room for something immediately necessary, whereas doing nothing would have resulted in ready-to-go bits if the application is commonly used with no effort on your part.


Bottom line, preloading is often a cop out.  Better to un-bloat.

Categories: Blogs
