
Feed aggregator

Security tests play a key role in successful health care app development

Kloctalk - Klocwork - Mon, 10/20/2014 - 15:00

Health care apps are rapidly growing in popularity. Consumers are eager to get their hands on apps that can help them track their daily activities and gain insight into their own health. Doctors, too, are enthusiastic about these tools, as they provide a much clearer understanding of an individual's physical condition than a brief checkup can offer.

Yet for all the benefits that these apps can deliver, their development can be a difficult undertaking. As TechTarget contributor Trevor Strome recently asserted, one of the keys to success in this area is achieving the right balance. Only by accommodating both security and usability can these apps deliver functionality without compromising patient privacy.

Old lessons
As the writer explained, he worked on health care app development projects in the mid-1990s. These apps were intended for tablets, which were exceedingly rare at the time and far less sophisticated than today's offerings. However, despite the seeming chasm that exists between the technology of old and what's available today, Strome argued that many of the rules that governed health care app development back then remain relevant to this day.

Key among these lessons is the need to strike the right balance between usability and security.

In terms of usability, Strome emphasized the importance of developing a comprehensive understanding of the project requirements and whatever issues or problems the app is supposed to address.

"For example, is the app needed for supplemental data collection (for quality improvement projects), clinical charting or information delivery (as required for evidence-based medicine)? By understanding the requirements, developers will have a better chance of including all the necessary functions and information that will make the app a useful asset," Strome wrote.

Another consideration, according to the writer, is the issue of workflow integration. All apps need to be available to users while they are on the move, and this is especially true when it comes to apps designed to collect or provide access to health care data. If doctors, nurses and clinicians are to take advantage of the potential utility of these assets, the apps need to accommodate business workflows. If this is not the case, Strome explained, then the apps will become more of a burden or hassle than a valuable resource for care providers.

"Process and workflow considerations can mean the difference between successful development or another add to the slag-pile of apps that didn't meet expectations," the writer stated.

Security concerns
Ultimately, though, none of these considerations should trump security. Strome pointed out that as early as 1996, the potential danger of lost or stolen mobile devices was already a hot-button issue, demanding that mobile health care apps have significant security in place to ensure sensitive data remains unavailable to unauthorized users.

The need for app security has grown even more significant in recent years. Not only do developers need to worry about the individual user's information, but the heavily interconnected nature of modern mobile devices may put the entire network at risk.

Furthermore, there are now far more regulations on the books that concern how both care providers and general organizations handle patient data. Failure to comply with these rules can lead to serious repercussions.

That is why it is so important for mobile health care app developers to make application security a priority. A big part of this should be the implementation of sophisticated security tools, such as static code analysis solutions. Static analysis can help to reduce testing costs and increase developer productivity, all while ensuring that app development code remains safe, secure and reliable.
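To make the idea concrete, here is a toy sketch of the kind of check a static analysis tool performs. The rules and data shapes below are invented for illustration; real analyzers like Klocwork build an abstract syntax tree and data-flow model rather than matching patterns line by line:

```javascript
// Minimal sketch of a pattern-based source scan (illustrative only; a
// production static analyzer works on the AST, not on raw text).
const rules = [
  { id: "no-eval", pattern: /\beval\s*\(/, message: "eval() can execute injected code" },
  { id: "no-hardcoded-key", pattern: /api[_-]?key\s*=\s*["'][^"']+["']/i, message: "hard-coded credential" },
  { id: "no-http", pattern: /["']http:\/\//, message: "unencrypted transport for sensitive data" },
];

// Scan source text and report which rule fired on which line.
function scanSource(source) {
  const findings = [];
  source.split("\n").forEach((line, i) => {
    for (const rule of rules) {
      if (rule.pattern.test(line)) {
        findings.push({ line: i + 1, id: rule.id, message: rule.message });
      }
    }
  });
  return findings;
}
```

Even this crude version shows the payoff: defects are surfaced before the app ever handles a patient record, which is exactly where the testing-cost savings come from.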

Learn more:
• See how static code analysis built for Android development helps secure your code (PDF)
• Understand how security breaches occur and steps to minimize them by watching this webinar

Categories: Companies

Testing the Limits With Testing ‘Rock Star’ Michael Larsen — Part I

uTest - Mon, 10/20/2014 - 15:00

Michael Larsen is a software tester based out of San Francisco. Including a decade at Cisco in testing, he also has an extremely varied rock star career (quite literally…more on that later) touching upon several industries and technologies, including virtual machine software and video game development.

Michael is a member of the Board of Directors for the Association for Software Testing and a founding member of the “Americas” Chapter of “Weekend Testing.” He also blogs at TESTHEAD and can be reached on Twitter at @mkltesthead.

In Part I of our two-part Testing the Limits interview, we talk with Michael about the most rewarding parts of his career, and how most testers are unaware of a major “movement” around them.

uTest: This is your first time on Testing the Limits. Could you tell our testers a little bit about your path into testing?

Michael Larsen: My path to testing was pure serendipity. I initially had plans to become a rock star in my younger years. I sang with several San Francisco Bay Area bands during the mid-to-late 80s and early 90s. Not the most financially stable life, to say the least. While I was trying to keep my head above water, I went to a temp agency and asked if they could help me get a more stable “day job.” They sent me to Cisco Systems in 1991, right at the time that they were gearing up to launch for the stratosphere.

I was assigned to the Release Engineering group to help them with whatever I could, and in the process, I learned how to burn EEPROMs, run network cables, wire up and configure machines, and I became a lab administrator for the group. Since I had developed a good rapport with the team, I was hired full-time and worked as their lab administrator. I came to realize that Release Engineering was the software test team for Cisco, and over the next couple of years, they encouraged me to join their testing team. The rest, as they say, is history.

uTest: You also come from a varied tech career, working in areas including video game development and virtual machine software. Outside of testing, what has been the most rewarding “other” part of your career?

ML: I think having had the opportunity to work in a variety of industries and work on software teams that were wildly varied. I’ve had both positive and negative experiences that taught me a great deal about how to work with different segments of the software world. I’ve worn several hats over the years, including on-again, off-again stints doing technical support, training, systems and network administration, and even some programming projects I was responsible for delivering.

All of them were memorable, but if I had to pick the one unusual standout that will always bring a smile to my face, it was being asked to record the guide vocal for the Doobie Brothers song “China Grove,” which appeared on Karaoke Revolution, Volume 3 in 2004.

uTest: You are also a prolific blogger and podcast contributor. Why did you get into blogging and why is it an effective medium for getting across to testers?

ML: I started blogging before blogging was really a thing, depending on who you talk to. Back in the late 90s, as part of my personal website, I did a number of snowboard competition chronicles for several years called “The Geezer X Chronicles.” Each entry was a recap of the event, my take on my performance (or lack thereof) and interactions with a variety of the characters from the South Lake Tahoe area. Though I didn’t realize it at the time, I was actively blogging for those years.

In 2010, I decided that I had reached a point where I felt like I was on autopilot. I didn’t feel like I was learning or progressing, and it was having an effect on my day-to-day work. I had many areas of my life that I was passionate about (being a musician, being a competitive snowboarder, being a Boy Scout leader), but being a software tester was just “the day job that I did so I could do all the other things I loved.”

I decided I wanted to have that same sense of passion about my testing career, and I figured if my writing about snowboarding had connected me with such an amazing community, maybe writing about software testing would help me do the same. It has indeed done that — much more than I ever imagined it would. It also rekindled a passion and a joy for software testing that I had not felt in several years.

uTest: And your own blog is called ‘TESTHEAD.’ That sounds like a very scary John Carpenter movie.

ML: I’m happy it’s memorable! The term “test head” was something we used when I was at Cisco. The main hardware device in the middle that we’d do all the damage to was called the test head. I’ve always liked the idea of trying to be adaptable and letting myself be open to as many experiences and methods of testing as possible, even if the process wasn’t always comfortable. Because of that, I decided that TESTHEAD would be the best name for the blog.

uTest: As you know, James Bach offers free “coaching” to testers over Skype. You’re a founding member of the Americas chapter of “Weekend Testing,” learning sessions for testers in the Western Hemisphere. Does Weekend Testing run off of a similar concept?

ML: Weekend Testing is a real-time chat session with a number of software testers, so it’s more of a group interaction; James’ Skype coaching is one-on-one. It has some similarities. We approach a testing challenge, set up a mission and charters, and then we review our testing efforts and the things we learn along the way — but we emphasize a learning objective up front so that multiple people can participate. We also time-box the sessions to two hours, whereas James will go as long as he and the person he is working with have energy to continue.

uTest: In the video interview you gave with us, you mentioned a key problem in testing is the de-emphasis of critical thinking as a whole in the industry. Are endeavors such as Weekend Testing more of a hard sell than they should be because of testers’ unwillingness to “grow?”

ML: I think we have been fortunate in that those that want to find us (Weekend Testing) do find us and enjoy the interactions they have. Having said that, I do think that there are a lot of software testers currently working in the industry that don’t even realize that there is a movement that is looking to develop and encourage practitioners to become “sapient testers” (to borrow a phrase from James Bach).

When I talk with testers that do understand the value of critical thinking, and that are actively engaged in trying to become better at their respective craft, I reluctantly realize that the community that actively strives to learn and improve is a very small percentage of the total number of software testing practitioners. I would love to see those numbers increase, of course.

Stay tuned for Part II of Michael Larsen’s Testing the Limits interview next Monday on the uTest Blog. Amongst other discussion topics, Michael will share why he believes “silence” is powerful on testing teams.

Categories: Companies

Your Competency Matrix May Not Match Reality

The Social Tester - Mon, 10/20/2014 - 14:00

I’ve just posted over on my LinkedIn profile about how a competency matrix may not be helpful in working out your team’s competencies. In the article I outline four problems with a competency matrix and a simple solution to all of them – focus on behaviours and results. Here are the four identified problems. The … Read More →

The post Your Competency Matrix May Not Match Reality appeared first on The Social Tester.

Categories: Blogs

3 Handy Widgets for Monitoring Product Requirements

The Seapine View - Mon, 10/20/2014 - 12:00

Looking to monitor requirements and design on your TestTrack Home Page? Here are 3 widgets that can help. For these examples, I’m going to assume you’re using a simple requirements workflow as shown below.

Requirement workflow diagram


#1 – Requirements Under Review

It’s not uncommon for requirements to go out for review and then sit for a while, waiting on the review to actually happen. You can use a widget to monitor and quickly review the list of requirements under review, to make sure reviews are being done in a timely manner. Remember that creating a widget is a two-step process: first you create a filter, then you create a widget that uses the filter.

Using the simple requirements workflow, there’s just one state for design reviews called “Ready for Review”, and you can quickly create a filter to identify requirements being reviewed.
Filter for requirements under review

Once the filter is saved, create the widget by going to Tools > Administration > Home Widgets and clicking the + button. For this example, I’m going to simply tag requirement widgets in blue but you could use a 2-color mapping if you want to highlight when the number of requirements under review hits a certain threshold.

Widget for requirements under review

This kind of widget can be used to track requirements in any combination of states. Many customers have a special state for requirements that change after being approved. Those are usually changes that need to be scrutinized quickly, and tweaking the filter above to look at that state would let you watch for those on the Home Page as well.

#2 – High-churn Requirements

Change happens, but requirements that are constantly changing cause issues throughout the project. The constant back and forth of reviews annoys stakeholders and takes time away from more valuable work. Additionally, high churn in a specific requirement or area of the design often points to an underlying flaw in the design that should be addressed so that it doesn’t drag the entire project down.

To monitor requirements churn, you first need to create a calculated field to measure the churn; then you can create the filter and widget. Go to Tools > Administration > Custom Fields, add a new Requirements custom field, and set the Field Type to “Calculated.” For this example, I’m going to call the field “Churn” and base it on how many times the Reject event has been applied to a requirement (again, using the simple requirements workflow). The details for the new custom field are in the following screenshot. To build the formula, I clicked Insert Formula, then selected Items.Events.count and chose the Reject event from the Inputs menu.

Calculated field for requirements churn

Now that the field is set up, it’s straightforward to create a filter. For this example, I’m going to filter in any requirement that’s gone through the review cycle more than 3 times. I want to know about the 4th rejection so I can review the requirement and determine how to stop the churning.

Filter for high-churn requirements

The last step in the process is to create a widget. For this one, I’m going to use a 2-color widget; green is good and red is bad.

Widget for high-churn requirements
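Outside of TestTrack, the churn logic above amounts to counting Reject events per requirement and flagging anything past the threshold. A minimal sketch, assuming each requirement carries its own event history (the data shape here is hypothetical, not a TestTrack API — TestTrack computes this server-side via the calculated "Churn" field):

```javascript
// Churn = number of times the "Reject" event has been applied.
function churn(requirement) {
  return requirement.events.filter((e) => e.name === "Reject").length;
}

// Mirror the filter: keep only requirements rejected more than `threshold` times.
function highChurnRequirements(requirements, threshold = 3) {
  return requirements.filter((r) => churn(r) > threshold);
}
```

With the default threshold of 3, a requirement shows up in the list on its 4th rejection, matching the filter described above.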

#3 – Business Requirements to Break Down

Early in the process, you might have a big list of requirements from marketing or the product owner that need to be broken down into functional requirements, specifications, and the like. Some customers I talk to get an email or document from the product owner and put the high-level requirements into TestTrack themselves, while others allow marketing team members or even customers to enter requirements in TestTrack on their own. Everyone’s process is a little different; the key is making sure no requirements are lost in the mix. One way to watch for those incoming requirements is with a widget.

For this example, I’m looking for specific types of requirements and checking whether they have a link to other types of requirements. The link type shown below might not be in your project; it all depends on how you’ve configured link definitions. Use whatever link type makes the most sense, given your development process.


For the widget, I’m sticking with blue for requirements widgets. This widget can help team leads identify missed requirements, or it could be used by the entire team to help them identify requirements they can grab and start working on. If you prefer to assign requirements to specific individuals before work starts, you could tweak this widget by adding an “Assigned to” restriction in the filter. This would shift the focus of the widget from helping everyone identify missed requirements to helping individual team members identify the specific requirements they should be working on next.


So there you have it, 3 handy widgets to help you monitor requirements and design on the Home Page. If you’ve created some cool widgets for your team, leave a note in the comments below. I’d love to hear the details!


Categories: Companies

Springboard or Straightjacket?

Hiccupps - James Thomas - Sun, 10/19/2014 - 11:50
It's a common claim that constraints feed creativity. But it doesn't have to be so; constraints may, well, constrain creativity too.

I use mnemonics such as SFDPOT in my testing. Seeding thoughts with each of the specific areas encourages idea generation but throwing away ideas that come from it, perhaps because they don't seem to fit into that category, holds it back. Ideas often form chains, and it may take several "invalid" links to get to a "valid" one. Breaking the chain loses that opportunity.
Image: Zappa
Categories: Blogs

Mad Scientists Welcome at the STARWEST 2014 Test Lab

uTest - Fri, 10/17/2014 - 19:30

Testing is dull, boring, and repetitive.

Ever heard anyone say that? Well at STARWEST 2014, the theme is Breaking Software (in the spirit of Breaking Bad), and this crowd is anything but dull! Creativity abounds at this conference, from the whimsical (yet impactful) session topics to the geek-chic booth themes (I do so love a good Star Wars parody!) to the on-site Test Lab run by what at first glance appears to be a crew of mad scientists. Boring or repetitive? I don’t think so!

Because the Test Lab was such a fun space, I interviewed one of the mad scientist/test lab rats, Paul Carvalho, to get the lowdown on what STARWEST 2014 attendees have been up to. Check out the video below for a tour of the STARWEST Test Lab, complete with singing computers, noisy chickens, talking clocks, and more!

You can learn more about Paul Carvalho – an IT industry veteran of more than 25 years – at Software Testing and Quality, where he is the principal consultant. You can also find him on LinkedIn here.

So what do you think about the STARWEST Test Lab? What would you try to break first? Let us know in the Comments below, and check out all of our coverage from STARWEST 2014.

Categories: Companies

Take a 2-minute Video Tour of the New Surround SCM Web Client

The Seapine View - Fri, 10/17/2014 - 19:00

Surround SCM’s new web client gives you limited, read-only access to Surround SCM source files, repositories, and branches. If you don’t have the Surround SCM Client installed on your computer or device but need to view or get files from Surround, this new web client is for you.

Yan Shapochnik, Seapine software architect, recorded a two-minute video tour if you’re curious about the new functionality in the web client.



Categories: Companies

STARWEST 2014 Interview: Mind Over Pixels — Quality Starts With the Right Attitude

uTest - Fri, 10/17/2014 - 17:10

How important is a tester’s mindset and attitude when it comes to testing?

I sat down with Stephen Vance, one of the STARWEST 2014 speakers, to chat about just that. As an Agile/Lean coach, Stephen is passionate about helping testers understand how to communicate with developers to better integrate into the software development process, and it all starts with the attitude you bring to the table.

Stephen teaches that investing in a “distinctly investigative, exploratory, hypothesis-driven mindset” is key to achieving process improvement at all levels of the software organization. He sees the value in the iterative approach that so well suits the skills testers bring to a collaboration, and encourages testers to be integral in more aspects of a project than just the black-and-white testing phases.

Stephen’s STARWEST 2014 session was called “Building Quality In: Adopting the Tester’s Mindset.” If you weren’t able to attend, check out my interview with him below to hear what else he had to say!

You can also read more about Stephen Vance on his website and connect with him on LinkedIn here.

What are some ways you think testers can use a hypothesis-driven, investigative approach to inject greater value into the software development life cycle? Feel free to sound off in the Comments below.

Categories: Companies

Future of commercial open source software still unclear

Kloctalk - Klocwork - Fri, 10/17/2014 - 15:05

Without question, open source software continues to grow more popular around the world. Individuals and organizations from every region and sector now leverage these tools for a wide range of purposes and experience significant benefits as a result.

Yet despite this progress, the future of open source software remains unclear in a number of different areas. Notably, it is difficult to predict whether commercial open source products will eventually prove viable, as Forbes contributor Adrian Bridgwater recently discussed.

Commercial open source issues
Bridgwater noted that open source solutions are now widely used in various capacities, and in some fields have become dominant. For example, he pointed out that Hadoop is the biggest name when it comes to big data analytics applications.

But the popularity of this and other open source software offerings does not reveal much about the viability of commercial open source solutions, the writer asserted. There have been some efforts in this direction in the past – most notably, Sun Microsystems and its "free until you need maintenance and support" model – but thus far commercial open source remains more of a concept than a viable business strategy.

For commercial open source to work, Bridgwater suggested that the mechanics would have to involve making application code libraries static, rather than dynamic. This would make the libraries certifiable. In this arrangement, the Free and Open Source Software community would have the ability to propose alterations and improvements to public code repositories while the static libraries could be removed, optimized or certified as needed.

This would protect critical, sensitive systems, such as aircraft cockpit control units, from being modified by open source enthusiasts or students who lack the expertise or accountability to confidently update this code. However, Bridgwater acknowledged that such a deployment would potentially undercut innovation, which relies largely on input from a wide range of volunteers.

Viability questions
The above scenario could serve as a model for commercial open source efforts, but this does not mean that such a model would prove viable for any organization. According to Bridgwater, though, there is good reason to suspect that these offerings will arise in the near future.

The writer pointed out that the focus on customer support's importance has never been greater. Boriana Ditcheva, Web development director at the North Carolina Biotechnology Center, recently argued that open source communities offer superior assistance compared to traditional technical support. This suggests that there is a market for commercial products that deliver user assistance in an open source fashion.

Proprietary and open source software together
Regardless of whether commercial open source software becomes a field in its own right, companies are already leveraging open source as a means of adding value to their proprietary offerings, as The Server Side recently highlighted.

The source explained that when a startup or other software provider goes out of business or is bought out by a competitor, its software offerings no longer receive support. Any organizations that have invested in these software solutions suddenly find themselves out of luck. This state of affairs hurts not just the directly affected companies, but also software developers, as they now face an additional hurdle when trying to convince potential customers to embrace their solutions.

According to The Server Side, many firms now combat this state of affairs by leveraging open source strategies. By merging their proprietary software with open source, organizations can ensure that their software remains supportable even if the company can no longer offer support itself. This increases consumer confidence and further establishes the importance of open source software as a business model.

Categories: Companies

Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.


Processor

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multi-threading, a simple 4-core CPU will cover most needs.


Memory

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. If you have extensions installed for Visual Studio, it will quickly rise to 1GB of usage per instance. And finally... Chrome. With multiple extensions and multiple pages running, you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.


Hard Drive

512 GB SSD drive

I can't recommend an SSD enough. Most tools you use on a development machine require a lot of I/O, especially random reads. When a compiler starts and retrieves all your source code to compile, it needs to read all those files. The same goes for tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400 RPM HDD in laptops to reduce power usage. However, a 5400 RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second hard drive of the HDD type instead. It's slower, but capacities are higher. On most laptops, you will need external storage for this drive, so make sure it is USB 3.0 compatible.

Graphic Card

Unless you do graphic rendering or work with graphic tools that require a beast of a card... this is where you will spend the least amount of money.

Make sure to get enough outputs for your number of monitors, and that the card can drive the right resolution and refresh rate.


Monitor

My minimum requirement nowadays is 22 inches. 4K is nice but is not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying monitors for someone else, make sure they can be rotated; some developers like to have a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to be taken out of the building, you can go desktop. You will save a bit on all the required accessories (docking station, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city as well as doing presentations left and right. Laptop it is for me.

Categories: Blogs

SVG is now supported everywhere, or almost

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

I remember that when I wanted to draw some graphs on a web page, I normally had two solutions.

Solution 1 was to have an IMG tag that linked to a server component that would render an image based on some data. Solution 2 was to use Adobe Flash, or maybe even some Silverlight.

Problem with Solution 1

The main problem is that it is not interactive. You have an image, and there is no way to drill down or do anything with it. So unless your content was simple and didn't need any kind of interaction, or was simply headed for printing... this solution just wouldn't do.

Problem with Solution 2

While you now get all the interactivity and the beauty of a nice Flash animation... you lose the benefits of the first solution too. You can't print it if you need to, and on top of that... it requires a plugin.

On OS X back in 2009, plugins were the leading cause of browser crashes, and there is no reason to believe things were much different for other browsers.

The second problem is security. A plugin is just another attack vector on your browser, and requiring a plugin just to display nice graphs seems a bit extreme.

The Solution

The solution is relatively simple: we need a system that lets us draw lines, curves and whatnot based on coordinates that we provide.

That system should, of course, support colors, fonts and all the basic HTML features we know today (including events).

Then came SVG

SVG has been the main specification for drawing anything vector-related in a browser since 1999. Even though the specification started around the same time as IE5, it wasn't supported in Internet Explorer until IE9 (12 years later).

Support for SVG is now in all major browsers, from Internet Explorer to Firefox, and even on your phone.

Chances are that every computer you are using today can render SVG inside your browser.

So what?

As a general rule, SVG is underused, or thought of as something only artists do, or as too complicated to bother with.

My recommendation is to start today on using libraries that leverage SVG. By leveraging them, you set yourself apart from others and can start offering real business value to your clients right now that others won't be able to match.

SVG has been available on all browsers for a while now. It's time we start using it.
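To see how little it takes, here is a sketch that turns a series of data points into an inline SVG polyline, no plugin and no server-side image rendering required. The function name and scaling choices are mine, just enough to show the idea; a library like d3.js does this far more flexibly:

```javascript
// Build an inline SVG line chart from data points. The resulting markup
// can be inserted straight into the page; every element stays scriptable
// and printable, unlike a server-rendered image or a Flash chart.
function lineChart(points, width = 300, height = 100) {
  const maxY = Math.max(...points);
  const stepX = width / (points.length - 1);
  // SVG's y axis grows downward, so invert the values for display.
  const coords = points
    .map((y, i) => `${i * stepX},${height - (y / maxY) * height}`)
    .join(" ");
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<polyline fill="none" stroke="steelblue" points="${coords}"/>` +
    `</svg>`
  );
}
```

In a browser, `document.body.innerHTML = lineChart([1, 2, 4])` is all it takes to render the chart.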

Browsers that do not support SVG
  • Internet Explorer 8 and lower
  • Old Android devices (2.3 and lower); partial support from 3 to 4.3
References, libraries and others
Categories: Blogs

Microsoft, Open Source and The Big Ship

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

I would like to note that this post is based only on publicly available information, not on my status as a Microsoft MVP. I did not interview anyone at Microsoft for these answers, nor did I receive any privileged information while writing this post.

When it happened

I'm not sure exactly when this change toward open source happened. Microsoft is a big ship. Once you start steering, it takes a while before you can feel the boat turn. I think it happened around 2008 when they started including jQuery in the default templates. It was the first swing of the wheel. Back then, you could have confused it for just another side project. Today, I think it was a sign of change.

Before this subtle change, we had things like Microsoft Ajax, the Ajax Control Toolkit and so many other reinventions from Microsoft. The same comment came back every time:

Why aren't you using <INSERT FRAMEWORK HERE> instead of reinventing the wheel?

Open source in the Microsoft world

Over 10 years ago, Microsoft wasn't doing open source. In fact, nothing I remember was open sourced. Free? Yes. Open source? No. The mindset of those days has changed.

The Changes

Initiatives like NuGet, integrating jQuery into Visual Studio templates, the multiple GitHub accounts, and even going as far as to replace the default JSON serializer with JSON.NET instead of writing its own are all proof that Microsoft has changed and is continuing to change.

It's important to take into account that this is not just lip service here. We're talking real time and money investment to publish tools, languages and frameworks into the open. Projects like Katana and Entity Framework are even open to contribution by anyone.

Not to mention that Roslyn (the new C#/VB.NET compiler) as well as the F# compiler are now open source.

This is huge and people should know.

Where is it going today

I'm not sure where it's going today. Like I said, it's a big ship. From what I see, Microsoft is going 120% on Azure. Of course, Windows and Office are still there, but… we already see that it's not an open source vs. Windows war anymore. The focus has changed.

Open source is being used to enrich Microsoft's environment now. Tools like SideWaffle are being created by Microsoft employees like Sayed Hashimi and Mads Kristensen.

When I see a guy like Satya Nadella (CEO) talk about open source, I find it inspiring. Microsoft is going open source internally and encouraging all employees to participate in open source projects.

Microsoft has gone through a culture change, and it's still happening today.

Comparing Microsoft circa 2001 to Microsoft 2014.

If you have been in the field for at least 10 years, you will remember that back then, Microsoft didn't do open source. At all.

Compare that to what you've just read about Microsoft now. It's been years of change since then, and it's only the beginning. Back then, I wouldn't have believed anyone telling me that Microsoft would invest in open source.

Today? I'm grinning so much that my teeth are dry.

Categories: Blogs

List of d3.js library for charting, graphs and maps

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

So I’ve been trying different kinds of libraries based on d3.js. Most of them are awesome and… I know I’m going to forget some of them. So I decided to build a list and try to arrange them by category.

  • DimpleJS – Easy API, lots of different types of graphs, easy to use
  • C3.js – Closer to the data than Dimple but also a bit more powerful
  • NVD3.js – Similar to Dimple, requires a CSS file for proper usage
  • Epoch – Seems to be more focused on real-time graphs
  • Dygraphs – Focused on huge datasets
  • Rickshaw – Lots of easy charts to choose from. Used by Shutterstock

Since I haven’t had the chance to try them out, I won’t be able to provide more detailed comments about them. If you want me to update my post, hit me up on Twitter @MaximRouiller.

Data Visualization Editor
  • Raw – Focus on bringing data from spreadsheets online by simply copy/pasting it.
  • Tributary – Not simply focused on graphics, allows you to edit numbers, colors and such with a user friendly interface.
Geographical maps
  • DataMaps – Not a library per se but a set of examples that you can copy/paste and edit to match what you want.
Categories: Blogs

How to display a country map with SVG and D3js

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

I’ve been babbling recently with charts, and most of it was with DimpleJS.

However, beneath DimpleJS is d3.js, which is an amazing tool for drawing anything in SVG.

So to babble some more, I’ve decided to do something simple: draw Canada.

The Data

I’ve taken the data from this repository that contains every line that forms our Maple Syrup Country. Ours is called “CAN.geo.json”. This is a GeoJSON file, which lets you parse geolocation data without a hitch.
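To give a feel for why GeoJSON is so easy to work with: it is plain JSON, so once loaded you can walk its `features` array directly. The tiny inline polygon below is made up for illustration; the real CAN.geo.json is of course far larger:

```javascript
// A minimal, made-up GeoJSON document with a single polygon feature.
var geoJson = {
  type: "FeatureCollection",
  features: [{
    type: "Feature",
    properties: { name: "Canada" },
    geometry: {
      type: "Polygon",
      coordinates: [[[-141, 60], [-52, 47], [-64, 44], [-141, 60]]]
    }
  }]
};

// Each feature carries its own properties and geometry, which is
// exactly the shape that d3's geo path generator consumes.
var names = geoJson.features.map(function (f) { return f.properties.name; });
var firstRing = geoJson.features[0].geometry.coordinates[0];
```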

The Code

var svg = d3.select("#chartContainer")
    .append("svg")
    .attr("style", "border: solid 1px black")
    .attr("width", "100%")
    .attr("height", "350px");

var projection = d3.geo.mercator().center([45, 55]);
var path = d3.geo.path().projection(projection);

var g = svg.append("g");
d3.json("/data/CAN.geo.json", function (error, json) {
    g.selectAll("path")
        .data(json.features)
        .enter()
        .append("path")
        .attr("d", path)
        .style("fill", "red");
});

The Result

Conclusion

Of course this is not something very amazing. It’s only a shape. This could be the building block necessary to create the next eCommerce world-wide sales revenue report.

Who knows… it’s just an idea.

Categories: Blogs

Animating your charts with Storyboard charts from DimpleJS and d3js

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44


Storyboards are charts/graphs that tell a story.

To have a graph, you need a timeline. Whether it’s days, weeks, months or years… you need a timeline of what happens. Then, to have a chart, you need two axes: one that tells one version of the story, and another that relates to it. Then you move forward in time and move the data points. For each of those points, you also need to be able to label it.

So let’s make a list of what we need.

  1. Data on a timeline.
  2. One numerical data series.
  3. Another numerical data series that correlates to the first in some way.
  4. A label to identify each point on the graph.

I’ve taken the time to think about it, and there’s one type of data that is easy to come up with (I’m just writing a technical blog post, after all).

Introducing the DataSet

I’ve taken the GDP and population per country for the last 30 years from World Economics and merged them into one single file.

Note: World Economics is very keen to share data with you in formats that are more readable than what is on their website. Contact them through their Twitter account if you need their data!

Sounds simple, but it took me over an hour to actually merge all that data. So contact them to get a proper format that is more developer friendly.
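For the curious, the merge itself is mechanical once both files are keyed the same way: join rows on (Country, Year). Here is a rough sketch of the idea; the field names and the `mergeByCountryYear` helper are my own assumptions about the merged file, not World Economics' actual format:

```javascript
// Join GDP rows and population rows on the (Country, Year) pair.
function mergeByCountryYear(gdpRows, popRows) {
  var byKey = {};
  gdpRows.forEach(function (row) {
    byKey[row.Country + "|" + row.Year] = {
      Country: row.Country, Year: row.Year, GDP: row.GDP
    };
  });
  popRows.forEach(function (row) {
    var merged = byKey[row.Country + "|" + row.Year];
    if (merged) {
      merged.Population = row.Population;
    }
  });
  // Flatten the lookup back into an array of merged rows.
  return Object.keys(byKey).map(function (k) { return byKey[k]; });
}
```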

The final result is an animated bubble chart of GDP versus population over time.

The Code

That’s the most bonkers thing ever. Once you have the data properly set up, this doesn’t require much code. Here’s the code to generate the same graph on your end:

$.ajax("/GDP.csv", {
    success: function (data) {
        var csv = d3.csv.parse(data);

        var post3 = function () {
            var svg = dimple.newSvg("#storyboardGraph", 800, 600);

            csv = dimple.filterData(csv, "Year", ["2000", "2001", "2002", "2003",
                "2004", "2005", "2006", "2007", "2008", "2009", "2010", "2011",
                "2012", "2013"]);
            var chart = new dimple.chart(svg, csv);
            var frame = 2000;
            chart.addMeasureAxis("x", "GDP");
            chart.addMeasureAxis("y", "Population");
            chart.addSeries(["Country"], dimple.plot.bubble);
            var story = chart.setStoryboard("Year");
            story.frameDuration = frame;
            chart.draw();
        };
        post3();
    }
});

Stop using weird graphing libraries that will cost you an arm and a leg. Your browser (both desktop and mobile) can handle this kind of technology. Start using it now.

See DimpleJS for more examples and fun scenarios to work with. Don’t forget to also follow John Kiernander on Twitter.

As usual, the source is available on GitHub.


Categories: Blogs

Slow Cheetah is going in maintenance mode

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

Just a quick blog post to let you know that it has been announced that Slow Cheetah is going into maintenance mode. I don’t have alternatives or a scoop.

I’m just trying to get the word out as much as possible.

What is Slow Cheetah?

It’s a tool that applies XML transforms to files like App.config, similar to the built-in Web.config transforms (which will not be affected).

What does that mean for me?

It means that it won’t be supported in the next release of Visual Studio. No new features are going to be added. No fixes for future regressions are going to be applied.

What does it really mean?

Stop using it. It will still work for your current project but if you are expecting a quick migration when you upgrade Visual Studio, think again.

It might work but nothing is guaranteed.

What if I don’t want to change?

The code is open source. You can start maintaining it yourself, but Sayed won’t be doing any more work on it.

Categories: Blogs

NuGet–Upgrading your packages like a boss

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

How often do you get onto a project and, just to assess where things are… you open the “Manage NuGet Packages for Solution…” dialog and go to the Updates tab.

Then… you see this.


I mean… there’s everything in there. From JavaScript dependencies to ORMs. You know that you are in for a world of trouble.


You see the “Update All” button and it’s very tempting. However, you know you are applying all kinds of upgrades. This could be fine when starting a project, but when maintaining an existing one… you are literally pulling in new features and bug fixes for all those libraries.

A lot can go wrong.

Solution A: Update All a.k.a. Hulk Smash

So you said… screw it. My client and I will live with the consequences. You press Update All and… everything still compiles.

Congratulations! You are among the very few!

Usual case? Compile errors everywhere that you will need to fix ASAP before committing.

Worst case? Something breaks in production, and it takes us to this:


Solution B: Update safely a.k.a The Boss Approach

Alright… so you don’t want to go Hulk Smash on your libraries and on your code. And more importantly, you don’t want to be forced to wear the cowboy hat for a week.

So what is a developer to do in this case? You do it like a boss.

First, you open up “View > Other Windows > Package Manager Console”. Yes, it’s hidden, but it’s for the pros. The kings. People like you who don’t use a tank to kill a fly.

It will look like this:


What is this? This beauty is PowerShell. Yes. It’s awesome. There’s even a song about it.

So now that we have PowerShell… what can we do? Let me show you your scalpel, boss.

Update-Package is your best friend for this scenario. Here is what you are going to do:

Update-Package -Safe

That’s it.

What was done

This little “Safe” switch will only upgrade Revisions and will not touch Major and Minor versions. So to quote the documentation:

The `-Safe` flag constrains upgrades to only versions with the same Major and Minor version component.

That’s it. Now you can recompile your app, and most of your packages should have all the bug fixes for their current Major+Minor versions applied.
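To make the constraint concrete, here is a little sketch in JavaScript of the rule the -Safe switch applies. This is my own illustration of the versioning rule, not NuGet code: a candidate version is "safe" only when its Major and Minor components match the installed version.

```javascript
// Returns true when `candidate` keeps the same Major.Minor as `installed`,
// i.e. only the revision/patch component is allowed to change.
function isSafeUpgrade(installed, candidate) {
  var a = installed.split(".");
  var b = candidate.split(".");
  return a[0] === b[0] && a[1] === b[1];
}

// "1.4.2" -> "1.4.9" is safe; "1.4.2" -> "1.5.0" or "2.0.0" is not.
```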


If you want to read more about Semantic Versioning (which is what NuGet uses), go read Alexandre Brisebois’ post on it. Very informative and straight to the point.

Categories: Blogs

Adding color to your Javascript charts with Dimple and d3js (Part 2)

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44

So we started by doing some graphs from basic data. But having all the colors the same or maybe even showing bars is not enough.

Here are a few other tricks to make the graph a little bit nicer. Mind you, there is nothing revolutionary here… it’s all in the documentation. The point of this blog post is only to show you how easy it is to customize the look of your charts.

First thing first, here are the sources we are working with.

Showing lines instead of bars

Ahhh that is quite easy.

It’s actually as simple as changing the addSeries function parameter.

Here’s what the code looks like now:

var post2 = function() {
    // blog post #2 chart
    var svg = dimple.newSvg("#lineGraph", 800, 600);
    var chart = new dimple.chart(svg, csv);
    chart.addCategoryAxis("x", "Country");
    chart.addMeasureAxis("y", "Total");
    chart.addSeries(null, dimple.plot.line);
    chart.draw();
};

And the graph looks like this:


Simple enough?

Of course, this isn’t the type of data for lines so let’s go back to our first graph with bars and try to add colors.

Adding a color per country

So adding a color per country is about defining the series properly. In this case… on “Country”.

Changing the code isn’t too hard:

var post1 = function() {
    var svg = dimple.newSvg("#graphDestination", 800, 600);
    var chart = new dimple.chart(svg, csv);
    chart.addCategoryAxis("x", "Country");
    chart.addMeasureAxis("y", "Total");
    chart.addSeries("Country", dimple.plot.bar);
    chart.draw();
};

And here is how it looks now!


Much prettier!!

Next blog post, what about adding some legends? Special requests?

Categories: Blogs

Easy Charting in JavaScript with d3js and Dimple from CSV data

Decaying Code - Maxime Rouiller - Fri, 10/17/2014 - 05:44


Before I go further, let me give you a link to the source for this blog post available on Github

When we talk about doing charts, most people will think about Excel.

Excel does provide some very rich charting, but the problem is that you need a license for Excel. Second, you need to share a file that often has over 30 MB of data just to display a simple chart about your monthly sales or whatnot.

While it is a good way to explore your data, once you know what you want… you want to be able to share it easily. Then you use the first tool available to a Microsoft developer… SSRS.

But what if… you don’t need the huge machine that is SSRS but just want to display a simple graph in a web dashboard? It’s where simple charting with Javascript comes in.

So let’s start with d3js.

What is d3.js?

d3js is a JavaScript library for manipulating documents based on data. It will help you create the HTML, CSS and SVG that will allow you to better display your data.

However… it’s extremely low level. You will have to create your axis, your popup, your hover, your maps and what not.

But since it’s only a building block, other libraries exist that leverage d3js…


Dimple is a super simple charting library built on top of d3js. It’s what we’re going to use for this demo. But we need data…

Let’s start with a simple data set.

Sample problem: Medal per country for the 2010 Winter Olympics

Original data can be found here:

I’m going to just copy this into Excel (or Google Spreadsheets) to clean the data a bit. We’ll remove all the “Country of ” prefixes, which only pollute our data, as well as the Bins, which could be dynamic but are otherwise useless.
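If the CSV came from an API instead of a spreadsheet, the same cleanup could be done in code after parsing. A small sketch; the `cleanRows` helper and the row shape are my own assumptions for illustration:

```javascript
// Strip the noisy "Country of " prefix from each row's Country field.
function cleanRows(rows) {
  return rows.map(function (row) {
    return {
      Country: row.Country.replace(/^Country of /, ""),
      Total: row.Total
    };
  });
}
```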

The first step will be to start a simple MVC project so that we can leverage basic MVC minification, layouts and whatnot.

In our _Layout.cshtml, we’ll add the following thing to the “head”:

<script src=""></script>
<script src=""></script>

This will allow us to start charting almost right away!

Step one: Retrieving the csv data and parsing it

Here’s some code that will take a CSV that is on disk or generated by an API and parse it as an object.

$.ajax("/2010-winter-olympics.csv", {
    success: function(data) {
        var csv = d3.csv.parse(data);
        console.log(csv); // inspect the parsed rows
    }
});

This code is super simple and will display something along those lines:


Wow. So we are almost ready to go?

Step two: Using Dimple to create our chart.

As mentioned before, Dimple is a super simple tool to create charts. Let’s see how far we can go with the least amount of code.

Let’s add the following to our “success” handler:

var svg = dimple.newSvg("#chartContainer", 800, 600);
var chart = new dimple.chart(svg, csv);
chart.addCategoryAxis("x", "Country");
chart.addMeasureAxis("y", "Total");
chart.draw();

Once we refresh the page, it creates this:


Okay… not super pretty, lots of crappy data but… wow. We already have a minimum viable data source. To help us see it better… let’s clean the CSV file. We’ll remove all countries that didn’t win medals.

For our data set, that means from row 28 (Albania).

Let’s refresh.


And that’s it. We now have a super basic bar graph.


It is now super easy to create graphs in JavaScript. If you feel the need to create graphs for your users, you should consider using d3.js with a charting library that is readily available, like Dimple.

Do not use d3.js as a standalone way of creating graphs. You will find it harder than it needs to be.

If you want to know more about charting, please let me know on Twitter: @MaximRouiller

Categories: Blogs

New Feature: Product update list

Assembla - Fri, 10/17/2014 - 01:42

We have added a list of product changes and updates, here. You will see a link to the most recent product update on your start page. You can also see the list of updates in the notification center. We're continuously delivering changes and improvements. The update list is a much-requested feature that will help our users take advantage of those changes. Although we also post updates on our blog and in our "What's new at Assembla" newsletter, this very simple feature gives you a quick view of any new updates over the last 2 weeks right inside the app.





Categories: Companies
