
Feed aggregator

Community Update 2014-06-04 ASP.NET vNext, @CanadianJames MVC Bootstrap series and what we learned from C++

Decaying Code - Maxime Rouiller - Fri, 07/04/2014 - 22:28

So the big news is that Visual Studio "14" has reached its first CTP. Of course, "14" is not the final name; it's just a temporary placeholder.

If you want to install it, I suggest booting up a VM locally or on Windows Azure.

Enjoy!

Visual Studio “14”

Visual Studio "14" CTP Downloads (www.visualstudio.com)

Announcing web features in Visual Studio “14” CTP (blogs.msdn.com)

Visual Studio "14" CTP (blogs.msdn.com)

ASP.NET vNext in Visual Studio “14” CTP (blogs.msdn.com)

Morten Anderson - ASP.NET vNext is now in Visual Studio (www.mortenanderson.net)

James Chambers MVC/Bootstrap Series

Day 4: Making a Page Worth a Visit | They Call Me Mister James (jameschambers.com)

Web Development

To Node.js Or Not To Node.js | Haney Codes .NET (www.haneycodes.net)

ASP.NET

aburakab/ASP-MVC-Tooltip-Validation · GitHub (github.com) – Translate MVC errors to Bootstrap notification

Download Microsoft Anti-Cross Site Scripting Library V4.3 from Official Microsoft Download Center (www.microsoft.com)

ASP.NET Web API parameter binding part 1 - Understanding binding from URI (www.strathweb.com)

Cutting Edge - External Authentication with ASP.NET Identity (msdn.microsoft.com)

Forcing WebApi controllers to output JSON (blog.bjerner.dk)

Videos

What – if anything – have we learned from C++? (channel9.msdn.com)

Search Engine

Elasticsearch.org Elasticsearch 1.2.1 Released | Blog | Elasticsearch (www.elasticsearch.org)

Elasticsearch.org Marvel 1.2 Released | Blog | Elasticsearch (www.elasticsearch.org)

Dealing with human language (www.elasticsearch.org)

Categories: Blogs

Late Community Update 2014-06-02 REST API, Visual Studio Update 3, data indexing, Project Orleans and more

Decaying Code - Maxime Rouiller - Fri, 07/04/2014 - 22:28

So I was at the MVP Open Days and I’ve missed a few days. It seems that my fellow MVP James Chambers has started a great initiative about exploring Bootstrap and MVC with lots of tips and tricks. Do not miss out!

Otherwise, this is your classic “I’ve missed a few days so here are 20,000 interesting links that you must read” kind of day.

Enjoy!

Must Read

AppVeyor - A good continuous integration system is a joy to behold - Scott Hanselman (www.hanselman.com)

This URL shortener situation is officially out of control - Scott Hanselman (www.hanselman.com)

James Chambers Bootstrap and MVC series

Day 0: Boothstrapping Mvc for the Next 30 Days | They Call Me Mister James (jameschambers.com)

Day 1: The MVC 5 Starter Project | They Call Me Mister James (jameschambers.com)

Day 2: Examining the Solution Structure | They Call Me Mister James (jameschambers.com)

Day 3: Adding a Controller and View | They Call Me Mister James (jameschambers.com)

Web Development

How much RESTful is your API | Bruno Câmara (www.bfcamara.com)

Data-binding Revolutions with Object.observe() - HTML5 Rocks (www.html5rocks.com)

ASP.NET

ASP.NET Moving Parts: IBuilder (whereslou.com)

Supporting only JSON in ASP.NET Web API - the right way - StrathWeb (www.strathweb.com)

Shamir Charania: Hacky In Memory User Store for ASP.NET Identity 2.0 (www.shamirc.com)

.NET

Missing EF Feature Workarounds: Filters | Jimmy Bogard's Blog (lostechies.com)

Visual Studio/Team Foundation Server 2013 Update 3 CTP1 (VS 2013.3.1 if you wish) (blogs.msdn.com)

TWC9: Visual Studio 2013 Update 3 CTP 1, Code Map, Code Lens for Git and more... (channel9.msdn.com)

.NET 4.5 is an in-place replacement for .NET 4.0 - Rick Strahl's Web Log (weblog.west-wind.com)

ASP.NET - Topshelf and Katana: A Unified Web and Service Architecture (msdn.microsoft.com)

Windows Azure

Episode 142: Microsoft Research project Orleans simplify development of scalable cloud services (channel9.msdn.com)

Tool

JSON to CSV (konklone.io)

Search Engines

The Absolute Basics of Indexing Data | Java Code Geeks (www.javacodegeeks.com)

Categories: Blogs

Incompatibility between Nancy and Superscribe

Decaying Code - Maxime Rouiller - Fri, 07/04/2014 - 22:28

So I had the big idea of building an integration between Nancy and Superscribe and showing how to do it.

Sadly, this is not going to happen.

Nancy doesn’t treat routing as a first-class citizen the way Superscribe does, and it doesn’t allow interaction with routing middleware. Nancy ships with its own routing and will not let Superscribe supply it the URL.

Nancy does work with Superscribe but you have to hard-code the URL inside the NancyModule. So if you upgrade your Superscribe graph URL, Nancy will not respond on the new URL without you changing the hardcoded string.

I haven’t found a solution yet but if you do, please let me know!

Categories: Blogs

Configuring Superscribe to a self-hosted OWIN application

Decaying Code - Maxime Rouiller - Fri, 07/04/2014 - 22:28

We’ll start from my previous post with a single console application with a self-hosted OWIN instance.

The goal here is to provide a routing system so that we can split our application into different sections. I could use something like Web API, but there the routing and the application itself are tightly coupled.

I’m going to use a nice tool called Superscribe to handle the routing. It’s graph-based routing, but it should be simple enough for us to hook it up and create routes.
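To get a feel for what graph-based routing means before wiring up Superscribe, here is a conceptual sketch in Python. This is illustrative only, not Superscribe's actual API: the `Node`, `add_route` and `dispatch` names are mine. Each URL segment is a node in a graph, and dispatching a request walks the graph segment by segment until a final handler is reached.

```python
# Conceptual sketch of graph-based routing (not Superscribe's API):
# each URL segment is a node; matching a request walks the graph
# segment by segment until a handler ("final function") is found.

class Node:
    def __init__(self, handler=None):
        self.edges = {}       # segment -> child Node
        self.handler = handler

def add_route(root, path, handler):
    node = root
    for segment in [s for s in path.split("/") if s]:
        node = node.edges.setdefault(segment, Node())
    node.handler = handler

def dispatch(root, path):
    node = root
    for segment in [s for s in path.split("/") if s]:
        if segment not in node.edges:
            return "404"
        node = node.edges[segment]
    return node.handler() if node.handler else "404"

root = Node(handler=lambda: "Hello world")      # base "/" handler
add_route(root, "/Home", lambda: "This is the home page")

print(dispatch(root, "/"))      # Hello world
print(dispatch(root, "/Home"))  # This is the home page
print(dispatch(root, "/nope"))  # 404
```

Superscribe builds a richer version of this idea (with parameter nodes, pipelines and OWIN integration), but the walk-the-graph dispatch is the core concept.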

Installing Superscribe

Well, we’ll open up the Package Manager Console again and run the following command:

Install-Package Superscribe.Owin

This should install all the proper dependencies to have our routing going.

Modifying our Startup.cs to include Superscribe

First things first, let’s get rid of the silly WelcomePage we created in the previous post. Boom. Gone.

Let’s create some basic structure to handle our routes.

using Microsoft.Owin;
using Owin;
using Superscribe.Owin.Engine;
using Superscribe.Owin.Extensions;

[assembly: OwinStartup(typeof(MySelfHostedApplication.Startup))]

namespace MySelfHostedApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var routes = CreateRoutes();
            app.UseSuperscribeRouter(routes)
                .UseSuperscribeHandler(routes);
        }

        public IOwinRouteEngine CreateRoutes()
        {
            var routeEngine = OwinRouteEngineFactory.Create();
            return routeEngine;
        }
    }
}

This code creates all the necessary plumbing for Superscribe to handle our requests.

Creating our routes

We now have a route engine to work with, so let’s first create a handler for the default “/” URL.

We’ll also create a route for “/welcome” that uses the default WelcomePage we had earlier (just for demo purposes).

We’ll also create a route for “/Home” that will return a plain text (for the moment).

Here’s what it looks like:

using Microsoft.Owin;
using Owin;
using Superscribe.Models;
using Superscribe.Owin.Engine;
using Superscribe.Owin.Extensions;

[assembly: OwinStartup(typeof(MySelfHostedApplication.Startup))]

namespace MySelfHostedApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var routes = CreateRoutes();
            app.UseSuperscribeRouter(routes)
                .UseSuperscribeHandler(routes);
        }

        public IOwinRouteEngine CreateRoutes()
        {
            var routeEngine = OwinRouteEngineFactory.Create();
            routeEngine.Base.FinalFunctions.Add(new FinalFunction("GET", o => "Hello world"));
            routeEngine.Pipeline("welcome").UseWelcomePage();
            routeEngine.Pipeline("Home").Use((context, func) =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("This is the home page");
            });

            return routeEngine;
        }
    }
}

That was simple.

Basically, we built a simple pipeline that routes each request to an OWIN middleware.

Up next: what about NancyFX with Superscribe?

Categories: Blogs

Insurance & Technology: How Insurers Can Avoid “Black Swans” in Product Launches

Insurance & Technology, a US publication that “provides insurance business and technology executives with the targeted information and analysis they need to be more profitable, productive and competitive,” recently published a contributed article by Original Software CEO Colin Armitage. “We’ve all seen it happen: An IT project plagued with delays, changes and complications goes so […]
Categories: Companies

Predictive analytics poised to improve government operations

Kloctalk - Klocwork - Fri, 07/04/2014 - 15:00

Predictive analytics is, without a doubt, one of the most promising technologies to emerge in recent years. By leveraging these solutions, organizations in a wide range of fields can make better informed, more strategic decisions in just about every area of business.

This applies not only to the private sector, but the public as well. As FCW contributor Thom Rubel recently reported, predictive analytics is poised to deliver major improvements to government agencies. However, for this to occur, a concerted effort to embrace the technology is essential.

Predicting government
Rubel noted that there are a number of areas where governmental use of available data combined with predictive analytics could yield powerful results.

"For example, programs that are collectively designed to ensure the smooth flow of people and commerce are typically informed by multiple data sources generated by people or things (sensors, data networks, etc.)," Rubel wrote. "Predictive decision-making ensures that the right combinations of information come together based on business rules that optimize desired outcomes – think smooth traffic flows."

By embracing predictive analytics technology, the government could see its operational efficiency rise significantly.

Predictive challenges
However, as Rubel pointed out, there are serious challenges which must be overcome first. Put simply, the government needs to make progress in terms of making sense of its massive volume of available data and also ensure that the technologies used for predictive analytics can scale up and down as needed.

Part of the reason this is such a challenge is that, as a general rule, the government struggles to attract and retain the level of IT talent necessary to develop and implement such advanced technological solutions. Numerous reports have noted that up-and-coming IT experts typically veer toward the private sector because the incentives to join the government are just not competitive. Government agencies do not afford these personnel the level of freedom they require to innovate new solutions, including advanced analytics efforts. This makes it difficult for the government to take advantage of this and other technological progress.

Yet despite this and other obstacles, Rubel believes that the government will eventually utilize predictive analytics to a wide degree. Specifically, he forecast the Internet of Things will integrate with predictive analytics and government programs to deliver more sophisticated, effective governance. As the use of analytics for critical decision making grows, it becomes more important for organizations to rely on proven and robust algorithms to deliver results that can be trusted.

Learn more:
• Read this white paper to learn how analytics are used by different industries to create competitive advantages (PDF)
• Answer the question of which is costlier – building your own algorithms or buying them?

Categories: Companies

A Hackathon for Testers – A Testathon

The Social Tester - Fri, 07/04/2014 - 14:29
I’m a big fan of Hackathons and ShipIt days as a mechanism for learning, but also as an opportunity to bring people together to share ideas. That’s why I’m liking the look of this: a Testathon. The next one is at Spotify’s head office in Sweden. More details on their website.
Categories: Blogs

Anti-Pattern: Fixing Configuration “As-Broken”

IBM UrbanCode - Release And Deploy - Thu, 07/03/2014 - 23:30

In the webinar Death to Manual Deployments we highlight a common problem in enterprise IT: configuration updates to middleware and applications are made on an “as-broken” basis. A developer changes the application so that it needs a configuration tweak, which she makes on her own laptop. When the code is submitted a few days later, the first test environment starts showing errors. After a defect is raised, the developer informs QA of the change they need to make, and it’s made. A week later, the application is promoted to another testing environment where it fails, defects are raised, and eventually someone remembers to make the fix. Hopefully, someone gets this added to the release plan before the production release, but outage windows being extended because of this pattern is not unheard of.

The basic strategy for fixing this is to drive the change into the release process as quickly as possible. Ideally, the only way to make these kinds of configuration changes in any of your testing environments is through your deployment automation tool, such as UrbanCode Deploy. This forces the change to be captured and makes it easy to bind the configuration change to the application change that requires it. Otherwise, the policy should be that no configuration change is permitted unless it is shown to be captured in the release plan. UrbanCode Release has a nice way of capturing these kinds of manual changes. Developers and testers are generally given access to the lower-environment deployment plan for the release they are working on. Either the original programmer or the first tester to find the problem updates the plan with the instructions for making the configuration change. The new task is flagged to run only in environments that haven’t had the change applied yet, and it is suggested as a task to add to the production release plan. Easy to capture and easy to manage.


Categories: Companies

What's new in ApprovalTests.Net v3.7

Approval Tests - Thu, 07/03/2014 - 22:56
[Available on Nuget]
AsyncApprovals - rules and exceptions [Contributors: James Counts]

In the end, all tests become synchronous. This means that for a normal test we recommend the standard synchronous approach. However, if you are looking to test exceptions, everything changes and you might want to use a different approach.

Removed BCL requirement [Contributors: James Counts & Simon Cropp]

HttpClient is a nice way of doing web calls in .NET. Unfortunately, at this time the BCL package on NuGet does unfortunate things to your project if you do not wish to use HttpClient. This is a violation of a core philosophy of ApprovalTests: "only pay for the dependencies you use." HttpClient was added in ApprovalTests 3.6; thanks to Simon for pointing out and troubleshooting this error. It has now been removed.

Wpf Binding Asserts [Contributors: Jay Bazuzi]

This is a bonus from v3.6. It is very hard to detect and report WPF binding errors; to even get the reports to happen, you have to fiddle with the registry and then read and parse logs. No more! Now you can use BindsWithoutError to ensure that your WPF bindings are working.

Categories: Open Source

Automation in Testing the Subject of Latest Engaging STP Podcast

uTest - Thu, 07/03/2014 - 20:20

uTest has always had a strong relationship with the Software Test Professionals (STP) community as attendees and sponsors of STP’s twice-a-year STPCon conferences in the US, some of the largest shows in the testing industry.

This week, STP brings us pre-recorded testing fun in the form of a podcast. Testing expert Richard Bradshaw talks with STP on the subject of automation in testing. Specifically, Richard gets into where automation comes into play for a manual tester, how managers can build successful teams composed of developers and both manual and automated testers, and how to keep everything running smoothly.

Check out the full audio of the great STP interview below.

Categories: Companies

Jenkins User Event & Code Camp 2014, Copenhagen

This is a guest post from Adam Henriques.

On August 22nd Jenkins CI enthusiasts will gather in Copenhagen, Denmark for the 3rd consecutive year for a day of networking and knowledge sharing. Over the past two years the event has grown and this year we are expecting a record number of participants representing Jenkins CI experts, enthusiasts, and users from all over the world.

The Jenkins CI User Event Copenhagen has become the cynosure for the Scandinavian Jenkins community: a place to come together, share new ideas, network, and draw inspiration from peers. The program offers invited as well as contributed talks, tech talks, case stories, and facilitated Open Space discussions on best practices and the application of continuous integration and agile development with Jenkins.

The Jenkins CI Code Camp 2014

The Jenkins CI User Event will be kicked off by the Jenkins CI Code Camp on August 21st, the day before the User Event. Featuring Jenkins frontrunners, this full-day, community-driven event has become very popular: Jenkins peers band together to contribute content back to the community. The intended audience is both experienced Jenkins developers and developers who are looking to get started with Jenkins plugin development.

For more information please visit the Jenkins CI User Event 2014, Copenhagen website.

Categories: Open Source

Applause Collaborates on Mobile Application Quality Solutions With IBM

SQA Zone - Thu, 07/03/2014 - 18:25
Applause has announced an ongoing technology collaboration with global solutions leader, IBM. Through this teaming, the two firms have co-developed mobile app quality solutions that enable companies to improve app quality and delight their mobile ...
Categories: Communities

JUC Berlin summary


After a very successful JUC Boston we headed over to Berlin for JUC Berlin. I've heard the attendance number was comparable to that of JUC Boston, with close to 400 people registered and 350+ people who came.

The event kicked off at a pre-conference beer garden meetup, except it turned out that the venue was closed on that day and we had to make an emergency switch to another nearby place, and missed some people during that fiasco. My apologies for that.

But the level of the talks during the day more than made up for my failing. They covered everything from large user use cases from BMW to Android builds, continuous delivery to Docker, then of course workflow!

One of the key attractions of events like this is actually meeting the people you interact with. All the usual suspects of the community were there, including some whom I met for the first time.

If you missed the event, most of the slides are up, and I believe the video recordings will be uploaded shortly.

Categories: Open Source

Applause and IBM Collaborate on Mobile Software Testing

Software Testing Magazine - Thu, 07/03/2014 - 18:12
Applause has announced an ongoing technology collaboration with IBM. Through this teaming, the two firms have co-developed mobile app quality solutions that enable companies to improve app quality and delight their mobile users. These offerings take the form of both on-premise and cloud-based solutions. Applause worked closely with IBM’s MobileFirst and Rational Software technology teams to develop solutions that help companies better achieve mobile app quality that aligns with users’ perspectives. Applause and IBM will also work together on thought leadership activities to advance the market’s knowledge around the value of ...
Categories: Communities

Pictures from JUC and cdSummit

I've uploaded pictures I've taken during JUC Boston and JUC Berlin.

The JUC Berlin pictures start with the pre-conference beer garden meet-up. You can see Vincent Latombe giving a talk about the Literate plugin. I really appreciated his coming to this despite the fact that the event was only a few days before his wedding:

In the JUC Boston pictures, you can see some nice Jenkins lighting effects, as well as my fellow colleague Corey Phelan using the World Cup to lure attendees into a booth:


Pictures from the cdSummits are also available here and here.

If you have taken pictures, please share them in a comment here so that others can see them.

Categories: Open Source

Jenkins Office Hours: dotCi

Surya walked us through the dotCi source code yesterday, and we discussed a bunch of ideas about how to reuse its pieces. The recording is on YouTube, and my notes are here.

Categories: Open Source

Throwback Thursday: The Palm V

uTest - Thu, 07/03/2014 - 16:19

Today’s tech Throwback Thursday pays homage to the days when PDA meant Personal Digital Assistant. The long-time leader in that category was the PalmPilot series of PDAs by Palm Computing, a division of 3Com. By the time I got my hands on one of these amazing machines, it was known as the Palm V. You could email, access your calendar and contacts, and organize your to-do lists all in the palm of your hand.
Released in 1999, the Palm V was the first of its kind to have a built-in rechargeable battery. Among other hardware specs:

Processor: Motorola Dragonball EZ MC68EZ328
User Storage Memory: 2MB
Display: 160×160 pixel, high contrast and backlit
Size and weight: 4.5″ x 3.1″ x 0.4″, 4oz

According to this great review from The Gadgeteer, “the display on the Palm V is a marvel to behold” and “the Palm V is downright sexy with its sleek metal body.” Maybe we were more easily amused back then, but I don’t remember ever thinking that 160×160 pixels was a marvel to behold. (Then again, I used to think the Iomega Zip 100 disk was pretty sweet so my judgement for tech marvels isn’t 100%.)

What I remember most about the Palm V was the need to use a stylus to input any information into the PDA. If you lost that stylus, you were out of luck. That, and you had to learn a new shorthand way of writing called Graffiti. We were all such newbies at Graffiti that Palm actually included a cheat sheet when you bought a new PDA.

Graffiti was kind of like cursive, only weirder. As you can see from the reference card to the right, the dot indicated the starting point of your letter or number. Getting tripped up on “L” and “4” was a common occurrence, as were completely misspelled words that you couldn’t recognize when you tried to read your hastily scribbled meeting notes later.

Sadly, the lineage of Palm devices came to an end in 2011, after Hewlett-Packard acquired the brand a year earlier.

Did you have a PalmPilot or other brand of PDA? Tell us what you miss (or don’t miss) about it in the comments below.

Categories: Companies

An Impressive Show at Geekout

The Kalistick Blog - Thu, 07/03/2014 - 15:58

Another sold-out year at the 2014 Geekout event in Tallinn saw Coverity exhibit its technology to a number of companies in the Java development space and ask a key question: are you happy with your open source analysis tools? Clearly the likes of FindBugs, Sonar and PMD are widely used across the Java community. However, after a number of product demonstrations with the Geekout attendees, some common themes started to appear. Open source clearly lacked the depth of analysis that the Coverity platform was able to provide, and the Java gurus seemed impressed with the low-level defects we exposed. Also, Coverity's ability to surface security-related OWASP results exposed a huge gap in the open source tools: the Coverity platform delivers security and quality defects right into the heart of development, with developer remediation advice. For more information, and to get your own code analyzed, visit the Coverity website.

The post An Impressive Show at Geekout appeared first on Software Testing Blog.

Categories: Companies

Best Practices – Code Coverage Metrics

NCover - Code Coverage for .NET Developers - Thu, 07/03/2014 - 15:38

This is part one in a four-part Best Practices For .NET Code Coverage webinar series focused on using code coverage metrics to guide development efforts and improve overall code quality. We explain why selecting the right combination of metrics to measure the effectiveness of your testing strategies and the quality of your code base is a first, but extremely crucial, step in using code coverage within your organization.

In addition to explaining three important code coverage metrics (Branch Coverage, Sequence Point Coverage and Change Risk Anti-Patterns), we discuss our recommended best practices for using them as part of a core set of metrics, whose trends are monitored over time, to ensure that you have the best insights possible into whether your code, your testing strategies and, ultimately, your entire application will perform as expected.

Best Practices For .NET Code Coverage

 

Code Coverage Metrics

 

Selecting the right combination of metrics you use to measure the effectiveness of your testing strategies and the quality of your code base is a crucial first step in using code coverage within your organization.

 

For instance, imagine you are driving a sports car on the highway.  You have a variety of metrics being presented to you.  The speedometer indicates speed, the tachometer indicates RPMs, your oil pressure gauge indicates oil pressure and the volume indicator on your stereo indicates how loud your favorite song is playing.  They are all metrics and they are all useful.  However, if you choose to focus on the volume indicator to determine how fast you should drive to the office, local law enforcement might give you an out-of-bounds exception in the form of a speeding ticket.

 

So it is pretty obvious in that scenario that volume is an irrelevant metric for guiding your speed.

 

But the same concept applies in code coverage.  For instance, in .NET development, let’s say you pick method coverage (a metric that can be calculated, but isn’t particularly information-rich) to guide your quality efforts.  Well, you may quickly find out that you can, either intentionally or unintentionally, produce some pretty amazing metrics and simultaneously release some pretty buggy code.

 

So what is the one metric?  Well, let’s go back to the car example.  If you are traveling down the road, your speedometer is something you keep a pretty close eye on.  It’s a reliable metric that provides useful information.  It lets you know if you are staying within the thresholds for speed.  However, if your tachometer spikes or your oil pressure gauge drops, all before your speedometer reflects a change in speed, it’s probably a good indication that you may have a larger underlying problem.

 

It’s not that different with code coverage.  Code coverage is about using a core set of metrics, and monitoring their trends over time, to ensure that you have the best insights possible for ensuring that your code, your testing strategies and ultimately your entire application, will perform as expected.

 

So here are three of the core code coverage metrics that are key to measuring the health of your code base.

 

The first metric is Branch Coverage.  Branch Coverage is the percentage of individual code segments covered during the testing of the application, where each branch is a segment of code that has exactly one entry point and one exit point.  For example, if you had a simple if/else statement, one branch would be taken if the condition was met and the other branch if it was not.
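The if/else example can be made concrete with a small, hypothetical sketch. This is not how NCover actually instruments code; the branch names and the `hits` tracking set are illustrative only, but they show how branch coverage climbs as tests exercise both sides of the condition:

```python
# Hypothetical illustration of branch coverage: record which
# branches of a simple if/else have been exercised, then report
# the percentage of all known branches that were hit.
BRANCHES = {"non-negative", "negative"}  # one entry, one exit each

def classify(n, hits):
    if n >= 0:
        hits.add("non-negative")   # the "condition met" branch
        return "non-negative"
    else:
        hits.add("negative")       # the "condition not met" branch
        return "negative"

def branch_coverage(hits):
    return 100.0 * len(hits & BRANCHES) / len(BRANCHES)

hits = set()
classify(5, hits)
print(branch_coverage(hits))   # 50.0 -- only the "if" branch so far
classify(-3, hits)
print(branch_coverage(hits))   # 100.0 -- both branches covered
```

A test suite that only ever passes non-negative inputs would report 50% branch coverage here, which is exactly the kind of gap this metric is designed to expose.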

 

In .NET code coverage, we would think of Branch Coverage as a key ongoing metric for measuring how successful our testing strategies are and a good indicator as to the overall quality of our code base.  If we think back to the car example, it could be considered our speedometer and that we need to keep a close eye on it to ensure we are meeting our overall objective.

 

The next metric we would use as a best practice for measuring the health of our code base is Sequence Point Coverage, which is the percentage of sequence points covered during the testing of the application.  When you view, from within NCover, the source code of a code base that has coverage data, sequence points are represented by individual dots and diamonds displayed next to your code.

 

We view sequence point coverage as one of the key supporting metrics for drilling in and finding the exact points within our code that we still need to test to maintain our overall Branch Coverage.  Although it is not the most useful metric for measuring the overall success of your testing strategies, it is incredibly useful for finding the exact deficiencies within those strategies.

 

The final key metric we use as part of our best practices for measuring the health of our code base is the Change Risk Anti-Patterns score, which scores the amount of uncovered code against the complexity of that code.  In simple terms, it is a calculation that indicates the riskiness of the code you have yet to test, where risk is represented as complexity.  The higher the complexity, the higher the risk that the involved code has unintended consequences.

 

The Change Risk Anti-Patterns score is incredibly useful at helping you direct your attention and resources to the portions of code that have the greatest probability of impacting the overall health of your code base.
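The post doesn't give NCover's exact formula, but a widely cited formulation of a CRAP-style score combines cyclomatic complexity and coverage as complexity^2 * (1 - coverage)^3 + complexity. Here is a minimal Python sketch assuming that formulation; NCover's actual calculation may differ:

```python
def crap_score(complexity, coverage_pct):
    """Widely cited CRAP-style formula (assumed, not NCover's own):
    complexity^2 * (1 - coverage)^3 + complexity,
    with coverage converted from a percentage to a fraction."""
    cov = coverage_pct / 100.0
    return complexity ** 2 * (1.0 - cov) ** 3 + complexity

# Complex, untested code dominates the score; full coverage
# reduces the score to the complexity itself.
print(crap_score(10, 0))              # 110.0
print(crap_score(10, 100))            # 10.0
print(round(crap_score(10, 80), 2))   # 10.8
```

Note how the score is driven almost entirely by uncovered complex code: the same method at 0% coverage scores eleven times worse than at 100%, which is why this metric is so good at pointing your attention at the riskiest untested spots.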

 

Understanding code coverage metrics is only part of the equation when it comes to putting .NET code coverage best practices to work within your development or QA team.  Visit us online to learn more, request a free trial or speak with a member of the NCover team for suggestions specific to your situation.




The post Best Practices – Code Coverage Metrics appeared first on NCover.

Categories: Companies
