

Creating a simple ASP.NET 5 Markdown TagHelper

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

I've been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I've created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code
using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);

            // Render the converted HTML instead of the <markdown> tag itself.
            output.Content.SetContent(new HtmlString(html));
        }
    }
}
Inspecting the code

Let's start with the HtmlTargetElementAttribute. This wires the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from having more than one target.

You could, for example, also target the element <md></md> by adding [HtmlTargetElement("md")], and the helper would support both tags without any other changes.
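As a small sketch of what that looks like on the class from above (the "md" name is just an example):

[HtmlTargetElement("markdown")]
[HtmlTargetElement("md")]   // second target; both <markdown> and <md> now map to this helper
public class MarkdownTagHelper : TagHelper
{
    // ...same implementation as above
}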

The Content property will allow you to write code like this:

@model MyClass

<markdown content="@ViewData["markdown"]"></markdown>    
<markdown content="Markdown"></markdown>    

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing will force the HTML to use a self-closing tag rather than having content inside (which we're not going to use anyway). So now we have this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML rather than encoded text. Setting output.TagName to null just makes sure that we do not render the <markdown> tag itself.

And... that's it. Our code is complete.

Activating it

Now, you can't just go and create TagHelpers and have them picked up automatically without wiring up one thing.

In your ASP.NET 5 projects, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your own assembly name.
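Assuming the helper above is compiled into an assembly named My.TagHelpers (matching its namespace; adjust to your project's actual assembly name), the file ends up looking something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, My.TagHelpers"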

Then in your Razor code you can have the code below:

public class MyClass
{
    public string Markdown { get; set; }
}

@model MyClass
@{
    ViewData["Title"] = "About";
}

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now whether you load your markdown from files, a database or anywhere else... you can have your users write rich text in any text box and have your application generate safe HTML.

Components used: CommonMark.NET (the CommonMark implementation referenced above)
Categories: Blogs

Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course the simpler examples like blogs could literally all be static. If you need comments, then you could go with a system like Disqus. This is quite literally one of the only parts of your system that is dynamic.

RSS feed? Generated from posts. Posts themselves? Could be automatically generated from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot. Products. OK, they may change but do you need to have your site updated right this second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be “approved” anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it’s done.

There’s 3 things that I see that would require to be dynamic in this scenario.

Search, Checkout and Reviews. Search, because as your product catalogue scales up, so does your data; doing the search client-side won’t scale at any level. Checkout, because we are now handling an actual order and it needs a server component. Reviews, because we’ll need to approve and publish them.

In this scenario, only Search is an actual “read” component that stays server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, it can still land on a static page.

All the other write components? Queued server side to be processed by the business itself with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).


So… do we need a dynamic front-end built with the latest server framework? On the public-facing side too, or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.

Categories: Blogs

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

Have you read about WebComponents? It sounds like something we have all been trying to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates


This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Hence, this one is safe.

Custom Elements


This specification is for review and not for implementation!

Alright, no. Let's not touch this yet.

Shadow DOM


This specification is for review and not for implementation!

Wow. Okay so this is out of the window too.

HTML Imports


This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome already has all the specs implemented.

Firefox has implemented them but put them behind a flag (about:config, search for the property dom.webcomponents.enabled)

In Internet Explorer, they are all Under Consideration

What that tells us

Google is pushing for a standard. Hard. They built the specs, and they are pushing them very hard since all of this is available in Chrome STABLE right now. No other vendor has contributed to the specs themselves. Polymer is also a project built around WebComponents and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know it's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past; that's not something to stop us.

Second, the current specifications are being implemented in Chrome as if they were recommended by the W3C (they are not), which may lead to changes in the specification that render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as "production" is concerned, do not use WebComponents directly. Also, avoid Polymer, as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, such as X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If there is none defined in your web.config, it will use a common one. If the key used to encrypt isn't the same as the one used to decrypt?

Boom goes the dynamite.
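If you do want the cookie to survive across the projects and machines that have to share it, one option is to pin an explicit machine key in web.config. A minimal sketch with placeholder values (generate your own keys; never copy keys from a blog post):

<system.web>
  <!-- placeholder values: generate real keys for your environment -->
  <machineKey validationKey="PLACEHOLDER-VALIDATION-KEY"
              decryptionKey="PLACEHOLDER-DECRYPTION-KEY"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>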

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, along with the exact explanation as to why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, they have their own serverFarm setting, and it does not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
    if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
    {
        $plan = (Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan
        Write-Host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name
    }
    elseif ($item.ResourceType -Match "Microsoft.Web/sites")
    {
        $plan = (Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan
        Write-Host "WebHostingPlan " $plan " under site " $item.Name
    }
}
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

So I had to make some changes to the App Service Plan for one of my clients. The first thing I looked for was a way to do it in the portal. A few clicks and I'm done!

But before I get into why I need to move one of them, I'll need to tell you about why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of WebApps deployed left and right in different "Default" ServicePlans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and especially the sizes we deployed on. So we came up with a list of the different hosting plans we needed, the list of apps that would need to be moved, and which hosting plan each was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.


PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code, or rather, The Script

What follows is what I used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic but since I was working with apps in EastUS only, well... I didn't need more.
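For illustration only (the app and plan names below are made up), the CSV looks something like this:

App,ServicePlanSource,ServicePlanDestination
mywebapp-prod,Default1,StandardPlanEastUS
mywebapp-api,Default2,StandardPlanEastUS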

This script should be considered as "Works on my machine". Haven't tested all the edge cases.


Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

# $filename points to the CSV described above
$allAppsToMigrate = Import-Csv $filename
foreach ($app in $allAppsToMigrate)
{
    if ($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest }
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While they were working to fix the issues, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

set GIT_SSL_NO_VERIFY=true
This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
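If you want to keep the override even more contained, a small PowerShell sketch (my own habit rather than anything official) is to set the variable, run the one command that needs it, and remove it right away:

$env:GIT_SSL_NO_VERIFY = "true"      # only while the certificate is being fixed
git pull                             # the one command that needs it
Remove-Item Env:\GIT_SSL_NO_VERIFY   # drop it again as soon as you're done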

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.


Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - 5 hours 44 min ago

So this will be a short post. I would like to introduce a word into my vocabulary, and yours too, if it isn't there already.

I would like to credit Nathan Smith for teaching me that word this morning. First, the tweet:

Chuckling at "disallowYodaConditions" in JSCS… — Awesome way of describing it.

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the parameters compared in a conditional".

Let's say I have this code:

string sky = "blue";
if (sky == "blue") {
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";
if ("blue" == sky) {
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your code, it will fail at compile time since you can't assign a value to a string literal. It can also avoid certain null reference errors.
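A minimal C# sketch of the null-reference benefit (the variable and values are purely illustrative):

string sky = null;   // e.g. not yet initialised

if (sky.Equals("blue")) { /* ... */ }   // throws NullReferenceException: sky is null
if ("blue".Equals(sky)) { /* ... */ }   // Yoda style: safely evaluates to false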

What's the cost of doing this then?

Besides getting on the nerves of all the programmers in your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all cost. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you are losing.

Categories: Blogs

Performance Testing with a Raspberry Pi & Java

Testing TV - Mon, 11/30/2015 - 21:03
Learn how a large number of cheap Raspberry Pi computers running Java can be combined into a powerful load testing engine for networking applications and how this tool has been used in the real world. The Raspberry is fun, and with Java it shines.
Categories: Blogs

Happy Not to Know

Hiccupps - James Thomas - Sun, 11/29/2015 - 13:25

The current issue of The Guardian's Weekend magazine includes the transcription of a conversation between the authors Marlon James and Jeanette Winterson. This extract struck a chord:
MJ: What I find, particularly with young writers and readers, is that they don’t want complicated feelings.

JW: But they’re young. And I feel sympathy with that. I’m happy to not know what I think about stuff; I’m happy to change my mind. But it’s relatively recently that I’ve been able to apply that to feelings. I used to like to know what I felt. I didn’t want those feelings to be complicated or muddled or clashing.

I've been young (yeah, really, I had hair and everything) and I feel like these days I've made the transition they're talking about in writing, reading, feelings, work and life.

And while I see that I also referred to age when I wrote about something similar a couple of years ago, I don't believe the opposition here need be about that, although the experience and empirical evidence that can help to usher in the realisation and acceptance necessarily takes time.

For me, now, the key is recognising and processing uncertainty and finding productive ways to operate in the face of it. And that's an intellectual exercise, not a birthday present.
Categories: Blogs

Oracles from the Inside Out, Part 8: Successful Stumbling

DevelopSense Blog - Fri, 11/27/2015 - 02:18
When we’re building a product, despite everyone’s good intentions, we’re never really clear about what we’re building until we try to build some of it, and then study what we’ve built. Even after that, we’re never sure, so to reduce risk, we must keep studying. For economy, let’s group the processes associated with that study—review, […]
Categories: Blogs

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 11/26/2015 - 09:07
Yesterday's Lean Coffee was hosted by Jagex.  Here's a brief note on the topics that made it to discussion in the group that I was in.

Automated testing.
  • A big topic but mostly restricted this time to the question of screenshot comparison for web testing.
  • Experience reports say it's fragile.
  • Understanding what you want to achieve with it is crucial because maintenance costs will likely be high.
  • Looking to test at the lowest level possible, for the smallest testable element possible, can probably reduce the number of screenshots you will want to take.
  • For example, to check that a background image is visible for a page, you might check at a lower level that the image is served and assume that browsers are reliable enough to render it rather than taking a screenshot of the whole page which includes much more than the simple background image.
Why go to a testing conference?
  • It builds your confidence as a tester to find that other people think similar things, make similar decisions, have similar solutions.
  • It also reassures you when other people have similar problems.
  • You are exposed in a short space of time, in a sympathetic environment, to new ideas or new perspectives on old ideas.
  • You can meet people that you've only previously followed or tweeted at, and deepen the connection with them. "Testing royalty" is accessible!
  • When you come back, sharing what you found can clarify it for you and hopefully make positive changes to the way you work.
Strategies for compatibility testing.
  • Experience reports say that there's reasonable success with online services - to avoid having a stack of devices in-house - although not when high data throughput is required.
  • Reduce the permutations with a risk analysis.
  • Reduce the permutations by taking guidance from the business. What is important to your context, your customers?
How do you know which automated tests to remove?
  • Some tests have been running for years and never failed. This is wasting time and resource. 
  • Perhaps you shouldn't remove them if the impact of them failing is considered too serious.
  • Perhaps there's other ways to save time and resource. Do you even need to save this time and resource?
  • Can you run them differently? e.g. prioritise each test and run higher priority tests with greater frequency?
  • Can you run them differently? e.g. run only those that could be affected by a code change?
  • Can you run them differently? e.g. use randomisation to run subsets and build coverage over time?
  • Can you run them differently? e.g. run every suite frequently, but some configurations less frequently?
  • Chris George has a good talk on legacy tests.
Why isn't testing easier?
  • We've been testing software for decades now. Why hasn't it got easy?
  • It's bespoke to each solution.
  • Teams often want to reinvent the wheel (and all the mistakes that go into invention.)
  • You can't test everything.
  • Complexity of the inputs and deployment contexts increases at least as fast as advances in testing.
  • Systems are so interconnected these days, and pull in dependencies from all over the place.
  • People don't like to change and so get stuck with out of date ideas about testing that don't fit the current context.

EDIT: Karo has written about the group she was in too.
Categories: Blogs

A Guided Read of Minitest

Testing TV - Tue, 11/24/2015 - 20:13
Minitest is a testing library of just 1,500 lines of Ruby code. By comparison, Rspec clocks in at nearly 15,000! Why is Minitest so small? I’d like to investigate by doing a guided read of Minitest’s source code.
Categories: Blogs

Means Testing

Hiccupps - James Thomas - Sun, 11/22/2015 - 07:40

I've found it interesting to read the recent flurry of thought on the testing/checking question, thoughts referring back to Testing and Checking Refined, by James Bach and Michael Bolton, in which the following definitions for the terms are offered:

  • Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modelling, observation, inference, etc.
  • Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

The definitions are accompanied by a glossary which attempts to clarify some of the other terms used. These chaps really do sweat the semantics but if I could be so presumptuous as to gloss the distinction I might say: testing is an activity which requires thought; checking is simply a rigid comparison of expectation to observation.

There has been much misunderstanding about the relationship between the two terms, not least that they are often seen in opposition to one another while at the same time testing is said to include checking, as in Why the testing/checking debate is so messy – a fruit salad analogy, where Joep Schuurkes laments:

What we’re left with are only two concepts, "testing" and "checking", and the non-checking part of testing is gone.

In Exploring, Testing, Checking, and the mental model, Patrick Prill proposes that

When a human is performing a check, she is able to evaluate many assertions, that often are not encoded in the explicit check.

For me, by the definitions above, this means that the human is not simply performing a check. The check is narrow while the human perspective can be broad (and deep, and lateral, ...) which, for Bolton and Bach, as I interpret it, means that this is testing. I think that this is also what Anders Dinsen is getting at in Why the dichotomy of testing versus checking is the core of our craft:

As a tester, I carry out checks when I test, but when I do, the checks I am doing are elements in the testing and the whole activity is testing, not checking.

One thing that I find missing from the conversation that I've seen and heard is the notion of intent. I'm wondering whether it's useful to think about some action as a check or test depending on the way that the result is expected to be, or actually, used in the context of the person who is doing the interpretation.

Here's an example: in Migrate Idea I talked about some scripts that I designed to help me to gain confidence in the migration and upgrade of a service across multiple servers. The scripts consisted of a series of API requests followed by assertions, each of which had a very narrow scope. Essentially they were looking to see that some particular question gave particular answers against the pre- and post-migrated service on a particular server.

At face value, they appear to be checks, and indeed I used them like that when evaluating the migration server-by-server. After putting in the (testing) effort to craft the code, I expended little effort interpreting the results beyond accepting the result they gave (all clear vs some problem).

However, by aggregating the results across servers, and across runs, I found additional value, and this is something that I would happily call testing by the definitions at the top. So perhaps these scripts are not intrinsically a check or a test; they simply gather data.

Could it be that what is done with that data can help to classify the actions as (part of) a check or test? At different times, I conjecture that the same instance of some action can be checking, testing, both or neither.  If I don't exercise thought, that (at best) means checking. If I do, that means testing.
Categories: Blogs

Structure of Tests-As-Specifications

Sustainable Test-Driven Development - Wed, 11/18/2015 - 23:16
A big part of our thesis is that TDD is not really a testing activity, but rather a specifying activity that generates tests as a very useful side effect. For TDD to be a sustainable process, it is important to understand the various implications of this distinction. [1] Here, we will discuss the way our tests are structured when we seek to use them as
Categories: Blogs

New, new perspectives (EuroSTAR 2015 Lightning Talk)

Thoughts from The Test Eye - Wed, 11/18/2015 - 19:52

I believe one of the most important traits of testers is that we bring new perspectives, new ideas about what is relevant.
I probably believe this from my experiences from the first development team I joined, so I will tell you about the future by telling an old story.

This was in Gothenburg, 15 years ago, and we developed a pretty cool product for interactive data analysis. Data visualization, data filtering and calculations, and we could even use the product on our own bug system. The team consisted of quite young men and women who had all gone to Chalmers, the technical university in Gothenburg.
They had taken the same lectures, they had done the same exercises.
They collaborated well, using the modelling tools and the thinking patterns they had learnt in school.
They weren’t exactly the same of course, they had different haircuts, personalities, specialities, but all in all, they had roughly the same ideas about how to design and develop products.

I was the first tester on their team, and I had not gone to Chalmers. At university I read philosophy, musical science, history of ideas and practical Swedish, and rather stumbled into testing because I wanted to be a programmer. I did not think at all like the rest of the team, and that was the good part!
I saw perspectives they didn’t see, my set of mental models contained other elements than theirs.
So when they agreed a solution was perfect, I asked “but what if this box isn’t there?” or “can we really know that the data is this clean?” or “what if the user tries this?” or “isn’t this too different from this other part of the product I looked at yesterday?” or “how on earth should I test this?” or “how useful is this really?”
They were a great team, and they used my perspectives to make the product better.
I felt valuable, and maybe that’s when I started loving testing (well, maybe earlier, I have always enjoyed finding big bugs, and I will always love it. But that’s more a kind of arousal, I am talking about a deeper love, when you feel that you provide value others can’t.)

So when we get to 2030, a lot of things will be the same, and a lot of things will be different. There will definitely be a need for people carefully examining software, and bringing new perspectives, and new questions. A richer set of mental models is needed, regardless of whether we are called testers or something else.
But it will be new, new perspectives, and you should look out for these, and use them.
You should learn stuff, you should test software appropriately, you should embrace new situations and perspectives, and you will be ready in 2030.
I hope I will too.

Categories: Blogs

Exploratory Testing for Complex Software

Testing TV - Tue, 11/17/2015 - 17:35
In modern software development organizations, the days are gone when separate, independent Quality Assurance departments test software only after it is finished. Iterative development and agile methods mean that software is constantly being created, tested, released, marketed, and used in short, tight cycles. An important testing approach in such an environment is called Exploratory Testing, […]
Categories: Blogs

Faking it, FFF and Variadic Macro Madness

James Grenning’s Blog - Sun, 11/15/2015 - 21:50

I spent the day updating the Fake Function Framework, Mike Long’s (@meekrosoft) creation. With my client last week, one of the engineers had some problems with Visual Studio (don’t we all) and the Fake Function Framework (a.k.a. FFF). I promised to integrate the changes into the FFF. The FFF is generated with a Ruby script, but you don’t have to care about that unless you have functions to fake that have more than 10 parameters. But anyway, I spent the day updating and reviewing the Ruby script that generates the FFF.

FFF lets you make C test stubs using C macros. It can make it easier to do the right thing while you are programming. By the ‘right thing’, I mean make sure you know what your code is doing and that means automating unit tests for your code. Off the soapbox and back to the FFF…

If you have code that uses pthread (or some other problematic dependency), FFF lets you create a stub very quickly. First, look at the signature for the function you want to stub, in this case pthread_create:

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                          void *(*start_routine) (void *), void *arg);

The way you create an FFF fake is to copy and paste the declaration into your test code; then edit it into this form using the FAKE_VALUE_FUNCTION macro (there is a FAKE_VOID_FUNCTION too):

FAKE_VALUE_FUNCTION(int, pthread_create, pthread_t *, const pthread_attr_t *,
                          void *(*) (void *), void *);

Basically you insert some commas and remove all the parameter names, though in this case that’s not good enough. The function pointer syntax confounds the FFF. The problem is easily solved with a bridging type just for the FFF’s (and your) benefit.

typedef void *(*threadEntry) (void *);
FAKE_VALUE_FUNCTION(int, pthread_create, pthread_t *, const pthread_attr_t *,
                          threadEntry, void *);

Now you basically have a test stub that remembers each parameter, how many times it's been called, and a few other things. It also lets you control the return result in your test scenario.
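To make that bookkeeping concrete, here is a sketch of a tiny test driving the fake. It assumes the usual fff fields and macros (call_count, return_val, argN_val, RESET_FAKE, DEFINE_FFF_GLOBALS; exact names can vary between fff versions), and the direct call to pthread_create simply stands in for your real code under test:

#include <assert.h>
#include <pthread.h>
#include "fff.h"

DEFINE_FFF_GLOBALS;

typedef void *(*threadEntry) (void *);
FAKE_VALUE_FUNCTION(int, pthread_create, pthread_t *, const pthread_attr_t *,
                          threadEntry, void *);

static void *worker(void *arg) { return arg; }

static void test_starts_exactly_one_thread(void)
{
    RESET_FAKE(pthread_create);           /* clear recorded calls between tests */
    pthread_create_fake.return_val = 0;   /* simulate a successful creation */

    pthread_t t;
    int result = pthread_create(&t, NULL, worker, NULL);  /* stands in for the code under test */

    assert(result == 0);
    assert(pthread_create_fake.call_count == 1);          /* the fake counted the call */
    assert(pthread_create_fake.arg2_val == worker);       /* and remembered the entry point */
}

int main(void)
{
    test_starts_exactly_one_thread();
    return 0;
}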

Any of you that have worked with macros know they are not known for the clarity of their compiler errors. Make a mistake in the macro and you have no idea until you use it, and then you know right where the problem is because C preprocessor errors are so clear. Oh wait, that is a different universe, not this one. Actually, the error is almost totally unhelpful.

I came across this handy way to trick the C preprocessor into telling me the macro expansion:

#pragma message "Macro: " FAKE_VALUE_FUNCTION(int, foobar)

Seems like there should be a better way to show a macro expansion on demand. Let me know if you have one.

Look at fff.h and you will see some amazing macro madness. Be happy, as I am, that it is generated and did not have to be hand-crafted. Thanks, Mike, for the great tool and for figuring out that variadic macro madness. To find out more about the FFF, check it out on my fff GitHub fork of Mike Long’s original version.

Categories: Blogs