
Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined, ASP.NET falls back to an auto-generated one. So what happens when the key used to encrypt the cookie isn't the same one used to decrypt it?

Boom goes the dynamite.
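
To stop this happening every time you switch machines or redeploy, you can pin the key explicitly. A minimal sketch of what that section of web.config might look like (the key values below are placeholders; generate your own):

<system.web>
  <machineKey validationKey="REPLACE-WITH-YOUR-VALIDATION-KEY"
              decryptionKey="REPLACE-WITH-YOUR-DECRYPTION-KEY"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>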

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old hosting plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, along with the exact explanation of why it doesn't work.

To make it quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it does not change when you change the parent site's plan in PowerShell (I haven't tried through the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}
      
    
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

So I had to make some changes to an App Service Plan for one of my clients. The first thing I looked at was doing it through the portal. A few clicks and I'm done!

But before I get into how to move one of them, I need to tell you why I had to move 20 of them.

Consolidating the farm

First, my client had a lot of WebApps deployed left and right in different "Default" ServicePlans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance size we deploy on. So we came up with a list of the hosting plans we needed, the apps that had to be moved, and the hosting plan each of them was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote control Azure, import/export CSV files and so much more.

CSV and Azure is what I needed. Since we built a list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what is actually being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic, but since I was only working with apps in East US... I didn't need more.
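
For illustration, the file might look something like this (the app and plan names here are made up):

App,ServicePlanSource,ServicePlanDestination
contoso-api,Default1,StandardPlanEastUS
contoso-web,Default2,StandardPlanEastUS
contoso-jobs,StandardPlanEastUS,StandardPlanEastUS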

This script should be considered "works on my machine". I haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
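
Assuming you saved the script as Move-WebApps.ps1 (the name is made up) next to your CSV file, you would run it like this:

.\Move-WebApps.ps1 -filename .\apps.csv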
    
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a project hosted on a local Git Stash server and the certificates changed. While they were working to fix the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is temporary because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
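
If you'd rather not touch the environment at all, another option (shown here as a sketch) is to scope the setting to a single command with Git's http.sslVerify configuration:

git -c http.sslVerify=false clone https://myurl/myproject.git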

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.

Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

So this will be a short post. I would like to introduce a word into my vocabulary, and into yours too if it isn't already there.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands of a comparison in a conditional".

Let's say I have this code:

string sky = "blue";

if (sky == "blue")
{
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";

if ("blue" == sky)
{
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your comparison, the compiler will catch it, since you can't assign to a string literal. It can also avoid certain null reference errors.
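
The null reference part is easiest to see with .Equals() rather than ==. A quick C# sketch:

string sky = null;

// Throws NullReferenceException, because sky is null:
if (sky.Equals("blue")) { /* ... */ }

// Simply evaluates to false, no exception:
if ("blue".Equals(sky)) { /* ... */ }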

What's the cost of doing this then?

Besides getting on the nerves of every programmer on your team? You reduce the readability of your code by a huge factor.

Every developer on your team will hit a snag on every if, since they will have to learn how to read "Yoda" in your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problem this kind of code solves isn't worth the readability you are losing.

Categories: Blogs

Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - 1 hour 42 min ago
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for it in your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project...

If you answered yes to any of those questions... you are at great risk of having to code those again.

Hell... maybe someone has already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will let you build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.
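
As a tiny, purely hypothetical example of the kind of helper that tends to end up in such a class library:

using System;

public static class StringExtensions
{
    // Hypothetical helper: true when the string is null, empty or whitespace only.
    public static bool IsBlank(this string value)
    {
        return string.IsNullOrWhiteSpace(value);
    }

    // Hypothetical helper: trims a string down to a maximum length.
    public static string Truncate(this string value, int maxLength)
    {
        if (value == null || value.Length <= maxLength)
        {
            return value;
        }

        return value.Substring(0, maxLength);
    }
}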

NuGet

Once you have a good collection that is properly separated into its own project and you feel ready to kick some monkey ass, the only way to go is to use NuGet to pack it together!

Check out the reference to make sure that you do things properly.

NuGet - Publishing

OK, you've got a steamy hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!

Now just make sure to increment your version number on each release, following the SemVer convention.
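
As a rough sketch of that workflow with nuget.exe (the package, feed and server names below are made up):

:: Pack the library from its nuspec; the version inside follows SemVer (e.g. 1.2.0)
nuget pack MyUtilityBelt.nuspec

:: Share it with the world on the main repository...
nuget push MyUtilityBelt.1.2.0.nupkg -ApiKey YOUR-API-KEY

:: ...or keep it internal: drop it on a share and register that share as a package source
copy MyUtilityBelt.1.2.0.nupkg \\buildserver\packages
nuget sources Add -Name "Internal" -Source \\buildserver\packages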

Reap the profit

OK, no... not really. You probably won't be making money with this library anytime soon. At least not in real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again on another project or at another client.

The only thing you'll do is import your magic package, use it and boom. That task they planned a whole day for? Finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow it, but double check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own since you billed for them.

In case of doubt, double check with a lawyer!

Categories: Blogs

Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.

Processor recommendation

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multi-threading, a simple 4-core CPU will cover most needs here.

Memory recommendation

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. Visual Studio, with a few extensions installed, will quickly rise to 1GB of usage per instance. And finally... Chrome. With multiple extensions and multiple pages open... you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.

Hard drive recommendation

512 GB SSD drive

I can't recommend an SSD enough. Most tools that you use on a development machine do a lot of I/O, especially random reads. When a compiler starts and retrieves all your source code to compile, it needs to read all those files. Same thing if you have tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400 RPM HDD in laptops to reduce power usage. However, a 5400 RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second drive of the HDD type instead. Slower, but capacities are higher. On most laptops you will need external storage for this drive, so make sure it is USB 3 compatible.

Graphic Card

Unless you do graphics rendering or work with tools that require a beast of a card... this is where you will spend the least amount of money.

Make sure the card has enough outputs for your number of monitors and that it can drive the right resolution/refresh rate.

Monitors

My minimum requirement nowadays is 22 inches. 4K is nice but is not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying monitors for someone else, make sure they can be rotated. Some developers like to have a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to leave the building, you can go desktop. You will save a bit on all the required accessories (docking station, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city, as well as doing presentations left and right. A laptop it is for me.

Categories: Blogs

SVG are now supported everywhere, or almost

Decaying Code - Maxime Rouiller - 1 hour 42 min ago

I remember that when I wanted to draw some graphs on a web page, I normally had two solutions.

Solution 1 was to have an IMG tag that pointed to a server component that would render an image based on some data. Solution 2 was to use Adobe Flash or maybe even some Silverlight.

Problem with Solution 1

The main problem is that it is not interactive. You have an image and there is no way to drill down or do anything with it. So unless your content was simple and didn't need any kind of interaction, or was simply headed for printing... this solution just wouldn't do.

Problem with Solution 2

While you now get all the interactivity and the beauty of a nice Flash animation and plugin... you lose the benefits of the first solution too. You can't print it if you need to, and on top of that... it requires a plugin.

On OS X back in 2009, plugins were the leading cause of browser crashes, and there is nothing stopping us from believing that similar things are true for other browsers.

The second problem is security. A plugin is just another attack vector on your browser, and requiring a plugin just to display nice graphs seems a bit extreme.

The Solution

The solution is relatively simple. We need a system that allows us to draw lines, curves and whatnot, based on coordinates that we provide.

That system should of course support colors, fonts and all the basic HTML features that we know now (including events).

Then came SVG

SVG has been the main specification for drawing anything vector-related in a browser since 1999. Even though the specification started around the same time as IE5, it wasn't supported in Internet Explorer until IE9 (12 years later).

Support for SVG is now in all major browsers, from Internet Explorer to Firefox, and even on your phone.

Chances are that every computer you are using today can render SVG inside your browser.
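
To show how little it takes, here is a minimal inline snippet you could drop into any HTML page (the coordinates and colors are arbitrary):

<svg width="200" height="100">
  <line x1="10" y1="90" x2="90" y2="10" stroke="black" stroke-width="2" />
  <circle cx="150" cy="50" r="30" fill="blue" />
</svg>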

So what?

SVG is, as a general rule, underused, or thought of as something only artists do, or as too complicated to use.

My recommendation is to start cracking today on libraries that leverage SVG. By leveraging them, you set yourself apart from others and can start offering real business value to your clients right now that others won't be able to.

SVG has been available on all browsers for a while now. It's time we start using it.

Browsers that do not support SVG
  • Internet Explorer 8 and lower
  • Old Android devices (2.3 and lower), partial support for 3 to 4.3
References, libraries and others
Categories: Blogs

Execute the TestComplete TestExecute module remotely on a VM using PSExec

Using the SysInternals tool PSExec.exe to launch TestExecute and run a project on a VM.

  1. Log in to the VM as a normal user
  2. Run PSExec from the command line (or a batch file) on the local machine. I was having trouble with this; the username needs to be unquoted and the password in double quotes.
C:\SysInternals\PSTools\PsExec.exe \\TESTVM -u domain\user -p "P@$$w0rd" -i \\TESTVM\C$\Users\testaccount\Downloads\TestExecuteRemote.bat
  3. The user launching PSExec.exe is the same user logged in to the VM (without admin rights)
  4. The -i option runs it interactively
  5. TestExecuteRemote.bat contains the command line to call TestExecute, which looks like this:
:: Run TestExecute and export log to c:\LOG\ExportLog.mht
:: Log file cannot exist or test will fail to run
:: Test account needs write permission to the project folder (log is also generated under project)
"\\TESTVM\C$\Program Files (x86)\SmartBear\TestExecute 10\Bin\TestExecute.exe" \\TESTVM\C$\Test\ProjectSuite1\ProjectSuite1.pjs /r /e /DoNotShowLog /ExportLog:\\TESTVM\C$\LOG\ExportLog.mht

PSExec.exe has a -l option to run as a limited user, but the remote batch file failed to run using that option.
Categories: Blogs

To improve testing, snoop on the competition

Gojko Adzic - Thu, 04/23/2015 - 09:47

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

As a general rule, teams focus the majority of testing activities on their zone of control, on the modules they develop, or the software that they are directly delivering. But it’s just as irresponsible not to consider competition when planning testing as it is in the management of product development in general, whether the field is software or consumer electronics.

Software products that are unique are very rare, and it’s likely that someone else is working on something similar to the product or project that you are involved with at the moment. Although the products might be built using different technical platforms and address different segments, key usage scenarios probably translate well across teams and products, as do the key risks and major things that can go wrong.

When planning your testing activities, look at the competition for inspiration — the cheapest mistakes to fix are the ones already made by other people. Although it might seem logical that people won’t openly disclose information about their mistakes, it’s actually quite easy to get this data if you know where to look.

Teams working in regulated industries typically have to submit detailed reports on problems caught by users in the field. Such reports are kept by the regulators and can typically be accessed in their archives. Past regulatory reports are a priceless treasure trove of information on what typically goes wrong, especially because of the huge financial and reputation impact of incidents that are escalated to such a level.

For teams that do not work in regulated environments, similar sources of data could be news websites or even social media networks. Users today are quite vocal when they encounter problems, and a quick search for competing products on Facebook or Twitter might uncover quite a few interesting testing ideas.

Lastly, most companies today operate free online support forums for their customers. If your competitors have a publicly available bug tracking system or a discussion forum for customers, sign up and monitor it. Look for categories of problems that people typically inquire about and try to translate them to your product, to get more testing ideas.

For high-profile incidents that have happened to your competitors, especially ones in regulated industries, it’s often useful to conduct a fake post-mortem. Imagine that a similar problem was caught by users of your product in the field and reported to the news. Try to come up with a plausible excuse for how it might have happened, and hold a fake retrospective about what went wrong and why such a problem would be allowed to escape undetected. This can help to significantly tighten up testing activities.

Key benefits

Investigating competing products and their problems is a cheap way of getting additional testing ideas, not about theoretical risks that might happen, but about things that actually happened to someone else in the same market segment. This is incredibly useful for teams working on a new piece of software or an unfamiliar part of the business domain, when they can’t rely on their own historical data for inspiration.

Running a fake post-mortem can help to discover blind spots and potential process improvements, both in software testing and in support activities. High-profile problems often surface because information falls through the cracks in an organisation, or people do not have sufficiently powerful tools to inspect and observe the software in use. Thinking about a problem that happened to someone else and translating it to your situation can help establish checks and make the system more supportable, so that problems do not escalate to that level. Such activities also communicate potential risks to a larger group of people, so developers can be more aware of similar risks when they design the system, and testers can get additional testing ideas to check.

The post-mortem suggestions, especially around improving the support procedures or observability, help the organisation to handle ‘black swans’ — unexpected and unknown incidents that won’t be prevented by any kind of regression testing. We can’t know upfront what those risks are (otherwise they wouldn’t be unexpected), but we can train the organisation to react faster and better to such incidents. This is akin to government disaster relief organisations holding simulations of floods and earthquakes to discover facilitation and coordination problems. It’s much cheaper and less risky to discover things like this in a safe simulated environment than learn about organisational cracks when the disaster actually happens.

How to make it work

When investigating support forums, look for patterns and categories rather than individual problems. Due to different implementations and technology choices, it’s unlikely that third-party product issues will directly translate to your situation, but problem trends or areas of influence will probably be similar.

One particularly useful trick is to look at the root cause analyses in the reports, and try to identify similar categories of problems in your software that could be caused by the same root causes.

Categories: Blogs

The 80/20 rule in Cloud Development

The Build Doctor - Thu, 04/23/2015 - 08:27
Guest Post by Brian Whipple, Marketing & Communications Manager at Cycligent.com. There is a commonly known rule in business called the 80/20 Rule. Introduced as an economics rule to explore...

Visit The Build Doctor for the full article.
Categories: Blogs

Just Say No to More End-to-End Tests

Google Testing Blog - Thu, 04/23/2015 - 01:10
by Mike Wacker

At some point in your life, you can probably recall a movie that you and your friends all wanted to see, and that you and your friends all regretted watching afterwards. Or maybe you remember that time your team thought they’d found the next "killer feature" for their product, only to see that feature bomb after it was released.

Good ideas often fail in practice, and in the world of testing, one pervasive good idea that often fails in practice is a testing strategy built around end-to-end tests.

Testers can invest their time in writing many types of automated tests, including unit tests, integration tests, and end-to-end tests, but this strategy invests mostly in end-to-end tests that verify the product or service as a whole. Typically, these tests simulate real user scenarios.
End-to-End Tests in Theory

While relying primarily on end-to-end tests is a bad idea, one could certainly convince a reasonable person that the idea makes sense in theory.

To start, number one on Google's list of ten things we know to be true is: "Focus on the user and all else will follow." Thus, end-to-end tests that focus on real user scenarios sound like a great idea. Additionally, this strategy broadly appeals to many constituencies:
  • Developers like it because it offloads most, if not all, of the testing to others. 
  • Managers and decision-makers like it because tests that simulate real user scenarios can help them easily determine how a failing test would impact the user. 
  • Testers like it because they often worry about missing a bug or writing a test that does not verify real-world behavior; writing tests from the user's perspective often avoids both problems and gives the tester a greater sense of accomplishment. 
End-to-End Tests in Practice

So if this testing strategy sounds so good in theory, then where does it go wrong in practice? To demonstrate, I present the following composite sketch based on a collection of real experiences familiar to both myself and other testers. In this sketch, a team is building a service for editing documents online (e.g., Google Docs).

Let's assume the team already has some fantastic test infrastructure in place. Every night:
  1. The latest version of the service is built. 
  2. This version is then deployed to the team's testing environment. 
  3. All end-to-end tests then run against this testing environment. 
  4. An email report summarizing the test results is sent to the team.

The deadline is approaching fast as our team codes new features for their next release. To maintain a high bar for product quality, they also require that at least 90% of their end-to-end tests pass before features are considered complete. Currently, that deadline is one day away:

Days Left | Pass % | Notes
1         | 5%     | Everything is broken! Signing in to the service is broken. Almost all tests sign in a user, so almost all tests failed.
0         | 4%     | A partner team we rely on deployed a bad build to their testing environment yesterday.
-1        | 54%    | A dev broke the save scenario yesterday (or the day before?). Half the tests save a document at some point in time. Devs spent most of the day determining if it's a frontend bug or a backend bug.
-2        | 54%    | It's a frontend bug, devs spent half of today figuring out where.
-3        | 54%    | A bad fix was checked in yesterday. The mistake was pretty easy to spot, though, and a correct fix was checked in today.
-4        | 1%     | Hardware failures occurred in the lab for our testing environment.
-5        | 84%    | Many small bugs hiding behind the big bugs (e.g., sign-in broken, save broken). Still working on the small bugs.
-6        | 87%    | We should be above 90%, but are not for some reason.
-7        | 89.54% | (Rounds up to 90%, close enough.) No fixes were checked in yesterday, so the tests must have been flaky yesterday.
Analysis

Despite numerous problems, the tests ultimately did catch real bugs.

What Went Well 
  • Customer-impacting bugs were identified and fixed before they reached the customer.

What Went Wrong 
  • The team completed their coding milestone a week late (and worked a lot of overtime). 
  • Finding the root cause for a failing end-to-end test is painful and can take a long time. 
  • Partner failures and lab failures ruined the test results on multiple days. 
  • Many smaller bugs were hidden behind bigger bugs. 
  • End-to-end tests were flaky at times. 
  • Developers had to wait until the following day to know if a fix worked or not. 

So now that we know what went wrong with the end-to-end strategy, we need to change our approach to testing to avoid many of these problems. But what is the right approach?
The True Value of Tests

Typically, a tester's job ends once they have a failing test. A bug is filed, and then it's the developer's job to fix the bug. To identify where the end-to-end strategy breaks down, however, we need to think outside this box and approach the problem from first principles. If we "focus on the user (and all else will follow)," we have to ask ourselves how a failing test benefits the user. Here is the answer:

A failing test does not directly benefit the user. 

While this statement seems shocking at first, it is true. If a product works, it works, whether a test says it works or not. If a product is broken, it is broken, whether a test says it is broken or not. So, if failing tests do not benefit the user, then what does benefit the user?

A bug fix directly benefits the user.

The user will only be happy when that unintended behavior - the bug - goes away. Obviously, to fix a bug, you must know the bug exists. To know the bug exists, ideally you have a test that catches the bug (because the user will find the bug if the test does not). But in that entire process, from failing test to bug fix, value is only added at the very last step.

Stage       | Failing Test | Bug Opened | Bug Fixed
Value Added | No           | No         | Yes
Thus, to evaluate any testing strategy, you cannot just evaluate how it finds bugs. You also must evaluate how it enables developers to fix (and even prevent) bugs.
Building the Right Feedback Loop

Tests create a feedback loop that informs the developer whether the product is working or not. The ideal feedback loop has several properties:
  • It's fast. No developer wants to wait hours or days to find out if their change works. Sometimes the change does not work - nobody is perfect - and the feedback loop needs to run multiple times. A faster feedback loop leads to faster fixes. If the loop is fast enough, developers may even run tests before checking in a change. 
  • It's reliable. No developer wants to spend hours debugging a test, only to find out it was a flaky test. Flaky tests reduce the developer's trust in the test, and as a result flaky tests are often ignored, even when they find real product issues. 
  • It isolates failures. To fix a bug, developers need to find the specific lines of code causing the bug. When a product contains millions of lines of codes, and the bug could be anywhere, it's like trying to find a needle in a haystack. 
Think Smaller, Not Larger

So how do we create that ideal feedback loop? By thinking smaller, not larger.

Unit Tests

Unit tests take a small piece of the product and test that piece in isolation. They tend to create that ideal feedback loop:

  • Unit tests are fast. We only need to build a small unit to test it, and the tests also tend to be rather small. In fact, one tenth of a second is considered slow for unit tests. 
  • Unit tests are reliable. Simple systems and small units in general tend to suffer much less from flakiness. Furthermore, best practices for unit testing - in particular practices related to hermetic tests - will remove flakiness entirely. 
  • Unit tests isolate failures. Even if a product contains millions of lines of code, if a unit test fails, you only need to search that small unit under test to find the bug. 

Writing effective unit tests requires skills in areas such as dependency management, mocking, and hermetic testing. I won't cover these skills here, but as a start, the typical example offered to new Googlers (or Nooglers) is how Google builds and tests a stopwatch.
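
As a rough illustration (this is my own sketch, not Google's stopwatch example), a hermetic unit test usually boils down to injecting the dependencies you can't control, such as a clock. NUnit is assumed here, but any framework works the same way:

using System;
using NUnit.Framework;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Test double that lets the test control time instead of the real system clock.
public class FakeClock : IClock
{
    public DateTime UtcNow { get; set; }
}

public class Stopwatch
{
    private readonly IClock _clock;
    private DateTime _start;

    public Stopwatch(IClock clock) { _clock = clock; }

    public void Start() { _start = _clock.UtcNow; }

    public TimeSpan Elapsed { get { return _clock.UtcNow - _start; } }
}

[TestFixture]
public class StopwatchTests
{
    [Test]
    public void Elapsed_is_computed_from_the_injected_clock()
    {
        var clock = new FakeClock { UtcNow = new DateTime(2015, 1, 1, 12, 0, 0) };
        var stopwatch = new Stopwatch(clock);

        stopwatch.Start();
        clock.UtcNow = clock.UtcNow.AddSeconds(5);

        // Deterministic and fast: no sleeping, no real time involved.
        Assert.AreEqual(TimeSpan.FromSeconds(5), stopwatch.Elapsed);
    }
}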

Unit Tests vs. End-to-End Tests

With end-to-end tests, you have to wait: first for the entire product to be built, then for it to be deployed, and finally for all end-to-end tests to run. When the tests do run, flaky tests tend to be a fact of life. And even if a test finds a bug, that bug could be anywhere in the product.

Although end-to-end tests do a better job of simulating real user scenarios, this advantage quickly becomes outweighed by all the disadvantages of the end-to-end feedback loop:

                      | Unit | End-to-End
Fast                  | Yes  |
Reliable              | Yes  |
Isolates Failures     | Yes  |
Simulates a Real User |      | Yes

Integration Tests

Unit tests do have one major disadvantage: even if the units work well in isolation, you do not know if they work well together. But even then, you do not necessarily need end-to-end tests. For that, you can use an integration test. An integration test takes a small group of units, often two units, and tests their behavior as a whole, verifying that they coherently work together.

If two units do not integrate properly, why write an end-to-end test when you can write a much smaller, more focused integration test that will detect the same bug? While you do need to think larger, you only need to think a little larger to verify that units work together.
Testing Pyramid

Even with both unit tests and integration tests, you probably still will want a small number of end-to-end tests to verify the system as a whole. To find the right balance between all three test types, the best visual aid to use is the testing pyramid. Here is a simplified version of the testing pyramid from the opening keynote of the 2014 Google Test Automation Conference:



The bulk of your tests are unit tests at the bottom of the pyramid. As you move up the pyramid, your tests get larger, but at the same time the number of tests (the width of your pyramid) gets smaller.

As a good first guess, Google often suggests a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. The exact mix will be different for each team, but in general, it should retain that pyramid shape. Try to avoid these anti-patterns:
  • Inverted pyramid/ice cream cone. The team relies primarily on end-to-end tests, using few integration tests and even fewer unit tests. 
  • Hourglass. The team starts with a lot of unit tests, then uses end-to-end tests where integration tests should be used. The hourglass has many unit tests at the bottom and many end-to-end tests at the top, but few integration tests in the middle. 
Just like a regular pyramid tends to be the most stable structure in real life, the testing pyramid also tends to be the most stable testing strategy.


Categories: Blogs

TDD For Your Soul

Testing TV - Wed, 04/22/2015 - 18:55
As a new Ruby developer, I expected to be challenged intellectually as I dove into algorithms, data structures, and object-oriented design, but I was disturbed to realize that coding, especially in an Agile workflow, actually challenged me personally. Web development pushes us to our limits, not only of cognition, but of character. Character has been […]
Categories: Blogs

Have you Crossed the Agile Chasm?

In 1991, Geoffrey Moore refined the classic technology adoption model with an additional element he called the “chasm.”[1] He advanced a proposition, specific to disruptive innovation, that there is a significant shift in mentality to be crossed between the early adopter and the early majority groups. Disruptive innovation is the development of new values that forces a significant change of behavior on the culture adopting it. In this case, Agile is that disruptive force that insists on applying a set of values and principles within a specific culture of “being Agile” to be successful and for the organization to realize the full business benefits of Agile.
At first glance, it would appear that many companies have adopted Agile. I believe, however, that this perception is specious: the majority of companies that are “doing” Agile have not actually adopted the new values and principles and have not made the cultural shift to actually “being” Agile. Such companies look at Agile as a set of skills, tools, and procedural changes and not the integrated behavioral and cultural change it truly is. In other words, they think they have crossed the chasm, but they have not made the significant change of behavior required to make the leap.

My experience in the field leads me to posit a refinement on Moore's chasm concept as applied to Agile. First, there is the real Agile chasm between those on the left side, who have made the organic behavioral changes consistent with the values of being Agile, and those on the right side, who are just “doing” Agile mechanically. Second, there is a fake chasm, which many organizations pride themselves on having crossed by virtue of adopting some mechanical features of Agile, whereas they have not been willing or able to make the behavioral changes and adopt the values required to cross the real chasm. Although many companies say that they are doing Agile in some form, a large proportion of these are actually doing Fragile ("fake Agile") or some other hybrid variant that cannot deliver the business benefits of Agile.
I cannot overstate this point: many companies and their teams are mechanically doing some form of Agile without having actually crossed the Agile chasm, and without discarding the behavioral baggage that keeps them from behaviorally and culturally being Agile. Until a team attains the state of being Agile, the business benefits that Agile can provide will be elusive. I contend that the industry has barely entered the early majority of true Agile cultural transformation, and many companies continue to struggle to leap the Agile chasm. What have you noticed across the Agile landscape? Have companies crossed the Agile chasm?
Note: If you are looking for more insight into crossing the Agile chasm, consider reading the book Being Agile. This book lays the foundation for those who want to cross the Agile cultural chasm, understand the behaviors that need to change, and gauge progress along the way. It provides an Agile transformation roadmap to the destination of achieving better business results.

[1] Geoffrey A. Moore, Crossing the Chasm (New York: Harper Business Essentials, 1991).
Categories: Blogs

Saga Implementation Patterns: Singleton

Jimmy Bogard - Fri, 04/17/2015 - 17:38

NServiceBus sagas are great tools for managing asynchronous business processes. We use them all the time for dealing with long-running transactions, integration, and even places we just want to have a little more control over a process.

Occasionally we have a process where we really only need one instance of that process running at a time. In our case, it was a process to manage periodic updates from an external system. In the past, I’ve used Quartz with NServiceBus to perform job scheduling, but for processes where I want to include a little more information about what’s been processed, I can’t extend the Quartz jobs as easily as NServiceBus saga data. NServiceBus also provides a scheduler for simple jobs, but those don’t have persistent data, which you might want to keep for a periodic process.

Regardless of why you’d want only one saga entity around, with a singleton saga you run into the issue of a Start message arriving more than once. You have two options here:

  1. Create a correlation ID that is well known
  2. Force a creation of only one saga at a time

I didn’t really like the first option, since it requires whoever starts the saga to provide some bogus correlation ID and never, ever change that ID. I don’t like things that I could potentially screw up, so I prefer the second option. First, we create our saga and saga entity:

public class SingletonSaga : Saga<SingletonSagaData>,
    IAmStartedByMessages<StartSingletonSaga>,
    IHandleTimeouts<SagaTimeout>
{
    protected override void ConfigureHowToFindSaga(
        SagaPropertyMapper<SingletonSagaData> mapper)
    {
    	// no-op
    }

    public void Handle(StartSingletonSaga message)
    {
        if (Data.HasStarted)
        {
            return;
        }

        Data.HasStarted = true;
        
        // Do work like request a timeout
        RequestTimeout(TimeSpan.FromSeconds(30), new SagaTimeout());
    }
    
    public void Timeout(SagaTimeout state)
    {
    	// Send message or whatever work
    }
}
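
The saga data class itself isn't shown above; a minimal sketch of what it might look like, assuming NServiceBus's ContainSagaData base class (which supplies Id, Originator and OriginalMessageId):

public class SingletonSagaData : ContainSagaData
{
    // Used to ignore duplicate StartSingletonSaga messages after the first one.
    public bool HasStarted { get; set; }
}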

Our saga entity has a property “HasStarted” that’s just used to track that we’ve already started. Our process in this case is a periodic timeout and we don’t want two sets of timeouts going. We leave the message/saga correlation piece empty, as we’re going to force NServiceBus to only ever create one saga:

public class SingletonSagaFinder
    : IFindSagas<SingletonSagaData>.Using<StartSingletonSaga>
{
    public NHibernateStorageContext StorageContext { get; set; }

    public SingletonSagaData FindBy(StartSingletonSaga message)
    {
        return StorageContext.Session
            .QueryOver<SingletonSagaData>()
            .SingleOrDefault();
    }
}

With our custom saga finder we only ever return the one saga entity from persistent storage, or nothing. Combined with the guard in our StartSingletonSaga handler that skips the first-time logic once it has run, this ensures we only ever do the first-time work once.

That’s it! NServiceBus sagas are handy because of their simplicity and flexibility, and implementing something like a singleton saga is just about as simple as it gets.


Categories: Blogs

Forget DevOps – focus on DevMOps

The Social Tester - Thu, 04/16/2015 - 12:00

Forget DevOps – focus on DevMOps. There seems to be a growing problem with DevOps and Marketing not aligning. So what about DevMOps = Dev + Marketing + Ops Ok – so we may want to rename it but recently...
Read more

The post Forget DevOps – focus on DevMOps appeared first on Rob Lambert.

Categories: Blogs

11 years at ThoughtWorks

thekua.com@work - Wed, 04/15/2015 - 22:45

I had planned to write a “10 years at ThoughtWorks” post but was busy on a sabbatical learning a real language (German!). This year, I decided to get around to writing an anniversary post. One of the impressive benefits for long-time employees is a 12-week paid break (it’s mandated by law in Australia, but not elsewhere in the world).

When I think about my time here at ThoughtWorks, I can’t believe that I have been here so long. I still remember, after graduating from university, thinking it was unlikely I would stay with any company for more than two years, because I wanted to learn, change and grow. I thought that would be difficult in a permanent position at any other company. I wanted to stay in the zone but also find opportunities to do interesting work. Consulting proved to be a good middle ground for what I was looking for.

What a ride it has been. Oh, and it’s still going, too :)

Facts

Like most companies, ThoughtWorks has changed and evolved over time.

  • When I started, we had (I’m guessing) about 10 offices in four countries. As of this post, we have 30 offices in 12 countries (Australia, Brazil, Canada, China, Ecuador, Germany, India, Singapore, South Africa, Uganda, the United Kingdom, and the United States), some of them in places I would never have guessed we would have offices.
  • When I started, we had maybe 500 employees worldwide. We now have somewhere between 2,500 and 3,000 people.
  • When I started, we were pretty much a consulting firm doing software development projects. Since then we have added a product division and a highly integrated UX capability, and we are influencing companies at the CxO level, which means a different type of consulting (whilst still keeping our core offering of delivering effective software solutions).

We don’t always get things right, but I do see ThoughtWorks take risks. In doing so, that means trying things and being prepared to fail or succeed. Although we have grown, I have found it interesting to see how our culture remains, in some ways, very consistent, yet adapts to the local market cultures and constraints of where we operate. When I have visited our Brazilian offices, it felt like TW but with a Brazilian flavour; likewise when I visit our German offices.

Observations

I find it constantly interesting to talk to alumni, or to people who have never worked for ThoughtWorks, and see what their perceptions are. Some alumni have a very fixed perception of what the company is like (based on their time with the company), and it’s interesting to contrast that view with my own, given that the company constantly changes.

We are still (at least here in the UK) mostly a consulting firm, so some of the normal challenges of running a consulting business still apply, both from an operational perspective and as a consultant out in the field. Working on a client site often means travel, and we are still affected by the ebbs and flows of customer demand around client budgeting cycles.

Based on my own personal observations (YMMV), we as a company have got a lot better at leadership development and support (although there is always opportunity to improve). I also find that we tend, on average, to get more aligned clients and have opportunities to make a greater impact. We have become better at expressing our values (The Three Pillars) and at finding clients we can help and who are ready for that help.

It is always hard to see colleagues move on, as it means building new relationships, but that is a reality in all firms, and even more so in consulting firms. After coming back from my sabbatical I had to deal with quite a bit of change. Our office had moved, a significant part of our management team had changed, and of course there were lots of new colleagues I hadn’t met. At the same time, I was surprised to see how many long-time employees (and not just operational people) were still around, and it was very comforting to reconnect with them and renew those relationships.

Highlights

I’ve been particularly proud of some of the impact and some of the opportunities I have had. Some of my personal highlights include:

  • Being the keynote speaker for the 2000-attendee Agile Brazil conference.
  • Publishing my first book, The Retrospective Handbook, a book that makes the agile retrospective practice even more effective.
  • Publishing my second book, Talking with Tech Leads, a book that collects the experiences of Tech Leads around the world aimed at helping new or existing Tech Leads improve their skills.
  • Developing a skills training course for Tech Leads that we run internally. It’s a unique experiential course aimed at raising awareness of, and developing, the skills developers need when they play the Architect or Tech Lead roles. I may even have an opportunity to run it externally this year.
  • Being considered a role model for how ThoughtWorks employees can have an impact on clients, on the industry, and within our own company.
Categories: Blogs

10 Reasons Why Being A Scrum Master Sucks – And a job opportunity

The Social Tester - Mon, 04/13/2015 - 11:00

10 Reasons Why Being A Scrum Master Sucks* I loved my time as a scrum master. It was an interesting and rewarding role. But being a scrum master mostly sucks. And here’s why. 1. You spend more money than you...
Read more

The post 10 Reasons Why Being A Scrum Master Sucks – And a job opportunity appeared first on Rob Lambert.

Categories: Blogs