Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

Have you ever had this error while switching between projects that use Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined there, ASP.NET falls back to an auto-generated one. So what happens when the key used to encrypt isn't the same as the one used to decrypt?

Boom goes the dynamite.
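
The fix is to pin the key explicitly so every environment encrypts and decrypts with the same value. A minimal sketch of what that looks like in web.config; the key values are placeholders you would generate yourself:

<system.web>
  <machineKey validationKey="YOUR-GENERATED-VALIDATION-KEY"
              decryptionKey="YOUR-GENERATED-DECRYPTION-KEY"
              validation="HMACSHA256" decryption="AES" />
</system.web>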


Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group... all of this is done in my free time, and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!


Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, failing with the error message above.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, along with the exact explanation of why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it does not follow along when you change the parent site's plan in PowerShell (I haven't tried through the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}
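
Once the script tells you which slot still points at the old plan, you can re-point it before deleting the plan. A hedged sketch, mirroring the parameters used above; the slot, site, resource group and plan names are all placeholders:

Switch-AzureMode AzureResourceManager
# Re-point the 'staging' slot of site 'mysite' to the plan 'MyNewPlan'
$prop = @{ 'serverFarm' = 'MyNewPlan' }
Set-AzureResource -Name 'staging' -ResourceGroupName 'Default-Web-EastUS' -ResourceType 'Microsoft.Web/sites/slots' -ParentResource 'sites/mysite' -ApiVersion 2014-04-01 -PropertyObject $prop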
      
    

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

So I had to make some changes to the App Service Plans for one of my clients. The first thing I tried was to do it in the portal. A few clicks and I'm done!

But before I get into how to move one of them, I'll need to tell you why I had to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even by Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale, and especially the instance size we deployed on. So we came up with a list of the hosting plans we needed, the apps that had to be moved, and the plan each app was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

PowerShell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code, or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made the command more generic, but since I was working with apps in EastUS only, well... I didn't need more.
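
For illustration, a hypothetical input file (all names made up) could look like this:

App,ServicePlanSource,ServicePlanDestination
MyApi,Default1,StandardPlanEast
MyWebsite,Default2,StandardPlanEast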

This script should be considered "works on my machine"; I haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
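
Assuming you saved the script as Move-WebApps.ps1 (a name I made up), you would invoke it like so:

.\Move-WebApps.ps1 -filename .\apps.csv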
    

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!


Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a project hosted on a local Git Stash server and the certificates changed. While the team was working to fix the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is acceptable only because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
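
If you'd rather not touch the environment at all, the same setting can be scoped to a single command through Git's http.sslVerify config key, which is what the variable maps to:

git -c http.sslVerify=false fetch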

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.


The Yoda Condition

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

So this will be a short post. I would like to introduce a term into my vocabulary, and into yours too if it isn't there already.

I would like to credit Nathan Smith for teaching me that term this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands of a comparison in a conditional".

Let's say I have this code:

string sky = "blue";if(sky == "blue) {    // do something}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";	if("blue" == sky){    // do something}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you type "=" instead of "==", the code will fail at compile time, since you can't assign to a string literal. It can also avoid certain null reference errors.
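
To illustrate the null reference point, here's a small sketch of my own, not from the original tweet:

string sky = null;

// sky.Equals("blue") would throw a NullReferenceException
// "blue".Equals(sky) simply returns false
if ("blue".Equals(sky))
{
    // not reached, and no exception thrown
}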

What's the cost of doing this then?

Besides getting on the nerves of every programmer on your team? You reduce the readability of your code by a huge factor.

Every developer on your team will hit a snag on every if, since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you're not going to be the only person maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you're losing.


Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - 2 hours 23 min ago
Just like most of us on any project, you (yes, you!) as a developer must have done the same things over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for it in your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project...

If you answered yes to any of those questions... you are at great risk of having to code those again.

Hell... maybe someone has already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will let you build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet

Once you have a good collection that is properly separated into a project, and you feel ready to kick some monkey ass, the only way to go is to use NuGet to pack it together!

Check out the reference to make sure that you do things properly.
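
For reference, the basic packing flow with the NuGet command line looks something like this; nuget spec generates a template .nuspec to fill in, and nuget pack builds the .nupkg from the project (MyLibrary is a placeholder name):

nuget spec
nuget pack MyLibrary.csproj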

NuGet - Publishing

OK, you've got a hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!
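
A quick sketch of that setup; the server, share and package names are mine. You register the share as a source, then drop your .nupkg files onto it:

nuget sources add -Name "Internal" -Source \\myserver\packages
copy MyLibrary.1.0.0.nupkg \\myserver\packages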

Now just make sure to increment your version number on each release using the SemVer convention: bump the patch number for a bug fix (1.4.2 to 1.4.3), the minor number for a backward-compatible feature (1.4.2 to 1.5.0) and the major number for a breaking change (1.4.2 to 2.0.0).

Reap the profit

OK, no... not really. You probably won't be making money anytime soon with this library. At least not real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again on another project or at another client.

The only thing you'll do is import your magic package, use it, and boom. That task they planned a whole day for? Finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow this, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim the tools as their own since they paid for them.

In case of doubt, double check with a lawyer!


Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.

Processor Recommendation

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multi-threading, a simple 4-core processor will cover most needs.

Memory Recommendation

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. Visual Studio with a few extensions installed will quickly climb to 1GB of usage per instance. And finally... Chrome. With multiple extensions and multiple tabs open... you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.

Hard Drive Recommendation

512 GB SSD drive

I can't recommend an SSD enough. Most tools that you use on a development machine require a lot of I/O, especially random reads. When a compiler starts and retrieves all your source code to compile, it needs to read all those files. Same thing if you have tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400RPM HDD in laptops to reduce power usage. However, a 5400RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second hard drive of the HDD type instead. Slower, but capacities are also higher. On most laptops, you will need external storage for this second drive, so make sure it is USB 3.0 compatible.

Graphics Card

Unless you do graphics rendering or are working with tools that require a beast of a card... this is where you can spend the least amount of money.

Make sure the card has enough outputs for your number of monitors, and that it can drive the right resolution and refresh rate.

Monitors

My minimum requirement nowadays is 22 inches. 4K is nice but is not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying them for someone else, make sure they can be rotated. Some developers like to have a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to leave the building, you can go desktop. You will save a bit on all the required accessories (docking station, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city as well as doing presentations left and right. Laptop it is for me.


SVG is now supported everywhere, or almost

Decaying Code - Maxime Rouiller - 2 hours 23 min ago

I remember that when I wanted to draw some graphs on a web page, I normally had two solutions.

Solution 1 was to have an IMG tag that linked to a server component that rendered an image based on some data. Solution 2 was to use Adobe Flash or maybe even some Silverlight.

Problem with Solution 1

The main problem is that it is not interactive. You have an image, and there is no way to drill down or do anything with it. So unless your content was simple, didn't need any kind of interaction, or was simply headed for printing... this solution just wouldn't do.

Problem with Solution 2

While you now get all the interactivity and the beauty of a nice Flash animation... you lose the benefits of the first solution too. You can't print it if you need to, and on top of that... it requires a plugin.

On OS X back in 2009, plugins were the leading cause of browser crashes, and there is nothing stopping us from believing the same is true for other browsers.

The second problem is security. A plugin is just another attack vector on your browser, and requiring a plugin just to display nice graphs seems a bit extreme.

The Solution

The solution is relatively simple. We need a system that allows us to draw lines, curves and whatnot, based on coordinates that we provide.

That system should of course support colors, fonts and all the basic HTML features that we know now (including events).

Then came SVG

SVG has been the main specification for drawing anything vector-related in a browser since 1999. Even though the specification started around the same time as IE5, it wasn't supported in Internet Explorer until IE9 (12 years later).

Support for SVG is now in all major browsers, from Internet Explorer to Firefox, and even on your phone.

Chances are that every computer you are using today can render SVG inside your browser.
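
To show how low the barrier is, here's a minimal sketch of my own that you can paste straight into any HTML page:

<svg width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="steelblue" />
</svg>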

So what?

As a general rule, SVG is underused; it's thought of as something only artists do, or as too complicated to bother with.

My recommendation is to get cracking today on libraries that leverage SVG. By leveraging them, you set yourself apart from others and can start offering your clients real business value right now that others won't be able to.

SVG has been available on all browsers for a while now. It's time we start using it.

Browsers that do not support SVG
  • Internet Explorer 8 and lower
  • Old Android devices (2.3 and lower); partial support from 3 to 4.3

Upcoming DevOps & Agile Events

James Betteley's Release Management Blog - Thu, 05/21/2015 - 12:28

London Puppet User Group Meetup
London, Thursday May 21st, 2015
6:00pm
http://goo.gl/C2zuKb

DevOps Exchange London – DevOps & DevOps
London, Tuesday May 26th, 2015
6:30pm
http://goo.gl/Xmdqxl

London Agile Discussion Group – Should DevOps be a person or a team-wide skill?
London, Tuesday May 26th, 2015
6:30pm
http://goo.gl/xksVOH

AWS User Group UK – meetup #15
London, Wed May 27th, 2015
6:30pm
http://goo.gl/uBsiUj

Chef Users London – Microsoft Azure / Chef Taster Day
London, Friday May 29, 2015
9:00am to 5:00pm
http://goo.gl/VOvkC3

DevOps Cardiff – Herding ELKs with consul.io
Cardiff, Wednesday, June 3, 2015
6:30pm
http://goo.gl/WwOvkQ

Agile Testing – Visual Creativity: Using Sketchnotes & Mindmaps to aid testing @ #ltgworkshops
London, Thursday June 4th, 2015
8:30am
http://goo.gl/34iIXM

ABC (Agile Book Club) London – Review Jeff Patton’s User Story Mapping
London, Thursday June 4th, 2015
6:30pm
http://goo.gl/X0qPwb

Agile Testing – Hooking Docker Into Selenium @ #ltgworkshops
London, Thursday June 4th, 2015
8:30am
http://goo.gl/ONH8dQ

UK Azure User Group – Cloud Gaming Hackathon
London, Saturday June 6th, 2015
9:30am
http://goo.gl/ONH8dQ

London DevOps – London DevOps Meetup #10
London, Thursday June 11th, 2015
7:00pm
http://goo.gl/uolxJk

Kanban Coaching Exchange – Continuous learning through communities of practice – Emily Webber
London, Thursday June 11th, 2015
6:30pm
http://goo.gl/9aFD8x

Lean Agile Manchester
Manchester, Wednesday June 17th, 2015
6:30pm
http://goo.gl/Z15ac3

London Lean Coffee – Holborn
London, Thursday, June 18th, 2015
9-10am
http://goo.gl/QkIBhj

UK Azure User Group – Chris Risner
London, Thursday June 18th, 2015
7:00pm
http://goo.gl/EfbNnn

Jenkins User Conference – Europe (London)
London, Tuesday June 23rd – 24th, 2015
2 days
http://goo.gl/achJJX

BDD London June Meetup
London, Thursday June 25th, 2015
6:30pm
http://goo.gl/C2zuKb

Automated Database Deployment (Workshop – £300)
Belfast, Northern Ireland, Friday June 26th, 2015
1 day course
http://goo.gl/fXlJr7

Database Continuous Integration (Workshop – £300)
London, July 8th, 2015
1 day course
http://goo.gl/lW4TjA

Database Source Control (Workshop – £100)
London, July 8th, 2015
1 day course
http://goo.gl/C2zuKb

London Lean Coffee – Holborn
London, Thursday, July 16, 2015
9-10am
http://goo.gl/mtJ3k4

Agile Taster – a free introductory Agile training course
Cardiff, Saturday 18 July 2015
10am – 3pm
http://goo.gl/qFYS6b

AWS User Group UK – meetup #16
London, Wed July 22nd, 2015
6:30pm
http://goo.gl/Tc3hlD



End-to-End Hypermedia: Making the Leap

Jimmy Bogard - Tue, 05/19/2015 - 17:26

REST, a term that few people understand and fewer know how to implement, has become a blanket term for any sort of Web API. That’s unfortunate, because the underlying foundation of REST has a lot of benefits. So much so that I’ve started talking about regular Web APIs not as “RESTful” but just as a “Web API”. The value of REST for me has come from the hypermedia aspect of REST.

REST and hypermedia aren’t free – they significantly complicate both the building of the server API and the clients. But they are useful in certain scenarios, as I laid out in talking about the value proposition of hypermedia:

  • Native mobile apps
  • Disparate client deployments talking to a single server
  • Clients talking to disparate server deployments

I’ve only put one hypermedia-driven API into production (which, to be frank, is one more than most folks who talk about REST). I’ve attempted to build many other hypermedia APIs, only to find hypermedia was complete overkill.

If your client is deployed at the same time as your server and lives in the same source control repository, hypermedia doesn’t provide much value at all.

Hypermedia is great at decoupling client from server, allowing the client to adjust according to the server. In most apps I build, I happily couple client to server, taking advantage of the metadata I find on the server to build highly intelligent clients:

@using (Html.BeginForm()) 
{
    @Html.AntiForgeryToken()
    
    <div class="form-horizontal">
        <h4>Instructor</h4>
        <hr />
        @Html.ValidationDiv()
        @Html.FormBlock(m => m.LastName)
        @Html.FormBlock(m => m.FirstMidName)
        @Html.FormBlock(m => m.HireDate)
        @Html.FormBlock(m => m.OfficeAssignmentLocation)
    </div>
}

In this case, my client is the browser, but my view is intelligently built up so that labels, text inputs, drop downs, checkboxes, date pickers and so on are created using metadata from a variety of sources. I can even employ this mechanism in SPAs, where my templates are pre-rendered using server metadata.

I don’t really build APIs for clients I can’t completely control, so those have completely different considerations. Building an API for public consumption means you want to enable as many clients as possible, balancing coupling with flexibility. For the APIs I’ve built for clients I don’t own, I’ve never used hypermedia. It just put too much of a burden on my clients, so I left them as plain old JSON objects (POJSONOs).

So if you’ve found yourself in a situation where you’ve convinced yourself you do need hypermedia, primarily based on coupling decisions, you’ll need to do a few things to get a full hypermedia solution end-to-end:

  • Choose a hypermedia-rich media type
  • Build the server API
  • Build the client consumer
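
To make the first bullet concrete: a hypermedia-rich media type such as HAL (application/hal+json) carries links alongside the data, so a client can follow them instead of hard-coding URLs. A hypothetical order resource might look like:

{
  "_links": {
    "self":   { "href": "/orders/42" },
    "cancel": { "href": "/orders/42/cancel" }
  },
  "total": 18.50,
  "status": "open"
}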

In the next few posts, I’ll walk through end-to-end hypermedia from my experiences of shipping a hypermedia API server and a client consumer.

 



Fifty Quick Ideas To Improve Your Tests now available

Gojko Adzic - Tue, 05/19/2015 - 10:00

My new book, Fifty Quick Ideas to Improve Your Tests, is now available on Amazon. Grab it at a 50% discount before Friday.

This book is for cross-functional teams working in an iterative delivery environment, planning with user stories and testing frequently changing software under tough time pressure. This book will help you test your software better, easier and faster. Many of these ideas also help teams engage their business stakeholders better in defining key expectations and improve the quality of their software products.

For more info, check out FiftyQuickIdeas.com


Security Testing for Test Professionals

Testing TV - Mon, 05/18/2015 - 17:31
Coveros CEO Jeff Payne goes into detail about his upcoming STARWEST tutorial, the importance of software testing in the mobile age, the most common types of breaches, and how he would have handled some security issues that Twitter encountered. Video producer: http://www.techwell.com/

I Prefer This Over That

A couple weeks ago I tweeted:

I prefer: - Recovery over Perfection - Predictability over Commitment - Safety Nets over Change Control - Collaboration over Handoffs

— ElisabethHendrickson (@testobsessed) May 6, 2015

Apparently it resonated. I think that’s more retweets than anything else original I’ve said on Twitter in my seven years on the platform. (SEVEN years? Holy snack-sized sound bytes! But I digress.)

@jonathandart said, “I would love to read a fleshed out version of that tweet.”

OK, here you go.

First, a little background. Since I worked on Cloud Foundry at Pivotal for a couple years, I’ve been living the DevOps life. My days were filled with zero-downtime deployments, monitoring, configuration as code, and a deep antipathy for snowflakes. We honed our practices around deployment checklists, incident response, and no-blame post mortems.

It is within that context that I came to appreciate these four simple statements.

Recovery over Perfection

Something will go wrong. Software might behave differently with real production data or traffic than you could possibly have imagined. AWS could have an outage. Humans, being fallible, might publish secret credentials in public places. A new security vulnerability may come to light (oh hai, Heartbleed).

If we aim for perfection, we’ll be too afraid to deploy. We’ll delay deploying while we attempt to test all the things (and fail anyway because ‘all the things’ is an infinite set). Lowering the frequency with which we deploy in order to attempt perfection will ironically increase the odds of failure: we’ll have fewer turns of the crank and thus fewer opportunities to learn, so we’ll be even farther from perfect.

Perfect is indeed the enemy of good. Striving for perfection creates brittle systems.

So rather than strive for perfection, I prefer to have a Plan B. What happens if the deployment fails? Make sure we can roll back. What happens if the software exhibits bad behavior? Make sure we can update it quickly.

Predictability over Commitment

Surely you have seen at least one case where estimates were interpreted as a commitment, and a team was then pressured to deliver a fixed scope in fixed time.

Some even think such commitments light a fire under the team. They give everyone something to strive for.

It’s a trap.

Any interesting, innovative, and even slightly complex development effort will encounter unforeseen obstacles. Surprises will crop up that affect our ability to deliver. If those surprises threaten our ability to meet our commitments, we have to make painful tradeoffs: Do we live up to our commitment and sacrifice something else, like quality? Or do we break our commitment? The very notion of commitment means we probably take the tradeoff. We made a commitment, after all. Broken commitments are a sign of failure.

Commitment thus trumps sustainability. It leads to mounting technical debt. Some number of years later, teams find themselves constantly firefighting, unable to make any progress.

The real problem with commitments is that they suggest that achieving a given goal is more important than positioning ourselves for ongoing success. It is not enough to deliver on this one thing. With each delivery, we need to improve our position to deliver in the future.

So rather than committing in the face of the unknown, I prefer to use historical information and systems that create visibility to predict outcomes. That means having a backlog that represents a single stream of work, and using velocity to enable us to predict when a given story will land. When we’re surprised by the need for additional work, we put that work in the backlog and see the implications. If we don’t like the result, we make an explicit decision to tradeoff scope and time instead of cutting corners to make a commitment.

Aiming for predictability instead of commitment allows us to adapt when we discover that our assumptions were not realistic. There is no failure, there is only learning.

Safety Nets over Change Control

If you want to prevent a given set of changes from breaking your system, you can either put in place practices to tightly control the nature of the changes, or you can make it safer to change things.

Controlling the changes typically means having mechanisms to accept or reject proposed changes: change control boards, review cycles, quality gates.

Such systems may be intended to mitigate risk, but they do so by making change more expensive. The people making changes have to navigate through the labyrinth of these control systems to deliver their work. More expensive change means less change means less risk. Unless the real risk to your business is a slogging pace of innovation in a rapidly changing market.

Thus rather than building up control systems that prevent change, I’d rather find ways to make change safe. One way is to ensure recoverability. Recovery over perfection, after all.

Fast feedback cycles make change safe too. So instead of a review board, I’d rather have CI to tell us when the system is violating expectations. And instead of a laborious code review process, I’d rather have a pair work with me in real time.

If you want to keep the status quo, change control is fine. But if you want to go fast, find ways to make change cheap and safe.

Collaboration over Handoffs

In traditional processes there are typically a variety of points where one group hands off work to another. Developers hand off to other developers, to QA for test, to Release Engineering to deliver, or to Ops to deploy. Such handoffs typically involve checklists and documentation.

But the written word cannot convey the richness of a conversation. Things will be missed. And then there will be a back and forth.

“You didn’t document foo.”
“Yes, we did. See section 3.5.1.”
“I read that. It doesn’t give me the information I need.”

The next thing you know it’s been 3 weeks and the project is stalled.

We imagine a proper handoff to be an efficient use of everyone’s time, but they’re risky. Too much can go wrong, and when it does progress stops.

Instead of throwing a set of deliverables at the next team down the line, bring people together. Embed testers in the development team. Have members of the development team rotate through Ops to help with deployment and operation for a period of time. It actually takes less time to work together than it does to create sufficient documentation to achieve a perfect handoff.

True Responsiveness over the Illusion of Control

Ultimately all these statements are about creating responsive systems.

When we design processes that attempt to corral reality into a neat little box, we set ourselves up for failure. Such systems are brittle. We may feel in control, but it’s an illusion. The real world is not constrained by our imagined boundaries. There are surprises just around the corner.

We can’t control the surprises. But we can be ready for them.


Multi-Repository Development

Google Testing Blog - Fri, 05/15/2015 - 23:00
Author: Patrik Höglund

As we all know, software development is a complicated activity where we develop features and applications to provide value to our users. Furthermore, any nontrivial modern software is composed out of other software. For instance, the Chrome web browser pulls roughly a hundred libraries into its third_party folder when you build the browser. The most significant of these libraries is Blink, the rendering engine, but there’s also ffmpeg for image processing, skia for low-level 2D graphics, and WebRTC for real-time communication (to name a few).

Figure 1. Holy dependencies, Batman!
There are many reasons to use software libraries. Why write your own phone number parser when you can use libphonenumber, which is battle-tested by real use in Android and Chrome and available under a permissive license? Using such software frees you up to focus on the core of your software so you can deliver a unique experience to your users. On the other hand, you need to keep your application up to date with changes in the library (you want that latest bug fix, right?), and you also run a risk of such a change breaking your application. This article will examine that integration problem and how you can reduce the risks associated with it.
Updating Dependencies is Hard

The simplest solution is to check in a copy of the library, build with it, and avoid touching it as much as possible. This solution, however, can be problematic because you miss out on bug fixes and new features in the library. What if you need a new feature or bug fix that just made it in? You have a few options:
  • Update the library to its latest release. If it’s been a long time since you did this, it can be quite risky and you may have to spend significant testing resources to ensure all the accumulated changes don’t break your application. You may have to catch up to interface changes in the library as well. 
  • Cherry-pick the feature/bug fix you want into your copy of the library. This is even riskier because your cherry-picked patches may depend on other changes in the library in subtle ways. Also, you still are not up to date with the latest version. 
  • Find some way to make do without the feature or bug fix.
None of the above options are very good. Using this ad-hoc updating model can work if there’s a low volume of changes in the library and our requirements on the library don’t change very often. Even if that is the case, what will you do if a critical zero-day exploit is discovered in your socket library?

One way to mitigate the update risk is to integrate more often with your dependencies. As an extreme example, let’s look at Chrome.

In Chrome development, there’s a massive amount of change going into its dependencies. The Blink rendering engine lives in a separate code repository from the browser. Blink sees hundreds of code changes per day, and Chrome must integrate with Blink often since it’s an important part of the browser. Another example is the WebRTC implementation, where a large part of Chrome’s implementation resides in the webrtc.org repository. This article will focus on the latter because it’s the team I happen to work on.
How “Rolling” Works

The open-sourced WebRTC codebase is used by Chrome but also by a number of other companies working on WebRTC. Chrome uses a toolchain called depot_tools to manage dependencies, and there’s a checked-in text file called DEPS where dependencies are managed. It looks roughly like this:
{
  # ...
  'src/third_party/webrtc':
    'https://chromium.googlesource.com/' +
    'external/webrtc/trunk/webrtc.git' +
    '@' + '5727038f572c517204e1642b8bc69b25381c4e9f',
}

The above means we should pull WebRTC from the specified git repository at the 572703... hash, similar to other dependency-provisioning frameworks. To build Chrome with a new version, we change the hash and check in a new version of the DEPS file. If the library’s API has changed, we must update Chrome to use the new API in the same patch. This process is known as rolling WebRTC to a new version.

Now the problem is that we have changed the code going into Chrome. Maybe getUserMedia has started crashing on Android, or maybe the browser no longer boots on Windows. We don’t know until we have built and run all the tests. Therefore a roll patch is subject to the same presubmit checks as any Chrome patch (i.e. many tests, on all platforms we ship on). However, roll patches can be considerably more painful and risky than other patches.


Figure 2. Life of a Roll Patch.
On the WebRTC team we found ourselves in an uncomfortable position a couple years back. Developers would make changes to the webrtc.org code and there was a fair amount of churn in the interface, which meant we would have to update Chrome to adapt to those changes. Also we frequently broke tests and WebRTC functionality in Chrome because semantic changes had unexpected consequences in Chrome. Since rolls were so risky and painful to make, they started to happen less often, which made things even worse. There could be two weeks between rolls, which meant Chrome was hit by a large number of changes in one patch.
Bots That Can See the Future: “FYI Bots”

We found a way to mitigate this which we called FYI (for your information) bots. A bot is Chrome lingo for a continuous build machine which builds Chrome and runs tests.

All the existing Chrome bots at that point would build Chrome as specified in the DEPS file, which meant they would build the WebRTC version we had rolled to up to that point. FYI bots replace that pinned version with WebRTC HEAD, but otherwise build and run Chrome-level tests as usual. Therefore:

  • If all the FYI bots were green, we knew a roll most likely would go smoothly. 
  • If the bots didn’t compile, we knew we would have to adapt Chrome to an interface change in the next roll patch. 
  • If the bots were red, we knew we either had a bug in WebRTC or that Chrome would have to be adapted to some semantic change in WebRTC.
The FYI “waterfall” (a set of bots that builds and runs tests) is a straight copy of the main waterfall, which is expensive in resources. We could have cheated and just set up FYI bots for one platform (say, Linux), but the most expensive regressions are platform-specific, so we reckoned the extra machines and maintenance were worth it.
Making Gradual Interface Changes

This solution helped but wasn’t quite satisfactory. We initially had the policy that it was fine to break the FYI bots since we could not update Chrome to use a new interface until the new interface had actually been rolled into Chrome. This, however, often caused the FYI bots to be compile-broken for days. We quickly started to suffer from red blindness [1] and had no idea if we would break tests on the roll, especially if an interface change was made early in the roll cycle.

The solution was to move to a more careful update policy for the WebRTC API. For the more technically inclined, “careful” here means following the “API prime directive” [2]. Consider this example:
class WebRtcAmplifier {
...
int SetOutputVolume(float volume);
}

Normally we would just change the method’s signature when we needed to:
class WebRtcAmplifier {
...
int SetOutputVolume(float volume, bool allow_eleven);  // see footnote 1
}

… but this would compile-break Chrome until it could be updated. So we started doing it like this instead:
class WebRtcAmplifier {
...
int SetOutputVolume(float volume);
int SetOutputVolume2(float volume, bool allow_eleven);
}

Then we could:
  1. Roll into Chrome 
  2. Make Chrome use SetOutputVolume2 
  3. Update SetOutputVolume’s signature 
  4. Roll again and make Chrome use SetOutputVolume 
  5. Delete SetOutputVolume2
This approach requires several steps but we end up with the right interface and at no point do we break Chrome.
Results

When we implemented the above, we could fix problems as they came up rather than in big batches on each roll. We could institute the policy that the FYI bots should always be green, and that changes breaking them should be immediately rolled back. This made a huge difference. The team could work more smoothly and roll more often. This reduced our risk quite a bit, particularly when Chrome was about to cut a new version branch. Instead of doing panicked and risky rolls around a release, we could work out issues in good time and stay in control.

Another benefit of FYI bots is more granular performance tests. Before the FYI bots, it would frequently happen that a bunch of metrics regressed. However, it’s not fun to find which of the 100 patches in the roll caused the regression! With the FYI bots, we can see precisely which WebRTC revision caused the problem.
Future Work: Optimistic Auto-rolling

The final step on this ladder (short of actually merging the repositories) is auto-rolling. The Blink team implemented this with their ARB (AutoRollBot). The bot wakes up periodically and tries to do a roll patch. If it fails on the trybots, it waits and tries again later (perhaps the trybots failed because of a flake or other temporary error, or perhaps the error was real but has been fixed).

To pull auto-rolling off, you are going to need very good tests. That goes for any roll patch (or any patch, really), but if you’re edging closer to a release and an unstoppable flood of code changes keep breaking you, you’re not in a good place.

References

[1] Martin Fowler (May 2006) “Continuous Integration”
[2] Dani Megert, Remy Chi Jian Suen, et al. (Oct 2014) “Evolving Java-based APIs”
Footnotes
  1. We actually did have a hilarious bug in WebRTC where it was possible to set the volume to 1.1, but only 0.0-1.0 was supposed to be allowed. No, really. Thus, our WebRTC implementation must be louder than the others since everybody knows 1.1 must be louder than 1.0.


A tester’s life

Agile Testing with Lisa Crispin - Fri, 05/15/2015 - 05:45

I work every day as a tester on the Pivotal Tracker team. Some people think that because I’ve written books and speak at conferences I must be a full time consultant, but my passion lies in being a member of a great team and doing hands-on testing.


Brainstorming with JoEllen, at our old office

It’s easy for me to go around telling you all what’s the best way to build quality into a software product, but practicing what I preach can be a challenge. For example, I’m a very shy person with self-esteem issues. So though I’m always telling you how great pairing is, I often find it hard to leave my bubble and pair with amazing people like my teammate JoEllen Carter.

Here’s a slice of life from a particularly exciting week. We moved to a fancy new office half a block down the street, along with other teams from Pivotal Labs. Right now our Tracker team is rattling around in our new digs, but we have several new hires, interns and a Colorado School of Mines project team joining us soon.

Enough said

Until all those new folks join us, there are extra monitors lying around. One of my teammates, knowing how much I love major monitor real estate, hooked a third monitor up to my usual workstation. Of course, another teammate caught me using my laptop for a standup meeting with some remote team members, and posted it on our Slack channel. Yeah, I look pretty silly! And there’s a downside: it’d be nice to move locations every day as our dev pairs do, but I can’t tear myself away from those three monitors. Well, it won’t last forever.

On the right are the scenarios I wrote, on the left are the designer’s ideas.

Our new space has acres of glorious whiteboards. We had few whiteboards at the old office, but whenever any of us got in front of a whiteboard and started drawing while talking, magic happened. Today, a few of us started discussing a poor user experience in our customer signup process. After a few minutes of waving hands and explaining, I walked over to the giant wall o’ whiteboards and wrote out three scenarios. The others walked over and we had a good conversation.

Later on, the designer and a couple of developers went back to the whiteboard to talk more about it. The designer sketched out his ideas. Writing and drawing on the whiteboard helped us think things through and share the same understanding about the problem and the potential solution.

New Tracker office

Other stuff that went on this week? I usually work from home two days a week, because I live 35 miles and much gridlock away from work. The office move translated into problems connecting to the office from home. The team is changing up how we do builds and deploys, and that's getting in the way of delivering stories for final acceptance testing. We testers pitch in on customer support, and that's been a bit busy this week. Like everyone I know, I don't have enough time to do all the things I want/need to be doing. But we sure have a pretty new office!

Oh, and I did NOT pair with JoEllen at all. This is terrible. OK, she was gone the first two days of the week. We have a three day Hackathon next week. I had better take advantage of the opportunity.

 

 



Frustrated? It is probably your fault.

Thought Nursery - Jeffrey Fredrick - Tue, 05/12/2015 - 23:50

That’s the title of my talk, which has been accepted to the program of the upcoming Devopsdays Amsterdam, June 24th to 26th. The topics I’ll be discussing will be familiar to attendees of the London Action Science meetup — and to the people who have been diligently reading my session notes — namely that if you want to create change, you need to start by changing your own behaviour.

For the past few CITCONs I have been leading open space sessions with a very similar theme: “Can’t create change? It is probably your fault”. Those sessions have been great fun, because we get to talk through situations in real time together. My challenge now is how to generate the same kind of ah-ha moments for the audience without building it together in the room.

I’m not sure how I’ll do that yet, but I’m excited to have been accepted and to have the opportunity to give it a go. Hope to see you there!


BDD from Scratch with Serenity and JBehave

Testing TV - Tue, 05/12/2015 - 17:48
This tutorial explains how to start a Behavior-Driven Development (BDD) approach from scratch with JBehave framework and Serenity reporting library. JBehave is an open source framework for Behavior-Driven Development (BDD) in Java. BDD is an evolution of test-driven development (TDD) and acceptance-test driven design, and is intended to make these practices more accessible and intuitive […]

Contract Tester Position At NewVoiceMedia

The Social Tester - Mon, 05/11/2015 - 12:20

I am looking for someone to come and join the NewVoiceMedia development team here in the UK as a tester. The role is a 3-month contract, located in Basingstoke, UK. I’m looking for someone who has a critical mind, great communication skills and an ability to hit […]

