

Audio Testing - Automatic Gain Control

Google Testing Blog - 14 hours 20 min ago
By: Patrik Höglund

What is Automatic Gain Control?

It’s time to talk about advanced media quality tests again! As experienced Google Testing Blog readers know, when I write an article it’s usually about WebRTC and the unusual testing solutions we build to test it. This article is no exception. Today we’re going to talk about Automatic Gain Control, or AGC, a feature that’s on by default for WebRTC applications. It uses various means to adjust the microphone signal so your voice comes through loud and clear on the other side of the peer connection. For instance, it can attempt to adjust your microphone gain or amplify the signal digitally.

Figure 1. How Auto Gain Control works [code here].
This is an example of automatic control engineering (another example would be the classic PID controller) and happens in real time. Therefore, if you move closer to the mic while speaking, the AGC will notice the output stream is too loud, and reduce mic volume and/or digital gain. When you move further away, it tries to adapt up again. The fancy voice activity detector is there so we only amplify speech, and not, say, the microwave oven your spouse just started in the other room.
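To make the control loop concrete, here is a minimal sketch of the idea in C++. This is not WebRTC’s actual AGC; the class, target level, step size and constants are illustrative assumptions. It only shows the core behavior described above: measure the level, and when voice is active, nudge the gain toward a target.

#include <algorithm>
#include <cmath>

// Toy AGC: adapts a digital gain toward a target speech level.
// All constants are assumptions for illustration, not WebRTC's values.
class ToyAgc {
 public:
  // level_dbfs: measured power of the last audio frame, in dBFS (<= 0).
  // voice_active: output of the voice activity detector.
  // Returns the linear gain factor to apply to the next frame.
  float Update(float level_dbfs, bool voice_active) {
    if (voice_active) {  // Only adapt on speech, not background noise.
      float error_db = kTargetLevelDbfs - level_dbfs;
      gain_db_ += (error_db > 0) ? kStepDb : -kStepDb;  // Slew-limited step.
      gain_db_ = std::min(std::max(gain_db_, 0.0f), kMaxGainDb);
    }
    return std::pow(10.0f, gain_db_ / 20.0f);  // dB -> linear factor.
  }

 private:
  static constexpr float kTargetLevelDbfs = -18.0f;  // Assumed target level.
  static constexpr float kMaxGainDb = 12.0f;  // Matches the digital AGC cap cited later.
  static constexpr float kStepDb = 0.5f;      // Adaptation speed, assumed.
  float gain_db_ = 0.0f;
};

A real AGC also drives the hardware mic volume and smooths its decisions, but the measure-compare-adjust cycle is the same.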
Testing the AGC

Now, how do we make sure the AGC works? The first thing is obviously to write unit tests and integration tests. You didn’t think we’d jump straight to building that end-to-end test, did you? Once we have the lower-level tests in place, we can start looking at a bigger test. While developing the WebRTC implementation in Chrome, we had several bugs where the AGC code was working by itself but was misconfigured in Chrome. In one case, it was simply turned off for all users. In another, it was only turned off in Hangouts.

Only an end-to-end test can catch these integration issues, and we already had stable, low-maintenance audio quality tests with the ability to record Chrome’s output sound for analysis. I encourage you to read that article, but the bottom line is that those tests can run a WebRTC call in two tabs and record the audio output to a file, then run the PESQ algorithm on input and output to see how similar they are.

That’s a good framework to have, but I needed to make two changes:

  • Add file support to Chrome’s fake audio input device, so we can play a known file. The original audio test avoided this by using WebAudio, but AGC doesn’t run in the WebAudio path, just the microphone capture path, so that won’t work.
  • Instead of running PESQ, run an analysis that compares the gain between input and output.
Adding Fake File Support

Controlling the input and output is always a big part of the work in media testing. It’s unworkable to tape microphones to loudspeakers or point cameras at screens to capture the media, so the easiest solution is usually to add a debug flag, and that’s exactly what I did here. It was a lot of work, but I won’t go into much detail since Chrome’s audio pipeline is complex. The core is this:

int FileSource::OnMoreData(AudioBus* audio_bus, uint32 total_bytes_delay) {
  // Note: LoadWavFile() and file_audio_converter_ below reconstruct lines
  // elided from this excerpt; see the Chromium source for the exact code.
  // Load the file if we haven't already. This load needs to happen on the
  // audio thread, otherwise we'll run on the UI thread on Mac for instance.
  // This will massively delay the first OnMoreData, but we'll catch up.
  if (!wav_audio_handler_)
    LoadWavFile(file_path_);
  if (load_failed_)
    return 0;

  // Stop playing if we've played out the whole file.
  if (wav_audio_handler_->AtEnd(wav_file_read_pos_))
    return 0;

  // This pulls data from ProvideInput.
  file_audio_converter_->Convert(audio_bus);
  return audio_bus->frames();
}

This code runs every 10 ms and reads a small chunk from the file, converts it to Chrome’s preferred audio format and sends it on through the audio pipeline. After implementing this, I could simply run:

chrome --use-fake-device-for-media-stream \
    --use-file-for-fake-audio-capture=<path to a wav file>
and whenever I hit a webpage that used WebRTC, the above file would play instead of my microphone input. Sweet!
The Analysis Stage

Next, I had to figure out the analysis stage. It turned out the Chrome code already had something called an AudioPowerMonitor: you feed audio data into it and it gives you the average audio power of that data. This is a measure of how “loud” the audio is. Since the whole point of the AGC is getting to the right audio power level, we’re looking to compute

Adiff = Aout - Ain

Or, really: how much louder or weaker is the output compared to the input audio? Then we can construct different scenarios: Adiff should be 0 dB if the AGC is turned off, and it should be > 0 dB if the AGC is on and we feed in a low-power audio file. Computing the average energy of an audio file was straightforward to implement:

  // ...
  size_t bytes_written;
  wav_audio_handler->CopyTo(audio_bus.get(), 0, &bytes_written);
  CHECK_EQ(bytes_written, wav_audio_handler->data().size())
      << "Expected to write entire file into bus.";

  // Set the filter coefficient to the whole file's duration; this will make
  // the power monitor take the entire file into account.
  media::AudioPowerMonitor power_monitor(wav_audio_handler->sample_rate(),
                                         file_duration);  // The file's length,
                                                          // computed earlier (elided).
  power_monitor.Scan(*audio_bus, audio_bus->frames());
  // ...
  return power_monitor.ReadCurrentPowerAndClip().first;
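For reference, here is a dependency-free sketch of essentially the same measurement, in case you want to reproduce it outside Chrome’s source tree. This is my own illustration, not Chromium code; it assumes mono 16-bit PCM samples and reports average power in dB relative to full scale:

#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Average power of 16-bit PCM samples in dBFS (0 dBFS = full scale).
float AveragePowerDbfs(const std::vector<int16_t>& samples) {
  if (samples.empty())
    return -std::numeric_limits<float>::infinity();
  double sum_squares = 0.0;
  for (int16_t s : samples) {
    double normalized = s / 32768.0;  // Scale to [-1, 1).
    sum_squares += normalized * normalized;
  }
  double mean_square = sum_squares / samples.size();
  // 10 * log10 of the mean-square power; epsilon avoids log(0) on silence.
  return static_cast<float>(10.0 * std::log10(mean_square + 1e-12));
}

Run it over the reference file and the recording, subtract, and you have Adiff.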

I wrote a new test, and hooked up the above logic instead of PESQ. I could compute Ain by running the above algorithm on the reference file (which I fed in using the flag I implemented above) and Aout on the recording of the output audio. At this point I pretty much thought I was done. I ran a WebRTC call with the AGC turned off, expecting to get zero… and got a huge number. Turns out I wasn’t done.
What Went Wrong?

I needed more debugging information to figure out what went wrong. Since the AGC was off, I expected the power curves for output and input to be identical. All I had was the average audio power over the entire file, so I started computing the audio power for each 10-millisecond segment instead, which let me plot the detected power over the duration of the test and see where the curves diverged. I started by plotting Adiff:

Figure 2. Plot of Adiff.
The difference is quite small in the beginning, but grows in amplitude over time. Interesting. I then plotted Aout and Ain next to each other:

Figure 3. Plot of Aout and Ain.
A-ha! The curves drift apart over time; the above shows about 10 seconds of time, and the drift is maybe 80 ms at the end. The more they drift apart, the bigger the diff becomes. Exasperated, I asked our audio engineers about the above. Had my fancy test found its first bug? No, as it turns out - it was by design.
Clock Drift and Packet Loss

Let me explain. As a part of WebRTC audio processing, we run a complex module called NetEq on the received audio stream. When sending audio over the Internet, there will inevitably be packet loss and clock drift. Some packet loss always happens on the Internet, depending on the network path between sender and receiver. Clock drift happens because the sample clocks on the sending and receiving sound cards are not perfectly synced.

In this particular case, the problem was not packet loss since we have ideal network conditions (one machine, packets go over the machine’s loopback interface = zero packet loss). But how can we have clock drift? Well, recall the fake device I wrote earlier that reads a file? It never touches the sound card like when the sound comes from the mic, so it runs on the system clock. That clock will drift against the machine’s sound card clock, even when we are on the same machine.

NetEq uses clever algorithms to conceal clock drift and packet loss. Most commonly it applies time compression or stretching on the audio it plays out, which means it makes the audio a little shorter or longer when needed to compensate for the drift. We humans mostly don’t even notice that, whereas a drift left uncompensated would result in a depleted or flooded receiver buffer – very noticeable. Anyway, I digress. This drift of the recording vs. the reference file was natural and I would just have to deal with it.
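To see why uncompensated drift would be so noticeable, here is a toy back-of-the-envelope simulation. It is purely illustrative (the 0.1% drift figure is an assumption, not a measurement from the test): when the sender’s clock runs slightly fast relative to the receiver’s, the excess buffered audio grows linearly with time.

#include <cstdio>

int main() {
  const double kSampleRate = 48000.0;  // Nominal rate in Hz, assumed.
  const double kDrift = 0.001;         // Sender clock 0.1% fast, assumed.
  double buffered_samples = 0.0;
  for (int second = 1; second <= 10; ++second) {
    double produced = kSampleRate * (1.0 + kDrift);  // Samples arriving.
    double consumed = kSampleRate;                   // Samples played out.
    buffered_samples += produced - consumed;
    std::printf("after %2d s: %+5.0f samples (%.1f ms) of excess audio\n",
                second, buffered_samples,
                1000.0 * buffered_samples / kSampleRate);
  }
  return 0;
}

After just 10 seconds the receiver is sitting on 10 ms of extra audio, and it never stops growing; NetEq’s time compression is what quietly drains that excess away.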
Silence Splitting to the Rescue!

I could probably have solved this with math and postprocessing of the results (least squares, maybe?), but I had another idea. The reference file happened to consist of five segments with small pauses between them. What if I made these pauses longer, split the files on the pauses and trimmed away all the silence? This would effectively align the start of each segment with its corresponding segment in the reference file.

Figure 4. Before silence splitting.
Figure 5. After silence splitting.
We would still have NetEq drift within each segment, but as you can see its effects no longer stack up towards the end, so if the segments are short enough we should be able to mitigate the problem.
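The actual test shells out to sox to do the splitting, but the idea is simple enough to sketch. The following is an illustrative C++ version (the threshold and the minimum silence length are assumptions, not the test’s real parameters): scan the samples and cut wherever the signal stays quiet for long enough.

#include <cstdint>
#include <cstdlib>
#include <vector>

// Splits PCM samples into segments separated by runs of near-silence.
// Thresholds are illustrative assumptions.
std::vector<std::vector<int16_t>> SplitOnSilence(
    const std::vector<int16_t>& samples, int sample_rate) {
  const int kSilenceThreshold = 300;              // Roughly -40 dBFS.
  const size_t kMinSilenceRun = sample_rate / 4;  // 250 ms of quiet.
  std::vector<std::vector<int16_t>> segments;
  std::vector<int16_t> current;
  size_t quiet_run = 0;
  for (int16_t s : samples) {
    if (std::abs(static_cast<int>(s)) < kSilenceThreshold) {
      if (++quiet_run >= kMinSilenceRun) {
        if (!current.empty()) {  // Long quiet run: close the segment.
          segments.push_back(current);
          current.clear();
        }
        continue;  // Trim the silence itself away.
      }
    } else {
      quiet_run = 0;
    }
    current.push_back(s);
  }
  if (!current.empty())
    segments.push_back(current);
  return segments;
}

Note that each segment keeps up to 250 ms of trailing quiet before the cut triggers; for gain analysis that slack is harmless.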
Result

Here is the final test implementation:

  // Note: helpers like GetReferenceFilesDir() and SetupAndRecordAudioCall()
  // below reconstruct lines elided from this excerpt; see the Chromium
  // source for the exact names.
  base::FilePath reference_file =
      test::GetReferenceFilesDir().Append(kReferenceFile);
  base::FilePath recording = CreateTemporaryWaveFile();

  ASSERT_NO_FATAL_FAILURE(SetupAndRecordAudioCall(
      reference_file, recording, constraints,
      base::TimeDelta::FromSeconds(30)));

  base::ScopedTempDir split_ref_files;
  ASSERT_TRUE(split_ref_files.CreateUniqueTempDir());
  ASSERT_NO_FATAL_FAILURE(
      SplitFileOnSilenceIntoDir(reference_file, split_ref_files.path()));
  std::vector<base::FilePath> ref_segments =
      ListWavFilesInDir(split_ref_files.path());

  base::ScopedTempDir split_actual_files;
  ASSERT_TRUE(split_actual_files.CreateUniqueTempDir());
  ASSERT_NO_FATAL_FAILURE(
      SplitFileOnSilenceIntoDir(recording, split_actual_files.path()));

  // Keep the recording and split files if the analysis fails.
  base::FilePath actual_files_dir = split_actual_files.Take();
  std::vector<base::FilePath> actual_segments =
      ListWavFilesInDir(actual_files_dir);

  AnalyzeSegmentsAndPrintResult(
      ref_segments, actual_segments, reference_file, perf_modifier);

  DeleteFileUnlessTestFailed(recording, false);
  DeleteFileUnlessTestFailed(actual_files_dir, true);

Where AnalyzeSegmentsAndPrintResult looks like this:

void AnalyzeSegmentsAndPrintResult(
    const std::vector<base::FilePath>& ref_segments,
    const std::vector<base::FilePath>& actual_segments,
    const base::FilePath& reference_file,
    const std::string& perf_modifier) {
  ASSERT_GT(ref_segments.size(), 0u)
      << "Failed to split reference file on silence; sox is likely broken.";
  ASSERT_EQ(ref_segments.size(), actual_segments.size())
      << "The recording did not result in the same number of audio segments "
      << "after splitting on silence; WebRTC must have deformed the audio "
      << "too much.";

  for (size_t i = 0; i < ref_segments.size(); i++) {
    float difference_in_decibel = AnalyzeOneSegment(ref_segments[i],
                                                    actual_segments[i]);
    std::string trace_name = MakeTraceName(reference_file, i);
    perf_test::PrintResult("agc_energy_diff", perf_modifier, trace_name,
                           difference_in_decibel, "dB", false);
  }
}

The results look like this:

Figure 6. Average Adiff values for each segment on the y axis, Chromium revisions on the x axis.
We can clearly see the AGC applies about 6 dB of gain to the (relatively low-energy) audio file we feed in. The maximum amount of gain the digital AGC can apply is 12 dB, and 7 dB is the default, so in this case the AGC is pretty happy with the level of the input audio. If we run with the AGC turned off, we get the expected 0 dB of gain. The diff varies a bit per segment, since the segments are different in audio power.

Using this test, we can detect if the AGC accidentally gets turned off or malfunctions on Windows, Mac or Linux. If that happens, the with_agc graph will drop from ~6 dB to 0, and we’ll know something is up. Same thing if the amount of digital gain changes.

A more advanced version of this test would also look at the mic level the AGC sets. The test currently ignores this mic level, but it could be taken into account by artificially amplifying the reference file when it is played through the fake device. We could also try throwing curveballs at the AGC, like abruptly raising the volume mid-test (as if the user leaned closer to the mic), and look at the gain for the segments to ensure it adapted correctly.
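The amplification part of that idea is straightforward. Here is a hedged sketch (my own illustration, not part of the actual test) of applying a fixed gain in dB to 16-bit samples before they are fed through the fake device; the clamp guards against wrapping past the 16-bit range:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Scales samples by gain_db (e.g. +6.0f roughly doubles the amplitude),
// saturating instead of overflowing.
void ApplyGainDb(std::vector<int16_t>* samples, float gain_db) {
  const float factor = std::pow(10.0f, gain_db / 20.0f);
  for (int16_t& s : *samples) {
    float amplified = s * factor;
    amplified = std::min(32767.0f, std::max(-32768.0f, amplified));
    s = static_cast<int16_t>(amplified);
  }
}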


Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

I’ve been toying around with projects like Jekyll and Hexo, and even some hand-rolled software that generates HTML files from data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take a few examples and build my case.

Example 1: Blog

Of course, simpler examples like blogs could literally be all static. If you need comments, you could go with a system like Disqus. That is quite literally one of the only parts of such a system that is dynamic.

RSS feed? Generated from the posts. The posts themselves? They could be generated automatically from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot: products. OK, they may change, but do you need your site updated this very second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be “approved” anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it’s done.

There are three things I see that would need to be dynamic in this scenario.

Search, checkout and reviews. Search, because as your product catalog scales up, so does your data; doing the search client-side won’t scale at any level. Checkout, because we are now handling an actual order and that needs a server component. Reviews, because we’ll need to approve and publish them.

In this scenario, only the search is an actual “read” component that is now server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, it can still land on a static page.

All the other write components? Queued server side to be processed by the business itself with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, and whatnot) will need a management UI that is 100% dynamic (read/write).


So… do we need a dynamic front-end with the latest server framework? On the public-facing side too, or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.


You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

Have you read about WebComponents? It sounds like something we have all tried to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates


This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Hence, this one is safe.

Custom Elements


This specification is for review and not for implementation!

Alright, no. Let's not touch this yet.

Shadow DOM


This specification is for review and not for implementation!

Wow. Okay, so this one is out the window too.

HTML Imports


This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome already has all the specs implemented.

Firefox has implemented them but put them behind a flag (in about:config, search for the dom.webcomponents.enabled property).

In Internet Explorer, they are all Under Consideration.

What that tells us

Google is pushing for a standard. Hard. They built the spec, and they are pushing it very hard too, since all of this is available in Chrome STABLE right now. No other vendor has contributed to the spec itself. Polymer is also a project built around WebComponents, and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're just getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know what a painful path that is.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past. That's not something that will stop us.

Second, the current specification is being implemented in Chrome as if it were recommended by the W3C (it is not). That may lead to changes in the specification, which may render your current implementation completely inoperable.

Third, there's no guarantee that the current spec will even be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also avoid Polymer, as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be implemented at all.


Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined in your web.config, an auto-generated one is used. And if the key used to encrypt isn't the same one used to decrypt?

Boom goes the dynamite.


Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group... All of this is done in my free time, and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!


Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post, which provides a script to help you debug and the exact explanation of why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it does not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan = (Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		Write-Host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}
	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan = (Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		Write-Host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

So I had to make some changes to the App Service Plan for one of my clients. The first thing I looked for was a way to do it in the portal. A few clicks and I'm done!

But before I get into why I needed to move one of them, I'll need to tell you why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale, and especially the instance sizes we deploy on. So we came up with a list of the different hosting plans we needed, the list of apps that had to be moved, and the hosting plan each was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.


PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what I actually used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made the command more generic, but since I was only working with apps in EastUS, well... I didn't need more.

This script should be considered "works on my machine"; I haven't tested all the edge cases.


Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'
$filename = '.\apps.csv'  # Path to the CSV described above (added here; it was elided in the original).

$allAppsToMigrate = Import-Csv $filename
foreach ($app in $allAppsToMigrate)
{
    if ($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest }
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!


Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While they were working on fixing the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is acceptable only because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

set GIT_SSL_NO_VERIFY=true
This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will use that one forever? Then install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.



The Yoda Condition

Decaying Code - Maxime Rouiller - 19 hours 7 min ago

So this will be a short post. I would like to introduce a word in my vocabulary and yours too if it didn't already exist.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… — Awesome way of describing it.

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands of a comparison in a conditional".

Let's say I have this code:

string sky = "blue";
if (sky == "blue") {
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";
if ("blue" == sky) {
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you type "=" instead of "==" in your code, it will fail at compile time, since you can't assign a new value to a string literal. It can also avoid certain null reference errors.

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if, since they will have to learn how to read "Yoda" in your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. Let's be honest: you're not going to be the only person maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problem this kind of code solves isn't worth the readability you are losing.


Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - 19 hours 7 min ago
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for it in your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project...

If you answered yes to any of those questions... you are at great risk of having to code those things yet again.

Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.


Once you have a good collection that is properly separated into its own project and you feel ready to kick some monkey ass, the only way to go is to use NuGet to pack it together!

Check out the reference to make sure that you do things properly.

NuGet - Publishing

OK, you've got a steamy hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!

Now just make sure to increment your version number on each release by using the SemVer convention.

Reap the profit

OK, no... not really. You probably won't make money with this library anytime soon. At least not in real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or at another client.

The only thing you'll do is import your magic package, use it and boom. That task they planned a whole day for? Finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow you to do that, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own since you billed them for it.

In case of doubt, double check with a lawyer!


Test-Driven Development with JavaFX

Testing TV - Wed, 10/07/2015 - 16:56
This session presents existing testing tools and frameworks in their current stage of development. It compares the capabilities and the kinds of impact of existing projects. The presentation pays particular attention to questions such as "How can a cross-platform GUI test be created?" With many legacy (Java Swing–based) applications in need of migrating to the […]

Attention to all managers out there: show your appreciation towards the people who work for you!

Panamo QA - Georgia Motoc - Tue, 10/06/2015 - 20:20
Any way you put it, we all like to be appreciated for our work. We are proud of our accomplishments, aren’t we? It’s nice when the kind words come from a colleague, but it’s even better when a senior manager …

London Test Forum - Be there Tue 24th NOVEMBER!

Yet another bloody blog - Mark Crowther - Tue, 10/06/2015 - 14:52
Hey All,

I caught up with Stacey Howard over at the Reco Group the other day. As well as letting me know about the incredible roles and clients they have, she also told me about the awesome free event they're arranging. Have a look below and be sure to attend. Be sure to tell your friends and colleagues too! I will definitely be there to see Rob Lambert, Jonathon Wright and Declan O'Riordan.

Mark.

--------

From the Eventbrite website:

The London Test Forum aims to create a platform for test professionals to share ideas and thoughts on a rapidly growing and changing space. Please join us on Tuesday 24th November 2015 at the Leathermarket to discuss the future of testing. We have industry-recognised speakers such as Jonathon Wright talking about TestOps, Declan O'Riordan on how Agile and DevOps have exacerbated the problem for security test resources, and finally Rob Lambert on 10 behaviours an effective employee should show.

Agenda:
  • 5:30pm - Doors open: Drinks and Networking
  • 6:30pm - First speaker: Jonathon Wright: The Digital Evolution: TestOps Blueprint
  • 6:45pm - Second Speaker: Declan O'Riordan: Application Security in an Agile or DevOps environment
  • 7:00pm - Third Speaker: Robert Lambert: 10 Behaviours of Effective Employees
  • 7:15pm - Networking and Drinks
  ... read more on the site and get tickets   

Keynote at the GDS Away Day - Tue, 10/06/2015 - 09:00

A couple of weeks ago I had a last-minute invite via James Lewis to speak at an Away Day for some people from GDS. I only had a couple of days' notice and was going to give a talk about being a Tech Lead, but thought I would adapt some of my talks to the broader audience, which would include developers, web operations, delivery managers, tech leads and architects.

I ended up with a talk titled, “Technical Leadership Matters.”

Why Technical Leadership Matters from Patrick Kua

I have been thoroughly impressed by the work and innovation the GDS team have produced along the way, and my goal was for everyone to come away feeling they could demonstrate technical leadership without needing the title of “Tech Lead” or “Architect.”

Fish Island Labs
The GDS Away Day was held at Fish Island Labs, a new digital hub set up by the Barbican group and located on the River Lea, with a nice spacious set of rooms for conferences and co-working and a very functional bar downstairs.

I gave my talk as the closing keynote before joining everyone in the pub downstairs, and I got some great feedback about how relevant the message was and how many people came away inspired, which I was particularly happy with given that it was the first time I had given this talk and how little time I had to prepare for it.


The Illusion of Control

Agile Testing with Lisa Crispin - Tue, 10/06/2015 - 05:48

The Starship Enterprise was usually in control…

Recently I listened to one of Amitai Schlair’s excellent Agile in 3 Minutes podcasts (also available on iTunes), about Control. We had a brief Twitter conversation about it with Karina B., aka @GertieGamer. Amitai tweeted, “We can’t control what happens to us, but we control how we’d like to feel about it next time + what we do about it.” Karina tweeted, “the illusion of control is a fun magic trick that always leaves people wanting more”.

My dressage trainer, Elaine Marion, with Flynn

I enjoy the illusion of being in control. I think that’s one reason that one of my equestrian sports of choice is dressage. If I bombed around a cross-country jumping course on horseback, I’d have to let the horse make many of the decisions. The dressage arena feels so much more genteel. If I’m in perfect communion with my mount, performing the well-defined movements, I feel like I’m in charge. It’s a nice illusion! (In truth, I should be allowing the horse to perform correctly…)

The “What-Ifs”

I’ve been driving my miniature donkeys for many years. We’ve learned so much from my donkey trainer, Tom Mowery, and I trust my boys to take care of me. Still, if I’m driving my wagon with my donkey team down the road, and a huge RV motors towards us, I start getting what Tom calls the “what-ifs”. What if they run into the path of the RV? What if they spook and run into the ditch? I have an even worse case of the “what-ifs” with my newest donkey, a standard jenny, who is still a beginner at pulling a cart. She doesn’t steer reliably yet. My illusion of control is easily dispelled. What if she runs into that fence? This happens to software teams as well. What if we missed some giant bug? What if this isn’t the right feature?

We don’t need control – we need trust in our skills

Enjoying the view and forgetting the “What-ifs” with Marsela

Tom’s words of wisdom about my worry over losing control are: “Lisa, if anything goes wrong, you have the skills to deal with it.” This is true, and I keep practicing those skills so they’ll be ready when I need them. If my donkeys run towards either an RV or a ditch, I can remember that they are actually trained, and cue them to change course and do something safer.

It’s the same with software development. We constantly learn new skills so that we can deal with whatever new obstacles get in our way. We identify problems using retrospectives, and we try small experiments to chip away at those problems. As a team and personally, if we are confident that we have good skills and tools at our disposal, we don’t need an illusion of control. Whatever happens next, my team and I are in a position to do something about it.

In his podcast, Amitai suggests that if we can accept feeling less in control, we might make better decisions. I think if we focus on continually learning new skills and tools, our confidence in our ability to adapt to the current situation is much more important than feeling in control.




I have been laid off, now what? 10 ways to keep it together.

Panamo QA - Georgia Motoc - Mon, 10/05/2015 - 22:28
Ten ways to overcome the negative emotions of being laid off.

Follow the work – bad news for Test Managers?

The Social Tester - Mon, 10/05/2015 - 11:00

I get lots of enquiries from founders of start-ups who reach a certain growth point where they really need to start taking control of the quality of the work being produced. Their companies seem to reach a size and market growth where the focus on quality becomes a priority. This is usually about the time […]



TestStorming - A Collaborative Approach for Rapid Test Design

Thanks to everyone who attended my webinar today on TestStorming(TM)!

Here is the recorded presentation:
Here are the slides:

Here is the article on my website:

And, finally, here is the link to the METS spreadsheets I mentioned today:

Have a great weekend!


TDD and Its (at least) 5 Benefits

Sustainable Test-Driven Development - Thu, 10/01/2015 - 20:19
Many developers have concerns about adopting test-driven development, specifically regarding:

  • It's more work. I'm already over-burdened and now you're giving me a new job to do.
  • I'm not a tester. We have testers for testing, and they have more expertise than I do. It will take me a long time to learn how to write tests as well as they do.
  • If I write the code, and then test it, the test-pass