
You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

Have you read about WebComponents? It sounds like something we have all been trying to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This particular specification is not part of the "Web Components" section; it has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, no. Let's not touch this one yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay, so this one is out of the window too.

HTML Imports

Specification

This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and entirely produced by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome already has all of the specs implemented.

Firefox has implemented them but put them behind a flag (in about:config, search for the dom.webcomponents.enabled property).

In Internet Explorer, they are all Under Consideration.

What that tells us

Google is pushing for a standard. Hard. They built the spec, and they are pushing the implementation just as hard, since all of this is available in Chrome stable right now. No other vendor has contributed to the spec itself. Polymer is also a project that is built around WebComponents, and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know how painful that path is.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past. That's not something to stop us.

Second, the current specification is being implemented in Chrome as if it were recommended by the W3C (it is not). That may lead to changes in the specification which could render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also, avoid Polymer, as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the feature without learning a specification that may become obsolete very quickly or never be implemented at all.


Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If there is none defined in your web.config, a common one is used. And if the key used to encrypt isn't the same as the key used to decrypt?

Boom goes the dynamite.
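If you would rather prevent the problem than keep clearing cookies, you can pin the machine key so that every project sharing the cookie encrypts and decrypts with the same keys. A minimal sketch of the web.config entry; the key values are placeholders, generate your own:

<system.web>
  <!-- Placeholder values: generate real keys and keep them out of public source control -->
  <machineKey validationKey="YOUR-VALIDATION-KEY-HEX"
              decryptionKey="YOUR-DECRYPTION-KEY-HEX"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>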


Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!


Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plans, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post, which provides a script to help you debug and the exact explanation as to why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it does not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

# Switch into Azure Resource Manager mode (old-style Azure PowerShell)
Switch-AzureMode AzureResourceManager

# Grab every resource in the subscription
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		# Deployment slots carry their own webHostingPlan setting
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		# Regular sites report the plan they sit on
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

So I had to make some changes to the App Service Plans for one of my clients. The first thing I tried was doing it in the portal. A few clicks and I'm done!

But before I get into how to move one of them, I'll need to tell you why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance sizes we deployed on. So we came up with a list of the different hosting plans we needed, the apps that had to be moved, and the hosting plans they were currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code, or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic but since I was working with apps in EastUS only, well... I didn't need more.

This script should be considered "works on my machine"; I haven't tested all the edge cases.
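For illustration, here's the shape of CSV file the script expects; the app and plan names below are made up:

App,ServicePlanSource,ServicePlanDestination
contoso-api,Default1,StandardPlanEastUS
contoso-admin,Default2,StandardPlanEastUS
contoso-www,StandardPlanEastUS,StandardPlanEastUS

The third row shows an app that is already where it should be; the script below skips those.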

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

# Switch into Azure Resource Manager mode (old-style Azure PowerShell)
Switch-AzureMode AzureResourceManager

# Everything I'm moving lives in this resource group (EastUS only)
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    # Only touch apps that aren't already on their destination plan
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01

        # Repoint the web app's serverFarm property to the destination plan
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!


Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we were working on a project in a local Git Stash instance and the certificates changed. While IT was working on fixing the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This works because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
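If you'd rather not touch the environment at all, the same temporary override can be scoped to a single Git command instead (using the repository URL from the error above as an example):

git -c http.sslVerify=false clone https://myurl/myproject.git

Like the environment variable, this only affects that one invocation; nothing is persisted.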

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.


The Yoda Condition

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

So this will be a short post. I would like to introduce a word into my vocabulary, and yours too, if it isn't there already.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands compared in a conditional".

Let's say I have this code:

string sky = "blue";

if (sky == "blue")
{
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";

if ("blue" == sky)
{
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your code (typing "=" where you meant "=="), it will fail at compile time, since you can't assign a value to a literal string. It can also avoid certain null reference errors.
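Worth noting: with two strings in C#, the typo fails to compile either way, because the assignment expression is a string and not a bool; the compile-time protection really shows up with booleans. A quick sketch (the variable name is made up):

bool isReady = false;

// Typo: "=" instead of "==". This compiles in C# (at most with a warning),
// because the assignment expression evaluates to a bool; the branch always runs.
if (isReady = true)
{
    // oops
}

// The Yoda version of the same typo does not compile:
// "The left-hand side of an assignment must be a variable, property or indexer"
// if (true = isReady) { }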

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you are losing.


Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project...

If you answered yes to any of those questions... you are at great risk of having to code those again.

Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet

Once you have a good collection that is properly separated into a project and you seem ready to kick some monkey ass, the only way to go is to use NuGet to pack it together!

Check out the reference to make sure that you do things properly.

NuGet - Publishing

OK, you've got a steamy hot new NuGet package that you are ready to use? If your intention is to share it with the world, you can push it to the main repository.

If you are not quite ready for that, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!
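A minimal sketch of what that can look like in a NuGet.config; the share path here is made up:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Hypothetical internal share holding your .nupkg files -->
    <add key="InternalPackages" value="\\fileserver\NuGetPackages" />
  </packageSources>
</configuration>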

Now just make sure to increment your version number on each release using the SemVer convention: for example, 1.2.3 becomes 1.2.4 for a fix, 1.3.0 for a new backwards-compatible feature, and 2.0.0 for a breaking change.

Reap the profit

OK, no... not really. You probably won't be making money anytime soon with this library. At least not real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or at another client.

The only thing you'll do is import your magic package, use it and boom. This task that they planned would take a whole day? Got finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow it, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own since you billed for them.

In case of doubt, double check with a lawyer!


Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - Mon, 07/06/2015 - 08:36

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.

Processor

Recommendation:

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multi-threading, a simple 4-core processor will cover most needs.

Memory

Recommendation:

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. If you have extensions installed for Visual Studio, it will quickly rise to 1GB of usage per instance. And finally... Chrome. With multiple extensions and multiple pages running... you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.

Hard drive

Recommendation:

512 GB SSD drive

I can't recommend an SSD enough. Most tools that you use on a development machine will require a lot of I/O, especially random reads. When a compiler starts and retrieves all your source code to compile, it will need to read all those files. Same thing if you have tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400RPM HDD in a laptop to reduce power usage. However, a 5400RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second hard drive of the HDD type instead. Slower, but capacities are also higher. On most laptops you will need external storage for this drive, so make sure it is USB 3 compatible.

Graphic Card

Unless you do graphic rendering or are working with graphic tools that require a beast of a card... this is where you will put the least amount of money.

Make sure to get enough outputs for your number of monitors, and that they can provide the right resolution/refresh rate.

Monitors

My minimum requirement nowadays is 22 inches. 4K is nice but is not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying them for someone else, make sure they can be rotated. Some developers like to have a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to be taken out of the building, you can go desktop. You will save a bit on all the required accessories (docking station, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city as well as doing presentations left and right. Laptop it is for me.


DevOps vs. SCM

The Build Doctor - Mon, 07/06/2015 - 01:40
There’s a team of people in your company. They’re responsible for:

  • Storing built versions of your code in a repository
  • Ensuring that you can reproduce each one of those builds
  • Tracking...

Visit The Build Doctor for the full article.

Trying to be CEWT

Hiccupps - James Thomas - Sun, 07/05/2015 - 08:03

I attend, enjoy, hopefully contribute to, and get a lot from, the local tester meetups and Lean Coffee in Cambridge. But I'd had the thought kicking around for a long time that I'd like to try a peer workshop inspired by MEWT, DEWT, LEWT and the like. I finally asked a few others, including the local meetup organisers, and got mostly positive noises, so I decided to give it a go.

I wrote a short statement to frame the idea, based on LEWT's:
CEWT (Cambridge Exploratory Workshop on Testing) is an exploratory peer workshop. We take the view that discussions are more interesting than lectures. We enjoy diverse ideas, and limit some activities in order to work with more ideas.

and proposed a mission for an initial attempt to validate it locally on a small scale.

Other local testers helped to refine the details in the usual testing ways - you know: criticism, questions, thought experiments, challenges, comparisons, mockery and the rest - and a list of potential attendees was drawn up. In parallel, I solicited advice from the groups that had inspired me, asking what's worked well and what hasn't, particularly in the events and in the organisation of them.

This post aggregates and roughly sorts their responses, removing mentions of specific groups or people. I'd like to thank all of them for being so forthcoming and open with their experience and advice.

I wanted to pull out two specific comments, two that I tried to keep uppermost in my mind throughout:
  • As you will understand: there is no best practice.
  • The thought is this: at a peer workshop, I should consider everyone my peer. For the duration of the workshop, I will attempt to listen to – and question – anyone who I share the room with, regardless of whether they have more or less experience, or whether I generally consider their work good or poor, whether I am fascinated, bored, repelled, awestruck or confused. 

I started this process at the end of April and yesterday (July 4th) we had CEWT #1. There were a few rough edges, and I learnt a thing or two, and I already know some things I would change if and when we have another, but there'll be more on that later. For now, here's that aggregated advice for anyone else thinking of trying it ...

Starting

We started small: in a kitchen with only a few people.

I have no idea how many interested people you know, but it is smart to keep it either very small to start with, which you can organise by yourself. Or make it a bit bigger, but then you should have some help.

I’d thought about doing this for about 12 months before our first one, and it was only when I started to talk to the others about the idea that I found they had similar thoughts and things started to move.

Size

My experience is that you need about 10 people to have good discussions in LAWST style. 7-8 people could be okay, although I don't think you need facilitation with such a small group. You also have the risk that if 1 or 2 do not show up, your group becomes even smaller.

We have limited it to a maximum of around 25 people. As we are always looking to improve, this all might be subject to change in the near future.

I had assumed [the sense that in a peer conference everyone is granted the status of everyone else's peer] was a central guidance to peer conferences – even if, in practice, it was occasionally hard to see such respect in action. However, I’m no longer certain of this; when I’ve shared my position with other peer conference organisers, it has been (generally) either alien or less important. I think this gets hard with >8 people, and is pretty impossible with >15. A 25-person room will naturally form groups, gurus, acolytes and pariahs – so it’s ludicrous of me to expect larger peer conferences to work this way.

Personally, I think the max size for any peer group is rather under 20.

Attendees

We have a very simple approach to application and invitation - if someone asks if they can come, then they can. Done. I tell people that there's a cutoff, what the cutoff is, and that people who apply when numbers are under the cutoff can come, and people who are later can't come.

Currently, I ask prior participants to set the theme and the date, so they know before anyone else. This gives them precedence, but if they don't take the opportunity, they don't get to go.

Wrong people: who am I to judge? However, if someone applies out of the blue, I'll talk with them so that they can judge if they're the right person. Usually their judgement is sound.

If someone's interested enough to ask to come and to give up their time to be part of it, then they're in - whether they 'fit' the group, or not. We have had people who didn't fit, and sometimes they've been wonderful contributors, sometimes they've triggered good conversations and interesting realignments. No one has walked out yet. A few participants have complained about others, and I can deal with that as facilitator if something is said early enough. I sometimes find my own comfort challenged – but I don't think it's my role to exclude someone, and I'm sure that the group is muscular enough to chew someone up and spit them out if it absolutely has to.

We are thinking of adding the possibility to choose one speaker chosen by the participants.

All organisers can introduce one (sometimes two) others to the peer conference. We often try to invite somebody outside of the testing circle to add some other views to our conferences.

If you are inviting people, then invite people you think will have something interesting to say on the topic rather than people you know or feel you need to invite out of loyalty – remember it's firstly a learning opportunity, not a social gathering.

Even if you don’t know someone well but want to invite them, don’t be afraid to reach out and ask them – most people like to be invited to these things.

I find that the more diverse the group, the more it offers guarded respect to each individual: our two-people-with-less-than-two-years-experience thing helps with the diversity.

The Organising Team

We are organized as a small core group, with roles - which rotate per event - assigned to some of us to organize the peer conference.

A small team will help give the idea some momentum, generate more interesting ideas and share the effort of creating the event.

Play to people's strengths - we are all very different with unique skills and personalities, but we each bring something to the table.

If you have a team then agree roles (we change roles each time) to ensure things get done. Generally you will need:

  • 1 x Content Owner – responsible for describing the theme, reviewing and feedback on abstracts, ensuring all attendees have an abstract.
  • 1 x Facilitator – responsible for managing the flow of the discussions on the day (doesn’t need to speak)
  • 1-2 x Organisers – responsible for logistics (venue arrangements, ensuring costs are covered either by sponsorship or attendees, providing travel and hotel information, keeping in touch with attendees etc.).
We have introduced the formal role of 'content owner' in the conferences to keep us from going all over the place. He/she chooses the speakers. The conferences are centred around experience reports and discussions are facilitated by a facilitator.

Find some awesome people to work with, it's a lot of work for one person!
Logistics: Before

Choose relevant and open topics that encourage a wider range of views and discussions.

Find a good venue.

Food is important - quality grub adds to the vibe.

All participants are obliged to send in a proposal for a small presentation (organisers too).

Asking for abstracts (and receiving them) helps to focus people's minds ahead of the day.

Chase people for abstracts, review and feedback on the abstracts. In my opinion, if you don't have abstracts then some attendees will forget to prepare and attempt to wing it, resulting in less interesting talks and discussions. However, that does depend upon who the attendees are.

Don't underestimate the effort required to invite people or encourage people to attend (if you have an open attendance). You will have people who drop out in the lead up to the event so be prepared.

Plan ahead, we have started planning 3-4 months ahead to give people time to commit and provide abstracts. When you invite or accept people to attend, ensure they know the outline plan with milestones such as confirming attendance, when initial and final abstracts are due etc.

Keep regular contact with those who are attending to keep them informed of plans, reminders of upcoming milestones, hotel and travel arrangements etc.

Logistics: On the Day

If you can, find someone to do the distracting mid-workshop logistics (i.e. who’s eating what, taking calls from late people).

Trying to get through all of the talks works well - fast paced and high energy.

Not worrying about getting through all the talks works well too - slower and deeper.

Breaks: as long as possible without losing momentum and direction. Proper, multithreaded conversation happens in the breaks. The “talks” are a primer for the discussions, the discussions a primer for conversations – and connections and ideas grow from those conversations.

Set-up: everyone should be able to see everyone else’s face, all the time. Other than that, don’t be precious about room layout, drinks, stationery, power supplies, matching tables or any other fripperies. Indeed, the more informal, the better. Help participants to feel comfortable, not coddled, and certainly not privileged.

Visuals: I strongly discourage slides, and encourage flip charts. They’re more immediate, more interactive, and less goes wrong. I prefer flipcharts to whiteboards, as they’re more permanent and one can flip back.

Dot voting lean coffee style gets everyone involved.

Keep presentations nice and short; 15 minutes max.

Ordering: the room gets to decide what goes early (the facilitator gets a deciding vote) – so topics at the end usually get less time. This can make them more focussed, and the speaker will often be able to tune what they have so that it suits the attention of the room.

We don't have a content owner deciding what gets attention or priority, we don't have a scribe making public notes, we don't have a mission. We all agree at the outset to be facilitated, which helps - but we don't necessarily decide what 'facilitation' is.

The relatively-fast turnover of topics helps, a lot.

Facilitation

Ask the room to accept you as someone who will regulate the ebb and flow. Don’t direct (or dictate) the content.

Accept that, as facilitator, you’re not really at the workshop, and give the primary part of your attention to emotions of the people in the room, not to what is being said.

Monitoring people's energy and staying fluid with structure and content helps keep things moving.

When I'm facilitating, I try to do the job with as light a touch as possible - basically I keep a queue, keep my eye on time, and try to help the group stay within the discipline of conversing in a way that lets everyone talk, and everyone listen. Even that, however, requires my complete attention on the room - which means I don't make many notes for myself or contribute much to the conversation.

The facilitator is not a peer. The participants give the facilitator their attention, and their permission to stop and start them, in pursuit of a greater goal than their own individual airtime. The facilitator accepts their temporary status, and returns the favour by serving the group and putting his or her own needs aside.

Name cards can help your own flow.

Getting everyone’s attention focussed from chat to the group: there are a clutch of approaches. Most work; most use sound or visual cues. I pick up whatever (physical) sound effect I’ve not used recently. Singing bowls, thundersticks, jingle toys. It gets to a point where, when everyone’s concentrating, one has only to pick the thing up to make people switch focus. My favourite was the vuvuzela – a disgustingly loud football horn. I don’t remember blowing it at all (except to try it out).

For each new topic, I try to remember to announce the topic and speaker, ask how much time they want to talk, support them no more than they want, and to ask the room to thank them at the end.

As someone starts their topic, I split the audio recording and also write down the start time, the time the speaker’s asked me to give them, and the time we’ve all agreed to spend on the topic. I write those as absolutes, not relatives, because calculation takes your attention – (ie 10:03:15, 10:13, 10:33). My laptop clock is always in view.

I record audio, and this also keeps track of elapsed relative time (i.e. 0:17:30 since the topic started).

I keep track of the timing info and the current queue on the same topic card that I’ve pulled off the wall – the card that started with a topic title and ended up covered in sticky dots. Keeping track of the question stack/queue is easy – it’s a list, sometimes with indents and squiggles. If sub-topics are spawning more sub-topics, do ask the room if they want to go deep or wide.

Allow the clock to rule, allow the room to override the clock. Don’t worry about going short. The room will need to regularly be reminded of the time available as the stack builds up and time burns down.

Every few questions, I’ll tell the room who the next 2-4 people on the stack are. If we’re in open discussion, and I feel the room needs to move on, I’ll catch the eye of whoever is speaking, breathe in as they finish a point, and indicate the next question by pointing to someone and saying their name.

Don’t fear dropping a person from the queue – it’s your job. But don’t drop them slyly, either.

I bite my tongue (metaphorically, mostly) to stop (my) witty interjections; they’re not usually that great, and it’s an abuse of the role the room has allowed me to take. For the same reason, I don’t usually ask many questions – but I don’t absolutely exclude myself, either.

If, as time runs out on a topic, you give participants the chance to pull their questions or comments to let other questions be asked, they might just do it.

As a facilitator, the people who give me problems are those who assume their contribution is more important than the person who currently has the room's attention, the people with one thing to say and a big personal stake in having it heard, and people who stop listening after someone uses a word that is hot (or dull) for them.

I'm sometimes a problem if I get involved, and I'm lucky that people help me rein myself in if I get out of hand. But problems are few and often easy to deal with if one has a feel for the tolerance and firmness that suits the mood of the room (the whole group, not just the loud participants).

If everyone speaks at once, I need to decide when and how and whether to stop them – and if people only speak when their feel they have permission to speak, I’ve done it all wrong and need to shake up the room. Stay between these extremes, let people (including yourself) be human, aim for fine chat, and you’ll have done a job that anyone should be satisfied with.

I find that expression and body position will tell you whether someone has a new point or a follow-on (and if not, just ask), so I think that k-cards in something with <20 people are a constraining gadget.

I don’t tend to give much leeway to an extended back-and-forth between speaker and a single interlocutor.

Discourage bad behaviour more than the person who is behaving badly: Firmly and clearly block people who are being bullies, then swiftly forgive them and allow them a chance to redeem themselves in the eyes of their peers.

For general ideas, see: Paul Holland on facilitation.

Success or failure (pick your own definition) is mostly down to the group, not the facilitator – but you are, as Jerry Weinberg might say, responsible for your reactions to the group.
Image: https://flic.kr/p/4F24G7

NDC talk on SOLID in slices not layers video online

Jimmy Bogard - Thu, 07/02/2015 - 20:21

The talk I gave at NDC Oslo 2015, on SOLID architecture in slices not layers, is up:

https://vimeo.com/131633177

In it I talk about flipping the traditional layered style of architecture to one that focuses on vertical deliverable features.

Enjoy!



End-to-end Hypermedia: Building a React Client

Jimmy Bogard - Wed, 07/01/2015 - 18:06

In the last post, I walked through what is to me the most interesting part of REST – the client. It’s easy to build a server API, but no API is complete without someone actually using that API. This is where most REST examples fall down for me – they show all sorts of pretty pictures of hypermedia-rich JSON from the server, but no real examples of how to consume that API.

I walked through some jQuery code in the last post, but why stop with jQuery? That’s so 2010. Instead, I want to build around React. React is perfect for hypermedia because of its component-oriented nature. A resource’s representation can be broken down into its components, and React components then matched accordingly. But before we get into the client, I’ll need to modify my sample to consume React.

Installing React

As a shortcut, I’m just going to use ReactJS.Net to build React into my existing MVC app. I install the ReactJS.Net NuGet package, and add a script reference to my downloaded react.js library. Normally, I’d go through the whole Bower/npm path, but this seemed like the simplest path to integrate into my sample.

I’m going to create just a blank JSX file for all my React components for this page, and slim down my Index view to the basics:

<h2>Instructors</h2>
<div id="content"></div>
@section scripts{
    <script src="@Url.Content("~/Scripts/react-0.13.3.js")"></script>
    <script src="@Url.Content("~/Scripts/InstructorInfo.jsx")"></script>
    @{
        var href = Url.Action("Index", "Instructor", new {httproute = ""});
    }
    <script>
        React.render(
            React.createElement(InstructorsInfo, {href: '@href'}),
            document.getElementById("content")
        );
    </script>
}

All of the div placeholders are removed except one, for content. I pull in the React library and my custom React components. The ReactJS.Net package takes my JSX file and transpiles it into JavaScript (as well as building the needed files for in-browser debugging). Finally, I render my base React component, passing in the root URL for kicking off the initial request for instructors, and the DOM element in which to render the React component.

Once I’ve got the basic React library up and running, it’s time to figure out how we would like to componentize our page.

Slicing our Page

If we look at the page we want to create, we need to take this page and create React components from the parts we find. Here’s our page from before:

Looking at this, I see three individual tables populated with collection+json data. I’m thinking I create one overall component composed of three individual items. Inside the table, I can break things up into the table, rows, header, cells and links:

I might need a few more, but this is a good start. Next, we can start building our React components.

React Components

First up is our overall component that contains our three tables of collection+json data. Since I have an understanding of what’s getting returned on the server side, I’m going to make an assumption that I’m building out three tables, and I can navigate links to drill down to more. Additionally, this component will be responsible for making the initial AJAX call and keeping the overall state. State is important in React, and I’ve decided to keep the parent component responsible for the resource state rather than each table. My InstructorsInfo component is:

class InstructorsInfo extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      instructors: { },
      courses: { },
      students: { }
    };
    this._handleSelect = this._handleSelect.bind(this);
  }
  componentDidMount() {
    $.getJSON(this.props.href)
      .done(data => this.setState({ instructors: data }));
  }
  _handleSelect(e) {
    $.getJSON(e.href)
      .done(data => {
        var state = e.rel === "courses"
          ? { students: {}}
          : {};

        state[e.rel] = data;

        this.setState(state);
      });
  }
  render() {
    return (
      <div>
        <CollectionJsonTable data={this.state.instructors}
          onSelect={this._handleSelect} />
        <CollectionJsonTable data={this.state.courses}
          onSelect={this._handleSelect} />
        <CollectionJsonTable data={this.state.students}
          onSelect={this._handleSelect} />
      </div>
    )
  }
}

I’m using ES6 here, which makes building React components a bit nicer to work with. I first declare my React component, extending from React.Component. Next, in my constructor, I set up the initial state: an object with empty values for the instructors/courses/students state. Finally, I bind the callback function to the React component itself, so that "this" refers to the component rather than to whatever invokes the function.

In the componentDidMount function, I perform the initial AJAX call and set the instructors collection state based on the data that gets back. The URL I use to make the initial call is based on the “href” of my components properties.

The _handleSelect function is the callback of the clicked link way down on one of the tables. I wanted to have the parent component manage fetching new collections instead of a child component figuring out what to do. That method makes the AJAX call based on the “href” passed in from the collection+json data, gets the state back and updates the relevant state based on the “rel” of the link. To make things easy, I matched up the state’s property names to the rel’s I knew about.
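For reference, here's a minimal collection+json shape consistent with what these components read; the values are illustrative rather than the actual server output:

{
  "collection": {
    "items": [
      {
        "data": [
          { "prompt": "Last Name", "value": "Abercrombie" }
        ],
        "links": [
          { "rel": "courses", "href": "/api/instructors/1/courses", "prompt": "Courses" }
        ]
      }
    ]
  }
}

Each item's data array drives the cells (value) and the header (prompt), and each link's rel decides which piece of state gets replaced when it is clicked.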

Finally, the render function just has a div with my three CollectionJsonTable components, binding up the data and select functions. Let’s look at that component next:

class CollectionJsonTable extends React.Component {
  render() {
    if (!this.props.data.collection) {
      return <div></div>;
    }
    if (!this.props.data.collection.items.length){
      return <p>No items found.</p>;
    }

    var containsLinks = _(this.props.data.collection.items)
      .some(item => item.links && item.links.length);

    var rows = _(this.props.data.collection.items)
      .map((item, idx) => <CollectionJsonTableRow
        item={item}
        containsLinks={containsLinks}
        onSelect={this.props.onSelect}
        key={idx}
        />)
      .value();

    return (
      <table className="table">
        <CollectionJsonTableHeader
          data={this.props.data.collection.items}
          containsLinks={containsLinks} />
        <tbody>
          {rows}
        </tbody>
      </table>
    );
  }
}

This one is not quite as interesting. It only has the render method, and the first part is just to manage either no data or empty data. Since my data can conditionally have links, I found it easier to inform child components whether or not links exist (through the lodash code), rather than every component having to re-figure this out.

To build up each row, I map the collection+json items to CollectionJsonTableRow components, setting up the necessary props (the item, containsLinks, onSelect and key items). In React, there’s no event aggregator so I have to pass down a callback function to the lowest component via properties all the way down. Finally, since I’m building a collection of components, it’s best practice to put some sort of key on these items so that React knows how to re-render correctly.

The final rendered component is a table with a CollectionJsonTableHeader and the rows. Let’s look at that header next:

class CollectionJsonTableHeader extends React.Component {
  render() {
    var headerCells = _(this.props.data[0].data)
      .map((datum, idx) => <th key={idx}>{datum.prompt}</th>)
      .value();

    if (this.props.containsLinks) {
      headerCells.push(<th key="links"></th>);
    }

    return (
      <thead>
        <tr>
          {headerCells}
        </tr>
      </thead>
    );
  }
}

This component also only has a render method. I map the data items from the first item in the collection, producing header cells based on the prompt from the collection+json data. If the collection contains links, I’ll add an empty header cell on the end. Finally, I render the header with the header cells in a row.

With the header done, I can circle back to the CollectionJsonTableRow:

class CollectionJsonTableRow extends React.Component {
  render() {
    var dataCells = _(this.props.item.data)
      .map((datum, idx) => <td key={idx}>{datum.value}</td>)
      .value();

    if (this.props.containsLinks) {
      dataCells.push(<CollectionJsonTableLinkCell
        key="links"
        links={this.props.item.links}
        onSelect={this.props.onSelect} />);
    }

    return (
      <tr>
        {dataCells}
      </tr>
    );
  }
}

The row’s responsibility is just to build up the collection of cells, plus the optional CollectionJsonTableLinkCell. As before, I have to pass down the callback for the link clicks. Similar to the header cells, I fill in the data value (instead of the prompt). Next up is our link cell:

class CollectionJsonTableLinkCell extends React.Component {
  render() {
    var links = _(this.props.links)
      .map((link, idx) => <CollectionJsonTableLink
        key={idx}
        link={link}
        onSelect={this.props.onSelect} />)
      .value();

    return (
      <td>{links}</td>
    );
  }
}

This one isn’t so interesting; it just loops through the links, building out a CollectionJsonTableLink component, filling in the link object, key, and callback. Finally, our CollectionJsonTableLink component:

class CollectionJsonTableLink extends React.Component {
  constructor(props) {
    super(props);
    this._handleClick = this._handleClick.bind(this);
  }
  _handleClick(e) {
    e.preventDefault();
    this.props.onSelect({
      href : this.props.link.href,
      rel: this.props.link.rel}
    );
  }
  render() {
    return (
      <a href='#' rel={this.props.link.rel} onClick={this._handleClick}>
        {this.props.link.prompt}
      </a>
    );
  }
}
CollectionJsonTableLink.propTypes = {
  onSelect: React.PropTypes.func.isRequired
};

The link clicks are the most interesting part here. I didn’t want my link itself to have the behavior of what to do on click, so I call my “onSelect” prop in the click event from my link. The _handleClick method calls the onSelect method, passing in the href/rel from the collection+json link object. In my render method, I just output a normal anchor tag, with the rel and prompt from the link object, and the onClick event bound to the _handleClick method. Finally, I indicate that the onSelect prop is required, so that I don’t have to check for its existence when the link is clicked.

With all these components, I’ve got a working example:

I found working with hypermedia and React to be a far nicer experience than just raw jQuery. I could reason about individual components at the same level as the hypermedia controls, matching what I was building much more effectively to the resource representation returned. I still have to have some sort of knowledge of how I’m going to navigate the links and what to do, but that logic is all encapsulated in my topmost component.

None of the sub-components are tied to my overall logic, and they can be re-used as much as I want across my application, allowing me to use collection+json extensively without worrying about having to parse the result again and again. I’ve got a component that can effectively render a nice table based on a collection+json representation.

Next, we’ll kick things up a notch and build out a React.Native implementation, pushing the limit of hypermedia with a dynamic native mobile client.



GTAC 2015: Call for Proposals & Attendance

Google Testing Blog - Tue, 06/30/2015 - 23:11
Posted by Anthony Vallone on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc).

Deadline
The due date for both presentation and attendance applications is August 10th, 2015.

Fees
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
You can find more details at developers.google.com/gtac.


Book Review: Managing Humans

thekua.com@work - Mon, 06/29/2015 - 21:31

I remember hearing about Managing Humans several years ago, but I only just got around to buying it and reading it through.

Managing Humans

It is written by the well-known Michael Lopp, otherwise known as Rands, who blogs at Rands in Repose.

The title is a clever take on working in software development, and Rands shares his experiences working as a technical manager in various companies through his very unique perspective and writing style. If you follow his blog, you can see it shine through in the way that he tells stories and the way that he creates names for the stereotypes and situations you might find yourself in as a Technical Manager.

He offers lots of useful advice covering a wide variety of topics, such as tips for interviewing, resigning, making meetings more effective, and dealing with specific types of characters - advice that is useful whether or not you are a Technical Manager.

He also covers a wider breadth of topics such as handling conflict, tips for hiring, motivation and managing upwards (the last being particularly necessary in large corporations). I felt that some of the topics fell outside the theme of “Managing Humans” and its intended target audience of Technical Managers, such as tips for resigning (yourself, not handling it from your team) and joining a startup.

His stories describe the people he has worked with and the situations he has worked in. A lot of it will probably resonate very well with you if you have worked, or work, in a large software development firm or a “Borland” of our time.

The book is easy to digest in chunks and, with clear titles, easy to pick up at different intervals or to go back to for future reference. The book is less a single message than a series of essays, each offering valuable insight into working with people in the software industry.


EventFiringWebDriver, WebDriverEventListener, and AbstractWebDriverEventListener

Testing tools Blog - Mayank Srivastava - Mon, 06/29/2015 - 17:36
Before getting into the sample code, let's have a look at what EventFiringWebDriver, WebDriverEventListener, and AbstractWebDriverEventListener are. EventFiringWebDriver is a class that wraps an arbitrary WebDriver instance and supports registering a WebDriverEventListener. WebDriverEventListener is an interface that declares a list of event methods, all of which must be implemented if we implement it directly. AbstractWebDriverEventListener is an abstract class which […]
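Since the excerpt cuts off there, here is a minimal Java sketch of how the three types fit together, assuming the standard Selenium support.events package (the Firefox driver and URL are arbitrary choices):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
import org.openqa.selenium.support.events.EventFiringWebDriver;

public class EventFiringExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();

        // Wrap the real driver; commands now fire events through the wrapper.
        EventFiringWebDriver eventDriver = new EventFiringWebDriver(driver);

        // AbstractWebDriverEventListener lets us override only the events
        // we care about instead of implementing the whole interface.
        eventDriver.register(new AbstractWebDriverEventListener() {
            @Override
            public void beforeClickOn(WebElement element, WebDriver driver) {
                System.out.println("About to click: " + element);
            }
        });

        eventDriver.get("http://example.com");
        eventDriver.quit();
    }
}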

Moving Away from Legacy Code with BDD

Testing TV - Mon, 06/29/2015 - 17:28
Greenfield projects are awesome - you can develop the highest quality application using the best practices on the market. But what if your bread and butter is actually legacy projects? Does it mean that you need to descend into the darkness of QA absence? Does it mean that you can't use Agile or modern communication practices like BDD? This talk […]

Pizza Chants

Hiccupps - James Thomas - Sun, 06/28/2015 - 06:41

So my wife caught me giggling to myself in the kitchen. Why? I'd just seen a really corny pun on peace and peas. It wasn't the "classic" above, but it was the same kind of thing. In fact, it wasn't the joke itself that had caused me to crack my face at all, but the thoughts spurred by the desire to make a better one from the same phonetic premise.

The first thing I come up with is:
Give Pisa chance

This slight variation on the well-known punchline is a plausible sentence, but to make it work as a joke I need a context that can produce it. I'm working backwards from a result to look for some setup in which it is coherent:
Did you know that casinos are illegal in some parts of Italy? Apparently a bunch of gamblers held a candlelit protest overnight. They were singing "All we are saying is give Pisa chance."

This is also a testing pattern. When you're looking at responses from a system, a useful approach to finding potential issues can be:
  • I've got X. 
  • By changing X a little I can get Y. 
  • Y is plausible. 
  • Y would be bad. 
  • What context could give me Y?
The comedian Milton Jones has a beautiful gag which is a series of one-liners with the punch line "Your house stinks" spread out in his set:
 Anyone here own a cat?
 Any students in tonight?
So, you know what the punch line is going to be; what context might give a laugh here? He goes for:

Is anyone in the audience an aromatherapist?

Which is not only funny, but also a (comedy) rule of three.

Meanwhile, back in the kitchen, I am busy applying another pattern - I think of it loosely as the Spooner - where you can look for the funny by permuting some aspect(s) of multiple elements. For example switching the initial sounds of peace and chance:
Give cheeser pants
Give cheetahs pants
Give cheaters pants
Give cheetah's pants
Small beer, perhaps. No obviously gut-busting laughs here, I'll grant you. But you could imagine contexts in which you could set some of these up as jokes, although I will say that if you search for "cheetahs pants" as I did, looking for clues to such a context, you get a lot of photos of leopard skin leggings. Which - fashion naif that I am - violated both my expectations and my eyes.

But that's testing too: generating ideas and choosing to use them or not (at the moment). Sometimes rote generation by some formula like this is productive, and sometimes not so much. As it happens, I decided to try to stretch this line further (like some of those leggings) and ended up with:

Give peaches pants

Which I found an amusing idea (this was the point at which my wife came to ask what had happened to her coffee), although probably a step too far in terms of plausibility... but I later found this picture:


To relate this back to testing with a specific example: imagine you have some functionality that accepts a couple of arguments. You might ask yourself questions like these:
  • what happens if the arguments are given in the wrong position?
  • does the structure, naming, usage etc of this functionality make it likely that users will mix up the arguments?
  • how would someone spot that they had made this kind of mistake?
I find an interesting overlap in the techniques and skills I use for joking and testing, and I use one to keep in trim for the other. I'll be talking about it at the Cambridge Tester Meetup and the UKTMF next month, and then at EuroSTAR in November.
Images: Kotaku.com, The Crunchy Carrot

What skills should we learn & teach to build quality in?

Agile Testing with Lisa Crispin - Sun, 06/28/2015 - 01:26

I learned so much at Agile Roots 2015 last week. Check out the artifacts; they’ll inspire you too! Janet Gregory and I did a plenary talk on “Do Testers Have to Code… To Be Useful?” I always love pair presenting with Janet. She did a super job of explaining our views on the subject. To summarize: your software delivery team already has coders, and they can write test code as well as production code. But we think testers do need technical awareness to help them communicate and collaborate well with other team members.

This blog post is meant to be about our workshop, though, so on to that. We had 90 minutes and a great group of participants to think about what skills a team needs to help them build quality into their software product. Testing isn’t a phase, as our friend Elisabeth Hendrickson so aptly says. We know we can’t test quality into a product (I am not sure who first said that, but I’ve heard it for 20 years! Still, people seem to try!). Quality has to be baked in. What skills help us do that? As testers, Janet and I tend to focus on testing skills, but are they the most important?

T-Shaped Skills

Each of us has a wide range of thinking (aka ‘soft’ or ‘people’) and technical skills. Most of us also have some area of special passion where we have deep skills. For example, I have lots of experience in exploratory testing, test automation, eliciting examples from customers, SQL, and so on. But I can bring the most value to my team with my ability to learn domains quickly – that’s my deep skill. I learned about the T-Shaped Skills concept from Rob Lambert. Each workshop participant noted their skills which can help their team build in quality, one per sticky note.

Commitment to quality

Quality is like Mom and apple pie. Ask any software delivery team, they’ll say they want to create a high-quality product. But are they really committed to doing that? What will they do when they encounter an obstacle? We shared stories and discussed the importance of making that commitment mean something. It will take a variety of skills, experience and perspectives to creatively overcome all the things that get in the way of building in quality. Get your team together and talk about what your commitment to quality really means.

Square-Shaped Teams

When all team members put their T-shaped skill sets together, we get square-shaped teams – see Adam Knight‘s blog post on this topic. Our workshop participants compared their individual skills, grouped similar ones, and discussed which were most important. (Pictures of the results are at the end of this post.) What skills can each specialty bring to the party? If an essential skill is missing, how can your team obtain it?

Transferring knowledge, effecting change

We discussed collaboration techniques teams can use to make the best use of specialized skills they need. Learning new skills or sharing specialized ones can mean change, and change is hard. Patterns from More Fearless Change by Linda Rising and Mary Lynn Manns are helpful as you try to spread new ideas or encourage new experiments.

Each workshop group discussed the skill area they deemed most important, and thought of experiments they could try with their own teams to build those skills. Interestingly, communication skills, rather than technical testing skills such as exploratory testing or test automation, were tops in three out of the five table groups. The other two groups chose related skill areas: conflict resolution and gaining empathy with users. Interesting experiments were tried. One group decided to try teaching a simple skill to see how hard it might be. One of the group members was left handed, and set about teaching the others to write left-handed. This proved a simple way to learn how to teach a skill, a pre-requisite to helping spread skills across the team! Another group played an icebreaker game to learn more about each other as a first step in improving communication. Again, this is something simple and fun that any team can try.

Giveaways

With only 90 minutes for our workshop, we didn’t have time to try out a lot of techniques to transfer skills. For myself, a key giveaway (I learned that term from Alex Schwarz and Fanny Pittack at last year’s Agile Testing Days; I like it better than takeaways) was that what so many play down as “soft” skills form the core strength of a team’s ability to build quality into their software. If they can’t communicate with each other or their customer effectively, it’s hard even to define what quality means to them and to their customer. Another “aha” moment was realizing that extremely simple exercises, such as an icebreaker game or teaching a skill like writing left-handed, provide a lot of insights and help teams work together better.

Below are the skill charts from each of our groups (WP won’t let me format these in a nicer way, for some reason). You can also check out our slides, which have some good resources for further reading. Janet and I will do a similar workshop at Agile Testing Days, but we’ll have a whole day there, so we are looking forward to more in-depth outcomes which we can share.

[Photos of the five groups' skill charts]
