
Feed aggregator

Holding a Tech Lead course in Sydney

thekua.com@work - 38 min 48 sec ago

In a one-time-only opportunity this year, I am running a course for Architects and Tech Leads on 22-23 October at the ThoughtWorks Sydney offices. After interviewing over 35 Tech Leads for the Talking with Tech Leads book, I recognised there is a gap in teaching developers the special leadership skills that a successful Architect or Tech Lead demands. Class size is strictly limited, so reserve your place while you can.

Tech Lead

In this very hands-on and discussion-based course, participants will cover a wide breadth of topics, including understanding what a Tech Lead is responsible for, the technical aspects a developer rarely experiences or is held accountable for, and the difficult people-oriented side of the role, including influencing, relationship building and tools for better understanding your team.

This two-day course will quickly pay dividends, whether you are just starting on the path or further developing your Tech Lead skills. Register here on Eventbrite for the course on 22-23 October.

Categories: Blogs

“The story I’m making up…”

Last week at the August session of the London Action Science Meetup we started with a discussion of the phrase “the story I’m making up…” I love this phrase! It captures the process of the Ladder of Inference, but it has an immediate emotional resonance that the ladder does not.

I came across this phrase from an unlikely source: Oprah.com. Now I’ve got nothing against Oprah, but it just isn’t a URL that shows up a lot in my browser history! So the real credit goes to @frauparker, who is someone I clearly trust well enough to follow off into the well-scrubbed bright and shiny parts of the internet; in this case an article by Brené Brown on How to Reckon with Emotion and Change Your Narrative, an excerpt from her new book Rising Strong. The article is well worth reading, but I’m going to take a look at it through the narrow lens of the action science workshops I’ve been leading.

For a few months now I’ve been leading weekly lunchtime workshops at TIM Group where anyone who is interested can join in a discussion of some two-column case study. This might be a canned case study, or one people recall and write down on the spot, but the best sessions have been when someone has come with something in mind, some conversation in the past or future that is bothering them. As a group then we try to help that person to explore how they were feeling, what they didn’t share and why, and how they might be more effective in the future. One of the common observations, and one this article reminded me of, is how hesitant the participants are to put their feelings into words. It seems we can almost always have a productive conversation by asking the series “How did you feel at that point?” followed by “Is there any reason you didn’t share that reaction?” These questions get quickly to the real source of our frustrations that are hiding back in the shadows. Bringing them into the light we find out something quite surprising: they were stories we were making up.

The article has good advice that will be familiar to students of action science and the mutual learning model, suggestions like separating facts from assumptions, and asking “what part did I play?” It also has a key action that is hidden in plain sight. Do you see it in this exchange from the article?

Steve opened the refrigerator and sighed. “We have no groceries. Not even lunch meat.” I shot back, “I’m doing the best I can. You can shop, too!” “I know,” he said in a measured voice. “I do it every week. What’s going on?”

As I read the remainder of the article from the author’s point of view, it was easy to overlook the role Steve played here. It was his calm, empathetic question that allowed the author to reply with the title phrase: “The story I’m making up is that you were blaming me for not having groceries, that I was screwing up.” I really struggle to generate this kind of response, to not get caught up in my own emotional reaction. So I admire what Steve accomplished here.

I’ve long said that “stories are the unit of idea transmission”. What I hadn’t realised before reading this article was how powerful I’d find using the word story to describe what is going on in my head. I’m really looking forward to practicing with this phrase in future conversations.

Categories: Blogs

Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take a few examples and build my case.

Example 1: Blog

Of course, the simpler examples like blogs could literally all be static. If you need comments, then you could go with a system like Disqus. That is quite literally one of the only parts of your system that is dynamic.

RSS feed? Generated from posts. Posts themselves? They could be automatically generated from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.
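As a rough illustration, such a generator can be little more than a loop over Markdown files. Here is a minimal sketch, assuming some Markdown-to-HTML converter; the ConvertToHtml method below is only a stand-in for whatever library you actually use.

using System.IO;

class BlogGenerator
{
    static void Main()
    {
        // Turn every Markdown post into a static HTML file, ready to be hosted anywhere.
        Directory.CreateDirectory("output");
        foreach (var file in Directory.GetFiles("posts", "*.md"))
        {
            var markdown = File.ReadAllText(file);
            var html = ConvertToHtml(markdown);
            var target = Path.Combine("output", Path.GetFileNameWithoutExtension(file) + ".html");
            File.WriteAllText(target, html);
        }
    }

    // Stand-in for a real Markdown library; here it just wraps the raw text.
    static string ConvertToHtml(string markdown)
    {
        return "<html><body><pre>" + markdown + "</pre></body></html>";
    }
}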

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot. Products. OK, they may change but do you need to have your site updated right this second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be “approved” anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it’s done.
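To make that concrete, here is a minimal sketch of such a moderation queue; the type and method names (ProductReview, RegenerateProductPage) are made up for illustration, not part of any real framework.

using System.Collections.Generic;

class ProductReview
{
    public string ProductId { get; set; }
    public bool Approved { get; set; }
}

class ReviewModerationWorker
{
    // Hypothetical server-side queue of reviews waiting for approval.
    private readonly Queue<ProductReview> pendingReviews = new Queue<ProductReview>();

    public void ProcessPending()
    {
        while (pendingReviews.Count > 0)
        {
            var review = pendingReviews.Dequeue();
            if (review.Approved)
            {
                // Once a review is approved, rebuild the static product page that displays it.
                RegenerateProductPage(review.ProductId);
            }
        }
    }

    // Placeholder: in a real system this would re-run static page generation for one product.
    private void RegenerateProductPage(string productId) { }
}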

There’s 3 things that I see that would require to be dynamic in this scenario.

Search, checkout and reviews. Search, because as your product catalogue scales up, so does your data; doing the search client-side won’t scale at any level. Checkout, because we are now handling an actual order and it needs a server component. Reviews, because we’ll need to approve and publish them.

In this scenario, only the search is an actual “read” component that stays server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, you can still end up on a static page.

All the other write components? Queued server-side to be processed by the business itself, with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).

Question

So… do we need a dynamic front end built on the latest server framework? On the public-facing side too, or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.

Categories: Blogs

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

Have you read about WebComponents? It sounds like something that we have all been trying to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This specific specification is not part of the "Web Components" section; it has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, no, let's not touch this yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay, so this is out the window too.

HTML Imports

Specification

This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted and edited entirely by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome already has all of the specs implemented.

Firefox has implemented them but put them behind a flag (in about:config, search for the property dom.webcomponents.enabled).

In Internet Explorer, they are all "Under Consideration".

What that tells us

Google is pushing for a standard. Hard. They built the spec and are pushing it very hard too, since all of this is available in Chrome stable right now. No other vendor has contributed to the spec itself. Polymer is also a project built around WebComponents, and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know that's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past; that's not something to stop us.

Second, the current specification is being implemented in Chrome as if it were already recommended by the W3C (it is not), which may lead to changes in the specification that render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also, avoid Polymer, as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, frameworks like X-Tag or Brick. That way you can benefit from the feature without learning a specification that may become obsolete very quickly or not be implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined in your web.config, it will fall back to an auto-generated one. So what happens if the key used to encrypt isn't the same as the one used to decrypt?

Boom goes the dynamite.
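If you want the cookie to survive across the projects or environments that share it, one option is to pin an explicit machineKey in web.config. A minimal sketch with placeholder values only; generate real keys rather than copying these:

<system.web>
  <!-- Placeholder values: generate your own keys instead of copying these. -->
  <machineKey validationKey="0123456789ABCDEF...REPLACE-ME"
              decryptionKey="FEDCBA9876543210...REPLACE-ME"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>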

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

Well, there it goes again. It was just confirmed that I have been renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of it. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, as well as the exact explanation of why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, they have their own serverFarm setting, and it does not change when you change their parent site's plan in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

# Switch the old Azure PowerShell module into Azure Resource Manager mode
Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
    # Check deployment slots first, because "Microsoft.Web/sites" would also match them
    if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
    {
        $plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
        write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
    }
    elseif ($item.ResourceType -Match "Microsoft.Web/sites")
    {
        $plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
        write-host "WebHostingPlan " $plan " under site " $item.Name ;
    }
}
      
    
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

So I had to make some changes to the App Service Plans for one of my clients. The first thing I looked for was a way to do it in the portal. A few clicks and I'm done!

But before I get into how I moved one of them, I'll need to tell you why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale, and especially the instance size we deployed on. So we came up with a list of the hosting plans we needed, the apps that had to be moved, and the hosting plans they were currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

PowerShell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure is what I needed. Since we built a list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic but since I was working with apps in EastUS only, well... I didn't need more.
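For reference, a hypothetical CSV in that shape (app and plan names are made up) looks like the one below; the script takes its path through the -filename parameter.

App,ServicePlanSource,ServicePlanDestination
mycompany-api,Default1,StandardPlan
mycompany-admin,Default2,StandardPlan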

This script should be considered as "Works on my machine". Haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

# Switch the old Azure PowerShell module into Azure Resource Manager mode
Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

# Each CSV row has the columns App, ServicePlanSource and ServicePlanDestination
$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        # Changing the serverFarm property is what moves the web app to its destination App Service Plan
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
    
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While they were working on fixing the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is acceptable only because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Then install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation because trusting ALL certificates without caring if they are valid or not is juts plain dangerous.

Fix it.

NOW.

Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - 4 hours 16 min ago

So this will be a short post. I would like to introduce a word into my vocabulary, and yours too, if it isn't already there.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands of a comparison in a conditional".

Let's say I have this code:

string sky = "blue";if(sky == "blue) {    // do something}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";	if("blue" == sky){    // do something}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your code, it will fail at compile time, since you can't assign to a string literal. It can also avoid certain null reference errors.
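The null reference point mostly applies when you compare with .Equals() rather than ==; here's a quick sketch (the variable names are only illustrative):

string sky = null;

// sky.Equals("blue") would throw a NullReferenceException here, because sky is null.
// The Yoda-style call is safe, since the literal "blue" is never null:
if ("blue".Equals(sky))
{
    // not reached, but no exception either
}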

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all cost. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you are losing.

Categories: Blogs

Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - 4 hours 16 min ago
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project... (a small sketch follows below)

If you answered yes to any of those questions... you are at great risk of having to code them again.
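As a tiny example of the kind of helper that keeps getting rewritten, here is a hypothetical extension method; the name is made up, and your own belt will obviously contain different pieces.

public static class StringExtensions
{
    // True when the string is null, empty or only whitespace.
    public static bool IsBlank(this string value)
    {
        return string.IsNullOrWhiteSpace(value);
    }
}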

Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet

Once you have a good collection that is properly separated into its own project and you feel ready to kick some monkey ass, the way to go is to use NuGet to package it all together!

Check out the reference to make sure that you do things properly.
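As a rough guide, a minimal .nuspec looks something like the sketch below (the id, version, author and description are placeholders); running nuget pack against it produces the package.

<?xml version="1.0"?>
<package>
  <metadata>
    <!-- Placeholder values: use your own id, SemVer version, authors and description. -->
    <id>MyCompany.UtilityBelt</id>
    <version>1.0.0</version>
    <authors>Your Name</authors>
    <description>Reusable helpers shared across projects.</description>
  </metadata>
</package>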

NuGet - Publishing

OK, you've got a hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!

Now just make sure to increment your version number on each release by using the SemVer convention.

Reap the profit

OK, no... not really. You probably won't be making money anytime soon with this library. At least not real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or at another client.

The only thing you'll do is import your magic package, use it and boom. This task that they planned would take a whole day? Got finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow you to do that, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own since you billed them for it.

In case of doubt, double check with a lawyer!

Categories: Blogs

API Testing with Telerik Fiddler

Telerik TestStudio - 4 hours 17 min ago
Fiddler 4.6 introduces a powerful new tab for API Testing. — Eric Lawrence
Categories: Companies

The Future of Mobile App Testing and Telerik Platform

Telerik TestStudio - 4 hours 17 min ago
When the Telerik Platform was released almost two years ago, a key ingredient was Telerik Mobile Testing, a new functional testing framework geared towards the mobile world. Today we are experiencing a shift in what we provide in Telerik Platform when it comes to testing mobile apps, and we have some exciting announcements to make regarding the future of mobile app testing in the Telerik Platform. — Rob Lauer
Categories: Companies

Latest TeamPulse R1 2015 Release

Telerik TestStudio - 4 hours 17 min ago
TeamPulse R1 2015 is here. With this release, we aimed to simplify the project management experience even further and to make TeamPulse users even more productive. Some of the new features we are launching include cross-project relation, SVN check-in integration, new reports and enhancements to existing reports. — Fani Kondova
Categories: Companies

Test Studio R2 Release with Test Studio Mobile

Telerik TestStudio - 4 hours 17 min ago
Validating a consistent UX across devices is probably the biggest problem experienced in mobile testing today. In the latest release of the Test Studio solution, we are introducing Test Studio Mobile. It will help solve these real-time issues, and provide features and functionality to help you deliver high-quality apps on time. — Shravanthi Alimilli
Categories: Companies

Delivering High Quality Applications in a Mobile World

Telerik TestStudio - 4 hours 17 min ago
Testing mobile applications is not an easy process. There are many common challenges that must be considered before testing a mobile application. This blog post will help you get started. — Shravanthi Alimilli
Categories: Companies

Building an Automation Framework that Scales Webinar

Telerik TestStudio - 4 hours 17 min ago
The next Test Studio webinar, which will take place on June 25 at 11:00 a.m. ET, will focus on how the Test Studio solution can help scale test coverage by integrating with TFS and Visual Studio. — Fani Kondova
Categories: Companies

Mastering the Essentials of UI Automation—Webinar Q&A Follow Up

Telerik TestStudio - 4 hours 17 min ago
On Wednesday, March 25, Dave Haeffner joined me (Jim Holmes) for a webinar targeted at helping teams and organizations start up successful UI test automation projects. This webinar was based on a series of blog posts on the same topic hosted here on the Telerik blogs. You can find the recording of the webinar hosted online, and you can sign up to receive a copy of an eBook assembled from those same blog posts. We had a very interactive audience for the webinar. Unfortunately, we couldn’t answer all the questions during the webinar itself. We’ve addressed quite a few of the unanswered questions below. — Jim Holmes
Categories: Companies

Master the Essentials of UI Test Automation Series: Chapter Six

Telerik TestStudio - 4 hours 17 min ago
Chapter 6: Automation in the Real World. So here you are: ready and raring to get real work done. Hopefully, at this point, you're feeling excited about what you've accomplished so far. Your team has set itself up for success through the right amount of planning, learning and prototyping. Now it's time to execute on what you've laid out. Remember: your best chance for success is focusing on early conversations to eliminate rework or waste, and being passionate about true collaboration. Break down the walls wherever possible to make the mechanics of automation all that much easier... — Jim Holmes
Categories: Companies
