
Feed aggregator

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

Have you read about WebComponents? It sounds like something we have all been trying to achieve on the web for... well... a long time now.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Hence, this one is safe.
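
As a quick refresher, the template element gives you inert markup that you clone and stamp out with JavaScript. A minimal sketch (nothing project-specific here):

<template id="greeting-template">
    <p class="greeting"></p>
</template>

<script>
    // Grab the inert template content, clone it, fill it in and append it
    var template = document.querySelector('#greeting-template');
    var clone = document.importNode(template.content, true);
    clone.querySelector('.greeting').textContent = 'Hello from a template!';
    document.body.appendChild(clone);
</script>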

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, no, let's not touch this yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay so this is out of the window too.

HTML Imports

Specification

This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome has all the specs already implemented.

Firefox has implemented them but put them behind a flag (about:config, search for the property dom.webcomponents.enabled).

In Internet Explorer, they are all Under Consideration.

What that tells us

Google is pushing for a standard. Hard. They built the spec and they are pushing it very hard, since all of this is available in Chrome STABLE right now. No other vendor has contributed to the spec itself. Polymer is also a project that is built around WebComponents and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know that's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past. That's not something to stop us.

Second, the current specification is being implemented in Chrome as if it were a W3C Recommendation (it is not). The specification may still change, which could render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is going to even be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to Internet Explorer 6 era but this time with Chrome.

What should I do?

As far as "production" is concerned, do not use WebComponents directly. Also, avoid Polymer as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If there is none defined in your web.config, it falls back to an auto-generated machine-level key. If the key used to encrypt isn't the same as the one used to decrypt?

Boom goes the dynamite.
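
If you want to stop this from happening between environments, you can pin an explicit machineKey in web.config. A sketch with placeholder values (generate your own keys, these are not real ones):

<system.web>
    <!-- Placeholder keys: generate real ones (IIS can do it) and keep them out of public source control -->
    <machineKey validationKey="YOUR-VALIDATION-KEY-HERE"
                decryptionKey="YOUR-DECRYPTION-KEY-HERE"
                validation="HMACSHA256"
                decryption="AES" />
</system.web>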

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of it. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old hosting plans, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, as well as the exact explanation of why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, they have their own serverFarm setting, and it does not change when you change the parent site's plan in PowerShell (I haven't tried through the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

# List the web hosting plan of every site and every deployment slot
foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		# Deployment slots carry their own webHostingPlan, separate from their parent site
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}
	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}
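
If you then need to point a slot at the right hosting plan before deleting the old one, something along these lines should do it. This is an untested sketch that follows the same cmdlet pattern as above; I'm assuming Set-AzureResource accepts -ParentResource like Get-AzureResource does, and the site, slot, resource group and plan names are made up:

# Untested sketch: point the "staging" slot of "mysite" to another hosting plan
Switch-AzureMode AzureResourceManager
$prop = @{ 'serverFarm' = 'MyNewHostingPlan' }
Set-AzureResource -Name "staging" -ResourceGroupName "Default-Web-EastUS" -ResourceType Microsoft.Web/sites/slots -ParentResource "sites/mysite" -ApiVersion 2014-04-01 -PropertyObject $prop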
      
    
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

So I had to make some changes to the App Service Plans for one of my clients. The first thing I tried was to do it through the portal. A few clicks and I'm done!

But before I get into why I needed to move just one of them, I'll need to tell you why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of Web Apps deployed left and right in different "Default" Service Plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance size we deployed on. So we came up with a list of the different hosting plans that we needed, the apps that would need to be moved, and which hosting plan they were currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-controlling Azure, importing/exporting CSV files and so much more.

CSV and Azure is what I needed. Since we built a list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic but since I was working with apps in EastUS only, well... I didn't need more.
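
Here's what a (made up) input file looks like:

App,ServicePlanSource,ServicePlanDestination
MyWebApp1,Default1,StandardPlanEastUS
MyWebApp2,Default2,StandardPlanEastUS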

This script should be considered "works on my machine". I haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

# Each CSV row describes one app: App, ServicePlanSource, ServicePlanDestination
$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        # Point the web app to its destination App Service Plan
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
    
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While the certificates were being fixed, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is only acceptable because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
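
If you'd rather not touch the environment at all, the same override can be scoped to a single command (still temporary; the URL is a placeholder):

git -c http.sslVerify=false clone https://myurl/myproject.git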

Permanent solution

Fix your certificates. Oh… you mean it's self-signed and you will forever use that one? Install it on all machines.

Seriously. I won't show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.

Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

So this will be a short post. I would like to introduce a term into my vocabulary, and yours too, if it isn't already there.

First, I would like to credit Nathan Smith for teaching me that term this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands compared in a conditional".

Let's say I have this code:

string sky = "blue";

if (sky == "blue")
{
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";

if ("blue" == sky)
{
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your code, it will fail at compile time since you can't assign to a string literal. It can also avoid certain null reference errors.
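
A quick illustration of the null part (a made-up snippet using Equals, where the difference actually shows up):

string sky = null;

// Yoda style: "blue".Equals(null) simply returns false
if ("blue".Equals(sky))
{
    // do something
}

// Regular style: throws a NullReferenceException because sky is null
if (sky.Equals("blue"))
{
    // do something
}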

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if statement, since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all cost. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you are losing.

Categories: Blogs

Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - 4 hours 34 min ago
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project...

If you answered yes to any of those questions... you are at great risk of having to code those again.

Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet

Once you have a good collection that is properly separated into a project and you feel ready to kick some monkey ass, the only way to go is to use NuGet to package it all together!

Check out the reference to make sure that you do things properly.

NuGet - Publishing

OK, you've got a hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!
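
As an example, here's roughly what that looks like with nuget.exe (the package name and share path are made up):

:: Pack the library from its nuspec file
nuget pack MyUtilityBelt.nuspec

:: Drop the resulting package on the internal share
copy MyUtilityBelt.1.0.0.nupkg \\myserver\NuGetPackages

:: Register that share as a package source on each developer machine
nuget sources add -Name "Internal" -Source \\myserver\NuGetPackages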

Now just make sure to increment your version number on each release by using the SemVer convention.

Reap the profit

OK, no... not really. You probably won't be making money anytime soon with this library. At least not real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or at another client.

The only thing you'll do is import your magic package, use it and boom. This task that they planned would take a whole day? Got finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow you to do that, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own since you billed them for it.

In case of doubt, double check with a lawyer!

Categories: Blogs

Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - 4 hours 34 min ago

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.

Processor

Recommendation

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multi-threading, a simple 4-core CPU will cover most needs.

Memory

Recommendation

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. If you have extensions installed for Visual Studio, it will quickly rise to 1GB of usage per instance. And finally... Chrome. With multiple extensions and multiple pages running... you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.

Hard drive

Recommendation

512 GB SSD drive

I can't recommend an SSD enough. Most tools that you use on a development machine will require a lot of I/O. Especially random reads. When a compiler starts and retrieves all your source code to compile, it will need to read all those files. Same thing if you have tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400RPM HDD in a laptop to reduce power usage. However, a 5400RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second hard drive of the HDD type instead. Slower, but capacities are also higher. On most laptops, you will need external storage for this hard drive, so make sure it is USB 3 compatible.

Graphic Card

Unless you do graphic rendering or are working with graphic tools that require a beast of a card... this is where you will spend the least amount of money.

Make sure the card has enough outputs for your number of monitors and that it can provide the right resolution/refresh rate.

Monitors

My minimum requirement nowadays is 22 inches. 4K is nice but is not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying them for someone else, make sure they can be rotated. Some developers like to have a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to be taken out of the building, you can go desktop. You will save a bit on all the required accessories (docking station, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city as well as doing presentations left and right. Laptop it is for me.

Categories: Blogs

Working in a distributed company

Markus Gaertner (shino.de) - 6 hours 12 min ago

In my courses, one or more of the participants almost always raise a question like this:

How do you set up a team across many sites?

Almost always, when digging deeper, I find out that they are currently working in a setting with many sites involved. Almost always they have a project organization set up with single-skill specialists involved. These single-skill specialists are almost always working on at least three different projects at the same time. In the better cases, the remote team members are spread across a single timezone. In the worst case I have seen so far, it was a timezone difference of ten hours.

I will leave how to deal with such a setting for a later blog entry. Today, I want to focus on some tips and tricks for working with remote team members and remote colleagues.

Tripit reported that I was on the road for 304 days in 2012. I hardly believe that, since I stayed at home with our newborn son Nico for the whole of June that year. (I think they had a bug there.) But it was close. I have worked with remote team members and remote project teams in distributed companies since 2006. I hope I have some nuggets worth sharing.

Remote comes with a price

When it comes to distributed working, remoteness comes with a price. The hard part that most managers don't realize does not stem from the difference in wages. In fact, most companies I have seen out-source to far-distant locations only because they take the wage savings into account – but not the social loss that comes alongside.

The social loss is what happens when team members don't know each other because they have never once met in person.

What happens with social loss?

Richard Hackman provides some insights in his book Leading Teams. According to Hackman, teams are subject to several gains and losses over the course of their lifetime together. There are gains and losses on the subjects of effort, performance strategy, and knowledge and skill.

When it comes to effort, social loafing by team members may stand in the way of the development of high shared commitment to the team and its work. For performance strategy, mindless reliance on habitual routines can become an obstacle to the invention of innovative, task-appropriate work procedures. For knowledge and skill, inappropriate weighting of member contributions can become a drag against sharing of knowledge and development of member skills.

All these three losses – social loafing, mindless reliance on habitual routines, and inappropriate weighting of member contributions – are more likely when team members are separated from each other. If they don’t know each other, they can’t make good decisions about distributing the work since people don’t know each other well enough to do so. They also are less likely to have a shared understanding of the organization’s context. They won’t know how to come up with better work procedures for the task at hand. And finally, the knowledge will be less likely shared among team members. It’s so hard to do when you have only two hours of common office hours between sites.

Besides the wage differences, these factors are hard to price. Thus, it's even harder to compare the costs of the decision to work across remote sites. You can make these costs a bit more transparent if you raise the question of what it would cost to bring the whole team together every other week. That's unlikely? That's hard to do? That's expensive? Well, how expensive is it in the first place not to do that? A couple of years ago I worked with a client that flew in their team from Prague every second week. They were aware of the social costs attached. And they were willing to pay the price for it.

That's not a guarantee that it will work, though. But it makes the failure of teams less likely.

But what if you don't want to pay that price? Well, there's always the option to create teams local to one site. When you do that, make sure that you make the coordination between teams as painless as possible.

Video conferencing to the rescue

A couple of years ago, I found myself on a project team with some people set up in Germany and others in Malaysia. That’s a six hours timezone difference.

We were doing telephone conferences every other work day. And we were noticing problems in our communication. The phone went silent when one side had made an inquiry. Usually, that was an awkward silence. And more often than not – we found out later – that silence led to some undone tasks. (Did I mention the project was under stress?)

At one point, I was able to convince the project manager on my side of the table to talk to his counterpart in the other location. They set up video conferencing. We coupled that with a phone call, still, but at least we could see each other. From that day on we had a different meeting. Now we were able to see the faces of the others. We were able to see their facial expressions. We were able to see when they could not understand something that was said on the other end. And we were able to repeat the message or paraphrase it to get the point across. Certainly, the same happened for the other side. That's what changed the whole meeting for us.

So, if you decide to set up remote team members, then also make sure they have the technology necessary to communicate and coordinate well enough between the different locations. One prospective client that I visited had taken over a whole meeting room. The room was no longer bookable by anyone in the organization. They had set up the various boards of all the teams in that meeting room. They also had a video projector and a 360° camera installed there. The whole setup was designed to make cross-site communication easy. I can only urge you to invest in such technology. Your meetings will become more effective in the long run.

Transparency creates trust

Seeing each other during meetings is a specialization of a more general concept. Transparency related to work- and task-oriented data creates trust. I have seen teams grumbling about each other just because they no longer knew what "the other" team was doing. That trust also turns into distrust when there is a lack of transparency in certain regards, or when the transparency you do get just confuses.

Unfortunately, creating transparency also takes effort in most cases. You have to provide the numbers that others want to see, like the percentage of code covered by unit tests or the net profit of new features. In software, you may be able to provide those numbers through another program. In non-software teams, you may need to find other ways to provide such information. Either way, it will take effort.

Is that effort well spent? If my claim is correct that transparency creates trust (lots of it, actually), you should aim for the sweet spot between creating enough trust and the effort spent on providing the necessary transparency. In other words, ask yourself the question: is the effort I spend on creating transparency well spent for the trust that we gain as a group?

A couple of years ago, Kent Beck raised another point in this regard. He said that he always tried to be transparent in every regard, because hiding information takes effort. When you are completely transparent, you can save the effort that goes into hiding information and use it to provide value instead. I like that attitude.

One final thought: if creating transparency is indeed too effortful for you, remember there is always the option to work in a non-distributed setting. When you have chosen to work for a distributed company, the extra effort for transparency should be a price you are willing to pay for the trust it creates.


Categories: Blogs

Debugging: the Science of Deduction

Testing TV - 11 hours 29 min ago
Software never works exactly the way we expect or intend it to, at least at first. Something inevitably goes wrong! What then? We are here for problem-solving, and every bug we encounter is a mystery, a wonderment, and a puzzle which upon resolution lets us move on to bigger, more interesting problems. Let’s clear our […]
Categories: Blogs

Testing Mobile Apps For Real Network Conditions

Software Testing Magazine - 11 hours 41 min ago
Mobile is no longer an option – more people are browsing using mobile devices than desktops. Now that you have a mobile presence, how does it behave in extreme network conditions? (Some of your customers will be on 2G, or at a conference.) Learn techniques and tools to make sure your mobile experience is awesome at any speed. Video producer: http://oredev.org/
Categories: Communities

Test Studio R2 Release with Test Studio Mobile

Telerik TestStudio - 12 hours 58 min ago
Validating a consistent UX across devices is probably the biggest problem experienced in mobile testing today. In the latest release of the Test Studio solution, we are introducing Test Studio Mobile. It will help solve these real-time issues, and provide features and functionality to help you deliver high-quality apps on time. – Shravanthi Alimilli
Categories: Companies

Delivering High Quality Applications in a Mobile World

Telerik TestStudio - 12 hours 58 min ago
Testing mobile applications is not an easy process. There are many common challenges that must be considered before testing a mobile application. This blog post will help you get started. – Shravanthi Alimilli
Categories: Companies

Building an Automation Framework that Scales Webinar

Telerik TestStudio - 12 hours 58 min ago
The next Test Studio webinar, which will take place on June 25 at 11:00 a.m. ET, will focus on how the Test Studio solution can help scale test coverage by integrating with TFS and Visual Studio. – Fani Kondova
Categories: Companies

Mastering the Essentials of UI Automation—Webinar Q&A Follow Up

Telerik TestStudio - 12 hours 58 min ago
On Wednesday, March 25, Dave Haeffner joined me (Jim Holmes) for a webinar targeted at helping teams and organizations start up successful UI test automation projects. This webinar was based on a series of blog posts on the same topic hosted here on the Telerik blogs. You can find the recording of the webinar hosted online, and you can sign up to receive a copy of an eBook assembled from those same blog posts. We had a very interactive audience for the webinar. Unfortunately, we couldn't answer all the questions during the webinar itself. We've addressed quite a few of the unanswered questions below. – Jim Holmes
Categories: Companies

Master the Essentials of UI Test Automation Series: Chapter Six

Telerik TestStudio - 12 hours 58 min ago
Chapter 6: Automation in the Real World. So here you are: ready and raring to get real work done. Hopefully, at this point, you're feeling excited about what you've accomplished so far. Your team has set itself up for success through the right amount of planning, learning and prototyping. Now it's time to execute on what you've laid out. Remember: your best chance for success is focusing on early conversations to eliminate rework or waste, and being passionate about true collaboration. Break down the walls wherever possible to make the mechanics of automation all that much easier... – Jim Holmes
Categories: Companies

Telerik Test Studio R1 2015 Release: Early Test Collaboration with IntelliMap

Telerik TestStudio - 12 hours 58 min ago
The latest release of the Telerik Test Studio® solution is a milestone release as we are introducing revolutionary advancements in test automation. In this release, we are introducing the Next Generation of Automated Testing with Early Test Collaboration. Learn how testing teams can now develop tests in parallel to development and be less dependent on the UI. – Shravanthi Alimilli
Categories: Companies

Master the Essentials of UI Test Automation Series: Chapter Five

Telerik TestStudio - 12 hours 58 min ago
You're reading the fifth post in a series that's intended to get you and your teams started on the path to success with your UI test automation projects: Look Before You Jump. – Jim Holmes
Categories: Companies
