
Blogs

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

Have you read about WebComponents? It sounds like something we've all been trying to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Therefore, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, no. Let's not touch this one yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay, so this one is out the window too.

HTML Imports

Specification

This one is still a Working Draft, so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome has all the spec already implemented.

Firefox has implemented it, but behind a flag (in about:config, search for the property dom.webcomponents.enabled)

In Internet Explorer, they are all Under Consideration

What that tells us

Google is pushing for a standard. Hard. They wrote the specs, and they are pushing them very hard: all of this is available in Chrome stable right now. No other vendor has contributed to the specs themselves. Polymer, a project built around WebComponents, is also built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're heading into the same situation we were in 10-20 years ago with Internet Explorer, and we know that's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that before, though; that alone wouldn't stop us.

Second, the current specification is implemented in Chrome as if it were a W3C Recommendation (it is not). The specification may still change, which could render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also avoid Polymer, as it's only a thin wrapper around WebComponents (even with the polyfills).

Use frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined there, an auto-generated one is used. And if the key used to encrypt isn't the same one used to decrypt?

Boom goes the dynamite.
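
If you want this to stop recurring across projects and environments, the underlying cause suggests pinning the machineKey explicitly in web.config so encryption and decryption always use the same keys. A minimal sketch (the key values below are placeholders, generate your own):

<system.web>
  <machineKey validationKey="[128-hex-char key]"
              decryptionKey="[64-hex-char key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>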

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time, and it adds up. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

So I had this issue while moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, along with the exact explanation of why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it does not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}
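
Once the script tells you which slot still points at the old plan, the implied fix is to repoint the slot's serverFarm the same way you would for a site. A hedged sketch, assuming Set-AzureResource accepts the same addressing parameters as the Get-AzureResource calls above (site, slot and plan names are placeholders):

	# use the Name, ResourceGroupName and ParentResource values the script printed for the slot
	$prop = @{ 'serverFarm' = 'TargetPlanName' }   # placeholder: the plan the parent site moved to
	Set-AzureResource -Name 'mysite/staging' -ResourceGroupName 'Default-Web-EastUS' -ResourceType Microsoft.Web/sites/slots -ParentResource 'sites/mysite' -ApiVersion 2014-04-01 -PropertyObject $prop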
      
    
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

So I had to make some changes to App Service Plans for one of my clients. The first thing I looked into was doing it in the portal. A few clicks and I'm done!

But before I get into how to move one of them, I need to tell you why I had to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance sizes we deployed on. So we came up with a list of the hosting plans we needed, the apps that had to be moved, and the hosting plan each one was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

PowerShell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It lets you do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure were what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic but since I was working with apps in EastUS only, well... I didn't need more.

This script should be considered "works on my machine"; I haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
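
A hypothetical run, assuming the script above is saved as Move-WebApps.ps1 (file and app names are made up):

	# appsToMigrate.csv -- made-up names, matching the 3 columns described above:
	#   App,ServicePlanSource,ServicePlanDestination
	#   client-api,Default1,StandardEastUS
	#   client-web,Default1,StandardEastUS
	.\Move-WebApps.ps1 -filename .\appsToMigrate.csv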
    
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we were working on a project hosted in a local Git (Stash) server and the certificates changed. While they were working on fixing the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

Use this only because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
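
If you'd rather not touch the environment at all, Git can also take the setting for a single command; a quick sketch:

git -c http.sslVerify=false fetch

That scopes the override to one invocation instead of the whole console session.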

Permanent solution

Fix your certificates. Oh… you mean it's self-signed and you will forever use that one? Install it on all machines.

Seriously. I won't show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.

Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

So this will be a short post. I would like to introduce a term into my vocabulary, and yours too, if it isn't already there.

I would like to credit Nathan Smith for teaching me that term this morning. First, the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda condition can be summarized as "inverting the operands compared in a conditional".

Let's say I have this code:

string sky = "blue";

if (sky == "blue")
{
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";

if ("blue" == sky)
{
    // do something
}

Now our code reads as "if blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your comparison, it will fail at compile time, since you can't assign to a string literal. It can also avoid certain null reference errors: "blue".Equals(sky) won't throw when sky is null, whereas sky.Equals("blue") would.

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if, since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you're not going to be the only person maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you're losing.

Categories: Blogs

Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - 6 hours 12 min ago
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project...

If you answered yes to any of those questions... you are at great risk of having to code those again.

Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet

Once you have a good collection that is properly separated into its own project and you feel ready to kick some monkey ass, the only way to go is to use NuGet to pack it all together!

Check out the reference to make sure that you do things properly.

NuGet - Publishing

OK, you've got a hot new NuGet package that you are ready to use? Push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!

Now just make sure to increment your version number on each release by using the SemVer convention.
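
A hedged sketch of that internal flow with nuget.exe (server, share and package names are made up):

	# pack the library, bumping the SemVer version on every release
	nuget pack .\MyUtilityBelt.nuspec -Version 1.2.0
	# "publishing" to a file-share feed is just a copy
	Copy-Item .\MyUtilityBelt.1.2.0.nupkg \\buildserver\nuget-feed\
	# consumers register the share once as a package source
	nuget sources Add -Name "Internal" -Source \\buildserver\nuget-feed\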

Reap the profit

OK, no... not really. You probably won't be making money anytime soon with this library. At least not real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or at another client.

The only thing you'll do is import your magic package, use it, and boom. That task they planned a whole day for? Finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow it, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or they might have the right to claim them as their own since you billed them for the work.

In case of doubt, double check with a lawyer!

Categories: Blogs

Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - 6 hours 12 min ago

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.

Processor

Recommendation

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multithreading, a simple 4-core CPU will work for most needs.

Memory

Recommendation

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. If you have extensions installed for Visual Studio, its usage will quickly rise to 1GB per instance. And finally... Chrome. With multiple extensions and multiple pages running... you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.

Hard drive

Recommendation

512 GB SSD drive

I can't recommend an SSD enough. Most tools that you use on a development machine require a lot of I/O, especially random reads. When a compiler starts and retrieves all your source code to compile, it needs to read all those files. The same goes for tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400 RPM HDD in a laptop to reduce power usage. However, a 5400 RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second hard drive of the HDD type instead. Slower, but capacities are higher. On most laptops you will need external storage for this second drive, so make sure it is USB 3 compatible.

Graphics Card

Unless you do graphics rendering or work with graphics tools that require a beast of a card... this is where you should spend the least money.

Make sure the card has enough outputs for your number of monitors and that it can drive the right resolution/refresh rate.

Monitors

My minimum requirement nowadays is 22 inches. 4K is nice but not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying for someone else, make sure the monitors can be rotated. Some developers like a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to leave the building, you can go desktop. You will save a bit on all the required accessories (docking station, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city as well as doing presentations left and right. Laptop it is for me.

Categories: Blogs

A brief excerpt from mail to my team today

Rico Mariani's Performance Tidbits - Wed, 07/29/2015 - 19:24

"I couldn’t possibly list [...] all the crucial changes we made to make [‪#‎MicrosoftEdge‬] possible. Dozens of big initiatives and literally thousands of smaller improvements (and removals!) were needed to get us here. I certainly can’t say that our journey was 100% free of stumbles, because no worthwhile journey is, but I can say that in my 27 years at MS I’ve not seen anything like this combination of challenges: engineering, organizational, and operational. Any reasonable person could look at those challenges and conclude that no group could reasonably be expected to succeed with so many things in flight. But we did it anyway!"

"As the perf guy, it’s my job to tell you every day how bad everything is, that’s pretty much the gig. But today, just this one time, I’m gonna tell you the rest of what I’m thinking – it’s pretty smokin’ guys. You should be proud of what you’ve accomplished – I know I am."

Categories: Blogs

Working in a distributed company

Markus Gaertner (shino.de) - Mon, 07/27/2015 - 22:15

In my courses, one or more of the participants almost always raise a question like this:

How do you set up a team across many sites?

Almost always, when digging deeper, I find out that they are currently working in a setting with many sites involved. Almost always they have a project organization set up with single-skill specialists involved, and these single-skill specialists are almost always working on at least three different projects at the same time. In the better cases, the remote team members are spread across a single timezone. In the worst case I have seen so far, the timezone difference was ten hours.

I will leave how to deal with such a setting for a later blog entry. Today, I want to focus on some tips and tricks for working with remote team members and remote colleagues.

TripIt reported that I was on the road for 304 days in 2012. I can hardly believe that, since I stayed at home with our newborn son Nico for the whole of June that year. (I think they had a bug there.) But it was close. I have worked with remote team members and remote project teams in distributed companies since 2006, so I hope I have some nuggets worth sharing.

Remote comes with a price

When it comes to distributed working, remoteness comes with a price. The hard part is not the one most managers think of: the difference in wages. In fact, most companies I have seen outsource to far-distant locations only take the wage savings into account – not the social loss that comes alongside.

The social loss is what happens when team members don't know each other because they have never met in person.

What happens with social loss?

Richard Hackman provides some insights in his book Leading Teams. According to Hackman, teams are subject to several gains and losses over the course of their lifetime together: gains and losses in effort, in performance strategy, and in knowledge and skill.

When it comes to effort, social loafing by team members may stand in the way of the development of high shared commitment to the team and its work. For performance strategy, mindless reliance on habitual routines can become an obstacle to the invention of innovative, task-appropriate work procedures. For knowledge and skill, inappropriate weighting of member contributions can become a drag against sharing of knowledge and development of member skills.

All three of these losses – social loafing, mindless reliance on habitual routines, and inappropriate weighting of member contributions – are more likely when team members are separated from each other. If they don't know each other, they can't make good decisions about distributing the work. They are also less likely to have a shared understanding of the organization's context, and they won't know how to come up with better work procedures for the task at hand. Finally, knowledge is less likely to be shared among team members. That is hard to do when you have only two hours of common office time between sites.

Besides the wage differences, these factors are hard to price, so it's even harder to weigh the costs of the decision to work across remote sites. You can make these costs a bit more transparent by asking what it would cost to bring the whole team together every other week. That's unlikely? That's hard to do? That's expensive? Well, how expensive is it not to do it? A couple of years ago I worked with a client that flew in their team from Prague every second week. They were aware of the social costs attached, and they were willing to pay the price.

That's not a guarantee that it will work, but it makes team failure less likely.

But what if you don't want to pay that price? Well, there's always the option to create teams local to one site. When you do that, make sure the coordination between teams is as painless as possible.

Video conferencing to the rescue

A couple of years ago, I found myself on a project team with some people set up in Germany and others in Malaysia. That’s a six hours timezone difference.

We were doing telephone conferences every other work day, and we noticed problems in our communication. The phone went silent when one side made an inquiry. Usually, that was an awkward silence. And more often than not – we found out later – that silence led to undone tasks. (Did I mention the project was under stress?)

At one point, I was able to convince the project manager on my side of the table to talk to his counterpart in the other location. They set up video conferencing. We still coupled that with a phone call, but at least we could see each other. From that day on we had a different meeting. Now we were able to see the faces of the others. We were able to see their facial expressions. We were able to see if they did not understand something that was said on the other end, and we were able to repeat the message or paraphrase it to get the point across. Certainly, the same happened for the other side. That's what changed the whole meeting for us.

So, if you decide to set up remote team members, make sure they have the technology necessary to communicate and coordinate well between the different locations. One prospective client that I visited had taken over a whole meeting room. The room was no longer bookable by anyone in the organization. They had set up the various boards of all the teams in that meeting room. They also had a video projector and a 360° camera installed there. The whole setup was designed to make cross-site communication easy. I can only urge you to invest in such technology. Your meetings will become more effective in the long run.

Transparency creates trust

Seeing each other during meetings is a specialization of a more general concept: transparency related to work- and task-oriented data creates trust. I have seen teams moaning about each other just because they no longer knew what "the other" team was doing. Trust also turns into distrust when there is a lack of transparency in certain regards, or when the transparency you do get just confuses.

Unfortunately, creating transparency also takes effort in most cases. You have to provide the numbers that others want to see, like the percentage of code covered by unit tests or the net profit of new features. In software, you may be able to provide those numbers through another program. In non-software teams, you may need to find other ways to provide such information. Either way, it takes effort.

Is that effort well spent? If my claim is correct that transparency creates trust (lots of it, actually), you should aim for the sweet spot between the trust created and the effort spent on providing the necessary transparency. In other words, ask yourself: is the effort I spend on creating transparency worth the trust that we gain as a group?

A couple of years ago, Kent Beck raised another point in this regard. He said that he always tried to be transparent in every regard, because hiding information takes effort. When you are completely transparent, you save the effort that goes into hiding information and can use it to provide value instead. I like that attitude.

One final thought: if creating transparency is indeed too effortful for you, remember there is always the option to work in a non-distributed setting. When you have chosen to work as a distributed company, the extra trust through transparency should be a price you are willing to pay.


Categories: Blogs

Debugging: the Science of Deduction

Testing TV - Mon, 07/27/2015 - 16:58
Software never works exactly the way we expect or intend it to, at least at first. Something inevitably goes wrong! What then? We are here for problem-solving, and every bug we encounter is a mystery, a wonderment, and a puzzle which upon resolution lets us move on to bigger, more interesting problems. Let’s clear our […]
Categories: Blogs

Story Telling with Story Mapping

Once upon a time, a customer had a great buying experience on a website. The customer loved how, from the moment they landed on the site to the moment they checked out a product, the process was intuitive and easy to use. The process and design of the customer experience was not by accident. In fact, it was done very methodically using story mapping.

What is Story Mapping

Story Mapping is a practice that provides a framework for grouping user stories based on how a customer or user would use the system as a whole. Established by Jeff Patton, it helps the team understand the customer experience by imagining what the customer process might be. This prompts the team to think through what the customer finds valuable.

Benefits of Story Mapping

Story Mapping is a way to bridge the gap between an idea and the incremental work ahead. It's a great way to decompose an idea into a number of unique user stories. What are some additional benefits of story mapping?
  • It moves away from thinking of functionality first and toward the customer experience first. 
  • It provides the big picture and end-to-end view of the work ahead
  • It's a decomposition tool from idea to multiple user stories. 
  • It asks the team to identify the highest value work from a customer perspective and where you may want the most customer feedback.
  • It advocates cutting only one increment of work at a time, recognizing that feedback from the current increment will help shape subsequent increments.
Getting started with Story Mapping
How might you get started in establishing a story map? It starts by having wall space available on which to place the customer experience. Next, you educate the team on the story mapping process (see below). It's best to keep to a Scrum team size (e.g., 7 +/- 2), where everyone participates in the process. Then, as a team, follow these high-level steps:
  • Create the "backbone" of the story map. These are the big tasks that the users engage with. Capture the end-to-end customer experience. Start by asking "what do users do?" You may use a quiet brainstorming approach to get a number of thoughts on the wall quickly.
  • Then start adding steps that happen within each backbone.  
  • From there, explore activities or options within each step. Ask: what are the specific things a customer would do here? Are there alternative things they could do? These activities may be epics or even user stories.
  • Create the "walking skeleton". This is where you slice off a set of activities or options that gives you the minimum end-to-end value of the customer experience. Only cut enough work that it can be completed within one to three sprints and still represents customer value.
As you view the wall, the horizontal axis defines the flow along which you place the backbone and steps. The vertical axis under each contains the activities or options, represented by epics and user stories for that particular area. Use short verb/noun phrases to capture the backbones, steps, and activities (e.g., capture my address, view my order status, receive invoice).

Next time your work appears to represent a customer experience, consider story mapping as a way to embrace the customer perspective. Story mapping provides a valuable way for the team to understand the big picture while decomposing the experience into more bite-sized work, leaving options open when cutting an increment of work. It's another tool in your requirements decomposition toolkit.
Categories: Blogs

Containers is the new AWS in CI

The Build Doctor - Mon, 07/27/2015 - 06:04
When Atlassian came out with AWS integration, it was a great step forward. Jenkins announced support for Kubernetes a few days ago, and I think many vendors will be accelerating plans to support...

Visit The Build Doctor for the full article.
Categories: Blogs

If working on Microsoft Edge with me is something you could get excited about then read on

Rico Mariani's Performance Tidbits - Thu, 07/23/2015 - 23:45

This link has the details.

https://careers.microsoft.com/jobdetails.aspx?ss=&pg=0&so=&rw=1&jid=182730&jlang=EN&pp=SS

Categories: Blogs

Testing Your Android Application

Testing TV - Wed, 07/22/2015 - 16:38
Everybody knows testing is important, so let’s focus on test-driven development, testing best practices and the most useful Android testing libraries in our quest to improve the user experience and developer happiness. In this talk you’ll get an overview of how several types of testing (unit, integration, UI testing) fit into an Android project. The […]
Categories: Blogs

Talking with Tech Leads now available in print

thekua.com@work - Tue, 07/21/2015 - 10:49

Late last year, I announced the release of "Talking with Tech Leads," which was then available in a variety of e-book formats including PDF, mobi and epub. Although e-books have their place, I know how important it is for some people to have a real physical book. After all, it's very difficult to physically gift something when it's on a tablet or computer.

Stack of books from Talking with Tech Leads

After some more work on the physical aspects of the book and many draft copies back and forth, you can now order your very own physical copy of Talking with Tech Leads. You can order copies depending on your region, including:

What people are saying about the book:

Your book has really helped me find my feet – James

This book is a well-curated collection of interviews with developers who have found themselves in the role of a tech lead, both first-timers and veterans – Dennis

The book is well-organised around a number of themes and I found it very easy to read and to ‘dip into’ for some inspiration / ideas. – Gary

Categories: Blogs

What you really need to know about regular expressions before using them

Rico Mariani's Performance Tidbits - Tue, 07/21/2015 - 02:27
If you want to use regular expressions in production code, the most important thing to know about how they are matched is that there are three general approaches to doing it.  They have different performance characteristics, and it is absolutely vital that you know which approach the library you are using implements.

 

Approach #1 -- Non-deterministic Finite Automaton
  • This approach first converts the regular expression into a non-deterministic state machine, which is just like a state machine except you have to keep track of all the states you might be in.  Sort of like when you have a choice of going left or right you go both ways and remember it could be either.  The state machine you get from this approach is proportional to the size of the expression in a fairly direct way and does not depend on the text you will match at all, just the regular expression.  The maximum number of states you might have to track at any given moment is likewise determined by the expression in a fairly direct way and again does not depend on what you are matching, just the pattern.
  • This is the "medium" approach to the problem, and it is popular in command line tools like grep/sed etc.  Many text editors use this approach internally.
  • Typically a library that does this offers a compile step to compile the RE first into a machine and then a match method to do the matching.
  • The time to match an expression is linear in the size of the input.
Approach #2 -- Deterministic Finite Automaton
  • In this approach you basically post-process the NDFA from step number 1 into a regular state machine.  You can always do this, you can imagine that if in the NDFA you were in  the state [maybe #1, maybe #7, maybe #22] that in the DFA you're in a single state called [maybe_1_maybe_7_maybe_22], so there is one distinct state for every combination you might have been in.  The advantage of this approach is that once you have set up the DFA the amount of housekeeping you have to do at each step is much smaller. 
  • Like in #1 your final run time is not affected by the complexity of the expression, but only by the size of the text you are matching, linearly.  Also like #1 there is a compile step which you have to do once per expression, so the total cost includes both compilation and matching.
  • Command line tools like "egrep" and "lex" use this approach.  Here's the thing, if the tool is something like "lex" then you only pay the cost of the compilation once when you generate your machine.  Your already cooked state machine is then compiled directly into your program.  So that's a cool trade off.
  • The downside is that the number of states you could need can get very large indeed.  In the worst case if the NDFA had N states then the DFA would have 2^N states.  But usually that doesn't happen.
  • This is the "Heavy" approach in terms of initial cost
Approach #3 -- Interpret the regular expression as you go and backtrack to match
  • This doesn't create a machine, and in fact you don't need any extra state other than a stack.  In this approach, when faced with a pattern like A|B you first try A, and if that doesn't work you rewind and try B.  This is all fine and well except if you have even a simple pattern like .*.*X: you find yourself trying to swallow all you can with the first .*, by which time you're way past the X, so you rewind, but of course the next .* then grabs way too much, so you rewind that...  Now you can imagine that if you had a very long line with no X in it at all, you could spend all kinds of time trying to eat different combinations of leading stuff... This pattern will be N^2 in the size of the input.  This kind of stuff happens all the time... and you can make it much, much worse: (.*.*)*X just compounds the problem.  If that pattern looks too unrealistic to you, how do you feel about ".*=.*+.*;" to match anything that looks like X=Y+Z;?  That pattern is a cosmic disaster... (a short demonstration follows this list)
  • Thing is, for very simple patterns #3 will often do better because there's no compilation.  It could get bad, but often it doesn't.  Some libraries do this.
  • This is the "Light" approach in terms of initial cost
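
You can feel the approach #3 blowup from PowerShell, since .NET's regex engine is a backtracking implementation. A minimal sketch, using the classic pathological pattern ^(a+)+$ as a stand-in for the examples above, with a one-second match timeout so the blowup surfaces quickly instead of hanging:

	# build a Regex with a match timeout: the blowup becomes an exception, not a hang
	$opts = [System.Text.RegularExpressions.RegexOptions]::None
	$re = New-Object System.Text.RegularExpressions.Regex -ArgumentList '^(a+)+$', $opts, ([TimeSpan]::FromSeconds(1))
	$text = ('a' * 30) + '!'   # almost matches; the trailing '!' forces endless rewinding
	try { [void]$re.IsMatch($text) }
	catch [System.Text.RegularExpressions.RegexMatchTimeoutException] {
	    'timed out: the engine is exploring on the order of 2^30 ways to split the run of a''s'
	}
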
Obviously these three behave very differently. Now, if you need to match many different regular expressions to see if any of them match, trying each one separately in a loop is the worst thing you can do.  Remember, if you want to match A or B you can always encode that in a single expression A|B, and then you can see if you get a match in one pass over the input, not two!  If there are many expressions it's even more valuable.  Using a lexical analysis technique like that provided by "lex" or "flex" allows you to state your (many) expressions in advance and get one machine that matches them all (telling you which one matched, of course), and you only pay the compilation cost once.  This is a super economical way to match hundreds of expressions.
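
A small sketch of that single-pass idea, again via .NET's engine from PowerShell (the patterns and input are made up): combine the expressions with | into named groups, then check which group matched:

	# one pass over the input instead of one pass per pattern
	$combined = [regex]'(?<number>\d+)|(?<word>[a-z]+)'
	foreach ($m in $combined.Matches('order 42 shipped from dock 7')) {
	    if ($m.Groups['number'].Success) { "number: $($m.Value)" }
	    else { "word: $($m.Value)" }
	}
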
For more details consider these references:
  • https://en.wikipedia.org/wiki/Regular_expression
  • https://en.wikipedia.org/wiki/Regular_expression#Implementations_and_running_times
  • https://en.wikipedia.org/wiki/Regular_language
  • https://en.wikipedia.org/wiki/Nondeterministic_finite_automaton
  • https://en.wikipedia.org/wiki/Thompson%27s_construction_algorithm
  • https://en.wikipedia.org/wiki/Kleene%27s_algorithm
  • https://swtch.com/~rsc/regexp/regexp1.html
Categories: Blogs

Stay Hungry. Stay Foolish.

Yesterday the wonderful people from Lucky Cat Tattoo put a piece of art on my arm.

Stay Hungry. Stay Foolish.
This was the 'farewell message' of the Whole Earth Catalog, placed on the back cover of the final edition in 1974. Steve Jobs used the quote in his famous 2005 commencement speech at Stanford University. While writing this blog post I found this article by a neuroscientist explaining what the quote means…

Hungry
Hungry points to always looking for more: striving to improve, being ambitious and eager. Everything I do, I do with passion. What keeps me moving is energy and passion. I need challenges to feel comfortable. I want to be good at almost everything I do. Not just good, but the very best. All of that means the people around me sometimes suffer, because I always want to do more and do better. Fortunately, I have an above-average energy level, and that helps me do what I do. This video of Steve Jobs summarizes how I work.

Foolish
Foolish points to taking risks: feeling young, being daring, exploratory and adventurous. Like a child learning how the world works by trying everything. It also reminds us not to always do what people expect us to do, and not to always take the traditional paths in life.

I'm curious. That is an important characteristic in a software tester. Richard Feynman, the Nobel Prize winner, was a tester, though officially he was a natural scientist. In this video, "The pleasure of finding things out", he talks about certainty, knowledge and doubt (from 47:20). Critical thinking about observations and information is important in my work! Richard Feynman never took anything for granted. He took the scientific approach and thought critically about his work. He doubted a lot and asked many questions to verify.

Because of my curiosity, I want to know everything. This has one big advantage: I want to develop myself and practice continuous learning. The great thing about my job is that testers get paid to learn: testing is gathering information about things that are important to stakeholders in order to inform decisions. I love to read, and I read a lot to discover new things. I also ask for feedback on my work to develop myself continuously. Lately, experiential learning has my special attention; I wrote a column about why I like this way of learning. When it comes to learning, two great TED videos come to mind: "Schools kill creativity" and "Building a school in the cloud". These videos tell a story about how we learn and why schools (or learning in general) should change.

Think different!

Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can’t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.

Jedi

Besides the fact that I am a huge fan of Star Wars, the Jedi sign has a deeper meaning:
“Because testing (and any engineering activity) is a solution to a very difficult problem, it must be tailored to the context of the project, and therefore testing is a human activity that requires a great deal of skill to do well. That’s why we must study it seriously. We must practice our craft. Context-driven testers strive to become the Jedi knights of testing.” (source: “The Dual Nature of Context-Driven Testing” by James Bach).

I believe that learning is not as simple as taking a class and starting to do it. To become very good at something, you need mentors who guide you on your journey. I strongly believe in master-apprentice relationships. Young Padawans are trained to become Jedi Knights by a senior (knight or master) who teaches them everything there is to know. The more they learn, the more responsibility the student gets. That is why I am happy to have mentors who teach, coach, challenge and guide me. And that is why I am a mentor for others, doing the same: helping them learn and become better.

Bear

In 2013 I took the awesome Problem Solving Leadership (PSL) workshop facilitated by Jerry Weinberg, Esther Derby and Johanna Rothman. This amazing six-day workshop gave me many valuable insights into myself and into how to be a better leader by dealing effectively with change. During the social event we visited the Indian Pueblo Cultural Center, which also made a big impression on me. There I bought a talisman stone with a bear engraved on it. In Native American beliefs the bear symbolizes power, courage, freedom, wisdom, protection and leadership (more info on bear symbolism: here, here and here).

Categories: Blogs