
Creating a simple ASP.NET 5 Markdown TagHelper

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

I've been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I've created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code
using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);
            output.Content.SetContentEncoded(html);
        }
    }
}
Inspecting the code

Let's start with the HtmlTargetElementAttribute. This will wire the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from having more than one target.

You could, for example, also target the element <md></md> by adding [HtmlTargetElement("md")], and the helper would support both tags without any other changes.

The Content property will allow you to write code like this:

@model MyClass

<markdown content='@ViewData["markdown"]'></markdown>
<markdown content="Markdown"></markdown>

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing will force the rendered HTML to use a self-closing tag rather than having content inside (which we're not going to use anyway). So now we have this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML. Setting output.TagName to null makes sure that we do not render the <markdown> tag itself.

And... that's it. Our code is complete.

Activating it

Now, you can't just create TagHelpers and have them picked up automatically without wiring up one thing.

In your ASP.NET 5 projects, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your own assembly name.
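For example, if the helper above lives in an assembly named My (an assumed name matching the My.TagHelpers namespace; substitute your project's actual assembly name), _ViewImports.cshtml would end up looking like this:

```cshtml
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, My"
```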

Then, given a model like the one below, you can have the following Razor code:

public class MyClass
{
    public string Markdown { get; set; }
}
@model MyClass
@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now, whether you load your markdown from files, a database or anywhere else... you can have your users write rich text in any text box and have your application render it as HTML.

Categories: Blogs

Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course, the simpler examples like blogs could be entirely static. If you need comments, you could go with a system like Disqus. That is quite literally one of the only parts of such a system that is dynamic.

RSS feed? Generated from posts. The posts themselves? They could be generated automatically from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don't change a lot: products. OK, they may change, but do you need your site updated this very second? Can it wait a minute? Then all the product pages could literally be static pages.

Product reviews? They will need to be approved anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it's done.

There’s 3 things that I see that would require to be dynamic in this scenario.

Search, Checkout and Reviews. Search because as your products scales up, so does your data. Doing the search client side won’t scale at any level. Checkout because we are now handling an actual order and it needs a server components. Reviews because we’ll need to approve and publish them.

In this scenario, only search is an actual "read" component that stays server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, it can still end up on a static page.

All the other write components? Queued server-side, to be processed by the business itself with either Azure or an off-site component.

All the back-end side of the business (managing products, availability, sales, and so on) will need a management UI that is 100% dynamic (read/write).

Question

So… do we need a dynamic front end built with the latest server framework? On the public-facing side too, or just the back end?

If you want to discuss it, Tweet me at @MaximRouiller.

Categories: Blogs

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

Have you read about WebComponents? It sounds like something that we have all tried to achieve on the web since... well... a long time ago.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This particular specification is not part of the "Web Components" section; it has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, no. Let's not touch this yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay so this is out of the window too.

HTML Imports

Specification

This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted and edited almost entirely by the Google Chrome team. The exception is maybe HTML Templates, which also lists Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome has all the spec already implemented.

Firefox has implemented them but put them behind a flag (in about:config, search for the property dom.webcomponents.enabled)

In Internet Explorer, they are all listed as Under Consideration

What that tells us

Google is pushing for a standard. Hard. They built the spec, and they are pushing it very hard too, since all of this is available in Chrome stable right now. No other vendor has contributed to the spec itself. Polymer is also a project built around WebComponents, and it's built by... well, the Chrome team.

That tells me that nobody should be implementing this in production right now. If you want to contribute to the spec, fine. But WebComponents are not ready to be used.

Otherwise, we're getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know that's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past. That's not enough to stop us.

Second, the current specification is being implemented in Chrome as if it were a W3C Recommendation (it is not). That may lead to changes in the specification, which may render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also avoid Polymer, as it's only a thin wrapper around WebComponents (even with the polyfills).

Use frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly, or never be implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

Have you ever had this error while switching between projects that use Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined there, an auto-generated one is used. And if the key used to encrypt isn't the same one used to decrypt?

Boom goes the dynamite.
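A related preventive step, if you control all the environments involved, is to pin an explicit machine key in web.config so encryption and decryption always use the same key. The key values below are placeholders; generate your own:

```xml
<system.web>
  <machineKey validationKey="[your validation key]"
              decryptionKey="[your decryption key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>
```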

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group... all of this is done in my free time, and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post, which provides a script to help you debug along with the exact explanation of why it doesn't work.

To make things quick, it's all about deployment slots. Among other things, slots have their own serverFarm setting, and it does not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}
      
    
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

So I had to make some changes to an App Service Plan for one of my clients. The first thing I looked for was a way to do it in the portal. A few clicks and I'm done!

But before I get into why I needed to move one of them, I'll need to tell you why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of web apps deployed left and right in different "Default" service plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance sizes we deployed on. So we came up with a list of the hosting plans we needed, the list of apps that had to be moved, and the hosting plan each app was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure are what I needed. Since we had built the list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has three columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this script more generic, but since I was working with apps in East US only, well... I didn't need more.
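For illustration, such a CSV would look something like this (the app and plan names are made up):

```csv
App,ServicePlanSource,ServicePlanDestination
mywebapp-prod,Default1,StandardPlanEastUS
mywebapp-staging,Default1,StandardPlanEastUS
```

The script reads each row and skips any app whose source and destination plans already match.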

This script should be considered as "Works on my machine". Haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
    
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a project hosted on a local Git Stash server, and the certificates changed. While IT was working to fix the issue, we had to keep working.

So I know the server is not compromised (I talked to IT). How do I tell Git "ignore it, please"?

Temporary solution

This solution is temporary, because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
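If you'd rather not touch the environment at all, Git can take the equivalent setting for a single invocation with the -c flag; this is just as temporary:

```shell
# Disable SSL verification for this one command only
git -c http.sslVerify=false pull
```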

Permanent solution

Fix your certificates. Oh… you mean it's self-signed and you will forever use that one? Then install it on all machines.
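For reference, installing the certificate for Git's benefit usually means exporting it (or its issuing CA) and pointing http.sslCAInfo at the resulting bundle; the path below is a made-up example:

```shell
git config --global http.sslCAInfo C:/certs/my-company-ca.pem
```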

Seriously, I won't show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.

Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - 3 hours 7 min ago

So this will be a short post. I would like to introduce a term into my vocabulary, and yours too, if it isn't there already.

I would like to credit Nathan Smith for teaching me the term this morning. First, the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda condition can be summarized as "inverting the operands of a comparison in a conditional".

Let's say I have this code:

string sky = "blue";
if (sky == "blue") {
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes :

string sky = "blue";
if ("blue" == sky) {
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you type "=" instead of "==" in your condition, it will fail at compile time, since you can't assign a value to a literal string. It can also avoid certain null reference errors.

What's the cost of doing this then?

Besides getting on the nerves of every programmer on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if, since they will have to learn to read "Yoda" in your code.

So what should I do?

Avoid it. At all costs. Readability is the most important quality of your code. To be honest, you're not going to be the only person maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problems this kind of code solves aren't worth the readability you lose.

Categories: Blogs

Validation inside or outside entities?

Jimmy Bogard - Fri, 04/29/2016 - 21:45

A common question I get asked, especially around a vertical slice architecture, is where does validation happen? If you’re doing DDD, you might want to put validation inside your entities. But personally, I’ve found that validation as part of an entity’s responsibility is just not a great fit.

Typically, an entity validating itself will do so with validation/data annotations on itself. Suppose we have a Customer and its First/Last names are “required”:

public class Customer
{
    [Required]
    public string FirstName { get; set; }
    [Required]
    public string LastName { get; set; }
}

The issue with this approach is twofold:

  • You’re mutating state before validation, so your entity is allowed to be in an invalid state.
  • There is no context of what the user was trying to do

So while you can surface these validation errors (typically from an ORM) to the end user, it’s not easy to line up the original intent with the implementation details of state. Generally I avoid this approach.

But if you’re all up in DDD, you might want to introduce some methods to wrap around mutating state:

public class Customer
{
  public string FirstName { get; private set; }
  public string LastName { get; private set; }
    
  public void ChangeName(string firstName, string lastName) {
    if (firstName == null)
      throw new ArgumentNullException(nameof(firstName));
    if (lastName == null)
      throw new ArgumentNullException(nameof(lastName));
      
    FirstName = firstName;
    LastName = lastName;
  }
}

Slightly better, but only slightly, because the only way I can surface "validation errors" is through exceptions. So suppose you don't do exceptions; you use some sort of command result instead:

public class Customer
{
  public string FirstName { get; private set; }
  public string LastName { get; private set; }
    
  public CommandResult ChangeName(ChangeNameCommand command) {
    if (command.FirstName == null)
      return CommandResult.Fail("First name cannot be empty.");
    if (command.LastName == null)
      return CommandResult.Fail("Last name cannot be empty.");
      
    FirstName = command.FirstName;
    LastName = command.LastName;
    
    return CommandResult.Success;
  }
}

Again, this is annoying to surface to the end user because I have one validation error at a time being returned. I can batch them up, but how do I correlate back to the field name on the screen? I really can’t. Ultimately, entities are lousy at command validation. Validation frameworks, however, are great.

Command validation

Instead of relying on an entity/aggregate to perform command validation, I entrust it solely with invariants. Invariants are all about making sure I can transition from one state to the next wholly and completely, not partially. It’s not actually about validating a request, but performing a state transition.

With this in mind, my validation centers around commands and actions, not entities. I could do something like this instead:

public class ChangeNameCommand {
  [Required]
  public string FirstName { get; set; }
  [Required]
  public string LastName { get; set; }
}

public class Customer
{
  public string FirstName { get; private set; }
  public string LastName { get; private set; }
    
  public void ChangeName(ChangeNameCommand command) {
    FirstName = command.FirstName;
    LastName = command.LastName;
  }
}

My validation attributes are on the command itself, and only when the command is valid do I pass it to my entities for state transition. Inside my entity, I’m responsible for successfully accepting a ChangeNameCommand and performing the state transition, ensuring my invariants are satisfied. In many projects, I wind up using FluentValidation instead:

public class ChangeNameCommand {
  public string FirstName { get; set; }
  public string LastName { get; set; }
}

public class ChangeNameValidator : AbstractValidator<ChangeNameCommand> {
  public ChangeNameValidator() {
    RuleFor(m => m.FirstName).NotNull().Length(3, 50);
    RuleFor(m => m.LastName).NotNull().Length(3, 50);
  }
}

public class Customer
{
  public string FirstName { get; private set; }
  public string LastName { get; private set; }
    
  public void ChangeName(ChangeNameCommand command) {
    FirstName = command.FirstName;
    LastName = command.LastName;
  }
}

The key difference here is that I’m validating a command, not an entity. And since entities themselves are not validation libraries, it’s much, much cleaner to validate at the command level. Because the command is the form I’m presenting to the user, any validation errors are easily correlated to the UI since the command was used to build the form in the first place.

Validate commands, not entities, and perform the validation at the edges.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Your Testing is a Joke

Hiccupps - James Thomas - Thu, 04/28/2016 - 22:00

My second eBook has just been released! It's called Your Testing is a Joke and it's a slight edit of the piece that won the Best Paper prize at EuroSTAR 2015. Here's the blurb:
Edward de Bono, in his Lateral Thinking books, makes a strong connection between humour and creativity. Creativity is key to testing, but jokes? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or just making a surprising connection. Jokes can make you think and then laugh. But they don't always work. Does that sound familiar?

This eBook takes a genuine joke-making process and deconstructs it to make comparisons between aspects of joking and concepts from testing such as the difference between a fault and a failure, oracles, heuristics, factoring, modelling testing as the exploration of a space of possibilities, stopping strategies, bug advocacy and the possibility that a bug, today, in this context might not be one tomorrow or in another. It goes on to wonder about the generality of the observations and what their value might be, before suggesting ways in which joking can provide useful practice for testing skills. There are some jokes in the eBook, of course. And also an explanation of why any groaning they provoke is a good sign…

And in case you're wondering, my first eBook was called My Software Under Test and Other Animals. It's got some groanworthy moments too.
Categories: Blogs

Is There a Simple Coverage Metric?

DevelopSense Blog - Wed, 04/27/2016 - 00:35
In response to my recent blog post, 100% Coverage is Possible, reader Hema Khurana asked: “Also some measure is required otherwise we wouldn’t know about the depth of coverage. Any straight measures available?” I replied, “I don’t know what you mean by a ‘straight’ measure. Can you explain what you mean by that?” Hema responded: […]
Categories: Blogs

JavaScript Testing and Code Analysis at Facebook

Testing TV - Tue, 04/26/2016 - 09:25
Avik Chaudhuri and Jeff Morrison, Software Engineers at Facebook, discuss how Facebook handles software testing and code analysis of its JavaScript code. To encourage engineers to continue to make changes to the codebase, we spent some time identifying various important traits around automated testing, and used our insights to build a […]
Categories: Blogs

Patterns of Resistance in your Agile Journey

Resistance is a common reaction to a change initiative. As organizations attempt to grow or improve, they must change. Change can occur for many reasons. When moving to Agile, a significant culture change is often needed, since Agile effectively is a culture change.
Agile brings about a change in mindset and mechanics, which affects both employees and customers. While change can create new opportunities, it will also be met with resistance. An Agile change really isn't any different from any other culture change, so the resistance will follow similar patterns. There are many reasons for resistance. Here are some of the patterns:
Here we go again! It is comforting when things remain the same. Employees have seen change efforts come and go without any true commitment, and may attempt to wait the new one out.
  • What can you do? The commitment to change must be clearly stated. The change initiative must be treated as a program, with clear motivations and rewards for change.

Fear of the unknown. Change is often a journey into the unknown, and it is natural to resist what we don't understand. For most, it is unclear what the change will entail.
  • What can you do? Leaders should provide a vision of what the new world will look like.

Lack of communication. Employees need to know what is happening to them. As information trickles down from the top, the message can get lost.
  • What can you do? Plan for continuous communication at all levels. Include various communication channels and messages from as many champions as possible.

Change in roles. Some employees like to retain the status quo and do not want to see their roles changed. When roles are vague, some don't know where they fit in the new culture, making them feel excluded. When they have no say in their new roles, they can feel alienated.
  • What can you do? Discuss the role changes with employees. Give them time to adapt to the roles, or give them time to try new roles.

Competing initiatives. Introducing an Agile initiative when multiple initiatives are already under way can leave employees feeling overwhelmed, causing them to resist. Hardly an auspicious start!
  • What can you do? It is important for management to prioritize initiatives and focus on the higher-priority ones.

Change for people, not leaders. When asked "Who wants change?", everyone raises their hand. But when asked "Who wants to change?", no one's hand goes up. This can be particularly true of leaders. Leaders want change to occur within their teams but are not particularly interested in changing themselves, and this may have been prevalent in past change initiatives.
  • What can you do? Acknowledge the change that the leaders themselves must make, and convey the leaders' commitment to change.

New management's need to change something. New leaders often feel they must show they are action-oriented. They may reason that the change that worked at their previous company should work here. Some know their tenure will be short, so they are not interested in long-term change. Some are unaware of what it takes to change a culture. Employees who have seen this scenario before may resist.
  • What can you do? Avoid what may appear to be random changes. Ensure the Agile change is aligned with better business outcomes, and not done just to do Agile.

It will not always be possible to identify and manage every type of resistance. However, resistance must be treated as a real and tangible concern, and it is better to start addressing it proactively. The more you review and act on the "What can you do?" tips, the more you increase your chances of a successful Agile change (or any culture change).
Categories: Blogs

Fake Backends Testing with RpcReplay

Testing TV - Thu, 04/21/2016 - 20:46
Keeping tests fast and stable is critically important. This is hard when servers depend on many backends. Developers must choose between long and flaky tests, or writing and maintaining fake implementations. Instead, tests can be run using recorded traffic from these backends. This provides the best of both worlds, allowing developers to test quickly against […]
Categories: Blogs

What I Learned Pairing on a Workshop

Agile Testing with Lisa Crispin - Thu, 04/21/2016 - 18:20

I pair on all my conference sessions. It’s more fun, participants get a better learning opportunity, and if my pairs are less experienced at presenting, they get to practice their skills. Big bonus: I learn a lot too!

I’ve paired with quite a few awesome people. Janet Gregory and I have, of course, been pairing for many years. In addition, I’ve paired during the past few years with Emma Armstrong, Abby Bangser, and Amitai Schlair, among others. I’ve picked up many good skills, habits, ideas and insights from all of them!

The Ministry of Testing published my article on what I learned pairing with Abby at a TestBash workshop about how distributed teams can build quality into their product. If you’d like to hone your own presenting and facilitating skills, consider pairing with someone to propose and present a conference session. It’s a great way to learn! And if you want to pair with me in 2017, let me know!

The post What I Learned Pairing on a Workshop appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 04/21/2016 - 08:24

We hosted this month's Lean Coffee at Linguamatics. Here are some brief, aggregated comments on topics covered by the group I was in.

Is it fair to refer to testers as "QA"?
  • One test manager talked about how he has renamed his test team as the QA team
  • He has found that it has changed how his team are regarded by their peers (in a positive way).
  • Interestingly, he doesn't call it "Quality Assurance" just "QA" 
  • His team have bought into it.
  • Role titles are a lot about perception: one colleague told him that "QA" feels more like "BA".
  • Another suggestion that "QA" could be "Quality Assisting"
  • We covered the angle that (traditional) QA is more about process and compliance than what most of us generally call testing.
  • We didn't discuss the fairness aspect of the original question.

What books have you read recently that contributed something to your testing?
  • The Linguamatics test team has a reading group for Perfect Software going on at the moment.
  • Although I've read the book several times, I always find a new perspective on some aspect of something when I dip into it. This time around it's been meta testing.
  • The book reinforces the message that a lot of testing (and work around the actual testing) is psychology.
  • But also that there is no simple recipe to apply in any situation.
  • We discussed police procedural novels and how the investigation, hypotheses and data gathering in them might relate to our day job.

When should we not look at customer bugs?
  • When your product is a platform for your customers to run on, you may find bugs in customer products when testing yours.
  • How far should you go when you find a bug in customer code? 
  • Should you carry on investigating even after you've reported it to them?
  • In the end we boiled this question down to: as a problem-solver, how do you leave an unresolved issue alone?
  • Suggestions: time-box, remember that your interests are not necessarily the company priorities, automate (when you think you need lots of attempts to find a rare case), take the stakeholder's guidance, brainstorm with others, ... 
  • If the customer is still screaming, you should still be working. (An interesting metric.)

Image: https://flic.kr/p/cLViad
Categories: Blogs

It's Not A Factory

DevelopSense Blog - Tue, 04/19/2016 - 06:38
One model for a software development project is the assembly line on the factory floor, where we’re making a buhzillion copies of the same thing. And it’s a lousy model. Software is developed in an architectural studio with people in it. There are drafting tables, drawing instruments, good lighting, pens and pencils and paper. And […]
Categories: Blogs

It is Possible to be Professional Without Being in a Profession

Hiccupps - James Thomas - Mon, 04/18/2016 - 21:32

This guest post is by Abby Bangser, writing on the recent MEWT 5. I enjoyed Abby's talk on the day, I enjoyed the way she spoke about testing both in debate and in conversation and I am very much enjoying her reflections now.

I'm also enjoying, and admiring, the open attitude of the MEWT organisers to Abby's comments on gender diversity at the workshop, later on in email and in this piece. In particular, I like their eagerness to share their intentions, process and feelings about it at the event and then engage in the wider discussion in the testing community (e.g. 1, 2, 3).

MEWT 5 was my first experience in a small peer conference, and the format provided a very interesting style of sharing and learning. A big thanks to Bill and Vernon for organizing, the Association for Software Testing for helping fund the event, and particularly Simon for identifying a common theme. The conference theme of What is a Professional Tester? was a tough one to prepare for, and it became apparent that other attendees had a diverse way of approaching it as well. Maybe that was what made it an interesting topic!

I want to briefly touch on the fact that we identified and discussed the difference between a capital P professional and a person working in a professional manner. Working in a profession does not, by itself, indicate a person's level of professionalism, and I enjoyed the conversation around what defines professionalism. Based on the discussion at the conference, my own definition of professionalism is achieving high standards of pride and integrity.

Just as with other heated topics related to testing, the term "professional" has a lot of baggage. This can include but is not limited to the idea that as a profession, there may need to be a board that regulates who gains access to, and who can be denied/revoked access to, the profession.

This point seemed to be the biggest reason why testing as a profession had a negative ring to many in the room. This, of course, runs too close to the debate around testing as an activity or a role for this to be omitted from our discussions. I want to use this blog post to dig a little more deeply into this.

In my opinion, the room had a bit of support on both sides of this debate but I think we made progress on why it has become such a heated topic. We seemed to identify why having a defined role of "tester" or "QA" is necessary in some contexts. By clearly articulating these needs as being focused around ease of recruiting and some industry regulations, it became clear to me that role does not need to equate to job title or job specification even though it often does these days.

There was a proposal that in lieu of roles we could look at areas of accountability, but this didn't quite sit well with me as it still has an air of assessing blame. I suggested (and prefer) thinking of roles as hats. Each person has a certain number of hats that they are skilled enough to wear, but are not required to wear them all at all times.

I can’t remember where I first heard this, but I like it for a number of reasons that I want to explore further:

  • Hats are not permanent; they are easy to take on and off: Each person should be able to find the ones that fit them, and be looking for ways to work on new ones. While my day-to-day hat may be the testing one, I also enjoy putting on the infrastructure hat, the project management hat and the business analysis hat as the need arises.
  • You look a bit silly if you are wearing more than one: As stated above, changing hats is not only OK, it's pretty much required to be a successful team. But I still put a lot of value in focusing on a single hat at a time. This was referred to as time slicing instead of multi-tasking, and I really liked that distinction.
  • Hats are not unique to a single person: Just because you are wearing a certain hat does not mean someone else can’t put on the same type of hat. Some challenges may take a number of testing-focused people to solve, and others may take a variety of roles. In either case, the team should be able to self-organize.

I want to take the idea of changing hats just one step further. Throughout the day, there was a definite majority of the room who felt that successful team mates (not just testers) are those who step in and get the job done. They do not let job titles/specifications limit what they learn or where they provide support. This was a big reason I felt my topic of the “full stack” tester was well received.

I think that this topic has a lot of really interesting avenues left to personally explore, and I look forward to doing that both off- and on-line. If I had to sum up my current hope for a takeaway, it is that every team member has a responsibility to make their expertise accessible to others AND find ways to access others’ expertise. It is no longer acceptable to silo our team mates based on arbitrary terms like “technical”.

A final and important word on my experience

While I was very glad to be able to attend MEWT 5 and participate in the discussions, I would be remiss to not raise the lack of diversity in the room. While there are many axes that we could discuss diversity on, I am going to speak only of gender diversity here. The story told in the room of MEWT attendees has been told in countless other industries, organizations and events. A notable example is the article by the Guardian which noted that there are more FTSE 100 leaders named John than all the female chief executives and chairs combined. This definitely hit home, since the participants in our room named Dan outnumbered all the women combined by a ratio of 3:1.

There is no single answer on how to support diversity in these circumstances, but we have countless people paving the way who are showing that it is not only possible to succeed in doing so, but to actually thrive. I hope to attend another -EWT event in the future that can promote the kind of diversity shown by many including Rosie Sherry and her work at TestBash, Adi Bolboacă and Maaret Pyhäjärvi with European Testing Conference, and supporting organizations like Speak Easy started by Anne-Marie Charrett and Fiona Charles.
Image: https://flic.kr/p/azvNp3
Categories: Blogs