
Creating a simple ASP.NET 5 Markdown TagHelper

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

I've been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I've created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code
using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);
            output.Content.SetContentEncoded(html);
        }
    }
}
Inspecting the code

Let's start with the HtmlTargetElementAttribute. This will wire the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from actually having more than one target.

You could, for example, target the element <md></md> by just adding [HtmlTargetElement("md")] and it would support both tags without any other changes.
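As a quick sketch (reusing the exact class and usings shown above), the declaration would simply stack the attributes:

[HtmlTargetElement("markdown")]
[HtmlTargetElement("md")]
public class MarkdownTagHelper : TagHelper
{
    // same Content property and Process() implementation as above
}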

The Content property will allow you to write code like this:

@model MyClass

<markdown content="@ViewData["markdown"]"></markdown>    
<markdown content="Markdown"></markdown>    

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing will force the tag to be rendered as self-closing rather than having content inside (which we're not going to use anyway). So now we have this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML. Setting output.TagName to null just makes sure that we do not render the <markdown> tag itself.

And... that's it. Our code is complete.

Activating it

Now you can't just go and create TagHelpers and have them picked up automatically without wiring up one thing.

In your ASP.NET 5 projects, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your own assembly name.
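For example, assuming the helper above is compiled into an assembly named My.TagHelpers (substitute your project's actual assembly name), the file would end up looking like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, My.TagHelpers"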

Then, given a model class like this:

public class MyClass
{
    public string Markdown { get; set; }
}

your Razor view can use the new tag directly:

@model MyClass
@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now, whether you load your markdown from files, a database or anywhere else... you can have your users write rich text in any text box and have your application generate safe HTML.


Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course, the simpler examples like blogs could literally all be static. If you need comments, then you could go with a system like Disqus. This is quite literally one of the only parts of your system that is dynamic.

RSS feed? Generated from posts. Posts themselves? Could be automatically generated from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot. Products. OK, they may change but do you need to have your site updated right this second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be “approved” anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it’s done.

There are three things I see that would need to be dynamic in this scenario.

Search, Checkout and Reviews. Search, because as your products scale up, so does your data; doing the search client-side won’t scale at any level. Checkout, because we are now handling an actual order and it needs a server component. Reviews, because we’ll need to approve and publish them.

In this scenario, only the Search is the actual “Read” component that is now server-side. Everything else? Pre-generated. Even if the search is bringing you the list of products dynamically, it can still end up on a static page.

All the other write components? Queued server side to be processed by the business itself with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).

Question

So… do we need a dynamic front-end built with the latest server framework? On the public-facing side too, or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.


You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

Have you read about WebComponents? It sounds like something that we have all tried to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, no. Let's not touch this yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay so this is out of the window too.

HTML Imports

Specification

This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously PM on the Internet Explorer team).

What about browser support?

Chrome has all the specs already implemented.

Firefox has implemented them but put them behind a flag (about:config, search for the property dom.webcomponents.enabled).

In Internet Explorer, they are all Under Consideration.

What that tells us

Google is pushing for a standard. Hard. They built the spec and they are pushing it very hard, since all of this is available in Chrome STABLE right now. No other vendor has contributed to the spec itself. Polymer is also a project that is built around WebComponents and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know it's a painful path.

What is wrong right now with WebComponents

First, it's not cross platform. We handled that in the past. That's not something to stop us.

Second, the current specification is being implemented in Chrome as if it were recommended by the W3C (it is not). This may lead to changes in the specification that render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is going to even be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to Internet Explorer 6 era but this time with Chrome.

What should I do?

As far as "production" is concerned, do not use WebComponents directly. Also, avoid Polymer as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, frameworks like X-Tag or Brick. That way you can benefit from the feature without learning a specification that may become obsolete very quickly or may not be implemented at all.


Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If there is none defined in your web.config, an auto-generated one is used. If the key used to encrypt isn't the same one used to decrypt?

Boom goes the dynamite.


Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of it. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!


Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, and the exact explanation as to why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, slots have their own serverFarm setting, and it will not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
	if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
	}

	elseif ($item.ResourceType -Match "Microsoft.Web/sites")
	{
		$plan=(Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
		write-host "WebHostingPlan " $plan " under site " $item.Name ;
	}
}

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

So I had to make some changes to an App Service Plan for one of my clients. The first thing I tried was to do it in the portal. A few clicks and I'm done!

But before I get into why I need to move one of them, I'll need to tell you about why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of Web Apps deployed left and right in different "Default" Service Plans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance size we deployed on. So we came up with a list of the different hosting plans we needed, the list of apps that would need to be moved, and which hosting plan they were currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.

Powershell

PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure is what I needed. Since we built a list of web apps to migrate in Excel, CSV was the way to go.

The Code or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic but since I was working with apps in EastUS only, well... I didn't need more.
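For illustration only, a CSV along those lines could look like this (the app and plan names here are made up):

App,ServicePlanSource,ServicePlanDestination
contoso-web,Default1,Standard-EastUS
contoso-api,Default2,Standard-EastUS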

This script should be considered as "Works on my machine". Haven't tested all the edge cases.

Param(
    [Parameter(Mandatory=$True)]
    [string]$filename
)

Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'

$allAppsToMigrate = Import-Csv $filename
foreach($app in $allAppsToMigrate)
{
    if($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest}
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!


Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While they were working to fix the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

Use this only because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.

NOW.


The Yoda Condition

Decaying Code - Maxime Rouiller - 8 hours 13 min ago

So this will be a short post. I would like to introduce a word into my vocabulary, and yours too, if it isn't there already.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized into "inverting the parameters compared in a conditional".

Let's say I have this code:

string sky = "blue";
if (sky == "blue")
{
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes :

string sky = "blue";
if ("blue" == sky)
{
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you type "=" instead of "==" in your condition, it will fail at compile time, since you can't assign to a string literal. It can also avoid certain null reference errors.
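Here is a small sketch of what that means in C# (reusing the sky example from above; the variable names are just for illustration):

string sky = "blue";

// Typo: a single "=" instead of "==".
// if (sky = "blue") { }     // C# rejects this anyway (a string is not a bool), but
//                           // languages like C or JavaScript would silently assign.
// if ("blue" = sky) { }     // Yoda order: assigning to a literal is rejected outright.

if ("blue" == sky)
{
    // do something
}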

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all cost. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problem this kind of code solves isn't worth the readability you are losing.


Adventures in Retrospectives: Helping a large team focus!

Agile Testing with Lisa Crispin - Fri, 02/12/2016 - 07:25

I’m a tester on a (to me) relatively large team. Recently I was asked to facilitate an all-hands retro for our entire team. I’d like to share this interesting and rewarding experience. It’s not easy for a large team to enjoy a productive retrospective. I hope you’ll share your experiences, too.

At the time of this retro, our ever-growing team had about 30 people, including programmers, testers, designers, product owners, marketing experts, and content managers. While we all collaborate continually, we’re subdivided into smaller teams to work on different parts of our products.

I work mainly on the “Frontend” team. Our focus is our SaaS web app’s UI. We have weekly retros. The other big sub-team is the “Platform” team, further divided into “pods” that take care of our web server, API, reports and analytics, and other areas. This team has biweekly retros. The “Engineering” team (everyone in development, leaving out designers and marketing) has monthly “process retros”. But all-hands retros had become rare, due to logistics. Several of our team members are based in other cities. We hadn’t had an all-hands retro for more than six months.

Preparation

The team’s director asked me a few days in advance to facilitate the retro. I accepted the challenge gladly but I felt a bit panicked. Our team usually follows the same standard retro format. We spend some time gathering “happys”, “puzzlers” and “sads”. Then we discuss them – mainly the puzzlers and sads – and come up with action items, with a person responsible for taking the lead on each action item. Over the years this has produced continual improvement. However, I think it is good to “shake up” the retro format to generate some new thinking. Also, retros tend to be dominated by people who aren’t shy about speaking up. I wanted to find a way for everyone to contribute.

Thanks to my frequent conference and agile community participation, I have a great network of expert practitioners who like to help. I contacted the co-authors of two of my favorite books on retrospectives: Tom Roden, co-author with Ben Williams of 50 Quick Ideas to Improve Your Retrospectives, and Luis Gonçalves, co-author with Ben Linders of Getting Value out of Agile Retrospectives. (I’ve also depended for years on Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen.) I used their great advice and ideas to come up with a plan.

Food is a great idea for any meeting, so my teammates Jo and Nate helped me shop for and assemble a delicious platter of artisanal cheese and fresh berries (paid for by our employer, though I would have been willing to donate it.)

Pick a problem

Around 30 of us squeezed into a conference room. We had started a few minutes early so that our Marketing team could hand out new team hoodie sweatshirt jackets for everyone! That set a festive mood. Also, it is a tradition that people who are so inclined enjoy a beer, wine or other adult beverage during retros. For this reason, retros are typically scheduled at the end of the day. And of course, we had cheese and fruit to enjoy.

We had just over an hour for the retro, so I had to keep things moving. I gave a brief intro and said that we would choose one problem to focus on instead of our usual retro format.

I asked that they divide into their usual “pods”. Their task: choose the biggest problem that can’t be solved within their own pod that they’d like to see solved, or at least made better, in the next three months. In other words, the most compelling problem that needs to be addressed by the whole team. They should come back to the big group with this problem written on a sticky note.

As I expected, designers, testers and customer support specialists joined the pods they work with the most. Contrary to my expectations, the “Platform” team decided not to further subdivide into pods for the activity.  Together with the Marketing/content team and Frontend team, we only had three groups. I made a quick change of plan: I asked each group to pick their top *two* problems, so we’d have more to choose from. I gave them 10 minutes for this activity.

One group stayed in the conference room and the other two found their own place to work. They wrote ideas on sticky notes and dot voted to choose their highest priority problem areas. I walked around to each team to answer questions and let them know how much time they had left.

After 10 minutes, I called everyone back to the conference room. A spokesmodel from each team explained their top two problem areas and why those needed help from the whole team. We dot voted to choose the top topic; everyone got two votes. I would have preferred to use a process such as the Championship Game from Johanna Rothman’s Manage Your Product Portfolio, which would have been more fair, but we didn’t have time. Everyone seemed happy with the results anyway.

The winning problem area was getting more visibility into our company’s core values and technical practices, and how to integrate better with the company’s “ecosystem”. I don’t want to get into too much detail, because what I want to share is our process, rather than the specific problem.

Design experiments

Now that we had a problem for the whole team to solve, I explained that we wanted to design experiments, and we could use hypotheses for this. I explained Jason Little’s template for experiments:

Experiment template

We hypothesize by <implementing this>
We will <improve on this problem>
Which will <benefits>
As measured by <measurements>

I asked the groups to think about options to test the hypothesis. Who is affected? Who can help or hinder the experiment? I emphasized that we should focus on “minimum viable changes” rather than try to solve the whole problem at once. I gave an example using a problem that our Frontend team had identified in a previous retro.

Experiment example

I had three colors of index cards and we handed those out randomly around the large group. Then we divided up by index card color, so that we had three groups but each had different people than before. Again, each group found a place to work. Each designed an experiment to help address the problem area. I told them they had 10 minutes, but that wasn’t enough time, so I ended up letting them have 15. I walked from group to group to help with questions and keep them informed of the time.

Then, we got back together in the conference room. A spokesmodel for each group explained their hypothesis and experiment. We had three viable experiments to work towards our goal.

Wrap-up

At this point, we had experiments including hypotheses, benefits, and ways to measure progress. We only had about 10 minutes left in the retro now, and I wasn’t sure of the best way to proceed. How could we commit to trying these experiments? I asked the team what they’d like to do next. I was concerned about making sure the discussion wasn’t dominated by one or two people.

The directors and managers put forward some ideas, since they knew of resources that could help the team address the problem area. There were videos about the company’s core values and practices. We also discussed the idea of having someone within our team or outside of our team come in and do presentations about it. There were several good ideas, including coming up with games to help us learn.

Again, the experiments we decided to try aren’t the point, but the point is we came up with an action plan that included a way to measure progress. The managers agreed to schedule a series of weekly all-hands meetings to watch videos about company core development values and practices.

Immediately after the meeting, the marketing director and the overall director got together to put together a short survey to gauge how much team members currently know about this topic, and everyone took the survey. After watching and discussing all the videos, we can all take it again and see what we’ve learned.

I had hoped to wrap the retro up by doing appreciations, but there wasn’t time. With such a big group, I think sticking to a time box is important. We can try different techniques in future retros.

Outcomes

I was surprised and pleased to get lots of positive feedback from teammates, both in person and via email. My favorite comment, from one of the marketing specialists, was: “It’s the best retro by far I’ve been to in four years here. I felt productive!”

The initial survey has been done, and we’ve had three meetings so far to watch the videos. We watch right before lunch so that people can talk about it during lunch. Having 30 people in an hour-long meeting every week is expensive, which shows a real commitment to making real progress on our chosen problem area.

I think the key to success was that by dividing into groups, everyone had a better chance of participating in discussing and prioritizing problems, and in designing experiments to address them. The giveaway hoodies along with food and drink made the meeting fun. We stuck to our time frame, though we did vote to extend it by a few minutes at the end. Most importantly, we chose ONE problem to work on, and designed experiments that included ways to measure progress in addressing that one problem.

The team’s directors have decided they’d like to do an all hands retro every six weeks, and they’ve asked if I could facilitate these. I think it’s a great idea to do the all hands retro more often. I’m not sure I should facilitate them all, but I’ll do what I can to help our team keep identifying the biggest problem and designing experiments to chip away at it.

Do you work on a large team? How do you nurture continual learning and improvement?



Becoming a More Technical Tester

Testing TV - Thu, 02/11/2016 - 18:45
Erica Walker shares her tips, approaches, and experiences on becoming a more technical tester. Video producer: http://www.associationforsoftwaretesting.org/

Testing inside one sprint’s time

Markus Gaertner (shino.de) - Wed, 02/10/2016 - 23:41

Recently I was reminded of a blog entry from Kent Beck way back in 2008. He called the method he discovered during pairing the Saff Squeeze, after his pair partner David Saff. The general idea is this: write a failing test on a level that you can, then inline all code into the test, and remove everything that you don’t need to set up the test. Repeat this cycle until you have a minimal error-reproducing test procedure. I realized that this approach may be used in a more general way to enable faster feedback within a Sprint’s worth of time. I sensed a pattern there. That’s why I thought I’d get my thoughts down while they were still fresh – in a pattern format.

Testing inside one Sprint’s time

As a development team makes progress during the Sprint, the developed code needs to be tested to provide the overall team with the confidence to go forward. Testing helps to identify hidden risks in the product increment. If the team does not address these risks, the product might not be ready to ship for production use, or might make customers shy away from the product since there are too many problems with it that make it hard to use.

With every new Sprint, the development team will implement more and more features. With every feature, the test demand – the amount of tests that should be executed to avoid new problems with the product – rises quickly.

As more and more features pile up in the product increment, executing all the tests takes longer and longer up to a point where not all tests can be executed within the time available.

One usual way to deal with the ever-increasing test demand is to create a separate test team that executes all the tests in their own Sprint. This test team works separately from new feature development, working on the previous Sprint’s product increment to make it potentially shippable. This might help to overcome the testing demand in the short run. In the long run, however, that same test demand will pile further up to a point where the separate test team will no longer be able to execute all the tests within their own separate Sprint. Usually, at that point, the test team will ask for longer Sprint lengths, thereby increasing the gap between the time new features are developed and the time their risks are addressed.

The separate test team will also create a hand-off between the team that implements the features and the team that addresses risks. It will lengthen the feedback loop between introducing a bug and finding it, causing context-switching overhead for the people fixing the bugs.

In regulated environments, there are many standards the product should adhere to. These additional tests often take long times to execute. Executing them on every Sprint’s product increment, therefore, is not a viable option. Still, to make the product increment potentially shippable, the development team needs to fulfill these standards.

Therefore:
Execute tests on the smallest level possible.

Especially when following object-oriented architecture and design, the product falls apart into smaller pieces that can be tested on their own. Smaller components usually lead to faster execution times for tests since fewer sub-modules are involved. In a large software system involving an application server with a graphical user interface and a database, the business logic of the application may be tested without involving the database at all. In hardware development, the side-impact system of a car may be tested without driving the car against an obstacle by using physical simulations.

One way to develop tests and move them to lower levels in the design and architecture starts with a test on the highest level possible. After verifying this test fails for the right reasons, move it further down the design and architecture. In software, this may be achieved by inlining all production code into the test, and after that throwing out the unnecessary pieces. Programmers can then repeat this process until they reach the smallest level possible. For hardware products, similarly focused tests may be achieved by breaking the hardware apart into sub-modules with defined interfaces, and executing tests on the module level rather than the whole-product level.
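To make the inlining idea concrete, here is a minimal C# sketch with xUnit-style tests; the Pricing class and the bug in it are made up for illustration:

using Xunit;

// Hypothetical production code; the discount term forgets to scale by the price.
public static class Pricing
{
    public static decimal Total(decimal price, decimal discountPercent)
        => price - (discountPercent / 100m);
}

public class SaffSqueezeSketch
{
    // Step 1: write a failing test at the highest level available.
    [Fact]
    public void Total_applies_a_ten_percent_discount()
    {
        Assert.Equal(90m, Pricing.Total(100m, 10m)); // fails: actual is 99.9
    }

    // Step 2: inline the body of Total() into the test, delete the setup that is not
    // needed to reproduce the failure, and repeat until the test is minimal.
    [Fact]
    public void Inlined_total_shows_the_unscaled_discount_term()
    {
        decimal price = 100m, discountPercent = 10m;
        decimal total = price - (discountPercent / 100m); // inlined production code
        Assert.Equal(90m, total);                         // still fails, now at one expression
    }
}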

By applying this approach, regulatory requirements can be broken down to individual pieces of the whole product, and, therefore, can be carried out in a faster way. Using the requirements from the standards, defining them as tests, and being able to execute them at least on a Sprint cadence, helps the development team receive quick feedback about their current process.

In addition, these tests will provide the team with confidence in order to change individual sub-modules while making sure the functionality does not change.

This solution still introduces an additional risk. By executing each test on the smallest level possible, and making sure that each individual module works correctly, the development team will sub-optimize the testing approach. Even though each individual module works correctly according to its interface definition, the different pieces may not interact well with each other or may work on diverging interface definitions. This risk should be addressed by carrying out additional tests focused on the interfaces between the individual modules to avoid sub-optimization and non-working products. Fewer tests will be necessary for the integration of the different modules, though. The resulting tests will therefore still fit into a Sprint’s length of time.


The practice of reflection in action

thekua.com@work - Mon, 02/08/2016 - 20:53

In a previous article, I explained how the most essential agile practice is reflection. In this article, I outline examples of how organisations, teams and people use reflection in action.

Reflection through retrospectives

Retrospectives are powerful tools that whole teams use to reflect on their current working practices to understand what they might do to continuously improve. As the author of “The Retrospective Handbook”, I am clearly passionate about the practice because retrospectives explicitly give teams permission to seek ways to improve and, when executed well, create a safe space to talk about issues.

Reflection through coaching

Effective leaders draw upon coaching as a powerful skill that helps individuals reflect on their goals and actions to help them grow. Reflective questions asked by a coach to a coachee uncover barriers or new opportunities for a coachee to reach their own goals.

Coaching is a skill in itself and requires time for both the person doing the coaching and the people being coached. When done well, coaching can massively improve the performance and satisfaction of team members by helping coachees reach their own goals or find ways to further develop themselves.

Reflection through daily/weekly prioritisation

I have run a course for Tech Leads for the past several years and in this course, I teach future Tech Leads to make time during their week to reflect and prioritise. I see many people in leadership positions fall into a reactive trap, where they are too busy “doing” without considering if it is the most important task they should be doing.

Effective leaders build time into their schedules to regularly review all their activities and to prioritise them. In this process, leaders also determine the best way of accomplishing these activities, which often means involving and enabling others rather than doing everything themselves.

Reflection through 1 to 1 feedback

When I work with teams, I teach team members the principles of giving and receiving effective feedback. I truly believe in the Prime Directive – that everyone is trying to do the best that they can, given their current skills and the situation at hand. A lot of conflict in working environments is due to different goals or different perspectives, and it is easy for people to be frustrated with each other.

When team members do not know how to give and receive feedback, being on either side can be a really scary prospect. 1 to 1 feedback gives people opportunities to reflect on themselves and makes space for personally being more effective and for strengthening the trust and relationships of the people involved.

Reflection through refactoring

Refactoring is an essential skill for the agile software developer and a non-negotiable part of development.

Three strikes and you refactor – Refactoring: Improving the Design of Existing Code (Martin Fowler)

Developers should be making tiny refactorings as they write and modify software, as it forces developers to reflect on their code and think explicitly about better designs or ways of solving problems, one bit at a time.

Reflection through user feedback

In more recent years I have seen the User Experience field better integrated with agile delivery teams through practices such as user research, user testing, monitoring actual usage and collecting user feedback to constantly improve the product.

While good engineering practices help teams build the system right, only through user feedback can teams reflect on whether they are building the right system.

Conclusion

Reflection is the most powerful way that teams can become agile. Through reflection, teams can better choose the practices they want and gain value immediately because they understand why they are adopting different ways of working.


The Canopy Test Framework

Testing TV - Mon, 02/08/2016 - 18:16
In this interview, Eric Potter presents the Canopy test framework. Canopy is an open source web testing framework with one goal in mind, make UI testing simple. It provides a solid stabilization layer built on top of Selenium. It is quick to learn, even if you have never done UI Automation, and don’t know F#. […]

Serendipity Questions

Thoughts from The Test Eye - Fri, 02/05/2016 - 20:37

This Tuesday I held a EuroSTAR webinar: Good Testers are Often Lucky – using serendipity in software testing (about how to increase the chances of finding valuable things we weren’t looking for)
Slide notes and recording are available.
I got many good questions, and wanted to answer a few of them here:

How can we advocate for serendipity when managers want to cut costs?

Well, the “small-scale serendipity” actually doesn’t cost anything. It just requires a tester to be ready for unexpected findings, and sometimes spend 20 seconds looking at a second place. The cost appears when investigating important problems, but in that case, I would guess it is worth it (never seeing any problems or doing no testing at all would be the lowest cost…)
I also know that many testing efforts involve running the same types of tests over and over again. When you know these tests won’t find new information, maybe it is time to skip them sometimes and do something rather different?

Do you have issues finding the root of the problem considering you are doing many variations?

If it is a product I know well, I don’t have problems reproducing and isolating. If it is a rather new product it can be more difficult, but I would rather see these problems and communicate what I know than not see them at all!
To take more detailed notes than normally, or to use a tool like Problem Steps Recorder (psr on Windows) can help if you expect this to happen.

Is there any common field for automated testing and serendipity?

Yes!
It is easy to think that automation is a computeresque thing without a lot of manual involvement and tinkering with the product. But in my experience, you interact a lot with the product while learning and creating your tests. And I make mistakes that can discover problems with the product’s error handling.
I know this combination of coding and exploratory testing happens a lot, but it is not very well covered in the literature (though the recent automation paper by Bach/Bolton has good examples of this.)

Another example of automation and serendipity is combining human observation with the tests while they are running. A person can notice patterns or anomalies, or maybe see what the user's perception is when the software is occupied with a lot of other things.
Computers are marvellous, but they suck at serendipity.


Must read: A Context-Driven Approach to Automation in Testing

Test automation is a hot item in our industry. Many people talk about it and much has been written on this topic. Sadly, there is still a lot of misconception about test automation. Also, some people say context-driven testing is anti test automation. I think that is not true. Context-driven testers use different names for it and they are more careful when they speak about automation and tooling to aid their testing. Also, context-driven testers have been fighting the myth that testing can be automated for years. In 2009 Michael Bolton wrote his famous blog post “Testing vs. checking”. It was later followed up by “Testing and checking refined” and “Exploratory testing 3.0”. These tremendously important blog posts teach us how context-driven testers define testing and that testing is a sapient process. A process that relies on skilled humans. Recently Michael Bolton and James Bach have published a white paper to share their view on automation in testing. A vision of test automation that puts the tester at the center of testing. This is a must-read for everyone involved in software development.

The following text is taken from the “A Context-Driven Approach to Automation in Testing” white paper written by James Bach and Michael Bolton.

We can summarize the dominant view of test automation as “automate testing by automating the user.” We are not claiming that people literally say this, merely that they try to do it. We see at least three big problems here that trivialize testing:

  1. The word “automation” is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that.
  2. Output checking can be automated, but testers do so much more than that.
  3. Automated output checking is interesting, but tools do so much more than that.

Automation comes with a tasty and digestible story: replace messy, complex humanity with reliable, fast, efficient robots! Consider the robot picture. It perfectly summarizes the impressive vision: “Automate the Boring Stuff.” Okay. What does the picture show us?

It shows us a machine that is intended to function as a human. The robot is constructed as a humanoid. It is using a tool normally operated by humans, in exactly the way that humans would operate it, rather than through an interface more suited to robots. There is no depiction of the process of programming the robot or controlling it, or correcting it when it errs. There are no broken down robots in the background. The human role in this scene is not depicted. No human appears even in the background. The message is: robots replace humans in uninteresting tasks without changing the nature of the process, and without any trace of human presence, guidance, or purpose. Is that what automation is? Is that how it works? No!

The problem is, in our travels all over the industry, we see clients thinking about real testing, real automation, and real people in just this cartoonish way. The trouble that comes from that is serious…

Read more in the fabulous white paper “A Context-Driven Approach to Automation in Testing” by James Bach and Michael Bolton.


Heuristics for Software Testing Leaders

Testing TV - Mon, 02/01/2016 - 16:27
Some testers are always leaders. Others see a project challenge that demands positive action and step into a leadership void because somebody has to. You may never seek or be given a formal role as a test leader, and yet be a trusted leader in the minds of your co-workers and managers. What is test […]

End of one thing. Start of another.

The Social Tester - Mon, 02/01/2016 - 12:00

First, I’d like to say Thank You. Thank you for being loyal readers and for being here with me over the last 6 or so years on The Social Tester.…


We. Use. Tools.

James Bach's Blog - Sun, 01/31/2016 - 13:17

Context-Driven testers use tools to help ourselves test better. But, there is no such thing as test automation.

Want details? Here’s the 10,000 word explanation that Michael Bolton and I have been working on for months.

Editor’s Note: I have just posted version 1.03 of this article. This is the third revision we have made due to typos. Isn’t it interesting how hard it is to find typos in your own work before you ship an article? We used automation to help us with spelling, of course, but most of the typos are down to properly spelled words that are in the wrong context. Spelling tools can’t help us with that. Also, Word spell-checker still thinks there are dozens of misspelled words in our article, because of all the proper nouns, terms of art, and neologisms. Of course there are the grammar checking tools, too, right? Yeah… not really. The false positive rate is very high with those tools. I just did a sweep through every grammar problem the tool reported. Out of the five it thinks it found, only one, a missing hyphen, is plausibly a problem. The rest are essentially matters of writing style.

One of the lines it complained about is this: “The more people who use a tool, the more free support will be available…” The grammar checker thinks we should not say “more free” but rather “freer.” This may be correct, in general, but we are using parallelism, a rhetorical style that we feel outweighs the general rule about comparatives. Only humans can make these judgments, because the rules of grammar are sometimes fluid.
