The Bestselling Software Intended for People Who Couldn’t Use It.

James Bach's Blog - 6 hours 33 min ago

In 1983, my boss, Dale Disharoon, designed a little game called Alphabet Zoo. My job was to write the Commodore 64 and Apple II versions of that game. Alphabet Zoo is a game for kids who are learning to read. The child uses a joystick to move a little character through a maze, collecting letters to spell words.

We did no user testing on this game until the very day we sent the final software to the publisher. On that day, we discovered that our target users (five-year-olds) did not have the ability to use a joystick well enough to play the game. They just got frustrated and gave up.

We shipped anyway.

This game became a bestseller.

[Image: alphabet-zoo]

It was placed on a list of educational games recommended by the National Education Association.

[Image: alphabet-zoo-list]

Source: Information Please Almanac, 1986

So, how to explain this?

Some years later, when I became a father, I understood. My son was able to play a lot of games that were too hard for him because I operated the controls. I spoke to at least one dad who did exactly that with Alphabet Zoo.

I guess the moral of the story is: we don’t necessarily know the value of our own creations.

Categories: Blogs

Dangers of Certainty in Realizing Customer Value

Don Quixote was certain he saw giants instead of windmills. In this epic story, he believed he knew the answers and saw what he wanted to see. Unfortunately, the same phenomenon exists in many organizations: a need to act as if we are certain. In fact, the higher up you go in an organization, the greater the compulsion to act with certainty becomes. Statements like “That’s why we pay you the big bucks” are used to imply that the higher you are in an organization, the more you are expected to just “know”.

Some think they must act with “pretend certainty” for the benefit of their career. Others have convinced themselves of “arrogant certainty”: they believe they know the answer or solution but don’t (or can’t) provide any solid basis for that certainty. Unfortunately this arrogance can be interpreted as confidence, which can be dangerous to the success of a company. Nassim Nicholas Taleb refers to “epistemic arrogance”, which highlights the difference between what someone actually knows and how much he thinks he knows; the excess implies arrogance. What has allowed certainty within companies to thrive is the distance between the upfront certainty and the time it takes to get to the final outcome. There is no accountability between certainty at the beginning and the actual results at the end. Often the difference is explained away by the incompetence of others who didn’t build or implement the solution correctly.

Of course, the truth is somewhere in between. The concept of certainty is actually dangerous to an enterprise because it removes the opportunity to acknowledge the options and to apply a discovery mindset toward real customer value via customer feedback loops and more.

We also want to avoid the inverse: remaining in uncertainty due to analysis paralysis. A way to avoid this is to work in an incremental framework with customer feedback loops that enable more effective and timely decision-making. Customer feedback provides us with the evidence for making better decisions. Applying an incremental mindset enables us to make smaller bets that are easier to make and allows us to adapt sooner.

A healthier and more realistic approach is to have leaders who understand that uncertainty is actually a smart starting position, and then apply processes that support gaining certainty. It is, therefore, incumbent upon us to have an approach that admits to limited information and uncertainty, and then applies a discovery process toward customer value. In the end, the beaten and battered Don Quixote forswears all the chivalric false certainty he followed so fervently. Is it time for management to give up the certainty mindset they think they have and instead replace it with a discovery mindset as a better path to customer success?
Categories: Blogs

Making the Earth Move

Hiccupps - James Thomas - Sat, 06/25/2016 - 09:59

In our reading group at work recently we looked at Are Your Lights On? by Weinberg and Gause. Opinions of it were mixed but I forgive any flaws it may have for this one definition:

  A problem is a difference between things as desired and things as perceived.

It's hard to beat for pithiness, but Michael Bolton's relative rule comes close. It runs:

  For any abstract X, X is X to some person, at some time.

And combining these gives us good starting points for attacking a problem of any magnitude:
  • the things
  • the perception of those things
  • the desires for those things
  • the person(s) desiring or perceiving
  • the context(s) in which the desiring or perceiving is taking place
Aspiring problem solvers: we have a lever. Let's go and make the earth move for someone!
Image: Wikimedia Commons
Categories: Blogs

AutoMapper 5.0 speed increases

Jimmy Bogard - Fri, 06/24/2016 - 23:43

Just an update on the work we’ve been doing to speed up AutoMapper. I’ve captured times to map some common scenarios (1M mappings). Time is in seconds:

  Version     Flattening   Ctor     Complex    Deep
  Native      0.0148       0.0060   0.9615     0.2070
  5.0         0.2203       0.1791   2.5272     1.4054
  4.2.1       4.3989       1.5608   134.39     29.023
  3.3.1       4.7785       1.3384   72.812     34.485
  2.2.1       5.1175       1.7855   122.0081   35.863
  1.1.0.118   6.7143       n/a      29.222     38.852

The complex mappings had the biggest variation, but across the board AutoMapper is *much* faster than previous versions. Sometimes 20x faster, 50x in others. It’s been a ton of work to get here, mainly from the change to a single configuration step, which lets us build execution plans that exactly target your configuration. We now build up an expression tree for the mapping plan based on the configuration, instead of evaluating the same rules over and over again.
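As a rough illustration of that idea (a hypothetical sketch, not AutoMapper's actual internals), a mapping delegate can be built once from an expression tree at configuration time and then reused for every map call:

```csharp
// Hypothetical sketch: compile a mapping plan once, reuse it for every mapping.
using System;
using System.Linq.Expressions;

public class Source { public string Name { get; set; } }
public class Dest   { public string Name { get; set; } }

public static class MappingPlan
{
    // Built (and compiled) a single time, at configuration time.
    public static readonly Func<Source, Dest> Map = BuildPlan();

    private static Func<Source, Dest> BuildPlan()
    {
        // new Dest { Name = src.Name }, expressed as an expression tree.
        var src  = Expression.Parameter(typeof(Source), "src");
        var body = Expression.MemberInit(
            Expression.New(typeof(Dest)),
            Expression.Bind(typeof(Dest).GetProperty("Name"),
                            Expression.Property(src, "Name")));
        return Expression.Lambda<Func<Source, Dest>>(body, src).Compile();
    }
}

// Usage: var dest = MappingPlan.Map(new Source { Name = "x" });
```

The expensive reflection happens once while the plan is built; the per-mapping cost is just a compiled delegate call.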

We *could* get marginally faster than this, but that would require us sacrificing diagnostic information or not handling nulls etc. Still, not too shabby, and in the same ballpark as the other mappers (faster than some, marginally slower than others) out there. With this release, I think we can officially stop labeling AutoMapper as “slow” ;)

Look for the 5.0 release to drop with the release of .NET Core next week!


Categories: Blogs

Reviewing "Context Driven Approach to Automation in Testing"

Chris McMahon's Blog - Fri, 06/24/2016 - 01:45


I recently had occasion to read the "Context Driven Approach to Automation in Testing". As a professional software tester with extensive experience in test automation at the user interface (both UI and API) for the last decade or more for organizations such as Thoughtworks, Wikipedia, Salesforce, and others, I found it a nostalgic mixture of FUD (Fear, Uncertainty, Doubt), propaganda, ignorance and obfuscation. 

It was weirdly nostalgic for me: take away the obfuscatory modern propaganda terminology and it could be an artifact directly out of the test automation landscape circa 1998 when vendors, in the absence of any competition, foisted broken tools like WinRunner and SilkTest on gullible customers, when Open Source was exotic, when the World Wide Web was novel. Times have changed since 1998, but the CDT approach to test automation has not changed with it. I'd like to point out the deficiencies in this document as a warning to people who might be tempted to take it seriously.

The opening paragraph is simply FUD. If we take out the opinionated language

poorly applied
terrible waste
confusion
pain
hard
shallow, narrow, and ritualistic
pandemic, rarely examined, and absolutely false

what's left is "Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests". This is of course trivially true, if not an outright tautology. The authors then proceed to demonstrate how little they know about such complexities.

The sections that follow, down to the bits about "Invest in...", are mostly propaganda with some FUD and straw-man arguments about test automation strewn throughout. ("The only reason people consider it interesting to automate testing is that they honestly believe testing requires no skill or judgment." Please, spare me.) If you've worked in test automation for some time (and if you can parse the idiosyncratic language), there is nothing new to read here; this was all answered long ago. Again, for me much of these ten or so pages brought strong echoes of the state of test automation in the late 1990s. If you are new to test automation, consider thinking of this part of the document as an obsolete, historical look into the past. There are better sources for understanding the current state of test automation.

The sections entitled (as of June 2016) "Invest in tools that give you more freedom in more situations" and "Invest in testability" are actually all good basic advice, I can find no fault in any of this. Unfortunately the example shown in the sections that follow ignores every single piece of that advice.

Not only does the example that fills the final part of the paper ignore every bit of advice the authors give, it is as if the authors have chosen a project doomed to fail, from the odd nature of the system they've chosen to automate, to the wildly inappropriate tools they've chosen to automate it with.

Their application to be tested is a lightweight text editor they've gotten as a native Windows executable. Cursory research shows it is an open source project written in C++ and Qt, and the repo on github  has no test/ or spec/ directory, so it is likely to be some sort of cowboy code under there. I assume that is why they chose this instead of, say, Microsoft Word or some more well engineered application.

Case #1 and Case #2 describe some primitive mucking around with grep, regular expressions, and configuration. It would have been easier just to read the source on github. If this sort of thing is new to you, you probably haven't been doing this sort of work long, and I would suggest you look elsewhere for lessons.

Case #3 is where things get bizarre. First they try automating the editor with something called "AutoHotKey", which seems to be some sort of ad-hoc collection of Windows API client calls, which according to the AutoHotKey project history is wildly buggy as of late 2013 but has had some maintenance off and on since then. I would not depend on this tool in a production environment.

That fails, so then they try some Ruby libraries. Support for Windows on Ruby is notoriously bad, it's been a sticking point in the Ruby community for years, and any serious Ruby programmer would know that. Ruby is likely the worst possible language choice for a native Windows automation project. If all you have is a hammer...

Then they resort to some proprietary tool from HP. You can guess the result.

Again, assuming someone would want to automate a third-party Windows/Qt app at all, anyone serious about automating a native Windows app would use a native Windows language, C# or VisualBasic.NET, instead of some hack like AutoHotKey. C# and VisualBasic.NET are really the only reasonable choices for such a project.
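For what it's worth, here is a minimal sketch of what that looks like in C# with the built-in UI Automation API (references UIAutomationClient and UIAutomationTypes; the window title here is only an illustrative assumption):

```csharp
// Hypothetical sketch: inspect a running native Windows app via UI Automation.
using System;
using System.Windows.Automation;

class NativeAutomationSketch
{
    static void Main()
    {
        // Find a top-level window by name (illustrative title, adjust as needed).
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Untitled - Notepad"));

        if (window == null)
        {
            Console.WriteLine("Window not found.");
            return;
        }

        // Enumerate the window's buttons as a trivial example of inspection.
        AutomationElementCollection buttons = window.FindAll(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button));
        Console.WriteLine("Buttons found: " + buttons.Count);
    }
}
```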

It is as if this project has been deliberately or naively sabotaged. If this was done deliberately, then it is highly misleading; if naively, then it is simply sad.

Finally I have to point out (relevant to the article section "Invest in testability", and again strong shades of 1998) that this paper completely ignores the undeniable fact that the vast majority of modern software development takes place on the web, with the UI appearing in a web browser and APIs offered from servers over a network. This article makes no mention that Selenium/WebDriver is a UI automation standard adopted by the World Wide Web Consortium (W3C), that the WebDriver automation interface is fully supported by every major browser vendor (Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, Opera, and most recently Apple Safari), or that the Selenium API is fully supported in five programming languages (C#, Java, Ruby, Python, and JavaScript) and partially supported in many more.
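As a minimal sketch of that API in C# (assuming the Selenium WebDriver package and a Chrome driver are installed; the URL is only a placeholder):

```csharp
// Hypothetical sketch: drive a browser through the Selenium WebDriver API.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class WebDriverSketch
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com");
            IWebElement heading = driver.FindElement(By.TagName("h1"));
            Console.WriteLine(heading.Text);
        }
    }
}
```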

Ultimately, this article is mostly FUD, propaganda, and obfuscation. The parts that are not actually wrong or misleading are naive and trivial. Put it like this: if I were considering hiring someone for a testing position, and they submitted this exercise as part of their application, I would not hire them, even for a junior position. I would feel sorry for them.



Categories: Blogs

Who I am and where I am June 2016

Chris McMahon's Blog - Wed, 06/22/2016 - 07:09


From time to time I find it helpful to mention where I am and how I got here. I have been pretty quiet since 2010 but I used to say a lot of stuff in public.

For the past year I have worked for Salesforce.org, formerly the Salesforce Foundation, the independent entity that administers the philanthropic programs of Salesforce.com. My team creates free open source software for the benefit of non-profit organizations.  I create and maintain automated browser tests in Ruby, using Jeff "Cheezy" Morgan's page_object gem.  I'm a big fan.

My job title is "Senior Member of the Technical Staff, Quality Assurance".  I have no objection to the term "Quality Assurance", that term accurately describes the work I do. I am known for having said "QA Is Not Evil".

Before Salesforce.org I spent three years with the Wikimedia Foundation, working with Željko Filipin mostly, on a similar browser test automation project, but much larger.

I worked for Socialtext, well known in some circles for excellent software testing. I worked for the well known agile consultancy Thoughtworks for a year, just when the first version of Selenium was being released. I started my career testing life-critical software in the US 911 telecom systems, both wired/landline and wireless/mobile.

I have been 100% remote/telecommuting since 2007. Currently I live in Arizona, USA.

I used to give talks at conferences, including talks at Agile2006, Agile2009, and Agile2013. I've been part of the agile movement since before the Manifesto existed.  I attended most of the Google Test Automation Conferences  held in the US. I have no plans to present at any open conferences in the future.

I wrote a lot about software test and dev, mostly around 2006-2010. You can read most of it at StickyMinds and TechTarget, and a bit at PragProg.

I hosted two peer conferences in 2009 and 2010 in Durango Colorado called "Writing About Testing". They had some influence on the practice of software testing at the time, and still resonate from time to time today.

I create UI test automation that finds bugs. Before Selenium existed I was user #1 for WATIR, Web Application Testing In Ruby. I am quoted in both volumes of Crispin/Gregory's Agile Testing, and I am a character in Marick's Everyday Scripting.
Categories: Blogs

Usability Testing at the Cafe

Testing TV - Tue, 06/21/2016 - 16:16
Surprisingly, up to 85% of core usability problems can be found by observing just 5 people using your application. Conducting quick usability testing at a cafe is very effective, cheap, and doesn’t require any special tools.

Resources:
  • Why You Only Need to Test with 5 Users
  • Usability testing questionnaire
Categories: Blogs

Too controversial?

On May 11 2016 TestNet (*) held its spring conference with “Strengthen your foundation: new skills for testers” as the central theme. The call for papers that was sent out made me frown. It said:

“In the final keynote of the TestNet autumn event, speaker Rini van Solingen referred to the end of software testing as we know it. ‘What one can learn in merely four weeks, does not deserve to be called a profession’, he stated. But is that true? Most of our skills, we learn on the job. There are many tools, techniques, skills, hints and methods not typical for the testing profession but essential for enabling us to do a good job nonetheless. Furthermore the testing profession is constantly evolving as a result of ICT and business trends. Not only functional testing, but also performance, security or other test varieties. This presses us to expand our knowledge, not just the testing skills, but also of the contexts in which we do our jobs. The TestNet Spring Event 2016 is about all topics that are not addressed in our basic testing course, but enable us to do a better job: knowledge, skills, experience.”

I think that there are a lot of skills that are not addressed in our “basic testing course” where they should have been addressed. I am talking about basic testing skills! So I wrote an abstract for a keynote for the conference:

The theme for the spring event is “Strengthen your foundation: new skills for testers”. My story takes a step back: to the foundation! Because I think that the foundation of most testers is not as good as they think. The title would then be: “New skills for testers: back to basics!”

Professional testers are able to tell a successful story about their work. They can cite activities and come up with a thorough overview of the skills they use. They are able to explain what they do and why. They can report progress, risk and coverage at any time. They will gladly explain what oracles and heuristics they use, know everything about the product they are testing and are deliberately trying to learn continuously.

It surprises me that testers regularly can’t give a proper definition of testing, let alone describe what testing is. A large majority of people who call themselves professional testers cannot explain what they do when they are testing. How can anyone take a tester seriously if he or she cannot explain what he or she is doing all day? Try it: go to one of your testing colleagues and ask what he or she is doing and why it contributes to the mission of the project. Nine out of ten testers I’ve asked this simple question start to stutter.

What exactly do you do if you use a “data combination test” or a “decision table”? What skills do you use? “Common sense” in this context does not answer the question because it is not a skill, is it? I think of: modeling, critical thinking, learning, combining, observing, reasoning, drawing conclusions, just to name a few. Looking in detail at what skills you are actually using helps you recognize which skills you could or should train. A solid foundation is essential to build on in the future!

How can you learn the right skills if you do not know what skills you are using in the first place? In this presentation I will take the audience back to the core of our business: skills! By recognizing the skills and training them, we are able to think and talk about our profession with confidence. The ultimate goal is to tell a good story about why we test and the value it adds.

We need a solid foundation to build on!

My keynote wasn’t selected. So I sent it in as a normal session, since I really am bothered by the lack of insight in our community. But it didn’t make it onto the conference program as a normal session either. Why? Because it is too controversial, they told me. After I applied for the keynote, the chairman called to tell me that they weren’t going to ask me to do a keynote because they did not want a “negative” sound on stage. I guess I can imagine that you do not want to start the day with a keynote speaker who destroys your theme by saying that we need to strengthen our foundation first before moving on.

But why is this story too controversial for the conference at all? I guess it is (at least in the eyes of the program committee) because we don’t like to admit that we lack skills. That we don’t really know how to explain testing. I wrote about that before here.  It bothers me that we think our foundation is good enough, while it really isn’t! We need to up our game and being nice and ignoring this problem isn’t going to help us. A soft and nice approach doesn’t wake people up. That is why I wanted to shake this up a bit. To wake people up and give them some serious feedback … I wrote about serious feedback before here. But the Dutch Testing Community (represented by TestNet) finds my ideas too controversial…

 

(*) TestNet is a network of, by and for testers. TestNet offers its members the opportunity to maintain contacts with other testers outside the immediate work environment and share knowledge and experiences from the field.

Categories: Blogs

10 Lessons from a Long Running DDD Project – Part 2

Jimmy Bogard - Mon, 06/20/2016 - 21:04

In Part 1 of this 2-part series, I walked through some lessons learned from the first incarnation of our project. The original project I’d still qualify as a success, in that it was delivered on-time, within budget, and is still under active development today. But we learned a lot of lessons from that project, and were lucky enough to have another crack at it so to speak when we started a new project, in the almost exact domain, but this time the constraints were quite a bit different.

In the first project, we targeted everyone that could possibly be involved with the overall process. This wound up being a dozen state agencies and countless other groups and sub-groups. There was quite a lot of contention in the model (also a great reason why you can never have a single master data model for an entire enterprise). We felt good about the software itself – it was modular and easy to extend – but the domain model itself just couldn’t satisfy all the users involved, only really a subset.

The second project targeted only a single aspect of the original overall legal process – the prosecution agency. Targeting just a single group, actually a single agency, brought tremendous benefits for us.

Lesson 6: Cohesiveness brings greater clarity and deeper insight

Our initial conversations in the second project were somewhat colored by our first project. We started with an assumption that the core focus, the core domain would be at least the same as the monolith, but maybe a different view of it. We were wrong.

In the new version of the app, the entire focus of the system revolves around “cases”. I know, crazy that an app built for the day-to-day functions of a prosecution agency focuses centrally on a case:

[image]

Once we settled on the core domain, the possibilities then greatly opened up for modeling around that concept. Because the first app only tangentially dealt with cases (there wasn’t even a “Case” in the original model), it was more or less an impedance mismatch for its users in the prosecution agency. It was a bit humbling to hear the feedback from the prosecutors about the first project.

But in the second project, because our core domain was focused, we could spend much more time modeling workflows and behaviors that fit what the prosecution agency actually needed.

Lesson 7: Be flexible where you need to, rigid in others

Although we were able to come to a consensus amongst prosecution agencies about what a case was, what the key things you could DO with a case were and the like, we couldn’t get any consensus about how a case should be managed.

This makes a lot of sense – the state has legal reporting requirements and the courts have a ton of procedural rules, but internal to an agency, they’re free to manage the work any way they wanted to.

In the first system, roles were baked in to the system, causing a lot of confusion for counties where one person wore many different hats. In the new system, permissions were hard-coded against tasks, but not roles:

[image]

The Permission here is an enum, and we tied permissions to tasks like “Approve Case” and “Add Evidence” and “Submit Disposition” etc. Those were directly tied to actions in our application, and you couldn’t add new permissions without modifying the code.

Roles (or groups, whatever) were not hardcoded, and left completely up to each agency how they liked to organize their work and decide who can do what.
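A minimal sketch of that split (illustrative only, not the project's actual code) might look like this:

```csharp
// Hypothetical sketch: permissions are a fixed enum tied to application actions,
// while roles are plain data that each agency configures for itself.
using System.Collections.Generic;

public enum Permission
{
    ApproveCase,
    AddEvidence,
    SubmitDisposition
}

// Roles are defined per agency at runtime, not hard-coded in the system.
public class Role
{
    public string Name { get; set; }
    public HashSet<Permission> Permissions { get; } = new HashSet<Permission>();

    public bool Can(Permission permission) => Permissions.Contains(permission);
}
```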

With DDD it’s important to model both the rigid and flexible, they’re equally important in the overall model you build.

Lesson 8: Sometimes you need to invent a model

While we were able to model quite well the actions one can perform with an individual case, it was immediately apparent when visiting different county agencies that their workflows varied significantly inside their departments.

This meant we couldn’t do things like implement a workflow internal to a case itself – everyone’s workflow was different. The only thing we could really embed were procedural/legal rules in our behaviors, but everything else was up for grabs. But we still wanted to manage workflows for everyone.

In this case, we needed to build consensus for a model that didn’t really exist in each county in isolation. If we focused on a single county, we could have baked the rules about how a case is managed into their individual system. But since we were building a system across counties, we needed to build a model that satisfied all agencies:

[image]

In this model, we explicitly built a configurable workflow, with states and transitions and security roles around who could perform those transitions. While no individual county had this model, it was the meta-model we found while looking across all counties.
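A hypothetical sketch of such a meta-model (names are illustrative, not the project's actual code):

```csharp
// Hypothetical sketch: states, transitions and the roles allowed to perform them
// are all configuration data, so each county can define its own workflow.
using System.Collections.Generic;
using System.Linq;

public class WorkflowState
{
    public string Name { get; set; }
    public List<Transition> Transitions { get; } = new List<Transition>();
}

public class Transition
{
    public string Name { get; set; }
    public WorkflowState To { get; set; }
    public List<string> AllowedRoles { get; } = new List<string>();

    public bool CanBePerformedBy(string role) => AllowedRoles.Contains(role);
}

public class CaseWorkflow
{
    public WorkflowState Current { get; set; }

    public void Perform(string transitionName, string role)
    {
        var transition = Current.Transitions
            .FirstOrDefault(t => t.Name == transitionName && t.CanBePerformedBy(role));
        if (transition == null)
            throw new System.InvalidOperationException("Transition not allowed for this role.");
        Current = transition.To;
    }
}
```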

Lesson 9: Don’t blindly follow pattern advice

In the new app, I performed an experiment. I would only add tools, patterns, and libraries when the need presented itself but no sooner. This meant I didn’t add a repository, unit of work, services, really anything until an actual pain surfaced. Most of the DDD books these days have prescriptive guidance about what your domain model should look like, how you should do repositories and so on, but I wanted to see if I could simply arrive at these patterns by code smells and refactoring.

The funny thing is, I never did. We left out those patterns, and we never found a need to put them back in. Instead, we drove our usage around CQRS and the mediator pattern (something I’ve used for years but finally extracted our internal usage into MediatR). Our controllers were pretty uniform in their appearance:

[image]

And the handlers themselves (as I’ve blogged about many times) were tightly focused on a single action, with no need to abstract anything:

[image]
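For readers without the original screenshots, here is a hypothetical sketch of that shape using MediatR-style request handling (illustrative only, not the project's actual code): the controller only dispatches, and each handler owns exactly one action.

```csharp
// Hypothetical sketch: a thin controller dispatching to a single-purpose handler.
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

public class ApproveCase : IRequest<bool>
{
    public int CaseId { get; set; }
}

public class ApproveCaseHandler : IRequestHandler<ApproveCase, bool>
{
    public Task<bool> Handle(ApproveCase request, CancellationToken cancellationToken)
    {
        // Load the case, check permissions, apply the approval... (omitted here).
        return Task.FromResult(true);
    }
}

public class CasesController : Controller
{
    private readonly IMediator _mediator;

    public CasesController(IMediator mediator) => _mediator = mediator;

    [HttpPost]
    public async Task<IActionResult> Approve(ApproveCase command)
        => Ok(await _mediator.Send(command));
}
```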

I’ve extended this to other areas of development too, like front-end development. It’s actually kinda crazy how far you can get without jQuery these days, if you just use lodash and the DOM.

Lesson 10: Microservices and anti-corruption layers are your friend

There is a downside to going to bounded contexts and away from the “majestic monolith”, and that’s integration. Now that we have an application solely dealing with one agency, we have to communicate between different applications.

This turned out to be a bit easier than we thought, however. This domain existed well before computers, so the interfaces between the prosecution and external parties/agencies/systems were very well established.

This was also the section of the book skipped the most, around anti-corruption layers and bounded contexts. We had to crack open that section of the book, dust it off, smell the smell of pages never before read, and figure out how we should tackle integration.

We’ve quite a bit of experience in this area it turns out, so it was really just a matter of deciding for each 3rd party what kind of integration would work best.

[image]

For some 3rd parties, we could create an entirely separate app with no integration. Some needed a special app that performed the translation and anti-corruption layer, and some needed an entirely separately deployed app that communicated to our system via hypermedia-rich REST APIs.

Regardless, we never felt we had to build a single solution for all involved. We instead picked the right integration for the job, with an eye of not reinventing things as we went.
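As a rough sketch of the anti-corruption idea (all names here are illustrative), a small translator keeps the third party's vocabulary from leaking into the core domain:

```csharp
// Hypothetical sketch: an anti-corruption layer translating an external
// representation into this bounded context's own model.
public class ExternalCourtFiling          // the third party's shape
{
    public string DocketNo { get; set; }
    public string DefendantFullName { get; set; }
}

public class CaseReferral                 // our bounded context's shape
{
    public string CaseNumber { get; set; }
    public string DefendantName { get; set; }
}

public class CourtFilingTranslator
{
    public CaseReferral Translate(ExternalCourtFiling filing) => new CaseReferral
    {
        CaseNumber = filing.DocketNo,
        DefendantName = filing.DefendantFullName
    };
}
```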

Conclusion

In both cases, I’d say our systems were successful, since they shipped and are both being used and extended to this day. With the more tightly focused domain in the second system we were able to achieve that “greater insight” that the DDD book talks about.

In case anyone wonders, I intentionally did not talk about actors or event sourcing in this series – both things we’ve done and shipped, but found the applicability to be limited to inside a bounded context (or even more typically, a corner of a bounded context). Another post for another day!


Categories: Blogs

How to write a test strategy

I’ve documented my overall approach to rapid, lightweight test strategy before but thought it might be helpful to post an example.  If you haven’t read the original post above, see that first.

This is a sanitised version of the first one I ever did, and while there are some concessions to enterprise concerns, it mostly holds up as a useful example of how a strategy might look. I’m not defending this as a good strategy, but I think this worked as a good document of an agreed approach.

Project X Test Strategy

Purpose

The purpose of this document is –

  • To ensure that testing is in line with the business objectives of Company.
  • To ensure that testing is addressing critical business risks.
  • To ensure that tradeoffs made during testing accurately reflect business priorities
  • To provide a framework that allows testers to report test issues in a pertinent and timely manner.
  • To provide guidance for test-related decisions.
  • To define the overall test strategy for testing in accordance with business priorities and agreed business risks.
  • To define test responsibilities and scope.
  • To communicate understanding of the above to the project team and business.
Background

Project X release 2 adds reporting for several new products, and a new report format.

The current plan is to add metric capture for the new products, but not generate reports from this data until the new reports are ready.  Irrespective of whether the complete Project X implementation is put into production, the affected products must still be capturing Usage information.

Key Features
  • Generation of event messages by the new products (Adamantium, DBC, Product C, Product A and Product B).
  • New database schema for event capture and reporting
  • New format reports with new fields for added products
  • Change of existing events to work with the new database schema
Key Dates
  • UAT – Mon 26/03/07 to Tue 10/04/07
  • Performance/Load – Wed 28/03/07 to Wed 11/04/07
  • Production – Thu 12/04/07
Key Risks

Each risk below is listed with its impact, mitigation strategy and risk area.

Risk: The team’s domain knowledge of applications being modified is weak or incomplete for key products.
Impact: Impact of changes to existing products may be misjudged by the development team and products adversely affected.
Mitigation Strategy:
  1. Product C team to modify their application to create the required messages and perform regression testing.
  2. Product B application will not be modified. Only the logs will be processed.
  3. Product A has good Selenium and unit test coverage, and domain skills exist in the team.
  4. Greater focus will be placed on creating regression tests for Product D and leveraging automated QTP scripts from other teams.
Risk area: Project

Risk: Data architect being replaced.
Impact: Supporting information that is necessary for generating reports may not be captured. There may be some churn in technical details.
Mitigation Strategy:
  1. Development team has improved domain knowledge from first release.
  2. Intention is to provide improved technical specifications and mapping documents.
  3. Early involvement of reporting testers to inspect output and provide up-front test cases.
Risk area: Project

Risk: Strategy for maintaining version 1 and version 2 of the ZING database in production has not been defined.
Impact: Test strategy may not be appropriate.
Mitigation Strategy: None.
Risk area: Project

Risk: Insufficient time to test all events and perform regression testing of existing products.
Impact: Events not captured, not captured correctly, or applications degraded in functionality or performance.
Mitigation Strategy:
  1. Early involvement of reporting testers to inspect output and provide up-front test cases (defect prevention).
  2. Additional responsibility of developers to write dbUnit tests.
These two activities will free testers to focus on QTP regression scripts.
Risk area: Project

Risk: Technical specifications and mapping documents not ready prior to story development.
Impact: Mappings may be incorrect. Test-to-requirement traceability difficult to retrofit.
Mitigation Strategy: Retrofit where time permits. Business to determine value of this activity.
Risk area: Project

Risk: XML may not be correctly transformed.
Impact: Incorrect data will be collected.
Mitigation Strategy: Developers will use dbUnit to perform integration testing of XML to database mapping. This will minimise error in human inspection.
Risk area: Product

Risk: Usage information may be lost. It is critical that enough information be captured to relate event information to customers, and that the information is correct.
Impact: No Usage information available.
Mitigation Strategy: Alternate mechanisms exist for capturing information for Product A and Product B. Product B needs to implement a solution. Regression testing needs to ensure that existing Product D events are unaffected.
Risk area: Product

Risk: No robust and comprehensive automated regression test suite for Product D components. May not be time to develop a full suite of QTP tests for all events and field mappings.
Impact: Product D regressions introduced, or regression testing of Product D requires extra resourcing.
Mitigation Strategy: Will attempt to leverage Product D scripts from other projects and existing scripts, while extending the QTP suite.

Risk: Project X changes affect performance of existing products.
Impact: Downtime of Product D products and/or loss of business.
Mitigation Strategy: Performance testing needs to cover combined product tests and individual products compared to previous benchmark performance results.
Risk area: Parafunctional

Risk: Project X may affect products when under stress.
Impact: Downtime of Product D products and/or loss of business.
Mitigation Strategy: Volume tests should simulate large tables, full disks and overloaded queues to see impact to application performance.
Risk area: Parafunctional

Risk: Reliability tests may not have been performed previously. That is, tests that all events were captured under load. (I need to confirm this.)
Impact: Usage information may be going missing.
Mitigation Strategy: Performance testing should include some database checks to ensure all messages are being stored.
Risk area: Historical

Risk: Unable to integrate with the new reports prior to release of new event capturing.
Impact: Important data may not be collected or data may not be suitable for use in reports.
Mitigation Strategy:
  1. Production data being collected after deployment needs to be monitored.
  2. Output of transformations to be inspected by reports testers.
  3. Domain knowledge of developers is improved.
Risk area: Product

Project X Strategy Model

[Diagram: High-level architecture of application under test showing key interfaces and flows]

The diagram above defines the conceptual view of the components for testing.  From this model, we understand the key interfaces that pertain to the test effort, and the responsibilities of different subsystems.

Products (Product D, Product A, Product C, Product B)

Capabilities
  • Generate events
Responsibilities
  • Event messages should be generated in response to the correct user actions.
  • Event messages should contain the correct information
  • Event message should generate well-formed XML
  • Error handling?
SCE

Capabilities
  • Receive events
  • Pass events to the ZING database

Responsibilities

  • Transform event XML to correct fields in ZING database for each event type
  • Error handling?
Reports

Capabilities
  • Transform raw event information into aggregate metrics
  • Re-submit rejected events to ZING
  • Generate reports
Responsibilities
  • Correctly generate reports for event data which meets specifications
  • Correct data and re-load into ZING.
Interfaces

Product to SCE

This interface will not be tested in isolation.

SCE to ZING

Developers will be writing dBUnit integration tests, which will take XML messages and verify that the values in the XML are mapped to the correct place in the ZING database.

ZING to Reports

The reporting component will not be available to test against, and domain expertise may not be as strong as for previous releases with the departure of senior personnel. Available domain experts will be involved as early as possible to validate the contents of the ZING database.

Product to ZING

System testing will primarily focus on driving the applications and ensuring that –

  • Application’s function is unaffected
  • Product generates events in response to correct user actions
  • XML can be received by SCE
  • Products send the correct data through
Key testing focus
  • Ensuring existing event capture is unaffected (PRODUCT D).
  • Ensuring event details correctly captured for systems.  This is more critical for systems in which there is currently no alternative capture mechanism (Product C, Adamantium, DBC).  Alternative event capture mechanisms exist for Product B and Product A.
  • Ensuring existing system functionality is not affected.  Responsibility for Product C’s regression testing will lie with Product C’s team.  There is no change to the Product B application, but sociability testing may be required for log processing.  Product A has an effective regression suite (selenium), so the critical focus is on testing of PRODUCT D functionality.
Test prioritisation strategy

These factors guide prioritisation of testing effort:

  • What is the application’s visibility?  (ie. Cost of failure)
  • What is the application’s value? (ie. Revenue)

For the products in scope, cost of failure and application value are proportional.

There may be other strategic factors as presented by the business as we go, but the above are the primary drivers.

Priority of products –

  1. Adamantium/Product D
  2. Product C
  3. Product A
  4. Product B
  5. DBC

Within Product D, the monthly Usage statistics show the following –

  • 97% of searches are business type or business name searches.
  • 3% of searches are browse category searches
  • Map based searches are less than 0.2% of searches
Test design strategy

Customer (Acceptance) Tests

For each event, test cases should address:

  • Ensuring modified applications generate messages in all expected situations.
  • Ensuring modified applications generate messages correctly (correct data and correct XML).
  • Ensuring valid messages can be processed by SCE.
  • Ensuring valid messages are transformed correctly and go to the specified database fields.
  • Ensuring data in the database is acceptable for reporting needs.
Regression Tests

For each product where event sending functionality is added:

  • All other application functionality should be unchanged

Additionally, the performance test phase will measure the impact of modifications to each product.

Risk Factors

These tests correspond to the following failure modes –

  • Events are not captured at all.
  • Events are captured in a way which renders them unusable.
  • Systems whose code is instrumented to allow sending of events to SCE are adversely affected in their functionality.
  • Event data is mapped to field(s) incorrectly
  • Performance is degraded
  • Data is unsuitable for reporting purposes
Team Process

The development phase will consist of multiple iterations.

  1. At the beginning of each iteration, the planning meeting will schedule stories to be undertaken by the development team.
  2. The planning meeting will include representatives from the business, test and development teams.
  3. The goal of the planning meeting is to arrive at a shared understanding of scope for each story and acceptance criteria and record that understanding via acceptance tests in JIRA.
  4. Collaboration through the iteration to ensure that stories are tested to address the business needs (as defined by business representatives and specifications) and risks (as defined by business representatives and agreed to in this document).  This may include testing by business representatives, system testers and developers.
  5. The status of each story will be recorded in JIRA.

When development iterations have delivered the functionality agreed to by the business, deployment to environments for UAT and Performance and Load testing will take place.

Deliverables

High priority
  • QTP regression suite for PROJECT X events (including Adamantium, DBC) related to business type and business name searches
  • Test summary report prior to go/no go meeting
Secondary priority
  • QTP regression suite for Product A (Lower volume, fewer events and the application already collects metrics).  Manual scripts and database queries will be provided in lieu of this.
Other
  • Product C should create PROJECT X QTP regression tests as part of their development work
  • Product B test suite will likely not be a QTP script as log files are being parsed as a batch process.  GUI regression scripts will be suitable when Product B code is instrumented to add event generation.  If time permits, we will attempt to develop a tool to parse a log file and confirm that the correct events were generated.
To do
  • Confirm strategy with stakeholders
  • Confirm test scope with Product C testers
  • Confirm events that are in scope for this release
  • Define scope of Product D testing and obtain Product D App. Sustain team testers for regression testing.
Categories: Blogs

Stop Calling It Theft: Thoughts on TheDAO

Radyology - Ben Rady - Sat, 06/18/2016 - 01:41
Like many people involved in Ethereum, my attention has been thoroughly captured by the recent events surrounding TheDAO. As an Ethereum miner, I have a little stake in this game. The reentrancy vulnerability found in TheDAO smart contract has resulted... Ben Rady
Categories: Blogs

Auto Did Act

Hiccupps - James Thomas - Fri, 06/17/2016 - 06:54

You are watching me and a machine interacting with the same system. Our actions are, to the extent that you can tell from your vantage point, identical, and the system is in the same state at each point in the sequence of actions for both of us. You have been reassured that the systems are identical in all respects that are relevant to this exercise; you believe that all concerned in setting it up are acting honestly with no intention to mislead, deceive, distort or otherwise make a point. The machine and I performed the same actions on the same system with the same visible outcomes.

Are we doing the same task?

This is a testing blog. You are a tester. You have been around the block. More than once. You perhaps think that I haven't given you enough information to answer this question with any certainty. What task is being performed? Are the visible outcomes the only outcomes? To what extent does skill and adaptability form part of the task? To what extent does interpretation on the part of the actor need to happen for the task to be completed successfully? What does success mean in this task anyway? Was the task completed successfully in the examples you watched? What does it mean to be the "same task" here? And from whose perspective?

This is a testing blog. I am a tester. I've also been round the block. More than once. More than twice. I've recently finished reading Harry Collins' The Shape of Actions and, while I'll say up front that I found it reasonably hard-going, it was also highly thought-provoking. This post pulls out just one fragment of the argument made in that book, but one that I find particularly interesting:
  Automation of some task becomes tractable at the point where we become indifferent to the details of it.

There's probably some task that you perform regularly that was once tricky. Maybe it's one of those test setup tasks that involve getting the right components into the right configurations in relation to one another. One of the ones that means finding the right sequence of commands, in the right order, with the right timing, given the other things that are also in the environment.

As you re-ran this task, you began to learn what was significant to the task, which starting conditions influenced which steps, what could be done in parallel and what needed a particular sequence. You used to need to pay attention, exercise skill and judgment, take an active role. These days you just punch keys as efficiently as possible until it's done. You don't look at the options on dialog boxes, you don't inspect the warnings that flash up on the console, you don't even stop checking Twitter on your other monitor. Muscle memory drives the process. Any tacit knowledge you were employing to coax your setup into being has been codified into explicit knowledge. You just need it to be done, and as quickly as possible.

You have effectively automated your task.

As a manager, I recognise an additional layer to this. Sometimes managers don't care (or, perhaps, don't care to think about) how a task is implemented and may thus mistake it for a task which can be automated. But the management perspective can be deceptive. Just because one actor in some task doesn't have to exercise skill, it doesn't mean that no skill is required for any aspect of the task by any actor.

Which reminds me of another Collins book, and a quote that I love from it: distance lends enchantment.
Image: https://flic.kr/p/8tf8q9
Categories: Blogs

State of Testing 2016 – My view

Markus Gaertner (shino.de) - Thu, 06/16/2016 - 20:44

Usually I don’t write many promotions for others’ content on this blog as I try to keep it personal and focused on my personal views. Recently I was contacted about the International 2016 State of Testing report, and whether I would like to do a write-up about it. I asked whether it would be ok to post a personal view, so here it is.

Demographics – and what do they tell me?

The top areas from the report are Europe (& Russia), USA, and India. I think these are also the biggest areas when it comes to software testing. The demographics tell me that, as far as I can judge, the data is not very biased but well spread.

About a third of the respondents work across four different locations. Another third work in a single location. My personal view on this is that there is a good mix of testers working in one location, and many more spread across different locations. I think this might stem from out-sourcing companies as well as companies working across different sites for various reasons – even though this usually makes the formation of real teams hard, at least in my experience.

Most of the respondents have working experience of five years or more. I think testers new to the field usually don’t turn their attention to this kind of survey right away. I think this is tragic, as in the long run we should be working on integrating people new to the field more easily.

Many test managers also appear in the survey data. This seems quite unusual to me, as there are certainly way more testers than test managers – I hope. This usually raises the question for me of why there are so few testers passionate about their craft. In some way this is tragic, but it resembles the state of the industry.

Interestingly, on time management, most of testers’ time seems to be spent on documentation (51%) and dealing with environments (49%). That’s sort of weird, but it also resembles my experience with more and more open source tools, and more and more programmers not really caring how their stuff can be tested or even brought to production. On the other hand I notice many problems with test data-centric automation approaches, where handling test data appears to be the biggest worry in many organizations. I usually attribute that to bad automation, as an automated test usually should be easy to deal with, and create its own test data set that it operates on – a problem well addressed in the xUnit Test Patterns book in my opinion – but few people appear to know about that book.
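For what it's worth, a minimal sketch of that idea in C#/xUnit (illustrative only): each test builds the data it operates on, rather than relying on a shared, pre-existing data set.

```csharp
// Hypothetical sketch: a "fresh fixture" style test that creates its own data.
using System.Collections.Generic;
using System.Linq;
using Xunit;

public class Order
{
    private readonly List<decimal> _lines = new List<decimal>();
    public void AddLine(decimal amount) => _lines.Add(amount);
    public decimal Total => _lines.Sum();
}

public class OrderTests
{
    [Fact]
    public void Total_is_sum_of_line_amounts()
    {
        // Arrange: the test creates everything it needs.
        var order = new Order();
        order.AddLine(10m);
        order.AddLine(15m);

        // Act + Assert.
        Assert.Equal(25m, order.Total);
    }
}
```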

Skills – and what you should look out for?

This sort of transitions my picture to the skills section. Testers appear to use a couple of approaches, foremost Exploratory Testing with 87%. There are also 60% mentioning they use scripted testing. This also matches my experience, since testing rarely is purely exploratory or purely scripted. I think the majority of testers claiming they use Exploratory Testing is either a signal of the rise of context-driven testing in general, or a bias in the data. I think it’s more of the former.

I liked that test documentation is getting leaner. With testers formerly spending 51% of their time on documentation, this is certainly a good thing. At the conferences I attend I see more and more sessions on how to use mindmaps for that. About a third of the respondents said they already use mindmaps. I think that’s a good signal.

Even though the authors claim that formal training is on the rise when it comes to the skills of testers and their respective education, many testers are still trained through training on the job and mentoring, as well as learning from books and online resources. I think this is a good trend, since I doubt that formal training will be able to keep up with transferring skills in the long run. It can inspire testers to dive deeper into certain topics, but on-the-job training and mentoring, as well as active reflection on material that you read, is a good thing, and way more powerful.

Unsurprisingly, communication skills are the number one necessary skill for testers (78%). The next skill set that a tester needs according to the survey is functional testing and automation, web technologies, and general testing methodologies. That sort of resembles my past as a tester, and the skills I put effort into. Unsurprisingly, 86% of the respondents claimed that they have test automation in place.

More Agile – less concerned

It seems that waterfall approaches are on the decline, even in the testing world. In 2015, 42% mentioned they used Waterfall. In 2016 it was only 39%. 82% responded they used Agile – maybe every once in a while.

Even though the testing community, given its history, is usually concerned about job safety, this rise of Agile methodologies didn’t lead to more testers being concerned. Compared to 2015, where 42% were not concerned about their job, in 2016 53% of the folks are unconcerned. That might be related to context-driven approaches becoming more widespread.

This is just a summary with certain picks of my own. I encourage you to dive into the State of Testing survey report on your own to get more details.


Categories: Blogs

Taking a (testing) break

Stefan Thelenius about Software Testing - Thu, 06/16/2016 - 16:49
This post is to inform that I am taking a break from being a professional software tester.

For those of you who follow me on Twitter it might not come as a surprise, since I have been more into tweets about the domain of finance and pensions rather than specific testing related stuff during the last few years.

I got some new energy after the Let's Test conference last year but it was not enough to fill up the amount of passion for testing that is necessary to remain relevant in this occupation (in my opinion).

I get my kicks nowadays providing guidelines for consumers, and I will start working on a small bureau providing independent consumer guidelines for pension and insurance products.

I won't be closing any doors and I might return as a tester someday somewhere.

Until then I am thankful for all of the good stuff I have learned during my 20 years in the role and for all the amazing people I have met during this period of my life.

Au Revoir

/Stefan
Categories: Blogs

Forward Looking

Hiccupps - James Thomas - Wed, 06/15/2016 - 18:51

Aleksis Tulonen recently asked me for some thoughts on the future of testing to help in his preparation for a panel discussion. He sent these questions as a jumping-off point:
  • What will software development look like in 1, 3 or 5 years?
  • How will that impact testing approaches?
I was flattered to be asked and really enjoyed thinking about my answers. You can find them at The Future of Testing Part 3 along with those of James Bach, James Coplien, Lisa Crispin, Janet Gregory, Anders Dinsen, Karen Johnson, Alan Page, Amy Phillips, Maaret Pyhäjärvi, Huib Schoots, Sami Söderblom and Jerry Weinberg.
Image: https://flic.kr/p/pLsXJh
Categories: Blogs

Case Studies in Terrible Testing

Testing TV - Tue, 06/14/2016 - 10:29
Projects fail because they don’t test. Some fail because they test the wrong things. Others fail because they test too much. This session shares project case studies in software testing atrocities and what can be learned from them. You’ll come away questioning your own software testing. Check your dogma and let’s build better software. Video […]
Categories: Blogs

10 Lessons from a Long Running DDD Project – Part 1

Jimmy Bogard - Mon, 06/13/2016 - 18:14

Round about 7 years ago, I was part of a very large project which rooted its design and architecture around domain-driven design concepts. I’ve blogged a lot about that experience (and others), but one interesting aspect of the experience is we were afforded more or less a do-over, with a new system in a very similar domain. I presented this topic at NDC Oslo (recorded, I’ll post when available).

I had a lot of lessons learned from the code perspective, where things like AutoMapper, MediatR, Respawn and more came out of it. Feature folders, CQRS, conventional HTML with HtmlTags were used as well. But beyond just the code pieces were the broader architectural patterns that we more or less ignored in the first DDD system. We had a number of lessons learned, and quite a few were from decisions made very early in the project.

Lesson 1: Bounded contexts are a thing

Very early on in the first project, we laid out the personas for our application. This was also when Agile and Scrum were really starting to be used in the large, so we were all about using user stories, personas and the like.

We put all the personas on giant post-it notes on the wall. There was a problem. They didn’t fit. There were so many personas, we couldn’t look at all of them at once.

So we color coded them and divided them up based on lines of communication, reporting, agency, whatever made sense.

[image]

Well, it turned out that those colors (just faked above) were perfect borders for bounded contexts. Also, it turns out that 72 personas for a single application is way, way too many.

Lesson 2: Ubiquitous language should be…ubiquitous

One of the side effects of cramming too many personas into one application is that we got to the point where some of the core domain objects had very generic names in order to have a name that everyone agreed upon.

We had a “Person” object, and everyone agreed what “person” meant. Unfortunately, this was only a name that the product owners agreed upon; no one else who would ever use the system would understand what that term meant. It was the lowest common denominator between all the different contexts, and in order to mean something to everyone, it could not contain behavior that applied to anyone.

When you have very generic names for core models that aren’t actually used by any domain expert, you have something worse than an anemic domain model – a generic domain model.

Lesson 3: Core domain needs consensus

We talked to various domain experts in many groups, and all had a very different perspective on what the core domain of the system was. Not what it should be, but what it was. For one group, it was the part that replaced a paper form; for another, the kids the system was intending to help; for another, bringing those kids to trial; and for another, the outcome of those cases. Each had wildly different motivations and workflows, and even different metrics on which they were measured.

Beyond that, we had directly opposed motivations. While one group was focused on keeping kids out of jail, another was managing cases to put them in jail! With such different views, it was quite difficult to build a system that met the needs of both. Even to the point where the conduits to use were completely out of touch with the basic workflow of each group. Unsurprisingly, one group had to win, so the focus of the application was seen mostly through the lens of a single group.

Lesson 4: Ubiquitous language needs consensus

A slight variation on lesson 2, we had a core entity on our model where at least the name meant something to everyone in the working group. However, that something again varied wildly from group to group.

For one group, the term referred to a paper form that had been filed. For another, it was something that formed part of a case. For another, an event with a specific legal outcome. And for another, it was just something a kid had done wrong that we needed to move past. I’m simplifying and paraphrasing of course, but even in this system, a legal one, there were very explicit legal definitions about what things meant at certain times, along with reporting requirements. Effectively we had created one master document that everyone went to in order to make changes. It wouldn’t work in the real world, and it was very difficult to work with in ours.

Lesson 5: Structural patterns are the least important part of DDD

Early on we spent a *ton* of time getting the design of the DDD building blocks right: entities, aggregates, value objects, repositories, services, and more. But of all the things that would lead to the success or failure of the project, or even just slow us down or speed us up, these patterns were by far the least important.

That’s not to say they weren’t valuable; they just didn’t contribute much to the success of the project. For the vast majority of the domain, we only needed very dumb CRUD objects. For a dozen or so very particular cases, we needed highly behavioral, encapsulated domain objects. Optimizing your entire system for the complexity of that 10% really doesn’t make much sense, which is why in subsequent systems we’ve moved towards a more CQRS model, where each command or query has complete control of how to model the work.

With commands and queries, we can use pretty much whatever approach we want for each one – from straight-up SQL to event sourcing. In the first system, because we focused on the patterns and layers, we pigeonholed ourselves into a single pattern, system-wide.
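
As a rough illustration of what that means in practice (a hypothetical JavaScript sketch, not code from either system; db, repository and their methods are assumed stand-ins), each handler is free to pick the simplest model that works for it:

// A query handler can use a straight-up SQL projection; no domain objects needed for a read.
async function getCaseSummary(db, caseId) {
    return db.query("SELECT id, status, next_hearing FROM cases WHERE id = ?", [caseId]);
}

// A command handler can load a fully behavioral aggregate for the genuinely complex operations.
async function closeCase(repository, caseId, reason) {
    const caseAggregate = await repository.load(caseId); // rich, encapsulated domain object
    caseAggregate.close(reason);                         // business rules live inside the model
    await repository.save(caseAggregate);
}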

Next up – lessons learned from the new system that offered us a do-over!


Categories: Blogs

Going Postel

Hiccupps - James Thomas - Sun, 06/12/2016 - 23:29

Postel's Law - also known as the Robustness Principle - says that, in order to facilitate robust interoperability, (computer) systems should be tolerant of ill-formed input but take care to produce well-formed output. For example, a web service operating under HTTP standards should accept malformed (but interpretable) requests from its clients but return only conformant responses.  The principle became popular in the early days of Internet standards development and, for the historically and linguistically-minded, there's some interesting background in this post by Nick Gall.

On the face of it, the idea seems sensible: two systems can still talk - still provide a service - even if one side doesn't get everything quite right. Or, more generally: systems can extract signal from a noisy communication channel, and limit the noise they contribute to it.
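
As a toy illustration (a hypothetical JavaScript sketch, not taken from any real standard or implementation), a service following the principle might tolerate sloppy casing and stray whitespace on input while always emitting a canonical form on output:

// Be liberal in what you accept: tolerate odd casing and whitespace,
// but reject input that is genuinely uninterpretable.
function parseHeader(line) {
    const idx = line.indexOf(":");
    if (idx === -1) return null;
    const name = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    return { name: name, value: value };
}

// Be conservative in what you send: always emit the canonical form.
function formatHeader(header) {
    const canonical = header.name
        .split("-")
        .map(function (part) { return part.charAt(0).toUpperCase() + part.slice(1); })
        .join("-");
    return canonical + ": " + header.value;
}

// parseHeader("  content-TYPE :  text/html ")                -> { name: "content-type", value: "text/html" }
// formatHeader({ name: "content-type", value: "text/html" }) -> "Content-Type: text/html"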

The obvious alternative to Postel's Law is strict interpretation of whatever protocol is in effect, and no talking - or no service - when one side errs even slightly. Which seems undesirable, right? But, as Joel Spolsky illustrates beautifully, following Postel's Law can lead to unwanted side-effects, confusions and costs over time, as successive implementations of a system are built and tested against other contemporary and earlier implementations, and bugs are either obscured or backwards-compatibility hacks are required.

Talking to one of my team recently, I speculated that - even accepting its shortcomings - Postel's Law can provide a useful heuristic, a useful model, for human communication. We kicked that idea around for a short while and I've since spent a little time mulling it over. Here are some still quite raw thoughts. I'd be interested in others.

I like to use the Rule of Three when receiving input, when interpreting what I'm hearing or seeing. I find that it helps me get some perspective on whether or not I have reasonable confidence that I understand and gives me a chance to avoid locking into the first meaning I think of. (Although it's still a conscious effort on my part to remember to do it.) I feel like this also gives me the chance to be tolerant.

I will and do accept input that I don't believe is correct, without asking for clarification, if I'm sufficiently confident that I understand the intended meaning. Perhaps I know the speaker well and know that they have a tendency to say "please QA it" over "please test it" even though the former violates my preferred "standards". Context, as usual, is important. In different contexts I might decide to question the terminology (perhaps we are engaged in a private, friendly conversation) or let it slide (for example, we are in a formal meeting where there are significantly bigger fish to fry).

Unlike most, if not all, computer systems, I am able to have off-channel communications with other parties. I can be tolerant of input that I would prefer not to be and initiate a discussion about it (now or later; based on one instance or only after some number of similar occurrences) somewhere else. My choices are not binary.

I have attempted to head off potential future interoperability problems by trying to agree shared terminology with parties that I need to communicate with. (This is particularly true when we need a language in which to discuss the problem we are trying to solve.) I have seen enough failures in this area over time that, when I recognise the possibility of this being an issue, I will consider investing time, effort and emotional capital in this meta-task.

Can we really say that there is a standard for human-human conversations? Simply, no. But there are conventions in different cultures, social situations, times and other contexts.

Despite this, when I'm producing output, I think that I want to conform to some basic standards of communication. (I've written about this kind of thing in e.g. 1 and 2) There are differences when communicating 1:1 versus 1:n, though. While I can tailor my output specifically for one person that I'm talking to right now, I can't easily do that when, say, speaking in front of, or writing for, a crowd.

I observe that sometimes people wilfully misunderstand, or even ignore, the point made by conversational partners in order to force the dialogue to their agenda or as a device to provoke more information, or for some other reason outside the scope of the content of the conversation itself, such as to show who is the boss. When on the receiving end of this kind of behaviour, is tolerance still a useful approach?

Sometimes I can't be sure that I understand and I have to ask for clarification. (And frequently people ask me for the same.) Some regular causes of my misunderstanding include unexpected terminology (e.g. using non-standard words for things), ambiguity (e.g. not making it clear which thing is being referred to), and insufficient information (e.g. leaving out steps in reasoning chains).

Interestingly, all of these are likely to be relative issues. A speaker with some listener other than me might well have no problem, or different problems, or the same problems but with different responses. An analogy for this might be a web site serving the same page to multiple different browsers. The same input (the HTML) can result in multiple different interpretations (renderings in the browser). In some cases, nothing will be rendered; in other cases, the input might have been tailored for known differences (e.g. IE6 exceptions, but at cost to the writer of the web site); in still other cases something similar to the designer's idea will be provided; elsewhere a dependency (such as JavaScript) will be missing leading to some significant content not being present.

Spolsky talks about problems due to sequences of implementations of a system. Are there different implementations of me? Or of the speakers I communicate with? Yes, I think there are. We constantly evolve our approaches, recognise and attempt to override our biases, grow our knowledge, forget things, act in accordance with our mood, act in response to others' moods - or our interpretation of them, at least. These changes are largely invisible to those we communicate with, except for the impacts they might have on our behaviour. And interpreting internal changes from external behaviours is not a trivial undertaking.
Image: https://flic.kr/p/BojaF

Categories: Blogs

Extension Method in Selenium, what, why, and how???

Testing tools Blog - Mayank Srivastava - Thu, 06/09/2016 - 18:05
An Extension Method enables us to add methods to existing types without creating a new derived type, recompiling, or modifying the original type. Alright, confusing??? Now consider that you don't have access to the Car class and I ask you to add a method to the Car class - how would you do that? In this kind […]
Categories: Blogs

Writing cleaner JavaScript code with gulp and eslint

Decaying Code - Maxime Rouiller - Wed, 06/08/2016 - 09:35

With the new ASP.NET Core 1.0 RC2 right around the corner and its deep integration with the node.js workflow, I thought about putting out some examples of what I use for my own workflow.

In this scenario, we're going to see how we can improve the JavaScript code that we are writing.

Gulp

This example uses gulp.

I'm not saying that gulp is the best tool for the job. I just find that gulp works really well for our team, and you guys should seriously consider it.

Base file

Let's get things started. We'll start off with the base gulpfile that ships with the RC1 template.

The first thing we are going to do is check what is being done and what is missing.

/// <binding Clean='clean' />
"use strict";

var gulp = require("gulp"),
    rimraf = require("rimraf"),
    concat = require("gulp-concat"),
    cssmin = require("gulp-cssmin"),
    uglify = require("gulp-uglify");

var paths = {
    webroot: "./wwwroot/"
};

paths.js = paths.webroot + "js/**/*.js";
paths.minJs = paths.webroot + "js/**/*.min.js";
paths.css = paths.webroot + "css/**/*.css";
paths.minCss = paths.webroot + "css/**/*.min.css";
paths.concatJsDest = paths.webroot + "js/site.min.js";
paths.concatCssDest = paths.webroot + "css/site.min.css";

gulp.task("clean:js", function (cb) {
    rimraf(paths.concatJsDest, cb);
});

gulp.task("clean:css", function (cb) {
    rimraf(paths.concatCssDest, cb);
});

gulp.task("clean", ["clean:js", "clean:css"]);

gulp.task("min:js", function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(concat(paths.concatJsDest))
        .pipe(uglify())
        .pipe(gulp.dest("."));
});

gulp.task("min:css", function () {
    return gulp.src([paths.css, "!" + paths.minCss])
        .pipe(concat(paths.concatCssDest))
        .pipe(cssmin())
        .pipe(gulp.dest("."));
});

gulp.task("min", ["min:js", "min:css"]);

As you can see, we basically have 4 tasks and 2 aggregate tasks.

  • Clean JavaScript files
  • Clean CSS files
  • Minify JavaScript files
  • Minify CSS files

The aggregate tasks are basically just to do all the cleaning or the minifying at the same time.

Getting more out of it

Well, that brings us to feature parity with the JavaScript and CSS minification that was available in MVC 5. However, why not go a step further?

Linting our JavaScript

One of the most common things we need to do is make sure we do not write horrible code. Linting is a code analysis technique that catches problems and stylistic issues early.

How do we get this working with gulp?

First, we install gulp-eslint by running npm install gulp-eslint --save-dev in the web application project folder. This will install the required dependencies, and then we can start writing some code.

Let's start by requiring the dependency:

var eslint = require('gulp-eslint');

Then, in your default ASP.NET Core 1.0 project, open up site.js and copy in the following code:

function something() {
}

var test = new something();

Let's run the min:js task with gulp like this: gulp min:js. This will show that our file is minified but... there's something wrong with the style of this code. The something function should be Pascal-cased, and we want this to be reflected in our code.

Let's integrate the linter in our pipeline.

First let's create our linting task:

gulp.task("lint", function() {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(eslint({
            rules : {
                'new-cap': 1 // functions need to begin with a capital letter when newed up
            }
        }))
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

Then, we need to integrate it into our minify task.

gulp.task("min:js", ["lint"], function () { ... });
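
For clarity, that just means taking the min:js task from the base file above and declaring the lint task as a dependency, so the full task ends up looking like this:

gulp.task("min:js", ["lint"], function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(concat(paths.concatJsDest))
        .pipe(uglify())
        .pipe(gulp.dest("."));
});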

Then we can either run gulp lint or gulp min and see the result.

C:\_Prototypes\WebApplication1\src\WebApplication1\wwwroot\js\site.js
  6:16  warning  A constructor name should not start with a lowercase letter  new-cap

And that's it! You can pretty much build your own configuration from the available ruleset and make clean JavaScript part of your build flow!
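
For example - and this is just a hypothetical configuration, though the rule names below are standard ESLint rules - the rules object in the lint task can grow to cover whatever your team cares about:

gulp.task("lint", function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(eslint({
            rules: {
                'new-cap': 1,        // constructors should start with a capital letter
                'semi': 2,           // require semicolons (error level, so the build fails)
                'no-unused-vars': 1, // warn about variables that are declared but never used
                'eqeqeq': 1          // prefer === and !== over == and !=
            }
        }))
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});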

Many more plugins available

More gulp plugins are available on the registry. Whether you want to lint, transpile to JavaScript (from TypeScript or CoffeeScript), compile to CSS (from Less or Sass), or minify images... everything can be included in the pipeline.

Look up the registry and start hacking away!

Categories: Blogs