
Blogs

10 Things You Should Know About Specflow

Testing TV - Tue, 08/23/2016 - 16:11
SpecFlow is an open source acceptance test driven development and behavior driven development framework for .NET. You can use it to define, manage and execute automated acceptance tests from business-readable specifications. SpecFlow is quite a recent addition to the software development toolbox. Sometimes it feels like we’re using a hammer to drive in a screw, so […]
Categories: Blogs

GTAC 2014 Wrap-up

Google Testing Blog - Sun, 08/21/2016 - 18:33
by Anthony Vallone on behalf of the GTAC Committee

On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.


Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.

All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.


This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And, there was plenty of interesting Twitter and Google+ activity during the event.


Our goal in hosting GTAC is to make the conference highly relevant and useful not only for attendees, but for the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal:



If you have any suggestions on how we can improve, please comment on this post.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.

Categories: Blogs

GTAC 2015 Coming to Cambridge (Greater Boston) in November

Google Testing Blog - Sun, 08/21/2016 - 18:33
Posted by Anthony Vallone on behalf of the GTAC Committee


We are pleased to announce that the ninth GTAC (Google Test Automation Conference) will be held in Cambridge (Greatah Boston, USA) on November 10th and 11th (Toozdee and Wenzdee), 2015. So, tell everyone to save the date for this wicked good event.

GTAC is an annual conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse presentation abstracts, slides, and videos from previous years on the GTAC site.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Blogs

GTAC 2015: Call for Proposals & Attendance

Google Testing Blog - Sun, 08/21/2016 - 18:32
Posted by Anthony Vallone on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc.).

Deadline
The due date for both presentation and attendance applications is August 10th, 2015.

Fees
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
You can find more details at developers.google.com/gtac.

Categories: Blogs

The Deadline to Apply for GTAC 2015 is Monday Aug 10

Google Testing Blog - Sun, 08/21/2016 - 18:32
Posted by Anthony Vallone on behalf of the GTAC Committee


The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.

For those that have already signed up to attend or speak, we will contact you directly by mid-September.

Categories: Blogs

Announcing the GTAC 2015 Agenda

Google Testing Blog - Sun, 08/21/2016 - 18:31
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.

Categories: Blogs

GTAC 2015 is Next Week!

Google Testing Blog - Sun, 08/21/2016 - 18:31
by Anthony Vallone on behalf of the GTAC Committee

The ninth GTAC (Google Test Automation Conference) commences on Tuesday, November 10th, at the Google Cambridge office. You can find the latest details on the conference site, including schedule, speaker profiles, and travel tips.

If you have not been invited to attend in person, you can watch the event live. And if you miss the livestream, we will post slides and videos later.

We have an outstanding speaker lineup this year, and we look forward to seeing you all there or online!

Categories: Blogs

The Inquiry Method for Test Planning

Google Testing Blog - Sun, 08/21/2016 - 18:30
by Anthony Vallone
updated: July 2016



Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:
  • Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
  • Maintenance cost: Some tests or test plans may vary from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
  • Monetary cost: Some test approaches may require billed resources.
  • Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
  • Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.
Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.
This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.

Test plan vs. strategy
Before proceeding, two common methods for defining test plans need to be clarified:
  • Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
  • Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.
Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.
For the purpose of this guide, I will refer to both test document types simply as "test plans”. If you have multiple documents, just apply the advice below to your document aggregation.

Content selection
A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.

Prerequisites
  • Do you need a test plan? If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
  • Has testability been considered in the project design? Before a project gets too far into implementation, all scenarios must be designed as testable, preferably via automation. Both project design documents and test plans should comment on testability as needed.
  • Will you keep the plan up-to-date? If so, be careful about adding too much detail, otherwise it may be difficult to maintain the plan.
  • Does this quality effort overlap with other teams? If so, how have you deduplicated the work?

Risk
  • Are there any significant project risks, and how will you mitigate them? Consider:
    • Injury to people or animals
    • Security and integrity of user data
    • User privacy
    • Security of company systems
    • Hardware or property damage
    • Legal and compliance issues
    • Exposure of confidential or sensitive data
    • Data loss or corruption
    • Revenue loss
    • Unrecoverable scenarios
    • SLAs
    • Performance requirements
    • Misinforming users
    • Impact to other projects
    • Impact from other projects
    • Impact to company’s public image
    • Loss of productivity
  • What are the project’s technical vulnerabilities? Consider:
    • Features or components known to be hacky, fragile, or in great need of refactoring
    • Dependencies or platforms that frequently cause issues
    • Possibility for users to cause harm to the system
    • Trends seen in past issues

Coverage
  • What does the test surface look like? Is it a simple library with one method, or a multi-platform client-server stateful system with a combinatorial explosion of use cases? Describe the design and architecture of the system in a way that highlights possible points of failure.
  • What platforms are supported? Consider listing supported operating systems, hardware, devices, etc. Also describe how testing will be performed and reported for each platform.
  • What are the features? Consider making a summary list of all features and describe how certain categories of features will be tested.
  • What will not be tested? No test suite covers every possibility. It’s best to be up-front about this and provide rationale for not testing certain cases. Examples: low risk areas that are a low priority, complex cases that are a low priority, areas covered by other teams, features not ready for testing, etc. 
  • What is covered by unit (small), integration (medium), and system (large) tests? Always test as much as possible in smaller tests, leaving fewer cases for larger tests. Describe how certain categories of test cases are best tested by each test size and provide rationale.
  • What will be tested manually vs. automated? When feasible and cost-effective, automation is usually best. Many projects can automate all testing. However, there may be good reasons to choose manual testing. Describe the types of cases that will be tested manually and provide rationale.
  • How are you covering each test category? Consider:
  • Will you use static and/or dynamic analysis tools? Both static analysis tools and dynamic analysis tools can find problems that are hard to catch in reviews and testing, so consider using them.
  • How will system components and dependencies be stubbed, mocked, faked, staged, or used normally during testing? There are good reasons to do each of these, and they each have a unique impact on coverage.
  • What builds are your tests running against? Are tests running against a build from HEAD (aka tip), a staged build, and/or a release candidate? If only from HEAD, how will you test release build cherry picks (selection of individual changelists for a release) and system configuration changes not normally seen by builds from HEAD?
  • What kind of testing will be done outside of your team? Examples:
    • Dogfooding
    • External crowdsource testing
    • Public alpha/beta versions (how will they be tested before releasing?)
    • External trusted testers
  • How are data migrations tested? You may need special testing to compare before and after migration results.
  • Do you need to be concerned with backward compatibility? You may own previously distributed clients or there may be other systems that depend on your system’s protocol, configuration, features, and behavior.
  • Do you need to test upgrade scenarios for server/client/device software or dependencies/platforms/APIs that the software utilizes?
  • Do you have line coverage goals?

Tooling and Infrastructure
  • Do you need new test frameworks? If so, describe these or add design links in the plan.
  • Do you need a new test lab setup? If so, describe these or add design links in the plan.
  • If your project offers a service to other projects, are you providing test tools to those users? Consider providing mocks, fakes, and/or reliable staged servers for users trying to test their integration with your system.
  • For end-to-end testing, how will test infrastructure, systems under test, and other dependencies be managed? How will they be deployed? How will persistence be set-up/torn-down? How will you handle required migrations from one datacenter to another?
  • Do you need tools to help debug system or test failures? You may be able to use existing tools, or you may need to develop new ones.

Process
  • Are there test schedule requirements? What time commitments have been made, which tests will be in place (or test feedback provided) by what dates? Are some tests important to deliver before others?
  • How are builds and tests run continuously? Most small tests will be run by continuous integration tools, but large tests may need a different approach. Alternatively, you may opt for running large tests as-needed. 
  • How will build and test results be reported and monitored?
    • Do you have a team rotation to monitor continuous integration?
    • Large tests might require monitoring by someone with expertise.
    • Do you need a dashboard for test results and other project health indicators?
    • Who will get email alerts and how?
    • Will the person monitoring tests simply use verbal communication to the team?
  • How are tests used when releasing?
    • Are they run explicitly against the release candidate, or does the release process depend only on continuous test results? 
    • If system components and dependencies are released independently, are tests run for each type of release? 
    • Will a "release blocker" bug stop the release manager(s) from actually releasing? Is there an agreement on what are the release blocking criteria?
    • When performing canary releases (aka % rollouts), how will progress be monitored and tested?
  • How will external users report bugs? Consider feedback links or other similar tools to collect and cluster reports.
  • How does bug triage work? Consider labels or categories for bugs in order for them to land in a triage bucket. Also make sure the teams responsible for filing bugs and/or creating the bug report template are aware of this. Are you using one bug tracker or do you need to set up some automatic or manual import routine?
  • Do you have a policy for submitting new tests before closing bugs that could have been caught?
  • How are tests used for unsubmitted changes? If anyone can run all tests against any experimental build (a good thing), consider providing a howto.
  • How can team members create and/or debug tests? Consider providing a howto.

Utility
  • Who are the test plan readers? Some test plans are only read by a few people, while others are read by many. At a minimum, you should consider getting a review from all stakeholders (project managers, tech leads, feature owners). When writing the plan, be sure to understand the expected readers, provide them with enough background to understand the plan, and answer all questions you think they will have - even if your answer is that you don’t have an answer yet. Also consider adding contacts for the test plan, so any reader can get more information.
  • How can readers review the actual test cases? Manual cases might be in a test case management tool, in a separate document, or included in the test plan. Consider providing links to directories containing automated test cases.
  • Do you need traceability between requirements, features, and tests?
  • Do you have any general product health or quality goals and how will you measure success? Consider:
    • Release cadence
    • Number of bugs caught by users in production
    • Number of bugs caught in release testing
    • Number of open bugs over time
    • Code coverage
    • Cost of manual testing
    • Difficulty of creating new tests


Categories: Blogs

Fail Over

Hiccupps - James Thomas - Thu, 08/18/2016 - 22:50
In another happy accident, I ended up with a bunch of podcasts on failure to listen to in the same week. (Success!) Here's a few quotes I particularly enjoyed.

In Failing Gracefully on the BBC World Service, David Mindell from MIT recalls the early days of NASA's Project Apollo:
The engineers said "oh it's going to have two buttons. That's the whole interface. Take Me To The Moon, that's one button, and Take Me Home is the other button" [but] by the time they landed on the moon it was a very rich interactive system ... The ultimate goal of new technology should not be full automation. Rather, the ultimate goal should be complete cooperation with the human: trusted, transparent, collaboration ... we've learned that [full autonomy] is dangerous, it's failure-prone, it's brittle, it's not going to get us to where we need to go.

And NASA has had some high-profile failures. In another episode in the same series of programmes, Faster, Better, Cheaper, presenter Kevin Fong concludes:

In complex systems, failure is inevitable. It needs to be learned from but more importantly it needs to become a conscious part of everything that you do.

Which fits nicely with Richard Cook's paper, How Complex Systems Fail, from which I'll extract this gem:

... all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. The degree of uncertainty may change from moment to moment. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse: that successful outcomes are also the result of gambles; is not widely appreciated.

In the Ted Radio Hour podcast, Failure is an Option, Astro Teller of X, Google's "moonshot factory", takes Fong's suggestion to heart. His approach is to encourage failure, to deliberately seek out the weak points in any idea and abort when they're discovered:

... I've reframed what I think of as real failure. I think of real failure as the point at which you know what you're working on is the wrong thing to be working on or that you're working on it in the wrong way. You can't call the work up to the moment where you figure it out that you're doing the wrong thing failing. That's called learning.

He elaborates in his full TED talk, When A Project Fails, Should The Workers Get A Bonus?:

If there's an Achilles heel in one of our projects we want to know it right now not way down the road ... Enthusiastic skepticism is not the enemy of boundless optimism. It's optimism's perfect partner.

And that's music to this tester's ears.
Image: Old Book Illustrations
Categories: Blogs

Hackable Projects

Google Testing Blog - Thu, 08/18/2016 - 20:18
By: Patrik Höglund

Introduction

Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base.

Fighting these issues can be taxing and feel like a quixotic undertaking, but don’t worry — the Google Testing Blog is riding to the rescue! This is the first article of a series on “hackability” that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.

According to Wiktionary, hackable is defined as:
Adjective
hackable (comparative more hackable, superlative most hackable)
  1. (computing) That can be hacked or broken into; insecure, vulnerable. 
  2. That lends itself to hacking (technical tinkering and modification); moddable.

Obviously, we’re not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means “something that is easy to work on.” This has become the main focus for SETIs at Google as the role has evolved over the years.

In Practice

In a hackable project, it’s easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer.

This is hackability:
  • Developing is easy
  • Fast build
  • Good, fast tests
  • Clean code
  • Easy running + debugging
  • One-click rollbacks
In contrast, what is not hackability?
  • Broken HEAD (tip-of-tree)
  • Slow presubmit (i.e. checks running before submit)
  • Builds take hours
  • Incremental build/link > 30s
  • Flaky tests
  • Can’t attach debugger
  • Logs full of uninteresting information

The Three Pillars of Hackability

There are a number of tools and practices that foster hackability. When everything is in place, it feels great to work on the product. Basically no time is spent on figuring out why things are broken, and all time is spent on what matters, which is understanding and working with the code. I believe there are three main pillars that support hackability. If one of them is absent, hackability will suffer. They are:


Pillar 1: Code Health

“I found Rome a city of bricks, and left it a city of marble.”
   -- Augustus
Keeping the code in good shape is critical for hackability. It’s a lot harder to tinker and modify something if you don’t understand what it does (or if it’s full of hidden traps, for that matter).

Tests

Unit and small integration tests are probably the best things you can do for hackability. They’re a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn’t hackability to boot a slow UI and click buttons on every iteration to verify your change worked - it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).

Figure 1: the Testing Pyramid.
I’ve always been interested in how you actually make unit tests happen in a team. It’s about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency injection, testing/mocking frameworks, language idioms and refactoring. The difficulty varies by language as well. Writing unit tests in Go or Java is quite easy and natural, whereas in C++ it can be very difficult (and it isn’t exactly ingrained in C++ culture to write unit tests).

It’s important to educate your developers about unit tests. Sometimes, it is appropriate to lead by example and help review unit tests as well. You can have a large impact on a project by establishing a pattern of unit testing early. If tons of code gets written without unit tests, it will be much harder to add unit tests later.

What if you already have tons of poorly tested legacy code? The answer is refactoring and adding tests as you go. It’s hard work, but each line you add a test for is one more line that is easier to hack on.
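
To make the dependency injection point above concrete, here is a minimal sketch (my illustration, not from the article; greetingFor and the hand-rolled fake clock are invented names). Injecting the clock lets a sub-second Jasmine test control "time" instead of booting anything slow or patching globals:

// greeting.js - hypothetical example: the clock is injected rather than read
// from the environment, so callers (and tests) decide what time it is.
function greetingFor(clock) {
  return clock.currentHour() < 12 ? 'Good morning' : 'Good afternoon';
}
module.exports.greetingFor = greetingFor;

// greeting_spec.js - fast Jasmine unit tests using a hand-rolled fake clock.
var greetingFor = require('./greeting.js').greetingFor;

describe('greetingFor', function() {
  it('greets with "Good morning" before noon', function() {
    var fakeClock = { currentHour: function() { return 9; } };
    expect(greetingFor(fakeClock)).toEqual('Good morning');
  });

  it('greets with "Good afternoon" from noon onwards', function() {
    var fakeClock = { currentHour: function() { return 15; } };
    expect(greetingFor(fakeClock)).toEqual('Good afternoon');
  });
});
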
Readable Code and Code Review

At Google, “readability” is a special committer status that is granted per language (C++, Go, Java and so on). It means that a person not only knows the language and its culture and idioms well, but also can write clean, well tested and well structured code. Readability literally means that you’re a guardian of Google’s code base and should push back on hacky and ugly code. The use of a style guide enforces consistency, and code review (where at least one person with readability must approve) ensures the code upholds high quality. Engineers must take care to not depend too much on “review buddies” here but really make sure to pull in the person that can give the best feedback.

Requiring code reviews naturally results in small changes, as reviewers often get grumpy if you dump huge changelists in their lap (at least if reviewers are somewhat fast to respond, which they should be). This is a good thing, since small changes are less risky and are easy to roll back. Furthermore, code review is good for knowledge sharing. You can also do pair programming if your team prefers that (a pair-programmed change is considered reviewed and can be submitted when both engineers are happy). There are multiple open-source review tools out there, such as Gerrit.

Nice, clean code is great for hackability, since you don’t need to spend time to unwind that nasty pointer hack in your head before making your changes. How do you make all this happen in practice? Put together workshops on, say, the SOLID principles, unit testing, or concurrency to encourage developers to learn. Spread knowledge through code review, pair programming and mentoring (such as with the Readability concept). You can’t just mandate higher code quality; it takes a lot of work, effort and consistency.

Presubmit Testing and Lint

Consistently formatted source code aids hackability. You can scan code faster if its formatting is consistent. Automated tooling also aids hackability. It really doesn’t make sense to waste any time on formatting source code by hand. You should be using tools like gofmt, clang-format, etc. If the patch isn’t formatted properly, you should see something like this (example from Chrome):

$ git cl upload
Error: the media/audio directory requires formatting. Please run
git cl format media/audio.

Source formatting isn’t the only thing to check. In fact, you should check pretty much anything you have as a rule in your project. Should other modules not depend on the internals of your modules? Enforce it with a check. Are there already inappropriate dependencies in your project? Whitelist the existing ones for now, but at least block new bad dependencies from forming. Should our app work on Android 16 phones and newer? Add linting, so we don’t use level 17+ APIs without gating at runtime. Should your project’s VHDL code always place-and-route cleanly on a particular brand of FPGA? Invoke the layout tool in your presubmit and stop the submit if the layout process fails.
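
As a rough illustration of "enforce it with a check" (not Chrome's actual presubmit tooling; the media/internal layout and file names are assumptions), a small Node script wired into the presubmit could fail the submit whenever a changed file outside media/ reaches into media/internal/:

// check_deps.js - hypothetical presubmit check; assumed rule: only code under
// media/ may require() anything from media/internal/.
var execSync = require('child_process').execSync;
var fs = require('fs');

var changedFiles = execSync('git diff --cached --name-only', { encoding: 'utf8' })
  .split('\n')
  .filter(function(f) { return f.slice(-3) === '.js'; });

var violations = [];
changedFiles.forEach(function(file) {
  if (file.indexOf('media/') === 0) { return; }   // media/ may use its own internals
  if (!fs.existsSync(file)) { return; }           // skip deleted files
  var source = fs.readFileSync(file, 'utf8');
  if (/require\(['"][^'"]*media\/internal\//.test(source)) {
    violations.push(file);
  }
});

if (violations.length > 0) {
  console.error('Presubmit failed: these files depend on media/internal/:');
  violations.forEach(function(f) { console.error('  ' + f); });
  process.exit(1);
}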

Presubmit is the most valuable real estate for aiding hackability. You have limited space in your presubmit, but you can get tremendous value out of it if you put the right things there. You should stop all obvious errors here.

It aids hackability to have all this tooling so you don’t have to waste time going back and breaking things for other developers. Remember you need to maintain the presubmit well; it’s not hackability to have a slow, overbearing or buggy presubmit. Having a good presubmit can make it tremendously more pleasant to work on a project. We’re going to talk more in later articles on how to build infrastructure for submit queues and presubmit.

Single Branch And Reducing Risk

Having a single branch for everything, and putting risky new changes behind feature flags, aids hackability since branches and forks often amass tremendous risk when it’s time to merge them. Single branches smooth out the risk. Furthermore, running all your tests on many branches is expensive. However, a single branch can have negative effects on hackability if Team A depends on a library from Team B and gets broken by Team B a lot. Having some kind of stabilization on Team B’s software might be a good idea there. This article covers such situations, and how to integrate often with your dependencies to reduce the risk that one of them will break you.
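
For illustration only (the flag name and plumbing are invented), a feature flag can be as simple as a guard around the risky path, so the rewrite lives on the single branch but stays dark until the flag is flipped:

// featureFlags.js - hypothetical sketch: flags default to off and get flipped
// per environment (config file, experiment system), not via long-lived branches.
var flags = { newCheckoutFlow: false };

function isEnabled(name) {
  return flags[name] === true;
}

// Both code paths are always built and tested on the one branch; only the flag
// decides which one users see, and turning it off needs no revert or merge.
function checkoutLabel(cart) {
  if (isEnabled('newCheckoutFlow')) {
    return 'v2 checkout: ' + cart.items.length + ' item(s)';  // risky new path
  }
  return 'v1 checkout: ' + cart.items.length + ' item(s)';    // proven path
}

console.log(checkoutLabel({ items: ['book', 'pen'] }));  // "v1 checkout: 2 item(s)"
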
Loose Coupling and Testability

Tightly coupled code is terrible for hackability. To take the most ridiculous example I know: I once heard of a computer game where a developer changed a ballistics algorithm and broke the game’s chat. That’s hilarious, but hardly intuitive for the poor developer that made the change. A hallmark of loosely coupled code is that it’s upfront about its dependencies and behavior and is easy to modify and move around.

Loose coupling, coherence and so on is really about design and architecture and is notoriously hard to measure. It really takes experience. One of the best ways to convey such experience is through code review, which we’ve already mentioned. Education on the SOLID principles, rules of thumb such as tell-don’t-ask, discussions about anti-patterns and code smells are all good here. Again, it’s hard to build tooling for this. You could write a presubmit check that forbids methods longer than 20 lines or cyclomatic complexity over 30, but that’s probably shooting yourself in the foot. Developers would consider that overbearing rather than a helpful assist.

SETIs at Google are expected to give input on a product’s testability. A few well-placed test hooks in your product can enable tremendously powerful testing, such as serving mock content for apps (this enables you to meaningfully test app UI without contacting your real servers, for instance). Testability can also have an influence on architecture. For instance, it’s a testability problem if your servers are built like a huge monolith that is slow to build and start, or if it can’t boot on localhost without calling external services. We’ll cover this in the next article.

Aggressively Reduce Technical Debt

It’s quite easy to add a lot of code and dependencies and call it a day when the software works. New projects can do this without many problems, but as the project becomes older it becomes a “legacy” project, weighed down by dependencies and excess code. Don’t end up there. It’s bad for hackability to have a slew of bug fixes stacked on top of unwise and obsolete decisions, and understanding and untangling the software becomes more difficult.

What constitutes technical debt varies by project and is something you need to learn from experience. It simply means the software isn’t in optimal form. Some types of technical debt are easy to classify, such as dead code and barely-used dependencies. Some types are harder to identify, such as when the architecture of the project has grown unfit for the task as requirements changed. We can’t use tooling to help with the latter, but we can with the former.

I already mentioned that dependency enforcement can go a long way toward keeping people honest. It helps make sure people are making the appropriate trade-offs instead of just slapping on a new dependency, and it requires them to explain to a fellow engineer when they want to override a dependency rule. This can prevent unhealthy dependencies like circular dependencies, abstract modules depending on concrete modules, or modules depending on the internals of other modules.

There are various tools available for visualizing dependency graphs as well. You can use these to get a grip on your current situation and start cleaning up dependencies. If you have a huge dependency you only use a small part of, maybe you can replace it with something simpler. If an old part of your app has inappropriate dependencies and other problems, maybe it’s time to rewrite that part.

The next article will be on Pillar 2: Debuggability.
Categories: Blogs

Code as Music - Test Environments

ISerializable - Roy Osherove's Blog - Thu, 08/18/2016 - 01:11

Music production also has some parallels with coding in terms of testing.

When producing music, it is important that the music sounds good on multiple types of speakers: in your car, in your iPhone headphones, on a boom box by the pool, in a club speaker system, etc.

In that regard, producing the music locally on your own speakers is much like “works on my machine”.

Producers will usually take the music and test it out manually on various systems, or have multiple sets of monitors, but they still test the music in their car, on friends’ systems, and more.

There are also “emulators” for sound: software solutions that emulate how your music will sound through different speakers, in different locations, and in different file formats (an mp3 has less data than a .wav file, for example).

Nothing beats real-world integration testing though. Much like code.

Categories: Blogs

AutoMapper 5.1 released

Jimmy Bogard - Wed, 08/17/2016 - 22:28

Release notes here: AutoMapper 5.1

Some big things from this release:

  • Supporting portable class libraries (again), profile 111. Because converting projects from PCL to netstandard is hard
  • More performance improvements (mainly in complex mappings), 70% faster in our benchmarks
  • Easy initialization via assembly scanning

As part of the release, we closed 57 issues. With the new underlying mapping engine, there were a few bugs to work out, which this release worked to close.

Enjoy!

Categories: Blogs

Building a Javascript Protractor framework – My experiment

So almost two years into my test manager role, I seem to have freed up time to get back on the tools somewhat. While there are a huge number of problems to solve, I first decided to have a look at Protractor for a few reasons:

  • I didn’t feel we were getting much value out of our UI tests.
  • I wanted to see how my automation principles transferred to javascript.
  • I wanted to have a concrete example to illustrate some points I discuss frequently with my team.
  • Putting this out there will no doubt get a bunch of people telling me how I can do this better (win)!
  • I still have huge misgivings regarding BDD and have vaguely promised to document something. This goes some way toward addressing that (Michele).

With those thoughts in mind, I started with the Protractor tutorial.

Setting up

I had installed node ages ago, so there were a few bumps in the setup due to how admin rights are given. There were some errors out of protractor (lost from my shell session now) and I went home to install in order to avoid proxy annoyances.

Thus:

Running in powershell (Windows 8):

Set-ExecutionPolicy Unrestricted -Scope CurrentUser -Force
npm install -g npm-windows-upgrade
.\webdriver-manager update
.\npm-windows-upgrade

Bits of node went to different places based on my setup. I had to look in here sometimes: C:\Users\\AppData\Roaming\npm

Round 1:

I have pasted the tutorial example code in to my editor:


// spec.js

describe('Protractor Demo App', function() {
  it('should add one and two', function() {
    browser.get('http://juliemr.github.io/protractor-demo/');
    element(by.model('first')).sendKeys(1);
    element(by.model('second')).sendKeys(2);

    element(by.id('gobutton')).click();

    expect(element(by.binding('latest')).getText()).toEqual('5'); // This is wrong!
  });
});

My skin is crawling a bit, because even if I think specification by example is a useful idea (marginally, sometimes), I want to describe the general behaviour first. To do this, I need to write my test so it’s about the general case (adding integers) rather than the specific case of adding one and two:


// spec.js

describe('Protractor Demo App', function() {
   it('should add two integers', function() {
     browser.get('http://juliemr.github.io/protractor-demo/');
     element(by.model('first')).sendKeys(Math.random()*1000);
     element(by.model('second')).sendKeys(Math.random()*1000);
    element(by.id('gobutton')).click();

     expect(element(by.binding('latest')).getText()).toEqual('5');
   });
});

I get distracted by some random thoughts around reporting and cross-browser testing, and now have:


// conf.js

var reporters = require('jasmine-reporters');
var junitReporter = new reporters.JUnitXmlReporter({
   savePath: 'c:/js_test/protractor/',
   consolidateAll: false
});

exports.config = {
   seleniumAddress: 'http://localhost:4444/wd/hub',
   specs: ['spec.js'],
   capabilities: {
     browserName: 'chrome'
  },
   onPrepare: function() {
     jasmine.getEnv().addReporter(junitReporter)}
}

Round two:

I have some duplication in the random function and know I will need to learn how to externalise libraries and such, so I attempt to create a random function:


// spec.js
helper= require('./libs/helper.js');

describe('Protractor Demo App', function() {
  it('should add two integers', function() {
    browser.get('http://juliemr.github.io/protractor-demo/');
    first=helper.randomInt(1000);
    second=helper.randomInt(1000);
    expected=Math.floor(first+second)+'';
    
    element(by.model('first')).sendKeys(first);
    element(by.model('second')).sendKeys(second);
    element(by.id('gobutton')).click();

    expect(element(by.binding('latest')).getText()).toEqual(expected);
  });
});

In the helper.js file, I tried a few different ways to export, and settled on this as my first version because it works and is less typing.


//helper.js
module.exports = {
    randomInt: function (max) {
     return Math.floor(Math.random() * (max)) + 1;
    }
}

Round 3 – Second test

I start by copying and pasting the add spec as subtract. I pull the initial duplicated browser.get out into the beforeEach function, and then I have to figure out how to click on the dropdown. Like most frameworks designed to let developers think in terms of their UI building tools (I’m looking at you Geb), it is a pain to do some simple things because accessing individual elements of a dropdown list is something you probably never have to do when you are building a UI. The actual browser automation framework is kind of an afterthought. Eventually, I stumble across a tidy solution, and I now have two specs. I also factor out the common ‘calculate’ action and make it part of a calculator model.


// spec.js

helper= require('./libs/helper.js');
calculator=require('./libs/calculator.js');

describe('Protractor Demo App', function() {
    
  beforeEach(function() {
    browser.get('http://juliemr.github.io/protractor-demo/');
  });
    
  it('should add two integers', function() {
    first=helper.randomInt(1000);
    second=helper.randomInt(1000);
    expected=Math.floor(first+second)+'';
    
    element(by.model('first')).sendKeys(first);
    element(by.model('second')).sendKeys(second);
    calculator.calculate();

    expect(element(by.binding('latest')).getText()).toEqual(expected);
  });
    
  it('should subtract two integers', function() {
    first=helper.randomInt(1000);
    second=helper.randomInt(1000);
    expected=Math.floor(first-second)+'';
    
    element(by.model('first')).sendKeys(first);
    element(by.model('second')).sendKeys(second);
    operators=element(by.model('operator')).$('[value="SUBTRACTION"]').click();
    calculator.calculate();

    expect(element(by.binding('latest')).getText()).toEqual(expected);
  });
});


//calculator.js
var calculate = function () {
     return element(by.id('gobutton')).click();
    }

module.exports.calculate = calculate;

I get on a bit of a roll at this point and keep adding methods to the calculator:


// spec.js
helper= require('./libs/helper.js');
calculator=require('./libs/calculator.js');

describe('Protractor Demo App', function() {
    
  beforeEach(function() {
    browser.get('http://juliemr.github.io/protractor-demo/');
  });
    
  it('should add two integers', function() {
    first=helper.randomInt(1000);
    second=helper.randomInt(1000);
    expected=Math.floor(first+second)+'';
    calculator.add(first,second);
    expect(calculator.last_calculation()).toEqual(expected);
  });
    
  it('should subtract two integers', function() {
    first=helper.randomInt(1000);
    second=helper.randomInt(1000);
    expected=Math.floor(first-second)+'';
    calculator.subtract(first,second);
    expect(calculator.last_calculation()).toEqual(expected);
  });
});

I also refactor somewhat prematurely and drive out duplication in the new calculator methods:


var calculate = function () {
     return element(by.id('gobutton')).click();
    }

var set_first = function (first) {
     element(by.model('first')).sendKeys(first);
    }

var set_second = function (second) {
     element(by.model('second')).sendKeys(second);
    }

var operation = function (first, second, operation) {
     set_first(first);
     set_second(second);
     element(by.model('operator')).$('[value="' + operation + '"]').click();
     calculate();
    }

var add = function (first, second) {
    operation(first,second,'ADDITION');
}

var subtract = function (first, second) {
    operation(first,second,'SUBTRACTION');
}
    
var last_calculation = function () {
     return element(by.binding('latest')).getText();
    }

module.exports.calculate = calculate;
module.exports.set_first = set_first;
module.exports.set_second = set_second;
module.exports.add = add;
module.exports.subtract = subtract;
module.exports.last_calculation = last_calculation;

Cleaning up

I move the ‘browser.get’ in the beforeEach function into the calculator app model. With the refactoring I made to remove duplication in add and subtract, it’s trivial to add all of the other calculator tests. I also factor out the data into a calculation object in order to remove duplication. The helper file vanishes from the main spec as it is only used by the data functions.


// spec.js

calculator=require('./libs/calculator.js');
data = require('./libs/calc_data.js');

describe('Integer math', function() {
    
  beforeEach(function() {
    calculator.open();
  });

  it('should add two integers', function() {
    addition= new data.addition;
    calculator.add(addition.first,addition.second);
    expect(calculator.last_calculation()).toEqual(addition.result);
  });
    
  it('should subtract two integers', function() {
    subtraction= new data.subtraction;
    calculator.subtract(subtraction.first,subtraction.second);
    expect(calculator.last_calculation()).toEqual(subtraction.result);
  });
    
  it('should multiply two integers', function() {
    multiplication = new data.multiplication;
    calculator.multiply(multiplication.first,multiplication.second);
    expect(calculator.last_calculation()).toEqual(multiplication.result);
  });

  it('should divide two integers', function() {
    division = new data.division;
    calculator.divide(division.first,division.second);
    expect(calculator.last_calculation()).toEqual(division.result);
  });

  it('should get modulus of two integers', function() {
    modulo = new data.modulo;
    calculator.modulo(modulo.first,modulo.second);
    expect(calculator.last_calculation()).toEqual(modulo.result);
  });
    
});

Now that I’ve factored out common code, adding all five operations is trivial.

//calculator.js

var open = function () {
     browser.get('http://juliemr.github.io/protractor-demo/');
    }

var calculate = function () {
     return element(by.id('gobutton')).click();
    }

var set_first = function (first) {
     element(by.model('first')).sendKeys(first);
    }

var set_second = function (second) {
     element(by.model('second')).sendKeys(second);
    }

var operation = function (first, second, operation) {
     set_first(first);
     set_second(second);
     element(by.model('operator')).$('[value="' + operation + '"]').click();
     calculate();
    }

var add = function (first, second) {
    operation(first,second,'ADDITION');
}

var subtract = function (first, second) {
    operation(first,second,'SUBTRACTION');
}

var multiply = function (first, second) {
    operation(first,second,'MULTIPLICATION');
}

var divide = function (first, second) {
    operation(first,second,'DIVISION');
}

var modulo = function (first, second) {
    operation(first,second,'MODULO');
}

var last_calculation = function () {
     return element(by.binding('latest')).getText();
    }

module.exports.open = open;
module.exports.calculate = calculate;
module.exports.set_first = set_first;
module.exports.set_second = set_second;
module.exports.add = add;
module.exports.subtract = subtract;
module.exports.multiply = multiply;
module.exports.divide = divide;
module.exports.modulo = modulo;
module.exports.last_calculation = last_calculation;


//calc_data.js

helper = require('./helper.js');

var addition = function () {
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = Math.floor(this.first+this.second)+'';
    }

var subtraction = function () {
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = Math.floor(this.first-this.second)+'';
    }

var multiplication = function () {
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = Math.floor(this.first*this.second)+'';
    }

var division = function () {
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = (this.first/this.second)+'';
    }

var modulo = function () {
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = (this.first%this.second)+'';
    }

module.exports.addition = addition;
module.exports.subtraction = subtraction;
module.exports.multiplication = multiplication;
module.exports.division = division;
module.exports.modulo = modulo;

And I stop here, with a few thoughts (and the source code on github if you want it).

I’m building a parallel implementation of a calculator

A big part of testing is about bringing two (or more) models into alignment. One is the implementation. The other is the model (or models), formal or informal that you use to check the product against. For something like a calculator, specification by example is a pretty poor approach, so comparing it to some reference implementation seems like a better idea, which is where this test implementation seems to be heading.

Specification by example isn’t useful everywhere

If you are building a mathematical function, specifying the result by example probably isn’t a great approach compared to a complete test against a reference model for all inputs (if feasible) or a high-volume randomised test. Imagine trying to specify a sine function by example…
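
For what it's worth, a high-volume randomised check drops straight into the spec style above, using local arithmetic as the reference model. A rough sketch, reusing the calculator and helper modules from earlier (the loop size is arbitrary, and each iteration reloads the page so sendKeys starts from empty fields):

// reference_spec.js - sketch: check the demo calculator against local addition
// (the reference model) for a batch of random input pairs.
helper = require('./libs/helper.js');
calculator = require('./libs/calculator.js');

describe('Integer math against a reference model', function() {

  it('should agree with local addition for many random inputs', function() {
    for (var i = 0; i < 10; i++) {
      calculator.open();
      var first = helper.randomInt(1000);
      var second = helper.randomInt(1000);
      calculator.add(first, second);
      expect(calculator.last_calculation()).toEqual((first + second) + '');
    }
  });
});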

Page objects are overused

There’s very little duplication of UI implementation in these tests, which is what normally happens unconsciously when I build a framework in a goal- or activity-oriented way. I tend to do this because I am looking for a stable model to align my test artefacts to, and the implementation rarely is.

Examples can help though…

Once the suite runs, it feels like it would be helpful to know what values were provided to the test if, for example, I wanted to see whether a particular case had ever been executed by the framework. At some point I will try and hack the reporter to report the actual values executed.
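
In the meantime, one low-tech stopgap would be to have the data objects log the values they generate, so a failing run at least shows the case in the console output. A sketch of the tweak (not yet in the repo):

// calc_data.js (excerpt) - hypothetical tweak: log each generated case so a
// failing run can be reproduced by hand.
var addition = function () {
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = Math.floor(this.first+this.second)+'';
    console.log('addition case: ' + this.first + ' + ' + this.second +
                ' (expecting ' + this.result + ')');
    }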

Attention to design should help you go faster

As you build more and more useful abstractions, you should find it easier to add new tests over time. You only get this through mercilessly refactoring, which is slower at the beginning.

Cucumber probably still sucks
It feels like doing this in cucumber (as opposed to something code-y like RSpec) would’ve made a lot of things more difficult and led to many more layers, especially the need to abstract data and pass it around.

Things to do

  • See if I can get the Jasmine Tagged library working to get a richer reporting model. A single test hierarchy is pretty limiting, and SerenityBDD style reports are a step up from xUnit.
  • These tests will need to run in multiple environments with various configurations of real and fake components, as well as with different datasets. At some point I need to build this.
  • Fix the duplication in the calculator data by creating a more generic calculation object (two numbers plus an operation); see the sketch after this list.
  • Implement the history tests.
  • Implement the weird behaviours around Infinity that I found through exploratory testing.
  • Move it to something non-javascript so it’s a real independent oracle.
  • Fix my wordpress site to format code properly. Sorry it’s so ugly!
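
For the generic calculation object mentioned in the list above, a rough sketch of one possible shape (names provisional, not yet in the repo):

// calc_data.js (possible future shape) - a sketch only: two random operands plus
// an operation, with the expected result computed from a local reference table.
helper = require('./helper.js');

var reference = {
    ADDITION:       function (a, b) { return Math.floor(a + b) + ''; },
    SUBTRACTION:    function (a, b) { return Math.floor(a - b) + ''; },
    MULTIPLICATION: function (a, b) { return Math.floor(a * b) + ''; },
    DIVISION:       function (a, b) { return (a / b) + ''; },
    MODULO:         function (a, b) { return (a % b) + ''; }
};

var calculation = function (operation) {
    this.operation = operation;
    this.first = helper.randomInt(1000);
    this.second = helper.randomInt(1000);
    this.result = reference[operation](this.first, this.second);
    }

module.exports.calculation = calculation;

A spec would then build its data with new data.calculation('ADDITION'), and the five near-identical data constructors collapse into one.
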
Categories: Blogs

Serverless Web Apps: "Client-Service" Architecture Explained

Radyology - Ben Rady - Tue, 08/16/2016 - 16:37
My earlier piece on serverless auth seems to have gotten some attention from the Internet. In that post, I made a comparison to client-server architecture. I think that comparison is fair, but after discussing it with people I have a... Ben Rady
Categories: Blogs

Reviewing the week’s blinks

thekua.com@work - Tue, 08/16/2016 - 09:03

I’ve signed up for a new service called Blinkist, which provides 15-minute summaries of books in both text and audio format. I was looking for a way to review a number of books that I’ve read or not yet read, either to determine whether I should read them in full, or just to learn something new.

Here’s a review of some of the book summaries that I’ve been listening/reading to:

  • Games People Play by Eric Berne – Humans play games all the time, acting in the role of Parent, Adult or Child depending on the “game” being played. We play games with different goals in mind (such as safety and interaction), although we cannot always articulate them. Understanding the different roles people have when in a game gives insight into patterns of behaviour, and this insight is useful in all relationships. We need to be particularly careful about playing too many games in a personal relationship, as it is only when we stop playing games that we can truly create deeper relationships.
  • Turn the Ship Around by David Marquet – A leadership tale that describes a leadership style that made one of the worst performing naval ships into one of the best. A good summary of turning a command-and-control leadership style, into a leaders building leaders style as well as other tricks to create quality control and feedback without using punishment. I’ll add this to my list of books to read further.
  • The Coaching Habit by Michael Bungay Stanier – A nice summary that distinguishes mentoring (where you provide more advice/answers) from coaching (where you lead through asking questions). A good summary of the benefits of this leadership skill, with some good examples of open questions to stimulate good conversations.
  • Getting There: A Book of Mentors by Gillian Zoe Segal – With a subtitle about mentors, I thought this book would focus more on how mentors helped people succeed; instead you end up hearing the stories of some successful people. Although still inspirational, I found the summaries didn’t focus very much on the role the mentor played.
Categories: Blogs

Multithreaded Test Synthesis

Testing TV - Mon, 08/15/2016 - 17:16
Subtle concurrency errors in multithreaded libraries that arise because of incorrect or inadequate synchronization are often difficult to pinpoint precisely using only static techniques. On the other hand, the effectiveness of dynamic detectors is critically dependent on multithreaded test suites whose execution can be used to identify and trigger concurrency bugs including data races, deadlocks […]
Categories: Blogs

The Real Revolution of Serverless is Auth, Not Microservices

Radyology - Ben Rady - Mon, 08/15/2016 - 16:16
Serverless computing has been getting a lot of attention lately. There are frameworks, a conference, some notable blog posts, and a few books (including mine). I'm really glad to see this happening. I think serverless web apps can be incredibly... Ben Rady
Categories: Blogs

Understanding Testing Understanding

Hiccupps - James Thomas - Fri, 08/12/2016 - 07:40
Andrew Morton tweeted at me the other day:
Does being able to make a joke about something show that you understand it? Maybe a question for @qahiccupps
— Andrew Morton (@TestingChef) August 9, 2016

I ran an on-the-spot thought experiment, trying to find a counterexample to the assertion "In order to make a joke about something you have to understand it."

I thought of a few things that I don't pretend to understand, such as special relativity, and tried to make a joke out of one of them. Which I did, and so I think I can safely say this:
@TestingChef Wouldn't have thought so. For example ...

Einstein's law of special relativity says you /can/ have a favourite child.
— James Thomas (@qahiccupps) August 9, 2016

Now this isn't a side-splitting, snot shower-inducing, self-suffocating-with-laughter kind of a joke. But it is a joke and the humour comes from the resolution of the cognitive dissonance that it sets up: the idea that special relativity could have anything to do with special relatives. (As such, for anyone who doesn't know that the two things are unrelated, this joke doesn't work.)

And I think that set up is a key point with respect to Andrew's question. If I want to deliberately set up a joke then I need to be aware of the potential for that dissonance:
@TestingChef To intentionally make a joke, you need to know about some aspect of the thing. (e.g. Special Relativity is not about family)
— James Thomas (@qahiccupps) August 9, 2016

@TestingChef If you're prepared to accept that intention is not required then all bets are off.
— James Thomas (@qahiccupps) August 9, 2016

Reading it back now I'm still comfortable with that initial analysis although I have more thoughts that I intentionally left alone on the Twitter thread. Thoughts like:
  • What do we mean by understand in this context?
  • I don't understand special relativity in depth, but I have an idea about roughly what it is. Does that invalidate my thought experiment?
  • What about the other direction: does understanding something enable you to make a joke about it?
  • What constitutes a joke?
  • Do we mean a joke that makes someone laugh?
  • If so, who?
  • Or is it enough for the author to assert that it's a joke?
  • ...
All things it might be illuminating to pursue at some point. But the thought that I've been coming back to since tweeting that quick reply is this: in my EuroSTAR 2015 talk, Your Testing is a Joke, I made an analogy between joking and testing. So what happens if we recast Andrew's original in terms of testing?

Does being able to test something show that you understand it?

And now the questions start again...
Image: https://flic.kr/p/i6Zqba
Categories: Blogs

Recording and Slides From Today's Webinar on Decision Tables

Thanks to everyone who attended today's webinar on decision tables. For those who could not get in due to capacity limits, I apologize.

However, here are the slides:
http://www.riceconsulting.com/public_pdf/Webinar_Decision_Tables.pdf

And here is the recording:
https://youtu.be/z5RlCBKxfF4

I am happy to answer any questions by e-mail, phone or Skype. If you want to arrange a session, my contact info is on the final slide.

Thanks again,

Randy
Categories: Blogs

JavaScript Testing with Intern

Testing TV - Wed, 08/10/2016 - 07:25
Intern is a complete open source test system for JavaScript designed to help you write and run consistent, high-quality test cases for your JavaScript libraries and applications. It can be used to test any JavaScript code. It can even be used to test non-JavaScript Web and mobile apps, and to run tests written for other […]
Categories: Blogs