
Feed aggregator

SoCraTes 2012

YouDevise Developer blog - Fri, 08/24/2012 - 16:36

Or, how I learned to stop worrying and love the weizenbier.

I kid. I didn’t just drink German beer for the whole four days. In fact, there were periods of several hours where I didn’t touch the stuff. Instead, I sat, listened and absorbed as much information as I could, and occasionally contributed some of my own ideas back.

So, what did I learn?

Practice constantly

Katas, dojos, and other repurposed Japanese words. Do exercises. Multiple times, in different ways, using tools and techniques both familiar and foreign. Always push yourself.

Martin ran the code retreat on the last day, and pointed out that musicians practice for months so they can perform for a few nights. We spend a lot of time learning on the job, but sometimes it’s useful to practice on something that doesn’t ask you to rush, cut corners and work when you’re not functioning at 100%. Adi and Erik ran a series of sessions on writing the best code you can, for which I am massively thankful. I’m going to steal their session on brutal refactoring and hopefully run it at an LSCC hands-on session soon.

Know which rules you are breaking and why

This, for me, was the most important. It’s OK to break the rules, as long as you’re aware you’re doing it and can justify it. This includes actually knowing the rules. Ones that come to mind are:

  • Red, green, refactor.
  • Take baby steps.
  • Acceptance tests are good.
  • Conditionals are bad.
  • Write the code you want to read, then implement it.
  • If you do something complicated three times, automate it.
  • SOLID is pretty cool, with the exception of the open/closed principle, which still makes no sense to me.
  • Everything in object calisthenics is important.

Of course, you may disagree with some of these, but there’s a huge difference between disagreeing and being ignorant of them. They should always be running through your mind, and an alarm should go off when you break one. If you choose to ignore the alarm, you better have a good reason for doing so.

Library functions are better than language features

Watching Andreas demonstrate Smalltalk, I realised something that had been ticking in my brain for a very long time. The most well-designed languages don’t have many features. They don’t need them—the few they have are powerful enough to express anything simply. Smalltalk doesn’t even have an if keyword or similar. Instead, booleans are objects. As a result, they can have methods, and do. The most important one, ifTrue:ifFalse:, takes two blocks (closures), and calls one of them depending on the boolean you’re calling it on. Conditionals, implemented in the standard library. Pretty cool, right? It’s all handled through polymorphism: true and false simply implement the method differently.
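The Smalltalk trick can be sketched in Java. This is my own illustration, with a hypothetical Bool interface; none of these names come from Smalltalk’s actual library:

```java
import java.util.function.Supplier;

// A hypothetical sketch of Smalltalk-style booleans in Java. There is no
// if statement here: TRUE and FALSE are objects that implement the same
// method differently, so polymorphism does the branching.
interface Bool {
    <T> T ifTrueIfFalse(Supplier<T> whenTrue, Supplier<T> whenFalse);

    Bool TRUE = new Bool() {
        public <T> T ifTrueIfFalse(Supplier<T> whenTrue, Supplier<T> whenFalse) {
            return whenTrue.get();  // TRUE always evaluates the first block
        }
    };

    Bool FALSE = new Bool() {
        public <T> T ifTrueIfFalse(Supplier<T> whenTrue, Supplier<T> whenFalse) {
            return whenFalse.get(); // FALSE always evaluates the second block
        }
    };
}
```

Calling Bool.TRUE.ifTrueIfFalse(() -> "yes", () -> "no") yields "yes" without a conditional keyword in sight.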

The other thing I took away from this short talk was that Java is not object-oriented. It just isn’t. Objects have behaviour. If the language encourages you to ask for values rather than tell an object what to do, it’s not OO.

Monads are hard to explain

But often you can draw analogies to things more common in the Java/C#/C++ world. Some things are comparable to dependency inversion, some to container and collection types, and some are unfortunately just batshit insane.
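The container analogy can be shown with Java’s own Optional, whose flatMap sequences computations that may produce no value, much like a monad’s bind. This example is my illustration, not something from the session:

```java
import java.util.Optional;

// A monad-as-container analogy using Java's Optional: flatMap chains
// computations that may fail, and the empty case short-circuits the
// rest of the chain without any explicit if at the call site.
class MonadSketch {
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    static Optional<Integer> halve(int n) {
        return n % 2 == 0 ? Optional.of(n / 2) : Optional.empty();
    }

    static Optional<Integer> parseThenHalve(String s) {
        // bind/flatMap: if parse produced nothing, halve never runs
        return parse(s).flatMap(MonadSketch::halve);
    }
}
```

parseThenHalve("8") yields 4, while "7" (odd) and "x" (unparseable) both flow through as empty.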

Following the single responsibility principle is difficult

Let’s talk about if statements. It’s fairly clear that a method with an if has two responsibilities, not one. While making decisions is a necessary part of program execution, it should happen at the highest level possible, not deep down where it’s difficult to find and understand. What’s not so clear are the boolean logic operators, && and ||.

Here’s an example from Conway’s Game of Life:

public boolean step(boolean alive, int neighbours) {
    return alive && neighbours >= 2 && neighbours <= 3
        || neighbours == 3;
}

That covers all four rules. The problem is it does a bunch of things. I can’t even tell how many at a glance—it requires studying the code. The boolean logic operators are basically if blocks in disguise.

Let’s try again:

public boolean step(boolean alive, Neighbours neighbours) {
    return neighbours.step(alive);
}

enum Neighbours {
    Reproduction {
        @Override public boolean step(boolean alive) {
            return true;
        }
    };

    public abstract boolean step(boolean alive);
}

I’ll let you fill in the other implementations of Neighbours yourself. Perhaps at a code retreat. ;-)
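If you’d rather peek now, here is one self-contained sketch of how the full enum might look. The constant names other than Reproduction are my guesses, not the author’s, and the one remaining decision (classifying the raw count) sits in a single place:

```java
// A possible completion of the Game of Life enum. Each rule is one
// constant with its own step implementation; the only conditional left
// is the classification of the neighbour count, hoisted to one method.
enum Neighbours {
    Underpopulation { public boolean step(boolean alive) { return false; } },
    Survival        { public boolean step(boolean alive) { return alive; } },
    Reproduction    { public boolean step(boolean alive) { return true;  } },
    Overpopulation  { public boolean step(boolean alive) { return false; } };

    public abstract boolean step(boolean alive);

    // The single remaining decision: map a raw count to a rule.
    public static Neighbours classify(int count) {
        if (count < 2)  return Underpopulation;
        if (count == 2) return Survival;
        if (count == 3) return Reproduction;
        return Overpopulation;
    }
}
```

Note that this behaves exactly like the boolean expression above: a cell with three neighbours lives regardless, a live cell with two survives, everything else dies.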

DDD simply realises that a thing has several different facets

This is by no means a complete definition of Domain-Driven Design, but it’s something I took away from Cyril’s session on it. We often talk about an Account class when we’re dealing with that hypothetical bank kata, but accounts have several different viewpoints.

  • If I’m the account holder, I want to see my balance and transactions.
  • However, if I’m a bank manager, I probably want to see information such as the account owner’s name and address, salary and whether she’s making full use of all account features. I probably want to find out if she’s using her packaged travel insurance and how much, so I can see whether I can upsell a new account with even more features.
  • If I’m a teller, I should probably see a recent list of transactions and the dates and times of when the money actually transferred (as opposed to when the account holder actually paid for something), so I can figure out why a payment didn’t go through.

We should represent these things as concrete objects in our system, instead of having a single Account class which is used by everything.

BDD is not ATDD is not E2ET

Let’s define those three things.

  • End-to-end testing is simply the process by which you write a test that covers the entirety of a system, or at least as much as concerns the feature under test.
  • Acceptance test driven development is something I picked up from The Pragmatic Programmer—before starting on a feature, determine what is required for this feature to be complete. Then, and only then, start on implementation (which should include unit testing).
  • Behaviour-driven development is closely related to ATDD, but involves writing that acceptance test with someone invested in the business who understands the customer. Ideally, it would be the customer himself.

Why is this important? It comes down to understanding your tools. I hear people talk about BDD when all they’re doing is writing a lot of end-to-end or integration tests, which is missing the point. Lots of integration tests are harmful to efficient software development—they’re slow, usually because they’re testing the same thing over and over again with slight variations in one small area (further reading: Integrated Tests are a Scam by J. B. Rainsberger). True acceptance tests should be small in number and simply prove that the feature is working approximately as expected. Unit tests should cover the rest.

You should practice architecture

Honestly, it’s worth it. Benjamin ran two sessions on solving architectural katas which really opened my mind to different designs, but more importantly, they pointed out to me how easy it was to miss requirements. It’s worth just sitting down and talking about what you need—you’ll find that half the time is spent throwing ideas away and the other half is coming to realisations which mean you might have to introduce something completely different. The process also really helps in clarifying the way you communicate, both inside your team and to the outside as you explain your end result, either through words or diagrams.

Metrics are not a replacement for thinking

I learnt three things about code metrics from Kore’s excellent talk.

  1. There are a lot I have not heard of that could be very useful.
  2. You can combine metrics to make new ones. A simple example is Lines of Code / Number of Methods = Average Lines per Method.
  3. You have to think. You can’t follow the metrics blindly.

Simple stuff, but always good to remember.
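Point 2 can be made concrete in a few lines. The class and method names here are illustrative, not from Kore’s talk or any real metrics tool:

```java
// Deriving a new metric from two existing ones:
// Lines of Code / Number of Methods = Average Lines per Method.
class CodeMetrics {
    final int linesOfCode;
    final int numberOfMethods;

    CodeMetrics(int linesOfCode, int numberOfMethods) {
        this.linesOfCode = linesOfCode;
        this.numberOfMethods = numberOfMethods;
    }

    double averageLinesPerMethod() {
        // cast before dividing so integer division doesn't truncate
        return (double) linesOfCode / numberOfMethods;
    }
}
```

A codebase of 500 lines spread over 50 methods averages 10 lines per method; whether that number is good or bad is exactly where the thinking comes in.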

Kanban is more than a to do list

Erik showed us exactly how his personal kanban works, and I learnt a lot. The most important thing was that at work, we don’t do kanban. We call it a kanban board, but it’s really not. There are a few reasons for this:

  • We don’t limit work in progress.
  • Things often pile up.
  • Our backlog is absolutely humongous.

It’s something I believe we need to fix. I find I don’t get much out of the online software we use unless I put myself into tunnel vision and ignore most of it, which isn’t healthy. I’ve been using a personal one on my desk which is much, much simpler for the last couple of days and it’s made my working life a lot better.

Drink more beer

And talk to brilliant people. You’ll learn a lot. I did.

This post was cross-posted on Samir’s blog.

Original post blogged on b2evolution.

Categories: Blogs

Standardized application infrastructure contracts

YouDevise Developer blog - Tue, 07/17/2012 - 10:01

When we began implementing a continuous deployment pipeline for yet another application, the age-old principle of DRY (Don’t Repeat Yourself) became more difficult to ignore. Our applications had been produced at different times, each with a varying technology stack and each with the newest approach the team had come up with for deployment, load balancing, monitoring and so on. The newest applications were being continuously deployed to test and production environments one instance at a time (to provide zero downtime); the oldest were deployed by hand by executing a script in the target environment (and would take the service down). Some applications were deployed from a Maven-style repository, while others came from our historical homegrown repository; some builds pushed the artifacts into the target environment via scp, while others instructed the target machines to pull the artifacts from their respective repository. The scripts for doing all this were different for each app, even though some shared the same heritage.

Each application had its own puppet codebase, its own way of providing information to our monitoring infrastructure and its own way of interacting with the load balancers; to put a new application into this world was like starting from scratch every time.

The infrastructure team now had a backlog of requests to put new applications into production, so we thought a more homogeneous approach might be more appropriate. The result was that a representative group of developers from each of the teams ended up in front of a whiteboard where we “merged” all our current implementations in order to form the new standard, and so the application infrastructure contracts (AICs) were born.

New applications must implement the contract and must pass the contract test; those that do get zero-downtime continuous deployment for “free”.

In the next post I will explain the various parts of the contract, to support: deployments from standardized repository locations; standardized interaction with our load balancers and more.

Original post blogged on b2evolution.

Categories: Blogs

Software Testing Latest Training Courses for 2012

The Cohen Blog — PushToTest - Mon, 02/20/2012 - 05:34
Free Workshops, Webinars, Screencasts on Open Source Testing

Need to learn Selenium, soapUI or any of a dozen other Open Source Test (OST) tools? Join us for a free Webinar Workshop on OST. We just updated the calendar to include the following Workshops:
And if you are not available for the above Workshops, try watching a screencast recording.

Watch The Screencast

Categories: Companies, Open Source

Selenium Tutorial For Beginners

The Cohen Blog — PushToTest - Thu, 02/02/2012 - 08:45
Selenium Tutorial for Beginners

Selenium is an open source technology for automating browser-based applications. Selenium is easy to get started with for simple functional testing of a Web application. I can usually take a beginner with some light testing experience and teach them Selenium in a 2 day course. A few years ago I wrote a fast and easy Building Selenium Tests For Web Applications tutorial for beginners.

Read the Selenium Tutorial For Beginners Tutorial

The Selenium Tutorial for Beginners has the following chapters:
  • Selenium Tutorial 1: Write Your First Functional Selenium Test
  • Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
  • Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
  • Selenium Tutorial 4: Install and Configure Selenium RC, Grid
  • Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
  • Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
  • Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
  • Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
  • Selenium Tutorial 9: Debugging Selenium Tests
  • Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
  • Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
  • Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
  • Selenium Tutorial 13: Alternative To Selenium
A community of supporting open source projects - including my own PushToTest TestMaker - enables you to repurpose your Selenium tests as functional tests for smoke, regression, and integration testing, as load and performance tests, and as production service monitors. These techniques and tools make it easy to run Selenium tests from test management platforms, including HP Quality Center, HP Test Director, Zephyr, TestLink, and QMetry, and from automated Continuous Integration (CI) systems, including Hudson, Jenkins, Cruise Control, and Bamboo.

I wrote a Selenium tutorial for beginners to make it easy to get started and take advantage of the advanced topics. Download TestMaker Community to get the Selenium tutorial for beginners and immediately build and run your first Selenium tests. It is entirely open source and free!

Read the Selenium Tutorial For Beginners Tutorial

Categories: Companies, Open Source

5 Services To Improve SOA Software Development Life Cycle

The Cohen Blog — PushToTest - Fri, 01/27/2012 - 00:25
SOA Testing with Open Source Test Tools

PushToTest helps organizations with large scale Service Oriented Architecture (SOA) applications achieve high performance and functional service delivery. But it does not happen at the end of SOA application development. Success with SOA at Best Buy requires an Agile approach to software development and testing, on-site coaching, test management, and great SOA oriented test tools.

Distributing the work of performance testing through an Agile epic, stories, and sprints reduces the overall testing effort and informs the organization's business managers on the service's performance. The biggest problem I see is keeping the testing transparent, so that anyone - tester, developer, IT Ops, business manager, architect - can follow a requirement down to the actual test results.

With the right tools, methodology, and coaching an organization gets the following:
  • Process identification and re-engineering for Test Driven Development (TDD)
  • Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
  • Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
  • Construction of the organization's end-to-end tests with a team of PushToTest Global Professional Services, using this system and training of the existing organization's testers, Subject Matter Experts, and Developers to build and operate tests
  • On-going technical support
Download the Free SOA Performance Kit

On-Site Coaching Leads To Certification
The key to high quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization the coaches accomplish the following:
  • Test architects and test developers work with the existing Testing Team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test coding/scripting
  • Technical coaching on test automation to ensure that team members follow defined management processes
Cumulatively this effort is referred to as "Certification". When the development team produces quality product as demonstrated by simple functional tests, then the partner QA teams take these projects and employ "best practice" test automation techniques. The resulting automated tests integrate with the requirements system (for example, Rally), the continuous integration system, and the governance systems (for example, HP Systinet.)
Agile, Test Management, and Roles in SOA
Agile software development process normally focuses first on functional testing - smoke tests, regression test, and integration tests. Agile applied to SOA service development deliverables support the overall vision and business model for the new software. At a minimum we should expect:
  1. Product Owner defines User Stories
  2. Test Developer defines Test Cases
  3. Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
  4. Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated to the test management service
  5. Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
  6. As tests execute the test management system creates "Test Execution Records" showing the test results
Learn how PushToTest improves your SOA software development life cycle. Click here to learn how.

Download the Free SOA Performance Kit

Categories: Companies, Open Source

Application Performance Management and Software Testing Trends and Analysis

The Cohen Blog — PushToTest - Tue, 01/24/2012 - 16:25
18 Best Blogs On Software Testing

2011 began with some pretty basic questions for the software testing world:
  • To what extent will large organizations dump legacy test tools for open source test tools?
  • How big would the market for private cloud software platforms be?
  • Does mankind have the tools to make a reliable success of the complicated world we built?
  • How big of a market will SOA testing and development be?
  • What are the best ways to migrate from HP to Selenium?
Let me share the answers I found. Some come from my blog, others from friends and partner blogs. Here goes:

The Scalability Argument for Service Enabling Your Applications. I make the case for building, deploying, and testing SOA services effectively. I point out that the weakness of this approach comes at the tool and platform level. For example, 37% of an application's code can go simply to deploying your service.

How PushToTest Uses Agile Software Development Methodology To Build TestMaker. A conversation I had with Todd Bradfute, our lead sales engineer, on surfacing the results of using Agile methodology to build software applications.

“Selenium eclipsed HP’s QTP on a job posting aggregation site to become the number one requisite job experience / skill for on-line posted automated QA jobs (2700+ vs ~2500 as of this writing),” John Dunham, CEO at Sauce Labs, noted.

Run Private Clouds For Cost Savings and Control. Instead of running 400 Amazon EC2 machine instances, Plinga uses Eucalyptus to run its own cloud. Plinga needed the control, reliability, and cost-savings of running its own private cloud, Marten Mickos, CEO at Eucalyptus, reports in his blog.

How To Evaluate Highly Scalable SOA Component Architecture. I show how to evaluate highly scalable SOA component architecture. This is ideal for CIOs, CTOs, Development and Test Executives, and IT managers.

Planning A TestMaker Installation. TestMaker features test orchestration capabilities to run Selenium, Sahi, soapUI, and unit tests written in Java, Ruby, Python, PHP, and other languages in a Grid and Cloud environment. I write about the issues you may encounter installing the TestMaker platform.

Repurposing ThoughtWorks Twist Scripts As Load and Performance Tests. I really like ThoughtWorks Twist for building functional tests in an Agile process. This blog and screencast shows how to rapidly find performance bottlenecks in your Web application using Thoughtworks Twist with PushToTest TestMaker Enterprise test automation framework.

4 Steps To Getting Started With The Open Source Test Engagement Model. I describe the problems you need to solve as a manager to get started with Open Source Testing in your organization.

Correlation Technology Finds The Root Cause To Performance Bottlenecks. Use aspect-oriented (AOP) technology to surface memory leaks, thread deadlocks, and slow database queries in your Java Enterprise applications.

10 Agile Ways To Build and Test Rich Internet Applications (RIA). Shows how competing RIA technologies put the emphasis on test and deploy.

Oracle Forms Application Testing. Java Applet technology powers Oracle Forms and many Web applications. This blog shows how to install and use open source tools to test Oracle Forms applications.

Saving Your Organization From The Eventual Testing Meltdown of Using Record/Playback Solely. The Selenium project is caught between the world of proprietary test tool vendors and the software developer community. This blog talks about the tipping-point.

Choosing Java Frameworks for Performance. A round-up of opinions on which technologies are best for building applications: lightweight and responsive, RIA, with high developer productivity.

Selenium 2: Using The API To Create Tests. A DZone Refcard we sponsored to explain how to build tests of Web applications using the new Selenium 2 APIs. For Selenium 1 I wrote another Refcard; click here.

Test Management Tools. A discussion I had with the Zephyr test management team on Agile testing.

Migrating From HP Mercury QTP To PushToTest TestMaker 6. HP QTP just can't deal with the thousands of new Web objects coming from Ajax-based applications. This blog and screencast shows how to migrate.

10 Tutorials To Learn TestMaker 6. TestMaker 6 is the easier way to surface performance bottlenecks and functional issues in Web, Rich Internet Applications (RIA, using Ajax, Flex, Flash,) Service Oriented Architecture (SOA,) and Business Process Management (BPM) applications.

5 Easy Ways To Build Data-Driven Selenium, soapUI, Sahi Tests. This is an article on using the TestMaker Data Production Library (DPL) system as a simple and easy way to data-enable tests. A DPL does not require programming or scripting.

Open Source Testing (OST) Is The Solution To Modern Complexity. Thanks to management oversight, negligence, and greed, British Petroleum (BP) killed 11 people, injured 17 people, and dumped 4,900,000 barrels of oil into the Gulf of Mexico in 2010. David Brooks of the New York Times became an unlikely apologist for the disaster, citing the complexity of the oil drilling system.

Choosing automated software testing tools: Open source vs. proprietary. Colleen Fry's article from 2010 discusses how software testers decide which type of automated testing tool, or combination of open source and proprietary, best meets their needs. We came a long way in 2011 to achieve these goals.

All of my blogs are found here.

Categories: Companies, Open Source

Free Webinar on Agile Web Performance Testing

The Cohen Blog — PushToTest - Tue, 01/10/2012 - 19:22
Free Open Source Agile Web Application Performance Testing Workshop
Your organization may have adopted Agile Software Development Methodology and forgotten about load and performance testing! In my experience this is pretty common. Between Scrum meetings, burn-down sessions, sprints, test first, and user stories, many forms of testing - including load and performance testing, stress testing, and integration testing - can get lost. And it is normally not entirely your fault. Consider the following:
  • The legacy proprietary test tools - HP LoadRunner, HP QTP, IBM Rational Tester, Microsoft VSTS - are hugely expensive. Organizations can't afford to equip developers and testers with their own licensed copies. These tools licenses are contrary to Agile testing, where developers and testers work side-by-side building and testing concurrently.

  • Many testers still cannot write test code. Agile developers write unit tests in high level languages (Java, C#, PHP, Ruby.) Testers need a code-less way to repurpose these tests into functional tests, load and performance tests, and production service monitors.

  • Business managers need a code-less way to define the software release requirements criteria. Agile developers see Test Management tools (like HP Quality Center QC) as a needless extra burden to their software development effort. Agile developers are hugely attracted to Continuous Integration (CI) tools like Hudson, Jenkins, Cruise Control, and Bamboo. Business managers need an integrated CI and test platform to define requirements and see how close their application is to 'shipping'.
Lucky for you there is a way to learn how to solve these problems and deliver Agile software development methodology benefits to your organization. The Agile Web Application Performance Testing Workshop is your place to learn the Agile Open Source Testing way to load and performance test your Web applications, Rich Internet Applications (RIA, using Ajax, Flex, Flash, Oracle Forms, Applets,) and SOAP and REST Web services. This free Webinar delivers a testing methodology, tools, and best/worst practices to follow. Plus, you will see a demonstration of a dozen open source test tools all working together.

Registration is free! Click here to learn more and register now:

Register Now

Categories: Companies, Open Source

Free Help To Learn TestMaker, Selenium, Sahi, soapUI

The Cohen Blog — PushToTest - Fri, 01/06/2012 - 05:57
Help Is Here To Learn TestMaker, Selenium, Sahi, soapUI

Do you sometimes feel alone? Have you been trying any of the following:
  • Writing Load Test Scripts
  • Building Functional Tests for Smoke and Regression Testing
  • Trying to use Selenium IDE and needing a good tutorial
  • Configuring test management tools working with TestMaker, Sahi, and soapUI
  • Needing To Compare Selenium Vs HP QuickTest Pro (QTP)
  • Stuck While Doing Cloud Computing Testing
  • Need Help Getting Started with Load Testing Tools
If you feel stuck, need help, or would like to see how professional testers solve these situations, then please attend a free live weekly Webinar.

Register Now

Here Is What We Have For You

Bring your best questions, issues, and bug reports on installing, configuring, and using PushToTest TestMaker to our free weekly Workshop via live Webinar. PushToTest experts will be available to answer your questions.

Frank Cohen, CEO and Founder at PushToTest, and members of the PushToTest technical team will answer your questions, show you where to find solutions, and take your feedback for feature enhancements and bug reports.

Every Thursday at 1 pm Pacific time (GMT-8)
Registration Required

At the Webinar:
  1. Register for the Webinar in advance
  2. Log in to the Webinar at the given day and time
  3. Each person that logs in will have a turn to ask their question and hear our response
  4. You may optionally share/show your desktop for the organizers to see what is going wrong and offer a solution
  5. The organizers will hear as many questions as will fit in 1 hour. No guarantee that everyone will be served.
See how these tools were made to work together. Bring your best questions for an immediate answer!

Register Now

Categories: Companies, Open Source

Free Training Selenium IDE soapUI TestMaker PushToTest

The Cohen Blog — PushToTest - Wed, 01/04/2012 - 16:43
A Look Forward To Open Source Load Testing Tools

Albert Einstein. Thomas Edison. Marie Skłodowska Curie. Nikola Tesla.

You and I have come after some incredibly smart people. They inspire us to do our best when testing software for functionality, performance under load, and scalability.

The problems you need to solve are testing applications and business processes that use Rich Internet Application (RIA, using Ajax, Flex, Flash, Oracle Forms, Applets,) SOA, BPM, and SOAP and REST Web Service interfaces.

Thankfully you don’t have to be Einstein, Edison, Curie, or Tesla to “get” this stuff. You just need a good set of free open source test tools, a good methodology, and a good coach.
Upcoming Free Webinar Workshops On Open Source Load Testing
PushToTest will host 6 free Workshops via live Webinar in January 2012. Each Workshop features training for performance testing using Selenium, soapUI, Sahi, JUnit, and TestMaker. Registration is free. Sign up now while seats last.

Agile Open Source Performance Test Workshop for CIOs, CTOs, Business Managers
January 4, 2012
Agile Open Source Performance Test Workshop for Developers, Testers, IT Managers
January 5, 2012
Open Source Test Workshop for CIOs, CTOs, Business Managers
January 11, 2012
Selenium, soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops
January 12, 2012
Use Selenium, soapUI, Sahi, TestMaker Performance Testing In Your Organization
January 25, 2012
Load Testing Using Agile Open Source Tools for Developers, Testers, IT Managers
January 26, 2012
Agile Open Source Performance Test Workshop for CIOs, CTOs, Business Managers
February 14, 2012
Agile Open Source Performance Test Workshop for Developers, Testers, IT Managers
February 16, 2012
Free Webinar: Solve Performance Bottlenecks and Function Problems In Your Web Applications
February 22, 2012
Open Source Test Workshop for Developers, Testers, IT Ops
February 23, 2012

All Workshops are free, registration is limited, and this is an interactive Webinar where you ask your best questions.

Categories: Companies, Open Source

Let’s Stop the Wishful Thinking

Pillar Technology - Fri, 11/18/2011 - 23:42

Making a wish

I recently published an article on the Agile Journal titled “Let’s Stop the Wishful Thinking.” To me, estimates for projects are often just wishful thinking. Can we make them more fact-based? I have tried several techniques on projects and this article is the culmination of what I’ve seen work.

Here’s the link.

Out of Print

William Louth’s Weblog - Mon, 11/14/2011 - 21:35

I will be discontinuing this blog with a change in the scope of my research and interests. If there is anything of interest please print now because it will be out of print shortly.

Categories: Companies

APM at Velocity Berlin 2011: The wrong advice, wrong approach and wrong agent.

William Louth’s Weblog - Fri, 11/11/2011 - 18:36

At the Velocity Conf this week in Berlin (which had a very good turnout in terms of audience and speakers) I was stunned to hear NewRelic, during their 5 minute lightning presentation, claim that other tools are “crappy”. Yes, from the product/engineering team that gave us Wily Introscope, which itself completely redefined what crappy means in the enterprise space, and who then went on to show that Crappy 2.0 (Lew Cirne’s second attempt at trying to code) can repeat its success with those with very little in the way of performance management knowledge, expertise and awareness – in the cloud. Then on the second day we had a “DevOps” team advocate the use of logging (excessively, that is, with log4j) as a means to monitor and manage Java applications. Apparently no amount of logging could impact application response times, which says a whole lot about the performance itself. Two huge WTF moments in two days. It seems we can’t get over logging and we can’t calibrate our own tools.

Allow me to kill two birds with one benchmark test. Here is the clean version.

Using JXInsight/OpenCore’s agent, the code was instrumented dynamically at load time; no code changes were needed. I ran with two different configurations: one that simply meters in memory, and a second that, along with metering, performs binary logging of the begin and end metering events, the meter readings, and the name of the probe and thread context.

Here is the manual instrumentation needed to have a comparable test with log4j logging approach.
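The manual instrumentation itself was published as an image and is not reproduced here, but the pattern is simple: wrap each measured method so that a begin event and an end event, with a clock reading, are written through the logging framework. A minimal sketch of that pattern, using Python's stdlib logging module in place of log4j (the function, logger and file names are illustrative, not from the original benchmark):

```python
import functools
import logging
import time

# Format string loosely mirrors the log4j pattern "%p %t %c - %m%n"
logging.basicConfig(filename="probes.log", level=logging.INFO,
                    format="%(levelname)s %(threadName)s %(name)s - %(message)s")
log = logging.getLogger("probes")

def probed(fn):
    """Wrap fn so a begin event and an end event (with elapsed ns) are logged per call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter_ns()
        log.info("begin %s", fn.__name__)
        try:
            return fn(*args, **kwargs)
        finally:
            log.info("end %s %d ns", fn.__name__,
                     time.perf_counter_ns() - start)
    return wrapper

@probed
def work():
    # stand-in for the benchmarked operation
    return sum(range(100))
```

Every call now pays for two formatted log writes plus two clock reads, which is exactly the per-call overhead the benchmark below measures.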

With NewRelic‘s Java agent there is no way to transparently instrument code that falls outside the few frameworks they support; you are forced to use a NewRelic-specific @Trace annotation.

Here is a comparison of the average clock time cost reported after executing each test 100 million to 1 billion times.

You might initially think that NewRelic does not fare so badly compared to the log4j approach, but bear in mind that NewRelic does not actually do any file IO. Instead it aggregates the data (losing all trace history in our case) and then dispatches it as a small packet to their web site service at 1-minute intervals.
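The comparison table itself was published as an image, but the underlying measurement is simple arithmetic: run each variant many millions of times and divide the elapsed clock time by the iteration count, subtracting the cost of a bare loop. A rough sketch of such a harness in Python (iteration counts, workloads and names are placeholders; the original benchmark was Java):

```python
import time

def average_cost_ns(fn, iterations=1_000_000):
    """Average wall-clock cost per call of fn, in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        fn()
    return (time.perf_counter_ns() - start) / iterations

def bare():
    # baseline: the loop and call overhead we subtract out
    pass

def logged():
    # stand-in for an instrumented call; the real tests did log4j file IO
    _ = "%s %s" % ("begin", "end")

base = average_cost_ns(bare)
instrumented = average_cost_ns(logged)
overhead_ns = instrumented - base  # what the instrumentation adds per call
```

As with the article's figures, the interesting number is the delta over the baseline; the 130 ns OpenCore figure quoted below, for instance, includes the two clock reads themselves.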

None of our customers could tolerate the overhead incurred by the log4j or NewRelic approaches, with transaction latencies as low as 200-300 microseconds and 10 or more measurement points within each such transaction.

OpenCore without the recorder probes provider enabled and using the following configuration had an average clock time overhead of 130 ns with more than 50% of that attributed to the two clock time access reads.


With the recorder enabled as follows, the overhead increased to 710 ns, and that’s for two metering event IO writes. That’s 10x better than log4j, but considering this is mostly IO on a slow hard disk, the efficiency difference is far larger. I can’t imagine what the other 7 microseconds is being spent on by log4j.


Below is a slice of the CPU usage monitoring during each of the benchmark test runs. OpenCore is not only much faster, it also uses far less in the way of system resources.

There is a reason why cloud platform vendors like Heroku (Salesforce) and CloudBees offer NewRelic subscriptions but don’t actually use it themselves internally to monitor and manage their own critical systems and services: they are not in the business of conserving your resource consumption (or cost).

Relatively speaking, log4j and NewRelic are pretty darn expensive, and for the value they offer it puzzles me why anyone would ever consider them for Java application monitoring. Granted, not every application operates at such speeds and resolutions, but bear in mind that today we advise our customers to use the recorder mainly in development and focused performance testing, yet it is 10 times faster than these solutions promoted and advised as production solutions.

Yes, you can find low-hanging fruit with whatever “crappy” approach and agent you choose, but if you are still using Crappy 2.0 after resolving such issues then you must question whether you are truly trying hard enough to be good, if not better, than the rest.

Clearly what I consider “crappy” and what Brain Doll over at NewRelic considers “crappy” are light years apart and in different time/space dimensions.

Apart from these two blips, the Velocity EU conference was a resounding success: lots of great discussions in the corridors and in the speakers’ room, not forgetting some really thought-provoking talks from John Allspaw, Theo Schlossnagle, Jeff Veen, Johannes Mainusch, and the WTF’er of the year, Artur Bergman.

Model Name: iMac
Model Identifier: iMac11,1
Processor Name: Intel Core i7
Processor Speed: 2.8 GHz
Number Of Processors: 1
Total Number Of Cores: 4
L2 Cache (per core): 256 KB
L3 Cache: 8 MB
Memory: 8 GB
Processor Interconnect Speed: 4.8 GT/s
log4j.rootLogger=warn, root
# appender definition reconstructed here; the original post elided these lines
log4j.appender.root=org.apache.log4j.FileAppender
log4j.appender.root.File=benchmark.log
log4j.appender.root.layout=org.apache.log4j.PatternLayout
log4j.appender.root.layout.ConversionPattern=%p %t %c - %m%n

Categories: Companies

Get Ready for O’Reilly Velocity (EU) Conf 2011 Berlin, Germany

William Louth’s Weblog - Thu, 11/03/2011 - 14:21

Velocity, the Web Performance and Operations conference from O’Reilly Media, is coming to Europe! I will be attending the conference for the two days, including (if travel plans permit) the Unconference on the Monday. I expect my session on QoS for Web Applications to be challenging, entertaining and inspiring, especially as I frame Quality of Service within the context of self-adaptive software (self-aware, self-regulating, self-healing, …) and sketch out how the future of application management involves us managing by proxy, by way of controllers, models, plans, goals and policies built directly into applications, containers and runtimes.

I hope to see you there. If you have not registered, please consider doing so now. It looks like a great line-up, and promises to be an important conference in the European calendar for the performance and operations of web and cloud services.

And if you see me wandering around, don’t be shy; I promise not to bite ;-).

Categories: Companies

Evaluating Java Application Performance Management Solutions?

William Louth’s Weblog - Mon, 09/26/2011 - 22:54

If you are currently evaluating Java application performance management (APM) solutions from vendors such as Compuware/dynaTrace, AppDynamics, and NewRelic, then you owe it to yourself, your team members and your company to read the following articles, which pull back the curtain on the marketing shenanigans, trickery and lies that come with claims of low overhead, accurate reporting, comprehensive coverage, scalability and, not forgetting, superhero intelligence (sadly that last one’s not a joke).

From Mgmt Dashboards & Consoles to Mgmt Code & Control

If you’re not metering you’re not trying hard enough to be the best – Part 3 of 3 (JXInsight/OpenCore, AppDynamics)
If you’re not metering you’re not trying hard enough to be the best – Part 2 of 3 (JXInsight/OpenCore, DTrace, NetBeans Profiler/VisualVM)
If you’re not metering you’re not trying hard enough to be the best – Part 1 of 3

Online & Offline Intelligence in Java Application Performance Measurement (JXInsight/OpenCore, AppDynamics)

JXInsight/OpenCore Competitive Comparison (JXInsight/OpenCore, AppDynamics, dynaTrace, NewRelic)

Which Ruby VM? Consider Monitoring! (JXInsight/OpenCore, NewRelic)

The Java Application Performance Management Vendor Showdown (AppDynamics, dynaTrace, NewRelic)

The Good and B(AD) of Application Performance Management Measurement (JXInsight/OpenCore, AppDynamics)

Don’t spend thousands, if not millions, licensing such products if at the end of the day they are only suitable for one type of application: an extremely slow web app that makes hundreds of remote database and web service calls in servicing a single request, as if it were emitting neutrinos that defy the laws of physics (latency).

Categories: Companies

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps, and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”), and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get into a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously even on refactorings that are sure to pay off. Sometimes they will make real progress, but then try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around: eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


if (...choice 1 valid...) {
  ...add choice 1 widget...
}
if (...choice 2 valid...) {
  ...add choice 2 widget...
}


$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    $widgets[] = $each->widget();

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking; I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you can see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.
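The consistent state described here, with every isValid simply delegating to A and A consulting B, can be sketched end to end. The class and field names mirror the PHP fragments earlier in the post, rendered in Python purely as a runnable illustration; the concrete conditions are stand-ins:

```python
class B:
    def __init__(self, data3, data4):
        self.data3, self.data4 = data3, data4

    def is_choice1_valid(self):
        # the part of the condition that depends on B's own data
        return self.data3 and self.data4

class A:
    def __init__(self, data1, data2, b):
        self.data1, self.data2, self.b = data1, data2, b

    def is_choice1_valid(self):
        # A's own data, plus delegation one level down to B
        return self.data1 and self.data2 and self.b.is_choice1_valid()

class Choice1:
    def __init__(self, a):
        self.a = a

    def is_valid(self):
        # after the vertical refactoring, Choice only delegates
        return self.a.is_choice1_valid()

    def widget(self):
        return "choice-1-widget"

# the horizontal phase repeats this shape for Choice2..Choice10
choices = [Choice1(A(True, True, B(True, True)))]
widgets = [c.widget() for c in choices if c.is_valid()]
```

The point of the consistent state is visible in the shape of the code: every Choice is now a one-line delegation, so the next vertical push (into B) can proceed uniformly.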


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
