
#BreakingBuilds

A lot of us have grown fond over time of our loyal butler, Mr. Jenkins, whose likeness was created by Frontside and chosen as the result of a logo contest. In true open-source style, the logo has since evolved into many derivative works, such as a plugin, a 3D model, and a bobblehead.

Our friends at CloudBees are running a #BreakingBuilds social media contest through Jan 5th to have some fun with Mr. Jenkins. Read Sacha Labourey's blog post, where he draws parallels between what a butler does and what continuous delivery can do.

I especially agree with him on this point: I always loved the idea of using a butler to represent what Jenkins is about, as it projects all of the qualities that define continuous delivery: it is built to be proactive, it will help you fix problems before they happen, it orchestrates your entire pipeline to production without you having to worry about the sophisticated underlying sequence of steps, and, if things go wrong, Jenkins uses his fingerprint database to trace back the source of the issue. Full service. As your right arm, Jenkins is the reliable and trustworthy guy you want on your team!

Check out the contest rules and participate. Let's raise the visibility of Jenkins and have some fun in the process!

Categories: Open Source

Announcing Test Automation Bazaar Jan 16-17 in Austin

Watir - Web Application Testing in Ruby - Tue, 12/16/2014 - 17:38

Originally posted on Testing with Vision:

I am pleased to announce that the Test Automation Bazaar will be held in Austin, Texas on January 16-17, 2015 (Fri – Sat). I am convening this event with Zeljko Filipin and the Austin Homebrew Testers, and we are pleased that the event will be sponsored by the Open Information Foundation, a non-profit which we have recently joined and which also sponsors the Citcon conferences. This is a follow-up to the 2012 Test Automation Bazaar, also held in Austin. Like all OIF events, this conference will be free and open to the public, but we will also be asking for donations and sponsors to cover the expenses of the event. We are currently confirming a location in the Domain. We invite people to come, share their experiences with test automation, and learn from others. The organizers have a bias for Ruby, WebDriver (Watir/Selenium), and open-source tools, but we…


Categories: Open Source

Walking the Tightrope: Balancing Agility and Stability

Sonar - Fri, 12/12/2014 - 14:22

About a year ago, we declared a Long Term Support (LTS) version for the first time ever, and recently we declared another one (version 4.5.1). But we've never talked about what LTS means or why we did it.

Here’s the story:

SonarSource is an agile company. We believe deeply in the agile principles, including this one:

Deliver working software frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale.

So that’s why we deliver a new version about every two months. We know we need to get our changes out to the users so they can tell us what we’ve done well, and what we haven’t. (Feedback of the latter kind is more frequent, but both kinds are appreciated!) That way we can fix our goofs before they’re too deeply embedded under other layers of logic to fix easily.

That’s the agility part: release frequently and respond to customer feedback. But what about stability?

We know that many of our users are in large organizations. Since many of us came to SonarSource from large companies, we understand how well such organizations deal with frequent change: not well at all, sometimes. Instead, they need stability and reliability.

For a while, that left us with a conundrum: how could we be responsive to the customer need for stability, and still be agile?

(Drum roll, please!) Enter the LTS version.

An LTS version marks a significant milestone for us: it means that through incremental releases, we've brought a feature set to completion and worked out the kinks. The vision that was established (a year ago, in this case) has been achieved, and it's time to set out a new vision. (More on that soon!)

Once a version is marked LTS, we pledge to maintain it and fix any non-trivial bugs until the next LTS version is released. That way, users who need stability know that they’ll never be forced to upgrade to a non-LTS version just for a bug fix.

And if a bug fix is required for an LTS version, you know you can upgrade to it without any other change in behavior or features. That is, it's completely transparent to your users. Of course, we don't mark a version LTS until we know it's stable and we've fixed all the bugs in it that we're aware of. So the chances that you'll need to perform this kind of transparent upgrade are small.

Of course, there are trade-offs. We release frequently, and pack a lot of work into each release. By opting to stay with an LTS version, you trade the cool new features for stability (yes, I know that's a trade worth making for many people).

But there’s another trade-off to be aware of. When you go from one LTS version to the next, you’ll see a lot of change all at once. This time the jump from one LTS to the next includes significant UI changes, plugin compatibility differences, and changes that impact project analysis (possibly requiring analysis job reconfigurations). There’s an overview in the docs.

On the whole, a jump from one LTS to the next one will be awesome, but you need to be aware that it may not feel as trivial as SonarQube upgrades usually do. Really, it's just a question of how you want to rip off the Band-Aid®.

Categories: Open Source

New LTS Version Sums Impressive Array of New Features

Sonar - Thu, 12/04/2014 - 14:35

In November, SonarQube version 4.5.1 was announced as the new Long Term Support (LTS) release of the platform. It’s been nearly a year since the last LTS version was announced – a very busy, feature-packed year. Let’s take a look at the results.

Technical Debt moves into the core

If you’re upgrading directly from 3.7.4, the previous LTS, the change with the biggest impact is the addition of built-in support for Technical Debt calculation. The SonarQube platform is all about identifying and displaying technical debt, but previously there was no built-in way to calculate the cumulative amount.

Over the course of the last year, we've fixed that by moving the bulk of our SQALE implementation from the commercial SQALE plugin into the core platform. (Shameless plug: there's still plenty of value left in the plugin.) Now you can see your technical debt in time (minutes, hours or days) without spending a penny. It'll show up automatically in the revamped Issues and Technical Debt widget:

You can also see the technical debt ratio (time to fix the app versus an estimate of the time spent writing the current code base) and the SQALE rating. They show up in the Technical Debt Synopsis widget:
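For reference, the ratio is computed along these lines (a sketch based on the SQALE definition; the cost per line is a configurable estimate, not a measured value):

Technical Debt Ratio = Remediation Cost / Development Cost, where Development Cost ≈ Lines of Code × Cost per Line

For example, an application of 10,000 lines at an estimated 30 minutes per line represents about 625 days of development; a remediation cost of 250 days would then yield a ratio of 250 / 625 = 40%.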

For more on Technical Debt, check out the docs.

Multi-language analysis debuts

This year also saw the fulfillment of our most-requested feature ever: the ability to analyze a project for multiple languages. Now you can analyze all the Java, JavaScript, HTML, etc. in your project with a single analysis, and see the consolidated results in SonarQube.

Note that if you previously relied on SonarQube's default language setting (typically "Java") rather than specifying sonar.language in the analysis properties, your first analysis after jumping to the new LTS will pick up more files than before. Under multi-language analysis, every language plugin that sees some of "its" files in your project will kick in automatically.
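As a minimal sketch, the analysis properties for a multi-language project might look like this (standard analysis properties; the project key and source layout are hypothetical):

sonar.projectKey=org.example:myapp
sonar.projectName=My App
sonar.projectVersion=1.0
sonar.sources=src
# No sonar.language property: every installed language plugin claims "its" files.

With sonar.language=java still present, only the Java files would be analyzed, as before.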

Rule management and Quality Gates emerge from the shadow of Profiles

Another paradigm shift this year was the removal of Rule management and Quality Gates (previously known as Alerts) from Quality Profiles. Now Quality Profiles are simply collections of rules, and Quality Gates and Rule management stand on their own.

Previously, if you wanted to apply the same standards to all projects across all languages, you had to set those standards up as Alerts in each and every profile. That is, you had to duplicate them. With the introduction of Quality Gates, those standards gain first-class citizenship and become applicable as a set across all projects.

Similarly, if you wanted to browse rules before, you could only do it in the context of a Quality Profile, and interacting with rules across multiple profiles was cumbersome at best. With the introduction of the Rules space, you can browse rules outside the context of a profile, easily see when a rule is active in multiple profiles, and activate or deactivate it in any profile in the instance. There are also impressive new search capabilities and a bevy of smaller, but nonetheless valuable, new features.

UI gains consistency

Several pages throughout the interface have been reworked to improve consistency and add functionality. The new Rules and Quality Gate spaces were implemented with an updated UI design, so the Measures and Issues pages have been updated as well to keep pace.

In addition, the component viewer has been completely re-imagined. Now it’s possible, for instance, to see the duplicated blocks and the issues in a file at the same time.

Widgets are added, updated

Several new widgets have been added for displaying Measure filters since the last LTS: donut chart (a pie with a hole in the middle ;-), bubble chart, and histogram. Word cloud has moved from a separate page to a reusable widget. And TreeMap has been completely rewritten to improve responsiveness and functionality.

Interactions outside the interface improve

The final two changes to mention are a little less showy, but no less important.

First, an incremental analysis mode has been added. It allows you to run a local analysis (the SonarQube database is not updated) on only the files that have changed locally since the last full analysis. With the two IDE plugins and the ability to run incremental analysis manually, there are three options for pre-commit analysis.
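For example, a pre-commit run from a developer machine might look like this (the feature is driven by the sonar.analysis.mode property; the exact invocation depends on which analyzer you use, here the SonarQube Runner):

sonar-runner -Dsonar.analysis.mode=incremental

Because the mode is local-only, nothing is written to the SonarQube database; you simply get issues on the files you've touched.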

The last major change is the addition of built-in web service documentation. You’ll find the link in the SonarQube footer.

It leads to a listing of every web service available on the instance, with documentation for the methods, and in some cases a sample response. Never again will you wonder if you’re looking at the right version of the docs!

Coming up

It always seems a long time to me from release to release, but looking back, it has been an amazingly short time to have packed in so much new functionality. If you’re upgrading from the previous LTS, it’s definitely worth taking a look at the migration guide.

Coming up next, I’ll talk about SonarSource’s LTS philosophy – why we do it and what it means – and in another post I’ll talk about all the features planned for the next LTS.

Categories: Open Source

Workflow plugin is 1.0

Jenkins started with the notion of jobs and builds at its heart. One script is one job, and as you repeatedly execute a job, it creates builds as records. As the use of Jenkins got more sophisticated, people started combining jobs to orchestrate ever more complex activities.

A number of plugins have been developed to enable all sorts of different ways to compose jobs, and many are used quite successfully in production. But this left users with a certain degree of complexity in figuring out how to assemble these plugins.

So we felt the need to develop a single unified solution that subsumes all these different ways to orchestrate activities that may span multiple build slaves, code repositories, and so on. Input from users as well as from other plugin developers played a key role.

The result is the workflow plugin, which is what a number of us, including Jesse Glick and myself, have been focused on for the past few months.

The plugin approaches the problem by defining a DSL for you to describe the execution of a job. Various convenient primitives are available, such as executing shell scripts, checking out source code, and obtaining an executor or a build workspace. All sorts of classic existing plugins contribute their functionality to this DSL, such as recording test results, fingerprinting, or calling into other existing jobs. This lets you bring higher-level functionality and report-comprehension capabilities into a workflow. Similarly, you can leverage the abilities of Groovy, the host language of the workflow DSL, to define control flow, abstractions, and reuse.
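To give a feel for it, here's a minimal sketch of such a script (step names follow the plugin's early documentation; the repository URL and build command are hypothetical):

node {
  // obtain an executor and a workspace on some slave
  git url: 'https://github.com/example/app.git'
  // run the build inside that workspace
  sh 'mvn -B clean verify'
}

The node step is what allocates the executor and workspace, while git and sh are the checkout and shell-script primitives mentioned above.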

A key feature of a workflow execution is that it's suspendable. That is, while the workflow is running your script, you can shut down Jenkins or lose connectivity to a slave. When it comes back, Jenkins will still remember what it was doing, and your workflow script resumes execution as if it had never been interrupted. A technique known as continuation-passing style execution plays a key role in achieving this.

I'm very happy to report that the workflow plugin is finally 1.0. This version runs on the latest 1.580-based LTS, and we created a Docker image for you to play with, too. There's also a JUC presentation that explains it. The syntax is now stable, so you can start designing workflows today.

We've been hearing a lot of good feedback and enthusiasm for this new effort. Please let us know what you think.

Categories: Open Source

Do you care about your code? Track code coverage on new code, right now!

Sonar - Thu, 11/27/2014 - 06:40

A few weeks ago, I had a passionate debate with my old friend Nicolas Frankel about the usefulness of the code coverage metric. We started on Twitter, and then Nicolas wrote a blog entry stating that "your code coverage metric is not meaningful" – and so useless. Not only do I think exactly the opposite, but I would even say that not tracking code coverage on new code is almost insane nowadays.

For what I know, I haven't found anything wrong in the code

But before talking about the importance of tracking code coverage on new code and the related paradigm shift, let's start by mentioning something that is probably one of the root causes of the misalignment with Nicolas: static and dynamic analysis tools will never, ever manage to say "your code is clean, well-designed, bug-free and highly maintainable." Static and dynamic analysis tools are only able to say "For what I know, I haven't found anything wrong in the code." By extension, this is also true for any metric or technique used to understand and analyse source code. A high level of coverage is not a guarantee of product quality, but a low level is a clear indication of insufficient testing.

For Nicolas, tracking code coverage is useless because in some cases the unit tests that increase coverage can be crappy. For instance, unit tests might not contain any assertions, or they might cover all branches but not all possible inputs. To fix those limitations, Nicolas says the only solution is to do some mutation testing while computing code coverage (see for instance pitest.org for Java) to make sure that unit tests are robust. OK, but if you really want to touch the Grail, is that enough? Absolutely not! You can have 100% code coverage and some very robust but… fully unmaintainable unit tests. Mutation testing doesn't provide any way to know, for instance, how "unit" your unit tests are, or whether there is a lot of redundancy between them.
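To make the limitation concrete, here's a sketch of a "test" that inflates coverage without verifying anything (JUnit 4; PriceCalculator is a hypothetical class):

import org.junit.Test;

public class PriceCalculatorTest {
    @Test
    public void executesEverythingAssertsNothing() {
        // Every line of total() runs, so coverage tools count it as covered,
        // but with no assertion, a mutant that breaks total() still passes.
        new PriceCalculator().total(10, 2);
    }
}

Mutation testing catches exactly this case; the maintainability problems described above, it cannot.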

To sum up: when you care about the maintainability, reliability and security of your application, you can and should invest some time and effort to reach higher maturity levels. But if you wait for the ultimate solution before starting, you'll never start. Moreover, maturity levels should be reached progressively:

  • It doesn’t make any sense to care about code coverage if there isn’t a continuous integration environment
  • It doesn’t make any sense to care about mutation testing if only 5% of the source code is covered by unit tests
  • … etc.

And here I don’t even mention the extra effort involved in the execution of mutation testing and the analysis of the results. But don’t miss my point: mutation testing is a great technic and I encourage you to give a try to http://pitest.org/ and to the SonarQube Pitest plugin done by Alexandre Victoor. I’m just saying that as a starting point, mutation testing is already a too advanced technic.

Developers want to learn

There is a second root cause of misalignment with Nicolas: should we trust that developers have a will to progress? If the answer is no, we might spend a whole lifetime fighting with them and always making their lives more difficult. Obviously, you'll always find some reluctant developers, pushing back and not caring at all about the quality and reliability of the source code. But I prefer targeting the vast majority of developers, who are eager to learn and to progress. For that majority, the goal is to always make life more fun instead of making it harder. So, how do you infect your "learning" developers with the desire to unit test?

When you start developing an application from scratch, unit testing might be quite easy. But when you're maintaining an application with 100,000 lines of code and only 5% coverage, you can quickly feel depressed. And obviously, most of us are dealing with legacy code. When you're starting out that far behind, it can take years to reach total unit test coverage of 90%. So for those first few years, how are you going to reinforce the practice? How are you going to make sure that in a team of 15 developers, everyone plays the same game?

At SonarSource, we failed for many years

Indeed, we were stuck at 60% code coverage on the platform and were not able to progress. Thankfully, David Gageot joined the team at that time, and things were pretty simple for him: any new piece of code should have a coverage of at least 100% :-). That's it, and that's what he did. From there, we decided to set up a quality gate with a very simple and powerful criterion: when we release a new version of any SonarSource product, the code coverage on new or updated code can't be less than 80%. If it is, the request for release is rejected. That's it, that's what we did, and we finally started to fly. A year and a half later, code coverage on the SonarQube platform is 82%, and 84% across all SonarSource products (400,000 lines of code and 20,000 unit tests).

Code coverage on new/changed code is a game changer

And it’s pretty simple to understand why:

  • Whatever your application is, legacy or not, the quality gate is always the same and doesn't evolve over time: just make the coverage on your new/changed lines of code greater than X%
  • There’s no longer a need to look at the global code coverage and legacy Technical Debt. Just forget it and stop feeling depressed!
  • Since X% of your overall code evolves each year (at Google, for example, 50% of the code evolves each year), having coverage on changed code means that even without paying attention to overall coverage, it will increase quickly just "as a side effect" (see the worked example after this list).
  • If one part of the application is not covered at all by unit tests but hasn't evolved during the past 3 years, why should you invest the effort to increase the maintainability of that piece of code? It doesn't make sense. With this approach, you'll start taking care of it if and only if, one day, some functional changes need to be made. In other words, the cost to bootstrap this process is low. There's no need to stop the line and make the entire team work for X months just to pay off the old Technical Debt.
  • New developers don't have any choice other than to play the game from day 1, because if they start injecting uncovered code, the feedback loop is just a matter of hours, and their new code will never make it into production anyway.
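To see how quickly the side effect kicks in, take an illustrative project starting at 5% overall coverage, where half the code changes each year and all new/changed code must be at least 80% covered: after one year, overall coverage is roughly 0.5 × 5% + 0.5 × 80% ≈ 42%; after two years, roughly 0.5 × 42% + 0.5 × 80% ≈ 61% – all without ever touching the dormant code.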

This new approach to dealing with Technical Debt is part of the paradigm shift explained in our "Continuous Inspection" white paper. Another blog entry will follow explaining how to easily track any kind of Technical Debt with this approach, not just debt related to the lack of code coverage. And thanks to Nicolas Frankel for keeping this open debate going.

Categories: Open Source

What about Microsoft Component Extensions for C++?

Sonar - Wed, 11/19/2014 - 08:32

After my previous blog entry about support for Objective-C, you could get the impression that we're fully focused on Unix-like platforms and have completely forgotten about Windows. But that would be a wrong impression: with version 3.2 of the C / C++ / Objective-C plugin, released in November 2014, support for the Microsoft Component Extensions for Runtime Platforms arrived in answer to customer needs. The C-Family development team closely follows discussions on the customer support mailing list, so don't hesitate to speak up about your needs and problems.

So what does "support for the Microsoft Component Extensions for Runtime Platforms" mean? It means that the plugin is now able to analyze two more C++ dialects: C++/CLI and C++/CX. C++/CLI extends the ISO C++ standard, allowing programming for a managed execution environment on the .NET platform (Common Language Runtime). C++/CX borrows syntax from C++/CLI but targets the Windows Runtime (WinRT) and native code instead, allowing programming of Windows Store apps and components that compile to native code. It's also worth noting that few static analyzers are capable of handling these dialects.
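For readers who haven't met these dialects, here's a minimal sketch of C++/CLI source (the class is hypothetical; note the CLR-specific ref class and handle (^) syntax that an ISO-only parser would reject):

// A managed class compiled for the .NET Common Language Runtime.
public ref class Greeter {
public:
    System::String^ Hello() { return "Hello from C++/CLI"; }
};

C++/CX code reads similarly, but compiles to native WinRT components instead of managed ones.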

So now the full list of supported C++ dialects looks quite impressive – you can see it in the configuration page:

And this doesn't even count the C and Objective-C languages!

You may also notice from the screenshot above that there is now a clear separation between the ISO standards, the usual Microsoft extensions for C/C++ (which historically come from the Microsoft Visual Studio compiler), and the GNU extensions (which historically come from the GCC compiler). The primary reason for the separation is that some of these extensions conflict with each other; as an example, the Microsoft-specific "__uptr" modifier is used as an identifier in the GNU C Library. To ease configuration, the plugin option names closely resemble the configuration options of GCC, Clang and many other compilers.

But wait – you actually don't need to specify the configuration manually, because you can use the build wrapper for Microsoft Visual Studio projects just as you can for non-Visual Studio projects. Just download the build wrapper and use it as a prefix to the build command for your Microsoft Visual Studio project. As an example:

build-wrapper --out-dir [output directory] msbuild /t:rebuild

and just add a single property to the analysis configuration:

sonar.cfamily.build-wrapper-output=[output directory]

The build wrapper will eavesdrop on the build to gather configuration data, and during analysis the plugin will use the collected configuration without the headaches of manual intervention. Moreover, this works perfectly for projects that have mixed subcomponents written with different dialects.

So all this means that from now on, you can easily add projects written in C++/CLI and C++/CX to the portfolio of projects regularly analyzed by SonarQube.

Of course, it's important that the growth in supported dialects is balanced with other improvements, and that's certainly the case in this version: we made several improvements, added a few rules, and fixed 28 bugs. And we're planning to go even further in the next version. As usual, there will be new rules and improvements, but we'll also be adding a major new feature that will make analysis vastly more powerful, so stay tuned.

In the meantime, the improvements in version 3.2 are compatible with all SonarQube versions from 3.7.4 forward, and they’re worth adopting today.

Categories: Open Source

SonarQube 4.5 in Screenshots

Sonar - Tue, 11/11/2014 - 18:18

The team is proud to announce the release of SonarQube 4.5, which includes many new features:

  • Computation of the SQALE Rating and the Technical Debt Ratio
  • Improvement to the Rules pages
  • Redesign of the Treemap

Computation of the SQALE Rating and Technical Debt Ratio

With this version of the SonarQube platform, the SQALE Rating (letter grade) and Technical Debt Ratio move from the SQALE plugin into the core. Now there's an easy index for project comparison, and an easy way to see it too: the new "Technical Debt Synopsis" widget:

Improvements to the Rules pages

This version of the platform brings several improvements to the Rules space.

The first is the addition of a new Active Severity feature, which lets you search for rules by the severities they have in a given profile (rather than by default severity):

This version also makes the rule's SQALE/Technical Debt remediation function visible. That is, you can now see how much violating a rule will "cost" you in terms of technical debt. Just click on the SQALE characteristic to see it:

The ability to create new manual rules has also been added to the Rules space. Since markdown is now supported in rule descriptions, you can use it to add some formatting to manual rules:

Once created, you’ll find them in the “Manual Rules” repository:

Redesign of the Treemap

The treemap, one of the earliest visualizations of code in SonarQube, got a complete overhaul in this version. Rewritten in JavaScript, it should feel more responsive, more intuitive, and just as beautiful as ever. Among the changes are a better mouseover and a clickable breadcrumb trail at the bottom for zooming out:

That’s all, Folks!

Time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

SonarQube supports ECMAScript 6

Sonar - Tue, 10/28/2014 - 13:53

The 2.1 version of the JavaScript plugin fully supports ECMAScript 6 (ES6). But what does that mean, exactly?

What is ECMAScript 6?

Let’s start from the beginning. The JavaScript language is one implementation of the ECMAScript language standard. (JScript and ActionScript are two more.) Each browser has its own JavaScript interpreter, and most modern browsers support the ECMAScript 5 (ES5) standard. That means that you can write JavaScript that adheres to the ES5 standard and have it work correctly for the vast majority of your users.

ES6 is the next version of the standard, and it provides great new features to allow you to write shareable, efficient, and readable code more easily. Luke Hoban is a member of TC39, the committee behind ECMAScript. He's written up a great summary of the main features of ES6, including a short description of each, complete with a code snippet.
Here's a quick sample of two new constructs, variable and constant declaration, and the arrow function:

Block-scoped binding constructs: let for variable declaration and const for constant declaration.

const a = 1;
a = 2;  // error - a constant cannot be re-assigned

if (true) {
  let b = 1;
}
console.log(b);  // ReferenceError - b is block-scoped, so it doesn't exist here

Arrow function: function shorthand using the => syntax.

let sum = (a, b) => a + b;
Can I use ES6 today?

A new version of the standard also means that each browser, or at least the major ones, needs to provide support for it. That might take years, but you don't have to wait to take advantage of the innovations in ECMAScript 6! Thanks to the availability of ES6-to-ES5 transpilers, it is possible to use ES6 features today. A transpiler translates your ECMAScript 6 code into ECMAScript 5 code so it can run in today's browsers. I like the Traceur compiler, which offers an online demo: you can enter ES6 code on the left side and see the resulting ES5 code on the right. AngularJS has already taken advantage of a transpiler to make the move with AngularJS 2.0.
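As an illustration of what a transpiler does, here's an ES6 snippet and a rough ES5 equivalent (hand-written for clarity, not the exact output of Traceur or any other tool):

// ES6 input
let add = (a, b) => a + b;

// Roughly equivalent ES5 output
var add = function (a, b) {
  return a + b;
};

Arrow functions also bind this lexically, so real transpiler output can be more involved than this.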

You can follow the progress of ES6 support in browsers and for transpilers in Juriy Zaytsev’s ES6 compatibility matrix.

Use SonarQube to analyze ES6 code

The SonarQube JavaScript Plugin 2.1 fully supports ES6. What does that mean?

It means that the plugin is able to:

  1. parse ES6 source code,
  2. compute all relevant metrics accordingly:
    • a classes count is computed when classes are used
    • class complexity is computed when classes are used
    • the function count includes generator functions
    • general complexity metrics take generators into account
    • the number of accessors is now computed
  3. analyse code against rules; all existing coding rules have been updated to cover the new features, e.g. "unused variable" will detect unused variables and constants declared with let and const (see the snippet below).
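For instance, here's a sketch of ES6 code on which the updated rule would raise an issue (the function is hypothetical):

function total(items) {
  const TAX = 0.2;  // declared with const but never read: flagged as unused
  let sum = 0;
  for (let item of items) {
    sum += item.price;
  }
  return sum;
}

An ES5-only parser couldn't even parse the let, const and for...of constructs, let alone raise issues on them.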

In upcoming versions, we’ll be adding new rules specific to ES6 features. We’re still figuring out what those should be, but a set of rules about classes is likely. If you’ve got suggestions, please let us know at user@sonar.codehaus.com.

Wanna see a live demo? Check out Angular DI Framework using ES6 on nemo: Angular Dependency Injection v2.

Categories: Open Source

Test Framework Feature Comparisons – What If We Cooperated?

NUnit.org - Sun, 04/07/2013 - 03:14

Software projects often publish comparisons with other projects, with which they compete. These comparisons typically have a few characteristics in common:

  • They aim at highlighting reasons why one project is superior – that is, they are marketing material.
  • While they may be accurate when initially published, competitor information is rarely updated.
  • Pure factual information is mixed with opinion, sometimes in a way that doesn’t make clear which is which.
  • Competitors don’t get much say in what is said about their projects.
  • Users can’t be sure how much to trust such comparisons.

Of course, we’re used to it. We no longer expect the pure, unvarnished truth from software companies – no more than from drug companies, insurance companies, car salesmen or government agencies. We’re cynical.

But one might at least hope that open source projects would do better. It's in all our interests, and in our users' interests, to have accurate, up-to-date, unbiased feature comparisons.

So, what would such a comparison look like?

  • It should have accurate, up-to-date information about each project.
  • That information should be purely factual, to the extent possible. Where necessary, opinions can be expressed only if clearly identified as opinion by their content and placement.
  • Developers from each project should be responsible for updating their own features.
  • Developers from each project should be accountable for any misstatements that slip in.

I think this can work because most of us in the open source world are committed to… openness. We generally value accuracy and we try to separate fact from opinion. Of course, it’s always easy to confuse one’s own strongly held beliefs with fact, but in most groups where I participate, I see such situations dealt with quite easily and with civility. Open source folks are, in fact, generally quite civil.

So, to carry this out, I’m announcing the .NET Test Framework Feature Comparison project – ideas for better names and an acronym are welcome. I’ll provide at least a temporary home for it and set up an initial format for discussion. We’ll start with MbUnit and NUnit, but I’d like to add other frameworks to the mix as soon as volunteers are available. If you are part of a .NET test framework project and want to participate, please drop me a line.

Categories: Open Source

Software Testing Latest Training Courses for 2012

The Cohen Blog — PushToTest - Mon, 02/20/2012 - 05:34
Free Workshops, Webinars, Screencasts on Open Source Testing

Need to learn Selenium, soapUI or any of a dozen other Open Source Test (OST) tools? Join us for a free Webinar Workshop on OST. We just updated the calendar to include the following Workshops:
If you are not available for the above Workshops, try watching a screencast recording.

Watch The Screencast

Categories: Companies, Open Source

Selenium Tutorial For Beginners

The Cohen Blog — PushToTest - Thu, 02/02/2012 - 08:45
Selenium Tutorial for Beginners

Selenium is an open source technology for automating browser-based applications. Selenium is easy to get started with for simple functional testing of a Web application. I can usually take a beginner with some light testing experience and teach them Selenium in a 2-day course. A few years ago, I wrote a fast and easy tutorial for beginners, Building Selenium Tests For Web Applications.

Read the Selenium Tutorial For Beginners Tutorial

The Selenium Tutorial for Beginners has the following chapters:
  • Selenium Tutorial 1: Write Your First Functional Selenium Test
  • Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
  • Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
  • Selenium Tutorial 4: Install and Configure Selenium RC, Grid
  • Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
  • Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
  • Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
  • Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
  • Selenium Tutorial 9: Debugging Selenium Tests
  • Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
  • Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
  • Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
  • Selenium Tutorial 13: Alternative To Selenium
A community of supporting open source projects – including my own PushToTest TestMaker – enables you to repurpose your Selenium tests as functional tests for smoke, regression, and integration testing, as load and performance tests, and as production service monitors. These techniques and tools make it easy to run Selenium tests from test management platforms, including HP Quality Center, HP Test Director, Zephyr, TestLink, and QMetry, and from automated Continuous Integration (CI) tools, including Hudson, Jenkins, Cruise Control, and Bamboo.

I wrote the Selenium Tutorial for Beginners to make it easy to get started and to take advantage of the advanced topics. Download TestMaker Community to get the tutorial and immediately build and run your first Selenium tests. It is entirely open source and free!

Read the Selenium Tutorial For Beginners Tutorial

Categories: Companies, Open Source

5 Services To Improve SOA Software Development Life Cycle

The Cohen Blog — PushToTest - Fri, 01/27/2012 - 00:25
SOA Testing with Open Source Test Tools

PushToTest helps organizations with large-scale Service Oriented Architecture (SOA) applications achieve high-performance and functional service delivery. But it does not happen at the end of SOA application development. Success with SOA at Best Buy requires an Agile approach to software development and testing, on-site coaching, test management, and great SOA-oriented test tools.

Distributing the work of performance testing across an Agile epic, its stories, and sprints reduces the overall testing effort and informs the organization's business managers about the service's performance. The biggest problem I see is keeping the testing transparent, so that anyone – tester, developer, IT Ops, business manager, architect – can follow a requirement down to the actual test results.

With the right tools, methodology, and coaching an organization gets the following:
  • Process identification and re-engineering for Test Driven Development (TDD)
  • Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
  • Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
  • Construction of the organization's end-to-end tests by a team from PushToTest Global Professional Services, using this system, and training of the organization's existing testers, Subject Matter Experts, and Developers to build and operate tests
  • On-going technical support
Download the Free SOA Performance Kit

On-Site Coaching Leads To Certification
The key to high-quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization, the coaches accomplish the following:
  • Test architects and test developers work with the existing Testing Team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test coding/scripting
  • Technical coaching on test automation to ensure that team members follow defined management processes
Cumulatively, this effort is referred to as "Certification". When the development team produces a quality product, as demonstrated by simple functional tests, the partner QA teams take these projects and employ "best practice" test automation techniques. The resulting automated tests integrate with the requirements system (for example, Rally), the continuous integration system, and the governance systems (for example, HP Systinet).
Agile, Test Management, and Roles in SOA
The Agile software development process normally focuses first on functional testing: smoke tests, regression tests, and integration tests. Applied to SOA service development, the Agile deliverables support the overall vision and business model for the new software. At a minimum we should expect:
  1. Product Owner defines User Stories
  2. Test Developer defines Test Cases
  3. Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
  4. Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated to the test management service
  5. Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
  6. As tests execute the test management system creates "Test Execution Records" showing the test results
Learn how PushToTest can improve your SOA software development life cycle: click here.


Download the Free SOA Performance Kit

Categories: Companies, Open Source

Application Performance Management and Software Testing Trends and Analysis

The Cohen Blog — PushToTest - Tue, 01/24/2012 - 16:25
18 Best Blogs On Software Testing

2011 began with some pretty basic questions for the software testing world:
  • To what extent will large organizations dump legacy test tools for open source test tools?
  • How big would the market for private cloud software platforms be?
  • Does mankind have the tools to make a reliable success of the complicated world we built?
  • How big of a market will SOA testing and development be?
  • What are the best ways to migrate from HP to Selenium?
Let me share the answers I found. Some come from my blog, others from friends and partner blogs. Here goes:

The Scalability Argument for Service Enabling Your Applications. I make the case for building, deploying, and testing SOA services effectively. I point out that the weakness of this approach comes at the tool and platform level – for example, needing 37% of an application's code simply to deploy your service.

How PushToTest Uses Agile Software Development Methodology To Build TestMaker. A conversation I had with Todd Bradfute, our lead sales engineer, on surfacing the results of using Agile methodology to build software applications.

"Selenium eclipsed HP’s QTP on job posting aggregation site Indeed.com to become the number one requisite job experience / skill for on-line posted automated QA jobs (2700+ vs ~2500 as of this writing,)" John Dunham, CEO at Sauce Labs, noted.

Run Private Clouds For Cost Savings and Control. Instead of running 400 Amazon EC2 machine instances, Plinga uses Eucalyptus to run its own cloud. Plinga needed the control, reliability, and cost-savings of running its own private cloud, Marten Mickos, CEO at Eucalyptus, reports in his blog.

How To Evaluate Highly Scalable SOA Component Architecture. I show how to evaluate highly scalable SOA component architecture. This is ideal for CIOs, CTOs, Development and Test Executives, and IT managers.

Planning A TestMaker Installation. TestMaker features test orchestration capabilities to run Selenium, Sahi, soapUI, and unit tests written in Java, Ruby, Python, PHP, and other languages in a Grid and Cloud environment. I write about the issues you may encounter installing the TestMaker platform.

Repurposing ThoughtWorks Twist Scripts As Load and Performance Tests. I really like ThoughtWorks Twist for building functional tests in an Agile process. This blog and screencast shows how to rapidly find performance bottlenecks in your Web application using Thoughtworks Twist with PushToTest TestMaker Enterprise test automation framework.

4 Steps To Getting Started With The Open Source Test Engagement Model. I describe the problems you need to solve as a manager to get started with Open Source Testing in your organization.

Correlation Technology Finds The Root Cause of Performance Bottlenecks. Use aspect-oriented programming (AOP) technology to surface memory leaks, thread deadlocks, and slow database queries in your Java Enterprise applications.

10 Agile Ways To Build and Test Rich Internet Applications (RIA). Shows how competing RIA technologies put the emphasis on test and deploy.

Oracle Forms Application Testing. Java Applet technology powers Oracle Forms and many Web applications. This blog shows how to install and use open source tools to test Oracle Forms applications.

Saving Your Organization From The Eventual Testing Meltdown of Using Record/Playback Solely. The Selenium project is caught between the world of proprietary test tool vendors and the software developer community. This blog talks about the tipping-point.

Choosing Java Frameworks for Performance. A round-up of opinions on which technologies are best for building applications: lightweight and responsive, RIA, with high developer productivity.

Selenium 2: Using The API To Create Tests. A DZone Refcard we sponsored to explain how to build tests of Web applications using the new Selenium 2 APIs. For Selenium 1, I wrote another Refcard; click here.

Test Management Tools. A discussion I had with the Zephyr test management team on Agile testing.

Migrating From HP Mercury QTP To PushToTest TestMaker 6. HP QTP just can't deal with the thousands of new Web objects coming from Ajax-based applications. This blog and screencast shows how to migrate.

10 Tutorials To Learn TestMaker 6. TestMaker 6 is the easier way to surface performance bottlenecks and functional issues in Web, Rich Internet Applications (RIA, using Ajax, Flex, Flash,) Service Oriented Architecture (SOA,) and Business Process Management (BPM) applications.

5 Easy Ways To Build Data-Driven Selenium, soapUI, Sahi Tests. This is an article on using the TestMaker Data Production Library (DPL) system as a simple and easy way to data-enable tests. A DPL does not require programming or scripting.

Open Source Testing (OST) Is The Solution To Modern Complexity. Thanks to management oversight failures, negligence, and greed, British Petroleum (BP) killed 11 people, injured 17 people, and dumped 4,900,000 barrels of oil into the Gulf of Mexico in 2010. David Brooks of the New York Times became an unlikely apologist for the disaster, citing the complexity of the oil drilling system.

Choosing automated software testing tools: Open source vs. proprietary. Colleen Fry's article from 2010 discusses how software testers decide which type of automated testing tool – or combination of open source and proprietary tools – best meets their needs. We came a long way in 2011 toward achieving these goals.

All of my blogs are found here.

Categories: Companies, Open Source

Free Webinar on Agile Web Performance Testing

The Cohen Blog — PushToTest - Tue, 01/10/2012 - 19:22
Free Open Source Agile Web Application Performance Testing Workshop
Your organization may have adopted an Agile software development methodology and forgotten about load and performance testing! In my experience this is pretty common. Between Scrum meetings, burn-down sessions, sprints, test-first, and user stories, many forms of testing – including load and performance testing, stress testing, and integration testing – can get lost. And it is normally not entirely your fault. Consider the following:
  • The legacy proprietary test tools – HP LoadRunner, HP QTP, IBM Rational Tester, Microsoft VSTS – are hugely expensive. Organizations can't afford to equip developers and testers with their own licensed copies. These tool licenses are contrary to Agile testing, where developers and testers work side-by-side, building and testing concurrently.

  • Many testers still cannot write test code. Agile developers write unit tests in high-level languages (Java, C#, PHP, Ruby). Testers need a code-less way to repurpose these tests into functional tests, load and performance tests, and production service monitors.

  • Business managers need a code-less way to define the software release requirements criteria. Agile developers see test management tools (like HP Quality Center) as a needless extra burden on their software development effort. Agile developers are hugely attracted to Continuous Integration (CI) tools like Hudson, Jenkins, Cruise Control, and Bamboo. Business managers need an integrated CI and test platform to define requirements and see how close to "shipping" their application is.
Lucky for you, there is a way to learn how to solve these problems and deliver the benefits of Agile software development methodology to your organization. The Agile Web Application Performance Testing Workshop is your place to learn the Agile Open Source Testing way to load and performance test your Web applications, Rich Internet Applications (RIA, using Ajax, Flex, Flash, Oracle Forms, and Applets), and SOAP and REST Web services. This free Webinar delivers a testing methodology, tools, and best/worst practices to follow. Plus, you will see a demonstration of a dozen open source test tools all working together.

Registration is free! Click here to learn more and register now:

Register Now

Categories: Companies, Open Source

Free Help To Learn TestMaker, Selenium, Sahi, soapUI

The Cohen Blog — PushToTest - Fri, 01/06/2012 - 05:57
Help Is Here To Learn TestMaker, Selenium, Sahi, soapUI

Do you sometimes feel alone? Have you been trying any of the following:
  • Writing Load Test Scripts
  • Building Functional Tests for Smoke and Regression Testing
  • Trying to use Selenium IDE and needing a good tutorial
  • Configuring test management tools working with TestMaker, Sahi, and soapUI
  • Needing To Compare Selenium Vs HP QuickTest Pro (QTP)
  • Stuck While Doing Cloud Computing Testing
  • Need Help Getting Started with Load Testing Tools
If you feel stuck, need help, or would like to see how professional testers solve these situations, then please attend a free live weekly Webinar.

Register Now

Here Is What We Have For You

Bring your best questions, issues, and bug reports on installing, configuring, and using PushToTest TestMaker to our free weekly Workshop via live Webinar. PushToTest experts will be available to answer your questions.

Frank Cohen, CEO and Founder at PushToTest, and members of the PushToTest technical team will answer your questions, show you where to find solutions, and take your feedback for feature enhancements and bug reports.

Every Thursday at 1 pm Pacific time (GMT-8)
Registration Required

At the Webinar:
  1. Register for the Webinar in advance
  2. Log in to the Webinar on the given day and time
  3. Each person who logs in will have their turn, ask their question, and hear our response
  4. You may optionally share/show your desktop for the organizers to see what is going wrong and offer a solution
  5. The organizers will hear as many questions as will fit in 1 hour. No guarantee that everyone will be served.
See how these tools were made to work together. Bring your best questions for an immediate answer!

Register Now

Categories: Companies, Open Source

Free Training Selenium IDE soapUI TestMaker PushToTest

The Cohen Blog — PushToTest - Wed, 01/04/2012 - 16:43
A Look Forward To Open Source Load Testing Tools

Albert Einstein. Thomas Edison. Marie Skłodowska Curie. Nikola Tesla.

You and I have come after some incredibly smart people. They inspire us to do our best when testing software for functionality, performance under load, and scalability.

The problems you need to solve are testing applications and business processes that use Rich Internet Application (RIA, using Ajax, Flex, Flash, Oracle Forms, and Applets), SOA, BPM, and SOAP and REST Web Service interfaces.

Thankfully you don’t have to be Einstein, Edison, Curie, or Tesla to “get” this stuff. You just need a good set of free open source test tools, a good methodology, and a good coach.
Upcoming Free Webinar Workshops On Open Source Load Testing
PushToTest will host a series of free Workshops via live Webinar in January and February 2012. Each Workshop features training for performance testing using Selenium, soapUI, Sahi, JUnit, and TestMaker. Registration is free. Sign up now while seats last.

  • Agile Open Source Performance Test Workshop for CIOs, CTOs, Business Managers – January 4, 2012
  • Agile Open Source Performance Test Workshop for Developers, Testers, IT Managers – January 5, 2012
  • Open Source Test Workshop for CIOs, CTOs, Business Managers – January 11, 2012
  • Selenium, soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops – January 12, 2012
  • Use Selenium, soapUI, Sahi, TestMaker Performance Testing In Your Organization – January 25, 2012
  • Load Testing Using Agile Open Source Tools for Developers, Testers, IT Managers – January 26, 2012
  • Agile Open Source Performance Test Workshop for CIOs, CTOs, Business Managers – February 14, 2012
  • Agile Open Source Performance Test Workshop for Developers, Testers, IT Managers – February 16, 2012
  • Free Webinar: Solve Performance Bottlenecks and Function Problems In Your Web Applications – February 22, 2012
  • Open Source Test Workshop for Developers, Testers, IT Ops – February 23, 2012

All Workshops are free, registration is limited, and this is an interactive Webinar where you ask your best questions.

Categories: Companies, Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying "Stop thinking" during refactoring. I've always known this isn't exactly what I meant, because I can't mean it literally, but I've never had a better explanation of what I meant until now. So, apologies y'all, here's what I wish I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it's imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–"I think I can move this field over there, but I'm not sure"–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get into a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but then try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}
Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn't matter much which Choice we started with. The first question to answer was, "Can we move logic to A?" We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That's when I told him to stop thinking. I didn't actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There's no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
