

Jenkins User Conference – Save the Date

We have some exciting news to share with you! We have finalized most of the dates and locations for the 2015 Jenkins User Conference (JUC) World Tour.

Save the date(s):

  • US East (Washington DC): June 18-19
  • Europe (London): June 23-24
  • Israel: July 16 (ETA)
  • US West (Santa Clara): September 3-4

The big news? The JUC agenda has been expanded this year to cover two days! That means you get twice as many opportunities to learn how others are using Jenkins and to network with other Jenkins users.

CALL FOR PAPERS IS OPEN FOR ALL JUC CONFERENCES

We need JUC speakers! The Call for Papers is open now and you can apply here. This is an opportunity for YOU to give back to the community by sharing your Jenkins knowledge and success. Jenkins speakers contribute significantly to the overall JUC experience.

In return for speaking, you will receive free admission to the conference and fame/fortune within the Jenkins community. OK, we can’t guarantee the latter, but we can guarantee the former! Hurry and apply now, because the Call for Papers deadline for US East and Europe expires on March 22, 2015.

Not interested in speaking? Another way to contribute to the community is by letting us know who you want to hear from. Nominate or refer that amazing speaker and we’ll do the rest. Contact alytong13@gmail.com

JUC SPONSORSHIPS

Lastly, be a JUC sponsor. Any organization can do this – whether a vendor that sells into the Jenkins ecosystem or a company that has received value from Jenkins and wants to give back to the community. You can find out more here. (NOTE: JUC is not a moneymaking venture for the community – so sponsorships do make a difference.)

Categories: Open Source

100K Celebration Podcast

As a part of the Jenkins 100K celebration, Dean Yu, Andrew Bayer, R. Tyler Croy, Chris Orr, and I got together late Tuesday evening to go over the history of the project, how big the community was back then, how we grew, where we are now, and maybe a bit about the future.

We got carried away and the recording became longer than we all planned. But it has some nice sound bites, backstage stories, and stuff even some of us didn't know about! I hope you'll enjoy it. The MP3 file is here, or you can use your favorite podcast app and subscribe to http://jenkins-ci.org/podcast.

Categories: Open Source

Eating the dog food

Sonar - Wed, 02/25/2015 - 17:36

The SonarQube platform includes an increasingly powerful lineup of tools to manage technical debt. So why don’t you ever see SonarSourcers using Nemo, the official public instance, to manage the debt in the SonarQube code? Because there’s another, bleeding-edge instance where we don’t just manage our own technical debt, we also test our code changes, as soon as possible after they’re made.

Dory (do geeks love a naming convention, or what?) is where we check our code each morning, and mid-morning, and so on, and deal with new issues. In doing so, each one of us gives the UI – and any recent changes to it – a thorough workout. That’s because Dory doesn’t run the newest released version, but the newest milestone build. That means that each algorithm change and UI tweak is closely scrutinized before it gets to you.

The result is that we often iterate many times on any change to get it right. For instance, SonarQube 5.0 introduced a new Issues page with a powerful search mechanism and keyboard shortcuts for issue management. Please don’t think that it sprang fully formed from the head of our UI designer, Stas Vilchik. It’s the result of several months of design, iteration, and Continuous Delivery. First came the bare list of issues, then keyboard shortcuts and inter-issue navigation, then the wrangling over the details. Because we were each using the page on a daily basis, every new change got plenty of attention and lots of feedback. Once we all agreed that the page was both fully functional and highly usable, we moved on.

The same thing happens with new rules. Recently we implemented a new rule in the Java plugin, based on FindBugs: "Serializable" classes should have a version id. The changes were made, tested, and approved. Overnight the latest snapshot of the plugin was deployed to Dory, and the next morning the issues page was lit up like a Christmas tree.

We had expected a few new issues, but nothing like the 300+ we got, and we (the Java plugin team and I) weren’t the only ones to notice. We got “feedback” from several folks on the team. So then the investigation began: which issues shouldn’t be there? Well, technically they all belonged: every class that was flagged either implemented Serializable or had a (grand)parent that did. (Subclasses of Serializable classes are Serializable too, so for instance every Exception is Serializable.) Okay, so why didn’t the FindBugs equivalent flag all those classes? Ah, because it has some exclusions.
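To see why the net was cast so wide, here is a minimal sketch of the pattern (the class names are invented for illustration, not taken from the SonarQube codebase): a class can pick up Serializable from its parent, so it is flagged unless it declares its own version id.

    import java.io.Serializable;

    // Flagged: implements Serializable but declares no serialVersionUID.
    class Settings implements Serializable {
      String theme;
    }

    // Also flagged: Serializable is inherited, because every Exception is Serializable.
    class ImportFailedException extends Exception {
    }

    // Compliant: the version id is declared explicitly.
    class Profile implements Serializable {
      private static final long serialVersionUID = 1L;
      String name;
    }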

Next came the debate: should we have exclusions too, and if so which ones? In the end, we slightly expanded the FindBugs exclusion list and re-ran the analyses. A few issues remained, and they were all legit. Perfect. Time to move on.

When I first came to SonarSource and I was told that the internal SonarQube instance was named Dory, I thought I got it: Nemo and Dory. Haha. Cute. But the more I work on Dory, the more the reality sinks in. We rely on Dory on a daily basis; she’s our guide on the journey. But since our path isn’t necessarily a straight line, it’s a blessing for all of us that she can forget the bad decisions and only retain the good.

Categories: Open Source

100K Celebration Podcast Recording

In preparation for Jenkins 100K celebration, I'm going to record a one-time podcast with Dean Yu, Andrew Bayer, and R. Tyler Croy.

My current plan is to go over the history of the project, how big the community was back then, how we grew, where we are now, and maybe a bit about the future.

But if you have any other suggestions/questions that you'd like us to discuss, you have 3 or 4 more hours to send in that suggestion! Your feedback would help us make a better recording, so please don't hesitate to tell us.

Categories: Open Source

Four things that blew my mind with “Typemock Isolator”

The Typemock Insider Blog - Tue, 02/24/2015 - 12:49

I just started working at Typemock a few months ago and the company decided that we need to take an hour a week to write about our experience or about anything, but it has to be about the company. Believe it or not, I actually like writing and not only code. So yeah… I decided […]

The post Four things that blew my mind with “Typemock Isolator” appeared first on The Unit Testing Blog - Typemock.

Categories: Open Source

Jenkins 100K celebration pictures

In preparation for the celebration of 100K installations, 1000 plugins, and 10 years of Jenkins, we've had these images created.

I hope folks can use these images to mark the occasion! The full size pictures are here.

Categories: Open Source

SonarQube Java Analyzer: The Only Rule Engine You Need

Sonar - Thu, 02/12/2015 - 17:17

If you have been following the releases of the Java plugin, you might have noticed that we work on two major areas for each release: we improve our semantic analysis of Java, and we provide a lot of new rules.

Another thing you might have noticed, thanks to the tag system introduced by the platform last year, is that we are delivering more and more rules tagged with “bug” and “security”. This is a trend we’ll try to strengthen in the Java plugin, to provide users with valuable rules that detect real problems in their code, and not just formatting or code convention issues.

What you might wonder then is: where do we get the inspiration for those rules?  Well, for starters, the SonarSource codebase is mostly written in Java, and most SonarSource developers are Java developers. So in analyzing our own codebase we find some patterns that we want to detect, turn those patterns into rules, and provide the rules to our users. But that is not enough, and that is why we are taking inspiration from other rule engines, and more specifically FindBugs. We are in the process of deprecating FindBugs rules by rewriting them using our technology.

Our goal is that at some point in 2015 we’ll stop shipping the FindBugs plugin by default with the platform (we’ll still support it and provide it through the update center) because out of the box, the Java Plugin will provide at least as much (hopefully more!) value as FindBugs.

This might seem pretentious, but there are a few reasons we are moving in this direction:

  • This is a move we already made with PMD and Checkstyle (and we are still supporting the sonar-pmd-plugin and sonar-checkstyle-plugin).
  • FindBugs works only at the bytecode level: the analysis only runs on compiled classes. The Sonar Java Plugin works with both sources and bytecode, and is thus able to be more precise in its analysis, eliminating false positives and detecting patterns that cannot be detected by FindBugs.
    For instance consider the following code run against the Java Plugin rule “Identical expressions should not be used on both sides of a binary operator”, which deprecates multiple FindBugs rules:

    //...
    if (a == a) { // self comparison
      System.out.println("foo");
    }
    if (2 + 1 * 12 == 2 + 1 * 12) { // self comparison
      System.out.println("foo");
    }
    //...
    

The approach used by FindBugs, which relies only on bytecode, will not be able to detect the second issue, because the second if will be erased by the compiler and thus will not be visible in the bytecode.

  • FindBugs project activity: The activity on the project is quite low and thus the value coming out of it does not come fast enough to satisfy our users.
  • Documentation: One thing we really value at SonarSource, and that we think has made our products great, is that for each issue we raise we provide a clear explanation of why we raised the issue and an indication of how to fix it. This is something that FindBugs clearly lacks in our view, and we are confident we can offer better value in this area.

As we reimplement the FindBugs rules, our goal is also to remove some useless or outdated rules, merge close-in-meaning rules, and report fewer false positives than FindBugs does.

However, this is going to take some work: we are still one step behind FindBugs regarding an essential part of what makes it valuable, the Control Flow Graph (CFG). Briefly: a CFG allows tracking the value of a variable through the execution paths of your code. An example of its use is the ability to detect a NullPointerException without executing the code. This feature is not implemented yet in the SonarQube Java Plugin, but a first version was shipped in the latest version (3.3) of the C/C++ plugin. It’s on the roadmap of the Java plugin to embed this feature and deprecate the FindBugs rules requiring it.
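As a rough illustration (the code below is invented for this post, not taken from any rule’s documentation), here is the kind of defect that needs path tracking: whether the final dereference is safe depends on which branch was taken, and only an analysis that follows each execution path can see the null flowing into it without running the code.

    class PathSensitiveExample {
      static String describe(String name, boolean anonymous) {
        String label = null;
        if (!anonymous) {
          label = name.trim();
        }
        // On the path where anonymous is true, label is still null here, so
        // this dereference can throw a NullPointerException. Spotting that
        // without executing the code requires following each path through the CFG.
        return label.toUpperCase();
      }
    }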

This rewriting of FindBugs rules has already started, with a huge effort on documenting and specifying them properly. Out of the 423 rules provided by FindBugs we have decided to reimplement 394, and have already specified replacements for 286. At the time of this writing, 157 rules have already been reimplemented using our own technology (so about 40% of the implementable rules).

Don’t get me wrong: FindBugs is a great tool providing a lot of useful feedback to Java developers. But within the year, we will be at a point in the development of the SonarQube Java plugin where we can deliver even better feedback, and detect issues that even FindBugs can’t find.

Categories: Open Source

Jenkins Celebration Day is February 26

Congratulations! The Jenkins project officially went over the 100K active users mark sometime in January. As of January 31, we were at 102,992. YOU are one of the 100K active users!

As discussed at a couple of recent project meetings, we have designated February 26 as Jenkins Celebration Day.

To make some noise, here is what we are doing starting NOW:

  • Write a blog about anything related to Jenkins. Post your blog and Tweet out a link to it. Include the hashtag #Jenkins100K in your post.
  • On February 26, we will hold a raffle and pick six names at random. The grand prize winner will get a 3D Jenkins Butler model. Five others will get their pick of Jenkins swag (up to $20) from the Jenkins online store.
OTHER WAYS TO CELEBRATE

There are a number of other things planned and we want YOU to be involved. This blog post is the central place to come for all things related to the celebration.

  • Recording – Jenkins Governance Board members Dean, Tyler, Andrew and I will get together this month and record some thoughts about the Jenkins project. We will share that recording with you from this page on February 26.
  • Twitter Badge – For those of us on social media who want to proudly celebrate our community, we will have a special badge that you can use for your profile image on Twitter or any of the other social media forums. Feel free to use the badge as long as you want – but let’s get as many of us using it as possible between now and February 27.
  • Social Media Images
    • CloudBees is donating a series of images that we can all push out on social media (whatever platform(s) you use).
    • Pick your favorite(s) and push them out on Twitter, Facebook, G+
  • Certificate (available on this blog post soon) – Download your very own “I am part of the Jenkins 100K” certificate. Print it out and proudly display it on the wall of your cube or office.
  • Visibility – The Community will also issue a press release on February 26 announcing our milestone news.
  • Sign the “card” – Consider this blog a Congratulations card to the entire community. Share your thoughts in a comment on this blog about anything Jenkins-related that you wish!

This is a big milestone for the Community and one you should be proud to be part of! Let’s make some noise…

Categories: Open Source

IntelliJ

Selenium - Sun, 02/08/2015 - 16:15

Every year, JetBrains are kind enough to donate an OSS license for IntelliJ to the Selenium project. As part of that process, they’ve asked that we review the product and (kudos to them!) have been clear that they hope we’re open and honest. So, I’ll be open and honest.

When I tell people that I’m a professional Java developer, people in some circles make sympathetic noises and (sometimes) jokingly refer to how painful my coding life must be. After all, there are several far trendier and hipper languages, from Ruby, various flavours of JavaScript, Python and Haskell, to other languages running on the JVM such as Scala and Clojure. I tend to agree that Java is a relatively unexciting language as it’s generally practiced — Java 8 contains a wealth of goodies that lots of people won’t be using for years, since they’ve still got to support Java 6(!) apps. Where I disagree with the detractors is the idea that using Java is something to pity a developer for: Java on its own isn’t much fun, but Java with IntelliJ is one of my favourite programming experiences.

I’ve been using Java since the (very) late 90s, and have been using IntelliJ off-and-on since 2003 or so. In the intervening just-over-a-decade, what started as a tool that crossed the Rubicon of “being able to do refactoring” has matured. It has literally changed the way I write code: as a matter of course, I now use the “Introduce Variable” refactoring rather than typing out the initial assignment of a value to a variable. Indeed, with IntelliJ, I frequently stop thinking about the programming language and start thinking about the structure of the solution. Its refactorings make exploring large scale changes easy and entirely reliable, and once the restructurings are complete, I can jump to symbols with ease.
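To illustrate (a toy example of my own, not code from the Selenium tree): you write the expression first, then let the refactoring produce the declaration and the initial assignment.

    class IntroduceVariableExample {
      int shippingCost(int weightKg) {
        // You start by writing just the expression:
        //   return weightKg * 3 + 10;
        // Invoking "Introduce Variable" on `weightKg * 3` then generates the
        // declaration and initial assignment for you:
        int baseCost = weightKg * 3;
        return baseCost + 10;
      }
    }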

Code exploration is aided by the simple and quick ways IntelliJ can find usages, and it’s simple to find unused code, as unused method declarations are highlighted in a different shade from used ones. The integrated debugger is sufficiently capable that, coupled with unit tests, it’s normally pretty easy to figure out why some odd behaviour is happening. And, speaking of unit tests, the UI is clear and (I find) intuitive and easy to use.

And those users of fancy-pants languages such as Clojure, Ruby, Python and JavaScript (and PHP) can get plugins that extend IntelliJ’s capabilities and insight into those languages. Although it’s been a long time since I’ve had to deal with Spring and JEE, when I do, IJ has my back, grokking the config files. The Maven and Gradle integration appears to work too, though Selenium uses CrazyFun and is migrating to Buck, so I’ve seldom any need for it.

It’s not all wonder and joy. On large, multi-module codebases, IntelliJ seems to spend too long building caches. Activity Monitor on the Mac suggests it’s doing this in a single-threaded manner, which is wasteful on a multicore machine. Switching away from IJ, doing something on the command line involving source control and then switching back is a sure-fire way to make it rebuild the caches, making it unresponsive. Extending IntelliJ by writing plugins is a black art — the documentation is scattered and appears out of date, making getting started on writing one hard.

Overall, though, I love IntelliJ. On the Selenium project, it’s the IDE of choice, and I’ve been incredibly productive in it. Thank you, JetBrains, for a wonderful tool.


Categories: Open Source

C/C++/Objective-C: Dark past, bright future

Sonar - Thu, 02/05/2015 - 14:03

We’ve just released version 3.3 of the C/C++/Objective-C plugin, which features an increased scope and precision of analysis for C, as well as detection of real bugs such as null pointer dereferences and bugs related to types for C. These improvements were made possible by the addition of semantic analysis and symbolic execution, which is the analysis not of the structure of your code, but of what the code is actually doing.

Semantic analysis was part of the original goal set for the plugin about three years ago. Of course, the goal was broader than that: develop a static analyser for C++. The analyzer needed to continuously check your code’s conformance with your coding standards and practices, and more importantly detect bugs and vulnerabilities to help you to keep technical debt under control.

At the time, we didn’t think it would be hard, because many languages were already in our portfolio, including Java, COBOL, PL/SQL. Our best engineers, Freddy Mallet and Dinesh Bolkensteyn, were already working on C, the natural predecessor of C++. I joined them, and together we started work on C++. With the benefit of hindsight, I can say that we all were blind. Totally blind. We had no idea what a difficult and ambitious task we had set ourselves.

You see, a static analyzer is a program which is able to precisely understand what another program does. And, roughly speaking, a bug is detected when this understanding is different from what the developer really wanted to write. Huh! Already, the task is complex, but it’s doubly so for C++. Why is automatic analysis of C++ so complicated?

First of all, both C and C++ have the concept of preprocessing. For example consider this code:

struct command commands[] = { cmd(quit), cmd(help) };

One would think that there are two calls to the “cmd” function with the parameters “quit” and “help”. But that might not be the case if just before this line there’s a preprocessing directive:

#define cmd(name) { #name, name ## _command }

That directive completely changes the meaning of the original code, literally turning it into

struct command commands[] = { { "quit", quit_command }, { "help", help_command } };

The existence of the preprocessor complicates many things on many different levels for an analysis. But most important is that the correct interpretation of preprocessing directives is crucial for the correctness and precision of an analysis. We rewrote our preprocessor implementation from scratch three times before we were satisfied with it. And it’s worth mentioning that on the market of static analysers (both commercial and open-source) you can easily find tools that don’t do preprocessing at all or do it only imprecisely.

Let’s move to the next difficulty. I’ve mentioned in the past that C and C++ are hard to parse. It’s time to talk a little bit about why. Roughly speaking, parsing is the process of recognizing language constructions – i.e. seeing what’s a statement, what’s an expression, and so on. Let’s take some example code and try to figure out what it is.

T * a

If this were Java code, the answer would be straightforward: most probably this is a multiplication, and part of a bigger expression. But the answer isn’t that simple for C/C++. In general, the answer is “it depends…” This could indeed be an expression statement, if both “T” and “a” are variables:

int T, a;
T * a;

But it could also be the declaration of variable “a” with a type of pointer to “T”, if “T” is a type:

typedef int T;
T * a;

In other words, the context can completely change the meaning of code. This is called ambiguity.

Like natural languages, the grammars of programming languages can be ambiguous. While the C language has just a few ambiguous constructions, C++ has tons of them. And as you’ve seen, correct parsing is not possible without information about types. But getting that information is a difficulty in and of itself because it requires semantic analysis of language constructs before you can understand their types and relations. And that’s where it starts to be really complex. To parse we need semantic analysis, and to do semantic analysis we need to parse. Chicken and egg problem.

We had hit a wall, and when we looked around, we realized we weren’t alone. Many tools don’t even try to parse, get information about types or distinguish between ambiguous and unambiguous cases.

And then we found GLL, a relatively new theory about generalized parsing. It was first published in 2010, and there still aren’t any ready-to-use, publicly-available implementations for Java. Implementing a GLL parser wasn’t easy, and took us quite a while, but the ROI was high. This parser is able to preserve information about encountered ambiguities without their actual resolution. That allows us to do precise analysis of at least the unambiguous constructions without producing false-positives on ambiguous constructions.

The GLL parser was a win-win, and a game changer! After two years of development from the first commit, we released precise preprocessing and parsing in version 2.0 of the C++ Plugin (approximately a year ago).

With the original goal well on the way to being met, we started to dream again, raised our expectations even higher, and were ready to welcome new developers. Today, I still work on the plugin, but it’s maintained primarily by Massimo Paladin and Samuel Mercier. They solved the analysis configuration problem, and added support for Objective-C and Microsoft Component Extensions to the plugin.

Our next goal is to apply semantic analysis and symbolic execution to Objective-C, and of course after that to C++, and to use them to cover more MISRA rules. So this is probably not the end of the story about the difficulties of developing a static analyzer for C/C++/Objective-C – who knows what else will be encountered on the way. But now we are not blind as we were before; now we know that this is difficult. However, based on the past, I can say that we at SonarSource are unstoppable and even the most incredible dreams come true! So keep dreaming! And just never ever give up!

Categories: Open Source

SonarQube 5.0 in Screenshots

Sonar - Wed, 01/28/2015 - 17:56

The team is proud to announce the release of SonarQube 5.0, which includes many new features:

  • Issues page redesign
  • Keyboard shortcuts added to Issues
  • Built-in SCM support

Issues page redesign

With this version of the SonarQube platform, the Issues page has had a complete overhaul.

Issues are grouped in the list by file, and from the list of issues, you can easily navigate to the issue in the context of the code, as you’re used to seeing it.

Issue search has also been overhauled. You can still choose from criteria, but now next to each facet of the criteria, you see a count of the relevant issues.

Selected facets are highlighted in blue, and selecting/deselecting a facet immediately (okay, there’s a quick trip to the server and back) updates your search results and the issue counts next to all the other facets.

Keyboard shortcuts added to Issues

The intent behind the redesign is to allow you to use the Issues page quickly and efficiently to manage issues on a daily basis. To that end, extensive effort has gone into providing a broad set of keyboard shortcuts. ‘?’ brings up the list, and Esc closes it.

From the list of issues, right-arrow takes you to the issue-in-context, and left-arrow takes you back to the list. In either context, up-arrow/down-arrow takes you to the next issue – in the same file or the next one – and you can use the shortcuts to comment, assign…

Built-in SCM support

SCM “blame” information has been an important data point in the SonarQube interface for a long time, but until now a plugin was required to use it. Now SCM data comes as part of the platform, with built-in support for SVN and Git.

Speaking of Git, its rise in popularity has meant that the use of ‘/’ in branch names has become increasingly common. Until now, that was unsupported by SonarQube. That changes with 5.0, presumably making many Git-ers happy. :-)

That’s all, Folks!

Time now to download the new version and try it out. But don’t forget that you’ll need Java 7 to run this version of the platform (you can still analyse Java 6 code), and don’t forget to read the installation or upgrade guide.

Categories: Open Source

2015 Jenkins User Conferences - Call for Papers

The Jenkins User Conference 2015 is seeking submissions that reflect the latest innovations in Jenkins usage. This is your chance to educate, share and inspire the community with stories of how you've used Jenkins to continuously build that amazing project or how you developed that popular plugin that everyone is using.

If you're game, here are some suggestions to get your creative juices going:

  • Scaling Jenkins within the enterprise
  • Jenkins as the orchestrator for continuous delivery
  • Plug-in development
  • Jenkins techniques that solve testing/building problems in specific application areas: mobile, enterprise/web/cloud and UI testing
  • War stories that speak to a problem you faced, the Jenkins solution you implemented to solve it and the results you realized
  • Jenkins best practices, tips and tricks
  • Jenkins in the cloud - if you or your company is currently using Jenkins in the cloud we’d love to hear your story
  • Beyond Java (Jenkins with PHP, Ruby, etc.)

We are upping the ante at this year's JUCs. We are moving from a one-day conference to a two-day conference for SF and London - that's 18 additional cutting-edge sessions to learn from.

SUBMISSION DEADLINE IS MARCH 8, 2015!

There's also a wide variety of event sponsorship opportunities available. There are offerings from Gold to Silver packages, exhibitor packages in our world-class expo hall, speaking sessions, free passes, and many branding opportunities. For inquiries, please contact juc-sponsorship@cloudbees.com

Looking forward to receiving your amazing proposals!

Categories: Open Source

Office Hours tomorrow: workflow security model & plugin compatibility

In tomorrow's Jenkins office hours, Jesse Glick will talk about two topics in the workflow plugin that he has been asked about:

  • Security model: script security, permissions
  • Plugin compatibility: SimpleBuildStep and friends, custom steps, etc. (see the sketch below)
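For a flavour of the compatibility topic, here is a minimal sketch of a build step that implements SimpleBuildStep (the class and its log message are hypothetical, and the @DataBoundConstructor and descriptor a real plugin needs are omitted). Because it accepts a generic Run rather than an AbstractBuild, workflow can invoke it just as a freestyle project can:

    import hudson.FilePath;
    import hudson.Launcher;
    import hudson.model.Run;
    import hudson.model.TaskListener;
    import hudson.tasks.Builder;
    import jenkins.tasks.SimpleBuildStep;
    import java.io.IOException;

    // A hypothetical build step. Implementing SimpleBuildStep means the step
    // works against any Run, so the workflow plugin can call it as well as a
    // traditional freestyle build.
    public class HelloStep extends Builder implements SimpleBuildStep {
      @Override
      public void perform(Run<?, ?> run, FilePath workspace, Launcher launcher,
                          TaskListener listener) throws InterruptedException, IOException {
        listener.getLogger().println("Hello from build #" + run.getNumber());
      }
    }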

The session should be interesting to anyone using workflow or thinking about using it. Jesse is one of the top contributors in the community, so it'd definitely be worth your time!

Categories: Open Source

COBOL is… Alive!

Sonar - Wed, 01/14/2015 - 20:20

Most C, Java, C++, C#, JavaScript… developers reading this blog entry might think that COBOL is dead and that SonarSource would do better to focus its attention on more hyped languages like Scala, Go, Dart, and so on. But in 1997, the Gartner Group reported that 80 percent of the world’s business ran on COBOL, with more than 200 billion lines of code in existence and an estimated 5 billion lines of new code annually. COBOL is mainly used in the banking and insurance markets, and according to what we have seen in the past years, the erosion of the number of COBOL lines of code used in production is pretty low. So not only is COBOL not YET dead, but several decades will be required to see this death really happen. We released the first version of the COBOL plugin at the beginning of 2010, and this language plugin was in fact the first one to embed our own source code analysis technology, even before Java, C, C++, PL/SQL, … So at SonarSource, COBOL is a kind of leading technology :).

Multiple vendor extensions and lack of structure

The COBOL plugin embeds more than 130 rules, but before talking about those rules, let’s talk about the wide range of different COBOL dialects that are supported by the plugin. Indeed, since 1959 several specifications of the language and preprocessor behavior have been published, and most COBOL compilers have extended those specifications. So providing an accurate COBOL source code analyser means supporting most of those dialects: IBM Enterprise Cobol, HP Tandem, Bull GCos, IBM Cobol II, IBM Cobol 400, IBM ILE Cobol, Microfocus AcuCobol, OpenCobol, … which is the case for our plugin. Moreover, for those of you who are not familiar with COBOL source code: let’s imagine a C source file containing 20,000 lines of code, no functions, and just some labels to group statements and to make it possible to “emulate” the concept of a function. Said like this, I guess everyone can understand how easy it can be to write unmaintainable and unreliable COBOL programs.

Need for tooling

Starting from this observation, managing a portfolio of thousands of COBOL programs, each one containing thousands of COBOL lines of code, without any tooling to automatically detect quality defects and potential bugs is a bit risky. The SonarSource COBOL plugin makes it possible to continuously analyse millions of lines of COBOL code to detect such issues. Here are several examples of the rules provided by the plugin:

  • Detection of unused paragraphs, sections and data items.
  • Detection of incorrect PERFORM ... THRU ... control flow, where the starting procedure is located after the ending one in the source code, thus leading to unexpected behavior.
  • Tracking of GO TO statements that transfer control outside of the current module, leading to unstructured code.
  • Copy of a data item (variable) into another, smaller data item, which can lead to data loss.
  • Copy of an alphanumeric data item to a numeric one, which can also lead to data loss.
  • Tracking of EVALUATE statements not having the WHEN OTHER clause (similar to an if without an else).
  • Detection of files which are opened but never closed.

And among those 130+ rules, 30+ target the SQL code which can be embedded into COBOL programs. One such rule tracks LIKE conditions starting with *. Another tracks the use of arithmetic expressions and scalar functions in WHERE conditions. And last but not least, here are some other key features of this SonarSource COBOL plugin:

  • Copybooks are analysed in the context of each COBOL program and issues are reported directly on those copybooks.
  • Remediation cost to fix issues is computed with the help of the SQALE method: www.sqale.org.
  • Even on big COBOL applications containing thousands of COBOL programs and so potentially millions of lines of code and thousands of issues, tracking only new issues on new or updated source code is easy.
  • Duplications in PROCEDURE DIVISION and among all COBOL programs can also be tracked easily.
  • To make sure that code complies with internal coding practices, a Java API allows the development of custom rules.

How hard is it to evaluate this COBOL plugin?

So YES, Cobol is alive, and the SonarSource COBOL plugin helps make it even more maintainable and reliable.

Categories: Open Source

SonarQube 5.x series: It just keeps getting better and better!

Sonar - Fri, 01/09/2015 - 15:03

We recently wrapped up the 4.x series of the SonarQube platform by announcing its Long Term Support version: 4.5.1. At the same time, we sat down to map out the themes for the 5.x series, and we think they’re pretty exciting.

In the 5.x series, we want the SonarQube platform to become:

  • Fully operational for developers: with easy management of the daily incoming technical debt, and “real” cross-source navigation features
  • Better tailored for big companies: with great performance and more scalability for large instances, and no more DB access from an analysis

Easy management of the daily incoming technical debt

A central Issues page

If you came home one day to find an ever-growing puddle of water on your floor, what’s the first thing you’d do? Grab a mop, or find and fix the source of the water? It’s the same with technical debt. The first thing you should care about is stopping the increase in debt (shutting off the leak) before fixing existing debt (grabbing a mop).

Until now, the platform has been great for finding where technical debt is located, but it hasn’t been a good place for developers to efficiently manage the incoming technical debt they add every day. Currently, you can subscribe to notifications of new issues, but that’s all. We think that’s a failing; developers should be able to rely on SonarQube to help them in this daily task.

To accomplish this, we’ll make the Issues page central. It will be redesigned to let users filter issues very efficiently thanks to “facets”. For instance, it will be almost effortless to see “all critical and blocker issues assigned to me on project Foo” with a distribution per rule. Or “all JavaScript critical issues on project Foo“.

With these new capabilities, the central Issues page will inevitably replace the old Issues drilldown page and eliminate its limitations (e.g. few filters, static counts that aren’t updated when issues are changed in the UI, …). In other words, when dealing with issues and technical debt, users will be redirected to the Issues space, and benefit from all those new features.

Issues will also get a tagging mechanism to help developers better classify pending technical debt. Each issue will inherit the tags on its associated rule, so it will be easy to find “security” issues, for instance. And users will be able to add or remove additional tags at will. This will help give a clearer vision of what the technical debt on a project is about: is it mainly bugs or just simple naming conventions? “legacy framework”-related issues or “new-stack-that-must-be-mastered” issues?

The developer at the center of technical debt management

Developing those great new features on the Issues page is almost useless if you, as a developer, always have to head to the Issues page and play with “facets” to find what you’re looking for. Instead, SonarQube must know what matters to you as a developer, i.e. it must be able to identify and report on “my” code. This is one reason the SCM Activity plugin will gently die, and come back to life as a core feature in SonarQube – with built-in support for Git and Subversion (other SCM providers will be supported as plugins). This will let SonarQube know which changes belong to which developer, and automatically assign new issues to the correct user. So you’ll no longer need to swim through all of the incoming debt each day to find your new issues.

“Real” cross source navigation features

For quite some time, SonarQube has been able to link files together in certain circumstances – through duplications (you can navigate to the files that have blocks in common with your code) or test methods (when coverage per test is activated, you can navigate to the test file that covers your code, and vice-versa). But this navigation capability has been quite limited, and the workspace concept that goes with it is the best proof of that: it is restricted to the context of the component viewer.

With the great progress made on the language plugin side, SonarQube will be able to know that a given variable or function is defined outside of the current file, and take you to the definition. This new functionality can help developers understand issues more quickly and thoroughly, without the need to open an IDE. You no longer have to wonder where a not-supposed-to-be-null-but-is attribute is defined. You’ll be able to jump to the class definition right from the Web UI. And if you navigate far away from the initial location, SonarQube will help you remember your way, and give quick access to the files you browsed recently – wherever they were. In fact, we want SonarQube to become the central place to take a quick look at code without expending a lot of effort to do it (i.e. without the need to go to a workstation, open an IDE, pull the latest code from the SCM repository, probably build-it, …).

Focus on scalability and performance

SonarQube started as a “small” application and gradually progressed to become an enterprise-ready application. Still, its Achilles’ heel is the underlying relational database. This is the bottleneck each time we want SonarQube to be more scalable and performant. What’s more, supporting 4 different database vendors multiplies the difficulty of writing complex SQL queries efficiently. So even though the database will remain the place where we ensure data integrity, updating that data with analysis results must be done through the server, and searching must use a stack designed for performant searches across large amounts of data. We’ve implemented this new stack using Elasticsearch (ES) technology.

Throughout the 5.x series, most domains will slowly get indexed in ES, giving a performance boost when requesting the data. This move will also open new doors to implementing features that were inaccessible with a relational database – like the “facets” used on the Rules or Issues pages. And because ES is designed to scale, SonarQube will benefit from its ability to give amazing performance while adding new features on large instances with millions of issues and lines of code.

Decoupling the SonarQube analyses from the DB

The highest-voted open ticket on JIRA is also one of the main issues when setting up SonarQube in large companies: why does project analysis make so many queries to the database? And actually, why does it even need a connection to the database at all? This causes big performance issues (when the analysis is run far away from the DB) and security issues (DB credentials must be known to the batch, and some specific ports must be opened).

Along the way, the SonarQube 5.X releases will progressively cut dependencies to the database so that in the end, analysis simply generates a report and sends it to the server for processing. This will not only address the performance and security concerns, it will also greatly improve the design of the whole architecture, clearly carving it into different stacks with their own responsibilities. In the end, analysis will only call the analysers provided by the language plugins, making source code analysis blazing fast. Everything related to data aggregation or history computation (which once required so many database queries during analysis) will be handled by a dedicated “Compute Engine” stack on the server. Integration in the IDE will also benefit from this separation because only the language plugin analysers will be run – instead of the full process – opening up opportunities to have “on-the-fly” analyses.

Enhanced authentication and authorization system

A couple of versions ago, we started an effort to break the initial coarse-grained permissions (mainly global ones) into smaller ones. The target is to be able to have more control over the different actions available in SonarQube, and to be able to define and customize the roles available on the platform. This is particularly important on the project side, where there are currently only 4 permissions, and they don’t allow a lot of flexibility over what users can or cannot do on a project.

On the authentication side, the focus will be providing a reference Single Sign-On (SSO) solution based on HTTP headers – which is a convenient and widespread way of implementing SSO in big companies. API token authentication should also come along to remove the need to pass user credentials over the wire for analysis or IDE configuration.

All this with other features along the way

These are the main themes we want to push forward for the 5.x series, but obviously lots of other “smaller” features will come along the way. At the time I’m writing this post, we’ve already started working on most of those big features and we are excited about seeing them come out in upcoming versions. I’m sure you share our enthusiasm!

Categories: Open Source

Test Framework Feature Comparisons – What If We Cooperated?

NUnit.org - Sun, 04/07/2013 - 03:14

Software projects often publish comparisons with other projects, with which they compete. These comparisons typically have a few characteristics in common:

  • They aim at highlighting reasons why one project is superior – that is, they are marketing material.
  • While they may be accurate when initially published, competitor information is rarely updated.
  • Pure factual information is mixed with opinion, sometimes in a way that doesn’t make clear which is which.
  • Competitors don’t get much say in what is said about their projects.
  • Users can’t be sure how much to trust such comparisons.

Of course, we’re used to it. We no longer expect the pure, unvarnished truth from software companies – no more than from drug companies, insurance companies, car salesmen or government agencies. We’re cynical.

But one might at least hope that open source projects would do better. It’s in all our interests, and in our users’ interests, to have accurate, up-to-date, unbiased feature comparisons.

So, what would such a comparison look like?

  • It should have accurate, up-to-date information about each project.
  • That information should be purely factual, to the extent possible. Where necessary, opinions can be expressed only if clearly identified as opinion by their content and placement.
  • Developers from each project should be responsible for updating their own features.
  • Developers from each project should be accountable for any misstatements that slip in.

I think this can work because most of us in the open source world are committed to… openness. We generally value accuracy and we try to separate fact from opinion. Of course, it’s always easy to confuse one’s own strongly held beliefs with fact, but in most groups where I participate, I see such situations dealt with quite easily and with civility. Open source folks are, in fact, generally quite civil.

So, to carry this out, I’m announcing the .NET Test Framework Feature Comparison project – ideas for better names and an acronym are welcome. I’ll provide at least a temporary home for it and set up an initial format for discussion. We’ll start with MbUnit and NUnit, but I’d like to add other frameworks to the mix as soon as volunteers are available. If you are part of a .NET test framework project and want to participate, please drop me a line.

Categories: Open Source

Software Testing Latest Training Courses for 2012

The Cohen Blog — PushToTest - Mon, 02/20/2012 - 05:34
Free Workshops, Webinars, Screencasts on Open Source Testing

Need to learn Selenium, soapUI or any of a dozen other Open Source Test (OST) tools? Join us for a free Webinar Workshop on OST. We just updated the calendar to include the following Workshops:

And if you are not available for the above Workshops, try watching a screencast recording.

Watch The Screencast

Categories: Companies, Open Source

Selenium Tutorial For Beginners

The Cohen Blog — PushToTest - Thu, 02/02/2012 - 08:45
Selenium Tutorial for Beginners

Selenium is an open source technology for automating browser-based applications. Selenium is easy to get started with for simple functional testing of a Web application. I can usually take a beginner with some light testing experience and teach them Selenium in a 2 day course. A few years ago I wrote a fast and easy tutorial, Building Selenium Tests For Web Applications, for beginners.
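To give a taste of what the first chapter covers, here is a minimal first functional test using the Selenium 2 (WebDriver) Java API; the URL and the expected title are placeholders, not taken from the tutorial itself.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class FirstSeleniumTest {
      public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver(); // launches a Firefox browser
        try {
          driver.get("http://example.com/"); // placeholder URL
          // A trivial functional check: did the expected page load?
          if (!driver.getTitle().contains("Example")) {
            throw new AssertionError("Unexpected title: " + driver.getTitle());
          }
        } finally {
          driver.quit(); // always close the browser
        }
      }
    }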

Read the Selenium Tutorial For Beginners

The Selenium Tutorial for Beginners has the following chapters:
  • Selenium Tutorial 1: Write Your First Functional Selenium Test
  • Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
  • Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
  • Selenium Tutorial 4: Install and Configure Selenium RC, Grid
  • Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
  • Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
  • Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
  • Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
  • Selenium Tutorial 9: Debugging Selenium Tests
  • Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
  • Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
  • Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
  • Selenium Tutorial 13: Alternative To Selenium
A community of supporting open source projects - including my own PushToTest TestMaker - enables you to apply your Selenium tests as functional tests (for smoke testing, regression testing, and integration testing), load and performance tests, and production service monitors. These techniques and tools make it easy to run Selenium tests from test management platforms, including HP Quality Center, HP Test Director, Zephyr, TestLink, and QMetry, and from automated Continuous Integration (CI) systems, including Hudson, Jenkins, Cruise Control, and Bamboo.

I wrote a Selenium tutorial for beginners to make it easy to get started and take advantage of the advanced topics. Download TestMaker Community to get the Selenium tutorial for beginners and immediately build and run your first Selenium tests. It is entirely open source and free!

Read the Selenium Tutorial For Beginners

Categories: Companies, Open Source

5 Services To Improve SOA Software Development Life Cycle

The Cohen Blog — PushToTest - Fri, 01/27/2012 - 00:25
SOA Testing with Open Source Test Tools

PushToTest helps organizations with large scale Service Oriented Architecture (SOA) applications achieve high performance and functional service delivery. But it does not happen at the end of SOA application development. Success with SOA at Best Buy requires an Agile approach to software development and testing, on-site coaching, test management, and great SOA oriented test tools.

Distributing the work of performance testing through Agile epics, stories, and sprints reduces the testing effort overall and informs the organization's business managers about the service's performance. The biggest problem I see is keeping the testing transparent so that anyone - tester, developer, IT Ops, business manager, architect - can follow a requirement down to the actual test results.

With the right tools, methodology, and coaching an organization gets the following:
  • Process identification and re-engineering for Test Driven Development (TDD)
  • Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
  • Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
  • Construction of the organization's end-to-end tests with a team of PushToTest Global Professional Services, using this system and training of the existing organization's testers, Subject Matter Experts, and Developers to build and operate tests
  • On-going technical support
Download the Free SOA Performance Kit

On-Site Coaching Leads To Certification
The key to high quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization the coaches accomplish the following:
  • Test architects and test developers work with the existing Testing Team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test coding/scripting
  • Technical coaching on test automation to ensure that team members follow defined management processes
Cumulatively this effort is referred to as "Certification". When the development team produces a quality product, as demonstrated by simple functional tests, the partner QA teams take these projects and employ "best practice" test automation techniques. The resulting automated tests integrate with the requirements system (for example, Rally), the continuous integration system, and the governance systems (for example, HP Systinet).
Agile, Test Management, and Roles in SOA
An Agile software development process normally focuses first on functional testing - smoke tests, regression tests, and integration tests. Agile applied to SOA service development means the deliverables support the overall vision and business model for the new software. At a minimum we should expect:
  1. Product Owner defines User Stories
  2. Test Developer defines Test Cases
  3. Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
  4. Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated to the test management service
  5. Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
  6. As tests execute the test management system creates "Test Execution Records" showing the test results
Learn how PushToTest improves your SOA software development life cycle: click here.


Download the Free SOA Performance Kit

Categories: Companies, Open Source

Application Performance Management and Software Testing Trends and Analysis

The Cohen Blog — PushToTest - Tue, 01/24/2012 - 16:25
18 Best Blogs On Software Testing

2011 began with some pretty basic questions for the software testing world:
  • To what extent will large organizations dump legacy test tools for open source test tools?
  • How big would the market for private cloud software platforms be?
  • Does mankind have the tools to make a reliable success of the complicated world we built?
  • How big of a market will SOA testing and development be?
  • What are the best ways to migrate from HP to Selenium?
Let me share the answers I found. Some come from my blog, others from friends and partner blogs. Here goes:

The Scalability Argument for Service Enabling Your Applications. I make the case for building, deploying, and testing SOA services effectively. I point out that the weakness of this approach comes at the tool and platform level. For example, you may write 37% of an application's code simply to deploy your service.

How PushToTest Uses Agile Software Development Methodology To Build TestMaker. A conversation I had with Todd Bradfute, our lead sales engineer, on surfacing the results of using Agile methodology to build software applications.

"Selenium eclipsed HP’s QTP on job posting aggregation site Indeed.com to become the number one requisite job experience / skill for on-line posted automated QA jobs (2700+ vs ~2500 as of this writing,)" John Dunham, CEO at Sauce Labs, noted.

Run Private Clouds For Cost Savings and Control. Instead of running 400 Amazon EC2 machine instances, Plinga uses Eucalyptus to run its own cloud. Plinga needed the control, reliability, and cost-savings of running its own private cloud, Marten Mickos, CEO at Eucalyptus, reports in his blog.

How To Evaluate Highly Scalable SOA Component Architecture. I show how to evaluate highly scalable SOA component architecture. This is ideal for CIOs, CTOs, Development and Test Executives, and IT managers.

Planning A TestMaker Installation. TestMaker features test orchestration capabilities to run Selenium, Sahi, soapUI, and unit tests written in Java, Ruby, Python, PHP, and other languages in a Grid and Cloud environment. I write about the issues you may encounter installing the TestMaker platform.

Repurposing ThoughtWorks Twist Scripts As Load and Performance Tests. I really like ThoughtWorks Twist for building functional tests in an Agile process. This blog and screencast show how to rapidly find performance bottlenecks in your Web application using ThoughtWorks Twist with the PushToTest TestMaker Enterprise test automation framework.

4 Steps To Getting Started With The Open Source Test Engagement Model. I describe the problems you need to solve as a manager to get started with Open Source Testing in your organization.

Correlation Technology Finds The Root Cause Of Performance Bottlenecks. Use aspect-oriented programming (AOP) technology to surface memory leaks, thread deadlocks, and slow database queries in your Java Enterprise applications.

10 Agile Ways To Build and Test Rich Internet Applications (RIA). Shows how competing RIA technologies put the emphasis on test and deploy.

Oracle Forms Application Testing. Java Applet technology powers Oracle Forms and many Web applications. This blog shows how to install and use open source tools to test Oracle Forms applications.

Saving Your Organization From The Eventual Testing Meltdown of Using Record/Playback Solely. The Selenium project is caught between the world of proprietary test tool vendors and the software developer community. This blog talks about the tipping-point.

Choosing Java Frameworks for Performance. A round-up of opinions on which technologies are best for building applications: lightweight and responsive, RIA, with high developer productivity.

Selenium 2: Using The API To Create Tests. A DZone Refcard we sponsored to explain how to build tests of Web applications using the new Selenium 2 APIs. For Selenium 1, I wrote another Refcard; click here.

Test Management Tools. A discussion I had with the Zephyr test management team on Agile testing.

Migrating From HP Mercury QTP To PushToTest TestMaker 6. HP QTP just can't deal with the thousands of new Web objects coming from Ajax-based applications. This blog and screencast show how to migrate.

10 Tutorials To Learn TestMaker 6. TestMaker 6 is the easier way to surface performance bottlenecks and functional issues in Web, Rich Internet Application (RIA, using Ajax, Flex, Flash), Service Oriented Architecture (SOA), and Business Process Management (BPM) applications.

5 Easy Ways To Build Data-Driven Selenium, soapUI, Sahi Tests. This is an article on using the TestMaker Data Production Library (DPL) system as a simple and easy way to data-enable tests. A DPL does not require programming or scripting.

Open Source Testing (OST) Is The Solution To Modern Complexity. Thanks to management oversight, negligence, and greed, British Petroleum (BP) killed 11 people, injured 17 people, and dumped 4,900,000 barrels of oil into the Gulf of Mexico in 2010. David Brooks of the New York Times became an unlikely apologist for the disaster, citing the complexity of the oil drilling system.

Choosing automated software testing tools: Open source vs. proprietary. Colleen Fry's article from 2010 discusses how software testers decide which type of automated testing tool, or combination of open source and proprietary tools, best meets their needs. We came a long way in 2011 toward achieving these goals.

All of my blogs are found here.

Categories: Companies, Open Source