
Open Source

Call for Papers is Open for Geneva SonarQube Conference

Sonar - Fri, 07/31/2015 - 16:05

A few weeks ago, I announced a free SonarQube User Conference in Geneva on the 23rd and 24th of September. More than 100 people have already registered.

We want this 2-day conference to be as valuable as possible for participants, and thus want to have a variety of speakers. That’s why we are opening a call for papers, and invite anyone who wants to talk about SonarQube to submit a presentation. The CFP will close on Friday the 14th of August.

To submit a talk, simply send an email to marketing@sonarsource.com with the title, abstract and duration of the presentation. We are eagerly awaiting your proposal!

Categories: Open Source

JUC U.S. West News: Agenda is up

It seems a bit unreal, but the last JUC agenda for 2015 is now online. Jenkins User Conference U.S. West is the last JUC of the year, running September 2-3 in Santa Clara, CA. So, if you haven't attended a JUC yet this year, this is your chance!

Register with a friend to take advantage of the community's 2-for-1 deal and get two tickets for the price of one.

Which talk are you looking forward to most? Check out the agenda and tweet your choice to @jenkinsconf!

Categories: Open Source

JUC Europe slides and video are now available online

Slides and video from JUC Europe are now available online!

If you made it to London to attend this year's JUC Europe, I hope you enjoyed the conference, met plenty of community members and learned more about Jenkins. Now that the slides and video are up, you can revisit your favorite talks or "attend" the ones that you missed...all at your leisure.

If you were unable to attend JUC Europe, well, now you can! The slides and video are here so you can "attend" any time you want. If JUC LIVE seems more appealing to you, there is one date left in the 2015 Jenkins User Conference World Tour: JUC U.S. West is September 2-3 in Santa Clara, CA. Register here.

Categories: Open Source

Reinforcements for the Subversion Plugin

This is a guest post by Manuel Recena Soto (aka recena).

Users of the plugin know that it has undergone significant changes over the last two years.

Unfortunately, some of these changes resulted in regressions for some users that weren’t properly addressed in subsequent releases. Many users were therefore forced to keep using an older release of the plugin to keep their instances running.

To fix this difficult situation, I've decided to dedicate my spare time to improving the plugin and attempting to restore the stability that an essential plugin like this requires.

To do so, my colleague Steven Christou, other members of the community, and I have drawn up a plan.

In the coming weeks we will be focusing our efforts on:

  • Going through the Jira tickets
    • Checking whether they are duplicated
    • Checking whether they are still relevant
    • Asking for more information from the people who reported them
    • Establishing their priority
  • Reviewing pull requests
  • Investigating bug reports and trying to reproduce them
  • Fixing serious bugs
  • Refactoring the plugin to improve its maintainability

We’re planning to publish a new 2.5.x bugfix release once a fortnight. We are not considering new features or improvements for now. The priority must be to obtain a stable and reliable plugin, one that will allow us to pick things up again in the future with greater confidence and peace of mind.

Interested in helping? Just send us a message!

Categories: Open Source

Bay Area Jenkins Area Meet-up is looking for you

Uday posted on his blog yesterday that he is looking into organizing a regular Jenkins meet-up in the Silicon Valley Bay Area, dubbed the "Bay Area Jenkins Area Meetup (JAM)."

As a first step, he wants to hold a kick-off meeting to gather insights and opinions about what the topics could be and what people want to hear. I'm really looking forward to it as a way to build a local network, so I signed myself up as a speaker for the first meet-up.

If you are in the Peninsula, South Bay, East Bay, etc., please send him some encouragement by posting a comment, or better yet, come to the kick-off meeting.

Categories: Open Source

JUC U.S. East slides and video are now available online

Slides and video from JUC U.S. East are now available online!

If you attended the conference, THANK YOU, and I'm sure you had fun, learned a lot and met many people from the Jenkins community. Now you can revisit your favorite talks or "attend" the ones that you missed.

If you were unable to attend JUC U.S. East, you now have the slides and video so you can "attend" anyway! If you like what you see and would like to attend a JUC this year, there is ONE date left in the 2015 Jenkins User Conference World Tour: JUC U.S. West is September 2-3 in Santa Clara, CA. Register here.

Categories: Open Source

Integrating Kubernetes and Jenkins

Kubernetes is an open-source project by Google that provides a platform for managing Docker containers as a cluster. In their own words:

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions. Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery.
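
To make the "labels" and "pods" concepts a bit more concrete, here is a minimal, hypothetical sketch (mine, not from any of the plugins mentioned below) using the fabric8 Kubernetes Java client to list the pods selected by a label:

    import io.fabric8.kubernetes.api.model.Pod;
    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;

    public class ListLabelledPods {
        public static void main(String[] args) {
            // Connects using the usual kubeconfig / service-account auto-configuration.
            try (KubernetesClient client = new DefaultKubernetesClient()) {
                // Labels are arbitrary key/value pairs; selecting on them is how
                // Kubernetes groups the pods that make up one logical application.
                for (Pod pod : client.pods()
                                     .inNamespace("default")
                                     .withLabel("app", "my-web-app")   // hypothetical label
                                     .list()
                                     .getItems()) {
                    System.out.println(pod.getMetadata().getName());
                }
            }
        }
    }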

Google also offers Kubernetes-related services: Google Container Engine, a Kubernetes-powered platform for hosting and managing Docker containers, and Google Container Registry, a private Docker image registry.

Several new Jenkins plugins allow you to make use of Kubernetes and these services:

Watch Kohsuke demoing Jenkins/Kubernetes integration at OSCON earlier this week.

For a more in-depth look at how you can use Kubernetes with Jenkins, check out these posts on the CloudBees blog by Tracy Kennedy:

Categories: Open Source

SonarLint brings SonarQube rules to Visual Studio

Sonar - Fri, 07/24/2015 - 14:33

We are happy to announce the release of SonarLint for Visual Studio version 1.0. SonarLint is a Visual Studio 2015 extension that provides on-the-fly feedback to developers on any new bug or quality issue injected into C# code. The extension is based on and benefits from the .NET Compiler Platform (“Roslyn”) and its code analysis API to provide a fully-integrated user experience in Visual Studio 2015.

Features

There are lots of great rules in the tool. We won't list all 76 of the rules we've implemented so far, but here are a few that show what you can expect from the product:

  • Defensive programming is a good practice, but in some cases you shouldn't simply check whether an argument is null or not. For example, value types (such as structs) can never be null, so comparing an unconstrained generic type parameter to null might not make sense (S2955), because the comparison will always return false for structs.
  • Did you know that a static field of a generic class is not shared among instances of different close-constructed types (S2743)? Consider a generic class with a static DefaultInnerComparer field: how many instances of it do you think will be created? The field is static, so you might have guessed one, but actually there will be one instance for each type parameter used to instantiate the class.
  • We’ve been using SonarLint internally for a while now, and are running it against a few open source libraries too. We’ve already found bugs in both Roslyn and NuGet with the “Identical expressions used on both sides of a binary operator” rule (S1764).
  • Also, some cases of null pointer dereferencing can be detected as well (S1697); a rough illustration of both this and S1764 follows below.
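
The original post illustrated these rules with C# snippets that aren't reproduced here. As a stand-in, here is a hypothetical sketch of my own, written in Java rather than C# since the underlying pitfalls are language-agnostic, of the kind of code S1764 and S1697 flag:

    public class RuleIllustrations {

        // S1764: identical expressions on both sides of a binary operator.
        // The repeated operand is almost certainly a copy/paste mistake.
        static boolean inRange(int value, int min, int max) {
            return value >= min && value >= min;   // bug: second check should be value <= max
        }

        // S1697: the null check and the dereference are combined so that the
        // condition itself throws exactly when the reference is null.
        static boolean isBlank(String text) {
            return text == null && text.trim().isEmpty();   // bug: NPE when text is null
        }
    }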

This is just a small selection of the implemented rules. To find out more, go and check out the product.

How to get it?

SonarLint is packaged in two ways:

  • Visual Studio Extension
  • NuGet package

To install the Visual Studio extension, download the VSIX file from the Visual Studio Gallery. Optionally, you can download the complete source code and build the extension yourself. Oh, and you might have already realized: this product is open source (under the LGPL v3 license), so you can contribute if you'd like.

By the way, the SonarQube C# plugin internally uses the same code analyzers, so if you are already using the SonarQube platform for C# projects, you can now also see the same issues directly in the IDE.

What’s next?

In the following months we'll increase the number of supported rules, and as with all our SonarQube plugins, we are moving towards bug-detection rules. Alongside this effort, we're continuously adding code fixes to the Visual Studio extension, so that issues identified by the tool can be fixed automatically inside Visual Studio. In the longer run, we aim to bring the same analyzers we've implemented for C# to VB.NET as well. Updates are coming frequently, so keep an eye on the Visual Studio Extension Update window.

SonarLint for Visual Studio is one piece of a puzzle we've been working on since the beginning of the year: providing the .NET community with a tight, easy, and native integration of the SonarQube ecosystem into the Microsoft ALM suite. This 1.0 release of SonarLint is a good opportunity to once again warmly thank Jean-Marc Prieur, Duncan Pocklington, and Bogdan Gavril from Microsoft. They have been contributing to this effort daily, helping to make SonarQube a central piece of any .NET development environment.

For more information on the product go to http://vs.sonarlint.org or follow us on Twitter.

Categories: Open Source

Office hours are back

After several months of inactivity, office hours, the bi-weekly meetings where Jenkins users and developers get together to learn more about Jenkins, are back.

I'll host the first session next Wednesday at 11 am PDT. This session will be about Stapler, focusing on what Jenkins plugin authors need to know about it, e.g. request routing, form submission handling, or how Jelly/Groovy views work.
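
If you'd like a taste before the session, here is a small, hypothetical sketch (the class and URL names are mine, not from any real plugin) of two of the Stapler mechanics mentioned above: URL routing through getters, and form/action handling through "do" methods:

    import org.kohsuke.stapler.HttpResponse;
    import org.kohsuke.stapler.HttpResponses;
    import org.kohsuke.stapler.QueryParameter;

    // Hypothetical objects bound somewhere under the Jenkins URL space.
    public class Dashboard {

        // Request routing: Stapler walks URL segments through getters, so a
        // request to .../dashboard/widget/... is handed to the object returned
        // here, and .../dashboard/widget/ is rendered by its index.jelly view.
        public Widget getWidget() {
            return new Widget();
        }

        public static class Widget {

            // Form submission handling: "do" methods become action URLs, so a
            // form posting to .../dashboard/widget/enable lands here, with
            // @QueryParameter binding the submitted field to the argument.
            public HttpResponse doEnable(@QueryParameter String reason) {
                System.out.println("enable requested: " + reason);
                return HttpResponses.redirectToDot();   // back to the owning page
            }
        }
    }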

While this is going to be a developer-focused session, future sessions will also cover topics aimed at Jenkins users.

For general information on office hours, and how to join, see the wiki.

Categories: Open Source

Geneva SonarQube Conference – Sept. 23 & 24.

Sonar - Fri, 07/17/2015 - 10:13

We are very happy to announce the first SonarQube European User Conference! This two-day free event will take place in Geneva on September 23rd and 24th, right in the heart of the city at Côté Cour Côté Jardin – Rue de la Chapelle 8, 1207 Geneva.

Following the success of the conferences in the US and Paris earlier this year, the Geneva conference targets a broader and more international audience. It will provide a forum for users to meet, network, and share experiences and best practices around the SonarQube platform. Attendees will also be able to learn directly from the SonarSource Team about the platform roadmap and vision, as well as what was accomplished with 4.x and what is being accomplished with the 5.x series.

The program will highlight some of the exciting topics we’ve talked about these past weeks, like the Water Leak paradigm, the new SonarQube GitHub Plugin, the collaboration with Microsoft to integrate with the .NET ALM Suite, and the coverage of the security and vulnerability markets. We plan to make the conference as exciting as these recent topics and even more so!

There will also be more informal sessions, such as presentations and feedback from customers and partners on large-scale implementations of SonarQube, and hands-on workshops on technical aspects like platform monitoring, custom rules development…

We hope you will be able to make it, and are looking forward to meeting you there! To register, please click here.

Categories: Open Source

Juseppe, a custom update site for Jenkins

This is a guest post by Kirill Merkushev at Yandex. I met him at JUC Europe where he showed me the project he was working on: Juseppe. It looked really interesting, so I asked him to write this guest post.

When you write your first custom Jenkins plugin for internal use, it's easy enough to deploy it on one or maybe two Jenkins instances. You can save it on your local drive and upload the HPI file via the Jenkins Plugin Manager as needed. It's easy to do this for a few releases. But as your experience grows, the number of plugins and their releases grows as well. The plugins directory on your local drive soon looks like a garbage dump, and it's difficult to find the most recent version of any plugin. And if you have a lot of Jenkins instances, coordinating updates of your plugins may cause a lot of pain.

A similar situation arises when you contribute a much-needed patch to an existing plugin, but you don't have the time to wait until your pull request is merged and a new release is cut. Or you may need to patch a plugin in ways not suitable for distribution, and decide to effectively fork the plugin for use on your Jenkins instances. How are you going to do this?

A solution that avoids these problems is to set up your own update site to serve your private plugin builds. Juseppe allows you to do this quickly and easily.

What is Juseppe?

Juseppe is an acronym for Jenkins Update Site Embedded for Plugin Publishing Easily. Juseppe can help you set up a Jenkins update site in just a few minutes.

Features
  • Generates signed update-center.json and release-history.json
  • Works with HPI files directly (stored in one folder), no need to set up a Maven repository
  • Watches for changes in the plugin folder and regenerates JSON files when changes are detected
  • Serves generated files and plugin files with built-in Jetty web server
  • Can be run in a "generate-only" mode when you want to use a different web server for these files

How can I get Juseppe?

It ships as a Docker container, or can be built from source. Visit the GitHub project page to learn more. The complete user guide is available in the GitHub project wiki.

Categories: Open Source

SonarQube Swift Plugin Offers Mature Functionality for Young Language

Sonar - Fri, 07/10/2015 - 10:13

The Swift programming language is only a year old, but the SonarQube plugin for code written in this “green” language has been out for six months and already offers a mature set of features.

The SonarQube Swift plugin is easy to use. All you need to do is specify the name of the project and the folder containing the source files. As the analysis output, you get a wealth of metrics (lines of code, complexity, etc.), code duplication detection, and of course the most important and interesting thing: the issues raised by your code.

The Swift language introduces a lot of new features, some of which developers have long been waiting for (e.g. easy-to-use optional values), while others are a bit more controversial (operator overloading, custom operators). Love the new features or hate them, no developer is indifferent.

Because the Swift language is so new, our team has made an effort to create a pile of useful rules to help developers take proper advantage of its unique features. For instance, for those who are used to ending each switch case with a break, we have this rule: “break” should be the only statement in a “case”. And for those who are addicted to custom operators, we have rules limiting the risks of using this feature.

And of course, the Swift plugin also provides standard types of rules: naming convention rules for all the usual categories, a bunch of rules detecting overly complex code, and other useful bug detection rules.

The Swift language is developing rapidly, with new versions bringing new features and syntax released regularly. At the recent WWDC 2015, Swift 2.0 was announced: it introduces error-handling mechanisms, defer statements, guard statements, and a lot of other stuff, all of which is already supported by version 1.4 of the Swift plugin!

So if you are interested in being able to develop high-quality Swift code quickly, take a look at Nemo to see what the SonarQube Swift plugin offers, and then try it out for yourself.

Categories: Open Source

Jenkins User Event Scandinavia 2015

For the 4th consecutive year, the Jenkins CI community is gathering in Scandinavia. JUES inspires both current and soon-to-be Jenkins users to network and draw inspiration from peers and experts on best practices and the implementation of Continuous Integration, Continuous Delivery, and agile development with Jenkins.

As always, we’ll precede the JUES conference with a Code Camp on the day before. The Code Camp is a full-day community event where developers learn from fellow developers about coding and plugin enhancements, all of which is delivered back to the community.

We welcome you and other leading Jenkins developers, QA, DevOps, and operations personnel to this year's Scandinavian Jenkins CI festival, and we hope to keep supporting the growth of the Jenkins open source community.

REGISTER FOR JUES HERE

Categories: Open Source

GitHub pull request analysis helps fix the leak

Sonar - Wed, 07/08/2015 - 12:15

If you follow SonarSource, you are probably aware of a simple yet powerful paradigm that we’re using internally: the water leak concept. That is how we’ve been working on a daily basis at SonarSource for a couple of years now, using various features of SonarQube like “New Issues” notifications, the “Since previous version” differential period, and quality gates. These features allow us to make sure that no technical debt is introduced into new code. More recently, we have developed a brand new plugin to go even further in this direction: the SonarQube GitHub Plugin.

Analysing GitHub pull requests to detect new issues

At SonarSource, we use GitHub to manage our codebase. Every bug fix, improvement, and new feature is developed in a Git branch and managed through a pull request on GitHub. Each pull request must be reviewed by someone else on the team before it can be merged into the master branch. Previously, it was only after the merge and the next analysis (every master branch is analysed on our internal SonarQube instance several times a day) that SonarQube feedback was available, possibly leading to another pull-request review cycle. “Wouldn’t it be great,” we thought, “if the pull request could be reviewed not only by a teammate, but also by SonarQube itself before being merged?” That way, developers would have the opportunity to fix potential issues before they could be injected into the master branch (and reported on the SonarQube server).

This is what we achieved with the new SonarQube GitHub Plugin. Basically, every time a pull request is submitted by a member of the team, the continuous integration system launches a SonarQube preview analysis with the parameters that activate the GitHub plugin (a sketch of those parameters follows the list below), so that:

  1. When the SonarQube analysis starts, the GitHub plugin updates the status of the pull request to mention that there’s a pending analysis
  2. Then SonarQube executes all the required language plugins
  3. And at the end, the GitHub plugin:
    • adds an inline comment for each new issue,
    • adds a global comment with a summary of the analysis,
    • and updates the status of the pull request, setting it to “failed” if at least one new critical or blocker issue was found.
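
As a concrete reference, here is a minimal sketch of the kind of command such a CI job runs, assuming a Maven build and the analysis properties documented for the GitHub plugin (the repository name and the two environment variables are placeholders your CI system would provide):

    mvn sonar:sonar \
      -Dsonar.analysis.mode=preview \
      -Dsonar.github.repository=my-org/my-repo \
      -Dsonar.github.pullRequest=$PULL_REQUEST_NUMBER \
      -Dsonar.github.oauth=$GITHUB_OAUTH_TOKEN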

Here’s what such a pull request looks like (click to enlarge):

Pull Request Analysis with SonarQube GitHub Plugin

Thanks to the GitHub plugin, developers get quick feedback as a natural, integrated part of their normal workflow. When a GitHub analysis shows new issues, developers can choose to fix the issues and push a new commit – thus launching a new SonarQube analysis. But in the end, it is up to the developer whether or not to merge the branch into the master, whatever the status of the pull request after the analysis. The SonarQube GitHub plugin provides feedback, but the power remains where it belongs – in the hands of the developers.

What’s next?

Now that the integration with GitHub has proven to be really useful, we feel that building a similar plugin for Atlassian Stash would be valuable, and writing it should be quite straightforward.

Also, analysing pull requests on GitHub is a great step forward, because it gives early feedback on incoming technical debt. But obviously, developers would like to have this feedback even earlier: in their IDEs. This is why in the upcoming months, we will actively work on the Eclipse and IntelliJ plugins to make sure they allow developers to efficiently fix issues before commit and adopt the “water leak” approach wholesale. To achieve this target, we’ll update the plugins to run SonarQube analyses in the blink of an eye for instantaneous feedback on the code you are developing.

 

Categories: Open Source

Water Leak Changes the Game for Technical Debt Management

Sonar - Fri, 07/03/2015 - 09:07

A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well…

When I checked a few days later, I realized that actually there wasn’t much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year’s announcement of SonarQube 4.3. Well (again)… for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren’t much.

Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let’s go!

First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful, and has generally failed for many years now. What I call a traditional approach is one where code quality is periodically reviewed by a QA team or similar, typically just before release, resulting in findings the developers should act on before releasing. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:

  • The code review comes too late in the process, and no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
  • Developers typically push back because an external team makes recommendations on their code, not knowing the context of the project. And by the way the code is obsolete already
  • There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
  • What gets reviewed is the entire application before it goes to production and it is obviously not possible to apply the same criteria to all applications. A negotiation will happen for each project, which will drain all credibility from the process

All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.

For someone trying to improve quality with such an approach, it translates into something like: the total amount of our technical debt is depressing, can we have a budget to fix it? After asking “why is it wrong in the first place?”, the business might say yes. But then there’s another problem: how to fix technical debt without injecting functional regressions? This is really no fun…

At SonarSource, we think several parameters in this equation must be changed:

  • First and most importantly, the developers should own quality and be ultimately responsible for it
  • The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
  • The Quality Gate should be unified for all applications
  • The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team

Even changing those parameters, code review is still required, but I believe it can and should be more fun! How do we achieve this?

water leak

When you have a water leak at home, what do you do first? Plug the leak, or mop the floor? The answer is very simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless and that it is only a matter of time before the same amount of water is back on the floor.

So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find out that it has a lot of technical debt, generally the first thing we want to do is start mopping/remediating – either that or put together a remediation plan. Why is it that we don’t apply the simple logic we use at home to the way we manage our code quality? I don’t know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.

Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:

  • The Quality Gate can be run every day, and passing it is achievable. There is no surprise at release time
  • It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
  • There is a clear ownership of code quality
  • The criteria for go/no-go are consistent across applications, and are shared among teams. Indeed new code is new code, regardless of which application it is done in
  • The cost is insignificant because it is part of the development process

As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.

I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep making changes to it (SonarSource customers generally claim that 20% of their code base gets changed each year), the debt will naturally be reduced. And where it isn’t is where it does not need to be.

Categories: Open Source

New Wiki URL Requirement for Plugins

Let's say you're browsing the 'Available' tab in the Jenkins plugin manager for interesting-looking plugins. How do you learn more about them, preferably without installing them on your production instance? You click the plugin's name, which usually links to the plugin's wiki page, of course!

Unfortunately, it's possible for plugins to be published without a wiki page, or any other documentation aside from what's provided in the plugin itself. This is really unfortunate, as users rely on wiki pages and similar documentation to learn more about a plugin before installing or upgrading it, like its features, limitations, or recent changes. Additionally, plugin wiki pages have a special section at the top that provides an automatically generated technical overview of the plugin, such as dependencies to other plugins, the minimum compatible Jenkins version, a list of developers, and links to the source code repository and issue tracker component. Everyone learning about or using a plugin benefits from a plugin wiki page and luckily, almost all plugins have one!

To ensure that every plugin has at least a basic wiki page with some documentation, we decided to only publish plugins in the Jenkins update center that have and link to a wiki page. To keep the impact to a minimum, we're implementing this plan in several stages.

The first stage went live on June 1: All existing plugins that don't have a (valid) wiki link got a wiki link assigned by the update center (a so-called 'override'), either to an existing wiki page if there was one, or a generic "This plugin has no documentation" wiki page otherwise. This ensures that no currently existing plugins get dropped from the update center at this point. Of course, new plugins that don't provide a wiki URL and don't have an override URL will not show up at all.

The second stage will be enabled later this year: We're planning to remove all the overrides mentioned above. At this point, plugins may get removed from the update center if they still don't specify a wiki URL. Of course this isn't our goal, and we'll try to work with plugin authors to prevent this from happening.

So what can you do? Check the current overrides list to see whether the plugins you care about are affected, and if so, see the landing page in the wiki to learn what you can do. If you have any questions about this process not covered by the wiki, ask us on the Jenkins developers mailing list.

Categories: Open Source

All your bases belong to us part 2- Verifying Called

The Typemock Insider Blog - Thu, 06/18/2015 - 10:42

We saw in part 1 how to change the behavior of ‘hidden’ base methods. To verify that a base method was called, we have added a new API, OnBase(), in the coming Isolator 8.1. For example:

    public class BaseClass
    {
        public virtual int VirtualMethod(bool call)
        {
            return 1;
        }
    }

    public class DerivedClass : BaseClass
    {
        public override int VirtualMethod(bool call)
        […]


Categories: Open Source

JUC Speaker Blog Series: Martin Hobson, JUC U.S. East

I’ve been using Jenkins for some time now as the build server for the various projects that are assigned to our four-person software development team, but recently I had exposure to how things were done in a much larger team, and I came away with a better understanding of the kinds of demands that are placed on a build pipeline in these environments. It was quite an education – while the CI pipelines that I administer in our small team might require a handful of virtual machines in our corporate cloud, the pipeline in this team supported over one hundred developers and required several hundred VM instances at any given time.

When operating at this scale, efficiency does become important, as the Amazon cloud charges add up and become significant at this level. Using some relatively simple techniques, I was able to gain insight into what actually happened in the more complex build jobs and learned just how these VM instances were utilized. These build jobs configured over a dozen virtual machines each, and understanding the startup and execution flows was critical to making changes and improving efficiencies. I will be discussing how to instrument and analyze these complex builds in my Lightning Talk: "Visualizing VM Provisioning with Jenkins and Google Charts” and hope to see you all there!

This post is by Martin Hobson, Senior Software Developer at Agilex Technologies. If you have your ticket to JUC U.S. East, you can attend his lightning talk "Visualizing VM Provisioning with Jenkins and Google Charts" on Day 1.

JUC IS HERE! JUC U.S. East will begin with registration at 7 AM on Thursday, June 18. The two-day conference is sure to be a blast! If you have not registered, you can still get a ticket! Check out the agenda for JUC U.S. East here and find the link to register.



Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

Categories: Open Source

JUC Speaker Blog Series: Stephan Hochdörfer, JUC Europe

I am very much looking forward to the Jenkins User Conference in London, where I will present our insights on how to use Jenkins in a PHP-related environment. Having moved to Jenkins about 5 years ago, bitExpert has gained a lot of experience in running and managing a distributed Jenkins infrastructure. bitExpert builds custom applications for our clients, which means that we have to deal with different project infrastructures, e.g. different PHP versions. We heavily rely on the build nodes concept of Jenkins, which I will briefly outline in the session. Besides that, I will give some in-depth insights on how we use Jenkins on a daily basis for the "traditional" CI-related tasks (e.g. linting code, checking code style, running tests), as well as how Jenkins is used to power our integration tests. Last but not least, I will cover how Jenkins acts as a kind of backbone for our Satis server, which allows us to host the metadata of our company's private Composer packages. Throughout the talk I will point out which Jenkins plugins we use in the different contexts, to give you a good starting point if you are new to the Jenkins ecosystem.

This post is by Stephan Hochdoerfer, Head of Technology at bitExpert AG. If you have your ticket to JUC Europe, you can attend his talk "Jenkins for PHP Projects" on Day 2.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.



Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

Categories: Open Source

Quality Gates Work – If You Let Them

Sonar - Thu, 06/11/2015 - 21:20

Some people see rules – standards – requirements – as a way to hem in the unruly, limit bad behavior, and restrict rowdiness. But others see reasonable rules as a framework within which to excel, a scaffolding for striving, an armature upon which to build excellence.

Shortly after its inception, the C-Family plugin (C, C++ and Objective-C) had 90%+ test coverage. The developers were proud of that, but the Quality Gate requirement was only 80% on new code, so week after week, version by version, coverage slowly dropped. The project still passed the Quality Gate with flying colors each time, but coverage was moving in the wrong direction. In December, overall coverage dropped into the 80s with the version 3.3 release, 89.7% to be exact.

In January, the developers on the project decided they’d had enough, and asked for a different Quality Gate. A stricter one. They asked to be held to a standard of 90% coverage on new code.

Generally at SonarSource, we advocate holding everyone to the same standards – no special Quality Profiles for legacy projects, for instance. But this request was swiftly granted, and version 3.4, released in February, had 90.1% coverage overall, not just on new code. Today the project’s overall coverage stands at 92.7%.

Language Team Technical Lead Evgeny Mandrikov said the new standard didn’t just inspire the team to write new tests. The need to make code more testable “motivated us to refactor APIs, and led to a decrease of technical debt. Now coverage goes constantly up.”

Not only that, but since the team had made the request publicly, others quickly jumped on the bandwagon. Six language plugins are now assigned to that quality gate. The standard is set for coverage on new code, but most of the projects meet it for overall coverage, and the ones that don’t are working on it.

What I’ve seen in my career is that good developers hold themselves to high standards, and believe that it’s reasonable for their managers to do the same. Quality Gates allow us to set – and meet – those standards publicly. Quality Gates and quality statistics confer bragging rights, and set up healthy competition among teams.

Looking back, I can’t think of a better way for the C-Family team to have kicked off the new year.

Categories: Open Source