With the release of SonarQube 4.0, we now have three different paradigms for SonarQube analysis. There’s full analysis, which updates the central database and provides organizational visibility of code quality. There’s preview analysis, which tells you whether the code in question is good enough to move forward with (e.g. merging it into the Git master branch). And now that SonarQube has the ability to limit preview analysis to only changed files, there’s also incremental preview analysis, or “incremental analysis”.
Let’s talk about when you would use each one. I’ll start with the new guy: incremental analysis.
The use case for this is on the developer’s machine, before code is checked in. If you’ve only changed one or two files, waiting for the preview analysis (previously called “dry run” in some contexts) seems burdensome. I’ve been there myself on my day job, with some medium-large projects. Even though I’ve got a fairly fast machine, preview analysis takes long enough to be irritating, but not long enough for a bathroom break and a chat. My colleagues on larger projects are closer to the bathroom break range, and since they can make many small changes in a day, they’ve complained that the time to run a pre-commit preview analysis is a productivity drag. (So guess what doesn’t happen.)
Fortunately, the developers at SonarSource have heard those complaints. (Hmmm… They maintain a large project too…) Now, you can restrict a preview analysis to just the files you’ve made your small changes in, and you have your results almost immediately. The mechanism is a new analysis property that was introduced in 4.0: sonar.analysis.mode. It has three valid values:
- analysis – this is the default. It tells SonarQube to perform a full, store-it-in-the-database analysis.
- preview – this was previously known as the dryRun mode. It performs a full analysis, but doesn’t store the results in the database.
- incremental – this is the new option. It performs a preview analysis on only the changed files, allowing impatient developers to perform a pre-commit check of their changes without sinking a lot of time into the endeavor.
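As a concrete sketch, a pre-commit incremental check from the project root might look like this (assuming the SonarQube Runner is on your PATH and the project already has a sonar-project.properties file; the property is the only part specific to this mode):

```shell
# Run an incremental preview analysis: only files changed since the
# last analysis are inspected, and nothing is written to the database.
sonar-runner -Dsonar.analysis.mode=incremental
```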
Incremental mode is already available if you’re using Issues Report, and it’s the new default in the Eclipse plugin.

Preview
So where does that leave preview analysis? Now that the developers’ desktop needs are satisfied, is there still a need for a preview analysis of the whole codebase? Absolutely. But the use case moves to the Continuous Integration server. Similar to the way unit testing helps you ensure your new code hasn’t broken what was already working, a preview analysis helps you ensure your new code hasn’t caused a regression in code quality levels (by introducing new issues, new duplications, and so on.)
To add a preview analysis to a CI build, you’ll use most of the same properties you need for a full analysis. You won’t need the database connection credentials – after all, you won’t be updating the database – and you’ll need to add in a couple of other properties, the most notable of which is sonar.analysis.mode=preview. It’s this property that distinguishes an analysis from a preview analysis at a functional level.
You’ll also need the Build Breaker Plugin installed on your SonarQube instance. It will mark the build failed if new alerts are raised during analysis (you can configure the behavior on a project-by-project basis). Install Build Breaker, set up alerts in the relevant quality profiles and you’re ready to go. Those alerts can be as simple as Blocker Issues > 0, or more complex, like Coverage on new code < 90%.
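To make that concrete, here’s a sketch of what the CI job’s analysis configuration might look like (project key, name, and paths are placeholders; everything except sonar.analysis.mode is a standard analysis property):

```properties
# sonar-project.properties for the CI preview job
sonar.projectKey=org.example:myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src

# No database credentials needed: a preview analysis never writes
# to the SonarQube database.
sonar.analysis.mode=preview
```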
When the build fails, your continuous integration server will send out the usual failure notice to the developer(s) involved.
The only downside to this setup is that your build server admin will need to maintain extra jobs. In addition to the Continuous-Integration-with-Preview-Analysis job, he’ll also need to set up a separate, nightly build to run the main analysis. You know, the one that actually updates the SonarQube database and provides organizational visibility of code quality.
As reported in various places, there was an incident in early November in which commits in our Git repositories were temporarily misplaced by accident. By the middle of the following week we were able to resurrect all the commits, and things are back to normal now.
Because there has been a lot of confusion and misunderstanding in the commentary, we wrote this post to clarify exactly what happened and what we are doing to prevent a recurrence.

Timeline
In the early morning of Nov 10th 2013, one of the 680 Jenkins developers mistakenly launched Gerrit with a partially misconfigured Gerrit replication plugin, pointing Gerrit at a local directory containing 186 Git repositories cloned from the GitHub Jenkins organization. These repositories had been checked out about two months earlier and had not been kept up to date. The Gerrit replication plugin then tried to “replicate” those stale local repositories back to GitHub, which it considers mirrors, by doing the equivalent of “git push --force” instead of a regular push. Unfortunately, a forced push is the plugin’s default, which is the opposite of what Git normally does. The replication also happens automatically, which is why this one mistake impacted so many repositories in such a short time.
As a result, the affected repositories had their branch heads rewound to point to older commits, and in effect the newer commits were misplaced by the bad git-push.
When we say commits were “misplaced”, this is an interesting limbo state that’s worth explaining for people who don’t use Git. A Git commit is identified by its SHA1 hash, and these objects never get overwritten, so the misplaced commits were actually still on the server, fully intact. What was gone was the pointer that associates a human-readable branch name (such as “rc”) with the latest commit on the branch.
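In Git terms, restoring such a branch is just a matter of making the branch name point at the right commit again. For example (repository remote and SHA are placeholders):

```shell
# From an up-to-date local clone, a normal push restores the pointer:
git push origin master

# Or, if you know the SHA1 of the correct head commit, move the
# branch pointer to it explicitly:
git push origin abc1234:refs/heads/master
```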
By Nov 10th 12:54pm GMT, multiple developers had noticed this, and within several hours we figured out what had happened. From the Gerrit log files, and with the help of GitHub technical support, we were able to identify all the affected repositories, and later an independent script was written to verify the accuracy of this list.
Some of the Jenkins developers were closely following this development, and were able to restore branches to point to correct commits by simply pushing their up-to-date local workspaces back into the official repositories. Git makes it very easy to do this, and most of the popular plugins affected were restored in this manner within 24 hours.
At the same time, we needed to systematically restore all the affected repositories, to make sure we had not lost anything. For this we contacted GitHub and asked for their help, and they were able to restore most branch heads to their correct positions. We also independently developed a script to find out exactly which commits branch heads should be pointing to, based on the GitHub events API, which exposes activity on Git repositories. This script found a dozen or so branches that had fallen through the cracks of GitHub support, and we restored those manually.

Mitigation in the future
The level of support we got from GitHub and our ability to independently verify lost commits and subsequently restore them made us feel good about GitHub, and we have gained confidence in our ability to recover from future incidents.
That said, what happened was a serious disruption, and it’s clear we’d better prepare ourselves both to reduce the chance of accidents like this and to increase our ability to recover. To that end, we hope GitHub will expose a configuration option to disable forced ref updates; they already do this on GitHub Enterprise, after all. Dariusz pointed out that CollabNet takes this one step further and offers the ability to track deleted branches, tags, and forced updates. Something like that would have made the recovery a lot easier.
We are going to make two improvements to our process so that we can recover from this kind of problem more easily in the future.
Firstly, we’ll develop a script that continuously records the ref update events across the GitHub Jenkins organization. This will accurately track which branch or tag was created, updated, or deleted, and by whom. In case of an incident like this one, we can use this log to roll back the problematic pushes more systematically.
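The core of such a recorder could be quite small. Here’s a hedged sketch in Python: the event shapes below mirror GitHub’s public events API (PushEvent and DeleteEvent), but treat the exact field names, and the idea of polling the organization’s /events endpoint, as assumptions to verify against the live API:

```python
# Hedged sketch of a ref-update recorder for a GitHub organization.
# Event field names are assumptions to verify against the GitHub API.

def record_ref_updates(events, log=None):
    """Fold a stream of GitHub events into a {(repo, ref): head_sha} map.

    A real script would poll the organization's /events endpoint on a
    schedule and persist the log somewhere durable.
    """
    log = {} if log is None else log
    for ev in events:
        repo = ev["repo"]["name"]
        if ev["type"] == "PushEvent":
            # payload.ref is the full ref name; payload.head the new SHA
            log[(repo, ev["payload"]["ref"])] = ev["payload"]["head"]
        elif ev["type"] == "DeleteEvent":
            # a deleted branch no longer has a head to record
            log.pop((repo, "refs/heads/" + ev["payload"]["ref"]), None)
    return log

# Two pushes to the same branch: the log keeps the latest head.
events = [
    {"type": "PushEvent", "repo": {"name": "jenkinsci/jenkins"},
     "payload": {"ref": "refs/heads/master", "head": "abc123"}},
    {"type": "PushEvent", "repo": {"name": "jenkinsci/jenkins"},
     "payload": {"ref": "refs/heads/master", "head": "def456"}},
]
heads = record_ref_updates(events)
print(heads[("jenkinsci/jenkins", "refs/heads/master")])  # prints: def456
```

With a log like this in hand, rolling back a bad push is a lookup rather than an archaeology project.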
Secondly, we’ll allow people to control access to individual Git repositories, as opposed to giving them all-or-nothing access to the entire array of plugin repositories.
The Jenkins developers decided to continue the current open commit policy despite the incident, to preserve our culture, which helped us navigate this incident without a single argument or flame war.

FAQ

Does everyone in the organization have full commit privileges to all the repositories?
Yes, with some exceptions. To encourage co-maintenance of plugins by different people, and to reduce the overhead of adding and removing people from our 1100+ repositories, we use one team that gives access to most repositories, and put committers in this team.

I prevent forced push in my Git repositories. I’m safe from this trouble, right?
No, unfortunately something like this can still happen to you, since branches can also be deleted accidentally. If you want to learn from our mistakes, you should definitely enable the server-side reflog to track ref update activity: “git config core.logAllRefUpdates true” on the server will enable this.

Can’t you just have people with up-to-date copies push their repos and fix it?
This is indeed how some of the repositories got fixed right away, where specific individuals were clearly in charge and known to have up-to-date local repositories. But this by itself was not sufficient for an incident of this magnitude. Some repositories are co-maintained by multiple people, and none of them could be certain they were the last one to push a change. Many plugin developers just scratch their own itch and do not closely monitor the Jenkins dev list. We needed to systematically ensure that all the commits were intact across all the branches in all the affected repositories.

Can’t you just roll back the problematic change?
Most mistakes in Git can be rolled back, but unfortunately a ref update is the one operation in Git that isn’t version controlled. As such, Git has no general-purpose command to roll back an arbitrary push. The closest equivalent is the reflog, the audit trail Git keeps for resolving such cases, but that requires direct access to the server, which is not available on GitHub. But yes, this problem would not have happened if we were hosting our own Git repositories, or using Subversion, for example.
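For those hosting their own repositories, enabling and later consulting the server-side reflog looks like this:

```shell
# In the bare repository on the server, record every ref update:
git config core.logAllRefUpdates true

# Later, inspect the history of a branch pointer to see where it
# was before a bad push or deletion:
git reflog show refs/heads/master
```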
This is a guest post from Alyssa Tong, who drives JUC organizations around the world.
If you missed JUC Palo Alto on Oct 23, 2013 the videos are now available.
We are off to planning JUC 2014. It is hard to believe this will be the 4th annual JUC in the Bay Area. The growth in the Jenkins community since the first JUC is astounding.
Every year we are in search of a larger venue to accommodate the larger crowd. For 2014, the challenge of finding a venue for a capacity of 500+ attendees at a low cost will prove even more daunting. We would love to hear your suggestions for low cost venues (in the Bay Area) so that we may continue to keep entry cost low while providing convenience and the highest level of Jenkins education to attendees. Please send suggestion(s) to firstname.lastname@example.org
We are proud to launch the call for volunteers to join the JUC organizing committee (OC). If you are interested in shaping the 4th edition of this great event, please send email to email@example.com
We encourage you to share this blog within your network in case other people would be interested in joining the JUC OC or have ideas for a great JUC 2014 location.
The team is proud to announce the release of SonarQube 4.0. It includes many exciting new features:
- Computation of technical debt based on the SQALE model
- Issue exclusion/inclusion, code coverage exclusion
- Project provisioning
- Incremental analysis
- End of support of WAR mode
- Native support of SSL
With version 4.0, we have deprecated the Technical Debt plugin, and moved the finer-grained basic technical debt computations from the commercial SQALE plugin into SonarQube’s core. This brings several UI changes. First, the primary Issue-tracking widget changes, replacing the compliance score with the total cost in days to address all issues:
It offers a breakdown of exactly where your technical debt lies, and the pyramid is conceptual rather than literal. The most basic, fundamental, fix-this-first items are at the bottom. The light blue band indicates the technical debt in the current area, with the dark blue showing the accumulated total debt as you move up the pyramid.

Issue exclusion/inclusion, code coverage exclusion
With version 4.0, we also deprecated the Switch Off Violations plugin and consolidated all exclusions into SonarQube core:
In addition to the move, we also enhanced issue exclusion/inclusion and added code coverage exclusion.

Project Provisioning
Because there are often settings you’d like to configure in advance of a project’s first analysis, you can now create, or provision, a project in SonarQube without analyzing it. It won’t have any metrics yet, but it will be fully configurable:
There are now three different analysis modes: full (what you’ve been doing all along), preview (previously called “dryRun”), and incremental, which limits analysis to only changed files. The use case for this is on the developer’s machine, and it is the default in the newest version of the Eclipse plugin.

End of support of WAR mode
Building SonarQube as a WAR is no longer supported. This change was made to allow us to better focus our efforts on bringing value to the end user.

Native support of SSL
For the first time, SonarQube now natively supports HTTPS! See the docs for more.

That’s all, Folks!
In the hope of streamlining account-creation e-mail delivery and mailing list moderation, I have deployed SPF and DKIM over the weekend for e-mails coming out of @jenkins-ci.org, which includes account applications, Confluence, and JIRA.
I've also used this opportunity to switch the sender of JIRA notifications back to firstname.lastname@example.org. It was originally this way, then changed to email@example.com when someone complained (on what grounds, I no longer remember).
To the degree that I have tested the setup, it is working correctly, but if you notice anything strange, please let me know.
It’s official… “SonarQube in Action” is available in stores, thanks to the efforts of two community members, fanatics of software quality and advocates of SonarQube and its continuous inspection model. The book’s objective is to provide insight on how to effectively use SonarQube in a quality management process, and it systematically explores the Seven Axes of Quality (design, duplications, comments, unit tests, complexity, potential bugs, and coding rules). It targets software development professionals, including engineers, QA and testers, as well as project/product managers and team leaders.
Interview with the authors, G. Ann Campbell and Patroklos P. Papapetrou:
What is your background?
Ann: I’m an English major, a former reporter, a self-taught coder, and a Computer Science graduate. After graduating with my English degree, I fell into reporting, and eventually transitioned from the newsroom to the web side to support and integration to coding the C back-end. Once I realized that programming was what I wanted to do for a living, I went back to school to formalize my education. I did it partly because when you’re self-taught, you don’t know what you don’t know, and partly so there would never be any question that I was qualified. I’ve been at it for 15 years now. I learned on Perl (the Llama book!), and my first compiled language was C, but these days I usually work in Java. I miss the bare elegance of C, but I do like Java’s String functions.
How and for how long have you been using SonarQube?
Patroklos: I took my first baby steps with SonarQube during the first months of 2010. I started with version 1.12 and then upgraded to v2.0. It’s almost 4 years! Woaoh! I hadn’t thought about that before! In the beginning I was trying to familiarize myself with the meaning of the metrics and how they affect software quality. Very quickly SonarQube (Sonar at the time) became the first and last thing I looked at on my screen. It is fully integrated into our development process with automated nightly builds (thank you Jenkins), and recently we started doing code reviews using SonarQube.
Are you part of the community?
Ann: Are you kidding? I’ve been a community gadfly since I started using SonarQube!
Patroklos: Yes. I’d consider myself an active member of both the user and dev mailing lists. Apart from helping people get the best from SonarQube, I’ve contributed to several plugins such as Widget Lab, SCM Stats, Thucydides, etc.
Why did you decide to write a book about SonarQube?
Ann: Because I could! When Patroklos told me he was going to write SonarQube in Action and needed a co-author, I jumped at the chance! It was a childhood ambition of mine to be an author, but some part of me also hoped I could make a lasting contribution to the larger community by sharing my experience and insights.
Patroklos: Well, to be honest I didn’t have any intention to write a book (about SonarQube). One day another publisher (not Manning) approached me and asked me if I wanted to write one. I was flattered, but didn’t have the time to do it justice, so I turned it down. But although I didn’t write that book, I didn’t forget the idea. When my schedule cleared, I approached Manning about writing SonarQube in Action. A few short months after that, Ann and I were on our way.
Who is the book targeting and what’s the audience?
Ann: Ideally, the book would be read by every member of a development team: project managers, testers, coders, etc. The first part of the book is about the metrics – what they are, what they mean, and why you care. Part 2 is about organizing your effort. It tries to answer the “now what?” after your first analysis. Part 3 is about how you can configure SonarQube to get the most out of it, with the final chapter outlining plugin development for those who want to go even further.
Patroklos: SonarQube is a great (personally I believe it’s the best) tool for managing source code quality. However, it’s not always easy to get the best out of it, especially if you’re not familiar enough with quality metrics. “SonarQube in Action” fills that gap and explains how SonarQube can make a difference for development teams. Through real-life examples, it discusses the seven quality axes and all the quality management features SonarQube offers. It’s not a user or administration guide; it provides the steps to adopt Continuous Inspection, to understand the importance of the core quality metrics, and to see how they affect source code quality.
Anything else to share with your readers?
Ann: I’ve seen some real coding horrors in my day: variables named things like please_work, miles-long strings of spaghetti, bone-headed mistakes you can’t believe someone actually had the gall to check in. I could go on and on. Probably most folks reading this could too. As a coder, you know when you’re looking at bad code, but when that code came from one of the most senior developers in the company, no one wants to believe you. SonarQube takes what has been subjective (“Oh my God! He’s making it throw an NPE on purpose!”), and makes it objective. It removes personalities and biases and lays out the facts for everyone to see.
Whether or not you buy the book, you should be using SonarQube. I think – I hope – there will come a day when quality scores are regularly included in software requirements and specs. As a consumer, I need quality software. Deserve is an over-used word these days, but I think it’s fair to say that when you put your trust in a software vendor – in an application – you deserve to have it rewarded with a quality offering. I think the users of my software deserve that. And SonarQube helps me deliver.
Patroklos: Every chapter is organized in such a way that you can read it separately from the rest. We do suggest that you read chapter 1, especially if you’re not an experienced SonarQube user, because it’s an overview of SonarQube and introduces some basic ideas you may need when reading the rest of the book. If you decide to read the book sequentially, you’ll find that each chapter is connected to the previous one, and the chapters flow smoothly, without gaps. But again, you can skip any chapter and come back later if you want to.
We did our best to ensure that this book will become a reference for you whenever you need to learn or remember anything about SonarQube or its computed metrics.
In this Open Space Technology style event, we went over war stories from users. Just to show the degree of seriousness, some of those people run 1500+ slaves, and others run Jenkins in HA configuration with a data center fail over! We then picked various topics in the afternoon and discussed what people would like to see to make Jenkins scale further. Slides and raw notes from this meeting are available here.
The event allowed me to rethink and revisit what I thought we should do in coming days in the area of scalability.
The event was far more popular than we originally anticipated, and we had to turn away many folks. So I'm going to do a webinar to go over what we did and what we talked about. If you are interested in this area and want to see what's being considered and provide your thoughts, please join us on Nov 19th at 10am PT.
Here is a recap of our most recent Selenium Hangout where we answered a grab bag of questions ranging from how to use Selenium within your existing workflow down to nitty-gritty details around performance and deprecated functions.
Be sure to tune into our Twitter feed to find out details about our next Hangout.
And if your question didn’t get answered, we encourage you to hop on IRC and ask it there. Not sure what that means or how to do it? Then read this.
David Burns (@AutomatedTester)
Dave Haeffner (@TourDeDave)
Jim Evans (@jimevansmusic)
Kevin Menard (@nirvdrum)
00:00 – 05:50
Preamble and Introductions
05:51 – 18:09
Question 1 – For a team getting started with Selenium what are some typical workflows for how product code is built, and Selenium tests built, as well as for when product code is modified and Selenium test modified?
18:10 – 34:15
Question 2 – Recommendations for testing responsive design?
34:15 – 37:44
Question 3 – Was VerifyText removed?
37:45 – 46:20
Question 4 – Why is IE9 slow and hard to use, and what are some recommendations for alleviating this?
46:21 – 50:11
Question 5 – ChromeDriver2 seems less robust than its predecessor; thoughts on this?
50:12 – 53:39
Question 6 – The Selenium documentation is out of date; how can I contribute a fix for this?
53:40 – 54:31
How to help out with the Selenium Conference?
One of the things I love about SonarQube is that it gives you tools to tackle all aspects of your technical debt. I am not just talking here about the Seven Axes of Quality / Seven Deadly Developer Sins. No, what I’m talking about is quality along the axis of time.
Of course SonarQube shows you what’s wrong in the present – from the macro level to the micro. It also reaches back to the past to show you where you’ve come from; start analyzing a new project on day one and you can get a great perspective on how its technical debt has grown (or not) along with its size. But that’s not what I want to talk about today. Today, I want to talk about the future, because SonarQube’s issues workflow can help you manage today’s debt into the future.
By default, there are seven different things you can do to an issue (other than fixing it in the code!): Comment, Assign, Plan, Confirm, Change Severity, Resolve, and False Positive. Plugins may add more options, such as Link to JIRA.
In my mind, these actions break out into four different categories, which I’ll talk about in what I consider their logical order. First up is the “technical review” category.

Technical Review
Confirm, False Positive, and Change Severity fall into this category, which presumes an initial review of an issue to verify its validity. Assume it’s time to review the technical debt added in the last review period – whether that’s a day, a week, or an entire sprint. You go through each new issue and do one of three things:
- Confirm – By confirming an issue, you’re basically saying “Yep, that’s a problem.”
- False positive – Looking at the issue in context, you realize that for whatever reason, this issue isn’t actually an issue, erm… a “problem.” So you mark it False Positive and move on. It will disappear from your issue counts and drilldown after the next analysis.
- Change severity – This is the middle ground between the first two options. Yes, it’s a problem, but it’s not as bad a problem as the rule’s default severity makes it out to be. Or perhaps it’s actually far worse. Either way, you adjust the severity of the issue to bring it in line with what you feel it deserves. The marker in the drilldown will change to show the new severity immediately, but the change won’t be reflected in your issue counts until after the next analysis.
Once issues have been through technical review, it’s time to decide how you’re going to deal with them. You’ve got up to three choices here, and while the technical review options are mutually exclusive (well, mostly), you may find yourself using all three of these on the same issue:
- Assign – Assign the issue to yourself or a teammate for immediate handling. The assignee will receive email notification of the assignment if they’ve signed up for notifications, and the assignment will show up everywhere the issue is displayed, as well as in certain widgets.
- Plan – Some issues will need immediate action, but others you might want to put off. The Action Plan functionality lets you group issues into sets, optionally assign dates, and track set resolution. Once you’ve created an action plan, the “Plan” option on an issue lets you put the issue into the set.
- Link to JIRA – Assuming you’ve installed the JIRA plugin, this option allows you to create a JIRA ticket for an issue. The URL to the JIRA ticket will be added to the issue and a link to the issue will be added to the JIRA ticket. After that though, there’s no relationship between the two. Updating the JIRA ticket won’t touch the issue and vice versa.
There’s only one option under the General category: comment. At any time during the lifecycle of an issue, you can log a comment on it. Comments are displayed in the issue detail in a running log. You have the ability to edit or delete the comments you made.
If you’ve been doing the math, you already know that there’s only one option left: Resolve. Use this option to signal that you think you’ve fixed an open issue. If you’re right, the next analysis will move it to closed status. If you’re wrong, its status will go to re-opened.
So that’s it. That’s how SonarQube lets you manage today’s issues into the future: by helping you vet them, organize what to fix now and what to schedule for later, and track them as your Plan comes together.
(This is a guest post by Alyssa Tong, the lead coordinator of Jenkins User Conference)
Our 3rd annual Jenkins User Conference in the Bay Area, being held next Wednesday in Palo Alto, is fully booked to capacity, and we couldn’t be more excited for this event! It’s going to be an amazing day of learning, talking to technology experts, networking with other Jenkins users, seeing cool demos, and finding out how you can contribute to the Jenkins open source projects.
This event is being held at the Oshman Jewish Community Center, and registration begins at 8am. There will be breakfast and plenty of coffee to get you caffeinated. The welcome announcement will begin sharply at 9am, with the keynote address following shortly after. We’re so excited to have thirteen sponsors investing in and supporting the Jenkins community in the continuous integration space.
New this year, there will be BoF sessions, so be sure to sign up for your preferred discussion at check-in. Or suggest a topic by leaving it in the comments section below. Let us know which Jenkins topics are near and dear to your heart.
For those who missed out on purchasing your ticket or are unable to attend, we are happy to offer the live stream of Track 1. You can choose to watch the entire track or just specific session(s). Either way don’t forget to chat and tweet. We will also tweet live from the conference so you can follow along that way as well. Follow @jenkinsconf for the latest updates.
Thank you to everyone for making this sold-out event possible.
Can’t wait to see everyone on Wednesday!
(This is a guest post from Gareth Bowles, a Senior Software Engineer at Netflix.)
Jenkins has been a central part of the Netflix build and deploy infrastructure for several years now, and we've been attending and speaking at JUC since it started in 2011. It's a great opportunity to meet people who are as passionate about build, test, and deployment automation as we are - although as Kohsuke said last year, having all those folks in one place could be dangerous if there's an earthquake!
CloudBees and the JUC Organizing Committee have put another great program together this year. We'll be doing two talks. Justin Ryan and Curt Patrick will present "Configuration as Code: Adoption of the Job DSL Plugin at Netflix", describing how we're shifting our users from manual job configuration via the UI, to defining their jobs as Groovy code using the Job DSL plugin. Justin and Curt will describe how Netflix development teams can now create and maintain complex sets of jobs for their projects with the bare minimum of coding.
In my lightning talk "Managing Jenkins with Jenkins", I'll go over how we use Jenkins' system Groovy scripts to maintain and monitor our Jenkins masters at a scale that couldn't be achieved with manual processes, and without the overhead of writing custom plugins.
As usual, there will be a whole crew of Netflix engineers at JUC this year. If you're interested in working on build and deployment at Netflix scale, find one of us (we'll all be wearing Netflix gear) to learn more - we're hiring!