
Water Leak Changes the Game for Technical Debt Management

Sonar - Fri, 07/03/2015 - 09:07

A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well…

When I checked a few days later, I realized that actually there wasn’t much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year’s announcement of SonarQube 4.3. Well (again)… for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren’t much.

Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let’s go!

First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful and has generally failed for many years now. What I call a traditional approach is one where code quality is periodically reviewed by a QA team or similar, typically just before release, producing findings the developers are expected to act on before shipping. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:

  • The code review comes too late in the process, and no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
  • Developers typically push back because an external team that does not know the context of the project is making recommendations on their code. And by the time the review happens, the code is already obsolete
  • There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
  • What gets reviewed is the entire application before it goes to production and it is obviously not possible to apply the same criteria to all applications. A negotiation will happen for each project, which will drain all credibility from the process

All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.

For someone trying to improve quality with such an approach, it translates into something like: the total amount of our technical debt is depressing, can we have a budget to fix it? After asking “why is it wrong in the first place?”, the business might say yes. But then there’s another problem: how to fix technical debt without injecting functional regressions? This is really no fun…

At SonarSource, we think several parameters in this equation must be changed:

  • First and most importantly, the developers should own quality and be ultimately responsible for it
  • The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
  • The Quality Gate should be unified for all applications
  • The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team

Even with those parameters changed, code review is still required, but I believe it can and should be more fun! How do we achieve this?

(Image: a water leak)

When you have a water leak at home, what do you do first? Plug the leak, or mop the floor? The answer is simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless, and that it is only a matter of time before the same amount of water is back on the floor.

So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find out that it has a lot of technical debt, generally the first thing we want to do is start mopping/remediating – either that or put together a remediation plan. Why is it that we don’t apply the simple logic we use at home to the way we manage our code quality? I don’t know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.

Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:

  • The Quality Gate can be run every day, and passing it is achievable. There is no surprise at release time
  • It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
  • There is a clear ownership of code quality
  • The criteria for go/no-go are consistent across applications, and are shared among teams. Indeed, new code is new code, regardless of which application it is written in
  • The cost is insignificant because it is part of the development process
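
To make the mechanics concrete, here is a minimal sketch of the leak-period idea in Groovy. It is not SonarQube's implementation, just the logic in miniature: record the issues present at the last release, and gate only on what has appeared since (the issue keys and file names below are invented):

    // Sketch only: fail the build on issues introduced since the last release.
    def issuesAtLastRelease = [
        'squid:S106@src/Foo.java:12',
        'squid:S1481@src/Bar.java:40'
    ] as Set

    // Today's analysis finds the old debt plus one newly injected issue.
    def issuesNow = issuesAtLastRelease + ['squid:S2095@src/Baz.java:7']

    // The "leak" is the difference; legacy debt stays visible but non-blocking.
    def leak = issuesNow - issuesAtLastRelease

    if (leak) {
        leak.each { println "New issue on new code: ${it}" }
        throw new IllegalStateException('Quality Gate failed: plug the leak first')
    }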

As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.

I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep making changes to it (SonarSource customers generally report that 20% of their code base changes each year), the debt will naturally be reduced. And where it isn’t is where it does not need to be.


New Wiki URL Requirement for Plugins

Let's say you're browsing the 'Available' tab in the Jenkins plugin manager for interesting-looking plugins. How do you learn more about them, preferably without installing them on your production instance? You click the plugin's name, which usually links to the plugin's wiki page, of course!

Unfortunately, it's possible for plugins to be published without a wiki page, or any other documentation aside from what's provided in the plugin itself. This is really unfortunate, as users rely on wiki pages and similar documentation to learn about a plugin's features, limitations, and recent changes before installing or upgrading it. Additionally, plugin wiki pages have a special section at the top that provides an automatically generated technical overview of the plugin, such as its dependencies on other plugins, the minimum compatible Jenkins version, a list of developers, and links to the source code repository and issue tracker component. Everyone learning about or using a plugin benefits from a plugin wiki page, and luckily, almost all plugins have one!

To ensure that every plugin has at least a basic wiki page with some documentation, we decided to only publish plugins in the Jenkins update center that have a wiki page and link to it. To keep the impact to a minimum, we're implementing this plan in several stages.

The first stage went live on June 1: All existing plugins that don't have a (valid) wiki link got a wiki link assigned by the update center (a so-called 'override'), either to an existing wiki page if there was one, or a generic "This plugin has no documentation" wiki page otherwise. This ensures that no currently existing plugins get dropped from the update center at this point. Of course, new plugins that don't provide a wiki URL and don't have an override URL will not show up at all.

The second stage will be enabled later this year: We're planning to remove all the overrides mentioned above. At this point, plugins may get removed from the update center if they still don't specify a wiki URL. Of course this isn't our goal, and we'll try to work with plugin authors to prevent this from happening.

So what can you do? Check the current overrides list to see whether the plugins you care about are affected, and if so, see the landing page in the wiki to learn what you can do. If you have any questions about this process not covered by the wiki, ask us on the Jenkins developers mailing list.
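
If you'd rather check from a script than browse the overrides list by hand, something along these lines should work in Groovy. It assumes the current update-center.json layout (a JSONP wrapper around JSON, and a wiki field on each plugin entry), so treat it as a sketch rather than a supported interface:

    import groovy.json.JsonSlurper

    // The update center metadata is JSON wrapped in a JSONP callback.
    def raw = new URL('https://updates.jenkins-ci.org/update-center.json').text
    def json = raw.substring(raw.indexOf('(') + 1, raw.lastIndexOf(')'))
    def center = new JsonSlurper().parseText(json)

    // Report every plugin that declares no wiki link of its own.
    center.plugins.each { name, plugin ->
        if (!plugin.wiki) {
            println "${name}: no wiki URL"
        }
    }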


All your bases belong to us part 2- Verifying Called

The Typemock Insider Blog - Thu, 06/18/2015 - 10:42

We saw in part 1 how to change the behavior of ‘hidden’ base methods. To verify that a base method was called, in the coming Isolator 8.1 we have added a new API, OnBase(). For example:

    public class BaseClass
    {
        public virtual int VirtualMethod(bool call)
        {
            return 1;
        }
    }

    public class DerivedClass : BaseClass
    {
        public override int VirtualMethod(bool call)
        […]

The post All your bases belong to us part 2- Verifying Called appeared first on The Unit Testing Blog - Typemock.


JUC Speaker Blog Series: Martin Hobson, JUC U.S. East

I’ve been using Jenkins for some time now as the build server for the various projects that are assigned to our four-person software development team, but recently I had exposure to how things were done in a much larger team, and I came away with a better understanding of the kinds of demands that are placed on a build pipeline in these environments. It was quite an education – while the CI pipelines that I administer in our small team might require a handful of virtual machines in our corporate cloud, the pipeline in this team supported over one hundred developers and required several hundred VM instances at any given time.

When operating at this scale, efficiency does become important, as the Amazon cloud charges add up and become significant at this level. Using some relatively simple techniques, I was able to gain insight into what actually happened in the more complex build jobs and learned just how these VM instances were utilized. These build jobs configured over a dozen virtual machines each, and understanding the startup and execution flows was critical to making changes and improving efficiencies. I will be discussing how to instrument and analyze these complex builds in my Lightning Talk: "Visualizing VM Provisioning with Jenkins and Google Charts” and hope to see you all there!

This post is by Martin Hobson, Senior Software Developer at Agilex Technologies. If you have your ticket to JUC U.S. East, you can attend his lightning talk "Visualizing VM Provisioning with Jenkins and Google Charts" on Day 1.

JUC IS HERE! JUC U.S. East will begin with registration at 7AM on Thursday, June 18. The two-day conference is sure to be a blast! If you have not registered, you can still get a ticket! Check out the agenda for JUC U.S. East here and find the link to register.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


JUC Speaker Blog Series: Stephan Hochdörfer, JUC Europe

I am very much looking forward to the Jenkins User Conference in London, where I will present our insights on how to use Jenkins in a PHP environment. Since moving to Jenkins about 5 years ago, bitExpert has gained a lot of experience in running and managing a distributed Jenkins infrastructure. bitExpert builds custom applications for its clients, which means that we have to deal with different project infrastructures, e.g. different PHP versions. We rely heavily on Jenkins' build node concept, which I will briefly outline in the session. Besides that, I will give some in-depth insight into how we use Jenkins on a daily basis for the "traditional" CI tasks (e.g. linting code, checking code style, running tests), as well as how Jenkins is used to power our integration tests. Last but not least, I will cover how Jenkins acts as a kind of backbone for our Satis server, which allows us to host the metadata of our company's private Composer packages. Throughout the talk I will point out which Jenkins plugins we use in the different contexts, to give you a good starting point if you are new to the Jenkins ecosystem.

This post is by Stephan Hochdörfer, Head of Technology at bitExpert AG. If you have your ticket to JUC Europe, you can attend his talk "Jenkins for PHP Projects" on Day 2.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


Quality Gates Work – If You Let Them

Sonar - Thu, 06/11/2015 - 21:20

Some people see rules – standards – requirements – as a way to hem in the unruly, limit bad behavior, and restrict rowdiness. But others see reasonable rules as a framework within which to excel, a scaffolding for striving, an armature upon which to build excellence.

Shortly after its inception, the C-Family plugin (C, C++ and Objective-C) had 90%+ test coverage. The developers were proud of that, but the Quality Gate requirement was only 80% on new code, so week after week, version by version, coverage slowly dropped. The project still passed the Quality Gate with flying colors each time, but coverage was moving in the wrong direction. In December, overall coverage dropped into the 80s with the version 3.3 release: 89.7%, to be exact.

In January, the developers on the project decided they’d had enough, and asked for a different Quality Gate. A stricter one. They asked to be held to a standard of 90% coverage on new code.

Generally at SonarSource, we advocate holding everyone to the same standards – no special Quality Profiles for legacy projects, for instance. But this request was swiftly granted, and version 3.4, released in February, had 90.1% coverage overall, not just on new code. Today the project’s overall coverage stands at 92.7%.

Language Team Technical Lead Evgeny Mandrikov said the new standard didn’t just inspire the team to write new tests. The need to make code more testable “motivated us to refactor APIs, and led to a decrease of technical debt. Now coverage goes constantly up.”

Not only that, but since the team had made the request publicly, others quickly jumped on the bandwagon. Six language plugins are now assigned to that quality gate. The standard is set for coverage on new code, but most of the projects meet it for overall coverage, and the ones that don’t are working on it.

What I’ve seen in my career is that good developers hold themselves to high standards, and believe that it’s reasonable for their managers to do the same. Quality Gates allow us to set – and meet – those standards publicly. Quality Gates and quality statistics confer bragging rights, and set up healthy competition among teams.

Looking back, I can’t think of a better way for the C-Family team to have kicked off the new year.


Selenium Conf 2015 Details

Selenium - Thu, 06/11/2015 - 18:00

The Selenium Conf 2015 website is live!

You can now:

– purchase tickets (while supplies last)

– find out venue information

– submit a talk

– learn more about our talk selection process (tl;dr it is a blind review process to encourage diversity)

What are you waiting for? Go to the conference website already!


JUC Speaker Blog Series: Damien Coraboeuf, JUC Europe

Scaling and maintenance of thousands of Jenkins jobs

How do you avoid creating a jungle of jobs when dealing with thousands of them?

In our organisation, we have one framework, which is used to develop products. Those products are themselves used to develop end user projects. Maintenance and support are needed at each level of delivery and we use branches for this. This creates hundreds of combinations.

Now, for each product or project version (or branch), we have a delivery pipeline. We start by compiling, testing, packaging, publishing. Then we deploy the application on the different supported platforms and go through different levels of validation, until we’re ready for delivery. Aside from a few details and configuration elements, most of the pipelines are identical from one branch to the other, from one project to the other.

So, one framework, some products, several projects, maintenance branches, complex pipelines… We end up with many, many jobs to create, duplicate and maintain. Before even going in this direction, we saw this as a blocking issue: there was no way we could manually maintain thousands of jobs on a day-to-day basis.

The solution we were looking for should have the following characteristics:

  • Self service - our goal being to delegate the job and branch administration in Jenkins to the projects, in order to reduce the support time
  • Security - we didn’t want to open Jenkins to the projects at configuration level - not acceptable in our context
  • Simplicity - the solution should be simple enough to be manageable by people not knowledgeable about the core technologies of Jenkins
  • Extensibility - the solution must be flexible enough to allow extensions when needed

When we thought about using the Job DSL plug-in, delegating the creation of the pipeline to the project teams was OK from a self-service point of view, but it was not secure, and definitely not simple for people who don't know Jenkins.

In the end, we opted for a solution where:

  • The Jenkins team develops, maintains and versions several pipeline libraries
  • A project team would edit a simple property file listing the characteristics of the current branch, like which type of platform is supported, which version of the pipeline library to use, etc.
  • Upon commit of this shopping list, the complete branch pipeline is regenerated using the given version of the pipeline library
  • The pipeline library code reads the “shopping list” property file and runs a Job DSL script to generate the branch pipeline according to those parameters

By default, the pipeline library generates a classic pipeline, suitable for most needs. It is also possible to define and use extensions, like having additional jobs in the pipelines.
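
As a rough illustration of the approach (this is not the author's actual pipeline library; the property names, repository URL and job layout are all invented), a seed script using the Job DSL plugin could read the "shopping list" and generate the branch pipeline like so:

    // Hypothetical seed script: generate a branch pipeline from a property file.
    def props = new Properties()
    props.load(new StringReader(readFileFromWorkspace('shopping-list.properties')))

    def project   = props.getProperty('project')                  // e.g. 'myproduct'
    def branch    = props.getProperty('branch')                   // e.g. '2.1.x'
    def platforms = props.getProperty('platforms', '').split(',') // e.g. 'linux,aix'

    def base = "${project}-${branch}"

    job("${base}-build") {
        scm { git("ssh://scm/${project}.git", branch) }
        steps { maven('clean deploy') }
    }

    // One deployment/validation job per supported platform.
    platforms.each { platform ->
        job("${base}-deploy-${platform}") {
            steps { shell("./deploy.sh ${platform}") }
        }
    }

The point is that project teams only ever touch the property file; the generation code itself stays versioned and maintained by the Jenkins team.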

In case of new features or defects, we develop or branch a new version of the pipeline library and projects or branches can use it by changing the version of their shopping list file.

A project is brought into the system simply by generating its project seed. From it, the authorised members can generate the branch seed and any branch pipeline at any time. Those seed jobs and the pipelines themselves can also be driven directly from the SCM using our plugin.

The project teams are now autonomous and can pilot their pipelines without requesting any support. They act in a secure and isolated way, and cannot compromise the shared environment. The “shopping list” file is simple and well documented. The system is not rigid and allows for extensions.

This platform has been developed initially for a very specific framework and a set of projects which depend on it, but has been extended since to be able to support other stacks. It is structured in two different parts:

  • The seed platform itself - generation of branch structures in Jenkins and trigger end points for being piloted from the SCM
  • The pipeline libraries, referenced from the shopping list files

We still allow some small tools and applications to define their pipelines directly by providing a Job DSL script.

Using the same principle, we can also pilot other tools in the ecosystem - like Artifactory or Ontrack.

I'll talk about this seed platform on June 24th, at the Jenkins User Conference in London.

This post is by Damien Coraboeuf, Continuous Delivery Expert at Clear2Pay. If you have your ticket to JUC Europe, you can attend his talk "Scaling of Jenkins Pipeline Creation and Maintenance" on Day 2.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


JUC Speaker Blog Series: Will Soula, JUC U.S. East

Chat Ops and Jenkins

I am very excited to be attending the Jenkins User Conference on the East Coast this year. This will be my third presentation at a JUC and fourth time to attend, but my first on the East Coast. I have learned about a lot of cool stuff in the past, which is why I started presenting, to tell people about the cool stuff we are doing at Drilling Info. One of the cooler things we have implemented in the last year is Chat Ops and our bot Sparky. It started as something neat to play with ("Oooo lots of kittens") but quickly turned into something more serious.

Ever get asked the same questions over and over? Which jobs to run to deploy your code? What is the status of the build? These questions and more can all be automated so you do not have to keep answering them. Furthermore, when you do get asked, you can show the asker, and everyone else, how to get the information by issuing the proper commands in a chat room for all to see. With chat rooms functioning as the 21st-century water cooler, putting the information in the middle of the conversation is a powerful teaching technique. You are not sending people to some outdated documentation on how to get their code deployed, nor are you showing them steps today only to have them forgotten tomorrow. Instead, you deploy your code, and they see the exact steps needed to deploy theirs.

Even more impressive is the way ChatOps can bring your company together. Recently our CTO got a HipChat account so he could interact with Sparky. This gave me the idea that if we extend Sparky to deliver information useful to other teams (Sales, Marketing, Finance, etc.), we could get these wildly disparate teams into the same chat room together, where hopefully they will talk and learn from each other. Where DevOps is the bringing together of Dev and Ops, ChatOps can be the bridge across the entire organization. Come see my presentation on Day 1, Track 1 at 4:00 PM to learn how ChatOps can enrich your team, how Drilling Info is using it, and what our future plans for ChatOps entail.

This post is by Will Soula, Senior Configuration Management/Build Engineer at Drilling Info. If you have your ticket to JUC U.S. East, you can attend his talk "Chat Ops and Jenkins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


All your bases belong to us part 1- Changing Behavior

The Typemock Insider Blog - Sun, 06/07/2015 - 16:22

Handling ‘hidden’ base methods is tricky when testing components. A ‘hidden’ base is a virtual method that has an overridden implementation. In Isolator 8.1 we have added a new API, OnBase(), to help in these situations. For example:

    public class BaseClass
    {
        public virtual int VirtualMethod(bool call)
        {
            // do something with the network
            int theValue = 0;
            ...
            return […]

The post All your bases belong to us part 1- Changing Behavior appeared first on The Unit Testing Blog - Typemock.


JUC Speaker Blog Series: Andrew Phillips, JUC U.S. East

Automated Testing with Jenkins: At JUC East with Andrew Phillips

Next stop: Washington, DC! I’m looking forward to heading to JUC East in a couple weeks, which runs June 18-19. The Jenkins User Conference is the annual get-together for Jenkins customers, users, partners, developers and community members. It promises to be an exciting two days, and as an added bonus I get to catch up with Kohsuke Kawaguchi and Gene Kim!

I will be giving a talk about a topic that I think is a bit of an elephant in the room in the Continuous Delivery space: the critical importance of optimized Automated Testing. As you start to ship code faster, you’ll need numerous automated tests across many different tools, in many different jobs in your pipeline. But getting a grip on the results of all of your automated tests — and then figuring out whether your software is good enough to go live — becomes harder and harder as you speed up the delivery of your software.

I’ll share tips on how naming conventions, partitioning of testware and mirroring the application’s structure in the test code help you best handle automated testing with Jenkins. I’ll also try to provide some insight into how to keep the setup manageable, as well as share practical experiences of managing large portfolios of automated tests. Finally, we’ll showcase some practices that help you manage all your test results and add aggregation, trend analysis and qualification capabilities to your Jenkins setup.

Join us at the event, or check the slides or recording (which we’ll post after the talk) to learn more. Looking forward to seeing you there!

This post is by Andrew Phillips, at XebiaLabs. If you have your ticket to JUC U.S. East, you can attend his talk "How to Optimize Automated Testing with Everyone's Favorite Butler" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


JUC Speaker Blog Series: Peter Vilim, JUC U.S. East

In this talk I will be focusing on plugin development for Jenkins. I aim to capture some of the lessons we have learned at Delphix and some that I learned while I was in graduate school. At Delphix we have been heavy users of Jenkins for over four years, which is most of the history of our startup, and we currently run thousands of jobs per day. We have been quite happy with the experience and expect these numbers to grow significantly as our business scales beyond our current head count of 300.

The core concept of Delphix is Data as a Service. Our software allows businesses to virtualize databases and the data associated with their applications, then provision these on demand to developers and others who need virtual copies of them. Our development for this software spans the entire stack. We have quite a few kernel developers, including the original team for the ZFS filesystem, who work on the open source project, OpenZFS, which underpins our product. Further up the stack we have a large Java application that interacts with ZFS to perform virtualization operations, provides user-facing webservices, and interfaces with our internal Postgres metadata store, which holds the state of our system. Finally, above this we have a modern JavaScript front end for user interaction. Our full software product ships as a virtual machine on a variety of hypervisors.

As a result of these numerous components, end-to-end integration testing is very important to us, and it is our primary use of Jenkins. Before any developer checks code in to either our operating system or application repository, it must undergo several hours of automated integration testing. We also have nightly runs which go for far longer and test a much more extensive set of functionality. In addition, we use Jenkins for the build process of our software as well as final packaging for release. Because Jenkins serves as a hub for our development processes, having a well designed system is very important to us and saves us significant time.

Below are some of the key points I will be discussing at my talk. I hope you attend to learn more about the areas that I find very interesting.

  • I'm planning to discuss the structure of a Jenkins plugin. I'll also cover a few of the more advanced areas, such as distributed builds, that I see less frequently in plugins. In addition, I'll briefly cover unit testing, which is missing from many open source plugins.
  • I'll talk about some good patterns to use in plugins, as well as some areas where a plugin is not a good idea. I'm planning to pull from my own personal experience developing plugins, the experience of other people at Delphix working with Jenkins, and our experience using other open source plugins to talk about what works and what doesn't.
  • I'll give an overview of the current plugin development at Delphix and cover some of the lessons we have learned along the way. We have also started to take a "dogfooding" approach to some of our development: we use plugins internally to help our test process and then open source them, since our customers often find the features we use for testing our product useful in their production environments. This has the added bonus of making it easier to justify the development time spent on these plugins, since they are also features requested by our customers.
  • I'll discuss the trade-offs between using an already developed plugin or group of plugins, writing some scripts, and building your own plugin. Knowing when to do which can lead to major time savings as well as a better user experience.

I hope you attend. Even if you have no immediate plans to write your own plugins, hopefully you'll be able to learn about what makes plugins tick and how to better evaluate plugins when picking them for your own projects. Plugins were what originally got me excited about Jenkins and they allowed me to see its true potential as a build and test system. I hope to share some of that inspiration.

This post is by Peter Vilim, Member of Technical Staff at Delphix. If you have your ticket to JUC U.S. East, you can attend his talk "Providing a First Class User Experience with Jenkins Plugins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


JUC Speaker Blog Series: Nobuaki Ogawa, JUC Europe

On the 23rd and 24th of June I’ll attend the Jenkins User Conference 2015 Europe in London, where I’ll present a lightning talk about Continuous Delivery with Jenkins.

Here is a short overview of what I’d like to talk about there.

1. Continuous Build

My starting point was getting to know JenkinsCI. Our developers used JenkinsCI for the Continuous Build of our software, so our development environment was quite Jenkins-friendly from the beginning.

2. Continuous Deploy

--- Virtual Machine ---

We needed an environment where we could deploy each new build. As we are big fans of Microsoft, we decided to use Azure as our environment for Continuous Testing.

How do we control it? We use PowerShell, which can be executed from JenkinsCI.

--- Product Deployment ---

How did we achieve Continuous Deploy? My boss, who is DirectSmile’s Yoda, developed a very powerful tool called “DirectSmile Installation Service” to enable this.

So we integrated this tool with JenkinsCI, and now Jenkins can deploy DirectSmile products to any target server with a single button click!

3. Continuous Testing

Of course, we use JenkinsCI for Continuous Testing as well. How do we do that? We use Selenium to write and run the tests, so we can cover most features and execute them at any time.

We run them after every new version build, in pursuit of Continuous Delivery.

4. Continuous Sharing

I think it’s important to share the knowledge and experience I have gained with others, especially those who have just started with Continuous Delivery.

Don’t worry, it is probably much easier than you think.

As part of this practice, I’d like to share at JUC 2015 everything I have learned about how easy it is to achieve Continuous Delivery with Jenkins.

I’m really excited to meet everyone and talk about this there! See you at JUC 2015 in London!

About Me

My name is Nobuaki Ogawa, from Japan, and I currently work in Berlin, Germany for the software company DirectSmile as a DevOps QA Manager.

From the very first time I used JenkinsCI, it helped me a lot. Almost all the work I did last year revolved around Continuous Delivery with Jenkins.

From Build to Deploy, Test, and even Maintenance and Monitoring, my Jenkins takes care of everything.

It was super easy to achieve Continuous Delivery in the DirectSmile world with the help of JenkinsCI.

This post is by Nobuaki Ogawa, DevOps QA Manager at DirectSmile. If you have your ticket to JUC Europe, you can attend his talk "Jenkins Made Easy" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


Insights – Get to know your fakes part 6- Static Constructors

The Typemock Insider Blog - Mon, 06/01/2015 - 16:43

This is part 6 of a series of posts about the new Insights feature in Typemock Isolator 8.1. To start Insights, either debug your test or turn them on manually (Typemock->Windows->Typemock Insight and click the on/off button). Typemock will call the static constructor of fake objects once a real object is called; this emulates […]

The post Insights – Get to know your fakes part 6- Static Constructors appeared first on The Unit Testing Blog - Typemock.


The SonarQube COBOL Plugin Tracks Sneaky Bugs in Conditions

Sonar - Thu, 05/21/2015 - 13:28

Not long ago, I wrote that COBOL is not a dead language and that there are still billions of lines of COBOL code in production today. At COBOL’s inception back in 1959, the goal was to provide something close to natural language so that even business analysts could read the code. As a side effect, the language is really, really verbose. Each time a Ruby, Python or Scala developer complains about the verbosity of Java, C# or C++, he should have a look at a COBOL program to see how much worse it could be :). Moreover, since there is no concept of a local variable in COBOL, the ability to factor out common pieces of code into PARAGRAPHS or SECTIONS is limited. In the end, the temptation to duplicate logic is strong. When you combine those two flaws, verbosity and duplicated logic, guess what the consequence is: it’s pretty easy in COBOL to inject bugs in conditions.

Let’s take some examples we’ve found in production code:

Inconsistencies between nesting and nested IF statements

In the following piece of code, the condition of the second, nested IF statement is always TRUE. Indeed, by the time evaluation of the nested condition starts, we already know that the value of ZS-LPWN-IDC-WTR-EMP is ‘X’, so ZS-LPWN-IDC-WTR-EMP NOT EQUAL 'N' AND 'O' is TRUE by definition. What was the intent of the developer here? Who knows?
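
The original post showed the example as a screenshot, which is missing here; reconstructed from the description, the shape of the code is roughly:

    IF ZS-LPWN-IDC-WTR-EMP = 'X'
       ...
       IF ZS-LPWN-IDC-WTR-EMP NOT EQUAL 'N' AND 'O'  *> always TRUE here
          ...
       END-IF
    END-IF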

And what about the next one? The second condition in this example is by definition always TRUE, since the nesting ELSE block is executed if and only if KST-RETCODE is not equal to '02' and '13':
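
Again reconstructed from the description (the screenshot did not survive the feed), the pattern is something like:

    IF KST-RETCODE = '02' OR '13'
       ...
    ELSE
       IF KST-RETCODE NOT = '02' AND '13'  *> always TRUE in this branch
          ...
       END-IF
    END-IF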

Inconsistencies in the same condition

In the following piece of code, a single condition requires ZS-RPACCNT-NB-TRX to be equal to both 1 and 0. Obviously Quantum Theory is not relevant in COBOL, and a data item can’t have two values at the same time.
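
A reconstruction of the shape of that condition (the original example was an image):

    IF ZS-RPACCNT-NB-TRX = 1 AND ZS-RPACCNT-NB-TRX = 0  *> can never be TRUE
       ...
    END-IF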

The next example is pretty similar, except that here it is “just” a sub-part of the condition which is always TRUE: (ZS-BLTRS-KY-GRP NOT = 'IH' OR ZS-BLTRS-KY-GRP NOT = 'IN'). We can probably assume that this was not what the developer wanted to code.

Inconsistencies due to the format of data items

What’s the issue with the next very basic condition?

ZS-RB-TCM-WD-PDN cannot be greater than 9 since it’s declared as a single-digit number:
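
Reconstructed from the description, the declaration and the test would be along these lines:

    05  ZS-RB-TCM-WD-PDN  PIC 9.
    ...
    IF ZS-RB-TCM-WD-PDN > 9  *> always FALSE: a PIC 9 field holds one digit
       ...
    END-IF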

With version 2.6 of the COBOL plugin, you can track all these kinds of bugs in your COBOL source code. So let’s start hunting them to make your COBOL application even more reliable!


JUC Speaker Blog Series: David Dang, JUC U.S. East

I’ve implemented numerous test automation projects for clients, but recently I had a unique request, and Jenkins plays a critical role in it.

The “digital channel” is an industry buzzword at many companies these days. The digital channel represents a company’s content delivered through websites and mobile devices. Companies want the same website to work across any channel, in multiple browsers and on different operating systems. They also want that same website to work across an explosion of mobile devices. Add in the new generation of smart watches, and testing becomes a huge challenge for IT departments. One big issue is that there is too much duplication of testing effort.

In a perfect world, you would create a core set of test automation scripts that work across all digital channels. A client recently requested that my team and I create this perfect-world scenario, and we are doing just that. Jenkins pulls it all together by managing the execution and reporting.

Join me for my talk to learn how I’m using Jenkins, Selenium, TestNG, and Perfecto Mobile to solve the digital channel testing challenges for one client.

This post is by David Dang, VP of Automation Solutions at Zenergy Technologies. If you have your ticket to JUC U.S. East, you can attend his talk "Integrating Mobile Automation with Jenkins: A Case Study Using Perfecto Mobile with Jenkins" on Day 1.

Still need your ticket to JUC? Early bird pricing has been extended! Also, if you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.


Insights – Get to know your fakes part 5- Base

The Typemock Insider Blog - Wed, 05/20/2015 - 13:42

This is part 5 of a series of posts about the new Insights feature in Typemock Isolator 8.1. To start Insights, either debug your test or turn them on manually (Typemock->Windows->Typemock Insight and click the on/off button). In Isolator 8.1 we introduced base method faking, and this is also shown in Insight. When a base […]

The post Insights – Get to know your fakes part 5- Base appeared first on The Unit Testing Blog - Typemock.


JUC Speaker Blog Series: Andrew Bayer, JUC Europe

In the fall of 2011, the very first Jenkins User Conference was held in San Francisco. Over 250 people showed up. It was, to be completely honest, a bit shocking to me - that little project I’d gotten involved with less than three years earlier was big enough, interesting enough, important enough for 250 people to travel from around the world to spend a day talking about it? That’s an amazing feeling, and it was an amazing day. Since then, there’ve been three more JUCs in the Bay Area, three in Israel and two in Europe, with more talks on more Jenkins subjects and an ever-increasing number of attendees. This year, there are another four scheduled - three of them for two days each this time! Find out more about the first two, JUC US East and JUC Europe, below!

Not only are there enough worthy talks to merit a full day a few times a year - now there are enough to merit two days! At JUC US East 2015 outside Washington, DC on June 18 and 19, you can see talks on the Workflow plugin for Jenkins, test automation, mobile testing, plugin development, and a few talks on new and fascinating ways people are using Jenkins - even driving big data workflows! And then, just a few days later, on June 23 and 24 in London, there’s JUC Europe 2015, with talks covering things like the fantastic Job DSL plugin, reproducible build environments, Jenkins and Docker together, and my personal favorite, the 2015 edition of my Seven Habits of Highly Effective Jenkins Users talk.

Whether you’re interested in the latest innovations in continuous integration and delivery, or you’re a Jenkins plugin developer wanting to learn how to make your plugins more mature and useful, or you’re a Jenkins administrator trying to understand how to provide your users with a great platform for their builds and testing, or even if you’ve just heard about CI/CD and you want to find out more, the Jenkins User Conferences are a great opportunity to see all those things and meet with other Jenkins users and developers. I’m excited to attend my fifth JUC in London, and I hope to see you there!

This post is by Andrew Bayer, build and tools architect at Cloudera and longtime Jenkins contributor. If you have your ticket to JUC Europe, you can attend his talk "Seven Habits of Highly Effective Jenkins Users" on Day 1.

Still need your ticket to JUC? Early bird pricing ends May 15. Also, if you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:


Insights – Get to know your fakes part 4- Pointers

The Typemock Insider Blog - Mon, 05/18/2015 - 09:25

This is part 4 of a series of posts about the new Insights feature in Typemock Isolator 8.1. To start Insights, either debug your test or turn them on manually (Typemock->Windows->Typemock Insight and click the on/off button). Insight will give you pointers to understand your fakes: Run on different thread than setup. Recursive Fake […]

The post Insights – Get to know your fakes part 4- Pointers appeared first on The Unit Testing Blog - Typemock.


JUC Speaker Blog Series: Lorelei McCollum, JUC U.S. East

Have you heard Jenkins mentioned, but haven't really done much with it? Are you at JUC because you want to learn more? Has your company been pushing you to use Jenkins or to adopt a more agile build/test process using a Continuous Delivery/Continuous Integration method?

Jenkins 101 is going to give you an introduction to Jenkins and get you started in the right direction. Many sessions may be too in-depth, too specialized, or do a deep dive too fast, and while that is good for the more intermediate Jenkins user, the beginner can get lost fast and lose interest. My session will go through the basics of Jenkins, so anyone without prior knowledge can get up and running in just a short amount of time. We will cover building/configuring jobs, design of pipelines, security of your Jenkins master, fun groovy scripts and useful plugins to get you started. Whether you are a beginner or an advanced Jenkins user, you can always learn from how others are using Jenkins. Attend this session early on in your JUC lineup, so that you get the most out of the conference!

This post is by Lorelei McCollum, Software Engineer at IBM. If you have your ticket to JUC U.S. East, you can attend her talk "Jenkins 101" on Day 1.

Still need your ticket to JUC? Early bird pricing ends May 15. Also, if you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:
