Skip to content

Feed aggregator

Using TestTrack and The Scaled Agile Framework: Part 2

The Seapine View - Tue, 09/22/2015 - 03:00

In Part 1 of this series, we saw a pretty natural and easy way of documenting your Vision and processing epics through their assessment and approval stages. In this part, we look at how these records are used in the next stage of the Scaled Agile Framework® (SAFe®)—PROGRAM.

The Program stage involves breaking down these epics into features and user stories, and planning how the work to refine them is to be done. The big idea in TestTrack is that there is no linking or duplicating from one repository to another, because all the records are stored in one tool—making the integration from top to bottom so much easier.

Stage 2: PROGRAM The Program Stage of SAFeThe Program Stage of SAFe (CREDIT: Break down Epics into Features and User Stories

At some point in the process, your epics need to be broken down into features, and the features need to be broken down into user stories. I suggest doing it in this stage, but you could argue that this should be done in the previous Portfolio stage, because otherwise, how can you really decide whether or not to approve an epic? In my view, whether to do the breakdown now or in the Portfolio stage depends on your circumstance. Follow the Agile principle of self-organizing teams and do whatever works for you.

What is important is that, before you do your release planning, you need to have a set of features from your epics, because these form the backbone of your plans. According to SAFe, you should also have your initial set of user stories defined for each feature.

No matter how you do it, TestTrack makes it very easy to record all your features and user stories for each epic. First, create a requirements document, then create requirement types called “Feature” and “User Story.” The paragraph structure of the document provides a simple way to break down epics into features, and features into user stories, forming relationships that TestTrack can use later on. Remember our epic EP-1 “New CRM Capabilities” from the Portfolio stage? In the example below, it is broken down into features and user stories. It’s a new document, but EP-1 is still the same record. This is where the magic of TestTrack really starts to work.

Breakdown of EP-1Breakdown of EP-1 into features (FE) and user stories (US) Agile Release Train Roadmap Document

According to SAFe, your overall vision for your enterprise is delivered in a number of Agile “release trains.” Each Agile release train needs a roadmap that breaks down each train into high-level iterations called “program iterations.” The Program stage involves creating this roadmap by allocating each of the features in the backlog into program iterations.

The easiest and most natural way of documenting, managing, and communicating your roadmap in TestTrack is in a roadmap document. Create a requirement type called Program Increment, and organize your features into program increments in a roadmap document for each Agile release train.

In the image below, you can see how two of the features needed to deliver our Ep-1 epic are planned for Program Increment 1 within Agile Release Train 1, with the third feature planned for Program Increment 2. Note, too, how the Feature Records (FE-62, FE-63 and FE-64) appear both in this document and the breakdown document above. That’s because documents in TestTrack are just views of the records in the database. All of this is gluing your process together with a minimum of manual processes.

Roadmap for SAFeA Sample Roadmap in TestTrack Approve and Track Features

According to SAFe, you need to effectively sequence and prioritize features in the program backlog. Depending on your situation, you may also need approval steps for features. In the example below, features are organized by program increment in a Kanban process that includes the “Funnel,” “Under Review,” and “Analysis” steps, which are an approval process. You can use this or a simplified board specifically for features, which omits these steps.

Feature Review Process folderA Feature Review Process folder in TestTrack

As part of the planning, you allocate each feature to the chosen release train in the Feature Review Process folder. This allows you to track the feature status on a train-by-train basis. (You can also see the complete feature status, should you choose to do so.)

SAFe recommends using WSJF help prioritize your features, which you can do in exactly the same way you did for epics in the Portfolio stage.

From Portfolio to Program to Team

As you can see, the Program stage has to do with planning your work by allocating features and their related user stories into program iterations. In TestTrack, the epics from the Portfolio stage easily form the input for the Program stage and, without any duplication of data, break down the epics into features and user stories.

In Part 3 of this series, I’ll show you how to deliver these user stories in a series of traditional Scrum sprints, continuing the process using and adding to the user stories you have already created.

The post Using TestTrack and The Scaled Agile Framework: Part 2 appeared first on Blog.

Categories: Companies

The Slight Edge At Work

The Social Tester - Mon, 09/21/2015 - 09:00

The Slight Edge is a great book by Jeff Olson. The premise of the book is that in order to obtain the slight edge you need to do tiny positive improvements each day. It’s not the big things that give you the advantage in your work or life, it’s the daily consistent actions that make […]

The post The Slight Edge At Work appeared first on Rob Lambert.

Categories: Blogs

Using TestTrack and the Scaled Agile Framework: Part 1

The Seapine View - Mon, 09/21/2015 - 02:30

Many organizations are realizing the value of organizing their efforts using Agile software development practices. This often leads to bigger questions about how to scale this approach and how to integrate Agile with other related enterprise practices, such as portfolio and project management.

In other words, Agile is very good at delivering value as quickly as possible in a continuous flow, but how do we make sure that our Agile teams are working on the goals that our enterprise strategy has defined? How do we give the flexibility to our Agile teams that they need, while still making sure that we have the appropriate governance and budgeting directing their activities?

It’s natural to define and communicate your vision, goals, strategies, and roadmaps as documents or perhaps in distinct PPM tools. The problem with this approach is that these are disconnected from your development process. This forces you to make a difficult choice:

A. Allow the Agile development teams the flexibility they need, but struggle to keep them connected to your strategic goals.


B. Impose an Agile-killing bureaucracy to ensure alignment.

TestTrack gives you the best of both worlds because of the way it allows you to create documents. When you create a document in TestTrack, each item in the document is also a record in the database. This means that once you create your vision and roadmap documents and finish your high-level assessment and approval, the items within these documents simply and naturally evolve into features, user stories, and—ultimately—working software.

The Scaled Agile Framework

The Scaled Agile Framework® (SAFe®) has some very good ideas around scaling Agile. I’ve taken their ideas as a framework to illustrate how to use TestTrack to scale and integrate Agile into your enterprise. The basic idea is that, because you have a completely connected end-to-end process in one tool, it is easy to get visibility and transparency. Throughout this series, I’ll be referring to the sample project pictured below.

Sample TestTrack SAFe projectSample SAFe project in TestTrack

Before we go any further, I should say that this blog does assume some familiarity with SAFe. I don’t intend to (nor am I qualified to) explain the intricacies of this model. They explain it much better than I ever could here: However, I am going to try and give just enough background about SAFe so that we are all on the same page.

Here is how the good people at Scaled Agile Inc. visualize the Scaled Agile Enterprise:

 ScaledAgileFramework.comThe Scaled Agile Framework (CREDIT:

I’m going to use their terminology and some of their processes to illustrate how you can use TestTrack to assist you as you move towards Scaled Agile.

The SAFe model uses “epics” to mean the highest-level item to define what the software is going to do. A “feature” is the next level down, with each feature then broken down into one or more “stories.” See the diagram below, which shows the complete model.

Complete SAFe modelThe complete SAFe model

There are three main parts or stages defined in SAFe: Portfolio, Program, and Team. I’m going to walk you through how epics, features and user stories are used in these three stages.
I am going to give you an overview of the process, but even so, this is a very large topic. I’ll cover Portfolio today, and the other two stages in future blog posts.

Stage 1: PORTFOLIO SAFe PortfolioIn the Portfolio stage, you document your Vision. CREDIT: Vision Document

In the Portfolio stage, the first thing you do is document your Vision as themes, goals, initiatives, and epics within a TestTrack document. This is a simple and natural way of capturing, maintaining, and communicating the big picture of what you intend to do.

Create TestTrack requirement types of themes and epics to create a document structured like the image below. Note that just by virtue of the natural process of arranging themes and epics in paragraphs (as shown in the image below), TestTrack automatically knows which epics belong to which theme. This really helps in organizing and visualizing your activity by theme later on.

I’m going to follow Epic 1, “New CRM capabilities,” which I have highlighted throughout this blog series, so keep your eye on it! This epic is part of the “Become Leader in Customer Services” theme. Remember, too, that whenever you see EP-1 in TestTrack, it is the same record; you don’t have to worry about duplicating data from one place to another. This is true whether you are looking at a document, a Kanban board, a list view, or anywhere else. This is the key to ensuring seamless integration and visibility from top to bottom.


Epic 1 in TestTrackKeep your eye on Epic 1, highlighted by the green box. Lightweight Business Case and Epic Value Statements

SAFe suggests that you add a lightweight business case and epic value statement for each epic you consider. It’s easy to do that in TestTrack. I suggest that you use the model templates (or your own) as default values in TestTrack to automatically put the correct templates into custom fields “Light Business Case” and “Epic Value Statement.”

TestTrack Custom FieldsTestTrack with custom fields for “Epic Value Statement” and “Light Business Case” Use WSJF to score your epics

You need to figure out which epics you are going to sanction for the next stage, and in what order. You therefore need to figure out which is the most important one and whether each epic is going to be approved or not. SAFe suggests that you use “Weighted Shorted Job First” as a good way of scoring features, but I think it is not a bad way of evaluating epics, too. I’m sure some of you will shoot me down in flames for this, but whether you use this for features, epics, or both, it is relatively easy to implement in TestTrack. Add in a custom field for each of the factors in the calculation and a calculated field called “WSJF” derived from this. The calculation is defined as:

WSJF calculationWSJF calculation

A suitable list view for your epics allows you to enter the factors and see how that affects the score for each epic (or, later, features).


TestTrack list viewTestTrack list view Use a Kanban System to approve or reject Epics

Lightweight business cases, epic value statements, and WSJF values are all vital bits of information about your epic, but you need a way to manage the flow of epics as they appear in your Vision. You also need to control when work is done and data is added to your epic. There is no point, for example, in spending the effort on adding even a lightweight business case if the epic is a complete non-starter. SAFe suggests a Kanban process as the most suitable way of managing this, and also suggests a set of stages in this Kanban: Funnel, Under Review, Analysis, Backlog, Implementing, and Completed.

Implementing a Kanban along these lines is really easy in TestTrack:

1. Create a folder in TestTrack to contain your epics.
2. Create your Kanban (task board) as shown below.



Kanban task boardA Kanban task board in TestTrack

I would also recommend that you add an automation rule to automatically add each epic to this folder as soon as it is created. If you do that, then all your epics are immediately visible on your Kanban board as soon as you create them and appear in the first column, which is the “Funnel” column in SAFe. Note that you don’t need to do any additional grouping of your epics into themes. Epics are automatically grouped by theme in your Kanban (if you want them to be), simply because of the way you have organized your Vision document.

The Kanban board is used to manage the flow of epics throughout their lifespan. Once they reach backlog, they are then ready for inclusion in the Program stage, where we use these Epic records are used as the main input. I’ll look at the Program stage in Part 2 of this series.

The post Using TestTrack and the Scaled Agile Framework: Part 1 appeared first on Blog.

Categories: Companies

Office hour on form handling in Jenkins

Update: This week's office hour has been canceled.

This Wednesday, Sep 23, at 11 am PDT I will host another office hour on Stapler, the web framework used in Jenkins. This time, I'll show you how structured form submission in Jenkins works, and how Stapler can help you with it.

As usual, the office hour will use Hangout on Air, and a limited number of people will be able to join and participate. The others will be able to watch the office hour live on YouTube. Links to participate and watch will be posted before the event on the Office Hours wiki page.

Update: This week's office hour has been canceled.

Categories: Open Source

Only Kidding

Hiccupps - James Thomas - Sat, 09/19/2015 - 22:19
For the last week of the recent school holidays I was off work to look after my daughters, Hazel (7) and Emma (6). Amongst other things designed to occupy time and tire them out we went to the Centre for Computing History in Cambridge and on an adventure walk.

The Centre for Computing History is a bit of a nostalgia trip for me - Atari VCS, ZX Spectrum, Gorf (sadly not playable when we went) and the rest - but my girls don't carry that baggage and for them it stands or falls on its own merits. Although we did enter and run the classic BASIC program on the BBC micros (they chose to PRINT insulting things about their dad, naturally) the two things that really got them fired up were Big Trak and an Oculus Rift headset.

Big Trak is a 1980's toy moon rover with a keypad on the top for entering simple programs in a Logo-like language. The programs control forwards and backwards movement, rotation and the rover's lights. We spent ages experimenting with what they could do and what we could do with them (the instructions had gone AWOL) which included making them dance by spinning and moving back and forth, racing them across the room and driving them under a table from the front and navigating the legs of the table and its chair to exit from the side.

The Oculus Rift is a virtual reality headset. In the museum it was running some kind of demo reel showing a chairlfit ride. The girls were fascinated by the relationship between the real and unreal worlds and the consistencies between the sensations and information available in each. For example that they had arms and legs in the virtual world that were not controllable by movement, unlike the view which changed when the headset moved.

For me, these two things have three key qualities for getting children interested:
  • Wonder: How does it work? What can it do? Why does it do that? 
  • Control: I see it can do those things. Can I make it do those things? Can I  make it do those things when I want it to? In the way that I want it to?
  • Scope: How far can I take this thing? In what directions?
Scope has two interesting dimensions: intrinsic properties and extrinsic ones. A toy with no inherent variability, for example a building block, may have scope limited only by the imagination of the user. Alternatively, a toy plastic monster with a bunch of built-in behaviours appears to have scope until the behaviours are understood and then is good only for passive attendance at play tea parties.
    At the Pac Lunch Bar (oh yes!) I got talking to one of the museum staff about how kids could be made interested in computers and about current  projects like the Raspberry Pi and BBC micro:bit versus the early home and school computers like the BBC micro and Commodore 64. I'm all in favour of providing children with opportunities to get into computers - they have been a large and largely positive part of my own life after all - but being into computing for its own sake is not something I am particularly bothered about. Along with opportunities to try things out, I want to equip my children with curiosity, skills and tools that will help them in whatever domain they end up in.

    My kids' school issues homework from the Reception year onwards. I don't disagree with this in principle (although some do) even if I  sometimes wonder at the value of specific pieces. One aspect of the homework that I really do like, though, is the Learning Objective notes.

    According to the Glossary of Education Reform, "learning objectives are brief statements that describe what students will be expected to learn by the end of [the exercise]" and there's plenty of literature on them and the benefits they are perceived to bring to the various parties involved in education, including teachers, children, the schools and the parents (see e.g. 1 and 2).

    I guess I have something like learning objectives at the back of my mind when I'm setting up an adventure walk. I've mentioned these walks on the blog before: I make a list of things to spot and then lead the girls on a stroll round the local area where all of them can be found. It's a kind of I-Spy thing - so an observation task - but with lots of scope for some lateral thinking, creativity, numeracy, language, knowledge gathering and a bit of a laugh. Each walk has a sheet of paper with questions, spaces to draw, places to write down lists and so on, to be filled in as we go and I try to be clear to myself what the point behind  each element is, and to balance them along a variety of axes like the ones I've just mentioned and also wonder, control and scope.

    It's a really interesting challenge to create this kind of thing and then gratifying and illuminating to see it being worked through. We've done a few of them now and on this occasion for the first time we had a guest, Karo from my test team at Linguamatics. The girls had invited her after I told them she'd expressed an interest in the idea when it came up in conversation at a team meal down the pub.

    I thought it might be fun to list the questions. the objectives and what happened when we did the walk around the Cambridge Science Park.

    Emma's sheet
    Draw a giant metal tree.My goal here was to promote observation and imagination, metaphor and making connections. Because the description can be interpreted very literally I was hoping that they would be able to see past whatever image they conjured up on reading it and recognise the electricity pylon (the image at the top of this post) when they saw one. And there was a lot of excitement and laughter when they did! A mixture of surprise and pleasure and, I hope, the beginnings of a realisation that it's very easy to make assumptions without knowing it.

    Draw an animal that is worried about hurting its head.Deliberately tricky, this one needed a clue in the end. We saw some ducks on the lake in the Science Park, and talked about them. Emma even told Karo the joke we'd made up a few weeks earlier while walking round another lake at Wicksteed Park (it works better when you say it out loud):
    Emma: Shall we count the birds on the lake?
    Karo: OK then.
    Emma: (pointing) Swan.But it wasn't until I gave a clue - crouching down - that they connected that to ducking, that ducking was something you might do if you wanted to avoid banging your head on something, that that was one way of hurting a head and that the action is a homonym with the bird.

    Again, there was a lot of laughter on the realisation that there was an undiscovered - if somewhat tenuous - connection. The joy in discovery is something that I'm really keen to help them experience.

    Pick three things you like on the walk and write them down with an adjective.I wasn't sure whether they'd do this as they went or save it up to the end. All three of them left it to the end, and it turned out to be a nice coda to the walk. From a social science perspective it was fascinating that a kind of group decision without discussion resulted in each of them describing other participants. Emma had "great Karo", "wonderful Hazel" and "bald dad".

    What I'd wanted out of it was demonstration of language skills, vocabulary and thought given to the criteria used for selecting the things - on what basis do we like something? Is it the same for all things?

    Find a building with unusual windows. Why are they odd?I had a building on the Science Park in mind for this one,  but I was interested to see whether or not the girls would come up with the same building, what their reasons were, and how they would report them.

    Emma was entranced by the way that the windows reflected the sky and took Karo on a walk round the back of the building to see whether or not it was the same on the other side. Hazel was less interested in the question and more in trying to work out how the water flowed under the bridge outside the building.

    Find a park that you can't play in. What is it?I had a couple of answers ready here. We were on the Science Park and, although there's plenty of grass and space to play, I'd have accepted that as an answer. But what I was really thinking of was car parks, and there are many of them around the place. In fact, if it wasn't so heavily landscaped, there'd be a lot less difference between the Science Park and an out of town shopping centre.

    It wasn't until we were almost at the end of the walk that we crossed a car park and inspiration hit. I guess it might have been because we were reviewing the outstanding questions and that's neat too, because I deliberately set questions that could be answered in one shot and that would take some time, some that I expected to be crossed off early on the walk and some that probably wouldn't be, in order to see how they coped with managing the set of questions they were attacking at any given time.

    I think that the approach roughly went like this: at the start review all of the questions, focus on immediately tractable ones (e.g. find some red things), periodically return to the list of questions to see whether one has become tractable. When a multi-part question had been started it seemed to retain a high level of focus without a need to repeatedly ask it. This was true of ...

    Copy down the longest word you can see on a sign.I teasingly only supplied room to write one answer here, to see how they would cope with the problem of having to remember the longest word so far. Impressively, after only a short discussion, they just decided to write on the back of their clipboards (which I made out of cardboard and bulldog clips so they're clearly not precious).

    This was a great practical solution and what I particularly liked about it was that they took complete ownership of it, not asking me whether it was OK to do that, whether it was within the rules and so on. I was called upon to arbtitrate on whether answers were acceptable sometimes, but they seemed to feel that they could control the methodology used.

    Close to the end of the walk, when they'd got a 13-letter word, I asked how they defined "long" and whether there were any other ways it might be defined. This sparked a lovely conversation on the alternative of measuring the length of a word and how a "long" word in small letters might be shorter than a "short" word in big letters.

    How many bridges did you cross?Similar to the longest word, in this case I gave a simple box for entering the answer, wondering how they would count. Less discussion here, and no common approach. Hazel wrote numbers under the box, effectively counting in place "1" then "2" while Emma used a tally system.

    List five red things you see on the walk.More observation, but also the chance to compare the different shades of red, whether we perceive colours the same way and so on. We did have some discussion about whether or not particular things were red or orange and had some differences of opinion. We also thought about part-whole relationships: if a red car has red lights, can we count the lights as red independently of the car itself?

    I love this stuff.

    Pick three different leaves. Do you know what they are? Why do you like them?For this one I'd brought a reference book along and the idea was to look up the leaves in the book and try out different ways of comparing the leaves we'd got to the images. As it happened we only needed to do that once because they chose a couple of trees they already knew.

    When we did use the book it proved very hard to identify the particular kind of fir tree we were looking it, which itself was interesting - the fallibility of oracles, the need for our own judgement even when we have an apparent expert source, the fact that the book had drawings rather than photos, that it wasn't at true size, that there is significant variability amongst instances of the leaves from any one tree, but only one example of each leaf in the book. How can we compare the within-species variety to the cross-species variety to make an informed decision?

    A second-order concern of this question was to get the girls to do a little descriptive writing and, as in some of the earlier tasks, think about what decisions or judgements they're making and why.

    Hazel's sheet
    I had a route in mind when we started, with several possible alternatives depending on how we were doing for time. As it happened we used the shortest of them because we spent so long looking at stuff along the way. On a couple of occasions we diverged from the route because that was where interest had taken us. So long as we still stood a chance of seeing all of the things on the sheet - even if not spotting them - I went with that. Pursuing a thought for its own sake is a beautiful and fulfilling activity and I didn't want to spoil it.

    Another thing I didn't want to spoil was the degree of sharing and teamwork. My daughters didn't know Karo but still involved her right from the off. She was more one of them than like me and so got invited to discuss approaches, answers and so on in a way that I did not. I loved the way that they were prepared to suggest possible answers for the hard questions, or say what they were thinking and how they jointly decided where we'd go next. The transparent pleasure they were deriving from the walk (which they've got into the habit of asking for every school holiday now) was another great reward.

    Does this kind of activity give wonder, control and scope? I hope it does: I'm helping them to see wonder in and wonder about the world; I'm giving them control (in a safe way) of their exploration of their environment and exposing them to concepts which have limitless scope. Emma wants to make up the next adventure walk, so a new challenge for me: how to make the creation of an adventure into an adventure...

    A couple of things to note:

    I'm not an educationalist and I don't have any training in this kind of stuff. I'm just doing what feels natural to me, with and for my kids, being led by their enjoyment and interests. There's stacks of ways it works; for example, I get a kick out of helping them start to understand and make jokes because I adore the fact that the cognitive processes that that requires are sophisticated but also produce so much fun for the joker and the listener.

    Of course, I only get to do the experiment of bringing my kids up once so I'm heavily motivated to do it as well as I can, but I can't predict what the outcomes will be. Unintended consequences abound in our house: a familial war on the superiority of brown sauce over red being only one example. (Brown is best, by the way.)

    And the last thing: if you're going to try an adventure walk yourself, I've found that it's a good idea to finish somewhere that you can get a pint and a sit down.
    Categories: Blogs

    How does your website score? [and how to fix it]

    HP LoadRunner and Performance Center Blog - Fri, 09/18/2015 - 19:41

    satisfaction-scale.jpgOne of the biggest challenges for organizations is providing a  good user experience for their end users--be it on a mobile app or while using the company website.


    Often this begins with a solid design and results in a positive experience for all. However, it is often challenging to objectively know how it is performing and how to fix it versus the subjective it is slow or not working that many of us experience.


    Continue reading to learn more about how your site is performing and how you can fix it now.

    Categories: Companies

    How does your website score? [and how to fix it]

    HP LoadRunner and Performance Center Blog - Fri, 09/18/2015 - 19:41

    satisfaction-scale.jpgOne of the biggest challenges for organizations is providing a  good user experience for their end users--be it on a mobile app or while using the company website.


    Often this begins with a solid design and results in a positive experience for all. However, it is often challenging to objectively know how it is performing and how to fix it versus the subjective it is slow or not working that many of us experience.


    Continue reading to learn more about how your site is performing and how you can fix it now.

    Categories: Companies

    Continuous Integration with UrbanCode Deploy and IBM Business Process Server

    IBM UrbanCode - Release And Deploy - Fri, 09/18/2015 - 16:10

    I recently had the chance to put together a demo for IBM Business Process Manager which inspired me to share the results in this blog post. Before I continue, I wanted to set the context because BPM can cover many things as IBM has quite a rich product set in this area. In this post I will cover the continuous integration life cycle of IBM Integration Designer project builds into IBM Business Process Servers, not the IBM Business Process Center/Business Modeler-type deployments.


    First, I’ll cover some of the products that are being leveraged in this demo scenario. I have already mentioned some but to be more specific:

    • IBM Integration Designer (IID) (previously known as WebSphere Integration Developer)
      From the IBM product page, “an Eclipse-based software development tool that renders your current IT assets into service components for reuse in service-oriented architecture (SOA) solutions.”. In simpler terms, it takes a bunch of business workflows, interface mappings and integration points described in XML and written in Java that can be deployed into a IBM Business Process Server.
      It also comes with an integrated Business Process Server test environment that allows a developer to deploy a project locally for development and testing purposes from the IDE. From within this environment, a developer can also export an ear file build that gets deployed to a Business Process Server.
    • IBM Rational Team Concert (RTC)
      In a nutshell: integrated source configuration, work item, tracking and planning and build management. RTC integrates with IBM Integration Designer so that these capabilities are part of thedevelopers working environment. From here, the IID developer, can check in those workflows, mappings, interface definitions and other code, start builds (either personal or team) and collaborate with other developers through work items in an all-in-one integrated experience.

    • UrbanCode Deploy
      Considering that this blog finds its home in the UrbanCode Blog, I won’t waste too many words about its functionality as the multiple blogs by my colleagues do a great job already. If you’re new to this product, this page gives an excellent description. For the purposes of the demo scenario, I’ll be using the WebSphere Application Server – Deployment Plugin and the built-in integration with RTC’s build system to provide automated deployment to IBM Business Process Servers (which are essentially WebSphere Application Servers with Business Process Management capabilities added).
    • IBM Business Process Manager Advanced (BPM) (includes IBM Process Server; formally known as WebSphere Process Server)
      This contains the IBM Business Process Server that takes artifacts authored using IID and brings them to life. Any application (EAR) file that has been build using IID can be deployed to the IBM Business Process Server and it will start the required interfaces and bind the integration points as described by the workflows designed in IID. We will use UCD in combination with RTC to automate this process.

    The relationships between the products in a topology would look something like this:
    Product Roles

    Product Roles

    To make the demo realistic, I wanted to create a setup that somewhat reflected reality so I start out by setting up several BPM environments: Development (DEV), Integration (INT), Quality Assurance (QA), and Production (PROD). There are teams responsible for developing the code base that gets deployed onto these BPM servers. That is, they each have their own source code stream in RTC (besides PROD which receives build via promotion from other environments).

    Environments and Teams

    Environments and Teams

    The teams and environments are described as follows.

    • Development: This is representative of one more teams, each being principally responsible for the development of a product component (sometimes called a component team). The Development team’s team builds get deployed into the DEV environment.
    • Integration: This team is responsible for maintaining a clean status of the integration build, the build that is backed by an integration stream (INT) that contains all the changes from all development teams. As there is the possibility of merges occurring, especially in a BPM scenario, the integration team is responsible for working across the development teams to ensure that merges occur cleanly.The INT BPM Server contains the build which integrates all teams’ builds prior to quality assurance subsequent to production.
    • Quality Assurance: Integration Builds will be going through the rigor of QA testing and may contain additional artifacts (translation, legalese). There is an additional quirk in that a QA build can contain fixes performed by the Quality Assurance team that have not yet been integrated into development (now that’s agile!). This is why QA also has an RTC stream and build.
    • Production: A production server heavily wrapped in red tape: approvals and gates. There is no RTC stream, as this is primarily a release engineering team.

    For this blog post, I’ll demonstrate the end-to-end continuous integration scenario for deploying a DEV build, so we’ll just be dealing with the Development (DEV) stream but you will see the other streams, environments and teams reflected in the setup and the screenshots.


    Describing the setup in some detail goes a long way to explaining the scenario, so let’s dive right into it. You can follow along but you will need to already have the IBM Business Process Manager Advanced and IBM Integration Designer products. As they are no publically downloadable evaluations or trails you’ll need to contact an IBM Sales representative to arrange one or already be entitled. There is, however, evaluations of UrbanCode Deploy server and Rational Team Concert for which I provide the links for below.

    • UrbanCode Deploy Server
      I downloaded and installed a UCD Server on a server machine. If you don’t have UCD, you can fetch a trail version from here.
      If you are not sure about how to install UCD, you can find documentation here.
      There is nothing special about the install itself, I installed a versioning UCD agent on the same machine although it’s unused when using the RTC build system integration as builds are pushed from RTC’s build engine (Jazz Build Engine/JBE) and do not need to be pulled from UCD. (Pulling versions requires a versioning agent.) I install it anyway to get rid of the warning on the UCD page about a missing versioning agent but this is optional, you can simply ignore the warning.
    • UrbanCode Deploy WebSphere Deploy Plugin
      I downloaded the WebSphere Application Server – Deployment plugin, as for all intents and purposes installing an EAR file into IBM Process Server is identical.
    • RTC Server installation
      I downloaded and installed RTC on a machine. If you don’t have RTC, you can download a version that’s free for 10 developers. If you are not sure about how to install and set up RTC, you can find documentation here.
      Remember to set the public URL of the RTC to something that is accessible from the UCD server and from the developer’s workstation, which will have IID with the RTC client integration.
    • IBM Business Process Manger Advanced Servers
      1. I installed IBM Business Process Manger Advanced on all machines which constitute the environments (DEV, INT, QA, PROD) that will receive builds to deploy. The detailed installation instructions can be found here.
        I Installed the BPM into C:\IBM\WebSPhere\AppServer (as I was using a Windows machine)
      2. The UrbanCode Deploy Agent was installed on each of the BPM and connected them to the UrbanCode Deploy Server. This will allow the ear files built by the RTC Build Engine to be deployed into BPM.
      3. We also need to install the RTC Build System Toolkit on these machines. You can download the RTC Build System Toolkit here; it will need to be connected to the RTC server on start-up.
        As I installed BPM on Windows machines I created a build directory in C:\build that looks like this:
        Build Directory

        Build Directory

        The startJBE.bat just contains the following line as I unzipped the Build System Toolkit into the root of C, https://demo:10443/ccm is the public url of my RTC server and I am using a demo user to connect. (You’d normally want to create a build user for these purposes.) I created a build engine definition in RTC called dev so that the build engine knows which RTC build engine definition it attached to. (More on this later). Pass.txt contains the encoded password for the demo user.

        C:\jazz\buildsystem\buildengine\eclipse\jbe.exe -repository https://demo:10443/ccm -userId demo -passwordFile pass.txt -engineId dev

        (It should be noted that these machines are great candidates for a Blueprint in UrbanCode Deploy Blueprint designer.)

        1. Development Workstation
          The developer workstation has the IBM Integration Designer with the RTC capabilities installed via the the RTC p2 Install Repository. You can use the Install repository as a local updates site to install RTC from.
    Product Configurations Rational Team Concert

    The idea is to set up a project area and several teams which are responsible for maintaining a team stream (responsible for development of some aspect of the system being deployed). Each team has a team lead and there could also be several other developers working on the teams. Each team member is responsible for ensuring that their team stream adopts changes from other teams via the integration stream and at the same time delivers changes into the integration stream. In doing so, they also need to ensure that they don’ break the team and integration builds. The team lead monitors the team build to ensure that the build and the deployment are green but they also need to ensure the integration stream build and deployment are green. The integration team would be responsible for making sure all changes from team streams delivered into the integration stream doesn’t break the build.

    A typical DEV team flow would look something like this:

    DEV Team Delivery

    DEV Team Delivery

    In order to shorten an already lengthy post, I will not go through such a large scenario in this blog and concentrate on this one team, DEV though I have set up the structure to respect the work flow depicted. Generally, the same pattern can be applied to the other teams and builds.

    Check the RTC Knowledge Center if you don’t know the details of how to perform these operations. These are the things that I did to get the basic flow working:

    1. As the RTC administrator user, created some other users in RTC: DEV Developer (devdeveloper), and DEV Team Lead (devlead).Selection_184
    2. Launched IBM Integration Designer and created a connection to RTC using the administrator (demo in my case) in the Work Item perspective.
    3. Create a project area called Demo using the Scrum Process and assigned the administrator user as an administrator of the project and to the Product Owner role. I also assigned the DEV Team Lead (devlead) as a project Team Member.
    4. I created a DEV team , assigned DEV Team Lead as the Scrum Master and the DEV Developer as a Team Member
    5. In preparation for importing the IID sample project, I created Release 1.0 stream owned by the DEV team with four components:
      1. Library -> Contains the IID sample library project
      2. Mediation -> Contains the IID sample Mediation Project
      3. Service -> Container the IID Sample Service Project
      4. Release Engineering -> Contains the build files
    6. Selection_183

    We now have a basic stream setup and we should import some code. I switched to a new IID workspace and logged in as the DEV Team Lead user (devlead). Then I

    1. Created a workspace off of the DEV team’s Release 1.0 stream with team visibility (scope) with all components.Selection_191
    2. Imported the IID Hello World sample project completed artifacts which can be imported via the Help->Samples and Tutorials -> IID Integration Designer menu.
    3. Created a Release Engineering Eclipse project in the Java perspective with the following structure and these two files. These will be the files used by the build, the build.xml is used to communicate with RTC, which the iidBuild.xml uses BPM server service deploy tags to build the ear file. I right clicked on the project context menu, Team->Share Project… and added the Release Engineering Project to the Release Engineering Component in Jazz SCM.
    4. Switched to the Business Integration perspective, right clicked on the project context menu, Team->Share Project and added the sample:

      1. HelloService to the Service Component
      2. HelloWorldLibrary to the Library Component
      3. HellowWorldMediation to the Mediation Component
      4. Delivered into the Release 1.0 (DEV) stream using the Pending Changes view


    Once the code has been added, we can now create the build engine and definitions. To make things easier, I switch back over to the demo IID workspace as this is also our build user. Then I:

    1. Created a build engine called dev which the build engine on the dev BPM machine will bind. There are several properties that are used by the ant scripts that should be defined here:

      jazzBuildTookit C:\jazz\buildsystem\buildtoolkit jbePasswordFile C:\build\pass.txt jbeProperties C:\build\jbe-properties\${buildLabel}.properties jbeRoot C:\build jbeWorkspaceRoot C:\build\workspaces wps.home C:\IBM\WebSphere\AppServer
    2. We’ll create a build definition call DEV Service Build owned by the DEV team area using
      1. the Ant – Jazz Build Engine template
      2. The Jazz Source Control pre-build step
      3. Post-Build Deploy post-build step
    3. In the DEV Service Build Overview tab, add dev as the Supported Build Engine
    4. In the Properties tab set the following properties

      build.output.dir ${jbeRoot}/builds build.working.dir ${jbeRoot} componentVersion ${stream}-${buildLabel} library HelloWorldLibrary module HelloService stream DEV team.udeploy.debug false team.udeploy.timeout 480
    5. In the Jazz Source Control tab

      1. Create a workspace that flows from the Release 1.0 (DEV) stream called Release 1.0 DEV Service Build that is DEV team scoped and contains all the components.
      2. In the Load Options, set the Load directory to ${jbeWorkspaceRoot}/${team.scm.workspaceUUID}
      3. Select Delete directory before loading and all the Accept Options
    6. In the Ant tab

      1. set the Build file to ${team.scm.fetchDestination}/Release Engineering/ant/build.xml
      2. Set the check mark to include the Jazz build toolkit tasks
      3. Set the Properties file to ${build.output.dir}/${componentVersion}/
    7. In the Post-build deploy tab

      1. set the Enable flag and deploy if build has no errors or warnings
      2. Do not set deploy for personal builds
      3. Set the Server URI of the UCD server. This is the URL you use to access the UCD web interface
      4. Set a user name, the user needs to have appropriate access to the resource in UCD related to deploying the service component to the DEV environment. For now we can use the UCD admin user. This can be changed later. Test the connection to ensure connectivity.

      Set these properties

      1. Component: ${module}
      2. Version: ${componentVersion}
      3. Base Directory: ${}
      4. Include Files: ${module}.ear
      5. Links: RTC Build Result=${repositoryAddress}resource/itemOid/${buildResultUUID}

      In the Process Section Enable “Run Application Process” and set

      1. Application: HelloWorld
      2. Environment: ${stream}
      3. Process: Deploy
    8. You should now duplicate this build definition and call it DEV Mediation Build. Everything should be kept identical except:
      1. Create and use a new workspace called Release 1.0 DEV Mediation Build that also flows from the Release 1.0 (DEV) stream that is DEV team scoped and contains all the components.
      2. Set the module property in the Properties tab to HelloWorldMediation

    At the end of this your project tree should looks something like this:

    Make sure not to forget to start the RTC build engine on the BPM DEV machine after which you should be able to create builds in RTC but unless we create the artifacts in UCD that are used by RTC, it won’t deploy.

    UrbanCode Deploy

    UCD will be much simpler to setup as I have exported the entire application. The only thing you need to make sure is that you have loaded the WebSphere Application Server –Deployment Automation Plugin in the UCD Settings before you attempt to import it or continue. We’ll need some of the resource roles provided by the plugin in order to set up the resource tree in UCD to reflect the BPM deployments.
    If the agents were installed correctly they should already be in the Agents tab under Resources.

    1. In the Components Tab Create two components called HelloService and HelloWorldMediation, leaving all the defaults. These are just placeholders for now.Selection_164
    2. In the Resource tree
      1. Create a folder called DEV Servers and add the DEV BPM server agent to the folder. This should create a new resource bound to the agent. If you want to keep things simple, rename the resource (not the agent) to as it will already be bound to the environment when you do the application import. (Otherwise you’ll have to bind it yourself)
      2. Click Show->Resource Roles to bring up the resource roles panel and drag WebSphere Cell under and rename it to PSCell1 (the default for BPM)
      3. Modify the entries in the tree to facilitate this path, these are the defaults unless you changed them during BPM install and configuration, in which case use the value from the DEV environment BPM.
        /DEV Servers/
      4. Add the HelloService and HelloWorldMediation components directly under /DEV Servers/

      When finished the resource tree should looke like:

    3. At this point is probably worthwhile to define a resource template for the agent so that when configuring the resource tree for other agents, the template can be implied instead of redoing the entire tree. (Unless the cell, node and server names are different for all your servers in which case a template may not be as useful.)
    4. Finally, import the HelloWorld.json Application in the zip file and selecting “Upgrade Component If Exists”.
    5. Feel free to examine the component processes in UCD and the environment, most of the heavy lifting is done by the WebSphere Plugin and all we do is provide bindings between the RTC build output and BPM.
      The last thing is to create users for the DEV Team Lead and DEV developer in urbancode deploy. As I did in RTC, I also created team corresponding to the environments I then assigned DEV Team Lead and DEV developer to the Development team using the pre-packaged manager and developer roles. I then assigned the relevant objects using the team object mappings using the Standard type.
      Application: HelloWorld
      Component: HelloService, HellowWorldMediation
      Environment: HelloWorld/DEV
      Resource: DEV Servers/** (the entire resource tree)

      That should do it; you now have a continuous integration pipeline! To see it in action kick off a build in RTC, this should succeed in create a build record linked to the version that is uploaded to UDC. This version will be deployed using the application process for the specific environment bound to the BPM server. Checkout the following screenshots to see it in action.

    Categories: Companies

    My First Experience with Windows 10

    uTest - Fri, 09/18/2015 - 15:52

    This past weekend, I had the opportunity to click around on a computer that had recently been updated with the newest Microsoft software – Windows 10. While this wasn’t the most ‘up-to-date device’ (I was clicking around on my Grandmother’s PC’), I did get a quick glimpse of the latest operating system from Microsoft. Before […]

    The post My First Experience with Windows 10 appeared first on Software Testing Blog.

    Categories: Companies

    Q&A with WomenTesters Editor Jyothi Rangaiah

    uTest - Fri, 09/18/2015 - 15:40

    Jyothi Rangaiah is the editor of and the Women Testers e-magazine which celebrated it’s first anniversary this past July. Per their website they describe themselves as, “We at Women Testers have approached the experienced and the naive in the testing industry to contribute, come forward and cater to the needs of this field. An overwhelming response […]

    The post Q&A with WomenTesters Editor Jyothi Rangaiah appeared first on Software Testing Blog.

    Categories: Companies

    Pulse Admin UI Updates

    a little madness - Fri, 09/18/2015 - 06:59

    In my previous post, Pulse Roadmap Update, I mentioned that we are working on major changes to the Pulse administration UI. I also mentioned these changes were worthy of their own post, so here we are! After an initial evaluation and prototyping period, work on the new administration UI is now in full swing. We’ve still got months of work to go, but the direction has become clear enough for us to communicate.

    As I mentioned previously, the main goals of this rewrite are discoverability and efficiency. We want it to be easier for you to find the configuration you’re after, and faster to make changes when required. We’re also dragging the admin UI from it’s humble lightly-scripted beginnings into the brave new world of HTML5 (we might even drop the quaint “AJAX-powered” terminology from our website

    Categories: Companies

    DevOps Leadership Series: Accelerating Adoption

    Sonatype Blog - Thu, 09/17/2015 - 21:06
    We are at an inflection point in the adoption of DevOps. Now is not the time to blink. DevOps was once thought to be a practice attainable by a select few.  The early pioneers were the ones with arrows in their backs.  For those who succeeded a new level of awesomeness was achieved and admiration...

    To read more, visit our blog at
    Categories: Companies

    2015 DevOps Leadership Series

    Sonatype Blog - Thu, 09/17/2015 - 20:43
      Gene Kim (@RealGeneKim) Author, “The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win” Gene explains important themes in DevOps for 2015. Including: proving DevOps is applicable for large complex organizations, and how DevOps will help your organization not only win...

    To read more, visit our blog at
    Categories: Companies

    What is your definition of “Done”?

    Sauce Labs - Thu, 09/17/2015 - 19:00

    Why does a daily standup or scrum team have a definition of done (DoD)? It’s simple – everyone involved in a project needs to know and understand what “done” means.

    What is DoD? It is a clear and concise list of requirements a software increment must adhere to in order to be considered a completed user story, sprint, or be considered ready for release. However, for organizations just starting to apply Agile methods, it might be impossible to reach immediately. Your organization needs to identify the problems and work as a team to build your version of DoD to solve them.

    The Problem

    The following conversation occurs during your daily standup:

    Product Manager (PM): “Is the user story done?”
    Developer (Dev): “Yes!”
    Quality Assurance (QA): “Okay, we will execute our manual and/or automated tests today.”

    Later that same day:

    QA: “We found several issues, did Dev perform any code reviews or write any unit tests?”
    PM (to Dev): “QA found several issues, did you do any code reviews or unit testing?”
    Dev: “No, the code was simple. It was going to take too much time to write unit tests.”

    Has this ever happened to you?

    In traditional development practices, Dev finishes the code and hands it off to QA. Then QA spends hours, days, and sometimes weeks reviewing documentation, executing test cases, and holding bug-bash parties. This methodology was efficient initially, but now, an organization may realize it isn’t working as expected. The problems start when developers deliver code late in the sprint, not allowing enough time for code reviews, testing and bug fixes. This leads to undone work which will compound over multiple sprints, and can cripple a release.

    No one should neglect the importance of getting things done, and everyone needs to have a clear definition of “done” as an organization.

    How to Get There

    Determining the definition of “done” is an essential conversation every development team should have. A couple of elements can help paint a clear picture of the meaning of “done” for your organization.

    First, seriously consider creating lean user stories. This allows Dev to code complete on small, testable functionality of a story. This is a game changer. QA will get small chunks of completed code that can be tested throughout the entire sprint (same lean user story), versus waiting until the end of a sprint. I truly believe QA needs to be embedded and involved early, so they are working along with Dev and clearly understand the sprint deliverables. To make this efficient, Dev and QA must work on the same thing at the same time. Embedded QA has several benefits. They create transparency, help build in quality early, provide daily feedback, and more. If everything works out, it will eliminate the tradition of QA waiting for Dev to finish coding before starting QA tasks (development and testing). Until you change, you are still waterfall.

    Second, by following at least some of the guidelines listed below, your organization can start specifically defining its own meaning of DoD:

    Quality Of Work – inconsistent standards lead to bugs, unhappy customers, and poor maintainability. DoD effectively becomes a team’s declaration of values.

    Transparency – everyone understands the decisions being made and the work needed to complete a releasable increment.

    Generate Feedback – as progress towards “done” is made on a releasable increment, the opportunity for feedback should be built in. This can be accomplished in many ways, including code review, architecture review and automated testing.

    Clear Communication – progress is easy to track and report. The remaining work to do is clear.

    Expectation Setting – common understanding among all developers, product owners and quality assurance. When we say a task is done, everyone on the team knows what that means. When tasks are planned, they are estimated to account for the entire DoD.

    Better Decisions and Planning – work is planned to accommodate the DoD. Extra time can be estimated in the interest of ensuring a task is completed to the standards of the DoD.

    Checks and Balances

    The team owns, validates, and iterates over “done.” Which elements need a DoD checklist?

    • User Story – story or product backlog item
    • Sprint – collection of features developed within a sprint
    • Release – potentially shippable state

    The key principle of DoD is to have a predefined checklist for the user story, sprint, and release that the team agrees on. It is important to understand that everyone’s checklist will be different. The lists below are only samples, and not definitive, as each project may require its own definitions.

    When is your Team “Done” with a User Story in a Sprint?

    • Acceptance criteria are met
    • Peer review has been performed
    • Code is checked in
    • All types of testing are completed
    • Any other tasks and specified “Done” criteria are met

    When is your Team “Done” with a Feature in a Release?

    • Story planning is complete
    • All code reviews have been performed
    • Bugs are resolved
    • All types of testing are complete, with a 100% success rate
    • All appropriate documentation is in place
    • A demo has been given
    • A retrospective has been held
    • Any other specified tasks and “Done” criteria are met

    When is your team “Done” with a Release?

    • All sprints are complete, with results the team is satisfied with
    • Deployment to staging is complete
    • All types of testing are complete, with a 100% success rate
    • Rollback and remediation plans are in place
    • Configuration changes are applied
    • Deployment to production is complete
    • Production sanity checks are met
    • Release notes are complete
    • Training has been performed
    • Stakeholders have been notified

    The DoD is a comprehensive checklist that adds value to the activities asserting the quality of a feature. It captures activities the team can commit to, which improves the product and its processes, minimizes risk, and makes communication much clearer at each level (story, sprint, release), among other benefits.

    Look for ways to grow your story DoD so that you can consistently build and release quality software quickly.

    Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, coaching team members on how to write great automation scripts, and helping the testing community become better testers. Greg has spent most of his career working on software quality, concentrating on web browsers, APIs, and mobile. For the past five years he has focused on the creation and deployment of automated test strategies, frameworks, tools, and platforms.

    Categories: Companies

    The Primacy of Testability

    Software Testing Magazine - Thu, 09/17/2015 - 16:58
    An important responsibility for many software architects is fostering and defending non-functional software qualities. These qualities are numerous, and they can interact in complex ways, so techniques for keeping abreast of them are vital for ...
    Categories: Communities


    Testing TV - Thu, 09/17/2015 - 16:37
    Software developers generally don’t write poor, unmaintainable code out of any malicious or deliberate intent. They do it because software development is complex and there is not enough feedback in the process to ensure they adhere to good SOLID (Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion) object oriented principles. By using Test-Driven […]
    Categories: Blogs

    iOS 9 is Here, But Is It Worth It?

    uTest - Thu, 09/17/2015 - 16:17

    Crisper temperatures…the start of the school year…the changing colors of the leaves…the return of football…apple picking…all of these conjure up images and feelings that come with the transition out of summer and into fall.  However, there’s one other annual event that you can almost certainly set your autumnal calendar to: Apple’s latest full iOS release. The […]

    The post iOS 9 is Here, But Is It Worth It? appeared first on Software Testing Blog.

    Categories: Companies

    Docker Hub 2.0 Integration with the CloudBees Jenkins Platform

    Docker Hub 2.0 has just been announced; what a nice opportunity to discuss Jenkins integration!
    For this blog post, I'll present a specific Docker Hub use case: how to access the Docker Hub registry and manage your credentials in Jenkins jobs.

    The Ops team is responsible for maintaining a curated base image with a common application runtime. As the company builds Java apps, they bundle the Oracle JDK and Tomcat, applying security updates as needed.

    The Ops team uses the CloudBees Docker Build and Publish plugin to build a Docker image from a clean environment and push it to a private repository on Docker Hub. Integration with Jenkins credentials makes this easy, and the plugin lets them both publish the base image as "latest" and track every change with a dedicated tag per build.
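    As an illustration only, such a base image might be described by a Dockerfile along these lines. This is a minimal sketch: the image names, versions, and paths are hypothetical, and the official java image stands in for the Oracle JDK the post's Ops team actually bundles.

    # Hypothetical Ops base image bundling a Java runtime and Tomcat.
    # All names and versions here are illustrative only.
    FROM java:8-jre

    # Install the curated Tomcat bundle (assumed to be mirrored locally);
    # ADD auto-extracts a local tar.gz into the target directory
    ADD apache-tomcat-8.0.26.tar.gz /opt/
    RUN ln -s /opt/apache-tomcat-8.0.26 /opt/tomcat

    ENV CATALINA_HOME /opt/tomcat
    ENV PATH $CATALINA_HOME/bin:$PATH

    EXPOSE 8080
    CMD ["catalina.sh", "run"]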


    The Dev team is very productive, producing thousands of lines of Java code and relying on Jenkins to ensure the code meets coding and test-coverage standards while packaging the application.

    During the build, they also include the packaged WAR file in a new Docker image that builds on Ops' base image. To do this, they just had to write a minimalist Dockerfile and add it to their git repository. They can use this image to run advanced tests and reproduce the exact production environment (even on a laptop, for diagnostic purposes). The Ops team is comfortable with such an image because they know the base image is safe.
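    Such a minimalist Dockerfile might look like this (a sketch only; the repository name, WAR path, and Tomcat location are hypothetical and match the base-image sketch above):

    # Hypothetical Dev Dockerfile: layer the packaged WAR onto Ops' base image
    FROM mycompany/base-image:latest

    # Deploy the WAR produced by the Jenkins build as the root web application
    COPY target/myapp.war /opt/tomcat/webapps/ROOT.war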
      They have also installed the Jenkins DockerHub Notification plugin, so the job can be configured to run whenever the base image is updated on the Hub. With this setup, the latest build always relies on the latest base image, including all the important security fixes the Ops team is concerned about.

      This scenario has been tested on Docker Hub 2.0 and works like a charm. Updating the base image sources on GitHub triggers a build of the base-image job, which is then published to Docker Hub 2.0.
      Jenkins detects these changes to the Hub-hosted images, and jobs that depend on the upstream base image* are rebuilt, tested, and published (and possibly released).

      The Ops team is happy with this: their fear of developers running ancient Docker images full of security holes is calmed by knowing that simply updating the base image causes every project depending on it to be notified and updated automatically.

      An actual company would probably have a more sophisticated deployment pipeline than outlined above, with validation steps (and possibly approval) for each image.

      To learn more about Docker integration with the CloudBees Jenkins Platform, be sure to read the companion post, Architecture: Integrating the CloudBees Jenkins Platform with Docker Hub 2.0.

      You can read more about CloudBees and Docker containers in the CloudBees documentation.

      * Note: the new Docker Workflow feature automatically registers for changes to base images if you use it to build out your pipeline, as in the sketch below.
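      A minimal Workflow script sketch (the repository URL and image name are hypothetical):

      // Runs the build inside the Ops base image; using the image through the
      // docker variable records it as an input to the build, so an update on
      // Docker Hub can retrigger the pipeline.
      node {
          git url: 'https://github.com/mycompany/app.git'
          docker.image('mycompany/base-image:latest').inside {
              sh 'mvn -B clean verify'
          }
      }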

      The teams' logos are from a site I recommend you follow; you may not learn much, but you should get some good laughs.

      Nicolas De Loof
      Software Engineer

      Nicolas De Loof is based in Rennes, France. Read more about Nicolas in his meet the bees blog post, and follow him on Twitter.

      Categories: Companies

      Architecture: Integrating the CloudBees Jenkins Platform with Docker Hub 2.0

      Docker is an incredibly hot topic these days. Its role in Jenkins infrastructures will soon become predominant as companies are discovering how Docker fits within their own environments as well as how to use Docker and Jenkins together most effectively across their software delivery pipelines.

      The major use cases for Docker in a Jenkins infrastructure are:
      • Customize the build environment: Different applications often require different build tools, and some of these tools require root permissions to be installed on the build servers (X11/Xvfb and Firefox for headless tests such as Selenium, ImageMagick...). Jenkins admins once solved this problem by increasing the number of flavors of Jenkins slaves, but that approach was limited by hardware constraints and was not flexible for project teams. The CloudBees Docker Custom Build Environment Plugin and the CloudBees Docker Workflow Plugin offer a new way to solve this challenge with much more flexibility, allowing Jenkins admins to manage only one flavor of Jenkins slave (Docker-enabled slaves) and letting project teams customize their build environments to their needs by running their jobs in Docker containers.
      • Ship applications as Docker images: More and more applications get shipped as Docker images (instead of war/exe/... files) and the Continuous Integration platform has to build and publish these Docker images.

      For these scenarios, the Jenkins infrastructure needs access to a Docker registry, both to pull the Docker images used on Docker-enabled slaves and to push the Docker images created by Jenkins builds.

      Docker Hub
      Docker Hub is the cloud-based registry service offered by Docker, Inc. It combines the "official" registry of public images, on which "every" Docker user relies, with a private registry that allows users to manage private images.

      Integrating a Jenkins infrastructure with Docker Hub requires architecture decisions that are similar to the decisions to integrate a Jenkins infrastructure with online services such as GitHub or BitBucket.

      Direct connectivity from the Jenkins infrastructure to Docker Hub
      The most straightforward solution is simply to open network connectivity (HTTP and HTTPS) from the Jenkins slaves to Docker Hub.

      Architecture: Jenkins infrastructure connecting directly to Docker Hub
      Connecting the Jenkins infrastructure to Docker Hub through a proxy
      Many organizations prefer to secure the connectivity of the Jenkins infrastructure to the "outside world" with firewalls and proxies.

      To do so, declare the HTTP proxy in the configuration of the Docker daemon on each Jenkins slave, as documented in Docker Documentation - Control and configure Docker with systemd - HTTP Proxy.

      Sample /etc/systemd/system/docker.service.d/http-proxy.conf:
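      Following the referenced Docker documentation, the drop-in takes this form (the proxy address below is a placeholder for your organization's proxy):

      [Service]
      # Placeholder proxy address; replace with your organization's HTTP proxy
      Environment="HTTP_PROXY=http://proxy.example.com:80/"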


      Architecture: Jenkins infrastructure connecting to Docker Hub through an HTTP proxy

      Private Docker registries behind firewalls?
      This blog post covered how to integrate a Jenkins infrastructure with the Docker Hub public registry service. We will cover the integration of a Jenkins infrastructure with a private registry behind the firewall in a separate post.

      Accessing the Docker Hub registry in Jenkins jobs
      To see how to access the Docker Hub registry and manage your credentials in Jenkins jobs, please read Nicolas De Loof's blog post, Docker Hub 2.0 Integration with the CloudBees Jenkins Platform, and watch the accompanying screencast.

      Cyrille Le Clerc is a product manager at CloudBees, with more than 15 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.
      Categories: Companies

      Ensure Availability & Performance in SAP’s Digital Economy

      SAP applications play a key role in fulfilling business processes in today’s digital enterprises. Availability problems, even those that impact a single user, result in lost efficiency and, in a worst-case scenario, may even stop the process. From an IT operations perspective, it is a challenging task to isolate and identify the root cause of intermittent availability […]

      The post Ensure Availability & Performance in SAP’s Digital Economy appeared first on Dynatrace APM Blog.

      Categories: Companies
