
Feed aggregator


Stefan Thelenius about Software Testing - Tue, 06/09/2015 - 21:00

This is the second post of a "blog version"-series from my Let's Test session: Testability features.

What is testability?

I like the "supports testing" statement mentioned above. You could probably elaborate more on the definition of testability, but for this series of posts I will stick to "supports testing".
My previous post was about Setup, Testing and Bug investigation. You can use testability features/tools to improve the actual testing, but also to reduce setup effort and speed up bug investigation, in order to free up time for testing.

So using testability features can take you from a situation where setup and bug investigation eat up most of your time to one where most of it goes to actual testing, which is a better state for testing in my opinion.

But which testability features/tools can help us have more time for testing?

I will include some examples in my next post about this subject, so stay tuned...

Categories: Blogs

The 5 most common SharePoint Performance Insights you can take action on now

I’ve been on a SharePoint Performance Evangelist Tour for the past couple of months. Here are some of my contributions to making Users, and as a result SharePoint Admins, happier than they are right now: a Blog and YouTube Tutorial on Performance Sanity Check in 15 Minutes, a Webinar with Wendy Neal on Driving SharePoint Adoption, Workshops at […]

The post The 5 most common SharePoint Performance Insights you can take action on now appeared first on Dynatrace APM Blog.

Categories: Companies

Ready! API Integrated with TestComplete and Selenium

Software Testing Magazine - Tue, 06/09/2015 - 15:40
SmartBear Software has integrated Ready! API with the popular development and monitoring tools Git, JIRA, TestComplete, Selenium and AlertSite UXM. Integration with these tools gives developers and IT operations teams an end-to-end strategy for continuously improving API quality. Every day, more teams use Git to store source code and other deployment artifacts than any other repository system. Leaving test artifacts out of this process introduces the risk of overwrites, conflicts and delays. With Git integration in Ready! API, team members can now see which tests are currently being worked on in Git, ...
Categories: Communities

Four Must-Have Tools for Apple Watch Owners

uTest - Tue, 06/09/2015 - 15:00

Ever since the recent release of the Apple Watch, there has been a large amount of buzz around Apple’s foray into the world of wearables. For the average consumer, this swell of information can tend to be a little bit overwhelming. If you are one of the trendsetters out there to have one of these […]

The post Four Must-Have Tools for Apple Watch Owners appeared first on Software Testing Blog.

Categories: Companies

Decision Driven Test Management – 6 tips to improve the value of your testing

PractiTest - Tue, 06/09/2015 - 14:07
You were not hired to find bugs!

I have said this a number of times in the past: if your management thinks that your job is to find all the bugs in the product and deliver a defect-free release at the end of the process, I strongly recommend you find another management…

If you are lucky and work at a smart company, then the main objective of your work as a tester, and especially as a test manager, is better described as follows:

To provide stakeholders with visibility
into the status of the product and process,
so they can make the correct decisions.

Don’t get me wrong, we are still testing and reporting on our findings, but in this definition the focus is not on the product or even on the testing; it is placed on the stakeholders and the decisions they need to make (based on the information we provide them).

In order to explain this test management approach better I want to start by giving it a name:

Decision Driven Test Management

Many experienced test managers already use DDTM, or Decision Driven Test Management, unconsciously at one level or another. In many ways it is the logical way to work, and many of us adopt this approach without even noticing it.

But I don’t recall anyone defining it explicitly or teaching it publicly yet, so what I want to do is help us understand how we can all use this approach consciously to improve the value of our test management work, and teach it to those who are still moving towards this methodology but have not yet made the jump.

In a nutshell: Start from the people and their decisions, then plan your tests accordingly

Most people assume that when you start a testing project you begin by learning the product, then you plan the tests you want to run, and then you create a work plan or schedule.

Well… this is wrong!

The truth is that most projects usually set their release dates and internal milestones long before they start thinking about testing.

And so you will usually see an experienced test manager start a project by learning and understanding these dates and milestones; only then will he or she move on to learn the product, and only then, based on an understanding of the complete situation, plan the work schedule.

The reason we look at the project plan first is that by learning our milestones we also learn a lot about the decisions that need to be made as part of the process. We start by understanding what information will be required from us and when it will be needed, and then we can plan our testing accordingly.

6 tips to master DDTM

It is not always easy to grasp the small things that experienced managers do without even noticing, so let me try and explain the simple yet important principles behind this approach to test planning and test management.

What are the most important things you need to do to work correctly and efficiently with DDTM?

1.  Map your stakeholders and listen carefully to their needs.

Start by understanding who you are working for – and no, it is not (mainly) the end user!

Make a list of your project stakeholders and then prioritize this list based on who is more important to the project and to you.  Then go and talk to them to understand what they need to know as part of the project.

Typical stakeholders are Product, Project and Release Managers; Development Leads and even Developers; sometimes your circle will be broader and may include VPs of Marketing, Services, Support and even Sales; and on some occasions, when the project is very important or the company is relatively small, it can even include the CEO.

Each time you will have different stakeholders and you will need to make an effort to find all of them or at least the most important ones.

From my experience, the biggest challenge is that these people are not really aware of what information they need, so you will need to work with them to define these needs.

2.  Plan tests and deliverables based on the information needs.

Take the information needs of your stakeholders and translate them into concrete information deliverables.  You are looking here to plan your Metrics, Reports, Dashboards, etc.

Schedule these deliverables based on when they will be needed; many times these “delivery dates” are the milestones already defined in your project.

This will give you the internal milestone plan for your testing team.
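As a tiny sketch of what tips 1 and 2 amount to in practice, here is the stakeholder-need-to-deliverable mapping as data (the stakeholders, deliverables, and milestone dates below are invented for illustration):

```python
from datetime import date

# Each stakeholder's information need, translated into a concrete
# deliverable and scheduled against an existing project milestone.
deliverables = [
    {"stakeholder": "Release Manager", "deliverable": "go/no-go quality dashboard",
     "due": date(2015, 7, 1)},   # code-freeze milestone
    {"stakeholder": "Dev Lead", "deliverable": "open-defects-by-component report",
     "due": date(2015, 6, 15)},  # feature-complete milestone
    {"stakeholder": "Product Manager", "deliverable": "coverage-vs-risk summary",
     "due": date(2015, 7, 1)},
]

def due_by(milestone_date):
    """Return the deliverables the testing team owes on or before a milestone."""
    return sorted(d["deliverable"] for d in deliverables if d["due"] <= milestone_date)

# The testing team's internal milestone plan falls out of the schedule:
print(due_by(date(2015, 6, 15)))
```

The point of writing it down this way is that the test plan is derived from the decision dates, not the other way around.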

3.  Plan your tests according to your information needs, understand what information you want to capture up-front.

Now you know what reports, dashboards, statistics and additional information you need to provide and when.  This is what you need to plan your testing operations.

Make sure to explain to your testers why they are testing and what information to look for.  We often assume that our junior testers do not care why they are running the tests we give them, and this is one of the worst mistakes to make.

Your testers are intelligent (and if they are not then replace them!) so make them part of your “Testing Intelligence Team” and help them bring forward the information that will help your team make the right decisions.

Sometimes these findings cannot be planned, and they will depend on the avid eye of a smart tester to be found and reported in time!

4.  Be ready to change and improvise.

Fact No. 1:  You will not have enough time to test everything you need to test in order to provide the information that is required.

Fact No. 2:  Even if you manage to plan your schedule perfectly to fit every need known at the beginning of the project, things will change as the project progresses, and more stakeholders will require more information from you.

Since this is the case, you need to keep your eyes on the ball all the time. Don’t lose your mind when you realize that people have new questions and that you will need to alter and improvise your plans in order to keep up.

People are not bad because they change their minds or modify their requests!

Change is the only constant in most projects :-)  Embrace change; you have no alternative.

5.  Stream information via multiple channels.

Here are 2 additional and important things to remember:
– Different people absorb data differently.
– Sometimes you will need to present the same information two or three times before it is absorbed and understood by your busy stakeholders.

It is more effective to use multiple channels to stream your information, and in this way support the decision-making process.

Use Dashboards, Kitchen Monitors, Email Reports, Meetings, etc. to pass along the information your stakeholders need.

6.  Make this work iterative, improve with each iteration.

The first time you try this approach you will probably fail miserably!

The second time you will feel that you were almost able to help, but not quite 100%…

The third time you will start seeing a difference in how your company and stakeholders approach your testing.  They will begin to notice that you are proactively coming to them with the information they need.

From then on, they will come to you and ask you to be a more active part of the decision making circles.

This type of project, especially at the beginning, will be a continuous improvement effort.  Knowing this may help you cope with it better :-)

* Bonus tip – remember you are providing a service.

For some reason, some of my best testers have been people who previously worked either in sales or waiting tables. Really!

Why?  I think this is because they understand that they need to provide a service and so the “customer” is always the most important thing in their work.

I love geeks and computers; they are some of my best friends in life :-)  But when you work in testing you need soft skills and not only analytical skills.


Do you use DDTM in your process?  Share your tips!

As I mentioned before, I am sure that many of you already work this way without even noticing it.

If you do, please go ahead and share with us your experience!
What works best, and what do you do to make it better in your team?

Leave your comment and help all of us improve our test management approach!

Categories: Companies

Parasoft Resources

Improving productivity & quality - Tue, 06/09/2015 - 10:13
I wanted to let you know about Parasoft's API Testing Resource Library, which features a number of white papers, videos, articles, and case studies related to API testing and functional test automation. Here are a few recent additions I thought you might be interested in:

  • DirecTV Case Study: Learn how DirecTV automated a complex manual API testing process to dramatically increase the speed and scope of their testing, enabling them to bring top-quality software innovations to market in record time.
  • Western Pacific Bank Case Study: A leading NZ bank and financial-services provider wanted to extend their existing GUI-based testing to more extensively exercise the application logic within internal systems; learn how automated API testing helped them reduce business risks and save $2.1 million NZD over 18 months.
  • Lufthansa Cargo Case Study: Lufthansa Cargo needed to develop central, stable, and optimal-performance APIs without affecting the various front ends that were already in place or currently under construction. Discover how they achieved these goals while significantly increasing productivity.
  • MedicAlert Case Study: MedicAlert needed to accelerate its ability to roll out new services in a secure and effective fashion. Learn how they established a process for managing the functional, security, and performance testing challenges associated with their new capabilities and offerings.
Also, I'd like to invite you to schedule a demo if you'd like to see first-hand how Parasoft can help you take your API testing efforts to the next level. 
Categories: Blogs

Announcing the 2015 State of Medical Device Development Survey

The Seapine View - Tue, 06/09/2015 - 09:30

What’s the state of the medical device development industry? How is it changing? What trends and technologies are driving these changes?

If you work in the medical device industry, we need your insights for our annual State of Medical Device Development Survey.

The purpose of the survey is to investigate development methodology trends within the life sciences industry.

Painting the Landscape

When we first began the survey in 2011, our goal was to paint a picture of the medical device development landscape. Each year, we add a little more detail.

Last year, we heard from nearly 500 engineering, R&D, and regulatory professionals. Their responses showed how medical device teams manage key areas such as core product development artifacts, compliance, and traceability.

This year, we’re expanding the scope of the survey to learn more about:

  • Product development challenges
  • Agile’s place in the industry
  • The impact of emerging technologies
Share Your Knowledge

The survey is your chance to share your knowledge with your peers. All survey responses are kept anonymous, but you have the option to provide your email address if you’d like to register to win Amazon gift cards. We’re giving away four $25 gift cards and one $100 gift card in drawings throughout the run of the survey. We’ll notify you by email if you win.

You’ll also receive a free copy of the survey report when it is published in the fall.

We hope you’ll take the 10-minute survey to share your views on these topics and a few others. Your input is vital for accurate and meaningful results.

Take the Survey

The post Announcing the 2015 State of Medical Device Development Survey appeared first on Blog.

Categories: Companies

Stretching a Pint

Hiccupps - James Thomas - Tue, 06/09/2015 - 08:28
At last night's Cambridge Tester Meetup, Karo talked about heuristics (slides here). After a brief introduction to the topic, she walked us through a couple of testing mnemonics:

We then split into two groups for an exercise. While the other group applied FCC CUTS VIDS to testing Karo's kitchen - in fact, a schematic and floor plan of it - the group I was in took FEW HICCUPPS and a beer glass used at the 42nd Cambridge Beer Festival.

There are plenty of pictures of the glass at #cbf42, but to give a quick description: it's a pint glass with a loosely-themed Hitchhiker's Guide to the Galaxy/Beer Festival mashup logo (because it's the 42nd festival, we assume) on one side, and a Campaign for Real Ale (Camra) logo along with the festival name and dates on the other. It has calibration marks for different amounts of beer, apparently in accordance with some kind of volume-marking regulations, accompanied by a CE logo.

HICCUPPS is a set of consistency oracles and we agreed to use each of them as springboards for test ideas rather than receptacles of them, to avoid being constrained by whether an idea was "in" the category we happened to be discussing and risk losing it.

Here's a selection of the ideas we came up with. I haven't edited much, only to combine some overlapping items and lose some repetition and the notes I can't understand this morning. We didn't use the internet for the exercise, but I've looked up some references while writing this post and we could certainly use it for evidence and to inspire more questions.

  • is the glass supplied at the festival always a pint glass, this shape, this size, of this manufacturing quality? 
  • is the logo in the same position, in the same proportions, in the same style across festivals? (e.g. compare images in the festival's Flickr account)
  • are the same volume measures always printed on it (pint, half-pint, third-of-a-pint)?
  • does the glass always show the certification of volume using the CE volume mark
  • is there always a theme to the beer festival? Does it need to be reflected on the glass?
  • is it important to the festival that there is continuity or consistency across festivals, glasses etc?
  • is the festival logo on the glass intended to look this amateurish? (First impression: it's like a student rag picture) 
  • There's plenty of space on the glass, why squash the logo up in the way that has been done? (The mice have detail that's hard to see)
  • Would a simpler graphic design have been more striking? 
  • is it important that the measurements are accurate? (To what degree?)
  • are the fonts chosen appropriate for the audience? (No comic sans!)
  • is the use of colour appropriate? (The logo has to sit in front of many different colours of liquid)
  • is there any relationship between Hitchhiker's Guide to the Galaxy and Camra? Are there any potential negative connotations that could be made? 
  • is the glass consistent with the festival beer listing booklet, posters, staff uniforms etc?
Comparable Products
  • is there a standard shape, size, material etc for beer glasses at festivals? How about at Camra festivals? Cambridge festivals?
  • what about non-UK drinkers, what would they expect from a glass? In Britain, we still use imperial measurements but Cambridge is multicultural
  • pubs often don't have oversized beer glasses (as this one is, where the pint mark is below the top of the glass)
  • how easy is it to clean vs similar products?
  • what do similar kinds of events do about glasses? e.g. do wine festivals expect drinkers to use the same glass for red and white? Do they provide cleaning facilities for glasses? Is that part of this product?
  • is the glass solid? Will it break easily if dropped? Is the flooring chosen to be gentle on dropped glasses?
  • do Camra members have any expectations about the glass based on Camra conventions?
  • we observed what we thought were injection moulding marks on our glass - would hand-made glass be expected by any attendees? (They are already connoisseurs to some extent by going to the festival.)
Claims
  • are the volume markings correct?
  • is the time and date information printed on the glass correct?
  • is the vessel suitable for drinking beer from? Is it optimal? (What is the optimal glass for beer? cider? perry? soft drinks? Does it differ across beers?)
  • is the glass dishwasher safe?
  • would a glass from earlier beer festivals be honoured at this festival?
  • what does other festival material say, show, suggest about this glass?
  • does the glass alter the taste of its contents?
  • does the logo imply some endorsement from Douglas Adams' estate? (Particularly since Adams was from Cambridge)
  • is the glass built to last? (If so, last for what duration? The festival, life?)
  • is this really the 42nd festival? (according to who?)

User Desires
  • is it easy to drink from? to hold? to pass between people (e.g. friends for trying a taste, to the bar staff?)
  • is it stable when put down?
  • should it be more tactile, e.g. with 3D logo on it?
  • can this design of glass be stacked? is it stable when stacked?
  • is it easy to fill, can the measures be seen by the bar staff?
  • is it easy to carry multiple glasses (e.g. three in a triangle)
  • is it unique (e.g. for collectors)
  • is it robust?
  • does it have appropriate thermal properties (e.g. help to keep cold beer cold?)
  • is it safe (e.g. will it break into sharp shards when dropped?)
  • do customers desire gender-specific glasses? ("Do you want that in a lady's glass?" )
  • do customers want a glass that signifies no alcohol is in the drink? How about other kinds of specialist desires e.g. markings for gluten-free or vegetarian beer (is there such a thing, we asked? Yes.)
  • how are the glasses packaged for transport? Are they space-efficient?
  • are the production costs reasonable? affordable? 

Product
  • what is the product here? (we have permitted ourselves to switch between the glass, use of the glass, the festival ...)
  • is the thickness of the glass appropriate, comfortable to drink from?
  • do all of the instances of this glass at the festival look the same? Should they? To what tolerance?
  • should there be half-pint glasses too?
  • is the glass consistent with other aspects of branding?
  • what is the Camra logo about? It looks like it has a lid. Is that intentional?

Purpose
  • do I want to drink out of it?
  • is it obvious that it's a receptacle for liquids? For drinking from?
  • is it suitable for display?
  • does it look good in a collection of such glasses?
  • can it be easily, safely, efficiently transported and stored?
  • will the colours and other markings fade?
  • what else could it be used for? (e.g. holding coins or pens, as a vase, watering flowers, magnifying glass ... but this is a different testing exercise)
  • does it chip easily?
  • could you hurt people with it? (deliberately or not?)
  • is the glass inert?
  • can you stick it in your pocket when you need your hands free?
  • is it compatible with devices for holding glasses (e.g. deckchairs, belts)

Statutes
  • what is the CE volume marking? Would we need to test it in some way (e.g. check that the manufacturer is licensed to use it?)
  • are there hygiene standards for drinking vessels (e.g. certain grade, thickness, transparency of glass?)
  • are there conventions, contractual agreements, regulations about using the name of Cambridge or Camra in association with events?
  • does the festival have a license to sell beer?
  • do the bar staff need licenses or training to serve beer?
  • some brands of beer might require their product to be served in glasses branded for them?
  • does the logo conform to copyright law (e.g. with Hitchhiker's Guide to the Galaxy images)
  • does the glass fulfil the needs of the beer? (e.g. to have its head displayed, show bubbles, permit its colour to be appreciated, compared with others etc)
  • are glasses required to be round? (if so, how round? Could it be square? elliptical?)

Familiar Problems
  • what are common problems of any kind of branded product? branding wearing off, typos, correct copy etc
  • glasses with handles are often a pain to fit into a cupboard

Explainability
  • the logo might not be obvious to people not in the intersection of Hitchhiker's and Camra fans
  • explainability is a kind of testability heuristic
  • the precise location of the festival isn't given, only the town. Should it be precise?

World
  • this is a pint glass to most Brits at least. To others it might just be a glass
  • is it obviously a beer glass? Probably it is to those familiar with the conventions of such glasses
  • does it obey the laws of physics?
  • is it a practical object, or a collectors item?
  • why does it have a wooden barrel in the logo when the beer at the festival no longer uses them?
  • does the audience expect nostalgia?
  • is it quintessentially English?

Images: Twitter, Wikipedia, Amazon 
Categories: Blogs

Multi-tenancy with Jenkins


As your Jenkins use increases, you will likely extend your Jenkins environment to new team members and perhaps to new teams or departments altogether. It's quite a common trend, for example, to begin using Jenkins within a development team then extend Jenkins to a quality assurance team for automating tests for the applications built by the development teams. Or perhaps your company is already using Jenkins and your team (a DevOps or shared tooling kind of team) has a mission to implement Jenkins as a shared offering for a larger number of teams.

Regardless, the expansion is an indication that your teams are automating more of their development process, which is a good sign. It should go without saying that organizations are seeing a lot of success automating their development tool chains with Jenkins, allowing their teams to focus on higher-value, innovative work and reducing time wasted on mundane tasks.

No one wants this, after all (no dev managers or scrum masters, anyway):

At the same time, if not planned carefully, an expansion that was meant to extend those successes to more teams can have unintended consequences and lead to bottlenecks, downtime, and pain. Besides avoiding the pain, there are also proactive steps you can take to further increase your efficiency along the way.

What is multi-tenancy?

For the purposes of this blog post, let's define multi-tenancy for Jenkins: multi-tenancy with Jenkins means supporting multiple users, teams, or organizations within the same Jenkins environment and partitioning the environment accordingly.

Why go multi-tenant?

You might ask: "Jenkins is pretty easy to get up and running; why not just create a new Jenkins instance?" To some extent, I agree! Jenkins is as simple as java -jar jenkins.war, right? This may be true, but many teams are connected in one way or another… if two related but distinct teams or departments work on related components, it's ideal that they have access to the same Jenkins data.

Implementing Jenkins - at least, implementing it well - takes some forethought. While it is indeed easy to spin up a new Jenkins instance, if your existing team using Jenkins already has a great monitoring strategy in place or a well-managed set of slave nodes attached to their Jenkins instance, reusing a well-managed Jenkins instance seems like a good place to start. I mean, who wants to wear a pager on the weekend for Jenkins, anyway?

Establishing an efficient strategy for Jenkins re-use in an organization can help reduce costs, increase utilization, enhance security, and ensure auditability/traceability/governance within the environment.

What features can I use to set up multi-tenancy?

As you begin to scale your Jenkins use, there are a number of existing features available to help:

  • Views
    • The views feature in the Jenkins core allows you to customize the lists of jobs and tabs on the home screen for a better user experience when using a multi-tenant Jenkins instance.

  • Folders
    • The Folders plugin, developed in-house at CloudBees, is even more powerful than views for optimizing your Jenkins environment for multi-tenancy. Unlike views, Folders actually create a new context for Jenkins.

    • This new context allows, for example, creating folder-specific environment variables. From the documentation: "You can [also] create an arbitrary level of nested folders. Folders are namespace aware, so Job A in Folder A is logically different than Job A in Folder B".
  • Distributed Builds
    • If you're not already using Jenkins distributed builds, you should be! With distributed builds, Jenkins can execute build jobs on remote machines (slave nodes) to preserve the performance of the Jenkins web app itself.

    • If you extend your Jenkins environment to additional teams, that's all the more reason to focus on preserving the master's performance.

    • Even better, distributed builds allow you to set up build nodes capable of building the various types of applications your distributed teams will likely require (Java, .NET, iOS, etc.)

  • Cleaning Up Jobs
    • When the Jenkins environment is shared, system cleanup tasks become more critical.

    • Discarding Old Builds and setting reasonable timeouts for builds will help ensure your build resources are available to your teams.

  • Credentials API
    • Jenkins allows managing and sharing credentials across jobs and nodes. Credentials can be set up and secured at the Folder level, allowing team-specific security settings and data.
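The folder-scoped namespace is the key idea behind several of the features above: a job's identity includes its folder path, so two teams can use the same job name without colliding. Here is a minimal model of that behaviour (illustrative Python, not Jenkins code; the folder names, job names, and environment variables are invented):

```python
class Folder:
    """Minimal model of Jenkins folder namespacing (illustration only):
    jobs and settings are scoped to their folder, so the same job name
    can exist independently in two different folders."""

    def __init__(self, name, env=None):
        self.name = name
        self.env = env or {}   # folder-specific environment variables
        self.jobs = {}         # job name -> job config, scoped to this folder

    def add_job(self, job_name, config):
        self.jobs[job_name] = config

    def full_name(self, job_name):
        # A job's identity is its full path, e.g. "Folder A/Job A".
        return f"{self.name}/{job_name}"

team_a = Folder("Folder A", env={"DEPLOY_TARGET": "staging-a"})
team_b = Folder("Folder B", env={"DEPLOY_TARGET": "staging-b"})

# Both teams can define "Job A" without colliding:
team_a.add_job("Job A", {"script": "build.sh"})
team_b.add_job("Job A", {"script": "make all"})

assert team_a.full_name("Job A") != team_b.full_name("Job A")
```

In real Jenkins, the Folders plugin gives each item a full name of this form, and folder-level settings such as credentials are resolved by walking up the folder hierarchy.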

Stressing the multi-tenancy model

As you scale your Jenkins use, you will find there are some stress points where it can be... less than ideal to share a single Jenkins master across teams:

  • Global configuration for plugins
    • Some plugins support only global configuration. For example, the Maven plugin's build step default options are global. Similarly, the Subversion SCM plugin's version configuration is a global setting.

    • If two teams want to use the same plugin differently, there aren't many options (even worse: different versions of the same plugin).

  • Plugin Installation and Upgrades
    • While Jenkins allows plugins to be installed without a restart, some plugins do require a restart on install. Further, all plugins require a Jenkins restart on update.

    • Some plugins have known performance, backward compatibility, and security limitations. These may be acceptable for one team, but perhaps not all your users.

  • Slave Re-use
    • When multiple teams use the same slaves, they usually share access to them. As mentioned above, care must be taken to clean up slave nodes after executing jobs.

    • Securing access for sensitive jobs or data in the workspace is a challenge.

  • Scale
    • Like any software application, a single Jenkins master can only support so many builds and job configurations.

    • While determining an actual maximum configuration is heavily environment-specific (available system resources, number and nature of jobs, etc.), Jenkins tends to perform best with no more than 100-150 active, configured executors.

    • While we've seen some Jenkins instances with 30,000+ job configurations, Jenkins will need more resources and start-up times will increase as the job count grows.

  • Single Point of Failure
    • As more and more teams use the same Jenkins instance, the impact of any outage becomes larger.

    • When Jenkins needs to be restarted for plugin updates or core upgrades, more teams will be impacted.

    • As teams rely more and more on Jenkins, particularly for automating processes beyond development (e.g. QA, security, and performance test automation), downtime for Jenkins becomes less acceptable.

Tipping Point

Hopefully this article saves you some time by laying out some of the stress points you'll encounter when setting up multi-tenancy in Jenkins. Eventually, you'll reach a tipping point where running a single, large multi-tenant Jenkins master may not be worth it. For that reason, we recommend taking your multi-tenancy strategy to the next level: creating multiple Jenkins masters.

For each organization the answer is a little different, but CloudBees recommends establishing a process for creating multiple Jenkins masters. In a follow-up post, we'll highlight how the CloudBees Jenkins Platform helps manage multiple Jenkins masters. With CloudBees Jenkins Operations Center, your multi-tenancy strategy simply expands to masters as well, making your Jenkins masters part of the same Jenkins Platform. We'll also share some successful strategies (and some not-so-successful strategies) for determining when to split your masters.

Categories: Companies

Nexus Lifecycle and Atlassian Bamboo: Improve Your Builds

Sonatype Blog - Mon, 06/08/2015 - 22:27
Sonatype Lifecycle now provides native Atlassian Bamboo support to improve the quality of your build outputs. Sonatype provides instant analysis of open source components used in every Bamboo build and alerts development teams to any quality, license, or security issues identified. By catching the...

To read more, visit our blog at
Categories: Companies

    JUC Speaker Blog Series: Will Soula, JUC U.S. East

    This year will be Will Soula's third time presenting at a Jenkins User Conference, fourth year as an attendee, and his first time at a JUC on the East Coast! In his presentation this year, Will will be talking about what Drilling Info uses to bring their entire organization together. ChatOps allows everyone to come together, chat and learn from each other in the most efficient way. 
    This post on the Jenkins blog is by Will Soula, Senior Configuration Management/Build Engineer at Drilling Info. If you have your ticket to JUC U.S. East, you can attend his talk "Chat Ops and Jenkins" on Day 1.

    Still need your ticket to JUC? If you register with a friend you can get two tickets for the price of one! Register for a JUC near you.

    Thank you to the sponsors of the Jenkins User Conference World Tour:

    Categories: Companies

    JUC Speaker Blog Series: Will Soula, JUC U.S. East

    Chat Ops and Jenkins

    I am very excited to be attending the Jenkins User Conference on the East Coast this year. This will be my third presentation at a JUC and fourth time to attend, but my first on the East Coast. I have learned about a lot of cool stuff in the past, which is why I started presenting, to tell people about the cool stuff we are doing at Drilling Info. One of the cooler things we have implemented in the last year is Chat Ops and our bot Sparky. It started as something neat to play with ("Oooo lots of kittens") but quickly turned into something more serious.

    Ever get asked the same questions over and over? What jobs to run to deploy your code? What is the status of the build? These questions and more can all be automated so you do not have to keep answering them. Furthermore, when you do get asked you can show them, and everyone else, how to get the information by issuing the proper commands in a chat room for everyone to see. With chat rooms functioning as the 21st century water coolers, putting the information in the middle of the conversation is a powerful teaching technique. You are not sending people to some outdated documentation on how to get their code deployed, nor are you showing them the steps today only to be forgotten tomorrow. Instead you can deploy your code and they see the exact steps needed to get their code deployed.

    Even more impressive is the way ChatOps can bring your company together. Recently our CTO got a HipChat account so he could interact with Sparky. This gave me the idea that if we extend Sparky to deliver information useful to the other teams (Sales, Marketing, Finance, etc.) then we would be able to get these wildly disparate teams in the same chat room together and hopefully they will talk and learn from each other. Where DevOps is the bringing together of Dev and Ops, ChatOps can be the bridge across the entire organization. Come see my presentation Day 1: Track 1 at 4:00 PM to learn how ChatOps can enrich your team, how Drilling Info is using it, and what our future plans entail for ChatOps.
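    The command-and-response idea is easy to sketch. Here is a toy illustration (not Drilling Info's actual Sparky implementation; the commands and replies are invented) of the core of a ChatOps bot: map chat messages to handlers so routine questions answer themselves in the room, in front of everyone.

    ```ruby
    # A toy ChatOps dispatcher: each pattern maps to a handler that builds
    # the reply a bot like Sparky might post back into the chat room.
    HANDLERS = {
      /build status/i => ->(_match) { 'main build: passing' },
      /deploy (\w+)/i => ->(match)  { "deploying #{match[1]}, follow along here" }
    }

    # Find the first pattern that matches the message and run its handler.
    def respond(message)
      HANDLERS.each do |pattern, handler|
        match = pattern.match(message)
        return handler.call(match) if match
      end
      "sorry, I don't know that one"
    end

    puts respond('deploy billing')   # deploying billing, follow along here
    puts respond('build status')     # main build: passing
    ```

    In a real setup the handlers would shell out to Jenkins jobs or query its API; the point is that both the answer and the command that produced it are visible to the whole room.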

    This post is by Will Soula, Senior Configuration Management/Build Engineer at Drilling Info. If you have your ticket to JUC U.S. East, you can attend his talk "Chat Ops and Jenkins" on Day 1.

    Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.

    Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

    Categories: Open Source

    Test Cases Are Not Software Testing

    Testing TV - Mon, 06/08/2015 - 17:43
    Software testing means evaluating a product by learning about it through experimentation. This is a dynamic, exploratory process. Although we might script parts of it, and even reduce some of it to programmatic fact checks, testing itself is a live performance. In fact, all technical work is a live performance. Programming, managing, designing…it’s all a […]
    Categories: Blogs

    Behaviour-Driven Development with Behat

    Software Testing Magazine - Mon, 06/08/2015 - 17:14
    Agile development is a big thing nowadays. Almost every project wants to deliver value as quick as possible, but not all of them succeed because of the sheer amount of work most projects require. But what if you could actually deliver 2 times more value, but 3 times fewer features? Behat is an open source Behavior Driven Development (BDD) framework for PHP inspired by the Ruby Cucumber BDD framework. This talk will discover the way to focus on quality as opposed to quantity in regards to software development. And more importantly, ...
    Categories: Communities

    Apache ANT Setup

    Testing tools Blog - Mayank Srivastava - Mon, 06/08/2015 - 14:07
    Follow the steps below to set up the ANT build tool: go to the folder that was downloaded, in this case “C:\Selenium\ANT\apache-ant-1.8.4-bin\apache-ant-1.8.4”; go to Computer properties -> click on the Advanced tab -> Environment Variables; under System Variables, create a new variable named “ANT_HOME”; paste the above ANT directory into the path text box; click Search for a PATH variable […]
    Categories: Blogs

    How To Test Responsive Web Apps with Selenium

    Sauce Labs - Mon, 06/08/2015 - 14:00

    The Problem

    When testing a web application with a responsive layout you’ll want to verify that it renders the page correctly in the common resolutions your users use. But how do you do it?

    Historically this type of verification has been done manually at the end of a development workflow — which tends to lead to delays and visual defects getting released into production.

    A Solution

    We can easily sidestep these concerns by automating responsive layout testing so we can get feedback fast. This can be done with a Selenium test, Applitools Eyes, and Sauce Labs.

    Let’s dig in with an example.

    An Example

    NOTE: This example is built using Ruby and the RSpec testing framework. To play along, you’ll need Applitools Eyes and Sauce Labs accounts. They both have free trial accounts which you can sign up for here and here (no credit card required).

    Let’s test the responsive layout for the login of a website (e.g., the one found on the-internet).

    In RSpec, a test file is referred to as a “spec” and ends _spec.rb. So our test file will be login_spec.rb. We’ll start it by requiring our requisite libraries (e.g., selenium-webdriver to drive the browser and eyes_selenium to connect to Applitools Eyes) and specifying some initial configuration values with sensible defaults.

    # filename: login_spec.rb
    require 'selenium-webdriver'
    require 'eyes_selenium'
    ENV['browser']          ||= 'internet_explorer'
    ENV['browser_version']  ||= '9'
    ENV['platform']         ||= 'Windows 7'
    ENV['viewport_width']   ||= '1000'
    ENV['viewport_height']  ||= '600'
    # ...

    By using Ruby’s ||= operator we’re able to specify default values for these environment variables. These default values will be used if we don’t specify a value at run time (more on that later).
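    To see what ||= is doing in isolation, here's a small standalone sketch of the same default-then-override pattern (the values are just for the demo):

    ```ruby
    # ||= assigns only when the variable is nil or false, so an environment
    # variable keeps any value it already has and falls back to a default
    # only when unset.
    ENV.delete('browser')                   # start clean for the demo

    ENV['browser'] ||= 'internet_explorer'  # unset, so the default applies
    ENV['browser']   = 'chrome'             # a caller overrides at run time
    ENV['browser'] ||= 'internet_explorer'  # already set, default is ignored

    puts ENV['browser']  # chrome
    ```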

    Next we need to configure our test setup so we can get a browser instance from Sauce Labs and connect it to Applitools Eyes.

    # filename: login_spec.rb
    # ...
    describe 'Login' do
      before(:each) do |example|
        caps                      = Selenium::WebDriver::Remote::Capabilities.send(ENV['browser'])
        caps.version              = ENV['browser_version']
        caps.platform             = ENV['platform']
        caps[:name]               = example.metadata[:full_description]
        @browser                  = Selenium::WebDriver.for(
          :remote,
          url: "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}@ondemand.saucelabs.com:80/wd/hub",
          desired_capabilities: caps)
        @eyes                     = Applitools::Eyes.new
        @eyes.api_key             = ENV['APPLITOOLS_API_KEY']
        @driver                   = @eyes.open(
          app_name:       'the-internet',
          test_name:      example.metadata[:full_description],
          viewport_size:  { width: ENV['viewport_width'].to_i,
                            height: ENV['viewport_height'].to_i },
          driver:         @browser)
      end
    # ...

    In RSpec you specify a test suite with the word describe followed by the name as a string and the word do at the end (e.g., describe 'Login' do).

    We want our test setup to run before each test. To do that in RSpec we use before(:each) do. And to gain access to test details (e.g., the test name) we append a variable name in pipes to the incantation (e.g., before(:each) do |example|).

    To control the browser and operating system we use a Selenium Remote Capabilities object (e.g., Selenium::WebDriver::Remote::Capabilities.send(ENV['browser'])). With it we’re also able to specify the name of the test so it shows up correctly in the Sauce Labs job. We then connect to Sauce Labs by using Selenium Remote (specifying our credentials in the URL), passing our capabilities object to them (e.g., desired_capabilities: caps), and storing the browser instance they provide in an instance variable (e.g., @browser).

    Then we open a connection with Applitools Eyes by creating an instance of the Applitools Eyes object (e.g., @eyes = Applitools::Eyes.new), specifying the API key, and calling @eyes.open (providing the application name, test name, viewport size, and the browser instance from Sauce Labs). This returns a Selenium object that is connected to both the browser instance in Sauce Labs and Applitools Eyes. We store this in another instance variable (e.g., @driver) which we'll use to drive the browser in our test.

    After each test runs we'll want to close the Applitools Eyes session and destroy the browser instance in Sauce Labs. To do that in RSpec, we'll place the necessary commands in an after(:each) do block.

    # filename: login_spec.rb
    # ...
      after(:each) do
        @eyes.close
        @browser.quit
      end
    # ...

    Now we’re ready to write our test. In it we will have access to two instance variables. One for the Selenium browser instance in Sauce Labs (e.g., @driver) and another for the job in Applitools Eyes (e.g., @eyes).

    # filename: login_spec.rb
    # ...
      it 'succeeded' do
        @driver.get 'http://the-internet.herokuapp.com/login'
        @eyes.check_window('Login Page')
        @driver.find_element(id: 'username').send_keys('tomsmith')
        @driver.find_element(id: 'password').send_keys('SuperSecretPassword!')
        @driver.find_element(id: 'login').submit
        @eyes.check_window('Logged In')
      end
    end

    Tests in RSpec are specified with the word it, a string name for the test, and the word do (e.g., it 'succeeded' do).

    Our test is simple. It visits the login page and completes the login form with two visual verifications being performed — one after the page loads and another after completing the login.

    If we save this file and run it (e.g., rspec login_spec.rb from the command-line) it will work in a single screen resolution (e.g., 1000×600). Now let's make it so we can specify multiple screen resolutions and have it run the same test on all of them. To do that we'll need a little help from a library called Rake.

    Packaging Things Up

    With Rake we can create a file (e.g., Rakefile) and store tasks in it (using Ruby syntax) that we can call from the command line.

    Let’s create a task that will handle executing our test for each screen resolution we want in parallel.

    # filename: Rakefile
    desc 'Run tests against each screen resolution we care about'
    task :run do
      RESOLUTIONS = [ { width: '1000', height: '600' },
                      { width:  '414', height: '699' },
                      { width:  '320', height: '568' } ]
      threads = []
      RESOLUTIONS.each do |resolution|
        threads << Thread.new do
          # Pass the resolution to the spawned rspec process as an environment
          # hash, so concurrent threads don't race on the process-global ENV
          system({ 'viewport_width'  => resolution[:width],
                   'viewport_height' => resolution[:height] },
                 'rspec login_spec.rb')
        end
      end
      threads.each { |thread| thread.join }
    end

    In Rake you can provide a descriptor for a task with the keyword desc followed by the description text in a string (e.g., desc 'Run tests...'). Tasks are specified by the task keyword followed by the name of the task (specified as a symbol) and ending with the word do (e.g., task :run do).

    We start our :run task off by specifying the screen resolutions we want in key/value pairs (a.k.a. a hash) inside of an array (a.k.a. a collection). This enables us to easily iterate through the collection (e.g., RESOLUTIONS.each do |resolution|) and grab out the width and height values for each resolution. We hand those values to each spawned rspec process as environment variables, which are the same viewport_width and viewport_height variables our test code reads. So when we run our test (e.g., rspec login_spec.rb) it will be using the correct width and height values.

    NOTE: The resolutions used here will trigger different screen layouts (e.g., desktop, smart phones, etc.). For a true test of your app, be sure to look at your usage analytics to see what screen resolutions your users are using.

    For each iteration of our screen resolution we're creating a new thread, which will make each test run at the same time. So when we run this task, our single Selenium test will get executed three times (once for each resolution specified), and each run will use a different screen resolution.
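    Stripped of the Selenium specifics, the fan-out/join pattern the task relies on looks like this (the resolutions and result strings are placeholders):

    ```ruby
    # One thread per work item, then join them all so the task doesn't
    # finish until every run completes. Queue is thread-safe, so the
    # threads can report results without extra locking.
    results = Queue.new
    threads = %w[1000x600 414x699 320x568].map do |resolution|
      Thread.new { results << "ran suite at #{resolution}" }
    end
    threads.each(&:join)

    puts results.size  # 3
    ```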

    After saving this file we can do a quick sanity check to make sure rake runs and the task is listed by issuing rake -T from the command-line.

    > rake -T
    rake run # Run tests against each screen resolution we care about

    To run this task it’s as simple as rake run from the command line. And to specify a different browser, browser version, or platform you just need to prepend the command with different values.

    browser=internet_explorer browser_version=8 platform="Windows XP" rake run
    browser=firefox browser_version=37 rake run
    browser=safari browser_version=8 platform="OS X 10.10" rake run
    browser=chrome browser_version=40 platform="OS X 10.8" rake run

    See the Sauce Labs platform documentation for a full list of available browser/OS combinations.

    Expected Behavior

    If we run this (e.g., rake run from the command-line) here is what will happen:

    • The test will run numerous times (in parallel) — once for each resolution specified
    • Each test will retrieve a browser instance from Sauce Labs and connect it to Applitools Eyes with the correct screen resolution
    • The test will run and perform its visual checks
    • The browser instance on Sauce Labs and connection to Applitools Eyes will close
    • The results for the job will be displayed in the terminal output

    When the rake task is complete, you can view the visual checks for each resolution in your Applitools Eyes dashboard. Each resolution will have its own job. In each job you can either accept or decline the result. Accepting will set it as the baseline for subsequent test runs. If you do nothing, then the result will automatically be used as the baseline. You can also see each of the test runs in full detail (e.g., video, screenshots, Selenium log, etc.) in your Sauce Labs job dashboard.

    On each subsequent run, if there is a visual anomaly for any of the given resolutions specified then the test will fail for that resolution — and you’ll be able to easily identify it.


    Hopefully this tip has helped you add automated responsive layout testing to your suite, enabling you to catch visual layout bugs early on in your development workflow.

    For reference, you can see the full code example here.

    Happy Testing!

    About Dave Haeffner: Dave is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

    Categories: Companies

    .NET Diversity

    NCover - Code Coverage for .NET Developers - Mon, 06/08/2015 - 12:42

    Part of what makes our .NET community of developers awesome is the diversity and flexibility that language offers. Today we celebrate two .NET developers who have different backgrounds but are linked through that diversity by a shared core understanding of .NET and giving back to our community. Check out these two .NET developers and how they make a difference:

    Jamil Haddadin

    A consultant, technical leader & trainer, Jamil Haddadin is a SharePoint Principal Consultant. With 10 years of experience in Microsoft products, he has been recognized as a SharePoint MVP since 2014, a .NET and SharePoint MCT since 2010 and a SharePoint MCPD.

    Jamil is a technology enthusiast who enjoys coding, writing and teaching, working with SharePoint development and administration, and delivering high-quality and long-lasting solutions for small, medium and large-scale enterprises. Jamil enjoys contributing to the community through blogging, authoring articles on thought-leadership sites and public speaking. He is one of the co-founders of the UAE SharePoint User Group and the Jordan SharePoint User Group. Connect with Jamil on Twitter @jamilhaddadin and at his blog.

    Javier Holguera

    Javier Holguera is a passionate software engineer, driven by quality and well-delivered software solutions. Over the last 10 years, he has engaged with every phase of the software development process: from gathering and analysis, design, coding and testing, to delivery and maintenance. Currently, Javier works as a full-stack developer with MarketInvoice, a UK-based alternative finance provider.

    Javier’s contributions to the Spanish technical community have led to his recognition as a Microsoft MVP. He is also recognized as a Professional Scrum Developer. His specialties include C#, ASP.NET, agile, SOA, TFS, WCF, MVC, and DDD. Follow Javier on twitter @javierholguera or at his website.

    The post .NET Diversity appeared first on NCover.

    Categories: Companies

    Shortest Proof of Elegance

    Rico Mariani's Performance Tidbits - Mon, 06/08/2015 - 00:48

    About two months ago I had an extraordinary opportunity to talk to some Great People in the context of creating a computer science program at Reed College.  These days being in a roomful of people in which I am the least experienced, or nearly least, is not a thing that happens to me so very much.

    Imagine being in a room full of people, each with so many interesting things to say that you feel the whole time there are just not enough moments to adequately allow everyone to express what they are thinking. And rather than wanting to talk, your overwhelming inclination is to yield, because it is crucial that the person over there be given a chance to speak; that's how much you want to know what they think.

    That’s what my experience was like.  It was simultaneously gripping and frustrating because it was hard to truly finish a thought and yet there was so much more to learn by listening.

    There was one thing in particular that I tried to explain and I feel like I didn’t do nearly as good a job as I would have liked and so I’ve stewed on it somewhat and wrote these few words.  It’s about what makes some computer programs elegant.  And why elegance is, in my view anyway, a much more practical thing than you might think.

    For me elegance, simplicity, and correctness are inextricably entwingled.  A notion I’ve been recently introduced to is that the best code is the code with the “shortest proof of correctness”, but what does that even mean?

    Well, for one thing it means that code in which you can apply local reasoning to demonstrate correctness is superior to code in which you must appeal to many broader factors about how the code is combined in some larger context to demonstrate the same.

    But already I’m ahead of myself, what do we mean by correctness?  In an academic context one might appeal to some formal statement of given conditions and expected results, but the thing is that process is so very artificial.  In some sense creating those statements is actually the hard part of a professional programmer’s work.

    So, in a very real way, putting aside any academic aspirations, or great theories, just as a practiced coder trying to do his/her job, the first thing you must do if you want to be successful is to describe what it is you intend to do in some reasonable way, and why that is right.  It doesn’t have to be fancy, but it’s essential.  In Test Driven Development we say “write a test that fails” which is another way of saying “describe some correct behavior that isn’t yet implemented and do so with code in the form of a unit test”, but the essential idea is to describe your intent clearly.  It's essential because the best you can ever hope to do as far as correctness goes is to demonstrate that the code is working as you intended.

    It’s funny that I sort of had to re-learn this over the years.  When I first started coding I didn’t even own a computer so I would make my plans in a notebook.  In my sometimes not-so-little books I would describe what I wanted to do, and even code up things in writing and cross them out and so forth.  I had to do this because there was no computer to tempt me to just “bang out the code” and fix it; thinking about the problem was a necessary pre-step.  When my friends asked me how I went about writing programs I always said, “Well, first you make a plan”.

    Intent can and should encompass a variety of factors depending on the situation.  Not just a statement of inputs and outputs but perhaps CPU consumption, memory consumption, responsiveness and other essential characteristics of an excellent solution.   And so correctness likewise encompasses all these things.

    When I talk about a “short proof of correctness” I usually mean only in the sense that by looking at the intent and the code that you can readily see that it is doing exactly what was intended without long and complicated chains of reasoning.  And hopefully with limited or no appeal to things that are going on elsewhere in the overall system.  The bad proofs read something like “well because that number way over there in this other module can never be bigger than this other number over in this other module then this bad-looking chain of events can’t actually happen and so we don’t ever have a problem.”

    Of course, the trouble with relying on those sorts of long proofs is that changes in galaxies far far away can (and do) break anything and everything, everywhere.

    So far we’re only talking about reasoning about the code, but in some sense that’s not enough for an engineer.  To make things better in practical ways you want to have tests that actually try the various conditions and force the code to do the right thing -- or else.  This affirmative verification helps to make sure that nothing is inadvertently broken by those that follow and provides evidence that the reasoning was correct in the first instance.

    Likewise in good code we sprinkle other verifications, like “asserts” that will cause the code to immediately fail should underlying assumptions be violated.  This is an important part of the process because occurrence of these unexpected situations mean our proofs are no longer valid and some change is necessary to restore a working state.
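    As a small illustration of the kind of inline verification described here (the function and its precondition are invented for the example), the idea is to fail immediately at the point where an assumption breaks, rather than let it propagate into a quiet wrong answer:

    ```ruby
    # Guard the assumption the rest of the code depends on; if it is ever
    # violated, fail loudly right here.
    def average(values)
      raise ArgumentError, 'average of an empty list is undefined' if values.empty?
      values.sum / values.length.to_f
    end

    puts average([2, 4, 6])  # 4.0
    ```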

    All of these considerations are of profound practical value, and it’s a rare situation in which the practical desires to be thorough also result in the most elegant designs, which are necessarily the most minimal and the easiest to understand but do the job.  It's even more amazing that these methods tend to get the best overall productivity for an organization, at least in my experience.

    Good results begin by understanding your intent, and describing what a great solution looks like.  When you can show, in far fewer words than this blog, that your code does just what you intended, and you have tests to prove it, and safeguards to ensure your proof does not become accidentally obsolete, you will have, in my mind, produced a solution that is indeed elegant.

    And we all know beautiful code when we see it.

    Categories: Blogs

    On Humility and Being Humbel

    Hiccupps - James Thomas - Sun, 06/07/2015 - 21:37
    From plitter to drabbletail: the words we love is a list of lexical lostlings, of forgotten or underused words such as clarty, slipe, eschew and splunder that have special appeal to a selection of leading authors. In his piece, Robert MacFarlane offers the term apophenia, attributed to Klaus Conrad, and defined as:

        the unmotivated perception of connections between entities or data ... abnormal meaningfulness.

    MacFarlane counters apophenic tendencies by approaching his work in a way he describes using another uncommon locution, humbel, from James Stout Angus, the Shetland poet:

        to reduce protruberant parts ... as the beard of corn is knocked off by ... thrashing with a flail.

    I've long thought that it's a useful heuristic for testers to be humble (1, 2) and to that I can now add that we should also be humbel.
    Categories: Blogs

    The piano analogy: some practice required

    Thought Nursery - Jeffrey Fredrick - Sun, 06/07/2015 - 19:10

    This year I’m training people in the theories of Chris Argyris, helping them to apply the concepts, and this raised some fun challenges. The challenge on my mind today is how to convince people that practice will be required before they can perform well? My current analogy is the piano.

    After a quick search I can show you a three-minute video of a 14-year-old explaining how a grand piano works. If you’ve got an extra minute I could share a four-minute animation that illustrates the mechanism in detail. You probably already know that in a piano the strings vibrate and that produces the sounds you hear. It would take moments to strike each key and allow you to hear each note. Having invested less than thirty minutes you could understand a piano and how it works. You can’t play it, but you know you can’t play it. You were unlikely to have mistaken understanding the concepts for being able to produce the result.

    Action Science seems different.

    I’ve introduced dozens of people to the topic through Roger Schwarz’s excellent Eight Behaviours for Smarter Teams, a sort of Shu-level guide to producing Mutual Learning behaviour. The response is typically positive, enthusiastic even, with general agreement that they should start behaving in a mutual learning way. However they also believe that now they understand mutual learning behaviour they can also produce mutual learning behaviour. They mistake understanding the concepts with being able to produce the result. Worse, their own incompetence makes them blind to their lack of skill.

    So this is where the piano analogy comes into play. Everyone acknowledges the gap between understanding and performance. I use the piano analogy to set the expectation that practice will be required. Then we begin using the two-column case study to start retraining their ear, allowing them to begin hearing the difference between Model 1 and Model 2 behaviour for the first time. And when someone is discouraged by their performance, the analogy is there again to help them have realistic expectations: “How long have you been practicing the mutual learning approach? How long do you think it should take to retrain from a lifetime of habit and cultural norms?”

    Do you have a technique you use to help set expectations for skill acquisition and maintaining motivation? If so I’d love to hear about it in the comments or on Twitter.

    Categories: Blogs
