
Feed aggregator

Luncheon Meet

Hiccupps - James Thomas - Fri, 04/08/2016 - 07:25

As manager of a test team I try to get everyone on the team presenting regularly, to the team itself at least. A significant part of the tester role is communication and practising this in a safe environment is useful. There's double benefit, I like to think, because the information presented is generally relevant to the rest of the team. We most often do this by taking turns to present on product features we're working on.

I encourage new members of the team to find something to talk about reasonably early in their time with us too. This has some additional motivations, for example to help them to begin to feel at ease in front of the rest of us, and to give the rest of us some knowledge of, familiarity with and empathy for them.

I invite (and encourage the team to invite) members of staff from other teams to come to talk to us in our weekly team meeting as well. Again, there are different motivations at play here but most often it is simple data transfer. I used to do this more, and with a side motivation of building links across teams and exposing us to new and perhaps unexpected ideas, or generating background knowledge. But at one of our retrospectives it became clear that some of the testers felt that some of the presentations were not relevant enough to them and they'd rather get on with work.

It pays to listen to your team.

So, along with Harnessed Tester, I set up Team Eating which is a cross-company brown bag lunch. And it's just reached its first anniversary! And, yes, I know its name is a terrible pun. (But I love terrible puns.)

Here's a list of the topics we've had in the first year:

We've had three guest speakers (Chris George, Neil Younger and Gita Malinovska) and, as you can see, there's been a bit of a bias towards testing topics, although that reflects the interests of the speakers more than anything else. There are no constraints on the format (beyond practical ones), so we've had live demos, more traditional talks and, this week, an interactive storytelling workshop.

The response from the company has been good and we've had attendees from all teams and presenters from most. The more popular talks were probably those by Roger, our new (and first) UX specialist. He's done two: first to introduce the company to some ideas about what UX is and then later on how he was beginning to apply his expertise to a flagship project.

I've been really pleased with the atmosphere. There's a positive vibe from people who want to be there listening to their colleagues who have something that they want to share.

One surprise to me has been a reluctance from some of the audience to have their lunch in the meetings. Some people, I now find, consider it impolite to be eating while the presenter is talking. Given the feedback from my team which prompted us to start Team Eating, I was keen that it shouldn't take time away from participants' work and so fitting it into a lunch break seemed ideal.

But although eating is in the name, the team part is much the more important to me and I feel like it's serving the kind of purpose that I wanted in that respect. Quite apart from anything else, I'm personally really enjoying them and so here's to the next 12 months of rapport-building, information-sharing, fun-having Team Eating.

Categories: Blogs

Building-in performance with continuous delivery and continuous testing

HP LoadRunner and Performance Center Blog - Thu, 04/07/2016 - 23:19


Are you fascinated by the result you have delivered by utilizing DevOps principles, but are still trying to figure out how to build-in performance? Are you looking to have performance automated throughout your continuous delivery and continuous testing lifecycle? Keep reading to learn how to accomplish this.

Categories: Companies

How Every Tester can do Performance Testing

Yet another bloody blog - Mark Crowther - Thu, 04/07/2016 - 22:26
Performance testing is often passed onto a 3rd party provider of testing services in its entirety. That is usually because the test team don’t feel they have the ability, experience or perhaps the tools to carry out the testing.
Yet, just like Security testing, we can break Performance testing down into a set of discrete test types under the overall label of Performance. In doing this we give the test team more opportunity to do a level of Performance testing that draws on their understanding of the system or application under test.

Let's take the example of Performance testing a website, as it's easy to get access to those and practise the techniques described. Most Performance testing is either benchmark, because the site is new, or comparative, because some changes have been made and we want to ensure the site is as performant as before. However, that covers performance from the user-facing perspective. To get a complete picture we need to do Performance testing of the infrastructure too. This testing would include both the underlying infrastructure and connected network devices, plus the site exposed to users and the actions they perform.

In summary then, we could break down Performance testing into the following types:

Comparative Performance
• Response Time
• Throughput
• Resource Utilisation

Full System Performance
• Load
• Stress
• Soak

For the purposes of this post, I'm going to ignore the Full System Performance types and suggest that in this scenario we need to get a 3rd party in to help us out. The comparative Performance testing of the website, however, is perfectly doable by the test team. Let's see what and how.
Response Time Comparison

The user's perception of the time it takes the service to respond to a request they make, such as loading a web page or responding to a search, is the basis for Response Time comparison testing.

Measuring Response Time
Response time should be measured from the start of an action a user performs to when the results of that action are perceived to have completed, for some singular task. The measurement must be taken from when a state change is triggered by the start of an action, such as clicking a link to navigate from one page to another, submitting a search string or confirming a filter they have just configured on data already returned. For services with a web front end, use the F12 developer tools in IE (for example) to monitor timings from request to completion.

1. Open IE, hit F12 and select 'Network', then click on the green > to record
2. Enter the target URL and capture the network information
3. Click on 'Details' and record the total time taken

Test Evidence

A timing in seconds should be taken and recorded as the result in the test case. Multiple time recordings are advisable to ensure there were no lulls or spikes in performance that skew the average result.

---
Throughput Comparison

This measure is the time it takes to perform a number of concurrent transactions. This could be performing a database search across multiple tables or generating a series of reports.

Measuring Throughput
Measuring Throughput from the user's perspective is very similar to measuring Response Time, but in this case Throughput is concerned with measuring the time taken to perform several tasks at once. As with Response Time, the measurement should be taken from the start of an action to its perceived end. A suitable action for Throughput might include the generation of weekly/monthly/yearly reports where data is drawn from multiple tables or calculations are performed on the data before a set of reports is produced. Monitor system responses in the same way as for Response Time comparison above, but also include checks of dates and timings on artefacts or data produced as part of the test. In this way the user-facing timings plus the system-level timings can be analysed and a full end-to-end timing derived.
Test Evidence

Careful recording of the time taken to complete the task is needed, as with throughput tests it may not always be obvious when a task has completed. For example, if outputting a series of files, check the created date and time for the first and last files to ensure the total duration is known. Record the results in the relevant test cases, ideally over several runs as suggested for Response Time.

---
Resource Utilisation

When the service is under a certain workload, system resources will be used, e.g. processor, memory, disk and network I/O. It's essential to assess what the expected level of usage is to ensure no unacceptable degradation in performance.
Measuring Resource Utilisation

Unlike Response Time and Throughput comparisons, Resource Utilisation measurement can only be done with tools on the test system that can capture the usage of system resources as tests take place. As testing will not generally need to prove the ability of the service to use resources directly, it's expected this testing will be combined with the execution of other test types, such as Response Time and Throughput, to assess the use of resources when running agreed tests. Given this, the testing would ideally be done at the same time as Response Time and Throughput. One way to monitor resource usage is with the Performance Monitoring tools in the Windows OS. To be able to return later to the configuration of monitors we set up, it's best to use the Microsoft Management Console. Here's how:
1. Open the Start/Windows search field and enter MMC to open the Microsoft Management Console
2. In MMC, add the Performance Monitor snap-in via File > Add/Remove Snap-in...

3. Load up the template .msc file that includes the suggested monitors by going to File > Open and adding the .msc file
To do this, first save a copy of the file from GitHub
4. The monitoring of system resources will start straight away.
5. To change the time scale that's being recorded, right-click on 'Performance Monitor', select 'Properties' and change the duration to slightly beyond the length of the test you're running.
Test Evidence

Where possible, extracted logs and screenshots should be kept and added to the test case as test evidence. Some analysis of the results will need to be done and, as with the other comparative test types, several runs are suggested.
So there we go, it’s easy to do simple performance checks that can then inform the full system performance testing or stand on their own if that’s all you need. Mark.
Categories: Blogs

Keep your users happy and your apps running smoothly with mobile performance testing

HP LoadRunner and Performance Center Blog - Thu, 04/07/2016 - 21:03


Customer expectations for their mobile applications are on the rise and meeting them is a demanding task. Keep reading to find out more about Hewlett Packard Enterprise’s comprehensive performance engineering solution for mobile applications to meet your customer demands.

Categories: Companies

TestRail Highlight: Test Management Project History and Test Case Versioning

Gurock Software Blog - Thu, 04/07/2016 - 16:36


In our new TestRail Highlight blog series we will take an in-depth look at select features of our test management software TestRail. As software projects evolve, every testing team needs to think about how they will handle changing requirements and test cases for their projects and releases over time. Today we will take a look at TestRail's unique features that help teams archive test runs, track the history of test case changes and easily manage baselines for different project release branches.

Not every team needs to use the full capabilities of these features. One of our main design goals for TestRail is to make the application as easy to use as possible, while at the same time providing all advanced features large teams need to grow with TestRail. For example, basically all teams using TestRail will benefit from TestRail’s test case history functionality or the ability to archive testing efforts. But not all (or even most!) teams need to use TestRail’s advanced baseline project mode. Please see below for an overview of TestRail’s rich project and test case versioning support!

Try TestRail Now

Get started with TestRail in minutes
and try TestRail free for 30 days!

Archiving and Closing Test Runs/Plans

In addition to managing and organizing all current testing efforts, many teams adopt a test management tool so they can easily store, track and review past test results. TestRail makes it very easy to start many test runs against your test cases over time (e.g. for different iterations and project releases), and even start test runs in parallel (e.g. to test different platforms and configurations at the same time). This makes it easy to reuse the same case library without duplicating or synchronizing any test cases.

But if you are changing and improving your test cases for new project versions, wouldn’t old test runs show the wrong test case details? Isn’t it critical that old test runs show the exact test case details they were tested against? Enter TestRail’s Close Run/Plan feature. TestRail provides a unique feature to easily archive and close test runs and test plans. When you close a test run or plan, TestRail actually archives and copies all related test case details behind the scenes. So if you move, update or even delete test cases in the future, closed test runs wouldn’t be affected by this and would still always show the exact test case details your team tested against.
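A toy sketch of the archive-on-close idea (an illustration only, not TestRail's actual implementation): closing a run deep-copies the case details it references, so later edits to the case library leave the closed run untouched. All names here are hypothetical.

```javascript
// Toy illustration of archive-on-close: when a run is closed, it keeps its own
// deep copy of the case details, so later edits to the library don't change it.
const caseLibrary = {
  C1: { title: 'Login works', steps: 'Open app, sign in' },
};

function closeRun(run) {
  run.closed = true;
  // Snapshot the referenced cases exactly as they are at close time.
  run.archivedCases = JSON.parse(JSON.stringify(
    Object.fromEntries(run.caseIds.map(id => [id, caseLibrary[id]]))));
}

const run = { caseIds: ['C1'] };
closeRun(run);

// Editing the library afterwards does not touch the closed run's snapshot.
caseLibrary.C1.title = 'Login works (revised)';
console.log(run.archivedCases.C1.title); // still the original title
```

The deep copy is the key step: a shallow reference to the library would silently pick up later edits, which is exactly what a closed run must not do.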

TestRail also prevents testers from making any additional changes to a closed test run and its results, so you can always review your previous test results and be sure that the archived runs weren’t modified. While at first a seemingly simple feature, TestRail’s Close Run/Plan option requires a lot of work behind the scenes to accomplish our goal of making closed test runs immutable. This functionality is critical for basically any testing team and surprisingly few test management tools actually offer similar features that come close to TestRail’s implementation.

Close test runs & plans for accurate test case and result archives

Full Test Case Change History

TestRail makes it easy to update your test cases at any time, so you can directly improve and change your test cases during testing or when you review your case library. This is especially helpful when your application and development are under constant change and you need to adapt to changing requirements quickly. Test case changes are also automatically reflected in all active test runs, so teams that use exploratory testing and other agile testing techniques especially benefit from updating and improving test cases live during testing.

With all the changes teams make to test cases over time, it’s often helpful to know what changes were made, who made those changes and when a test case was updated. TestRail keeps a detailed log of all changes that were made to a test case and you can easily review this log. When you open a test case in TestRail, simply switch to the History tab from the sidebar. Not only will you be able to see when a test case was updated and who made the changes, but you will also see a detailed diff of all changed test case attributes. This also makes it easy to revert any changes by copying previous test case details to the latest version.

Full test case history to track test changes by person, date and content

Project Suite Modes and Baseline Support

When we originally introduced TestRail 4.0 about 18 months ago, we added new suite modes to manage your test cases. In earlier TestRail versions we enabled test suites by default to organize your test cases in top level test suites and sections. As we invested a lot of resources over the years to make test suites much more scalable and as we introduced new view modes, we decided to default to a single test case repository per project. Teams can still enable and use test suites, but for most projects using a single case repository per project and organizing test cases via sections and sub sections works much better.

But this is not the only option we introduced at that time. Lesser known to many TestRail users, we also added new baseline support to projects. Baselines allow you to create multiple branches and test case copies within a project. Internally a baseline is similar to a test suite, and it includes all sections and test cases of the original baseline or master branch it was copied from. You can switch a project to baseline mode under Administration > Projects > edit a project.

Should your team use and enable baseline support for your projects? Probably not! Teams should only use baselines in a specific situation: if you have to maintain multiple major project versions for a long time in parallel. That is, your team needs to maintain and release multiple main branches (e.g. 1.x and 2.x) in parallel for many months or years, and the test case details will be quite different for each baseline over time, as each version will need different test cases. We designed baseline support specifically for this scenario and it’s a great way to manage multiple branches and versions of a project’s test case repository for multiple parallel development efforts.

Using baselines to maintain multiple parallel test case repository branches

The above mentioned features make it easy to manage, track and review different test case versions and changes over time. Not every team will need all of the above features though. Especially the baseline mode should be limited to projects where this is really required. But TestRail offers advanced versioning and history tracking features for all kinds of scenarios and configurations.

In addition to the mentioned tools to manage your test case versions, TestRail also comes with various helpful options to track your releases, iterations and sprints via milestones and test plans. Make sure to try TestRail free for 30 days to improve your testing efforts if you aren’t using it yet, and check out TestRail’s great versioning support.

Categories: Companies

SmartBear Launches TestLeft

Software Testing Magazine - Thu, 04/07/2016 - 09:12
SmartBear Software has announced a new developer-focused test automation tool, TestLeft. The tool enables developers working in an Agile and continuous delivery environment to create robust tests within IDEs, which helps drastically reduce test creation and maintenance time. TestLeft is a powerful yet lean functional testing tool for developers who are testing in Agile teams. It fully embeds into standard development IDEs such as Visual Studio, which helps minimize context switching and allows developers to create test cases in their favorite IDEs. Additionally, TestLeft comes with visual tools that help developers quickly and easily identify the correct object properties for the application under test. Using TestLeft, developers can generate test code simply by dragging and dropping identifiers over objects on the screen. Furthermore, access to built-in methods and classes is available for code completion and faster scripting. Tests created with TestLeft can even be brought into SmartBear's TestComplete for consumption by testers. For developers using Visual Studio, TestLeft's ability to work within their IDE, with built-in access to an object and method library for faster test creation, is particularly helpful.
Categories: Communities

Automating test runs on hardware with Pipeline as Code

In addition to Jenkins development, over the last 8 years I've been involved in continuous integration for hardware and embedded projects. At JUC2015/London I conducted a talk about common automation challenges in the area. In this blog post I would like to concentrate on Pipeline (formerly known as Workflow), which is a new ecosystem in Jenkins that allows implementing jobs in a domain-specific language. It is in the suggested plugins list in the upcoming Jenkins 2.0 release. The first time I tried Pipeline, two and a half years ago, it unfortunately did not work for my use-cases at all. I was very disappointed but tried it...
Categories: Open Source

Jenkins Community Survey Results

This is a guest post by Brian Dawson at CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins. Last fall CloudBees asked attendees at the Jenkins User Conference – US West (JUC), and others in the Jenkins community, to take a survey. Almost 250 people did – and thanks to their input, we have results which provide interesting insights into how Jenkins is being used. Back in 2012, at the time of the last community survey, 83% of respondents felt that Jenkins was mission-critical. By 2015, the...
Categories: Open Source

Jenkins 2.0 Release Candidate available!

Those who fervently watch the jenkinsci-dev@ list, like I do, may have caught Daniel Beck's email today which quietly referenced a significant milestone on the road to 2.0: the first 2.0 release candidate is here! The release candidate process, in short, is the final stabilization and testing period before the final release of Jenkins 2.0. If you have the cycles to help test, please download the release candidate and give us your feedback as soon as possible! The release candidate process also means that changes targeting releases after 2.0 can start landing in the master branch, laying the groundwork for 2.1 and beyond. I pushed the merge to master. So...
Categories: Open Source

Selenium Webdriver with C# - Cheat Sheet

Yet another bloody blog - Mark Crowther - Wed, 04/06/2016 - 21:45
Hey All,
I've been on a client site where we're using Visual Studio, Selenium WebDriver and C# for not only web front end but more system level automation testing.
As part of getting tooled up and informed as to how our favourite tools work with C# the team and I put together a Cheat Sheet to get us all started quickly. I thought I'd share that with you in a brief post.
Be warned, completing the below takes maybe an hour to get set up and then about 2 to 3 days full on to go through the material. If you’re working, with family, etc. expect a week with great focus.
One of the biggest challenges with adopting a new technology set is simply getting started. How often do we wish for a guiding hand to get us through the first baby steps and off building tests? Well, if you're a Test Architect like me, pretty much all the time! I hope the below helps.
1. Install Selenium IDE on Firefox

No, really. As I've said before, the IDE is great for doing web page node discovery and grabbing those names, IDs, CSS classes, etc. quickly and easily. This allows you to do a rough proof-of-concept script to prove the automation flow and then export the Selenese commands, as C# in this case. You'll then strip out of the code the elements you want and discard the rest. The alternative is to right-click, Inspect Element and read the code. Just use the IDE.
Get it from the Firefox add-ons site or here:

2. Get Visual Studio

In order to structure and build out your C# code you'll want to grab a copy of Visual Studio. There are many flavours and if your company is a Microsoft house, go get IT or whomever to provide you a copy. Failing that, or if you're suffering budget restrictions, you can grab a free version.
The best I've found is Visual Studio Community Edition. Once installed you'll need to sign-in with a Microsoft email, part of the universal account / ID approach they now use.
Get Community Edition from here: 

3. Learn C# Basics

If you're new to C# then you'll need to learn a little. There's a great resource over on the Microsoft Virtual Academy which you can take for free:
I've been told the link can sometimes say the course has expired. If you see that, just hit YouTube: 

4. Practice Selenium C#

If you want to jump straight in, rather than mastering C# before building out a framework, then the site you want is this one:
Or, possibly better still, watch the Learning Selenium Testing channel on YouTube:

5. Practice, Practice, Practice

Once you're set up and running with your first basic tests, be sure to practice, practice and practice some more. Here are some great sites to practice against:

If you need a book then get the only book out there that has pretty much all the answers you need in one place: Selenium Recipes by Zhimin Zhan
 Get the book

Good luck!

Categories: Blogs

Get Started with Protractor Testing for AngularJS

Sauce Labs - Wed, 04/06/2016 - 16:00

How do you test your AngularJS applications? With Protractor. Protractor is an end-to-end testing framework for AngularJS applications. This getting started guide is for new software developers in test who are interested in learning about Protractor testing. By following this getting started guide, you’ll understand how to build a firm foundation and learn fundamental Protractor testing for AngularJS applications.

Build a solid foundation

To build better software tests, you'll need a solid grasp of the technologies behind your application; this teaches you the principles that will be essential when coding, executing and debugging your tests. Let's focus on the correct learning path for Protractor testing. This is the age of JavaScript (JS) applications and frameworks. JavaScript is the foundation and critical building block needed to be successful with Protractor testing, and what follows is your roadmap to learn JavaScript, NodeJS, and AngularJS for Protractor testing.

JavaScript is a very significant piece of the front-end development stack. How much JavaScript knowledge do you need before jumping head-first into frameworks like NodeJS, AngularJS, and Protractor? All of them are based on JavaScript, so you need a solid grasp of pure JavaScript before jumping to any of these frameworks or libraries. Just note, during the journey of learning AngularJS and NodeJS, that they are two different things: AngularJS is for front-end development, and NodeJS is for server-side. I suggest you start learning core JavaScript before jumping into the other frameworks and libraries.

Dedicate the time to learn and master JavaScript functions, events, error handling and debugging before starting the Protractor testing journey.

NodeJS is for server-side, and I only suggest taking a quick crash course.

AngularJS has become the go-to JavaScript framework for front-end development at enterprises and large companies. The baked-in directives are the most important and complex component in AngularJS and Protractor testing, which raises the importance of building directive strategies for testing, with collaboration between Developers and Automation Engineers.

Protractor supports AngularJS directive strategies, which allows you to test AngularJS applications without much effort. Protractor is a Node program, which is a wrapper around WebDriverJS. I recommend skimming through the WebDriverJS Users Guide, Protractor API and Protractor Style Guide before writing any tests. Protractor uses Jasmine or Mocha for its test syntax.

Jasmine and Mocha are very similar, behavior-driven development frameworks for testing JavaScript code.

What is the point of learning all of these languages and frameworks? At the end of the day, end-to-end tests fail and will be tough to debug, and it will be difficult to locate root causes. Without a core foundation in JavaScript, NodeJS and AngularJS, debugging will be tough.

Getting started with some fundamentals of Protractor under your belt

How Protractor works and interacts with AngularJS (workflow)

Protractor Components - Sauce Labs

Spec and Configuration Files – Protractor needs two files to run, the test or spec file, and the configuration file. The spec file (test) is written using the syntax of the test framework you choose, such as Jasmine or Mocha along with Protractor API. The configuration file is simple. It tells Protractor how to configure the testing environment – Selenium server, which tests to run, browsers and more.

AngularJS directives – When searching for elements in the AngularJS app using Protractor, taking advantage of AngularJS directives will save you hours of pain and frustration. Use CSS selectors like IDs and classes only as a last resort when writing Protractor tests. You heard me! This is a best practice of Protractor development. Avoid using CSS selectors.

Mocking – One of the main reasons for mocking is to prevent flaky tests. Everyone executing end-to-end tests comes to this crossroad. We can mock some or all of our services, HTTP backend, module and more.

Promises – All Protractor methods are asynchronous and return promises. Check out resources on asynchronous JavaScript functions to learn more.
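To see the pattern in runnable form, here is a hedged sketch with a fake `getText()` standing in for a real element call, so it runs without a browser:

```javascript
// Protractor methods return promises, not values. The fake getText() below
// stands in for a real call such as element(by.binding('title')).getText().

function getText() {
  return Promise.resolve('My AngularJS App');
}

// Chaining style: wait for the value with .then().
getText().then(text => console.log('chained:', text));

// async/await style reads more like synchronous code but is still asynchronous.
async function readTitle() {
  const text = await getText();
  return text.toUpperCase();
}

readTitle().then(upper => console.log('awaited:', upper));
```

The common mistake is treating the return value of `getText()` itself as a string; it is a pending promise until you chain or await it.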

Control Flow – WebDriverJS maintains a queue of pending promises, called the control flow, to keep execution organized.
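The queue idea can be sketched with a simplified class (an illustration only, not WebDriverJS's actual implementation):

```javascript
// A simplified control flow: a queue where each promise-returning task starts
// only after every previously scheduled task has finished.

class ControlFlow {
  constructor() {
    this.queue = Promise.resolve();
  }
  execute(task) {
    this.queue = this.queue.then(task);
    return this.queue;
  }
}

const flow = new ControlFlow();
const order = [];

flow.execute(() => { order.push('open page'); });
flow.execute(() => { order.push('click button'); });
flow.execute(() => { order.push('assert result'); })
  .then(() => console.log(order.join(' -> ')));
```

Chaining each task onto the previous promise is what keeps execution ordered even though every operation is asynchronous.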

elementExplorer and elementor – used for debugging or when first writing a Protractor test. You can enter a Protractor locator or expression and elementExplorer/elementor will test it against a live Protractor instance. elementor is considered an improved element finder for Protractor.


This article isn’t a complete list of resources — just a starting point for new software developers in test who are interested in learning about Protractor testing. Keeping up-to-date on front-end technologies can be exhausting, but with a core JavaScript foundation, you will be fine.

Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality — concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.

Categories: Companies

Stop planning; fix the leak!

Sonar - Wed, 04/06/2016 - 14:32

So there you are: you’ve finally decided to install the SonarQube platform and run a couple of analyses on your projects, but it unveiled so many issues that your team doesn’t know where to start. Don’t be tempted to start fixing issues here and there! It could be an endless effort, and you would quickly be depressed by the amount of work that remains. Instead, the first thing you should do is make sure your development team fixes the leak. Apply this principle from the very beginning, and it will ensure that your code is progressively cleaned up as you update and refactor over time. This new paradigm is so efficient at managing code quality that it just makes the traditional “remediation plan” approach obsolete. Actually, so obsolete that related features will disappear in SonarQube 5.5: action plans and the ability to link an issue to a third party task management system.

“Why the heck are you dropping useful features? Again!?…”

Well, we’ve tried to dogfood and really use those features at SonarSource ever since we introduced them – but never managed to. Maybe the most obvious reason we never used them is that long before conceptualizing the “Leak” paradigm, we were already fixing the leak thanks to appropriate Quality Gates set on every one of our projects. And while doing so, nobody felt the need to rely on action plans or JIRA to manage his/her issues.

There are actually other reasons why those features never got used. First, action plans live only in the SonarQube server, so they don’t appear in your favorite task management system. Because of that, chances are that you will eventually miss the related deadlines. This is why you might be tempted to “link issues” to your task management system. But this “Link to” feature isn’t any better. Let’s say you’re using JIRA in your company. When you link an issue to JIRA, the SonarQube integration automatically creates a ticket for that issue. So if you want to keep track of 100 issues, you’ll end up with 100 JIRA tickets that aren’t really actionable (you just have a link back to SonarQube to identify every single issue) polluting your backlog. What’s even worse is that when an issue gets fixed in the code, it will be closed during the next SonarQube analysis, but the corresponding ticket in JIRA will remain open! Anyway, issues in the SonarQube server and tickets in JIRA just don’t have the same granularity.

“Still, there are cases when I really want to create a remediation plan. How can I do that?”

As discussed previously, you should really avoid defining a remediation plan, and instead take the opportunity to spend that energy on fixing the leak. Still, occasionally, you might be forced to. The main case we can think of is when you absolutely want to fix critical bugs or vulnerabilities found in legacy code that might really affect your business if they pop up in production. In that scenario, you might indeed want to create a dedicated remediation plan so that your development team gets rid of this operational risk.

The good thing is that SonarQube already has everything you need to clearly identify all those issues and plan a task to make sure they get fixed – whatever task management system you’re using:

  1. In the SonarQube UI:
    1. Start tagging the issues you want to fix with a dedicated, specific tag, like “must-fix-for-v5.2”
    2. Create a public “issue filter” that displays only the issues tagged with “must-fix-for-v5.2”
  2. In your task management system:
    1. Create a ticket in which you reference the URL of the issue filter
    2. Set a due date or a version
  3. You’re done! You have a remediation plan that you can manage like any other task and your team won’t forget to address those issues.
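The tag-based filter from step 1 boils down to a single web-service query. As a minimal sketch in Python – the server URL and tag are placeholders, and you should check the `api/issues/search` parameters against your SonarQube version’s documentation:

```python
from urllib.parse import urlencode

# Sketch: build the search query behind a tag-based "issue filter".
# "api/issues/search" and its "tags" parameter exist in SonarQube's web API,
# but the server address and tag below are illustrative placeholders.
def issue_filter_url(server, tag):
    # "resolved=false" keeps already-fixed issues out of the plan
    params = urlencode({"tags": tag, "resolved": "false"})
    return f"{server}/api/issues/search?{params}"

url = issue_filter_url("https://sonarqube.example.com", "must-fix-for-v5.2")
print(url)
# https://sonarqube.example.com/api/issues/search?tags=must-fix-for-v5.2&resolved=false
```

The URL this produces is exactly what you would paste into the ticket in step 2, so the ticket always shows the live, unresolved subset of the plan.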

“I don’t need anything more, then?”

Well, no. Defining remediation plans this way gives you the best of both worlds: identifying the issues to fix in the SonarQube UI, and planning the corresponding effort in your own beloved task management system.

And once again, remember that if your team fixes the leak, chances are you will not need to create a remediation plan any longer. So yes, even if I’m the one who initially developed Action Plans and the “Link to” features a long time ago, I think it’s really time to say bye bye…

Categories: Open Source

National Software Testing Conference, London, UK, May 17-18 2016

Software Testing Magazine - Wed, 04/06/2016 - 13:00
The National Software Testing Conference is a two-day conference taking place in London and focused on software testing. The speakers are active testing professionals who have fought their way through dozens and dozens of like-minded professionals and come out on top. The conference program delivers up-to-date, cutting-edge content and pragmatic advice on current software testing issues. In the agenda of the National Software Testing Conference of London you can find topics like “Why is Diversity More Important Than Ever Within Assurance Disciplines?”, “Testing for Digital Using One Script in One Lab”, “The Art of Questioning to Improve Testing, Agile, and Automating”, “Changing Trends, Test Automation and Continuous Delivery”, “MicroServices Testing and Automation Strategy on Mobile Technology”, “Testing: Evolution or Revolution?” and “Introducing Quality through Behaviour Driven Change Implementation”. Web site: Location for the National Software Testing Conference of London: The British Museum, Great Russell St, London WC1B 3DG, UK
Categories: Communities

My earliest computers

Yet another bloody blog - Mark Crowther - Tue, 04/05/2016 - 23:59
Gerald Weinberg recently posted about his earliest computers and some of the early influences that got him into computing. Check his post out here:

That got me thinking about how I arrived here, at a 16+ year career in software testing. Now clearly I arrived a bit later than Gerald, so I can tell you I have never used and no doubt never will use a slide rule. In truth, I doubt I even really know what one is.

Amazement at a calculator aside – and the amazing things you could do with one (2318008) – the earliest computing thing I remember was getting an Oric Atmos. I can't even recall how it was programmed. I do remember plugging it in and nothing appearing on screen, then discovering we had to tune in the portable TV my Mum had bought me to see the stunning output this thing could generate.
Oric Atmos

The next marvel I encountered in junior school: the world-changing ZX81. How many of you remember those things? My two friends Chris Duignan and Shweb Ali and I formed the CAD computer club and blasted our way through many lunchtimes typing in the printed programmes we got from computer magazines. The problem was they were copies of printouts done on thermal paper, so they never worked first time. A ; or : is very hard to see on copied thermal printouts! Larger programmes went onto the 16K RAM pack, so long as it didn't move accidentally and lose all your work.

Sinclair ZX81

Now at this point the home computing market started to see serious competition. Vying for attention at the same time were the Amstrad CPC 464, the Sinclair ZX Spectrum and the Commodore 64, if I remember correctly. My friends and I switched to the Spectrum camp pretty solidly.
Mine was a Spectrum 48K with those dandy rubber keys, so special.
Sinclair ZX Spectrum and the Commodore 64

I also recall at some point getting my hands on a VIC 20. Can't remember what I ever did with this one though!
VIC 20
From here on it was all "IBM" PCs, as they used to get called. That was the way forward. Many a DOS disc load later, and productivity was sky high. The time, of course, was spent on my first video game: Alone in the Dark.
Ah, those were the days. I'm just glad they're over and I can hardly remember what I ever did with these things or used them for. Give me my Win 7 and 10 boxes, MS tech and Office with an ethernet connection any day!
Categories: Blogs

Quality is Value-Value to Somebody

Hiccupps - James Thomas - Tue, 04/05/2016 - 22:19
A couple of years ago, in It's a Mandate, I described mandated science: science carried out with the express purpose of informing public policy. There can be tension in such science because it is being directed to find results which can speak to specific policy questions while still being expected to conform to the norms of scientific investigation. Further, such work is often misinterpreted, or its results reframed, to serve the needs of the policy maker.

Last night I was watching a lecture by Harry Collins in which he talks about the relationship between science, democracy and policy. The slide below shows how the values of science and democracy overlap (evidence-based, preference for reproducible results, clarity and others) but how science's results are wrapped by democracy's interests, perspectives and politics to create policies.

I spent some time thinking about how these views can serve as analogies for testing as a service to stakeholders.

But Collins says more that's relevant to that relationship in the lecture - much of it from his book Are We All Scientific Experts Now? In particular he argues that non-scientists' opinions on scientific matters should not generally be taken as seriously as those of scientists, those people who have dedicated their lives to the quest for deep understanding in an area.

He stratifies the non-scientific populace, though, making room for classes of hard-earned expertise – he terms them interactional expertise and experience-based expertise – that non-scientists can achieve and which make conversation with scientists, and even meaningful contribution to scientific debate, possible.

I find this a useful layer on the tester-stakeholder picture. Sure, most of our stakeholders might not know as much as we (think we) do about testing, about the craft and techniques of testing. But that doesn't mean that there aren't those who can talk to us knowledgeably about it. This kind of expertise might be from, say, reading (interactional) or perhaps from past testing practice or knowledge of the domain in which testing is taking place (experience-based) and I like to think that I am, and that we should be, open to it being valuable to us and whatever testing effort we're engaged in.
Categories: Blogs

Perfecto Mobile Expands Continuous Quality Lab

Software Testing Magazine - Tue, 04/05/2016 - 22:14
Perfecto Mobile has announced the next version of its cloud-based Continuous Quality Lab (CQL) that expands test coverage from mobile web and apps to include web browsers on desktops. The enhancement provides enterprises with the most complete quality lab, allowing Dev teams to execute and analyze manual, automated and performance tests for digital channels side-by-side, on desktop browsers and real mobile devices under real end-user conditions. With this expansion, users can apply a single quality strategy to deliver seamless responsive web and omni-channel experiences. Perfecto’s Continuous Quality Lab empowers development teams by supporting manual and automated testing across varied user conditions from a single cloud-based quality lab. Key features and benefits include:
  • One test strategy for faster delivery to market: To accelerate the development process, a single test script can run on responsive websites, desktop browsers and real mobile device browsers. The CQL’s platform-agnostic scripting of web apps across desktop browser/OS combinations and real device/OS combinations shortens test cycles by running mobile and web assessments concurrently.
  • Side-by-side testing for earlier issue detection and resolution: The CQL provides side-by-side analytics of digital test results to enable teams to quickly focus and triage platform-specific challenges, supported by visual logs with screenshots, video and device diagnostics.
  • Testing for real end-user conditions: The integrated Wind Tunnel™ solution optimizes testing for end-user conditions by defining and personifying end-user profiles, and by enabling testing across common scenarios such as degraded network conditions, conflicting applications and device interruptions.
  • One lab for 24/7 [...]
Categories: Communities

Pipelines, DevOps, Silos and the idea of wanting it all

HP LoadRunner and Performance Center Blog - Tue, 04/05/2016 - 20:57

DevOps Pipeline.PNG

I recently heard an analogy about DevOps that I think describes it to a tee—pipelines.

Keep reading to find out why we need to rethink our thoughts about silos and instead embrace the pipelines that connect them.

Categories: Companies

SmartBear Launches RAPID-ML Plugin for Ready! API

Software Testing Magazine - Tue, 04/05/2016 - 20:08
SmartBear Software has launched the RAPID-ML Plugin for Ready! API which allows SmartBear’s Ready! API to test and virtualize API models created by RepreZen API Studio. Developers can import RAPID-ML open format models directly into Ready! API for complete functional testing, load testing, security testing and API virtualization. Developers can also generate an API model in RAPID-ML from any REST API already described in Ready! API for subsequent editing, documentation, visualization and code generation in RepreZen API Studio. RepreZen API Studio is an enterprise-class modeling environment for APIs described in Swagger-OpenAPI and RAPID-ML formats. RAPID-ML includes an intuitive, technology-neutral schema language to specify shared data types and adapt them dynamically for optimal representation in APIs. API Studio also includes an example-driven mock service, sandbox testing and a powerful code generation framework. Ready! API is a unified set of testing tools that includes SoapUI NG for functional testing, LoadUI NG Pro for load testing, ServiceV Pro for API service virtualization and Secure Pro for dynamic API security testing.
Categories: Communities

Latest trends in the QA world

PractiTest - Tue, 04/05/2016 - 14:00

“What do you want to do when you grow up?”

I don’t think that many of us would have answered this question by saying, “I want to be a Tester when I grow up!”

Still, as we can see (from the recent State of Testing Report), many of us today feel proud to tell our friends and family that we work as Testers and are recognized for our contribution to technology and innovation.


What is “The State of Testing”?

The State of Testing is the largest testing survey worldwide, conducted by this QA Intelligence blog together with Tea Time with Testers. Now in its third year, with over 1,000 participants from more than 60 countries, and in collaboration with over 20 bloggers, the survey aims to provide the most accurate overview of the testing profession and the global testing community. Held yearly, it also captures current and future trends.

state of testing 2016 600px


Trends worth noting:
  • Testing and development have become distributed tasks, with 70% of companies working with distributed teams in two, three or more locations. This requires adapting workflow habits and skills to maintain high productivity without close proximity to the other teams involved in each project.
  • An increase in the percentage of organizations where the testing function reports to Project Management rather than to a VP or Director of Quality (in comparison to last year’s report). This could be due to the trend of testing groups becoming part of the organic development teams in organizations implementing Agile or Scrum.
  • Formal training and certification are on the rise. This trend holds mostly for India and Western Europe, but it reflects a regard for testing as a profession that requires more formal training. While you might not agree that there is such a need for certification and formal training, we can still take it as a complement to our professional recognition.
  • Communication is still key. With nearly 80% of the responses, the leading “very important” skill a tester needs is good communication (for the third year in a row, by the way). In fact, only 2% of all respondents regarded it as not important! I have touched on this point before in a previous blog post – Using your Kitchen as a Communication Channel.

state of testing report

In a nutshell….

The accelerating pace of development is making our work more challenging than ever. And overall we are seeing a more serious approach towards quality and testing in our work-ecosystem.

Today, we feel that testing is seen as a critical activity by many of the same people who used to see testers as “unskilled individuals” doing the least important tasks at the end and slowing down delivery.

I mean, we always knew we had an important role in any successful product or application release, but it is becoming apparent that everyone else knows this as well.

The post Latest trends in the QA world appeared first on QA Intelligence.

Categories: Companies

Belgium Testing Days, Brussels, Belgium, June 13-16 2016

Software Testing Magazine - Tue, 04/05/2016 - 10:00
The Belgium Testing Days (BTD) is a four-day conference dedicated to software testing. The BTD event aims to be the best software testing conference, with leading experts on Testing, QA, DevOps, Mobile Testing, Test Automation and Continuous Delivery. On top of the expert speaker lineup, you will have a chance to work side by side with the BTD community at the Lab and choose from a full range of Lab session opportunities. In the agenda you can find topics like “The whole story: Mapping, slicing and writing”, “Test & Project Management – are they related?”, “Performance Testing: Critical Concepts and Skills”, “Test Automation Patterns (Technical & Management)”, “Planning with #NoEstimates”, “Cloud Testing”, “Test Automation Heuristics”, “Application Performance Clinic: From Zero to Performance Hero in Minutes”, “Using Influence Diagrams to Understand Testing” and “Identifying and removing blockers to improving testing”. Web site: Location for the Belgium Testing Days conference: Brussels, Belgium
Categories: Communities
