
Feed aggregator

Google Test Automation Conference (GTAC) 2017 Registration Open

Software Testing Magazine - Thu, 06/01/2017 - 16:18
The registration for the 2017 edition of the Google Test Automation Conference (GTAC) is open. The GTAC is an annual software testing automation conference hosted by Google and the 2017 edition will...

Categories: Communities

Kill the Noise! to Change Gear in our Code Analyzers

Sonar - Thu, 06/01/2017 - 15:48

Over the past few weeks, you may have noticed that most of our product news about code analyzers contained a mention of a “Kill The Noise!” project. We initiated this project at the beginning of the year to sharpen the detection of issues of type “Bug” on certain code analyzers: SonarJS, SonarC#, SonarJava and SonarCFamily. In simpler words, our objective with this project is to make sure that when a SonarQube user clicks on “Bugs” on a project homepage, they are able to fix real and nasty bugs instead of trying to figure out whether the issues they are looking at are real bugs or not.

It may sound obvious and mandatory for code analyzers to be extremely sharp when reporting bugs, but do you actually know any analyzer on the market that has at least 90% accuracy?

Over the past two years, we have developed the technology to do path-sensitive DFA (data flow analysis) on C#, Java, JavaScript, C and C++. This technology allows us to go through all execution paths while symbolically simulating the execution of each statement. With the help of those DFA engines, we’re able to spot tricky bugs like this one in the Linux kernel:
[Image: a tricky bug found by the DFA engine in the Linux kernel]
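
To make that concrete, here is a hypothetical Java sketch (not the actual kernel code, which is C) of the kind of bug that only a path-sensitive engine catches: each condition looks harmless on its own, and the null dereference exists only on the execution path where s is null and strict is true.

    // Hypothetical example of a path-sensitive bug: a null dereference
    // that is reachable only along one specific execution path.
    public class PathSensitiveBug {

        static int length(String s, boolean strict) {
            int result = 0;
            if (s != null) {
                result = s.length();            // safe: guarded by the null check
            }
            // A purely syntactic checker sees a null check and a dereference,
            // but only a path-sensitive engine connects the two: on the path
            // where s == null and strict == true, the line below blows up.
            if (strict) {
                result += s.isEmpty() ? 0 : 1;  // bug: s may still be null here
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(length("hello", true)); // prints 6
            System.out.println(length(null, true));    // NullPointerException
        }
    }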

And then we realized two things:

  • A lot of our older rules of type “Bug” were not really finding obvious bugs, but were instead reinforcing good coding practices that help make the code more robust, and therefore reliable. Here are a few examples: “Non empty switch cases should end with unconditional break statement” or “Two branches in a conditional structure should not have the same implementation”. Those rules spot some real quality issues, but most of the time such implementations were done on purpose and don’t actually lead to unexpected behavior in production.
  • Some rules combined the detection of bugs and quality issues. That was for instance the case for the rule to detect unconditionally true/false conditions. Sounds strange? Here is the reasoning: when a branch is fully unreachable due to an unconditionally false condition, there is no doubt that it is a bug. But when a (sub)condition is unconditionally true, it might be on purpose, just to make the code more self-explanatory. Boom! A rule that finds both bugs and quality issues.

So we made an effort to reclassify and split rules. Obviously, we did all this while keeping up the hunt for the infamous false positives.

In the end, all of us at SonarSource are working towards the ultimate goal of code analyzers that are at least 90% accurate when raising issues of type “Bug”, and we are not far from making this dream become a reality. Obviously, any feedback on the latest versions of SonarJS, SonarC#, SonarJava and SonarCFamily is highly welcome to help us Kill the Noise!

Categories: Open Source

OpenStack Summit Boston Recap

The OpenStack Summit at the Hynes convention center earlier this month was a blast. The keynotes were exciting: we heard that we’re in the age of second-generation private clouds, where everything is virtualized and the focus shifts back to the applications. This also enables the OpenStack community to engage with other open-source projects like Kubernetes or Cloud Foundry.

On the second day there was a live interview with Edward Snowden, who called in remotely. It was an interesting conversation with insights on how running your own private cloud allows you to gain and keep control over your data. Also, as presented in Barcelona last year, there was a live on-stage interoperability demo with several companies and public cloud providers joining a distributed Kubernetes cluster. There were discussions about running Kubernetes on OpenStack and running OpenStack on Kubernetes. In fact, I heard several people refer to the event as the “Kubernetes Summit”.

It seems the prophecy is being fulfilled: private cloud is finally all about applications and orchestration. However, this means that the health and user experience of applications become even more important. I overheard, and had, many conversations about monitoring and troubleshooting capabilities, not only for applications, but also for containers, orchestration tools, and cloud platforms. This topic also resonated through the expo area, where several monitoring vendors — including Dynatrace — had booths.

There was constant traffic at the Dynatrace booth and, to be honest, our staff needed some recovery time after showcasing our product and its capabilities throughout the four days. Everybody wanted to see what Dynatrace brings to the table when it comes to monitoring OpenStack environments, and they were blown away by how easy it can be. Many booth visitors came by after visiting other vendors in the monitoring space. One attendee said, “So you not only gather more data, but you also correlate it and make sense of it. That’s powerful.” Another interesting memory: a well-known figure in the OpenStack community — I’d met her at several other OpenStack events before — stopped by the booth and started a conversation with me. She said that she’s so glad that Dynatrace addresses the gap of full-stack monitoring in the OpenStack area, because it was a need that no other company had been able to address.

Hands down, the best conversation I had during the summit was when another attendee came to our booth and started asking how we monitor the OpenStack services, and how we notify our customers of problems in their environment. I explained that we monitor several aspects of the OpenStack services, such as response time, availability, resource utilization, and even log files. Dynatrace provides integrations with several incident management systems (ServiceNow, PagerDuty, OpsGenie) as well as email and webhooks. I demonstrated how Dynatrace handles those problems and explained that, while it allows you to define custom alerts that are sent immediately, Dynatrace usually refrains from overloading users with alerts. His eyes lit up, he said “That’s what I’m looking for!”, and started to show me the list of alerts on his mobile from his current infrastructure monitoring solution. He then told me that he had given up reading the alerts because there were simply too many to process.

With large cloud environments, the amount of data, metrics, and log messages that needs to be monitored is simply too large for manual analysis. To top it off, automatic alerting on all those data points is also not a viable solution. Dynatrace OpenStack monitoring offers a smart and unique way to stay on top of complex, large-scale environments of several thousand nodes and VMs. The setup takes ten minutes at most and is easy to integrate into existing configuration automation tools like Ansible, Chef, and Puppet. It also provides a single-pane-of-glass view of your OpenStack infrastructure, as well as of the workloads and applications that run on top — including real-user monitoring for your web applications. With automatic AI-powered root cause analysis and full stack monitoring, Dynatrace is the troubleshooting and monitoring jack of all trades!

For those that have experienced Dynatrace, especially any OpenStack Summit attendees who came by the booth, what do you think sets us apart?  We love feedback – it’s crucial to our ability to address the complexity challenges our customers face today and in the future. So please let us know your thoughts via the comments section below.

The post OpenStack Summit Boston Recap appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

How Testlio Keeps Strava’s User Experience in First Place

Testlio - Community of testers - Wed, 05/31/2017 - 18:36
Strava has been uniting athletes from around the world since 2009.

The company was launched by digital entrepreneurs Michael Horvath and Mark Gainey as a means of recreating — on a massive scale — the camaraderie and competition they experienced as teammates on the Harvard rowing team. Today, eight years later, Strava’s mobile app and website connect tens of millions of athletes in more than 195 countries. Through Strava, runners and cyclists can record their activities, compare performance over time, compete within their community, and share the photos, stories and highlights of their activities with friends.

“Strava” is Swedish for “strive,” precisely what the company must do every day to meet the expectations of its active and engaged users. In 2016, Strava wanted to accelerate its release cycle, moving from just a handful of releases a year to a bimonthly release. To keep pace with its accelerated cycle, Strava chose Testlio as its mobile testing partner. Since then, the two have created and maintained a first-class experience for Strava’s users.

Key benefits of Strava’s partnership with Testlio include:

  • Testlio better approximates Strava’s real users
  • Testlio tests outside of English
  • Testlio provides wider device and OS version coverage

Strava can’t afford to compromise the user experience in its fast-paced, ongoing pursuit of new ways to connect athletes around the world. Thankfully, with Testlio as a partner, it doesn’t have to.

See how Strava creates a first-class customer experience in our new case study.

Categories: Companies

The Bug Reporting Spectrum

Gurock Software Blog - Wed, 05/31/2017 - 16:10


This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President, helping to facilitate and develop various projects.

On my first day working as a software tester, my lead pointed to a bug tracking system. He gave me step-by-step instructions on how to use this system. I produced a bug, entered the product and browser information, and clicked submit. I did this on my first day because it is a critical skill, essential to being a software tester.

I had little training beyond my introduction to the bug tracker. Sadly, this is like most people’s introduction to bug reporting. It often results in boring triage meetings, where managers decide what is or is not a bug, reclassify bugs to make reports look good, and send reports of bugs that cannot be reproduced back where they came from. We also get bug tracking systems cluttered with data that no one will ever use.

Bug reporting is a skilled activity that can either enable faster delivery or jam up a development group, making them slower. I will explain why reporting is hard, what skilled reporting looks like and why it doesn’t always have to be done through a reporting system.

Improved Bug Reporting


My first couple of jobs were like a mad game of musical bug reports. I’d discover what I thought was a problem and write it up, and inevitably the bug would be sent back to me. Sometimes the developer would say it wasn’t a bug at all, or that they couldn’t reproduce it, or that my bug was a feature disguised as a bug.

The problem I was having, and the problem I see most people have, is hiding the bug.

Before I developed good reporting techniques, I’d write up a bug with a complicated title that tried to capture every possible detail. In the description I’d add an introduction paragraph to set the context, and then literally every step needed to reproduce the bug. After that, I would write about the ‘expected result’ and ‘actual result’. The title was a mess, so programmers had to open the report to figure out what was going on. The description was also a mess. Programmers would stash the report away to be reviewed during a triage meeting, when they might have some help interpreting the problem.

To counteract this, I improved my bug reporting skills by creating useful titles. If I make a report now, I like to use the title format ‘X fails when Y’ when possible, for example ‘CSV export fails when the file name contains spaces’. This gives the programmer a decent idea of what went wrong, and where the problem might be, before they open the report.

I also improved the descriptions by cutting them down. I removed the introduction paragraph. If steps to reproduce the bug were needed, they focused on the parts that were critical. Some bugs are hard to describe: they are hard to reproduce on purpose, or have long, hard-to-follow workflows. In those situations I use supplemental material like recordings of me triggering the bug, screenshots, data files, and log captures. Sometimes a recording is more accurate and easier to follow than a set of written instructions.

My improved reporting style had a positive effect; the amount of data in our bug tracking system shrank. We had fewer reports to contest in triage meetings. Furthermore, we had fewer bugs that would sit in the tracking system release after release.

Pairing and Agile


Whilst working in a different testing job, I reduced the number of written bug reports by at least 50%.

I was working with a development team that consisted of two back-end programmers and three that worked on the user interface. All of us sat at desk pods in the same room, with three or four people in each pod. We were agile-ish. Our team delivered software to production every two weeks. We had daily status meetings and generally tried to work together. We weren’t at the point of having feature teams and single-flow development, but we were trying.

We would work together before checking a feature fix into the source code repository and building it to test. We were working on a product that helped marketers create small advertisements. These advertisements would be viewed through social media channels such as Twitter or Facebook. One project was to build a new type of advertisement based around video content. The finished result would be a YouTube video embedded in a frame with some text and a few text fields that would collect user data.

The programmer working on this product told me that it was mostly done. He wanted to know if I could come take a look on his machine before he checked in. We started with the process of building the advertisement in our tool. I began by testing the usual suspects: I experimented to see what happened if I entered too many characters, non-numeric characters, or a bad date format. We found a few bugs, and he continued to work on fixing those while I carried on testing and taking notes. We found some more interesting problems and questions once I started looking at the advertisement that our tool made. The video didn’t auto-play, so to view the content a person would have to click play. Was that correct? All of our advertisement types had some analytical functions attached. This one was supposed to record views, average view length, and a few others. But how do you define a view? Does a person have to watch the entire video for it to count as a view? What if they start halfway through and watch the last 30 seconds?

We didn’t have the answer to these important questions, and our product person was at a customer site that day. Our questions were logged into the bug tracker so we didn’t lose them. When the product person got back we had a brief meeting to talk about the issue, updated the ticket to reflect those decisions and then the developer fixed them.

Bug reports were mostly done through demonstration and conversation. We were able to discover new problems, demonstrate exactly how they were triggered, and get them fixed without ever touching a bug tracking system. We went to the bug tracker only when we had questions that couldn’t be answered immediately, or bugs that were complicated and needed some research before they were fixed.

A Note on Zero Defects


Occasionally I will see people advocating ‘zero defects’. This is the idea that every single bug found should be fixed immediately. In this scenario, there is no bug tracking system, and there are no written bug reports.

A zero defects flow might look something like this:

A developer makes a change to add a discount field to a purchase page. The tester goes to work once the change is in a build. They may find a few superficial problems. For example, an error is thrown when non-numerical strings are entered, the user can enter discounts larger than 100%, and there is no limit on the number of decimal places a person could enter. These are pretty simple problems, and the programmer begins to fix them with an input mask that restricts what can be typed into the field.

After this, the tester starts looking at more complex scenarios. They will check whether the discounts apply correctly, whether someone can apply multiple discounts, and how tax is calculated. After some investigation, the tester finds that the discount is calculated incorrectly when the purchase total is greater than $100. At this point, the developer isn’t finished with the input mask change. Once that change makes it to an environment, there will be some retesting to do.

There is a new dilemma. Should our tester interrupt the programmer, who is still working on the previous fix, to talk about the new problem? Should they wait until the other issue is fixed and retested? Should they move on and test some other aspect of the feature? Not talking about the bug now introduces the risk that the tester might forget something important about it, making it harder to fix. The solution is usually to make some lightweight documentation on a post-it note or in an email instead of a bug tracker.

The idea of “zero defects” is a lie. As my colleague Matt Heusser points out, it might work for a project in a specific browser that only does Create, Read, Update and Delete database operations, or for back-end batch applications with no user interface, and a few other limited applications. I’ll step out on a ledge and say it again: it’s a lie. If you think you have zero defects, let’s bet a consulting assignment on it.

At some point during feature development, a tester, programmer, or product person will stumble across a problem that can’t be fixed immediately. That issue might be complicated, it might require research, or the programmer may be busy working on something else. Either way, the bug can’t be fixed now, and not documenting it is risky business.

Only When a Necessity


My general rule now is to only make a bug report when it’s an absolute necessity: when there is a question that no one can answer within the next day, or a bug that can’t be fixed yet. Most of the time, I find that a conversation can solve the problem without the overhead that a bug report introduces. Some people say the best tester is the one that finds the most bugs. I’d change that and say the best tester is the one that gets the most bugs fixed. That means reporting them in a way people care about and understand.

Categories: Companies

How to convert PowerShell Object to a String?

Testing tools Blog - Mayank Srivastava - Wed, 05/31/2017 - 09:42
Like many other languages, PowerShell supports converting an object to a string for many kinds of manipulation. In my course of actions, I have come across two ways which help to convert an object to a string very easily. Out-String: $string = Get-CimInstance Win32_OperatingSystem | Select-Object {$_.Version} | Out-String. %{$_.Version}: $string = Get-CimInstance…
Categories: Blogs

expoQA, Madrid, Spain, June 13-15 2017

Software Testing Magazine - Wed, 05/31/2017 - 09:15
expoQA is a three-day conference focused on software testing and quality assurance that will take place in Madrid, Spain. The first day will offer tutorials, and presentations will be performed the...

Categories: Communities

Nordic Testing Days, Tallinn, Estonia, June 7-9 2017

Software Testing Magazine - Wed, 05/31/2017 - 08:00
The Nordic Testing Days is a three-day conference focused on software testing that targets professional software testers from the Northern European countries as its audience. The first day proposes...

Categories: Communities

Containers: Sample process to promote image from one registry to another

IBM UrbanCode - Release And Deploy - Tue, 05/30/2017 - 20:48

Are you looking to perform container development in IBM Bluemix, but run your production registry on-prem? Do you develop containers in one registry, but store your production containers in a different registry? If so, the Promote Image from Bluemix sample process may be a useful reference for you. You may find the sample process on GitHub at https://github.com/IBM-UrbanCode/Templates-UCD/tree/master/Docker/sampleprocesses.

Prerequisites
  • Version 5 or later of the Docker plug-in must be installed. The plug-in may be found at: https://developer.ibm.com/urbancode/plugin/docker-2/
  • An UrbanCode Deploy agent must be installed on a machine which is running a Docker Engine.
Installation Process
  1. Download the sample generic process named Promote+Image+from+BlueMix.json from https://github.com/IBM-UrbanCode/Templates-UCD/tree/master/Docker/sampleprocesses
  2. In UrbanCode Deploy, click on Processes
  3. Click the Import Process button.
  4. Select the downloaded template and click the Submit button.
About the Process

The sample process is made up of six steps. It may need to be modified to suit your needs. Below is a description of each step, with suggestions on what you may need to modify for different cases; a condensed sketch of the equivalent CLI commands follows the list.

  1. Login to Bluemix
    This step logs in to Bluemix by running a bx login command. The sample process assumes you are logging in using an API key for authentication. If you are logging in using your Bluemix username and password, this step will need to be modified. If your development registry is not Bluemix, the command will need to be updated.
  2. Login to Bluemix Container Registry
    The sample process uses the IBM Bluemix Container Registry CLI plug-in to manage your Bluemix registry. This step runs a bx cr login command to log in to the registry. You will not need this step if you are not using Bluemix, but you may need a similar step, depending on the registry you are using. For more information on the IBM Bluemix Container Registry CLI, see https://console.ng.bluemix.net/docs/cli/plugins/registry/index.html
  3. Pull Docker Image from Bluemix
    This step simply pulls the Docker image from your Bluemix registry. Again, this step may need to be modified if not using Bluemix.
  4. Tag Image
    This step tags the image which was just pulled from Bluemix with a naming format suitable for your target registry. Note the naming convention of the tag. Note the port number is specified in the example. If your registry does not require a port number, remove the colon and ${p:target.registry} from the Tag field. By default, the process is set to tag the image with the same image name and tag used in Bluemix.
  5. Docker Login to Target Registry
    This step uses the Docker plug-ins Docker Login step to connect to a Docker registry. It may need to be modified if connecting to your registry using different means (a key file, for example). Note the Docker Registry field includes a port number by default, so if your registry does not need a port number specified, remove the colon and ${p:target.registry}.
  6. Push Docker Image
    This step uses the Docker plug-in’s Push Docker Image step to push your tagged image to the registry. Again, note the port number is specified in the example and would need to be removed if your registry does not require one.
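
For orientation, here is a condensed, hand-written sketch of the CLI equivalent of the six steps. It is not a transcript of the sample process: the registry hosts, namespace, image name, tag and port below are placeholders you would replace with your own values, and your login flags may differ (for example, username/password instead of an API key).

    # 1. Login to Bluemix (assuming API-key authentication)
    bx login --apikey <api-key>

    # 2. Login to the Bluemix Container Registry
    bx cr login

    # 3. Pull the image from the Bluemix registry
    docker pull registry.ng.bluemix.net/<namespace>/myimage:1.0

    # 4. Tag it for the target registry (drop the :5000 if no port is needed)
    docker tag registry.ng.bluemix.net/<namespace>/myimage:1.0 target.example.com:5000/myimage:1.0

    # 5. Login to the target registry
    docker login target.example.com:5000

    # 6. Push the tagged image
    docker push target.example.com:5000/myimage:1.0
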
Process Properties

Properties required by this process may be viewed and modified by going to the process’s Configuration tab, then clicking Process Properties.

As a Component Process

For convenience, this process is presented as a generic process. However, it may be desirable to run the process as a component process. Use this sample process as a guide while you build a component process. This could allow the use of component/environment/application properties to satisfy some of the sample process’s process properties. A final Run Docker Container step could be added to the process, allowing you to copy a container image from one registry to another and run it, updating the UCD inventory at the same time.

Categories: Companies

Moving from Manual to Automated Testing

Software Testing Magazine - Tue, 05/30/2017 - 17:20
Moving from manual to automated testing at a small company takes curiosity, research, careful planning and the ability to evolve as you learn. This talk will focus on how to get started, cultural...

Categories: Communities

An Overview of JUnit 5

Software Testing Magazine - Mon, 05/29/2017 - 17:15
JUnit 5 is the next generation of JUnit. The goal of this upcoming version is to create an up-to-date foundation for developer-side testing on the JVM. The evolution includes focusing on Java 8 and...

Categories: Communities

The art of transforming testing data into project information

PractiTest - Mon, 05/29/2017 - 13:43

A while back a colleague asked me for help creating a better testing report for her current project. The task got me thinking about how QA managers handle the same information differently, and how this makes the difference between being treated as the “Testing Guy” or as the “Inside Information Provider” of a software project.

We are all required to report our work efforts on a regular basis, as a means to communicate project progress. Many times the test management tools we use produce reports automatically, and we just take whatever default reports they spit out and distribute them to everyone involved. That is a big mistake!

I’ve learned that when working with people outside our testing teams we need to think like them, understand what information they need and what format will help them understand it faster and better. More often than not I have found that the default “dry numbers” reports don’t get the proper message across.

Here are 2 simplified examples of different reports:

Example 1. Test execution report
Team A’s Report:
Total Tests in Cycle: 376
Passed Tests: 301
Failed Tests: 28
Tests not Run: 47

Team B’s Report:
Tests in Cycle: 376
Execution percentage: 87.5%
Passed percentage: 80%

It’s all about Framing

In the simple example above, both teams are providing the same data. But who is providing better information? When writing your report, remember that people don’t like doing algebra equations in their heads. It is important to understand what information they are looking for. In this case they want to understand (a) how far along the testing cycle the team is (the execution percentage), and (b) what the status of the application is at this point (the passed percentage).
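
To make the framing arithmetic explicit, here is a tiny sketch (in Java, chosen only for illustration) that derives Team B’s figures from Team A’s raw counts:

    // Turning Team A's raw counts into Team B's framed percentages.
    public class TestCycleReport {
        public static void main(String[] args) {
            int total = 376, passed = 301, failed = 28;
            int executed = passed + failed;                 // 329 tests run
            int notRun = total - executed;                  // 47 tests not run
            double executionPct = 100.0 * executed / total; // 87.5
            double passedPct = 100.0 * passed / total;      // 80.05, reported as 80%
            System.out.printf("Execution percentage: %.1f%%%n", executionPct);
            System.out.printf("Passed percentage: %.0f%%%n", passedPct);
        }
    }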

Example 2. Defects’ report
Team A’s Report:
Total Detected Defects: 453
Closed Defects: 321
Open Defects: 76
Postponed Defects: 56
Total Defects for Release: 397
(total detected minus the 56 postponed defects)

Team B’s Report:
Percentage closed: 80.8%
Percentage open: 19.2%
Defect detection rate (2w): 3.2 bugs/day
Defect closure rate (2w): 5.2 bugs/day

Here again, both teams provide the same data, but team B provides more valuable information. Not only did they present the numbers as percentages, they also point to the convergence status of their project by showing that, during the last two weeks, the fixing rate has surpassed the detection rate.

Tips of the trade

There are many more examples, but the points to remember when defining reports are:

  • The less people need to think, the more intelligent my report will appear to stakeholders; as much as possible I provide my data as percentages or rates.
     [Images: run status by numbers, and run status by percentage]
  • Anticipate the questions and provide the information up-front. If during the status meetings people always ask me for a specific datum, I start including it in my report.
  • Don’t overfill the reports with useless data; too much information will drown the important stuff.
  • Using graphs instead of numbers usually provides 3 to 5 times more information.
     [Image: issues by status]

Finally, the most important piece of advice is that a report should speak for itself; the fewer explanations are needed, the better you are doing your job.

Creating reports

I would further recommend creating different visuals and reports for the separate stakeholders in your project, as each audience has different interests regarding project QA. For instance, your managers might care more about the time and budget spent on testing and what value that has brought, while HR might care more about team productivity, and your users would care more about when, for instance, the latest feature update will be released.

Many test management tools today offer dashboard and/or reporting features to help with this task. So it’s very easy to produce these reports, and the “hard” part becomes thinking about what data to present, to whom, and how to frame it.
For creating a metrics plan, I recommend this ebook.

In PractiTest (where I am the chief solution architect) we have taken this one step further and created “External dashboards” alongside the usual in-app dashboard display. This allows our users not only to easily create their visual reports, but also to share and embed them with anyone related to their projects, and not just with logged-in team members. You can read more about this feature here.

What tips for presenting data do you use on a daily basis?

 

*Editor’s note: This post has been updated to be more relevant since it was originally posted in December, 2007.

The post The art of transforming testing data into project information appeared first on QA Intelligence.

Categories: Companies

Being Agile in HR with Peer Recruiting

A collaboration by Alexa Fuhren and Mario Moreira
Does a manager know better than a team who fits a role best? How can we recruit the right people, the ones who fit best into our Agile organization? The answer is: by being Agile ourselves, particularly in the recruiting process!
In a more traditional working environment, if there is a vacancy in a team, the manager approaches the recruiter, shares the requirements of the role, hands over the responsibility for the recruiting process to the HR department, and is involved again when interviewing and selecting candidates. The recruiter is responsible for creating a job ad, posting it in appropriate recruiting channels, pre-selecting candidates, inviting the manager to interviews and making an offer to the selected candidate. The team usually plays a minor role in selecting the candidate.

Many teams in Agile operate with a self-organizing model. This model includes much more team ownership, autonomy, responsibility and accountability for all team members than traditionally operating teams. In self-organizing models, the concept of peer recruiting can be applied, where the team plays a much stronger role in selecting the right candidate, the one who fits the team best. Due to a better person-team fit, a reduction of early employee turnover could be a desired outcome.
If teams are responsible for selecting new team members, this changes the role of the recruiter from owning the recruiting process to supporting the process and coaching the team. Depending on the knowledge and experience of the team, the recruiter will be more or less involved in selecting the right candidate.
Self-organizing teams can be responsible for the whole recruiting process and accountable for hiring the right candidate. It starts with creating a (new) job profile for the vacancy. The Recruitment Coach will challenge the team to figure out which profile is needed to increase their current and future team performance. When creating a job ad, the Recruitment Coach can give advice on how to make it compelling and will provide templates that are in line with corporate design.
Team members can post the job ad on job boards and in their social media channels (LinkedIn, Xing, Facebook, chatrooms, private networks). After pre-selecting the candidates based on previously defined criteria, the team invites the selected candidates for interviews, role plays, presentations, etc. They can choose to ask the manager or recruiter to interview the candidates. The recruiter’s role will be to train the team on interview techniques and on how to avoid evaluation errors like stereotyping, the halo effect or the Pygmalion effect.
Implementing peer recruiting means moving the decision to the people who know best who fits their teams. It helps to speed up the recruiting process by reducing long decision-making processes with managers and HR.
What is in it for the company?
  • Faster decisions due to fewer interactions with HR and the manager
  • Higher team commitment
  • Less turnover in the first 6 months of employment due to a better company-person fit
  • Recruiter can focus on strategic work, e.g. employer branding, building networks etc., and become a valuable coach for the recruiting processes
What is in it for the candidate?
  • Candidate experiences an Agile culture right from the first contact with the company
  • Candidate gets to know the colleagues they will work closely with
  • Job interviews on an equal footing with team members instead of with the potential manager
Peer recruiting shifts the recruiter’s role to a coach who supports the business in making hiring decisions faster, selecting candidates that fit the company best and lowering the early turnover rate. Enabling the team to select new team members increases their autonomy, which can lead to higher team commitment and higher team performance.

-----------------
Learn more about Alexa Fuhren at: https://de.linkedin.com/in/alexa-fuhren-b745843/de

Mario Moreira writes more about Agile and HR in his book “The Agile Enterprise”, in Chapter 21, “Reinventing HR for Agile”.
Categories: Blogs

Best Practise

Hiccupps - James Thomas - Sun, 05/28/2017 - 18:26
I've said many times on here that writing for me is a kind of internal dialogue: compose a position, propose it in writing, challenge it in thought, and repeat.

I get enormous value from this approach, and have done for a long time. But in two discussions last week (Lean Coffee and a testing workshop with one of the other teams at Linguamatics) I found additional nuances that I hadn't considered previously.

First: in some sense, the approach I take is like pairing with myself. Externalising my ideas sets up, for me, the opportunity to take an alternative perspective that doesn't exist to the same extent when I'm only working in my head. It's often about the way I'm thinking as much as the content of my thoughts, and I speculate that this is a good grounding for being criticised by others when we're working together.

Second: writing and re-reading makes my position clear to me, and forces me to work out a way in which I can put it across. Since I started blogging there are numerous times in discussions that I've realised I am paraphrasing from something I've written. In the past I've tended to be a bit embarrassed by that but now I can see that, in fact, it's largely because I spent the time working it out before that I have it available to me now.

These are both things that are useful to me and that I want to get more benefit from. And, while I might agree that outside of a specific context there are no best practices, I also know that if I want to get those outcomes from my writing, I'd best practise.
Image: https://flic.kr/p/f2gYD7
Categories: Blogs

Three ways to handle CFRs

thekua.com@work - Sun, 05/28/2017 - 17:00

Cross-Functional Requirements (CFRs) are some of the key system characteristics that are important to design and account for. Internally we refer to these as CFRs, although classically they might be called Non-Functional Requirements (NFRs) or System Quality Attributes. Their cross-cutting nature means you always need to consider the impact of CFRs on new or existing functionality.

In the Tech Lead courses that I run, we discuss how important it is that the Tech Lead ensures that the relevant CFRs are identified and accounted for, either in design or in development. Here are three ways I have seen CFRs handled.

1. CFRs satisfied via user stories and acceptance criteria

Security, authentication and authorisation stories are CFRs that naturally lend themselves to building out testable functionality. It’s important to consider the effort and the risk and, in my experience, it is important to start implementing these early to make sure they meet the needs and can evolve.

For these sorts of CFRs, it’s useful to capture them as natural user stories which, once implemented, become acceptance criteria on future user stories that touch that area of the system.

As an example, authorisation can be dealt with by introducing a new persona role and describing what they might do (or not do) that others cannot:

As an administrator, I would like to change the email server settings via a user interface, so that I do not need to raise an IT change request for it.

If this is the first time that this user story is implemented, then some acceptance criteria might look like:

  • Only a user with an administrator role can access this page
  • Only a user with an administrator role can successfully update the email setting (check the API)
  • Users with no administrator access receive a 403 or equivalent

This new role addition often means considering new acceptance criteria for every story going forward (should it be accessible only by administrators, or by all?).
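
As a sketch of how such criteria can become executable checks, here is a minimal, self-contained JUnit 5 example; the AccessPolicy class is hypothetical and stands in for a real request against the page or the API:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Minimal sketch: the acceptance criteria above expressed as tests.
    // AccessPolicy is a hypothetical stand-in for the real authorisation layer.
    class EmailSettingsAuthorisationTest {

        static class AccessPolicy {
            int statusForUpdate(String role) {
                // Only administrators may update the email settings.
                return "administrator".equals(role) ? 200 : 403;
            }
        }

        private final AccessPolicy policy = new AccessPolicy();

        @Test
        void administratorCanUpdateEmailSettings() {
            assertEquals(200, policy.statusForUpdate("administrator"));
        }

        @Test
        void userWithoutAdministratorRoleReceives403() {
            assertEquals(403, policy.statusForUpdate("viewer"));
        }
    }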

2. CFRs satisfied through architectural design

Scalability and durability are often CFRs that require upfront thinking about the architectural design, and perhaps planning for redundancy in the form of additional hardware, network, or bandwidth capacity. A web-based solution that needs to be scalable might draw upon the 12-factor application principles, as well as considering the underlying hardware. Failing to think about the architectural patterns that enable scalability before starting to code will lead to additional rework later, or even make it impossible to scale.

3. CFRs satisfied via the development process

User experience is a CFR that often requires people to evaluate, making automated testing much more difficult. An application demanding a high level of user experience is best dealt with by ensuring that a person with a background in UX is involved, and that certain activities and feedback cycles are planned into the software development process to continually fine-tune the user experience as the application evolves.

Changes to the development process might include explicit user research activities, continuous user testing activities, the addition of an A/B testing capability, and some training for product people and the development team to ensure that the developed software meets the desired level of user experience.

Conclusion

Every system has its own set of Cross-Functional Requirements (CFRs) and it is essential that teams focus on identifying the relevant and important CFRs and find ways to ensure they are met. In this article, I shared three typical ways that CFRs might be met.

How else have you seen these handled?

Categories: Blogs

Bluemix experimental service Continuous Release gets new features

IBM UrbanCode - Release And Deploy - Fri, 05/26/2017 - 17:47

As an experimental Bluemix service, the Continuous Release feature set has steadily grown. A recent milestone update introduced release events. With release events added to the other core feature, deployment plans, the product’s key characteristics come into sharp focus.

For those readers still unfamiliar with the service, Continuous Release is a release management solution that is both flexible and reliable. You can manage deployments across your entire software development lifecycle without complex spreadsheets or back-of-the-envelope fixes. Automate as much as you need: your deployment plans can be completely automated, completely manual, or some mix in between.

Hybrid cloud solution

Continuous Release integrates your cloud native and on-prem tools. Combine manual tasks with automated tasks that manage your on-prem UrbanCode tools, as well as Bluemix Continuous Delivery composite pipelines. Other task types automate email and Slack messaging.

Multi-speed IT

Continuous Release is designed to manage collaboration between teams focused on automation and teams focused on risk management. Combine deployments from across your organization into a single release event. Team members collaborate to create deployment plans and run deployments.

UrbanCode Release

Continuous Release is not a cloud-hosted instance of UrbanCode Release, nor is it a replacement for UrbanCode Release. Continuous Release offers flexibility for customers that manage both on-prem UrbanCode Deploy installations and cloud-native applications. An important goal for Continuous Release is to provide simple and easy onboarding.

Bluemix experimental services

Many Bluemix services go through an experimental phase before entering beta. Like all Bluemix experimental services, Continuous Release is free. If you haven’t already, open a Bluemix account (it’s also free) and kick the tires. Early adopters can influence product direction with their feedback.

Categories: Companies

Just Do It: A snapshot of APM & Unified Enterprise Monitoring

As Bob learned in the first post of this three-part series, technology alone can’t deliver a healthy lifestyle. Likewise, having a successful APM program isn’t just about seeing what’s happening on a computer screen; it’s about doing something about it. You have to align your strategy, culture, people and processes with your digital business goals to reach the summit — and that’s not easy. So, what does that kind of success look like? Is it the same for every company?

Steps along the way

The fact is that every organization has slightly — sometimes dramatically — different expectations of success depending on the business outcomes they want. Ultimate goals are always different, but all companies follow the same four-step process as they work towards them. It starts with the technology and ends with a culture change: a new way of doing business every day.

  1. Implementation

This is the beginning of the process, when an organization deploys a new technology of choice. This should be considered table stakes and should be achieved as quickly as possible — preferably during the proof-of-concept (POC) stage.

  2. Value Realization

The next step is when people in an organization start to use the new technology for detailed visibility into applications and digital experiences. Teams then gain understanding and perspective from what they now see and take action. This action — whether it means improving the performance of applications or making an online checkout process faster and easier — results in measurable value for the company. People like and use the new banking app. More visitors to an online store convert and buy. Financial analysts get the information they need without interruption.

  3. Adoption

Once individual teams start to realize value from enhanced visibility and information, word usually spreads within an organization. If the ops group can use this technology to find and eliminate problems in production, wouldn’t it be even better if development teams could use the same solution to diagnose problems with apps before they go into production? If it works for the team in NORAM, why not go global with it and see what could be accomplished on a large scale? Once companies start expanding their internal success with the adoption process, the measurable value starts to leap forward. The ROI can be remarkable.

  4. Operationalization

When the adoption process has spread like digital wildfire across the organization—through silos of business and technology—we often describe the company as having a “culture of performance.” The digital experience is an integrated part of the business. APM is embedded into the everyday operations of the organization, across the entire lifecycle. Everyone is a stakeholder in digital success.

Keys to success

“Sub-optimization is when everyone is for himself. Optimization is when everyone is working to help the company.” –W. Edwards Deming

How do companies open the sometimes-elusive door to performance culture? When I look at the ones who made it there, and keep improving, they have four things in common:

  • Executive-level leadership and a clear APM strategy articulated throughout the organization
  • A top-down monitoring approach that examines the health of applications from the end-user perspective, not just from an infrastructure standpoint (bottom-up)
  • Incentives and visibility into digital business that cross traditional silos, bringing together teams like marketing, development and IT operations
  • Institutionalized, cross-functional collaboration between these different teams that makes it easy for them to work together and speak the same language

One of our customers, a major insurance company, is a great example of the power of executive leadership. Initially, the company suffered some serious application issues during one of their annual open enrollment periods, and knew this had to change. The executive team sprang into action, communicated a clear strategy to the entire organization, and prioritized APM as a corporate goal.

They also established a dedicated APM team to drive broad and deep APM adoption across the company, supported by both Dynatrace Expert Services and Dynatrace University.

Together, we worked with the customer to develop an internal endorsement program to promote Dynatrace users who could demonstrate proficiency in APM technologies and concepts. Today, this company has won awards for its digital performance. A member of the team at this customer explained:

“The APM program has been the most successful IT initiative I have seen or heard of in more than 10 years working here.”

Another customer example is a large, global financial services company. IT operations leadership spearheaded initial APM efforts, and continues to support the business with a proactive and pervasive approach. Every ops team member knows their job is to make sure that financial advisers never see a single noticeable drop in performance or availability, to ensure they generate the most money for their clients.

The team at this company is organized in a clear, almost military, fashion so that three groups can work together while focusing on individual parts of the enterprise. One group deals with performance, handling onboarding and performance engineering. A tools infrastructure group is responsible for administrative tasks. Finally, a performance anomaly group is solely focused on hunting down and eliminating performance issues. The combined result of these groups working together as a team with Dynatrace is a virtually flawless digital enterprise.

“Without productivity objectives, a business does not have direction. Without productivity measurements, it does not have control.” – Peter Drucker

In the next blog entry, the final one in this series, I’ll explain how we developed our path-to-success methodology by working with customers like the ones I described here. Thanks to our customers — some of the most respected companies in the world — we’ve learned what objectives and measurements work best.

The post Just Do It: A snapshot of APM & Unified Enterprise Monitoring appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Cambridge Lean Coffee

Hiccupps - James Thomas - Wed, 05/24/2017 - 21:48

This month's Lean Coffee was hosted by Redgate. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

What benefit would pair testing give me?
  • I want to get my team away from scripted test cases and I think that pairing could help.
  • What do testers get out of it? How does it improve the product?
  • It encourages a different approach.
  • It lets your mind run free.
  • It can bring your team closer together.
  • It can increase the skills across the test group.
  • It can spread knowledge between teams.
  • You could use the cases as jumping-off points.
  • I am currently pairing with a senior tester on two approaches at the same time: functional and performance.
  • For pairing to work well, you need to know each other, to have a relationship.
  • There are different pairing approaches.
  • How long should you pair for?
  • We turned three hour solo sessions into 40 minute pair sessions.
  • You can learn a lot, e.g. new perspectives, short-cuts, tips.
  • Why not pair with developers?

Do you have a default first test? What is it? Why?
  • Ask what's in the build, ask what the expectation is.
  • A meta test: check that what you have in front of you is the right thing to test.
  • It changes over time; often you might be biased by recent bugs, events, reading etc to do a particular thing.
  • Make a mind map.
  • A meta test: inspect the context; what does it make sense to do here?
  • A pathetic test: just explore the software without challenging it. Allow it to demonstrate itself to you.
  • Check that the problem that is fixed in this build can be reproduced in an earlier build.

How do you tell your testing story to your team?
  • Is it a report, at the whiteboard, slides, a diagram, ...?
  • Great to hear it called a story, many people talk about a report, an output etc.
  • Some people just want a yes or no; a ship or not.
  • I like the RST approach to the content: what you did, what you found, the values and risks.
  • Start writing your story early; it helps to keep you on track and review what you've done.
  • Writing is like pairing with yourself!
  • In TDD, the tests are the story.

One thing that would turn you off a job advert? One thing that would make you interested?
  • Off: a list of skills (I prefer a story around the role).
  • Off: needing a degree.
  • Interested: the impression that there's challenge in the role and unknowns in the tasks.
  • The advert is never like the job!
  • Interested: describes what you would be working on.
  • Off: "you will help guarantee quality".
  • Interested: learning opportunities.
  • Interested: that it's just outside of my comfort zone.
Image: https://stocksnap.io/photo/A78EC1EB73
Categories: Blogs

Semaphore Releases Boosters

Software Testing Magazine - Wed, 05/24/2017 - 17:49
Semaphore, a cloud-based code delivery service provider, announced the launch of Boosters, a new feature that drastically speeds up automated software testing. Boosters allows software development...

Categories: Communities

Nouvola Integrates with AWS CodePipeline

Software Testing Magazine - Wed, 05/24/2017 - 16:57
Nouvola, a vendor of cloud-based performance testing and load testing tools, has announced an integration with AWS CodePipeline, enabling developers using AWS CodePipeline to include Nouvola tests in...

Categories: Communities
