
Feed aggregator

Well Read

Hiccupps - James Thomas - Fri, 12/09/2016 - 00:08

This week, Maaret Pyhäjärvi published How to write 180 blog posts in a year. Maaret's blog is one that I make a point of reading whenever Feedly tells me there's a new post there. Why? Because her posts are thoughtful, often deeply thoughtful. Here are a couple of paragraphs from Thinking you're the best:
For years, I prepared in the previous night for every relevant meeting. I went in with a ready-made plan, usually three to prep my responses for whatever might emerge in the meetings. Back in school, my Swedish teacher made me translate things out loud every class, because of my "word-perfect translations". Truth is I had them pre-translated with great effort because I was mortified with the idea of having to do that work on the fly. Through my own experiences, I've grown to learn that the pre-prep was always my safety blanket. I did not want to look bad. I did not want to be revealed. I was the person who would rather use 3 days on a half-an-hour task. And I would say it was for my "learning". It was for my "personality". But truth is, it was for my fear of not being perfect.

My feed aggregates over 200 testing blogs and a bunch from other areas. I skim the titles regularly. I read from the list most days. I've gained much benefit from reading a wide range of sources over a long period of time. But these blogs, for a variety of reasons, I'll read every time they show up:
Image: https://flic.kr/p/92jv
Categories: Blogs

Perforce Recently Acquired Seapine–Why I’m Excited and Why You Should Be Too

The Seapine View - Thu, 12/08/2016 - 20:00


If you’re a customer who has followed us for a while, last month’s acquisition announcement was probably a complete surprise. Rest assured, you’ll find it was a wonderful surprise.

The ink was barely dry on the acquisition agreement, announcing Seapine Software, Inc. would join Perforce, LLC, when customer inquiries began to arrive. “What is going to happen to the Seapine products?” “Are you going to continue to provide the same great level of support?” And so on. Reading these, I couldn’t be prouder of what Seapine meant to our customers. It was clear we succeeded in what we set out to do when Kelly and I founded Seapine 21 years ago—create products that positively impact how people build quality software and create a company that sets a high standard for friendly, responsive customer interaction. It also reinforced the benefits of the acquisition.

Twenty-one years is a long time to sustain a company in our industry. And while capital constraints often dictate a carefully measured path, I've always wanted to do more and do it faster. So this year, I decided it was time to enter the capital market. And a wonderful thing happened: I met the new management team at Perforce LLC, a long-time partner (since 2001). It was clear from the first meeting that Perforce and Seapine were a perfect fit. From the similar histories of our founding and growth, to the "customer first" emphasis we place on high quality support, to the focus on helping companies innovate faster with higher quality, our two companies were meant to be one. So it became.

The team at Perforce shares our vision for tackling the difficult development challenges you face. Now, as partners with Perforce, Seapine has access to the resources to accelerate our product development. If you like what you’ve seen from Seapine over the years, prepare to be amazed.

I am excited to be joining the Perforce team as CTO, ALM Solutions, where I will be leading the ALM product strategy and direction. I'll be working closely with Tim Russell, CPO, and Janet Dryer, CEO, of Perforce. We've just begun this journey together and will start firming up the long-term product roadmaps over the next month or so. We'll communicate with you as the plan develops and, as always, your input is welcome.

So stay tuned. I’m excited about Perforce + Seapine and I think you will be too.

Categories: Companies

Adding the ‘How’ to ‘What’ for Sitecore Helix Test Automation

Prologue

Following the introduction of the Sitecore Helix framework at Sitecore Symposium 2016 in New Orleans, and after attending Sitecore User Groups where Helix has been discussed, I wanted to take this opportunity to share insights on how to achieve the automation aspects outlined in Helix.

By adding Dynatrace AppMon into your DevOps Continuous Integration (CI) / Continuous Deployment (CD) processes, you can automatically verify the quality, performance, and user experience of your Sitecore-based application. And you can achieve this while dramatically increasing the frequency of your releases and decreasing the lead time for each release.

Introduction

As a Sitecore Technology Partner with many customers using Dynatrace Application Monitoring (AppMon) and User Experience Management (UEM), and having published the recent Sitecore blogs ‘Diagnosing Sitecore Performance Problems’ and ‘Geographical Performance Variance in User Experience: An Example using Sitecore Habitat’, I wanted to focus the lens on the relationship between Helix (the What to do) and Dynatrace AppMon (the How to do it).

Before we get into the detail, and for completeness, I should also add that along with Helix, Habitat was also launched at Symposium 2016.

The Habitat demo application built using Sitecore Helix principles

Helix documents the development guidelines and recommended practices, describing the overall design principles that should be applied to a Sitecore project.

Habitat is a real Sitecore project implemented on the Sitecore Experience Platform using Helix. It is an example that allows developers to see how Helix is applied and lets developers experience a project based on these principles.


Dynatrace AppMon complements Helix, in particular Section 3, which focuses on DevOps & Development Lifecycle Management. AppMon provides deep insight into some of the largest production Sitecore environments around the world, and it’s this same unique depth of insight that delivers tremendous value during the development, build, integration, and testing phases outlined in Helix.

The objective of this blog isn’t to go into detail regarding the principles of DevOps; that’s an area covered extremely well by my colleague and subject matter expert, Andi Grabner, and I’ll provide some links to his content at the foot of this blog for further education on DevOps. Instead, we’ll focus on Section 3 of Helix and how Lifecycle and Automation can greatly impact Sitecore project delivery and project outputs.

Lifecycle and Automation

Helix Section 3.1 highlights the Development environment as the place teams will spend most of their time, and thus an obvious candidate for optimization and automation. But taking a sequential and isolated approach to Section 3 would be a mistake. To achieve the full benefits and outputs of DevOps, Section 3.1 needs to be tightly integrated with Section 3.2 ‘Build & Integration’ and Section 3.3 ‘Testing’ to accomplish effective feedback loops between teams.

Essentially what we’re looking to accomplish is a change in approach from the old way of developing and testing that involves multiple manual iterations between dev and test resulting in a slow, inefficient, and painstaking process…

The Old Way: Isolated, manual approach to the software lifecycle

…to the new way, where performance is engineered early in the lifecycle and automation is embraced to speed up development, compress test time, and increase quality so that innovation flourishes and technical debt is minimized.

The New Way: Integrated, automated approach to the software lifecycle

Adding the ‘How’ to Helix

Let’s look at some simple steps Sitecore teams can take to embrace Lifecycle and Automation and create a delivery pipeline that embraces a shift-left approach to quality and performance, with automated feedback loops and quality control.

Building a cross-functional deployment pipeline that aligns teams

Localized Development & Unit Tests

The First Way principle in DevOps talks about the need to minimize downstream problems, and automating quality checks at the individual developer level is a great way to achieve drastic improvements in both code quality and developer productivity. Enhancing a developer’s IDE with Dynatrace provides incredible insight into the inner workings of the developer’s code, for example:

  • Visual representations of runtime transaction flows
  • Sequence diagrams
  • Degradations in KPIs including, but not limited to:
    • Response times
    • Method use
    • Execution times
    • Database query timings
    • Exceptions
    • Error counts
    • Logging
    • Number of remote calls
  • Full comparison analysis against either prior local builds or operational builds
  • Comparison analysis between specific transactions currently being repaired and corresponding fatal transactions collected during an operational incident.

Dynatrace can pinpoint issues faster while reducing the need for numerous breakpoints and line-by-line debugging. Call stacks are recorded each time the developer runs the code, enabling end-to-end post-mortem analysis of every local run. Not only can the developer prevent a risky check-in, but designers such as solution engineers and architects can perform visual architectural validations (as opposed to having to do full code reviews).

1. Using the Dynatrace Visual Studio plugin (plugins for other IDEs, such as Eclipse, are also available), developers can launch applications with an injected AppMon Agent directly from Visual Studio.
Launching the Application with AppMon Agents instrumented

2. Executing unit tests using a tool like NUnit or directly from Visual Studio allows the AppMon Agent to capture data in the form of PurePaths:

PurePath Technology® is a complete multi-dimensional trace of every transaction in a monitored application providing comprehensive visibility across web browsers, web apps, mobile apps, web servers, Java, .NET, PHP, databases, middleware, and mainframe apps.
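
As an aside, AppMon traces Java as well as .NET (see the technology list above), so on the code side a plain unit test is all that’s needed; the agent attaches to the test runner’s process. Below is a minimal, hypothetical JUnit sketch — class and method names are my own and nothing in it is AppMon-specific:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical class under test; in a real project it lives in main sources.
    static class PriceCalculator {
        double totalWithDiscount(double total, double discountRate) {
            return total * (1.0 - discountRate);
        }
    }

    // When the test JVM is launched with the AppMon agent attached, executing
    // this test is recorded as a PurePath: method timings, database and remote
    // calls, and exceptions along the call tree become visible for analysis.
    @Test
    public void discountIsAppliedToOrderTotal() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.0, calculator.totalWithDiscount(200.0, 0.10), 0.001);
    }
}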

3. Developers can quickly analyze the PurePaths generated from the unit test to identify performance bottlenecks. The Transaction Flow view in Dynatrace provides a simple way to identify where time is being spent, thereby significantly speeding up the analysis process.

An example Transaction Flow view of performance

When analyzing the PurePath, right-clicking and choosing ‘source lookup > open in IDE’ will take you to the exact line of code that executed, in Visual Studio:

Opening the IDE at the exact line of code that executed

4. PurePaths from different builds, or even captured from Production, can be compared directly to identify performance regressions or performance improvements, and validate architecture before promoting code to higher environments like Continuous Integration or Continuous Deployment (CI/CD).

Example of an API comparison from 2 different builds (green is performance improvement, red is performance regression)

Build & Integration Test Automation

Moving beyond code check-in and onto a CI build server such as Jenkins, Bamboo, or TeamCity, CI provides various steps and phases in which developers can test and piece together all of the code necessary for deployment. During a CI build step, a series of unit testing scripts should be executed to exercise as many of the critical methods as possible. (Note: while not advocating a particular tool, consider using code coverage tools such as Clover, SonarQube, Team Test, or dotCover to determine the critical areas of the code to unit test.)

Jenkins Project Build incorporating Dynatrace Test Automation results

Once the unit test automation scripts have executed, the rich intelligence collected by AppMon can be leveraged, for example to report whether critical KPIs or SLAs are violated. Using build-scripting tools like Maven, Ant, or MSBuild, developers and testers can ensure that performance data is aligned and labeled for each build, and that actions can be auto-orchestrated based on the performance data collected from each build (a sketch of such a gate follows below).
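
What might such an auto-orchestrated action look like? Here is a tool-agnostic, hypothetical Java sketch of a CI gate that fails the build step when a response-time KPI regresses beyond a tolerance against the previous build’s baseline; in practice the two numbers would come from AppMon’s test automation data rather than command-line arguments:

public class PerformanceGate {

    // Allow up to 15% degradation versus the baseline (illustrative threshold).
    static final double TOLERANCE = 0.15;

    static boolean kpiWithinBudget(double baselineMillis, double currentMillis) {
        return currentMillis <= baselineMillis * (1.0 + TOLERANCE);
    }

    public static void main(String[] args) {
        double baseline = Double.parseDouble(args[0]); // last build's avg response time (ms)
        double current = Double.parseDouble(args[1]);  // this build's avg response time (ms)
        if (!kpiWithinBudget(baseline, current)) {
            System.err.printf("KPI violated: %.1f ms vs baseline %.1f ms%n", current, baseline);
            System.exit(1); // non-zero exit marks the CI step as failed
        }
        System.out.println("KPI within budget");
    }
}

A build script (Maven, Ant, or MSBuild, as above) would run such a check after the test phase and let the exit code decide whether the pipeline proceeds.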

The Jenkins Dynatrace Test Automation Dashboard

 

Detailed Test Automation report for a specific build

“It is only through automated testing that we can transform fear into boredom…
The only way you can get people productive is to show them there is a safety net underneath them that will catch errors long before they get into production.”  
Eran Messeri, SCM, Google

Summary

Helix provides a much-needed best-practice framework for Sitecore projects, defining ‘What’ should be done, and Dynatrace AppMon complements Helix by providing the ‘How’ to do it. Specifically, Dynatrace AppMon supports continuous delivery processes by:

  • Promoting the sharing of performance data across development, test, & operations
  • Providing reliable metrics along your delivery pipeline through automation interfaces for CI/CD systems, such as Jenkins, Bamboo, and TeamCity
  • Delivering more stable applications in production faster through automatic analysis of regressions and the comparison of response times, structural differences, code executions, errors, exceptions and database performance

To get started, download the Dynatrace AppMon free 30-day trial and utilize our Share Your PurePath program by sending PurePaths to our Dynatrace experts.

If you’re new to Dynatrace and want to evaluate Dynatrace for your .NET application, check out this YouTube video, ‘What is Dynatrace AppMon?’, which includes instructions for getting started.

Further Reading on DevOps

Continuous Innovation with Dynatrace AppMon & UEM 6.5

Scaling Continuous Delivery: Shift-Left Performance to Improve Lead Time & Pipeline Flow

Automated Optimization with Dynatrace AppMon & UEM

The post Adding the ‘How’ to ‘What’ for Sitecore Helix Test Automation appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

SQL queries for LoadRunner Analysis graphs

My Load Test - Thu, 12/08/2016 - 08:10
LoadRunner Analysis is a powerful tool for understanding exactly what happened during a load test. You can use it to slice and dice your performance test results data, then you can export selected graphs/charts for your Performance Test Summary Report. Some people may want to use a different tool to create charts from their LoadRunner […]
Categories: Blogs

Explore the new license bundles that are available in StormRunner Load 2.2 Release

HP LoadRunner and Performance Center Blog - Wed, 12/07/2016 - 21:51


The latest StormRunner Load release has a bunch of new, exciting features for you to experience.  One of the most important new features is a change to our license model. We have changed the bundling a bit, trying to make the product even more flexible and affordable.

Categories: Companies

Creating configuration snippets

IBM UrbanCode - Release And Deploy - Wed, 12/07/2016 - 20:53
The steps and configuration values differ depending on which WebSphere Application Server – Configure plug-in version you use.
Categories: Companies

Cognitive Complexity, Because Testability != Understandability

Sonar - Wed, 12/07/2016 - 13:35

Thomas J. McCabe introduced Cyclomatic Complexity in 1976 as a way to guide programmers in writing methods that “are both testable and maintainable”. At SonarSource, we believe Cyclomatic Complexity works very well for measuring testability, but not for maintainability. That’s why we’re introducing Cognitive Complexity, which you’ll begin seeing in upcoming versions of our language analyzers. We’ve designed it to give you a good relative measure of how difficult the control flow of a method is to understand.

Cyclomatic Complexity doesn’t measure maintainability

To get started let’s look at a couple of methods:

int sumOfPrimes(int max) {              // +1
  int total = 0;
  OUT: for (int i = 1; i <= max; ++i) { // +1
    for (int j = 2; j < i; ++j) {       // +1
      if (i % j == 0) {                 // +1
        continue OUT;
      }
    }
    total += i;
  }
  return total;
}                  // Cyclomatic Complexity 4
 
  String getWords(int number) { // +1
    switch (number) {
      case 1:                   // +1
        return "one";
      case 2:                   // +1
        return "a couple";
      default:                  // +1
        return "lots";
    }
  }        // Cyclomatic Complexity 4

These two methods share the same Cyclomatic Complexity, but clearly not the same maintainability. Of course, this comparison might not be entirely fair; even McCabe acknowledged in his original paper that the treatment of case statements in a switch didn't seem quite right:

The only situation in which this limit [of 10 per method] has seemed unreasonable is when a large number of independent cases followed a selection function (a large case statement)...

On the other hand, that's exactly the problem with Cyclomatic Complexity. The scores certainly tell you how many test cases are needed to cover a given method, but they aren't always fair from a maintainability standpoint. Further, because even the simplest method gets a Cyclomatic Complexity score of 1, a large domain class can have the same Cyclomatic Complexity as a small class full of intense logic. And at the application level, studies have shown that Cyclomatic Complexity correlates to lines of code, so it really doesn't tell you anything new.

Cognitive Complexity to the rescue!

That's why we've formulated Cognitive Complexity, which attempts to put a number on how difficult the control flow of a method is to understand, and therefore to maintain.

I'll get to some details in a minute, but first I'd like to talk a little more about the motivations. Obviously, the primary goal is to calculate a score that's an intuitively "fair" representation of maintainability. In doing so, however, we were very aware that if we measure it, you will try to improve it. And because of that, we want Cognitive Complexity to incent good, clean coding practices by incrementing for code constructs that take extra effort to understand, and by ignoring structures that make code easier to read.

Basic criteria

We boiled that guiding principle down into three simple rules:

  • Increment when there is a break in the linear (top-to-bottom, left-to-right) flow of the code
  • Increment when structures that break the flow are nested
  • Ignore "shorthand" structures that readably condense multiple lines of code into one

Examples revisited

With those rules in mind, let's take another look at those first two methods:

                                // Cyclomatic Complexity    Cognitive Complexity
  String getWords(int number) { //          +1
    switch (number) {           //                                  +1
      case 1:                   //          +1
        return "one";
      case 2:                   //          +1
        return "a couple";
      default:                  //          +1
        return "lots";
    }
  }                             //          =4                      =1

As I mentioned, one of the biggest beefs with Cyclomatic Complexity has been its treatment of switch statements. Cognitive Complexity, on the other hand, only increments once for the entire switch structure, cases and all. Why? In short, because switches are easy, and Cognitive Complexity is about estimating how hard or easy control flow is to understand.

On the other hand, Cognitive Complexity increments in a familiar way for the other control flow structures: for, while, do while, ternary operators, if/#if/#ifdef/..., else if/elsif/elif/..., and else, as well as for catch statements. Additionally, it increments for jumps to labels (goto, break, and continue) and for each level of control flow nesting:

                                          // Cyclomatic Complexity    Cognitive Complexity
int sumOfPrimes(int max) {                //          +1
  int total = 0;
  OUT: for (int i = 1; i <= max; ++i) {   //          +1                      +1
    for (int j = 2; j < i; ++j) {         //          +1                      +2 (nesting=1)
      if (i % j == 0) {                   //          +1                      +3 (nesting=2)
        continue OUT;                     //                                  +1
      }
    }
    total += i;
  }
  return total;
}                                         //          =4                      =7

As you can see, Cognitive Complexity takes into account the things that make this method harder to understand than getWords - the nesting and the continue to a label. So while the two methods have equal Cyclomatic Complexity scores, their Cognitive Complexity scores clearly reflect the dramatic difference between them in understandability.
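
A practical consequence, using my own illustration rather than one from the white paper: the metric rewards exactly the refactoring you would want here. Extracting the inner loop into a helper method removes both the label and the deep nesting, and, applying the increments shown above, the worst single-method score drops from 7 to 3:

int sumOfPrimes(int max) {
  int total = 0;
  for (int i = 1; i <= max; ++i) {    // +1
    if (isPrime(i)) {                 // +2 (nesting=1)
      total += i;
    }
  }
  return total;
}                  // Cognitive Complexity 3

boolean isPrime(int number) {
  for (int j = 2; j < number; ++j) {  // +1
    if (number % j == 0) {            // +2 (nesting=1)
      return false;
    }
  }
  return true;
}                  // Cognitive Complexity 3

Each method now reads linearly from top to bottom, and the scores reflect that.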

In looking at these examples, you may have noticed that Cognitive Complexity doesn't increment for the method itself. That means that simple domain classes have a Cognitive Complexity of zero:

                              // Cyclomatic Complexity       Cognitive Complexity
public class Fruit {

  private String name;

  public Fruit(String name) { //        +1                          +0
    this.name = name;
  }

  public void setName(String name) { // +1                          +0
    this.name = name;
  }

  public String getName() {   //        +1                          +0
    return this.name;
  }
}                             //        =3                          =0

So now class-level metrics become meaningful. You can look at a list of classes and their Cognitive Complexity scores and know that when you see a high number, it really means there's a lot of logic in the class, not just a lot of methods.

Getting started with Cognitive Complexity

At this point, you know most of what you need to get started with Cognitive Complexity. There are some differences in how boolean operators are counted, but I'll let you read the white paper for those details. Hopefully, you're eager to start using Cognitive Complexity, and wondering when tools to measure it will become available.

We'll start by adding method-level Cognitive Complexity rules in each language, similar to the existing ones for Cyclomatic Complexity. You'll see this first in the mainline languages: Java, JavaScript, C#, and C/C++/Objective-C. At the same time, we'll correct the implementations of the existing method-level "Cyclomatic Complexity" rules to truly measure Cyclomatic Complexity (right now, they're a combination of Cyclomatic and Essential Complexity).

Eventually, we'll probably add class/file-level Cognitive Complexity rules and metrics. But we're starting with Baby Steps.

Categories: Open Source

Ranorex 6.2 Released

Ranorex - Wed, 12/07/2016 - 13:29

We’re excited to announce that Ranorex 6.2 is ready for you to download! In this release, we’ve focused on providing advanced technology support for desktop and mobile applications, as well as on making it easier for teams with mixed skills to collaborate on test automation projects.

Advanced Technology Support

We’ve made the support of innovative technologies a priority and now enable testing of Chromium-based frameworks in desktop apps and WKWebView objects in iOS apps.

Chromium Embedded Framework (CEF) support

CEF is one of the most frequently used frameworks for embedding web content in desktop applications. In addition to Google’s native CEF implementation, we now also support testing of these Chromium-based frameworks:

  • CefSharp
  • Electron
  • NW.js
  • Qt WebEngine

Find out more about the CEF support

WKWebView support

The WKWebView class enables you to embed interactive web content in your iOS mobile applications. Using Ranorex 6.2, you can now test your embedded WKWebView objects.

User Code Library

Sometimes, you need to add further functionality to your tests with user code. We understand that in a team with mixed skills, not everyone wants to code. That’s why we’ve introduced the user code library with Ranorex 6.2. It gives you direct access to the user code methods your colleagues have previously created in a test automation solution, so you can add additional functionality to your tests without writing a single line of code.

A quick overview of the workflow:

If you’re a developer or tester with programming skills, you can start by filling the user code library with user code methods. You can logically organize methods that relate to a specific workflow or application area. A tip: add a description to each method. This makes it easier for your colleagues to find the right user code method.

As a tester in your team, you can directly access the library from the action table and select a method from there to use it as an action. This way, you can add further functionality to your tests without having to dip into code!

Find out more about the user code library

We hope you have as much fun using this update as we’ve had creating it!

Learn more about Ranorex 6.2
Download Free 6.2 Trial

The post Ranorex 6.2 Released appeared first on Ranorex Blog.

Categories: Companies

Automating Code Coverage Analysis

Testing TV - Wed, 12/07/2016 - 11:18
In the world of Agile, lean and reliable automated software testing that covers a product’s functionality is essential. However, the process of creating, updating, and reviewing these tests is often tedious, unpredictable, and difficult to track, as tests are developed simultaneously by multiple Scrum teams. What often results are tests that […]
Categories: Blogs

Agile Testing without Automation?

Software Testing Magazine - Wed, 12/07/2016 - 10:56
Most research on Agile testing and QA assumes highly automated testing/CI and an Agile or Scrum project-management structure. How can we iterate towards a more Agile testing process, with all the benefits that entails, when some of the common requirements are missing or undesirable in the near term?
  • Drive quality as a core principle through communication and collaboration, with QA test “consultants” embedded with development teams: we discuss real-world results from Puppet on feature turn-around time and relevant defect rates.
  • Derive transparency on upcoming features and requirements through quality user stories and acceptance criteria, BDD or otherwise.
  • Risk analysis driven by all teams, particularly Product, QA, and Development: we discuss the importance of risk-relative testing, and why it’s important in some cases to test less, rather than more.
  • Whatever is prioritized to test should be transparent to all groups and stakeholders: everyone knows communication is important, but how can we agree upon the most effective details that need constant discussion with effortless transfer?
We discuss methods and motivation on communication: instances of success and opportunities for improvement at Puppet are presented. Video producer: http://www.pnsqc.org/
Categories: Communities

Klaros Test Management 4.6 Provides Better JIRA Integration

Software Testing Magazine - Tue, 12/06/2016 - 20:00
Klaros Test Management version 4.6 has been updated with a set of new features and improvements. The focus of this release is improved integration with issue and requirements management systems. In addition, the job management includes new features and managing test suites is now more user-friendly. The highlights of Klaros Test Management 4.6 are:
1. Processing of issues and the background synchronization with requirements management systems is now faster and includes additional supported custom fields
2. The calculation of the execution and success rate of nested tasks has been refined
3. Dependencies between test jobs can now be defined and will block job execution until dependency criteria are resolved
4. Additional filter options for finding test artifacts
5. Standardized comments on test steps can be created and selected individually
6. Test instructions can now contain references to image files, which are automatically displayed as a preview image in the text
7. Test instructions can be dynamically supplemented by test data to be used in the test
8. Multiple selection is now also possible when removing test cases from tests
Categories: Communities

Parasoft Updates Software Testing Solution

Software Testing Magazine - Tue, 12/06/2016 - 18:40
Parasoft has announced the latest release of Development Testing Platform (DTP) and DTP Engines for C/C++, .NET, and Java (C/C++test, dotTEST, and Jtest). Parasoft DTP consistently applies software quality practices across teams and throughout the SDLC, enabling quality efforts to shift left and delivering a platform for automated defect prevention and the uniform measurement of risk. This release builds on Parasoft’s approach to continuously improving software quality processes. DTP 5.3 and DTP Engines 10.3 include new features and enhancements that provide deeper visibility into the effects of incremental changes to the code, streamline software quality activities on the developer desktop, and reduce the risk associated with safety-critical software development. Other key features in this release:
  • Continuous Quality Assistant (CQA): analyzes code as the developer performs actions, such as opening and saving files, prior to running a full analysis.
  • Updates and enhancements to interfaces, widgets, and the award-winning Process Intelligence Engine (PIE).
  • New Marketplace extensions.
  • Enhanced environment support, including Oracle 12c and MySQL 5.7.
Categories: Communities

Ten Ways to Build Your Software Testing Skills

As a software testing and QA consultant over the past 27 years, I have worked with hundreds of organizations and tens of thousands of testers. Over that time, I have observed two types of people – those that see software testing as a job and those that see software testing as a career.

Those that see testing as a career typically advance in their jobs and have a higher level of self-esteem. Those that see testing only as a job often get bored and complain about the lack of opportunities. The “job only” perspective also indicates that someone is in the testing role for only a limited time. Therefore, there is little incentive to invest in personal improvement.

Of course, not everyone is cut out for software testing. The role can be frustrating at times, especially when the tester is blamed for the defects they report. I jokingly say that software testing conferences are like mass group therapy for software testers and test managers. It is interesting to watch people realize that they are not the only ones with unrealistic project managers, difficult end-users, technologies that are difficult to test, and, oh, that automation stuff that looks so easy but can be so difficult to implement.

I know that I am around professional testers when vigorous (not vicious) debate breaks out over seemingly minor differences in test philosophies, approaches and techniques. It shows that people have thought a lot about the ideas they are defending or opposing.

If you see software testing as a profession instead of a job, then it’s up to you to grow. The greatest mistake you can make is to stop learning and growing. For those that see software testing and QA (yes, there is a difference) as a professional career choice, here are some ways to grow your career.

1. Set growth goals for the coming year. These don’t have to be huge goals, but without these goals it’s easy to lose focus. Goals also paint the target. You know when you have hit them. Here are some examples:

• Learn how to apply a test technique that is unfamiliar to you
• Develop a specialty area of security testing
• Learn how to use a particular test tool
• Read three books about testing or some related (or even unrelated) topic
• Obtain a certification in testing or a related field
• Speak at a conference
• Write an article

2. Read one or more books on software testing or related topics. It is amazing to me how few people read books that relate to the testing and software development professions. You have more choices than ever before, with hundreds of testing books on the market. Perhaps the greater challenge is to find the books that are worthy of your time. By the way, some of the best books are also the oldest books, available for $5 - $10 from online used booksellers such as www.abebooks.com. Two of my top recommendations are “The Art of Software Testing, 1st Ed.” by Glenford Myers and “Software Testing Techniques, 2nd Ed.” by Boris Beizer. These are foundational books in software testing, written over 30 years ago. However, don’t dismiss them due to age. These books are good for any tester to read. The Beizer book has a technical focus that would serve any tester well in today’s world of testing.

3. Take a training course that aligns with your goals. Even an online course is an easy reach in terms of time and cost. It is amazing what a little training can do. While good training typically will cost money, there are free and inexpensive online courses available. I have over twenty-three e-Learning courses at www.mysofwaretesting.com.

4. Create content. If you really want to learn and grow, then develop a small course, write a major article or start a blog. This not only stretches your abilities, but provides exposure as well. I never thought back in 1989 when I wrote my first testing course (unit testing) that one day I would be able to say I’ve personally written over 70 courses! I never thought I would write two books (and working on five others). And… I’m not saying that is where you will arrive. But the thing I can say is that I learn ten times more creating a class than attending a class. As the saying goes, “The best way to learn is to teach.”

5. Find a coach or mentor. Then, meet with them often enough to glean their wisdom. I know it’s hard sometimes to find the right person to mentor you, but they are out there. Look for people with lots of experience in what you want to do. Ask questions and listen. The trick on this one is that you must take the initiative to seek out the mentoring relationship.

6. Coach or mentor someone yourself. This is where you get to repay your coach or mentor. You learn by listening to the person you are mentoring. I have mentored many people and I learn by dealing with the tough questions they bring me. Admittedly, some people are difficult and are not worthy of your time. However, I have found it to be rare that a mentoring relationship has not been beneficial, both to me, and the person I am mentoring.

7. Test something totally different than you have ever tested before. Yes, this is on your own time and at your own effort, but you can learn a lot and come away with a new marketable skill. Interested in mobile testing? Find a mobile app you find interesting and challenging and test it. A way to make this profitable is to become a crowd tester. I can recommend www.mycrowd.com as a place to learn more about getting started as a crowdtester.

8. Read or watch something totally unrelated to software testing and find lessons in it for testing. Once you start looking for analogies of testing, they are everywhere. One of my favorite TV shows for testing lessons is Mythbusters, but I have also learned from Kitchen Nightmares, Hotel Impossible, Undercover Boss and many others. Novels such as Jurassic Park have some great testing lessons in them. Take notes, then write about what you learn.

9. Speak at a conference. The trends are in your favor. Smaller conferences are becoming more popular, as is finding speakers who are not well-known names in the field. Get a great topic, a case study and develop it into a conference presentation. No takers on your idea? Fine. Create a YouTube video and you will have more views in a few weeks than you would have at a physical conference! The skill you develop in speaking is that of oral communication - a skill that can really propel your success in any field.

10. Contribute to forum discussions. I’m not talking about short, one-sentence responses, but respectful, well-reasoned responses to people’s questions and/or opinions. LinkedIn groups are a great place to start. The growth comes in the articulation and sharing of your feedback and ideas. Especially on LinkedIn, group contributors gain a stronger profile and presence.

You will notice that most of the items I list are active in nature. You grow by doing.

Consider the idea that each of the above actions might add 5% or more to your value to your team, your career, or your company. The combined effect of doing all of these would be phenomenal. The combined effect is not additive, but multiplicative. Doing all ten items would not be a 50% addition of value, but more like a 200% or more addition of value to your career and to your role in your company. I can attest to this in my own career.

This is important because in today’s marketplace, you are paid for the value you bring to a project. Low-value activities are often the first to go when companies decide to cut back. The same holds true for people. The people that are most likely to be retained are those that add value to a project and to a company.

It’s better to build skills today for tomorrow than to realize one day you need skills that will take time to acquire and build.

Categories: Blogs

Testing Infrastructure at Google

Software Testing Magazine - Tue, 12/06/2016 - 16:23
One of the activities of a test engineer at Google is building and improving test infrastructure to help software developers be more productive. In this blog post, Jochen Wuttke gives a concrete example of this task. In one of his projects at Google, Jochen Wuttke was responsible for understanding the software testing issues associated with the maintenance of a legacy system. He discovered two main causes:
  • Tight coupling and insufficient abstraction made unit testing very hard
  • The infrastructure used for the end-to-end tests prevented creating and injecting fakes or mocks for these services
The blog post describes three solutions that he explored to solve these issues. He gives more details about the solution that was retained, how he implemented it, and what results were achieved. His conclusion is that “Building and improving test infrastructure to help engineers be more productive is one of the many things test engineers do at Google. Running this project from requirements gathering all the way to a finished product gave me the opportunity to design and implement several prototypes, drive the full implementation of one solution, lead engineering teams to adoption of the new framework, and integrate feedback from engineers and actual measurements into the continuous refinement of the tool.” Read the complete blog post on https://testing.googleblog.com/2016/11/what-test-engineers-do-at-google.html
Categories: Communities

StormRunner Load 2.2 Release: Extending the DevOps toolchain

HP LoadRunner and Performance Center Blog - Mon, 12/05/2016 - 23:52


HPE StormRunner Load has always had a very open approach, and in this release we have added new integrations with widely used DevOps tools, namely Git and Bamboo. Keep reading to find out how to utilize these integrations.

Categories: Companies

DevOps Confessions from Fannie Mae, Liberty Mutual, and Capital One

Sonatype Blog - Mon, 12/05/2016 - 22:53
True story. Over the past few years, Fannie Mae transformed the way in which they delivered software. Deploys increased from 1,200/month to 15,000/month. At the same time, productivity increased by 28% while costs were reduced by 30%. But how did they do it?

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Defending the Network: Flow vs. Wire Data

In recent months, at multiple customers, I’ve had the opportunity to observe shockingly similar problems that follow an unfortunately common scenario:

  • There is a perceived site issue
  • A war room or incident is started
  • The network is quickly thrown under the bus
  • Why? Because the complaints are only coming from one site; therefore, it MUST be a network issue

This predicament results in the network team reeling backward as they scramble to find answers and prove innocence. Phone calls are made to vendors for IP accounting (which can take up to 48 hours to produce results), CLI windows fly open as engineers attempt to capture the evidence of the issue in real-time, and everyone on the war room call waits in a not-so-patient manner. Other teams may be status checking their systems, but only enough to be able to claim ‘all systems are operating normally’ when called upon. The entire investigation seems to crawl at a slow pace; either the data isn’t there, or if it is, it is merely a snapshot, or it lacks the relevant detail to definitively prove the network guilty or innocent.

If I could trademark a measurement for this activity it would be Mean Time to Innocence (MTTI). This would be a sub-metric to the more cumulative Mean Time to Resolution (MTTR) normally measured in these scenarios.

This got me thinking. Why is this such a challenge for organisations? Why were these network teams unable to swat away these claims of the network being the root cause, when all the server guy had to say for his technology was ‘it’s on and available’? I think some of the issue is ownership. Most organisations don’t own their WAN, which is why we have telecoms companies. Instead, they normally have limited read access to an edge router or to firewalls, and that is enough — until the next time the above situation happens.

The question remains: how do you reduce your MTTI (see, it is slowly catching on)? The answer, fundamentally, is through effective monitoring. This opens the next can of worms — how and to what extent? The solution may not be as difficult as you might expect. As with any good solution, you need to understand three things:

  • What are the use cases you are looking to address?
  • What types of data are available to you?
  • What use cases can be addressed by what types of data?

Step 1 is quite easy to tackle. You can probably list off a few scenarios (I already alluded to one above) that come at you every time something goes pear-shaped. Once you’ve brainstormed all the scenarios you’ve encountered, or, even better, looked into your ticketing system so your list is backed up with data, you’re ready to move on to step 2.

My brainstorm short list of use cases:

  • Is my link available?
  • What is my link utilisation?
  • Who is causing the utilisation?
  • How are they talking to each other?
  • What application are they using?
  • What transactions are they executing?
  • Is my network the cause of the problem or simply a symptom?

Step 2 isn’t too hard to discover. A little Google searching or some casual conversations with your other networking buddies will tell you there are a few different options:

  • NetFlow
    • NetFlow leverages layer 3 and 4 packet header information, observed by devices in the network infrastructure, to determine usage metrics for specific applications defined by port number; these metrics are exported from the device in the form of flow data. This flow data (i.e., periodic summaries of the traffic traversing an interface) is sent to a collector, which in turn might generate reports on traffic volumes and conversation matrices (a minimal collector sketch follows this list).
  • NBAR2 (Network Based Application Recognition)
    • NBAR relies on deep and stateful packet inspection on Cisco devices. NBAR identifies and classifies traffic based on payload attributes and protocol characteristics observed on the network infrastructure.
  • Flexible NetFlow (FNF)/ IPFIX
    • Flexible NetFlow and IPFIX are extensions to NetFlow, sometimes referred to as NetFlow v9 and v10. These extend the basic NetFlow concept by allowing the inclusion of extended metrics to be exported within the Flow sets. These metrics include those defined within NBAR2.
  • Wire data
    • Wire data simply refers to the extraction of useful information from network packets captured at some point on your network. While this includes insights from protocol headers, it generally implies deeper inspection of application payloads. Wire data is typically obtained via a port mirror or through a physical or virtual tap.
    • A tap sits in-path and provides exact copies of the actual packets on the wire, sending these to an aggregator, a probe, or a sniffing device.
    • A port mirror, often referred to by its Cisco-centric term SPAN, follows a similar premise as a tap, except that it is an active function of the network infrastructure: packets are copied from one or more device interfaces and sent out via another interface to an aggregator, probe, or sniffing device.
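
To make the NetFlow option concrete, here is a minimal, hypothetical Java sketch of what a flow collector does with exported flow records: it folds per-flow byte counts into a conversation matrix. The record fields and sample values are illustrative, not a real NetFlow parser.

import java.util.HashMap;
import java.util.Map;

public class FlowCollector {

    // Illustrative subset of a NetFlow record: L3/L4 header fields plus a byte count.
    record FlowRecord(String srcIp, String dstIp, int dstPort, long bytes) {}

    public static void main(String[] args) {
        // Stand-in for flow data exported by a router or switch.
        FlowRecord[] export = {
            new FlowRecord("10.0.0.5", "10.1.1.20", 443, 120_000),
            new FlowRecord("10.0.0.5", "10.1.1.20", 443, 80_000),
            new FlowRecord("10.0.0.9", "10.1.1.21", 3306, 5_000),
        };

        // Conversation matrix: total bytes per source/destination pair.
        Map<String, Long> conversations = new HashMap<>();
        for (FlowRecord flow : export) {
            String key = flow.srcIp() + " -> " + flow.dstIp();
            conversations.merge(key, flow.bytes(), Long::sum);
        }
        conversations.forEach((pair, bytes) ->
            System.out.printf("%s : %d bytes%n", pair, bytes));
    }
}

The destination port (443, 3306) is what classic NetFlow uses to attribute volume to an application, which is precisely why it can report who talks to whom but not which transactions they execute.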

Step 3 is where all the fun is because it will ultimately help determine your solution. I created a matrix comparing use cases vs. data types.

Use case                                                  NetFlow   NBAR2   FNF/IPFIX   Wire data
Is my link available?                                        X        X         X           X
What is my link utilisation?                                 X        X         X           X
Which users are causing the utilisation?                     X        X         X           X
What applications are users using?                                    X         X           X
What is the quality of the network path (loss, delay)?                          X           X
Which transactions are users executing?                                                     X¹
How long do individual transactions take to complete?                                       X¹
What is the quality of the users’ experience?                                               X¹

¹ If your monitoring solution can decode and analyse the application protocol.

Based on the above, we can conclude that wire data has the most potential and should be used in the most critical places, such as the data centre where your applications reside. For a branch office you can gain broad insights using NetFlow, and more granular insight with Flexible NetFlow/IPFIX and NBAR.

Luckily for me, our DC RUM product can accept all three types of data and uses the same reporting engine to display all of them, making it easy to drive down that MTTI.

Here are two NetFlow reports I specifically created for that use case I mentioned at the beginning.

Figure 1. Utilisation and conversation with byte counts

Figure 2. Utilisation and port number shown

The client can easily answer those use cases mentioned in the matrix. In this example, I can see a spike that pretty much maxed out the link for a duration of 60 minutes, from 4:45AM to 5:45AM. The table in Figure 1 shows the IPs that caused that spike, and one type of traffic (HTTP) doubled during that window (4:45AM – 5:45AM).

To address the other issue I was working on, I leveraged wire data and DC RUM’s SAP transaction-level decode. Two sites were having issues with their SAP ECC system. The perception, as I mentioned earlier, was that the network was the only cause. Letting DC RUM do the analysis work for me, I could see their problem pattern for slowness changed considerably after the 29th. The problem was dual-edged. While they had earlier addressed the client issue (shown in yellow), DC RUM now pointed towards data centre tiers being the issue (sorry server guy).

Figure 3. Slow Operation Cause Breakdown

Drilling into a single T-code (the name for SAP transactions), the issue was very apparent. Almost all of the transaction delay was on the server side.

Figure 4. Load Sequence

You can see the clear difference in data types. In the NetFlow example we can see symptoms: high link utilisation, an HTTP traffic volume increase, and so on, but NetFlow’s lack of application-level insights limits our ability to isolate a fault domain. Yes, there was a sustained spike, but it is difficult to derive what was causing the incident and how many users it affected. While the analysis did reduce our MTTI and possibly our MTTR, it didn’t provide enough detail to effectively isolate the fault domain.

In the wire data example, you can begin to appreciate the clarity gained by decoding individual transactions – in this case, for SAP. Not only could we see that the issue was dual-pronged, we could clearly articulate the issues with information that makes sense to everyone. We’ve captured the user’s login name, identified the transaction by its meaningful name (L100_ALV is the SAP transaction ID), its precise duration (2 minutes), and the distribution of delays (99% server time!), and we are able to direct the investigation accordingly. This changes the dynamic of the war room from defensive – trying to prove your innocence – to collaborative, driving rapid positive results.

Whether you are looking to invest or expand in a product suite soon, I urge you to run through a similar exercise to evaluate how your product selection could address your specific use cases. Don’t just look at the answers the products give you. Interrogate how they analyse that data type, if at all, to get you to that resolution. Application awareness (i.e. NetFlow/NBAR2) and transaction fluency (i.e. wire data with analysers for application-specific protocols and behaviours) are two completely different things and should be well studied when considering your next-generation monitoring solution.

Now, off to patent that metric…

The post Defending the Network: Flow vs. Wire Data appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Why WAN optimization requires performance management

WAN optimization can only be as efficient as you let it be. Recently I’ve seen a good example that illustrates how, without diligent application performance management, multi-million WAN optimization investments can quickly turn into a lukewarm “implementation completed” outcome, to say the least.

The challenge at one of our customers started in a typical way: a large company with operations around the globe, supported by a large SAP ERP and CRM landscape, was constantly hit by complaints about slowness of its vital SAP systems, which increased to a level that raised the concerns of the management team. The IT team was tasked with addressing the issues, provided an adequate budget, and given a very simple goal: end users should no longer complain about SAP slowdowns affecting their productivity.

Investments went into the most prominent areas: speeding up the worldwide network and bringing the whole application delivery chain under APM control. Dynatrace became a partner in the project and delivered complete end-to-end performance monitoring of all SAP applications, with comprehensive visibility into transactions and users known by name, for all global locations. In fact, all applications, including SAP, web, and non-web apps, are now monitored for performance, usage, and availability thanks to the flexibility and scalability of Dynatrace DC RUM.

DC RUM quickly proved that the SAP systems were scaled properly and behaved predictably, with no major slowdowns incurred within the data center part of the application delivery chain.

The network team partnered with their WAN provider and implemented WAN optimization controllers from Cisco; the Cisco WAAS devices were installed at all vital locations where the centrally hosted SAP applications are delivered. The project was completed, with all WOCs operational and configured to optimize all network traffic.

However, the end users didn’t stop complaining about the application performance. For them hardly anything changed. Now what?

The optimized problem

A key ingredient of the Dynatrace APM offering is the Guardian service, targeted at implementing the best APM practices at our customers. The on-site Guardians stood up to the challenge and took a deep dive into the efficiency of the whole application delivery chain – including the WAN optimization technology. Dynatrace DC RUM measures network traffic on both the LAN and WAN sides of the WAN optimization controllers and reconstructs the exact flow of each transaction through the optimized network and the data center network. DC RUM does this for every application on the network and for every user, in both Cisco WAAS and Riverbed Steelhead environments.

Diagram 1. DC RUM’s network probe monitors both sides of the data center WOC

DC RUM understands specifics of the WAN optimization controllers’ protocols on the WAN side and thus precisely measures how each individual application transaction is optimized and delivered to the remote client. With such deep knowledge of the data flow on the application protocol level, DC RUM quickly identified some unexpected effects.

Sure, the WAN optimization and application acceleration technologies helped the typical target apps: the customer’s SMB and HTTP traffic was optimized well, with compression levels assuring traffic on the WAN side decreased, leaving more bandwidth for other critical apps like SAP GUI and SAP portal apps delivered over HTTPS.

However, negative network traffic reduction levels were observed for the most important apps. In other words, there was more traffic on the WAN side than on the LAN side in the data center. This is not a desired effect of WAN optimization, especially for the apps that are the primary target of the performance improvement efforts.

Negative traffic compression ratio observed on the WAN link indicates inefficient WOC operation for the application of focus
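
Why compressing already-compressed traffic backfires is easy to demonstrate. Here is a small, illustrative Java sketch (my own, not part of the customer analysis) using the JDK’s Deflater: random bytes stand in for traffic that is already compressed, and the “compressed” output comes out larger than the input:

import java.util.Random;
import java.util.zip.Deflater;

public class DoubleCompression {
    public static void main(String[] args) {
        // Already-compressed traffic is statistically close to random bytes.
        byte[] alreadyCompressed = new byte[100_000];
        new Random(42).nextBytes(alreadyCompressed);

        Deflater deflater = new Deflater();
        deflater.setInput(alreadyCompressed);
        deflater.finish();

        byte[] out = new byte[alreadyCompressed.length * 2];
        int compressedSize = 0;
        while (!deflater.finished()) {
            compressedSize += deflater.deflate(out, compressedSize, out.length - compressedSize);
        }
        deflater.end();

        // Typical result: output is slightly LARGER than the input, i.e. the
        // WAN side carries more bytes than the LAN side -- a negative ratio.
        System.out.printf("in: %d bytes, out: %d bytes%n",
                alreadyCompressed.length, compressedSize);
    }
}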

DC RUM uncovered a more significant issue that can be observed at the TCP traffic flow level: a high number of Client zero window size signals were sent from the remote locations, indicating that the remote WAAS devices are overloaded and can’t process all the traffic they should be processing in time.

Remote WOC TCP receiver flow control limits throughput

Why is that? A look at DC RUM’s WAN optimization efficiency report delivers a clear answer:

Observed compression rates indicate the need for more granular policy configuration
  • Compression of the SAP traffic helps very little – because SAP GUI traffic is by design already compressed.
  • Compression of the HTTPS traffic does not help at all – it actually has a negative bandwidth effect.

Most importantly – compressing the already compressed traffic (SAP GUI and HTTPS) consumes remote WAAS resources, leaving no headroom for the other WAN optimization services of the WAAS. Namely: the Traffic Flow Optimization (TFO) buffers on the remote WAAS cannot be emptied in time because the WAAS CPU is busy decompressing and recompressing the already compressed traffic. This forces TCP flow control to send Client Zero Window Size events to the peer WAAS, limiting throughput and reducing performance.

The net effect? All WAN traffic is slower now than before the WAAS implementation!

Optimizing the WAN optimization

The remedy – once the data is visible – is simple: disabling compression for the specific applications would free WAAS CPU cycles from compressing what’s already compressed, which would let the WAAS TFO thread finish its work on time, which would prevent receive-buffer overflows, which would prevent TCP client zero window size events, which would speed up WAN transmission for every app.

Lessons learned

  • Throwing in WAN optimization technology doesn’t, by itself, solve the network performance problem. WAN optimization needs to be tuned in conjunction with the application mix on the network links, and its effects measured in two categories: bandwidth optimization and response time improvements experienced by the end users.
  • Measuring both requires app-aware performance management tools. Network link utilization measurements don’t reflect what the network carries for whom. Wire data insight alone doesn’t tell what the end users experience. Only application flow analytics uncovers true app-network interaction, and this requires transactional understanding of the application traffic.
  • Leverage APM specialists. No one can be an expert in everything. APM is a team game, so teaming up with specialists helps achieve the desired results faster. In this case the falcon APM eye of the Dynatrace Guardian spotted the WAN optimization inefficiency and triggered the corrective action based on objective end user experience measurements.

With the right APM tooling, relying on decodes of the application’s network protocols, you can understand how applications interact with the network. With this knowledge you can tune the WAN optimization techniques to the application specifics, and thus optimize the WAN optimization itself for the desired effects: improving the end user experience with the applications and achieving cost savings on WAN bandwidth. You may also find useful advice on WAN optimization approaches in this blog post.

The post Why WAN optimization requires performance management appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

The Tweets You Missed in November

Sonar - Mon, 12/05/2016 - 12:05

Here are the tweets you likely missed last month!

"SonarQube 6.x series: Focused and Efficient", by @bellingard https://t.co/XBND3qPUA1

— SonarQube (@SonarQube) November 3, 2016

SonarQube JavaScript 2.18: 10 new rules and significant improvements to the type-based analysis, see https://t.co/ZjCRncEUDw pic.twitter.com/UUs3IWHmi5

— SonarQube (@SonarQube) November 21, 2016

SonarLint for IntelliJ 2.4 hides false-positive and won't-fix issues in connected mode. https://t.co/dRURlXJ0Sk pic.twitter.com/JVHM2kJsgu

— SonarLint (@SonarLint) November 17, 2016

SonarLint for Eclipse 2.3 hides false-positive and won't-fix issues in connected mode. https://t.co/rTXlxtAjKZ pic.twitter.com/1O6zKA5GVl

— SonarLint (@SonarLint) November 28, 2016

SonarLint for Visual Studio 2.8 Released with even more powerful path-sensitive dataflow engine see https://t.co/z7VETuMBrl @VisualStudio pic.twitter.com/pDAvY7lqmU

— SonarLint (@SonarLint) November 29, 2016

Categories: Open Source

OneAgent, Security Gateway, & OneAgent for iOS/Android release notes (version 107)

OneAgent
  • OneAgent log analytics now supports the Docker logging framework, making it possible to access all Docker container log data related to specific applications (i.e., process groups). Log files for processes that write to standard output/standard error and that run in containers that use the default Docker JSON logging driver are also available for analysis in the Dynatrace Log viewer. Docker container logs can now be downloaded from the Log files tab on related Process pages. Container image names and IDs, as well as output types, are available with each log entry. All Log Analytics functionality—including searching, grouping, bookmarking, and event detection—is now fully supported within Docker environments.
.NET
  • Improved reporting of technologies used by monitored .NET processes.
  • ASP.NET Core services now report application IDs and context roots.
Java
  • Queue support for IBM MQSeries.
Nginx
  • Support for OpenResty Nginx builds.
Security Gateway
  • Fixed handling of OneAgent requests (Dynatrace Managed only).
OneAgent for iOS and Android

Dynatrace OneAgent for iOS and Android release 6.5.5 introduces some changes to Gradle and CocoaPods support.

  • Support for version 2.2.2 of the Android Gradle plugin.
    A new instrumentation configuration must be used when instrumenting with the version 2.2.2 plugin:
    buildscript {
        dependencies {
            classpath 'com.android.tools.build:gradle:2.2.2'
            classpath 'com.dynatrace.tools:android:+'
        }
    }

    apply plugin: 'com.dynatrace.tools.android'
    dynatrace {
        defaultConfig {
            applicationId 'YOUR_APP_ID'
            environmentId 'YOUR_ENV_ID'
        }
    }
  • The CocoaPods specification has been rebranded to Dynatrace. It now supports both Dynatrace SaaS/Managed and Dynatrace AppMon. Just use the new Dynatrace pod instead of the deprecated Ruxit specification, as shown below:
     pod 'Dynatrace'
  • Dynatrace OneAgent for iOS now officially supports iOS 10.

The post OneAgent, Security Gateway, & OneAgent for iOS/Android release notes (version 107) appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies
