
Feed aggregator

Test Data Management Risks

Software Testing Magazine - Mon, 06/26/2017 - 15:33
Providing meaningful data to perform software testing is the main challenge of test data management. This issue is even more important in domains where sensitive data is used like healthcare or...

Categories: Communities

You Don’t Have Test Cases, Think Again

PractiTest - Mon, 06/26/2017 - 14:00

NOTICE:
We, at the QABlog, are always looking to share ideas and information from as many angles of the Testing Community as we can.  

Even if at times we do not subscribe to all the points and interpretations, we believe there is value in listening to all sides of the dialogue and letting it help us think about where we stand on the different issues being discussed in our community.

We invite anyone who wants to offer their own opinion on this or any other topic to get in touch with us with their ideas and articles. As long as we believe an article is written in good faith and provides valid arguments for its views, we will be happy to publish it and share it with the world.

Let communication make us smarter, and let productive arguments seed the ideas that grow the next generation of open-minded testing professionals!

*The following is a guest post by Robin F. Goldsmith, JD, Go Pro Management, Inc. The opinions stated in this post are his own. 

 

You Don’t Have Test Cases, Think Again

Recently I’ve been aware of folks from the Exploratory Testing community claiming they don’t have test cases. I’m not sure whether it’s ignorance or arrogance, or yet another example of their trying to gain acceptance of alternative facts that help aggrandize them. Regardless, a number of the supposed gurus’ followers have drunk the Kool-Aid, mindlessly mouthing this and other phrases as if they’d been delivered by a burning bush.

What Could They Be Thinking?

The notion of not having test cases seems to stem from two mistaken presumptions:

1. A test case must be written.
2. The writing must be in a certain format, specifically a script with a set of steps and lots of keystroke-level procedural detail describing how to execute each step.

Exploratory Testing originated from the argument that the more time one spends writing test cases, the less of one's limited test time is left for actually executing tests. That’s true. The conclusions Exploratory claims flow from it are not so true, because they rest on the false presumptions that the only alternative to Exploratory is such mind-numbingly tedious scripts and that Exploratory is the only alternative to such excessive busywork.

Exploratory’s solution is to go to the opposite extreme and not write down any test plans, designs, or cases to guide execution, thereby enabling the tester to spend all available time executing tests. Eliminating paperwork is understandably appealing to testers, who generally find executing tests more interesting and fun than documenting them, especially when extensive documentation seems to provide little actual value.

Since Exploratory tends not to write down anything prior to execution, and especially not such laborious test scripts, one can understand why many Exploratory testers probably sincerely believe they don’t have test cases. Moreover, somehow along the way, Exploratory gurus have managed to get many folks even beyond their immediate followers to buy into their claim that Exploratory tests also are better tests.

But, In Fact…

If you execute a test, you are executing, and therefore have, a test case, regardless of whether it is written down and irrespective of its format. As my top-tip-of-the-year “What Is a Test Case?” article explains, at its essence a test case consists of inputs and/or conditions and expected results.

Inputs include data and actions. Conditions already exist and thus technically are not inputs, although some implicitly lump them with inputs; and often simulating/creating necessary conditions can be the most challenging part of executing a test case.
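To make that definition concrete, here is a minimal sketch of a test case expressed as data, with inputs, conditions, and an expected result. The structure and values are purely illustrative and not taken from the article:

    # A minimal, hypothetical sketch of what any executed test implicitly contains:
    # inputs (data and actions), pre-existing conditions, and an expected result.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        inputs: dict             # data and actions supplied by the tester
        conditions: dict         # state that must already exist before execution
        expected_result: str     # defined before the actual result is observed

    login_test = TestCase(
        inputs={"username": "alice", "password": "wrong-password", "action": "submit login"},
        conditions={"account_exists": True, "account_locked": False},
        expected_result="login rejected with an 'invalid credentials' message",
    )

    # Whether or not any of this is written down, executing the test exercises
    # exactly these three elements.

Whether such a structure lives in a document, a spreadsheet, or only in the tester's head changes nothing about its essence as a test case.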

Exploratory folks often claim they don’t have expected results; but of course they’re being disingenuous. Expected results are essential to delivering value from testing, since expected results provide the basis for the test’s determination of whether the actual results indicate that the product under test works appropriately.

Effective testing defines expected results independently of and preferably prior to obtaining actual results. Folks fool themselves when they attempt to figure out after-the-fact whether an actual result is correct—in other words, whether it’s what should have been expected. Seeing the actual result without an expected result to compare it to reduces test effectiveness by biasing one to believe the expected result must be whatever the actual result was.

Exploratory gurus have further muddied the expected results Kool-Aid by trying to appropriate the long-standing term “testing,” claiming a false distinction whereby non-Exploratory folks engage in a lesser activity dubbed “checking.” According to this con, checking has expected results that can be compared mechanically to actual results. In contrast, relying on the Exploratory tester’s brilliance to guess expected results after-the-fact is supposedly a virtue that differentiates Exploratory as superior and true “testing.”

Better Tests?

Most tests’ actual and expected results can be compared precisely—what Exploratory calls “checking.” Despite Exploratory’s wishes, that doesn’t make the test any less of a test. Sometimes, though, comparison does involve judgment to weigh various forms of uncertainty. That makes it a harder test but not necessarily a better test. In fact, it will be a poorer test if the tester’s attitudes actually interfere with reliably determining whether actual results are what should have been expected.

I fully recognize that Exploratory tests often find issues traditional, especially heavily-procedurally-scripted, tests miss. That means Exploratory, like any different technique, is likely to reveal some issues other techniques miss. Thus, well-designed non-Exploratory tests similarly may detect issues that Exploratory misses. What can’t be told from this single data point is whether Exploratory tests in fact are testing the most important things, how much of importance they’re missing, how much value actually is in the different issues Exploratory does detect, and how much better the non-Exploratory tests could have been. Above all, it does not necessarily mean Exploratory tests are better than any others.

In fact, one can argue Exploratory tests actually are inherently poorer because they are reactive. That is, in my experience Exploratory testing focuses almost entirely on executing programs, largely reacting to the program to see how it works and try out things suggested by the operating program’s context. That means Exploratory tests come at the end, after the program has been developed, when detected defects are hardest and most expensive to fix.

Moreover, reacting to what has been built easily misses issues of what should have been built. That’s especially important because about two-thirds of errors are in the design, which Exploratory’s testing at the end cannot help detect in time to prevent their producing defects in the code. It’s certainly possible an Exploratory tester does get involved earlier. However, since the essence of Exploratory is dynamic execution, I think one would be hard-pressed to call static review of requirements and designs “Exploratory.” Nor would Exploratory testers seem to do it differently from other folks.

Furthermore, some Exploratory gurus assiduously disdain requirements; so they’re very unlikely to get involved with intermediate development deliverables prior to executable code. On the other hand, I do focus on up-front deliverables. In fact, one of the biggest-name Exploratory gurus once disrupted my “21 Ways to Test Requirements Adequacy” seminar by ranting about how bad requirements-based testing is. Clearly he didn’t understand the context.

Testing’s creativity, challenge, and value are in identifying an appropriate set of test cases that together must be demonstrated to give confidence something works. Part of that identification involves selecting suitable inputs and/or conditions, part of it involves correctly determining expected results, and part of it involves figuring out and then doing what is necessary to effectively and efficiently execute the tests.

Effective testers write things so they don’t forget and so they can share, reuse, and continually improve their tests based on additional information, including from using Exploratory tests as a supplementary rather than sole technique.

My Proactive Testing™ methodology economically enlists these and other powerful special ways to more reliably identify truly better important tests that conventional and Exploratory testing commonly overlook. Moreover, Proactive Testing™ can prevent many issues, especially large showstoppers that Exploratory can’t address well, by detecting them in the design so they don’t occur in the code. And, Proactive Testing™ captures content in low-overhead written formats that facilitate remembering, review, refinement, and reuse.

About the Author

Robin F. Goldsmith, JD helps organizations get the right results right. President of Go Pro Management, Inc., a Needham, MA consultancy he co-founded in 1982, he works directly with and trains professionals in requirements, software acquisition, project management, process improvement, metrics, ROI, quality, and testing.

Previously he was a developer, systems programmer/DBA/QA, and project leader with the City of Cleveland, leading financial institutions, and a “Big 4” consulting firm.

He is the author of the Proactive Testing™ risk-based methodology for delivering better software quicker and cheaper, numerous articles, the Artech House book Discovering REAL Business Requirements for Software Project Success, and the forthcoming book Cut Creep—Put Business Back in Business Analysis to Discover REAL Business Requirements for Agile, ATDD, and Other Project Success. A frequent featured speaker at leading professional conferences, he was formerly International Vice President of the Association for Systems Management and Executive Editor of the Journal of Systems Management. He was Founding Chairman of the New England Center for Organizational Effectiveness. He belongs to the Boston SPIN and served on the SEPG’95 Planning and Program Committees. He is past President and current Vice President of the Software Quality Group of New England (SQGNE).

Mr. Goldsmith chaired the attendance-record-setting BOSCON 2000 and 2001, ASQ Boston Section’s Annual Quality Conferences, and was a member of the working groups for the IEEE Software Test Documentation Std. 829-2008 and IEEE Std. 730-2014 Software Quality Assurance revisions, the latter of which was influenced by his Proactive Software Quality Assurance (SQA)™ methodology. He is a member of the Advisory Boards for the International Institute for Software Testing (IIST) and for the International Institute for Software Process (IISP). He is a requirements and testing subject expert for TechTarget’s SearchSoftwareQuality.com and an International Institute of Business Analysis (IIBA) Business Analysis Body of Knowledge (BABOK v2) reviewer and subject expert.

He holds the following degrees: Kenyon College, A.B. with Honors in Psychology; Pennsylvania State University, M.S. in Psychology; Suffolk University, J.D.; Boston University, LL.M. in Tax Law. Mr. Goldsmith is a member of the Massachusetts Bar and licensed to practice law in Massachusetts.

www.gopromanagement.com
robin@gopromanagement.com

 

The post You Don’t Have Test Cases, Think Again appeared first on QA Intelligence.

Categories: Companies

Weekly Wrap – Red Hat Summit, Velocity highlights, Trainmageddon, AI Webinar and more

Round 2 of the weekly summary, and we have just as much digital performance news as last week. In this week’s summary we cover (click to jump to the post):

Dynatrace at Red Hat Summit EMEA

The Red Hat Partner Summit drew over 700 partners, service integrators and resellers in Munich, and we were there showcasing our AI monitoring power. Martin Etmajer, featured centre above, knocked them out with his presentation, “Close the OpenShift Monitoring Gap with Dynatrace“. Unfortunately it wasn’t filmed, but check below for plenty of quality OpenShift content.

The Dynatrace AI webinar series creates record attendance!

Daniel Kaar, Technology Strategist at Dynatrace, has delivered two record-setting webinars, in Asia and Europe. Next week he will re-run the session for the US time zone. Not to be missed! The session covers:

  • Why AI is necessary in today’s world of complex environments and deployments
  • Why current monitoring is not sufficient
  • Better insights into SAP, including support for SAP HANA DB
  • How AI operates when doing root cause and business impact analysis

Register now, and if you are reading this after the above date, don’t worry: you can watch it on demand.

Latest Blogs:

Live from Velocity San Jose 2017

Andi Grabner gives us the highlights from Velocity, including catching up with performance guru Steve Souders (pictured above). In this post Andi provides an overview of his favourite presentations from Verizon, Netflix, Microsoft, Google and more. It’s a feature-rich post that everyone should read. Read more.

Trainmageddon: When the machines stop working, people get upset.

As UK commuters discovered this week the “simple” act of purchasing a train ticket is anything but simple. In fact, from an IT viewpoint it’s a hyper-complex transaction with many potential failure points. But if failure isn’t an option can technologies like artificial intelligence avert disaster?

Customer Corner – Nordstrom, Citrix, Red Hat and more

We’re proud to share the experiences our customers have working with Dynatrace. From COOP Denmark to Nordstrom, Citrix to Raymond James, and thousands of other leading enterprises, our customers’ success reflects the value our “monitoring redefined” mindset delivers to their daily operations. http://buff.ly/2tuk5SS

Dynatrace wins prestigious “Success for All” Sally award for its Champion Playbook

The “Success for All” Sally award from @GainsightHQ validates the Dynatrace approach to ensuring customer success: being transformationally and cross-functionally aligned around positive customer outcomes, with our customer success managers serving as personal customer advocates and strategic drivers for technology adoption, success and value achievement. http://buff.ly/2tCaDgw

Latest Videos

Online Perf Clinic – Power Web Dashboarding with Dynatrace AppMon

30 min demo – Monitoring Redefined – Unified Monitoring

Online Perf Clinic – Advanced Real User Monitoring: Agentless monitoring and SaaS vendor RUM

Featured Video: Dynatrace UFO

Whilst we didn’t publish this last week, if you haven’t seen this video yet you’ll want to take a look.

Perform 2017 – Register for a Perform event near you.

We are on the road running Perform in more than 15 cities around the world.

The post Weekly Wrap – Red Hat Summit, Velocity highlights, Trainmageddon, AI Webinar and more appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Come Share the Jenkins World Keynote Stage with Me!

Jenkins World is approaching fast, and the event staff are all busy preparing. I’ve decided to do something different this year as part of my keynote: I want to invite a few Jenkins users like you to come up on stage with me. There have been amazing developments in Jenkins over the past year. For my keynote, I want to highlight how the new Jenkins (Pipeline as code with the Jenkinsfile, no more creating jobs, Blue Ocean) is different and better than the old Jenkins (freestyle jobs, chaining jobs together, etc.). All these developments have helped Jenkins users, and it would be more meaningful to have fellow users, like you, share their...
Categories: Open Source

Continuous Release beta now available

IBM UrbanCode - Release And Deploy - Fri, 06/23/2017 - 20:26

The beta version of the IBM® Cloud Continuous Release Bluemix® service is now available. Continuous Release is an enterprise-scale release management tool that makes it easy to combine on-prem and cloud-native tools into a single release event. You can, for example, combine Continuous Delivery pipeline tasks with UrbanCode Deploy tasks in a single deployment plan.

With Continuous Release, you can automate as much of your release process as you need. Combine manual tasks with automated tasks that manage Continuous Delivery composite pipelines and UrbanCode Deploy applications. Other task types automate email messaging and Slack notifications.

Beta features

Some of the beta features include:

  • Teams use releases to collaborate on multiple deployments and events.
  • Coordinate releases across multiple Bluemix organizations.
  • Combine Continuous Delivery composite pipelines and UrbanCode Deploy applications in a single deployment plan.
  • Use the calendar to manage release milestones and blackout windows.
  • Import and customize events and releases.

Bluemix beta services

Like many Bluemix services, Continuous Release went through an experimental phase before the beta release. During the experimental phase, users were free to try the service as new features were added at a steady cadence. With the beta version, Continuous Release is still free and limited support is now available. If you haven’t already, open a Bluemix account–it’s also free. Early adopters can influence product direction with their feedback.

Categories: Companies

Detailed comparisons of environment properties in IBM UrbanCode Deploy 6.2.5

IBM UrbanCode - Release And Deploy - Fri, 06/23/2017 - 16:15

For a long time, we’ve had a simple way of comparing two environments in IBM® UrbanCode™ Deploy: next to an environment, click More > Compare and pick a target environment to compare to. The resulting page shows a comparison between the component versions, files, and properties on the environments.

Comparing two environments, with columns that show which component versions are deployed to each environment

In version 6.2.5, we’ve added a more detailed comparison of properties across multiple environments. You pick a single reference environment and then select one or more other environments to compare it to. The server shows all of the environment properties in the reference environment and how the properties in the other environments compare.

Comparing the properties in a reference environment to three other environments

From this view, you can see which property values match on the environments and which values don’t match. You can also see missing values and required values that are not filled in.
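Conceptually, the comparison amounts to a per-property diff against the reference environment. The sketch below illustrates the idea only; the environment names, properties, and output are hypothetical and not the product’s implementation:

    # Illustrative-only sketch of comparing environment properties to a reference.
    reference = {"db.url": "jdbc:db2://prod-db:50000", "feature.flag": "on", "api.key": None}
    others = {
        "staging": {"db.url": "jdbc:db2://stage-db:50000", "feature.flag": "on"},
        "qa":      {"db.url": "jdbc:db2://qa-db:50000", "feature.flag": "off", "api.key": "abc"},
    }

    for env_name, props in others.items():
        for key, ref_value in reference.items():
            value = props.get(key)
            if value is None:
                status = "missing or not filled in"
            elif value == ref_value:
                status = "matches the reference"
            else:
                status = f"differs ({value!r} vs {ref_value!r})"
            print(f"{env_name}: {key} {status}")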

To open this kind of comparison, open an application and then click Compare Environments. Then, select a reference environment and one or more other environments.

For more information, go to IBM Knowledge Center: Comparing environments.

Categories: Companies

CloudBees Awarded "Best in Show - DevOps" in the 2017 SD Times 100

When you win an award, one time, it is an unexpected delight. When you win it a second time, you think “wow.” Third time lucky, you count your blessings. But the fourth, fifth and sixth times? You feel really honored, especially when it’s an award from a publication the likes of SD Times. Yes, CloudBees has been awarded status as an SD Times 100 winner - for the sixth year in a row. We are truly humbled.

We think the award and our longevity with it is a true testament to the ongoing innovation CloudBees strives for and that our employees deliver on every day, whether in engineering, sales, technical support, marketing or administration. We have an amazing customer base - many of them at the leading edge of DevOps and continuous delivery, returning real business value to their companies every day. All of them using our enterprise Jenkins offerings to power their business.

The SD Times 100 list is a collaboration by the editors of SD Times to honor companies that have demonstrated innovation, advancing the state of software development. The categories for 2017 included:

  • ALM and Development Tools
  • APIs, Libraries and Frameworks
  • Big Data and Business Intelligence
  • Database and Database Management
  • DevOps*
  • Influencers
  • IT Ops
  • Security & Performance
  • Testing
  • The Cloud
  • User Experience

* CloudBees was included in the category of DevOps

We are honored to be recognized beside our fellow industry influencers, disruptors and unicorns. This industry only ever speeds up. We are along for the ride - and building the engine for it. Stay tuned for much more to come from CloudBees. You can count on it.

Andre Pino
Vice President of Marketing
CloudBees

 

Blog Categories: Company News
Categories: Companies

Dynatrace wins prestigious “Success for All” Sally award for its Champion Playbook

Every day, we help our customers deliver the best experiences and success for their customers. That’s a big part of what digital performance is all about. But that’s not all we do. We take customer success very seriously for our own clients, too. We help them reach and exceed their goals—through our own signature program—the Champion Playbook.

For years now, Gainsight has led the way for companies — in many different industries — to redefine how they measure and exceed expectations by managing customer success. They also lead the way in recognizing the new digital business paradigm. According to Forbes, they’re at the “center of the market” for customer success and “increasingly the focus of dedicated teams at businesses that seek to monitor and improve customer relationships…”.

Recently Gainsight recognized Dynatrace as the top success plan leader among their corporate customers. We received their “Success for All” Sally award at the recent Pulse 2017 customer success event, in front of an audience of more than 4,000 VP and C-level executives who direct some of the largest Customer Success organizations around the world.

Each year at their Pulse event, the Gainsight team goes through a rigorous selection process to recognize cutting-edge customer success leaders. According to their CEO, Nick Mehta, “The Sally Awards aren’t given—they’re earned. Winning one means your company is transformationally and cross-functionally aligned around positive customer outcomes.” Winners in other categories included such well-known names as Adobe, Angie’s List, Blackbaud, HubSpot and Concur.

Nick Mehta presenting the award to Dynatrace’s own Jim Bowering, Director of CSM for North America, and Tracy Streetman, CSM Business Operations Analyst

Champion Playbook, Customer Success Managers: a winning combination

Dynatrace was selected for its development and use of the Champion Playbook success plan, a program that supports and drives the best customer outcomes in digital performance led by our Customer Success Managers (CSMs). CSMs are our customers’ personal advocates and strategic drivers for adoption, success and value achievement.

CSMs start by building a close relationship with our customers’ in-house performance monitoring, development and business leadership. Next, they examine the customer’s current state in accelerating innovation, optimizing customer experiences and modernizing operations. Using the Champion Playbook as their guide, Dynatrace CSMs work with customers to expand internal performance culture by sharing proven strategies to highlight value and speed adoption of new ways of doing business.

The whole program is based on our unparalleled and extensive experience working with top companies to build highly successful digital businesses. We’ve taken that knowledge and created a well-defined playbook for working together with our customers, and applying best practices and innovative processes to their specific needs. Together we set and achieve digital performance goals to reach optimum adoption, greater value and constant awareness of new opportunities for improvement.

At Dynatrace we know that having the best technology is important, but following the path to success with that technology also requires the right approach to organizational culture, strategy, people and processes. This award is an enormous validation of the Champion Playbook, our practical and proven way of working with our customers. It’s just the beginning, and, I can’t wait to see what we will accomplish in the future—together.

The post Dynatrace wins prestigious “Success for All” Sally award for its Champion Playbook appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Trainmageddon: When the machines stop working, people get upset.

For the train companies of the United Kingdom, today was a tough one. How’s this for a headline (which we all gazed at during our morning coffee break):

If you haven’t heard, you can read all about how the poor train companies of the UK copped a battering from commuters on social and traditional media over a ticket machine malfunction. I feel for the commuters, but my sympathy today lies with the train companies. Well, except for the ticket collectors who obviously didn’t get the memo and handed out fines to those who boarded without a ticket!

Coverage:

Here’s why it’s so hard to be consistently perfect

Purchasing a single ticket is actually a hyper-complex transaction from an IT point of view.

The complexity stems from all of the following (and this list isn’t even complete!):

  1. front end software the customer uses at the attempted purchase
  2. third party payment gateway that processes the payment
  3. integration between the machine (or the device), the software and the gateway
  4. security certificate required to make a secure transaction
  5. credit check application
  6. hosting environment in which all this runs
  7. interconnection between the transactions that are crossing different hosting environments, from the end user, the train station, and through to the back end applications.

Yet all the customer cares about is their experience at the machine – it needs to be perfect, or if there’s a problem, it needs to be resolved in seconds…so they can board their train on time.

Not like this:

So what happened today in the UK?

We may not find out for sure what happened but from our experience, monitoring millions, if not billions of transactions a day, there are three common areas where problems can arise. When it comes to IT complexity, rapid release cycles and digital experience, typically problems centre on:

Human error – Oops did I do that? 

Software needs updating. Software updates are mostly written by humans, and when multiple humans are working together, it’s not uncommon for mistakes to be made. Even if you have the most stringent pre-production testing, issues can still arise once you push to production because you can never accurately replicate what software will do in the wild.

As our champion DevOps guru Andreas Grabner always preaches in his talks: #failfast. If the issue relates to a change that was made, roll it back, fast.

In the case of today’s outage, I doubt it was a software update in the core operating system. I’d expect it was a third-party failure, which incidentally might have had its own update. But more on this in point 3.

Security

Not one to speculate on, but obviously when a software failure causes mass disruption to people, it would be fairly normal to assume some sort of planned security attack. But again, I doubt it.

Delivery chain failure

The most likely cause of the train machine failure is simply a failure somewhere in the digital delivery chain. Considering a single transaction today runs across 82 different technologies – from devices and networks to 3rd-party software applications, hosting environments, and operating systems – it doesn’t take much for a single failure to cause a complete outage. Understanding where that failure is, so that you can quickly resolve it, is critical. Referencing what I said in point 1, it’s probable that a simple update to any of these 82 different technologies caused a break in the chain. Or maybe one of these 3rd parties had their own outage.

And that’s where AI comes in.

This is why you need AI-powered application monitoring, with the ability to see the entire transaction across every single one of the different technologies. And not just across the transaction: you also need the ability to go deep, from the end-point machine to the host infrastructure, the line of code, and the interconnections between all the services and processes. It’s the only way you can identify the root cause of the problem – in minutes, not hours or days.

The days of eyeballing charts and having war-room discussions with IT teams are definitely over. Software rules our lives, and it simply cannot fail. Otherwise digital businesses face a day like this on social media:

What if the machine fixed the machine?

With the ability to see the immediate root cause of a problem, it’s not improbable for the machine to learn how to course-correct itself, in the same way that, when servers are overloaded, a load balancer can direct traffic to an under-utilised host. So if you can detect an issue in the delivery chain, the machine can set about self-correcting with an alternative path. If the payment gateway fails, for instance, it could automatically redirect to a new hosted payment gateway. Our chief technical strategist Alois Reitbauer demoed just this scenario (OK, a simpler version) at Perform 2017. So it’s not that far off.
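As a purely illustrative sketch of that idea (the gateway names and the health check below are made up, not a description of any product), self-correction in the delivery chain might look like detecting an unhealthy dependency and routing around it:

    # Hypothetical sketch of self-correcting routing around a failed payment gateway.
    import random

    PAYMENT_GATEWAYS = ["primary-gateway", "backup-gateway-eu", "backup-gateway-us"]

    def gateway_is_healthy(gateway: str) -> bool:
        # In a real system this decision would be driven by monitoring data
        # (error rates, response times, availability); here it's simulated.
        return random.random() > 0.2

    def process_payment(amount: float) -> str:
        # Try each gateway in order and route around any that are unhealthy,
        # rather than letting one third-party failure stop ticket sales entirely.
        for gateway in PAYMENT_GATEWAYS:
            if gateway_is_healthy(gateway):
                return f"charged {amount:.2f} via {gateway}"
        raise RuntimeError("all payment gateways are down")

    print(process_payment(12.50))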

The post Trainmageddon: When the machines stop working, people get upset. appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

SoCraTes Germany, Soltau, Germany, August 24-27 2017

Software Testing Magazine - Thu, 06/22/2017 - 09:00
SoCraTes is the acronym for the International Software Craftsmanship and Testing Conference. The 2017 German edition will take place in Soltau, August 24-27 2017. The SoCraTes Germany...

Categories: Communities

CAST Conference of the Association for Software Testing, Nashville, USA, August 16–18 2017

Software Testing Magazine - Thu, 06/22/2017 - 08:00
CAST is the conference of the Association for Software Testing. Its 2017 edition will be held in Nashville, Tennessee, USA, August 16–18. At the CAST conference, speakers share their stories and...

Categories: Communities

The Quirkier Side of Software Testing

Software Testing Magazine - Wed, 06/21/2017 - 20:02
Not all bugs are created equal. Sometimes quirks in the programming languages we use are to blame, and finding them has often stumped even the best programmers and testers. This talk explores the...

Categories: Communities

Improving Specification by Example, BDD & ATDD

Testing TV - Wed, 06/21/2017 - 19:12
To get the most out of Behaviour Driven Development (BDD), Specification by Example (SBE) or Acceptance Test-Driven Development (ATDD), you need much more than a tool. You need high value specifications. How do we get the most out of our specification and test writing effort? How do we write testable scenarios that business people will […]
Categories: Blogs

A Tale of Two Load Balancers

It was the best of load balancers, it was the worst of load balancers, it was the age of happy users, it was the age of frustrated users.

I get to see a variety of interesting network problems; sometimes these are first-hand, but more frequently now these are through our partner organization. Some are old hat; TCP window constraints on high latency networks remain at the top of that list. Others represent new twists on stupid network tricks, often resulting from external manipulation of TCP parameters for managing throughput (or shaping traffic). And occasionally – as in this example – there’s a bit of both.

Many thanks to Stefan Deml, co-founder and board member at amasol AG, Dynatrace’s Platinum Partner headquartered in Munich, Germany. Stefan and his team worked diligently and expertly with their customer to uncover – and fix – the elusive root cause of an ongoing performance complaint.

Problem brief

Users in North America connect to an application hosted in Germany. The app uses the SOAP protocol to request and deliver information. Users connect through a firewall and one of two Cisco ACE 30 load balancers to the first-tier WebLogic app servers.

When users connect through LB1, performance is good. When they connect through LB2, however, performance is quite poor. While the definition of “poor performance” varied depending on the type of transaction, the customer identified a 1.5MB test transaction that helped quantify the problem quite well: fast is 10 seconds, while slow is 60 seconds – or even longer.

EUE monitoring

Dynatrace DC RUM is used to monitor this customer’s application performance and user experience, alerting the IT team to the problem and quantifying the severity of user complaints. (When users complain that response time is measured in minutes rather than seconds, it’s helpful to have a solution that validates those claims with measured transaction response times.) DC RUM automatically isolated the problem to a network-related bottleneck, while proving that the network itself – as qualified by packet loss and congestion delay – was not to blame.

Time to dig a little deeper

I’ll use Dynatrace Network Analyzer – DNA, my protocol analyzer of choice – to examine the underlying behavior and identify the root cause of the problem, taking advantage of the luxury of having traces of both good and poor performing transactions.  I’ll skip DNA’s top-down analysis (I’m assuming you don’t care to see yet another Client/Network/Server pie chart), and dive directly into annotated packet-level Bounce Diagrams to illustrate the problem.

(DNA’s Bounce Diagram is simply a graphic of a trace file; each packet is represented by an arrow color-coded according to packet size.)

First, the fast transaction instance:

Bounce Diagram illustrating a fast instance of the test transaction through LB1; total elapsed time about 10 seconds.

For the fast transaction, most of the 10-second delay is allocated to server processing; the response download of 1.5MB takes about 1.7 seconds – about 7Mbps.

Here’s the same view of the slow transaction instance:

Bounce Diagram illustrating a slow instance of the test transaction through LB2; total elapsed time about 70 seconds.

There are two distinct performance differences between the fast transaction – the baseline – and this slow transaction. First, a dramatic increase in client request time (from 175 msec. to 52 seconds!); second, a smaller but still significant increase in response download time, from 1.7 seconds to 7.7 seconds.

The MSB (most significant bottleneck)

Let’s first examine the most significant bottleneck in the slow transaction. The client SOAP request – only 3KB – takes 54 seconds to transmit to the server, in 13 packets.

The packet trace shows the client sending very small packets, with gaps of about 5 seconds between. Examining the ACKs from LB2, we see that the TCP receive window size is unusually small; 254 bytes.

Packet trace excerpt showing LB2 advertising a window size of 256 bytes.

Such an unusually small window advertisement is generally a reliable indicator that TCP Window Scaling is active; without the SYN/SYN/ACK handshake, a protocol analyzer doesn’t know whether scaling is active, and is therefore unable to apply a scale factor to accurately interpret the window size field.

The customer did provide another trace that included the handshake, showing that the LB response to the client’s SYN does in fact include the Window Scaling option – with a scale factor of 0.

The SYN packet from LB2; window scaling will be supported, but LB2 will not scale its receive window.

Odd? Not really; this simply means that LB2 will allow the client to scale its receive window, but doesn’t intend to scale its own. The initial (non-scaled) receive window advertised by the LB is 32768. (It’s interesting to note that given a scale factor of 7, a receive window value of 256 would equal 32768.)
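For reference, the effective window a receiver offers is simply the 16-bit window field shifted left by the scale factor agreed in the handshake. A quick sketch of the arithmetic, using the values discussed in this trace:

    # Effective TCP receive window = advertised window field << agreed scale factor.
    def effective_window(window_field: int, scale_factor: int) -> int:
        return window_field << scale_factor

    print(effective_window(32768, 0))  # 32768: the initial window LB2 advertises at scale 0
    print(effective_window(256, 7))    # 32768: what a 256-byte field would mean at scale 7
    print(effective_window(256, 0))    # 256:   what the client actually sees at the agreed scale of 0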

Once a few packets have been exchanged on the connection, however, LB2 abruptly reduces its receive window from 32768 to 254 – even though the client has sent only a few hundred bytes. This is clearly not a result of the TCP socket’s buffer space filling up. Instead, it’s as if LB2 suddenly shifts to a non-zero scale factor (perhaps that factor of 7 I just suggested), even though it has already established a scale factor of zero.

Pop quiz: What to do with tiny windows?

Question: what should a TCP sender do when the peer TCP receive window falls below the MSS?

Answer: The sender should wait until the receiver’s window increases to a value greater than the MSS.

In practice, this means the sender waits for the receiver to empty its buffer. Given a receiver that is slow to read data from its buffer – and therefore advertises a small window of less than the MSS – it would be silly for the sender to send tiny packets just to fill the remaining space. In fact, this undesirable behavior is called the silly window syndrome, avoided through algorithms built into TCP.
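A toy model of that sender-side rule (a simplification for illustration, not the actual TCP implementation):

    # Toy model of silly window syndrome avoidance on the sender side:
    # don't send a tiny segment just because a sliver of window is open.
    def bytes_sender_may_send(receive_window: int, mss: int, queued_bytes: int) -> int:
        if receive_window >= mss or queued_bytes <= receive_window:
            # Either a full-sized segment fits, or all remaining data fits anyway.
            return min(receive_window, queued_bytes)
        # Otherwise wait until the receiver opens at least one MSS of window.
        return 0

    print(bytes_sender_may_send(receive_window=254, mss=1460, queued_bytes=3000))    # 0: wait
    print(bytes_sender_may_send(receive_window=32768, mss=1460, queued_bytes=3000))  # 3000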

For this reason, protocol analyzers and network probes should treat the occurrence of small (<MSS) window advertisements the same as zero window events, as they have the same performance impact.

When a receiver’s window is at zero for an extended period, a sender will typically send a window probe packet attempting to “wake up” the receiver. Of course, since the window is zero, no usable payload accompanies this window probe packet. In our example, the window is not zero, but the sender behavior is similar; the LB waits five seconds, then sends a small packet with just enough data (254 bytes) to fill the buffer. The ACK is immediate (the LB’s ACK frequency is 1), but the advertised window remains abnormally small. We can conclude that the LB believes it is advertising a full 32KB buffer, although it is telling the client something much different.

After about 52 seconds, the 3K request reaches LB2, after which application processing occurs normally. It’s a good thing the request size wasn’t 30K!

The NSB (next significant bottleneck)

As is quite common, there’s another tuning opportunity – the NSB. This is highlighted by DC RUM’s metric called Server Realized Bandwidth, or download rate. The fast transaction transfers 1.5MB in about 1.6 seconds (7.5Mbps), while the slow transaction takes about 8 seconds for the same payload (1.5Mbps).

Could this be receiver flow control, or a small configured receive TCP window? These would seem reasonable theories – except that we’re using the same client for the tests. A quick look at the receiver’s TCP window proves this is not the case, as it remains at 131,072 (512 with a scaling factor of 9).

DNA’s Timeplot can graph a sender’s TCP Payload in Transit; comparing this with the receiver’s advertised TCP window can quickly prove – or disprove – a TCP window constraint theory.

Time plot showing LB2’s TCP payload in transit (bytes in flight) along with the client’s receive window size.

The maximum payload in transit for the slow transaction is about 32KB; given that the client’s receive window is much larger, we know that the client is not limiting throughput.

Let’s compare this with the fast transaction as it ramps up exponentially through TCP slow start:

Time plot showing LB1’s payload in transit as it ramps up through slow start.

It becomes clear that LB1 does not limit send throughput – bytes in flight – to 32KB, instead allowing the transfer to make more efficient use of the available bandwidth. We can conclude that some characteristic of LB2 is artificially limiting throughput.

Fixing the problems

For the MSB (most significant bottleneck), Cisco has identified a workaround (even if they might have slightly misstated the actual problem):

CSCud71628—HTTP performance across ACE is very bad. Packet captures show that ACE drops the TCP Window Size it advertises to the client to a very low value early in the connection and never recovers from this. Workaround: Disable the “tcp-options window-scale allow”.

For the NSB (next significant bottleneck), the LB configuration defaults to a TCP send buffer value of 32768 bytes. Modifying the parameter set tcp buffer-share from the default 32768 to 262143 (the maximum permitted value) allowed LB2 throughput to match that of LB1.

Wait; do you see the contradiction here? If we disable TCP window scaling, that would limit the effective TCP buffer to 65535, limiting the download transfer rate to under 4Mbps (given the existing link’s 130ms round-trip delay).
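The arithmetic behind that ceiling is just window size divided by round-trip time; a quick check of the numbers mentioned in this post:

    # Maximum TCP throughput for a given window (or send buffer) and round-trip time.
    def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
        return window_bytes * 8 / rtt_seconds / 1_000_000

    print(max_throughput_mbps(65535, 0.130))   # ~4.0 Mbps: the cap with window scaling disabled
    print(max_throughput_mbps(32768, 0.130))   # ~2.0 Mbps: roughly the LB2 default buffer-share
    print(max_throughput_mbps(262143, 0.130))  # ~16 Mbps:  the maximum buffer-share setting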

But this was the spring of hope; it seems that changing the tcp buffer-share parameter also solved the window scaling problem, without having to disable that option. This suggests a less-than-obvious interaction between these parameters – but with happy users, we’ll take that bit of luck.

Is there more?

There are always additional NSBs; this is a tenet of performance tuning. We stop when the next bottleneck becomes insignificant (or when we have other problems to attend to). For this test transaction, the SOAP payload is rather large (1.5MB); while the payload is encrypted, it could still be compressed to reduce download time; a quick test using WinZip shows the potential for at least a 50% reduction.

While some of you will be quick to note that ACE has been discontinued, Cisco support for ACE will continue through January 2019.

The post A Tale of Two Load Balancers appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

PurePath visualization: Analyze each web request from end-to-end

Dynatrace OneAgent enables you to track each individual request from end to end. This enables Dynatrace artificial intelligence to automatically identify the root causes of detected problems and to analyze transactions using powerful analysis features like service flow and the service-level backtrace. Dynatrace enables you to efficiently find the proverbial needle in the haystack and focus on those few requests (out of tens of thousands) that you’re interested in. The next step is to analyze each of these requests separately to understand the flow of each transaction through your service landscape. Meet PurePath. PurePath technology is at the heart of what we do.

How to locate a single request

The first step in your analysis should be to locate those requests that you want to analyze at a granular level. Filters can be used to narrow down many thousands of requests to just those few requests that are relevant to your analysis. This can be achieved during problem analysis by following the root-cause analysis drill downs (select Problems from the navigation menu to start problem analysis), or manually by segmenting your requests using the advanced filtering mechanisms in service flow and outlier analysis. Ultimately, you’ll likely discover a handful of requests that require deeper analysis. This is where PurePath analysis comes in.

In the example below, the service easyTravel Customer Frontend received 138,000 service requests during the selected 2-hour time frame. This is an unwieldy number of requests to work with, so we need to narrow down these results.

We’re interested specifically in those requests that call the Authentication Service. There are only 656 of these. To focus on this subset of requests click the Filter service flow button after selecting the desired chain of service calls that you want to look at.


Notice the hierarchical filter; it shows that we are looking only at transactions where the easyTravel Customer Frontend calls the Authentication Service, which in turn calls the easyTravel-Business MongoDB.

Now we can see that 75% of the easyTravel Customer Frontend requests that call the Authentication Service also call the Verification Service. These are the requests we want to focus our analysis on. So let’s add the Verification Service as a second filter parameter to further narrow down the analysis. To do this we simply select the Verification Service and then click the View PurePath button in the upper right box.

Notice the filter visualization in the upper left corner of the example below. The provided PurePath list includes only those requests to the Customer Frontend that call both the Authentication Service and the Verification Service.

But the list is still too large—we only need to analyze the slower requests. To do this, let’s modify the filter on the easyTravel Customer Frontend node so that only those requests that have Response time > 500 ms are displayed.

As you can see below, after applying the response time filter, we’ve identified 4 requests out of 138,000 that justify in-depth PurePath analysis.
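Conceptually, this narrowing-down is just a chain of predicates over the request set. The sketch below illustrates the logic only; the records and field names are invented for illustration and this is not the Dynatrace UI or API:

    # Illustrative-only sketch of filtering a large set of frontend requests.
    requests = [
        {"service": "easyTravel Customer Frontend",
         "calls": ["Authentication Service", "Verification Service"],
         "response_time_ms": 642},
        {"service": "easyTravel Customer Frontend",
         "calls": ["Authentication Service"],
         "response_time_ms": 180},
        {"service": "easyTravel Customer Frontend",
         "calls": ["Booking Service"],
         "response_time_ms": 95},
    ]

    candidates = [
        r for r in requests
        if "Authentication Service" in r["calls"]
        and "Verification Service" in r["calls"]
        and r["response_time_ms"] > 500
    ]
    print(len(candidates))  # the handful of requests worth opening as individual PurePaths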

To begin PurePath analysis of this request, click the View PurePath button.

PurePath analysis of a single web request

Dynatrace traces all requests in your environment from end to end. Have a look below at the waterfall visualization of a request to the Customer Frontend. Each service in the call chain is represented here.

The top section of the example PurePath above tells us that the whole transaction consumes about 20 ms of CPU and spends time in 1 database. However, the waterfall chart shows much more detail. The waterfall shows which other services are called and in which order. We can see each call to the Authentication and Verification services. We also see the subsequent calls to the MongoDB that were made by both service requests. PurePath, like the Service Flow, provides end-to-end web request visualizations—in this case that of a single request.

The bars indicate both the sequence and response time of each of the requests. The different colors help to easily identify the type of call and timing. This allows you to see exactly which calls were executed synchronously and which calls were executed in parallel. It also enables us to see that most of the time of this particular request was spent on the client side of the isUserBlacklisted Webservice call. As indicated by the colors of the bars in the chart, the time is not spent on the server side of this webservice (dark blue) but rather on the client side. If we were to investigate this call further, we would see underlying network latency.

By selecting one of the services or execution bars you can get even more detail. You can analyze the details of each request in the PurePath. In the example below, you can see the web request details of the main request. You can view the metadata, request headers, request parameters, and more. You can even see information about the proxy that this request was sent through.


Notice that some values are obscured with asterisks. This is because these are confidential values and this user doesn’t have permission to view confidential values. These values would be visible if the active user had permission to view these values.

The same is true for all subsequent requests made by the initial request. The image below shows the authenticate web service call. Besides the metadata provided on the Summary tab, you also get more detail about timings. In this case, we see that the request lasts 15ms on the calling side but only 1.43ms on the server side. Here again, there is significant network latency.

Code execution details of individual requests

Each request executes some code, be it Java, .NET, PHP, Node.js, Apache webserver, Nginx, IIS or something else. PurePath view enables you to look at the code execution of each and every request. Simply click on a particular service and select the Code level tab.

Code level view shows you code level method executions and their timings. Dynatrace tells you the exact sequence of events with all respective timings (for example, CPU, wait, sync, lock, and execution time). As you can see above, Dynatrace tells you exactly which method in the orange.jsf request on the Customer Frontend called the respective web services and which specific web service methods were called. The timings displayed here are the timings as experienced by the Customer Frontend, which, in the case of calls to services on remote tiers, represent the client time.

Notice that some execution trees are more detailed than others. Some contain the full stack trace while others only show the critical points. Dynatrace automatically adapts the level of information it captures based on importance, timing, and estimated overhead. Because of this, slower parts of a request typically contain more information than faster parts.

You can look at each request in the PurePath and navigate between the respective code level trees. This gives you access to the full execution tree.

Error analysis

Analyzing individual requests is often a useful way of gaining a better understanding of detected errors. In the image below you can see that requests to Redis started to fail around the 10:45 mark on the timeline.

By analyzing where these requests came from we can see that all of these requests originate in the Node.js weather-express service. We also see that nearly all failed Redis calls have the same Exception: an AbortError caused by a closed connection.

We can go one step further, down to the affected Node.js PurePaths. Below you can see such a Node.js PurePath and its code level execution tree. Notice that the Redis method call leads to an error. You can see where this error occurs in the flow of the Node.js code.

We can also analyze the exception that occurs at this point in the request.

Each PurePath shows a unique set of parameters leading up to the error. With this approach to analysis, PurePath view can be very useful in helping you understand why certain exceptions occur.

Different teams, different perspectives

Each PurePath tracks a request from start to finish. This means that PurePaths always start at the first fully monitored process group. However, just because a request starts at the Customer Frontend service doesn’t mean that this is the service you’re interested in. For example, if you’re responsible for the Authentication Service, it makes more sense for you to analyze requests from the perspective of the Authentication service.

Let’s look at the same flow once again, but this time we’ll look at the requests of the Authentication Service directly. This is done by clicking the View PurePaths button in the Authentication service box.

We can additionally add a response time filter. With this adjustment, the list now only shows Authentication requests that are slower than 50ms that are called by the Customer Frontend service (at the time when the frontend request also calls the Verification service).

Now we can analyze the Authentication service without including the Frontend service in the analysis. This is useful if you’re responsible for a service that is called by the services developed by other teams.

Of course, if required, we can use service backtrace at any time to see where this request originated.

We can then choose to once again look at the same PurePath from the perspective of the Customer Frontend service.

This is the same PurePath we began our analysis with. You can still see the Authenticate call and its two database calls, but now the call is embedded in a larger request.

The power of Dynatrace PurePath

As you can see, Dynatrace PurePath enables you to analyze systems that process many thousands of requests per minute, helping you to find the “needle in the haystack” that you’re looking for. You can view requests from multiple vantage points—from the perspective of the services you’re responsible for, or from the point of view of where a request originates in your system. With PurePath, you really do get an end-to-end view into each web request.

The post PurePath visualization: Analyze each web request from end-to-end appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

OpenStack monitoring beyond the Elastic Stack – Part 1: What is OpenStack?

We’ve been talking a lot about OpenStack over the past few months, and with good reason. Its explosive growth in popularity within the enterprise has enabled large, interoperable application architectures and, with this, a need for app-centric monitoring of the OpenStack cloud.

There are several open source monitoring tools for OpenStack out there, but are they mature enough for the challenges posed by its complexity? Can they effectively monitor hundreds of nodes, while simultaneously keeping an eye on hundreds of apps?

This post is the first of a 3-part series in which we will review:

  • What is OpenStack?
  • The OpenStack monitoring space: Monasca, Ceilometer, Zabbix and the Elastic Stack
  • A full stack view on Monitoring OpenStack with Dynatrace

What is OpenStack?

OpenStack is an open source cloud operating system used to develop private- and public-cloud environments. It consists of multiple interdependent microservices, and provides a production-ready IaaS layer for your applications and virtual machines. Being a 2010 joint project of Rackspace and NASA, it will turn seven this year, and it’s supported by many high-profile companies including AT&T, IBM, and Red Hat.

Still getting dinged on its complexity, it currently has around 60 components, also referred to as “services”, six of which are core components, controlling the most important aspects of the cloud. There are components for the compute, networking and storage management of the cloud, for identity and access management, and also for orchestrating applications that run on it. With these, the OpenStack project aims to provide an open alternative to giant cloud providers like AWS, Google Cloud, Microsoft Azure or DigitalOcean.

A few of the most common OpenStack components

The OpenStack components are open source projects continuously developed by its Community. Let’s have a brief look at the most important ones:

Nova (Compute API) – Nova is the brain of the OpenStack cloud, meaning that it provides on-demand access to compute resources by provisioning and managing large networks of virtual machines.

Neutron (Networking service) – Neutron focuses on delivering networking-as-a-service in its cloud.

Keystone (Identity service) – Keystone is the identity service used for authentication and high-level authorization.

Horizon (Dashboard service) – OpenStack’s dashboard, providing a web-based user interface to the other services.

Cinder (Block Storage service) – The component that manages and provides access to block storage.

Swift (Object storage service) – Swift provides eventually consistent and redundant storage, and retrieval of fixed digital content.​

Heat (Orchestration service) – The orchestration engine, providing a way to automate the creation of cloud components.
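To make the component list a bit more tangible, here is a small sketch that talks to a few of these services through the openstacksdk Python client. The cloud name "mycloud" is a placeholder for an entry in your clouds.yaml, and the printed output depends entirely on your deployment:

    # Minimal sketch using openstacksdk; assumes a cloud named "mycloud"
    # is configured in clouds.yaml.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Nova: list virtual machines and their status
    for server in conn.compute.servers():
        print(server.name, server.status)

    # Neutron: list networks
    for network in conn.network.networks():
        print(network.name)

    # Swift: list object storage containers
    for container in conn.object_store.containers():
        print(container.name)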

Why the hype around OpenStack?

The reasons behind the explosive growth in OpenStack’s popularity are quite straightforward. Because it offers open source software for companies looking to deploy their own private cloud infrastructure, it’s strong where most public cloud platforms are weak.

Vendor neutral API: Proprietary cloud service providers such as AWS, Google Compute Engine and Microsoft Azure have their own application programming interfaces (API), which means businesses can’t easily switch to another cloud provider, i.e. they are automatically locked into these platforms. In contrast, OpenStack’s open API removes the concern of proprietary, single-vendor lock-in and creates maximum flexibility in the cloud.

More flexible SLAs: All cloud providers offer Service Level Agreements, but these used to be the same for all customers. In some cases, however, the SLA in your contract might be completely irrelevant to your business. But thanks to the many OpenStack service providers it is easy to find the most suitable one.

Data privacy: Perhaps the biggest advantage of using OpenStack is the data privacy it offers. For some companies, certain data may be prohibited by law to be stored in public cloud infrastructure. While a hybrid cloud makes it possible to keep sensitive data on premise, the potential for vendor lock-in and data inaccessibility still remains. Not with OpenStack. Here, all your data is on-premise, secured in your data center.

These are the reasons why companies like AT&T, China Mobile, CERN or Bloomberg decided to become OpenStack users.

So what’s the state of OpenStack now?

I happened to overhear a comment at the OpenStack Summit Boston 2017 that I have not been able to get out of my head. Someone in the crowd claimed that “OpenStack will eat the world”. This might not be too far-fetched, as the figures of the newest OpenStack Foundation User Survey show.

Nothing demonstrates OpenStack’s growth more than the rapid creation of new clouds, with 44% more deployments reported in this year’s survey than in 2016. OpenStack clouds around the world have also become larger: 37% of clouds now have 1,000 or more cores. And what speaks more clearly for its maturity than the fact that two-thirds of deployments run in production environments?

Is OpenStack really going to eat the world? And if it is, who will make sure that application performance stays high?

In the second part of this blog series we will take a look at what the current options on the market are for monitoring OpenStack. Stay tuned!

The post OpenStack monitoring beyond the Elastic Stack – Part 1: What is OpenStack? appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Now on DevOps Radio: Three's Company - Special Jenkins World Edition

The countdown to Jenkins World 2017 is officially on! Host Andre Pino sits down with not one, not two, but three Jenkins World keynote speakers in Episode 20 of DevOps Radio. Kohsuke Kawaguchi, Jenkins founder and CTO at CloudBees; Jez Humble, co-author of Continuous Delivery, Lean Enterprise, and The DevOps Handbook; and CloudBees’ own CEO, Sacha Labourey, talk about recent developments in the industry and share a sneak peek at what attendees will hear August 28th - 31st at Jenkins World 2017.

Kohsuke starts the conversation by discussing what’s been going on in the Jenkins community over the last year. He touches on key projects, such as Blue Ocean and the Declarative Pipeline. Both new offerings make it easy for users of all skill levels to automate software pipelines. Kohsuke also talks about the community responses to these recent developments. At Jenkins World, he says the community can expect to hear how the ongoing evolution of Jenkins has led to new and better methods of software delivery.

Jez talks about his active role in the DevOps community and the recent advances he’s seen as DevOps starts to become more mainstream. He says that while everyone wants a piece of DevOps, the picture for implementation is still patchy. With the research Jez has done in relation to DevOps and continuous delivery, he’s seen the science on what works and what doesn’t in IT environments. In his first time speaking in front of the Jenkins community, he plans to talk about the huge impact continuous integration and continuous delivery have had on IT performance and how teams can measure this.

Finally, Sacha discusses what he’s seen in the Jenkins and DevOps market recently. He says it’s now understood that enterprises need to move to DevOps, but the question that remains is how to deploy at scale and make an enterprise-wide commitment. In order to adopt DevOps effectively, Sacha thinks you need the energy and willingness to share and learn. Sacha hints that Jenkins World attendees can expect to hear about new features and services from CloudBees, in addition to how people are accelerating DevOps adoption in their organizations.

Make sure you don’t miss the code Andre offers during the podcast for a special 20% discount off Jenkins World registration – including workshops and training, as well as the conference sessions!

Another thing you won’t want to miss? New episodes of DevOps Radio! Subscribe to DevOps Radio via iTunes or RSS feed and let us know what you think by tweeting @CloudBees and using #DevOpsRadio.

Categories: Companies

Testing is Not Dead: The Future of the Software Testing Role

Gurock Software Blog - Wed, 06/21/2017 - 03:05


This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.

At least once a year, I see a presentation at a software conference proclaiming “Testing is Dead”. The speaker talks about modern software development techniques such as test automation, micro-services, continuous integration and delivery systems, production monitoring, and build rollback capabilities, and suggests that in the future these techniques will make testers obsolete. Shortly after, the testing world has a collective existential crisis. Testers wonder whether they will still have a job in 5 years’ time, or whether their skill set is still valuable. Yet six months later, no major change has happened.

Testing is not dead, but the role is changing. What does the future look like?

Testing on the Coasts


A friend of mine works at a company in Silicon Valley. This company’s culture is built on radical collaboration; it is rare for a code change to be worked on by one person alone. The programmers here work in pairs. They begin the development cycle by looking in the tracking system, where the work is prioritized. The most important piece of work is placed at the top of the queue. They usually start new code changes with a test. This is usually a small piece of code that helps to design and refactor with less worry. When it comes to making the code changes, one person is at the keyboard typing while the other has a different role. The second acts like an empathetic back seat driver. They ask questions, guide, and might occasionally ask to take the wheel when acceptable.
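
As a rough illustration of that “start with a test” step, the sketch below shows the kind of small, failing-first unit test a pair might write before touching production code. Everything in it (the discount function, the rule, the numbers) is invented for the example and is not the team’s actual code; it assumes pytest is available.

# test_pricing.py -- a hypothetical test-first sketch (pytest).
# In practice the pair writes the tests first, watches them fail,
# then writes just enough code to make them pass.
import pytest


def apply_discount(order_total: float) -> float:
    # Minimal implementation added only after the tests below were red:
    # orders of 100.00 or more get 10% off.
    return order_total * 0.9 if order_total >= 100.00 else order_total


def test_ten_percent_discount_at_threshold():
    assert apply_discount(order_total=100.00) == pytest.approx(90.00)


def test_no_discount_below_threshold():
    assert apply_discount(order_total=99.99) == pytest.approx(99.99)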

By the time their code change can be run locally, automated tests have been developed. By the time the change is merged into the source code repository, the full test suite has already run and come back green. As soon as the automated tests pass, the finished product is ready to ship to production. This cycle may be completed several times every day.

This methodology stands apart from more traditional ways of working, for example:

  • There is no tester.
  • There are no hand-offs between roles, no back-and-forth when a bug is found.
  • There are no last-minute checks run before a push to production every two weeks.
  • Code goes to production daily, or even several times a day, per pair.

Programmers on the team are expected to build layers of tooling to reduce risk, and perhaps more importantly, to be effective at testing software. When these companies do have testers, they tend to float between development groups. The tester acts as a coach and trainer, helping developers build their testing skills. Getting to this point requires architecture, micro-services, build and deploy systems, rollback tooling, monitoring, and plenty of automation. It also requires a culture built around testing, collaborative work, and improvement.

Testing in the Middle


My daily life as a software tester is different from that of my friend on the coast. I am working full time on a UI automation project. My client has a legacy product built with SQL stored procedures, JavaScript, and CSS. Our release schedule is close to quarterly. The development team spends about 10 weeks of each release cycle building a combination of new features and fixing lingering bugs from the last release.

I am the lone-wolf testing department for this company. The combination of our product architecture and technical stack means there are a lot of defects, and they can be unpredictable and unrelated to where the most recent change was made. My task is to help the programmers discover these defects as fast as possible. That is where the UI automation suite comes into play. Each night, my test suite is scheduled to run against three different test environments: two are used for merging code into the release branch, and one is at the tip of the development branch. Each morning I get a report listing the tests that failed. At this point, my job quickly changes from “automation engineer” to “exploratory tester”.

I start by re-running the failed test to observe what happened, and then perform typical follow-up testing: I change data, change my behavior, change environment or browser, or change software version. This all helps me better understand, and more effectively report, the bugs I find. At that point I’ll talk with the programmer who might know about the problem. They might fix the bug now, or not. Once that hand-off has been made, I’ll either move on to the next bug investigation, refactor some tests, or build new ones. My mission varies day to day depending on where we are in the release cycle and what sort of changes are being made.
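
For readers curious what the nightly run described above might look like mechanically, here is a hypothetical sketch. The environment URLs, the test directory, the --base-url flag (which assumes the pytest-base-url plugin), and the report paths are all assumptions, not the author’s actual setup.

# nightly_run.py -- a hypothetical sketch of running one UI suite against
# three test environments and flagging which ones need morning triage.
import datetime
import pathlib
import subprocess

# Two merge environments plus the tip of the development branch (assumed URLs).
ENVIRONMENTS = {
    "merge-a": "https://merge-a.example.test",
    "merge-b": "https://merge-b.example.test",
    "dev-tip": "https://dev.example.test",
}

REPORT_DIR = pathlib.Path("reports")
REPORT_DIR.mkdir(exist_ok=True)


def run_suite(name: str, base_url: str) -> int:
    """Run the UI suite against one environment, writing a JUnit-style report."""
    report = REPORT_DIR / f"{name}-{datetime.date.today()}.xml"
    result = subprocess.run(
        ["pytest", "ui_tests/", f"--base-url={base_url}", f"--junitxml={report}"],
    )
    return result.returncode  # non-zero means failed tests to investigate


if __name__ == "__main__":
    for name, url in ENVIRONMENTS.items():
        code = run_suite(name, url)
        print(f"{name}: {'failures to investigate' if code else 'all green'}")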

Most companies seem to work in a similar way. They have skilled software testers and developers; the main challenge is improving efficiency and releasing more often. They aren’t working with the hottest technology, or debating which JavaScript library they want to jump to next. They are making software products that many customers depend on to get work done.

The Future of Testing


We have companies to the left and right (often startups) experimenting and trying new development strategies that don’t always involve a professional software tester. In the middle, we have large companies that build the software the country runs on, all of them working to increase their delivery cadence from quarters to weeks. They are using older development methods and making the software that many people use, for industries such as health insurance, financial services, and the automotive business.

Software development changes very slowly. The agile manifesto was signed in 2001. There are still companies undergoing agile “transformations”. There are also companies that think they have transformed, but have completely missed the point. A future without testers is a complicated extension of the original agile vision. This future involves currently uncommon skill sets, tooling, and cultures built on agile principles. The struggle is real, even after 16 years.

When contemplating the future of the testing role it is important to consider the future of the testers themselves, as well as the future of the software testing industry.

The most popular question I hear is whether testers should learn to write code. For me, the answer is “Yes, obviously”. Non-technical testers do not always work dynamically in the sprint. They can participate in planning and design meetings at the beginning, and retrospectives at the end, but they can’t start the important task of investigating the software until a developer has written some code, checked it into the source code repository, started a build, and installed that build on a test server. The flow of work is different for software testers with technical skills.

One project I worked on took about two hours to build, including a full suite of unit tests. After a successful build, a tarball had to be FTP’d to a test server, and then installed. This took more time, and meant more waiting for the software test team. I wrote a script in Bash that would check that the latest build was green, and download the build files to our test server. The script would then perform the install and run a smoke test. No one on the test team lost time if there was a horrible bug that broke the install, or caused the smoke test to fail. If everything completed successfully, we could test a new version seamlessly. I was not a programmer, and still don’t claim to be, but stringing together small bits of code like that has saved me many times.
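
The original script was a few lines of Bash; purely as an illustration of the same idea, here is a hypothetical sketch of that kind of glue code in Python. The CI status endpoint, artifact URL, install path, and smoke-test command are all invented for the example.

# fetch_and_install.py -- a hypothetical sketch of "check the build is green,
# pull the artifact, install it, run a smoke test". All URLs and paths are
# placeholders, not the author's real environment.
import json
import subprocess
import sys
import urllib.request

CI_STATUS_URL = "https://ci.example.test/api/latest-build"     # assumed endpoint
ARTIFACT_URL = "https://ci.example.test/artifacts/app.tar.gz"  # assumed artifact
LOCAL_TARBALL = "/tmp/app.tar.gz"


def latest_build_is_green() -> bool:
    with urllib.request.urlopen(CI_STATUS_URL) as resp:
        return json.load(resp).get("result") == "SUCCESS"


def main() -> int:
    if not latest_build_is_green():
        print("Latest build is not green; skipping install.")
        return 1
    urllib.request.urlretrieve(ARTIFACT_URL, LOCAL_TARBALL)           # download
    subprocess.run(["tar", "-xzf", LOCAL_TARBALL, "-C", "/opt/app"], check=True)
    subprocess.run(["/opt/app/bin/smoke_test.sh"], check=True)        # quick sanity check
    print("New build installed and smoke test passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())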

Technology and development styles change over time, and testers must adjust their approaches, tools, and skill sets to keep pace. I think the future is bright for software testers. According to the Bureau of Labor Statistics*, software development jobs are projected to increase by 17% over the next eight years, and it makes sense that software testing jobs will grow too. If I am wrong and the testing role is going extinct, it will be a very gradual death. For anyone out there who is worried, I’d suggest working on your technical skills: if you’re still testing in 5 years, you’ll find yourself more skilled and more valued.

* https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm

Categories: Companies

Customer Corner – Nordstrom, Citrix, Red Hat and more

One of the more rewarding parts of my job is working with our team on customer stories. I always feel that a great case study has so much external and internal value that it should be shared as widely as possible. So, after just a few moments on our site and YouTube channel, I’ve collated a few standouts to share in the first Dynatrace “Customer Corner” blog post.

Nordstrom: From 8 weeks to 2 days for performance testing

Gopal Brugalette has headed up some very cool digital transformation initiatives at retail giant Nordstrom. Here, he talks about how his team uses Dynatrace to shorten release cycles and pinpoint issues instantly to stay ahead of the competition.

Coop: Largest retailer in Denmark avoids store closures

Jeppe Lindberg of Coop looks at how Denmark’s largest retailer avoided massive store closings on the launch day of a new loyalty app, thanks to Dynatrace’s built-in AI capabilities, and our ability to see every user, every application across the entire IT stack.

How Citrix uses Dynatrace for cloud systems insight

Nestor Zapata, Lead Systems Administrator at Citrix, highlights how he and the Citrix production teams use Dynatrace for faster application issue resolution, problem prevention and how they make smarter and more efficient decisions around their cloud systems.

Red Hat and Dynatrace help close the OpenShift technology gap

Chris Morgan, Technical Director from Red Hat, on how the depth of integration, AI and machine learning capabilities set Dynatrace apart as an OpenShift partner.

Raymond James: “Dynatrace sees all transactions; AppDynamics samples”

Jeff Palmiero, APM Manager at Raymond James, explains the role Dynatrace plays in helping monitor customer experience and enabling them to take proactive actions, no matter how complex the technology ecosystem. In this video he explains why ONLY Dynatrace is capable of delivering the application analytics required.

Customers at the heart of Perform 2018

What’s great about this collection of stories is that most were captured on the fly, without prep, at last year’s global Perform 2017 event in Las Vegas. But that’s what you get when you attend our Perform event series: customer stories from big brands and innovative, down-to-earth leaders who are only too happy to share their knowledge and insights with you.

So, why not jump over to our new “save the date” page and make plans to join us for a super-sized, customer-focused Perform 2018 at The Bellagio in Vegas, from 29th to 31st January. It’s going to be fun, informative, hands-on and inspiring, and we’re expecting about 2,000 people to join us. Will you?

The post Customer Corner – Nordstrom, Citrix, Red Hat and more appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

5 Lessons from over 100,000 Bug Fixes

Testlio - Community of testers - Tue, 06/20/2017 - 20:17
From left: Kristi, Jeff, and Michelle

Earlier this month we hosted a webinar with two of our finest QA managers, Jeff Scott and Kristi Kaljurand.

Jeff and Kristi shared some of what they’ve learned from reporting tens of thousands of fixed bugs — in essence, what makes high-quality software testing.

Want the full video experience? Check out the webinar recording here.

Without further ado, our lessons!

1. Flexibility makes amazing QA

“Flexibility is key in QA because, as you know, we may receive a build request for testing at any time of the day, and that can be at the end of the day at 6PM,” Jeff says. “Right when we are ready to walk out the door, we receive a notification from our client’s point of contact asking ‘Can we perform sanity tests overnight?’ It’s key to have passionate, flexible testers who have the knowledge and wherewithal, and who are also familiar with the app. That way we can easily pass down instructions, release notes, and so on, and they can provide us with excellent QA, with the results by morning. So all in all, I believe being passionate, being a good listener, and being flexible is extremely key.”

2. Visual aids help teams understand test cases and their structure

“We like to build out something called a mind map, and this allows us to formally break out each and every feature and functionality,” says Jeff. “That will give us a better chance to create test cases and then formulate the proper test plan or task list.”

A sample mind map.

“It’s a good visual aid to see what are the main features of an app, and our clients really like to see it in this way,” says Kristi. “Actually those nice red dots can show you which features have the most issues. Like they say ‘A picture is worth a thousand words.'” 

3. Exploratory testing is awesome — but it needs structure 

“We approach the testing structure in an exploratory way because regression testing only checks what you write down to check,” says Kristi. “Usually with regression testing, not many new issues come out. It only shows you that what’s supposed to work, works. Structured exploratory testing lets our testers look at the app the way an end user would, and the way we write the tests leaves room to do something more, which allows our testers to think outside the box. If the expected result is written down, then they usually look only at that. They don’t look outside.”

4. Issue reporting matters

“I think what frustrates developers is when they can’t really reproduce an issue, and I’m sure you have heard it, I’ve heard it,” Kristi says. “So it is really important to try and write down all the information so that the other person can reproduce the issue. Especially when working outside the windows, because you don’t have the privilege of going to the person and showing that it’s not working on your machine. Writing down all the details, all the videos: I think our platform is a good one that helps developers too.”

5. There’s no “perfect” QA team structure

“It’s been proven time and time again that all structures work,” Jeff says. “For instance, if you just have an internal QA team of two members who are awesome individuals that can test web, mobile, and connected devices all at the same time within the timeframe of an eight-hour workday, and get the work done, then that’s fine. Or you can have a structure of a QA manager, a few test leads, an analyst, and testers, all in-house, and also have vendors helping you out. It just depends on the actual workload and the platforms that need coverage.”

Categories: Companies
