Feed aggregator

Wall Street’s Use of Artificial Intelligence Leads the Way for Digital Performance Management Revolution

The evolution of APM (Application Performance Management)

APM (Application Performance Management) is on the precipice of dramatic change. The complexity of cloud-native and new-stack applications is making many traditional approaches to monitoring applications irrelevant. With applications that scale elastically using containers to meet unprecedented demand, the APM industry needs to move beyond simple problem identification to keep providing value for businesses. Understanding how application performance visibility can show what should be scaled and what does not need to be scaled is just as important to the business. Managing complexity is the challenge of the next generation of applications.

A radically new approach is needed.

Are there industries that have been pioneering a different way of doing things? The answer is “yes” and, interestingly enough, it starts with Wall Street. For years the financial services industry has been using highly advanced analytics, predictive algorithms, and automation to execute transactions and service customers. It’s time for a wider cross-section of industries to take advantage of these techniques to help them manage complex applications. Call it Artificial Intelligence, Machine Learning, or Algorithms: this kind of automation shows that we can identify patterns in data and react to them.

Let’s look at some real-world examples.

Stock Trading

In 2001, IBM published a paper highlighting how several algorithms (MGD and ZIP) were able to outperform human stock traders. These algorithms look for patterns in data in real time and, using machine learning, adapt and react as those patterns change. The decision to make (or not make) a trade is entirely automated. These algorithms, combined with machine learning, are responsible for billions of dollars of financial transactions. This is not science fiction; it’s reality.

Are there other examples? Yes, there are.

Credit Card Protection – Did you really buy this?

Credit card fraud is an estimated $16 billion problem. Credit card companies monitor billions of transaction events and use machine learning and algorithms to identify anomalies in cardholders’ activity. When a transaction occurs in a location the cardholder isn’t usually associated with, these algorithms flag it as suspect. Many of us have received calls from credit card companies asking whether we recognize an odd transaction. This has become a common industry use of machine learning.

These examples show that Artificial Intelligence – Machine Learning – Algorithms are a proven technology that can be used to analyze (and react to) massive amounts of data in real time. Exactly the sort of thing needed in addressing the complexity management challenge coming with the next generation of applications.

Digital Performance Management: the evolution of APM

Industry leaders like Jason Bloomberg have described the need for an Artificial Intelligence-Driven Vision for Digital Performance Management. He describes how machine learning can be applied to the complex data sets generated by Digital Performance Management assets.

Application Performance Management has become highly diversified over the past decade. Businesses have driven this, requiring a variety of services that meet the needs of different parts of their organizations: operations, development, lines of business, and so on. Recently, a new understanding of how businesses consume APM data has brought about the emergence of Digital Performance Management. Because the entire business relies on APM data to better serve end users, more and more emphasis has been placed on capturing the entire (full-stack) transaction, gap-free, for every user action. Be it a touch, a swipe, or an API request, regardless of how the end user interacts with an application, there is a complex chain of metadata captured in response to each request. The amount of data generated is immense. Some APM vendors approach this by looking at slices of data when things go wrong. The problem with this approach is that businesses want to understand everything about every transaction, not just a slice of it. Algorithms work better with more data: the more data there is, the better machine learning learns and the higher the accuracy. Capturing data slices therefore isn’t ideal for complete Digital Performance Management. Being able to see everything allows businesses to better understand what end users are doing, how performance impacts conversion, how to better protect the brand, and where to make technology investments. This is why businesses are turning to Digital Performance Management.

Artificial Intelligence – Machine Learning – Algorithms are particularly well suited to managing dynamic environments. To work effectively, the algorithms need to take into account the auto-discovery of compute resources as they are created. In a world where containerized new-stack capabilities can be generated elastically on demand, understanding what is being created and retired is vital if businesses are to know what is touching the end user. The algorithms also need to understand the data generated by end users and automatically baseline not only the amount of traffic but its performance, in order to identify anomalies impacting end users. Read more here.
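To make the baselining idea concrete, here is a minimal sketch of the concept in Java. It is an illustration only, not Dynatrace’s actual algorithm; the class name, window size, and tolerance factor are assumptions made for the example:

import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative rolling baseline that flags response times far above the recent average. */
public class ResponseTimeBaseline {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;   // how many recent samples form the baseline
    private final double tolerance; // e.g. 3.0 = flag samples above 3x the baseline

    public ResponseTimeBaseline(int windowSize, double tolerance) {
        this.windowSize = windowSize;
        this.tolerance = tolerance;
    }

    /** Returns true when a sample deviates strongly from the learned baseline. */
    public boolean isAnomaly(double responseTimeMs) {
        double baseline = window.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(responseTimeMs);
        boolean anomaly = window.size() >= windowSize && responseTimeMs > baseline * tolerance;
        window.addLast(responseTimeMs);
        if (window.size() > windowSize) {
            window.removeFirst();
        }
        return anomaly;
    }

    public static void main(String[] args) {
        ResponseTimeBaseline baseline = new ResponseTimeBaseline(100, 3.0);
        for (int i = 0; i < 100; i++) {
            baseline.isAnomaly(200 + Math.random() * 20); // learn normal traffic (~200 ms)
        }
        System.out.println(baseline.isAnomaly(1500));     // prints true: well above baseline
    }
}

A production-grade approach would also account for traffic volume, seasonality, and the discovered topology, but the core idea is the same: learn what normal looks like, then react when new data deviates from it.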

While Artificial Intelligence – Machine Learning – Algorithms can automatically manage massive amounts of data, the benefit for businesses of deploying these technologies alongside their Digital Performance Management assets is that they reduce the costs associated with problem identification and root-cause analysis. Businesses no longer have to divert their most costly human capital from the innovation they should be working on to ad hoc diagnostics.

Digital Performance Management is an example of a strategic set of highly complex real time data that businesses depend on. Artificial Intelligence – Machine Learning – Algorithms, whatever you want to call it, is an industry-proven solution that Wall Street has been using for years to help manage highly-complex dynamic data sets. It’s time for businesses to leverage the advances in machine learning to help them manage the complexity involved in servicing their end users.

The post Wall Street’s Use of Artificial Intelligence Leads the Way for Digital Performance Management Revolution appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

HPE Performance Testing Solutions Receive Premium Qualification for SAP Solution Extensions

HP LoadRunner and Performance Center Blog - Tue, 01/17/2017 - 17:22

HPE receives premium qualification from SAP Solution Extensions for HPE LoadRunner/Performance Center 12.53. Learn more.

Categories: Companies

Jenkins Upgrades To Java 8

In the next few months, Jenkins will require Java 8 as its runtime. Back in November, we discussed interesting statistics showing that Jenkins was now running on Java 8 on a majority of its running instances. Timeline: here is how we plan to roll out that baseline upgrade over the next few months.
  • Now: Announce the intention publicly.
  • April 2017: Drop support for Java 7 in Jenkins weekly. With the current rhythm, that means 2.52 will most likely be the first weekly release to require Java 8.
  • June 2017: The first LTS version requiring Java 8 is published. This should be something around 2.60.1.
If you are still running Java 7, you will not be...
Categories: Open Source

SCM API turns 2.0 and what that means for you

Due to regressions discovered after release, it is not recommended to upgrade the plugins listed below at this time. We are announcing the SCM API 2.0.x and Branch API 2.0.x release lines. Downstream of this there are also some great improvements to a number of popular plugins, including: GitHub Branch Source, Bitbucket Branch Source, Git, Mercurial, Pipeline Multibranch, and GitHub Organization Folders. There are some gotchas that Jenkins administrators will need to be aware of. Always take a backup of your JENKINS_HOME before upgrading any plugins. We want to give you the whole story, but the take-home message is this: when updating the SCM API and/or Branch API plugins to the 2.0.x release lines, if you have any of the GitHub Organization Folders, GitHub...
Categories: Open Source

APM on latest technologies is a “given” but what about visibility into legacy applications?

Getting full end-to-end visibility for critical systems is a must-have nowadays. Most APM solutions integrate fairly well out of the box with common products such as web servers and application servers, but what about other systems? Unfortunately, not all companies use the very latest cloud-based open-source technologies, or even the big application servers from IBM, Oracle, and Microsoft. Some still run in-house, custom-written applications somewhere in their systems. Where does APM fit?

I recently came across a very interesting case with a customer. Most of their back-end systems were using fairly standard technologies such as Java and Tomcat, but one of the core applications was still running as a very simple AWT Java thick client.

This customer is one of the market leaders in online and physical car auctions. The general public can use the standard website to place bids, but the auction centers still rely heavily on a simple but reliable Java client. Because this company wanted to be proactive rather than wait for calls from the clerks when things go bad, having monitoring everywhere became a must.

With agent based monitoring, Dynatrace agents can collect all the monitoring information and send it to a central server for analysis.

The first step here was to install the agent on the local machine and make sure that it was “injected” into the application. In our case, the code was delivered via Java Web Start (a.k.a. JNLP), so it was a little more challenging to get it working.

An environment variable called JAVA_TOOL_OPTIONS was used to set the agent path. This way, the customer was able to “soft launch” the new agent by setting (or not setting) the environment variable, depending on which auction center would go first.

Here is an example of how it was defined:

set JAVA_TOOL_OPTIONS=-agentpath:C:\dynatrace\agent\lib\dtagent.dll=name=FD_Clerk_agent,collector=x.x.x.x:9998

For non-generic applications like this, most tools can’t show much beyond memory and CPU consumption figures, which isn’t enough. On a side note, it was quite surprising to see that the JVM was using a maximum of only 20 MB of RAM, where most processes nowadays consume gigabytes!

Dynatrace is compatible with any Java application. It automatically maps transactions to PurePaths (a digest representation of what was executed and where) for most known application servers. The real power of Dynatrace is that it provides a generic engine that lets users tailor their configuration to the nth degree. It is therefore possible to “teach” Dynatrace to recognize any system, not just J2EE application servers. This is done by spending a little time with the developers, going through the source code to find the pieces of code that define a user transaction. Once identified, all we needed was a custom sensor rule to indicate where a PurePath should start.

From this point, any badly performing code will be highlighted by auto-sensors.

Auto-sensors are a lightweight way of highlighting long-running code. They are great because, unlike custom sensors, they don’t require any upfront configuration. Dynatrace uses both, which is part of what makes it so powerful. In our case, we only needed two method sensors: one for sending messages and one for receiving messages from the back-end bidding engine. A third method sensor (on processEvent) was added for convenience.

Because the code was so simple, not much more was required. Each message handler passes the message in text format as a method argument, so the custom sensor was configured to capture that argument. With the data available, it was easy to build a dashboard showing activity broken down by auction center.
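For illustration, the kind of code involved might look like the sketch below. The class and method names are hypothetical (this is not the customer’s actual source); the point is that the custom sensors sit on the send/receive methods, and the String argument is what Dynatrace was configured to capture:

/** Hypothetical shape of the thick client's messaging code (illustrative only). */
public class BiddingMessageHandler {

    /** A custom sensor on this method starts a PurePath; the message argument is captured. */
    public void sendMessage(String message) {
        // e.g. "BID;HallA;lot42;amount=1500" sent to the back-end bidding engine
        // (network code omitted)
    }

    /** Second custom sensor: messages coming back from the bidding engine. */
    public void receiveMessage(String message) {
        processEvent(message);
    }

    /** Third sensor, added for convenience, on the event processing itself. */
    private void processEvent(String message) {
        // parse the message and update the auction UI
    }
}

The captured message text is what the dashboard later splits by message type and auction center.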

The customer can now see bids being placed, new auctions starting, and so on. They no longer need to pick up the phone to check whether auctions are running fine.

The screenshot below was taken during the proof of concept, which was deployed with two agents called Hall A (left) and Hall B (right). The upper section shows each message type received by the auction applications; at the bottom are messages sent out, such as bid requests. Each type has its own color. The average response time was added in blue to complete the picture.

The only challenge was defining regular expressions to extract the different message types because, unfortunately, the messages were not formatted consistently. This example really shows how flexible Dynatrace can be. A plug-and-play monitoring solution would not have worked in this case, but with Dynatrace it was possible.
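As a rough illustration of the kind of pattern matching involved, the sketch below extracts a message type from two invented formats. The formats and regular expressions are assumptions for the example only; the customer’s real messages and expressions were different:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MessageTypeExtractor {
    // Two patterns because the messages were not formatted consistently.
    private static final Pattern SEMICOLON_FORMAT = Pattern.compile("^([A-Z_]+);");
    private static final Pattern KEY_VALUE_FORMAT = Pattern.compile("type=([A-Z_]+)");

    public static String extractType(String message) {
        for (Pattern pattern : new Pattern[] {SEMICOLON_FORMAT, KEY_VALUE_FORMAT}) {
            Matcher matcher = pattern.matcher(message);
            if (matcher.find()) {
                return matcher.group(1);
            }
        }
        return "UNKNOWN";
    }

    public static void main(String[] args) {
        System.out.println(extractType("BID;HallA;lot42"));           // BID
        System.out.println(extractType("msg type=NEW_AUCTION id=7")); // NEW_AUCTION
    }
}

In Dynatrace itself the expressions go into the monitoring configuration rather than into code, but the matching idea is the same.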

The post APM on latest technologies is a “given” but what about visibility into legacy applications? appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Introducing Cassandra monitoring (beta)

We’re happy to announce the beta release of Dynatrace Cassandra monitoring! Apache Cassandra server monitoring provides information about database exceptions, failed requests, performance, and more. You’ll know immediately if your Cassandra databases are underperforming. And when problems occur, it’s easy to see which nodes are affected.

To view Cassandra monitoring insights

  1. Click Technologies in the menu.
  2. Click the Cassandra tile.
    Note: Monitoring of multiple Cassandra clusters isn’t supported in this beta release.
  3. To view cluster metrics, expand the Details section of the Cassandra process group.
  4. Click the Process group details button.
  5. On the Process group details page, select the Technology-specific metrics tab, where you can identify problematic nodes.
  6. Select a relevant time interval from the Time frame selector in the top menu bar.
  7. Select a metric type from the metric drop-down list beneath the timeline to compare the values of all nodes in a sortable table view.
  8. To access node-specific metrics, select a node from the Process list at the bottom of the page.
  9. Click the Cassandra metrics tab. Here you’ll find valuable Cassandra node-specific metrics. The Exceptions and Failed requests charts show you if there’s a problem with the node. Pay particular attention to Unavailable Read/Write/RangeSlice counts. Increased latency while the number of operations remains stable typically indicates a performance issue.
Cassandra node metrics

Metric | Chart | Description
Exception count | Exceptions | Number of internal Cassandra exceptions detected. Under normal conditions, this metric should be zero.
Unavailable – Read | Failed requests | Number of Unavailable – Read exceptions encountered
Unavailable – Write | Failed requests | Number of Unavailable – Write exceptions encountered
Unavailable – RangeSlice | Failed requests | Number of Unavailable – RangeSlice exceptions encountered
Timeout – Read | Failed requests | Number of Timeout – Read exceptions encountered
Timeout – Write | Failed requests | Number of Timeout – Write exceptions encountered
Timeout – RangeSlice | Failed requests | Number of Timeout – RangeSlice exceptions encountered
Failure – Read | Failed requests | Number of Failure – Read exceptions encountered
Failure – Write | Failed requests | Number of Failure – Write exceptions encountered
Failure – RangeSlice | Failed requests | Number of Failure – RangeSlice exceptions encountered
Read | Operation count | Average number of reads per second
Write | Operation count | Average number of writes per second
RangeSlice | Operation count | Average number of RangeSlices per second
Read Latency | 95th percentile | Average 95th percentile of transaction read latency
Write Latency | 95th percentile | Average 95th percentile of transaction write latency
RangeSlice Latency | 95th percentile | Average 95th percentile of transaction RangeSlice latency

Additional Cassandra node monitoring metrics

More Cassandra monitoring metrics are available on individual Process pages. Select the Further details tab to view these metrics.

Here you’ll find six tabs and plenty of informative metrics.

The Cache tab tells you about the Row cache and Key cache hit rates. The Disk usage tab provides essential understanding of the health of the Cassandra compaction process. On the Load tab you’ll find details about ongoing and past operations. Above-average Maximum latency measurements may indicate that you have some very slow requests. Charts on the Thread Pools tab should be empty, or at least show very low values. A continuously high number of pending reads indicates a problem. For full details, see Pending task metrics for reads.

Additional Cassandra metrics

Metric | Chart | Description
Disk space | Total disk space used | Total disk space used by SSTables, including obsolete tables waiting to be GC’d
Row cache hit rate | Hit rate | 2m row cache hit rate
Key cache hit rate | Hit rate | 2m key cache hit rate
Load | Storage load | Size, in bytes, of the on-disk data the node manages
Bytes compacted | Bytes compacted | Total number of bytes compacted since server start
Pending tasks | Compaction tasks pending | Estimated number of compactions remaining to perform
Completed tasks | Compaction tasks completed | Number of completed compactions since server start
SSTable count | SSTable count | Number of SSTables on disk for this table
Hints | Hints | Number of hint messages written to this node since start. Includes one entry for each host to be hinted per hint
Average | Read latency | Average 95th percentile of transaction read latency
Maximum | Read latency | Max 95th percentile of transaction read latency
Average | Write latency | Average 95th percentile of transaction write latency
Maximum | Write latency | Max 95th percentile of transaction write latency
Average | RangeSlice latency | Average 95th percentile of transaction RangeSlice latency
Maximum | RangeSlice latency | Max 95th percentile of transaction RangeSlice latency
Average | Read throughput | Average number of reads per second
Maximum | Read throughput | Max number of reads per second
Average | Write throughput | Average number of writes per second
Maximum | Write throughput | Max number of writes per second
Average | RangeSlice throughput | Average number of RangeSlices per second
Maximum | RangeSlice throughput | Max number of RangeSlices per second
Mutation pending tasks | Mutation pending tasks | Number of queued mutation tasks
Read pending tasks | Read pending tasks | Number of queued read tasks
ReadRepair pending tasks | ReadRepair pending tasks | Number of queued ReadRepair tasks

Prerequisites
  • Cassandra 2.xx
  • Linux or Windows OS
Enable Cassandra monitoring globally

With Cassandra monitoring enabled globally, Dynatrace automatically collects Cassandra metrics whenever a new host running Cassandra is detected in your environment.

  1. Go to Settings > Monitoring > Monitored technologies.
  2. Set the Cassandra JMX switch to On.
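Dynatrace gathers these metrics via Cassandra’s JMX interface (the switch above), so if metrics don’t appear it can be useful to confirm that a node’s JMX endpoint is reachable at all. The sketch below is one way to do that; the host, port (7199 is the Cassandra default), and the exact MBean attributes are assumptions for a default, unauthenticated Cassandra 2.x node, so adjust them for your environment:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CassandraJmxCheck {
    public static void main(String[] args) throws Exception {
        // Cassandra exposes JMX on port 7199 by default; adjust host/port as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Client read-request latency metric as exposed by Cassandra 2.x.
            ObjectName readLatency = new ObjectName(
                    "org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency");
            System.out.println("Read count: "
                    + connection.getAttribute(readLatency, "Count"));
            System.out.println("Read latency 95th percentile: "
                    + connection.getAttribute(readLatency, "95thPercentile"));
        } finally {
            connector.close();
        }
    }
}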
Want to learn more?

Visit our dedicated webpage about Cassandra monitoring to read more about how Dynatrace supports Apache Cassandra.

Have feedback?

Your feedback about Dynatrace Cassandra monitoring is most welcome! Let us know what you think of the new Cassandra plugin by adding a comment below. Or post your questions and feedback to Dynatrace Answers.

The post Introducing Cassandra monitoring (beta) appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

OneAgent and Security Gateway release notes for version 111

OneAgent Java
  • Support for Mule 3.7 HTTP listener
  • Support for JAX-WS Spring remoting
  • Support for REST web services via WINK framework
  • Support for Restlet WS
  • Support for Hessian web services
.NET
  • Beta support for Owin/Katana. Please contact Dynatrace Support for details.
Nginx
  • Support for Nginx 1.11.8 mainline
General improvements and fixes
  • Improved memory usage reporting for Linux hosts
  • Automatic detection of ActiveMQ and ColdFusion
  • Improved detection of RabbitMQ, taking into account a new way of starting RabbitMQ clusters (rabbit_clusterer)
  • Fixes for OneAgent network & system monitoring stability
  • Fixed issues with process crash reporting
  • Docker 1.13 RC2 is supported by Docker containers plugin
  • Fixes for OneAgent auto-update
  • Support for Windows Server 2016
Security Gateway
  • Fixed debug log flooding, which caused fast log rotation

The post OneAgent and Security Gateway release notes for version 111 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Manipulating XML files with XPath strings in IBM UrbanCode Deploy

IBM UrbanCode - Release And Deploy - Mon, 01/16/2017 - 18:48
I’m learning that people really want to manipulate XML files with their IBM UrbanCode Deploy servers! We get so many questions about the Update XML With XPath automation step (both through the forum and through direct contact) that I wrote an article about some common things that come up when people want to edit XML files as part of their deployment automation: Using the Update XML with Xpath step

… and we **still** get a ton of XPath questions!

The most common use case seems to be a server configuration that needs an edit when a new application or version of an application gets deployed. However, this step can edit just about any XML file, though there are some gotchas and idiosyncrasies that are listed in the article. (I might go into some detail about those in a later post.)

The basics are pretty simple: first, you give the step the name and location of an XML file or files. Then you give it some rules for changes to make in the XML file. To tell the step to change some text, for example, you give it an XPath string that points to the text, then a little text “arrow” (->), and then the new text.

For example, here’s some sample XML from the article:

<?xml version="1.0" encoding="UTF-8"?>
<testFile>
  <myData>
    <data name="filePath" value="/opt/sampleDirectory"/>
    <data name="textData">Here is some text.</data>
    <data att1="one" att2="two" name="attributes" value="something"/>
  </myData>
</testFile>

To change an attribute value, you use an XPath string that points to that attribute, and then you put in the new text. For example, to change the “/opt/sampleDirectory” to “/usr/otherDirectory”, put this in the Replace with text field: //testFile/myData/data[@value]/@value->/usr/otherDirectory

To change the text of a node, refer to it as “text()”, for example: //testFile/myData/data[@name='textData']/text()->Some new text content.
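For reference, assuming the step applies each rule to the node the author intends, the sample file would end up looking like this (note that the first XPath, //testFile/myData/data[@value]/@value, also matches the value attribute of the third data element, so a more specific path such as //testFile/myData/data[@name='filePath']/@value avoids any ambiguity):

<?xml version="1.0" encoding="UTF-8"?>
<testFile>
  <myData>
    <data name="filePath" value="/usr/otherDirectory"/>
    <data name="textData">Some new text content.</data>
    <data att1="one" att2="two" name="attributes" value="something"/>
  </myData>
</testFile>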

There are also steps to add and remove XML nodes.

Got an unusual file that you need to make changes to as part of your deployment automation? Keep the XPath questions coming in the forum and we’ll answer as many as we can.

Categories: Companies

Test Management Forum, London, UK, January 25 2017

Software Testing Magazine - Mon, 01/16/2017 - 09:30
The Test Management Forum is a one-day conference on software testing that takes place in London. This afternoon event offers several talks that explore all aspects of software testing and software quality, from Agile approaches to DevOps. In the agenda of the Test Management Forum conference you can find topics like “Usability testing, a manual human test”, “The challenges of delivering performance for Agile and Continuous Delivery”, “Bringing quality and agility to startups”, “Agile delivery – Why does testing get left behind?”, “Exploratory learning – throwing away the slideware”, “A hierarchy of Software Testing Measures and Metrics – Discuss?”, “You’ve crowdsourced your hotel and your taxi, what about your testing?”, “What’s all the fuss about DevOps?”, “State of the Automation”, “What’s so great about WebDriver?”. Web site: http://uktmf.com/ Location for the Test Management Forum conference: Balls Brothers, Minster Pavement, Mincing Ln, London EC3R 7PP, United Kingdom
Categories: Communities

Software Quality Days, Vienna, Austria, January 17-20 2017

Software Testing Magazine - Mon, 01/16/2017 - 09:00
The Software Quality Days is a four-day conference focused on quality in software development that takes place in Vienna. It aims to present and discuss the latest trends and best-practice methods in quality management, as well as ideas on improving methods and processes. Presentations are in German and in English. In the agenda of the Software Quality Days you can find topics like “From pair programming to mob programming to mob architecting”, “Specification by Example”, “Practical Quality Scorecards”, “Practical Tools for Simplification of Software Quality Engineering”, “Traceability in a Fine Grained Software Configuration Management System”, “A Wearables Story: Testing the Human Experience”, “A portfolio of internal quality metrics for software architects”, “Mobile Apps Performance Testing using open source tool JMeter”, “Software Security Validation in the Industrial Data Space”, “The Impact of Testing – By Example”, “Risk Based Testing in a Project following Very Early Testing”, “The Elephant in the room: Continuous Delivery for Databases (DLM)”, “How Scrum tools may change your agile software development approach”, “Questions of Sanity: Verifying Early That You’re on the Right Path”, “Software Quality Assurance with Static Code Analysis”, “From requirements to automated tests with a BPM approach”, “How to manage people who manage testing?”. Web site: https://2017.software-quality-days.com/en/ Location for Software Quality Days conference: Austria Trend Hotel Savoyen, Rennweg 16, 1030 Vienna, Austria
Categories: Communities

Using Analysis Templates to Customize Performance Test Run Reports in Performance Center

HP LoadRunner and Performance Center Blog - Sun, 01/15/2017 - 13:30

Discover how to customize your performance test run reports using Analysis templates.

Categories: Companies

DevOps for Small Organizations: Lessons from Ed

Sonatype Blog - Fri, 01/13/2017 - 16:07
Ed was demoralized. He had just heard a speaker who would change his life. He knew he needed to change, and he knew what the end goal was. He just didn’t know how to get there. He needed fresh air. He needed endorphins. What better way to do that than go on a 6-hour run through some of the seedier...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Terms for building applications with Rational Team Concert and deploying them with IBM UrbanCode Deploy

IBM UrbanCode - Release And Deploy - Fri, 01/13/2017 - 12:09
Do you use Rational Team Concert (RTC) to build applications? If so, you may not know that you can configure RTC to send the built artifacts to IBM UrbanCode Deploy (UCD) for automatic deployment. That’s part of a continuous delivery solution that’s discussed here: Achieving continuous deployment with UrbanCode Deploy by integrating with Rational Team Concert.

Here’s a glossary white paper of RTC-UCD terms that we put together for a customer who was learning how to use these products together: RTC and UCD terminology

Need help setting up your RTC-UCD integration? There are instructions here: Achieving continuous deployment with UrbanCode Deploy by integrating with Rational Team Concert, and there’s more information about the details of the UCD setup on the RTC SCM plugin for UCD on the plug-in page. Let us know in the forum if you’ve got other questions.
Categories: Companies

Blue Ocean Dev Log: January Week #2

As we get closer to Blue Ocean 1.0, which is planned for the end of March, I figured it would be great to highlight some of the good stuff that has been going on. It’s been a busy-as-usual week as everyone comes back from vacation. A couple of new betas went out this week. Of note:
  • Input to Pipelines is now supported, a much-asked-for feature (see below)
  • A new French translation
  • Some optimisations (especially around reducing the number of HTTP calls). We have started using gtmetrix.com to measure changes on dogfood to get some numbers around optimisations on the web tier.
  • And a grab bag of other great bug fixes.
Also a bunch...
Categories: Open Source

System definition and confidence in the system

Thoughts from The Test Eye - Thu, 01/12/2017 - 19:16
Ideas

As a tester, part of your mission should be to inform your stakeholders about issues that might threaten the value of the system/solution. But what if you as a tester do not know the boundary of the system? What if you base your confidence in the result of your testing on a fraction of what you should be testing? What if you do not know how or when the system/solution is changed? If you lack this kind of control, how can you say that you have confidence in the result of your testing?

These questions are related to testability. If the platform we base our knowledge on is in constant flux, then how can we know that anything we have learnt is correct?

An example: in a project I worked on, the end-to-end solution was extremely big, consisting of many subsystems. The solution was updated by many different actors, some doing it manually and some with continuous deployment. The bigger solution was changed often, in some cases without the awareness of the other organisations. The end-to-end testers sometimes performed a test that took a fair amount of time. Quite often, they started one test and, during that time, the solution was updated or changed with new components or subsystems. It was difficult to get any kind of determinism in the results of testing. When writing down the result of a test, you probably want to state which version of the solution you were using at the time. But how do you refer to the solution and its version in a situation like this?

When you test a system and document the result of your tests, you need to be able to refer to that system in one way or another. If the system is changed continuously, you also need to know when it is changed, and what and where the change is. If you do not know what and where the changes are, it will be harder for you to plan the scope of your testing. If you do not know when, it is difficult to trust the result of your tests.

One way of identifying your system is to first identify what the system consists of, considering the boundary of the system and what is included. Should you include configuration of the environment as part of the system? I would. Still, there are no perfect oracles; you will only be able to define the system to a certain extent. The table below shows one way of expressing a system version as the sum of its component versions.

Sub systems | System version 1.0 | System version 1.1 | System version 1.2
component 1 version | 1.0 | 1.1 | 1.1
component 2 version | 1.0 | 1.0 | 1.0
component 3 version | 1.0 | 1.0 | 1.1

As you define the parts or components of the system, you can also determine when each is changed. The sum of those components is the system and its version. I am sure there are many ways to do this. Whatever method you choose, you need to be able to refer to what it is.

I think it is extremely important that you do everything you can to explore what the system is and what possible boundaries it could have. You need many different views of the system, creating many models and abstractions. In the book “Explore It!”, Elisabeth Hendrickson writes about identifying the ecosystem and performing recon missions to chart the terrain, which is an excellent way of describing it. When talking about test coverage you need to be able to connect it to a model or a map of the system. By doing that you also show which areas you know are coverable. Another way of finding out what the system is, is to use the Heuristic Test Strategy Model by James Bach, specifically by exploring Product Elements. Something I have experienced is that when you post and visualize the models of the system for everyone to see, you will immediately start to get feedback about them from your co-workers. Very often, there are parts missing or dependencies not shown.

If one of your missions as a tester is to inform stakeholders so they can make sound decisions, then consider whether you know enough about the system to recommend a release to the customer or not. Consider what you are referring to when you talk about test coverage and whether your view of the system is enough.

References

  1. Explore It! by Elisabeth Hendrickson – https://pragprog.com/book/ehxta/explore-it
  2. Heuristic Test Strategy Model by James Bach – http://www.satisfice.com/tools/htsm.pdf

  3. The Oracle Problem – http://kaner.com/?p=190

  4. A Taxonomy for Test Oracles by Douglas Hoffman – http://www.softwarequalitymethods.com/Papers/OracleTax.pdf

Categories: Blogs

Complexity is also a Bug

PractiTest - Thu, 01/12/2017 - 15:00

Last month I attended the DevOps Days Tel Aviv conference. I took some notes about the interesting stuff people said – like many people do during a conference – and then I put the notebook away – like most people usually do after a conference…

But yesterday I was going over my notebook – I guess I wanted to check what I still had to do from my endless to-do lists – and I stumbled onto my notes from the conference. Among all the scribbles and drawings there was a sentence that caught my eye, something I had written and then forgotten, but that managed to grab my attention when I saw it again.

What was written in this sentence was:

Complexity is also a Bug.
Complexity is increasing and will continue to increase

Below the sentence were the notes I had taken from the speaker’s presentation.

He was a Founder/Programmer/Geek/CEO from The Valley, and he explained how from his point of view complexity in software and applications was increasing, and would continue to increase with time.

The point he was driving at was that, as development companies, we have to make choices about who handles this complexity.  He looked at this from a DevOps perspective and so he said that there were 3 choices for who could “take on” this complexity:  Development, IT or the End Users.

To put it simply, applications need to do more: talk to more applications in order to do stuff, do this stuff faster, and do it in more diverse environments.  We are not really doing less of anything (unless you count less sleeping and relaxing).

This being the case, someone needs to “pay the penalty” for doing more, and this penalty could be handled by any of the 3 teams I mentioned above.

It could be handled as part of the development process, by creating more complex algorithms and smarter code to handle all the complexity.

Parts of it could be passed on to IT, who need to deploy and configure the system so that it can handle some of this added complexity.

And finally, all the stuff not handled by any of the previous two teams would end up being passed to the End User, who would need to work with more complicated installations and configurations, and applications that were less friendly.

In the end, what he was explaining is that complexity is a Zero-Sum game, and as part of our development projects we need to decide who will be handling it.

Looking at complexity as a bug in the product

Complexity is not a feature, and it is not an objective attribute like load level or application response time, but it is no less important than any of the other attributes of our products, such as a nice GUI or a good UX. Actually, complexity is usually handled via a combination of features and UX solutions.

Having said that, we need to handle excessive complexity as a bug in the products we are testing, but this may sometimes be trickier than you think.

  • First of all, complexity depends on the user.

Try to think about two different users: for example, your dad vs. yourself. My father is a great surgeon, but he is pretty bad with iPhone apps. This means that something that is trivial to me may be impossible for him to figure out.

On the other hand, if you compare me with some of the developers we have in PractiTest, then I become the guy who cannot get most of our configuration done straight. In this second case, something that may be trivial to them is utterly complex to me.

  • Second of all, complexity is a trade-off.

I think we have all been in those projects where the Product Owner or Product Manager or even the End User comes and says:  “Guys, we need to release this!  It’s time to make compromises…”

“Compromises” is just another term for bugs and missing features. These are the cases where the project is already delayed and we need to deliver something, even if it is not perfect. And in these cases, one of the first things to be “compromised” is complexity.

This is when we agree that we can release a document on how to configure the system manually instead of having intelligent self-configuration, or that the installer will ask many questions whose answers could have been taken from the system directly.

These are not critical things, but depending on who the user is, they may be a blocker that prevents him/her from installing the system – or, in the best of cases, a nuisance they would have preferred not to deal with.

  • Finally, it is very hard to handle complexity after the feature has been developed.

Imagine you order a car to be custom made for you.  You talked about the color, the shape, the interiors, wheels, mirrors, etc…

Then, when they show you the finished car, you realize that (1) you wanted this car to be driven in the UK, where they drive on the left side of the road, (2) you wanted to save money by having it run on a diesel engine, and finally (3) you wanted an automatic car, and the one they are showing you has a stick shift…

Oops!

Just as making these changes once the car is finished will be very costly and will definitely delay the day you get to drive it, so it is with trying to reduce the complexity of a system once it has been built and is close to being released.

It is true that by working in an Agile way, with iterations, these issues should be reduced, but they will not be eliminated. Many times the issues won’t be discovered until the product actually reaches the end user, so solving the complexity issues or adding new functionality to handle them becomes even more costly and may end up carrying additional marketing repercussions and costs.

Complexity is an issue that needs to be taken seriously into account

As you may understand by now, complexity is not only here to stay but it will become more of an issue as our products need to do more and communicate with additional products and solutions in order to fulfill their objectives.

Complexity is also something that, if we don’t handle it as part of product design and development, will be passed on to our end users to handle – or they may choose not to work with our solutions because they don’t want to, or can’t, handle the complexity we are relegating to them.

And finally, complexity is easier, faster and cheaper to handle early in the process than later on.

But most importantly, given that your users will see complexity as a flaw in your product, it is better that you start considering it as a bug that needs to be reported and eventually handled by your team.

The post Complexity is also a Bug appeared first on QA Intelligence.

Categories: Companies

Flexible group-based permissions management!

We’ve upgraded the Dynatrace permission management system to make it more flexible and to give you more control over managing permissions for groups. The new system isn’t based on hierarchical roles, but rather on groups, reflecting Unix- and Windows-based permissions. It enables you to create groups that have pre-defined (fully customizable) permissions sets—users added to a group inherit the permissions of that group.

Groups, users, and permissions

Groups

To get you started, Dynatrace provides a new default set of editable user groups that cover all the roles and permissions that were available in the previous permission system. The same separation of account and environment permissions has been retained.

Default account groups
  • Account manager. Has full account access. Can view and edit company data, enter credit card data, review invoices, create and edit groups, and add users to groups. Also has access to environment consumption data, Help, and Support.
  • Finance admin. Can enter credit card data and review invoices. Has access to company/billing address info, environment consumption data, Help, and Support. Can’t edit groups or assign users to groups.
  • Account viewer. Has access to environment consumption data, Help, and Support. No access to credit card data, invoices, or company/billing address info. Can’t edit groups or assign users to groups.
Default environment groups
  • Monitoring admin. Has full environment access. Can change monitoring settings. Can download and install OneAgent.
  • Deployment admin. Can download and install OneAgent. Has read-only access to the environment. Can’t change settings.
  • Monitoring viewer. Can access the environment in read-only mode. Can’t change settings. Can’t download or install OneAgent.
  • Log viewer. Can access and view the contents of log files. Only available to personnel who have been granted permission to view sensitive log data. No other access rights.
Permissions

Groups are fully customizable and can be modified to contain any permission you require for a specific group. Even the default groups can be modified to meet your needs. Just select/deselect the predefined permissions you want when setting up groups. Once permissions are assigned to a group, users added to that group inherit the permissions of the group.
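As a toy illustration of how this kind of group-based resolution works in general (this is not Dynatrace’s implementation; the class, enum, and group names are invented for the example), a user’s effective permissions are simply the union of the permissions of all groups they belong to:

import java.util.Arrays;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.Set;

/** Toy model of group-based permission resolution (illustrative only). */
public class PermissionModel {

    enum Permission { ACCESS_ENVIRONMENT, CHANGE_MONITORING_SETTINGS, INSTALL_ONEAGENT, VIEW_LOGS }

    static class Group {
        final String name;
        final Set<Permission> permissions = EnumSet.noneOf(Permission.class);
        Group(String name, Permission... perms) {
            this.name = name;
            this.permissions.addAll(Arrays.asList(perms));
        }
    }

    static class User {
        final Set<Group> groups = new HashSet<>();

        /** Effective permissions are the union of all assigned groups' permissions. */
        Set<Permission> effectivePermissions() {
            Set<Permission> result = EnumSet.noneOf(Permission.class);
            for (Group group : groups) {
                result.addAll(group.permissions);
            }
            return result;
        }
    }

    public static void main(String[] args) {
        Group viewer = new Group("Monitoring viewer", Permission.ACCESS_ENVIRONMENT);
        Group deployer = new Group("Deployment admin",
                Permission.ACCESS_ENVIRONMENT, Permission.INSTALL_ONEAGENT);

        User user = new User();
        user.groups.add(viewer);
        user.groups.add(deployer);
        // Aggregated view across all assigned groups, like the Permission preview described below.
        System.out.println(user.effectivePermissions());
    }
}

Dynatrace’s actual model adds more detail (account vs. environment scope, for instance), but this union-of-groups idea is what the Permission preview described below shows.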

Account permissions
  • Access account. Can access account to view environment data (host hours, sessions, and web checks). Can access Help and Support (create Support tickets, view documentation, and visit the Dynatrace Answers user forum). No access to billing or user/group management.
  • Edit billing & account info. Allows access to payment data (credit card details), billing data (invoices), and contact information (company/billing address).
  • Manage users. Allows access to user management (can add users to groups) and group management (can create, edit, and delete groups).
Environment permissions
  • Access environment. Allows read-only access to the environment. Can’t change settings. Can’t install OneAgent.
  • Change monitoring settings. Can change all Dynatrace monitoring settings. Can’t install OneAgent.
  • Download & install OneAgent. Allows download and installation of OneAgent on hosts. Can’t change Dynatrace monitoring settings.
  • View logs. Allows access to log file content, which may contain sensitive information.
  • View sensitive request data. Allows viewing of potentially sensitive data (for example, previously captured HTTP Headers, method arguments, and literals within database statement parameters).
  • Configure request capture data. Allows configuration of request-data capture rules, which can be used to capture data such as HTTP Header or Post parameters within requests. Captured request data can be stored, filtered, and searched.
Manage groups and users

The new user and group permissions controls are available when you sign into your account. Just select User management or Group management from the menu on the left-hand side.

View list of groups

To view the list of groups associated with your account, select Group management from the menu.

Note: This feature is only available to users who have the Manage users permission.

Create a new group
  1. Select Group management from the menu.
    Note: This feature is only available to users who have the Manage users permission.
  2. Click Create new group.
  3. Enter a Group name.
  4. Select relevant permissions (account and/or environment permissions).
    At least one permission must be selected.
  5. Click Add group.

Edit a group
  1. Select Group management from the menu.
    Note: This feature is only available to users who have the Manage users permission.
  2. Click the Edit (V) button on the right-hand side.
  3. Select/Deselect permissions as required.
  4. (Optional) Type a new Group name.
  5. Click Save.

Delete a group
  1. Select Group management from the menu.
    Note: This feature is only available to users who have the Manage users permission.
  2. Click the corresponding Delete (x) button on the right-hand side of the group list.
  3. Click Yes to confirm the deletion.
    You can delete groups that have one or more users assigned to them.

View list of users

To view the list of users and their permissions associated with your account, select User management from the menu.

Note: This feature is only available to users who have the Manage users permission.

Invite a user to your account
  1. Select User management from the menu.
    Note: This feature is only available to users who have the Manage users permission. Other users must use the Invite a co-worker option (available on your account’s Environment page).
  2. Click Invite user.
  3. Type the new user’s Email address.
  4. Click a group name to add or remove the user from that group.
    You need to select at least one group.
  5. To see which permissions the user inherits from all the groups they are members of, click Permission preview.
  6. Click Invite.
    If the user isn’t already a Dynatrace user, they will receive a link they can use to complete the signup process. If they are already a Dynatrace user, they will receive a link to the specified environment.

Edit a user’s group assignments
  1. Select User management from the menu.
    Note: This feature is only available to users who have the Manage users permission.
  2. Locate the relevant user in the list and click the corresponding Edit (V) button on the right-hand side.
  3. Click a group name to add or remove the user from that group.
  4. Review the permissions by clicking Permission preview.
    This is an aggregated view of all permissions of all groups the user is assigned to.
  5. Click Save.

Delete a user
  1. Select User management from the menu.
    Note: This feature is only available to users who have the Manage users permission.
  2. Locate the relevant user in the list and click Delete (X) on the right-hand side.
  3. Click Yes to confirm the deletion.

The post Flexible group-based permissions management! appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Declarative Pipeline Syntax Beta 2 release

This week, we released the second beta of the new Declarative Pipeline syntax, available in the Update Center now as version 0.8.1 of Pipeline: Model Definition. You can read more about Declarative Pipeline in the blog post introducing the first beta from December, but we wanted to update you all on the syntax changes in the second beta. These syntax changes are the last compatibility-breaking changes to the syntax before the 1.0 release planned for February, so you can safely start using the 0.8.1 syntax now without needing to change it when 1.0 is released. A full syntax reference is available on the wiki as well. Syntax Changes Changed "agent" configuration...
Categories: Open Source

Dynatrace Managed feature update for version 110

Following the release of version 110, here are the latest enhancements that we’ve introduced to Dynatrace Managed.

Improved load balancing capabilities for the user interface

With version 110, you no longer need to worry about sticky sessions when load-balancing the user interface. Dynatrace ensures that cluster nodes can handle all user requests, even when user sessions are initiated on different nodes. Also, multi-node clusters that use the automatic domain name and certificate management capabilities don’t require that you log in again when DNS lease time expires.

Improved event notification logic

To avoid frequent notifications about problematic hardware or network instability between cluster nodes, Dynatrace now employs a refined mechanism that suppresses alerts for such issues (for example, short network interruptions that have no functional impact).

Notification of exceeded quotas

When the monthly or yearly quota of user sessions for a Dynatrace environment is exceeded, all environment users are notified automatically.

The post Dynatrace Managed feature update for version 110 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Customer complaint resolution in today’s digital multi-channel world!

I have seen many companies struggle to find out what issues may be driving customer complaints when they come in to a support or service center, and today’s digital multi-channel strategies do not make it easier. With customer complaints, the following question always surfaces: is the complaint because of a technical issue, or is it something else? To get visibility into technical issues, many companies have tried web session recording and replay tools. But these solutions cannot capture all customer visits, for numerous reasons including the overhead they put on the application. With these tools, capturing the online journey of a specific customer with a complaint is usually pure luck.

They are also unable to cover all the digital channels that a company uses, including mobile apps, store kiosks, airline self-service check-in counters, and more. To obtain better insight, companies have also integrated feedback loops into their apps, but are not always successful with them. And on top of everything living in a separate tool, how to work together with the IT team that must solve the issue is usually left open!

Isn’t there an easier way to gain insight into customer journeys, no matter which digital channel a user comes through? From my experience, in order to get customer complaint resolution “done right” in today’s multi-channel environment, you must have the following:

  1. The ability to quickly find and see user journeys
  2. A quick way to see whether technical issues occurred during a session
  3. The user’s navigation path, the errors occurring along it, and the resulting performance degradation
  4. Technical details available to developers and operators so that findings are actionable

Here is what I found to be most useful: Real User Monitoring across the digital channels, with technical depth as well as a non-technical view of individual users, so that support and marketing audiences can understand why users are complaining. Luckily, I get to work with great people, and together we solved the issue! The screenshots below give you concise insight into what is possible today with Real User Monitoring when working to resolve customer complaints.

Step 1: Identify the user who is in need!

Identifying users via user tags, location, app usage, or other indicators needs to be simple and easy for everyone in the company. Having the result available quickly, particularly if the complaining user is still interacting with your application, can be crucial in determining whether you win back the customer or lose them, possibly forever.

Finding the user with a bad digital experience in the multi-channel world of today

Step 2: Learn about his journeys across your channels!

It is critical not only to identify the user but also to see each session they had, across different applications, different devices, and different locations. Session length and the number of interactions occurring during that time are key to developing a clear understanding of the user’s frustration and pinpointing the session behind the complaint.

Unique visitor with multiple sessions across the different channels

Step 3: See where they failed on their way to conversion!

As seen in the example below, not being able to use the date picker when trying to book a journey is quite a problem. Entering the date manually without the date picker (which does the date formatting for the user) is a big challenge when you consider all the date formats used around the world. The example shows how a simple JavaScript error can stop your customer from moving forward on the conversion path and lead to a complaint.

Customer complaint session with JavaScript error on the way to conversion

Step 4: Collaborate with IT using context information to fix the issue!

We frequently write about technical issues bringing down applications and channels. The example below shows how a third-party JavaScript file caused a big impact, not only on the single complaining user but on the entire site, causing more than 100 errors per minute!

JavaScript errors impacting application for Firefox users.
JavaScript error with stack trace.
JavaScript error impacting overall application health.

Conclusion

Today, people are using more channels than ever to communicate with you, but are you ready to react appropriately when something goes wrong? The digital changes of the past have not stopped the evolution of monitoring solutions; for better or worse, they have pushed them to become more capable. Not too long ago IT started with Application Performance Management (APM), but today it is not only about actionable insight for IT; it is about applying user-centric data to help entire companies grow the business effectively.

Welcome to the World of Digital Performance Management! Want to get started? Check this out!

The post Customer complaint resolution in today’s digital multi-channel world! appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies
