Feed aggregator

Record and Replay

Ranorex - Thu, 03/23/2017 - 17:23

The post Record and Replay appeared first on Ranorex Blog.

Categories: Companies

Ranorex Studio – Overview

Ranorex - Thu, 03/23/2017 - 17:16
Categories: Companies

Next generation WPF Plug-In

Ranorex - Thu, 03/23/2017 - 17:00

The post Next generation WPF Plug-In appeared first on Ranorex Blog.

Categories: Companies

Using Multi-branch Pipelines in the Apache Maven Project

This is a post about how using Jenkins and Pipeline has enabled the Apache Maven project to work faster and better.

Most Java developers should have at least some awareness of the Apache Maven project. Maven is used to build a lot of Java projects. In fact the Jenkins project and most Jenkins plugins are currently built using Maven.

After the release of Maven 3.3.9 in 2015, at least from the outside, the project might have appeared to be stalled. In reality, the project was trying to resolve a key issue with one of its core components: Eclipse Aether. The Eclipse Foundation had decided that the Aether project was no longer active and had started termination procedures.

Behind the scenes, the Maven Project Management Committee was negotiating with the Eclipse Foundation and getting all the IP clearance required from committers in order to move the project to Maven. Finally, in the second half of 2016, the code landed as Maven Resolver.

But code does not stay still.

There had been other changes made to Maven since 3.3.9 and the integration tests had not been updated in accordance with the project conventions.

The original goal had been to get a release of Maven itself with Resolver and no other major changes in order to provide a baseline. This goal was no longer possible.

In January 2017, the tough decision was taken.

Reset everything back to 3.3.9 and merge in each feature cleanly, one at a time, ideally with a full clean test run on the main supported platforms: Linux and Windows, Java 7 and 8.

In a corporate environment, you could probably spend money to work your way out of trying to reconstruct a subset of 14 months of development history. The Apache Foundation is built on volunteers. The Maven project committers are all volunteers working on the project in their spare time.

What was needed was a way to let those volunteers work in parallel preparing the various feature branches while ensuring that they get feedback from the CI server so that there is very good confidence of a clean test run before the feature branch is merged to master.

Enter Jenkins Pipeline Multibranch and the Jenkinsfile.

A Jenkinsfile was set up that does the following:

  1. Determines the current revision of the integration tests for the corresponding branch of the integration tests repository (falling back to the master branch if there is no corresponding branch)
  2. Checks out Maven itself and builds it with the baseline Java version (Java 7) and records the unit test results
  3. In parallel on Windows and Linux build agents, with both Java 7 and Java 8, checks out the single revision of the integration tests identified in step 1 and runs those tests against the Maven distribution built in step 2, recording all the results at the end (a rough sketch of such a Jenkinsfile follows).
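
For illustration, a scripted-Pipeline Jenkinsfile covering those three steps might look roughly like the sketch below. The node labels, JDK and Maven tool names, repository URL, stash pattern and Maven goals are all assumptions made for the example, not the Maven project's real configuration.

    // Illustrative sketch only: node labels, tool names, repository URL, stash pattern
    // and Maven goals are assumptions, not the Maven project's actual Jenkinsfile.
    def itBranch = 'master'

    node('linux') {
        stage('Determine integration-test branch') {
            // Step 1: fall back to master when the integration-test repository
            // has no branch matching this core branch
            def status = sh(returnStatus: true,
                    script: "git ls-remote --exit-code --heads https://github.com/apache/maven-integration-testing.git ${env.BRANCH_NAME}")
            if (status == 0) {
                itBranch = env.BRANCH_NAME
            }
        }
        stage('Build Maven core on baseline Java 7') {
            // Step 2: build the distribution and record the unit test results
            checkout scm
            withEnv(["JAVA_HOME=${tool 'jdk-7'}", "PATH+MAVEN=${tool 'maven-3'}/bin"]) {
                sh 'mvn -B clean verify'
            }
            junit '**/target/surefire-reports/*.xml'
            stash name: 'maven-dist', includes: 'apache-maven/target/*.zip'
        }
    }

    // Step 3: run the integration tests against that distribution on every
    // OS / JDK combination in parallel
    def runs = [:]
    for (os in ['linux', 'windows']) {
        for (jdk in ['jdk-7', 'jdk-8']) {
            def nodeLabel = os   // capture the loop values for the closure
            def jdkTool = jdk
            runs["${nodeLabel} ${jdkTool}"] = {
                node(nodeLabel) {
                    checkout([$class: 'GitSCM',
                              branches: [[name: itBranch]],
                              userRemoteConfigs: [[url: 'https://github.com/apache/maven-integration-testing.git']]])
                    unstash 'maven-dist'
                    withEnv(["JAVA_HOME=${tool jdkTool}", "PATH+MAVEN=${tool 'maven-3'}/bin"]) {
                        if (isUnix()) {
                            sh 'mvn -B clean test -Prun-its'
                        } else {
                            bat 'mvn -B clean test -Prun-its'
                        }
                    }
                    junit '**/target/*-reports/*.xml'
                }
            }
        }
    }
    parallel runs

The stash/unstash pair is just one way of reusing the distribution built in step 2 across the parallel integration-test runs; the project's actual Jenkinsfile may well handle this differently.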

There are more enhancements planned for the Jenkinsfile (such as moving to the declarative syntax), but with just this we were able to get all the agreed scope merged and cut two release candidates.

The workflow is something like this:

  1. Developer starts working on a change in a local branch
  2. The developer recognizes that some new integration tests are required, so they create a branch with the same name in the integration tests repository.
  3. When the developer is ready to get a full test run, they push the integration tests branch (integration tests have to be pushed first at present) and then push the core branch.
  4. The Apache GitPubSub event notification system sends notification of the commit to all active subscribers.
  5. The Apache Jenkins server is an active subscriber to GitPubSub and routes the push details into the SCM API plugin’s event system.
  6. The Pipeline Multibranch plugin creates a branch project for the new branch and triggers a build
  7. Typically the build is started within 5 seconds of the developer pushing the commit.
  8. As the integration tests run in parallel, the developer can get the build result as soon as possible.
  9. Once the branch is built successfully and merged, the developer deletes the branch.
  10. GitPubSub sends the branch deletion event and Jenkins marks the branch job as disabled (we keep the last 3 deleted branches in case anyone has concerns about the build result)

The general consensus among committers is that the multi-branch project is a major improvement on what we had before. 

Notes
  • While GitPubSub itself is probably limited in scope to being used at the Apache Software Foundation, the subscriber code that routes events from source control into the SCM API plugin’s event system is relatively small and straightforward, and would be easy to adapt if you have a custom Git hosting service, i.e. if you were in the 4% on this totally unscientific poll I ran on Twitter:

    If you use Git at work, please answer this poll. The git server we use is:


    - Stephen Connolly (@connolly_s) March 17, 2017

  • There is currently an issue whereby changes to the integration test repository do not trigger a build. This has not proved to be a critical issue so far, as developers typically change both repositories if they are changing the integration tests.

Blog Categories: Jenkins
Categories: Companies

Enhanced Test Suite Structure

Ranorex - Thu, 03/23/2017 - 16:27

The post Enhanced Test Suite Structure appeared first on Ranorex Blog.

Categories: Companies

Selenium WebDriver Integration

Ranorex - Thu, 03/23/2017 - 16:07

The post Selenium WebDriver Integration appeared first on Ranorex Blog.

Categories: Companies

Do You View Your AppSec Tools as an Inhibitor to Innovation or a Safety Measure?

Sonatype Blog - Thu, 03/23/2017 - 15:00
DevOps is all about making better software faster. It also requires building it more safely while compressing the time from ideation to realisation. I hear IT organisations tell me time and time again of their ambitions to be the innovation powerhouse for their business - so it’s great news...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

DevSecOps: Eat Carrots, Not Cupcakes

Sonatype Blog - Thu, 03/23/2017 - 15:00
You Are What You Eat.   When it comes to food, we all know what’s considered “good” and what’s “bad”.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Can You Afford Me?

Hiccupps - James Thomas - Wed, 03/22/2017 - 23:56

I'm reading The Design of Everyday Things by Donald Norman on the recommendation of the Dev manager, and borrowed from our UX specialist. (I have great team mates.)

There's much to like in this book, including
  • a taxonomy of error types: at the top level this distinguishes slips from mistakes. Slips are unconscious and generally due to dedicating insufficient attention to a task that is well-known and practised. Mistakes are conscious and reflect factors such as bad decision-making, bias, or disregard of evidence.
  • discussion of affordances: an affordance is the possibility of an action that something provides, and that is perceived by the user of that thing. An affordance of a chair is that you can stand on it. The chair affords (in some sense is for) supporting, and standing on it utilises that support.
  • focus on mappings: the idea that the layout and appearance of the functional elements significantly impacts on how a user relates them to their outcome. For example, light switch panels that mimic the layout of lights in a room are easier to use.
  • consideration of the various actors: the role of the designer is to satisfy their client; the client may or may not be the user; the designer may view themselves as a proxy user; the designer is almost never a proxy user; the users are users; there is rarely a single user (type) to be considered.

But the two things I've found particularly striking are the parallels with Harry Collins' thoughts in a couple of areas:
  • tacit and explicit knowledge: or knowledge in the head and knowledge in the world, as Norman has it. When you are new to some task, some object, you have only knowledge that is available in the world about it: those things that you can see or otherwise sense. It is on the designer to consider how the affordances suggested by an object affect its usability. This might mean - for example - following convention, e.g. the push side of doors shouldn't have handles and the plate to push on should be at a point where pushing is efficient.
  • action hierarchies: actions can be viewed at various granularities. In Norman's model they have seven stages and he gives an example of several academics trying to thread an unfamiliar projector. In The Shape of Actions, Collins talks about an experiment attempting to operate a laboratory air pump. Both authors deconstruct the high-level task (operate the apparatus) into sub-tasks, some of which are familiar to some extent - perhaps by analogy, or by theoretical knowledge, or by having seen someone else doing it - and some of which are completely unfamiliar and require explicit experience of that specific task on that specific object.

I love finding connections like this, even if I don't know quite what they can afford me, just yet.

Categories: Blogs

Happy 10th Birthday Google Testing Blog!

Google Testing Blog - Wed, 03/22/2017 - 23:22
by Anthony Vallone

Ten years ago today, the first Google Testing Blog article was posted (official announcement 2 days later). Over the years, Google engineers have used this blog to help advance the test engineering discipline. We have shared information about our testing technologies, strategies, and theories; discussed what code quality really means; described how our teams are organized for optimal productivity; announced new tooling; and invited readers to speak at and attend the annual Google Test Automation Conference.

Google Testing Blog banner in 2007

The blog has enjoyed excellent readership. There have been over 10 million page views of the blog since it was created, and there are currently about 100 to 200 thousand views per month.

This blog is made possible by many Google engineers who have volunteered time to author and review content on a regular basis in the interest of sharing. Thank you to all the contributors and our readers!

Please leave a comment if you have a story to share about how this blog has helped you.

Categories: Blogs

“Workflow” Means Different Things to Different People

Wikipedia defines the term workflow as “an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes” - processes that make things or just generally get work done. Manufacturers can thank workflows for revolutionizing the production of everything from cars to chocolate bars. Management wonks have built careers on applying workflow improvement theories like Lean and TQM to their business processes.

What does workflow mean to the people who create software? Years ago, probably not much. While this is a field where there’s plenty of complicated work to move along a conceptual assembly line, the actual process of building software historically has included so many zigs and zags that the prototypical pathway from A to Z was less of a straight line and more of a sideways fever chart.

Today, workflow, as a concept, is gaining traction in software circles, with the universal push to increase businesses’ speed, agility and focus on the customer. It’s emerging as a key component in an advanced discipline called continuous delivery that enables organizations to conduct frequent, small updates to apps so companies can respond to changing business needs.

So, how does workflow actually work in continuous delivery environments? How do companies make it happen? What kinds of pains have they experienced that have pushed them to adopt workflow techniques? And what kinds of benefits are they getting?

To answer these questions, it makes sense to look at how software moves through a continuous delivery pipeline. It goes through a series of stages to ensure that it’s being built, tested and deployed properly. While organizations set up their pipelines according to their own individual needs, a typical pipeline might involve a string of performance tests, Selenium tests for multiple browsers, Sonar analysis, user acceptance tests and deployments to staging and production. To tie the process together, an organization would probably use a set of orchestration tools such as the ones available in Jenkins.

Assessing your processes

Some software processes are simpler than others. If the series of steps in a pipeline is simple and predictable enough, it can be relatively easy to define a pipeline that repeats flawlessly – like a factory running at full capacity.

But this is rare, especially in large organizations. Most software delivery environments are much more complicated, requiring steps that need to be defined, executed, revised, run in parallel, shelved, restarted, saved, fixed, tested, retested and reworked countless times.

Continuous delivery itself smooths out these uneven processes to a great extent, but it doesn’t eliminate complexity all by itself. Even in the most well-defined pipelines, steps are built in to sometimes stop, veer left or double back over some of the same ground. Things can change – abruptly, sometimes painfully – and pipelines need to account for that.

The more complicated a pipeline gets, the more time and cost get piled onto a job. The solution: automate the pipeline. Create a workflow that moves the build from stage to stage, automatically, based on the successful completion of a process – accounting for any and all tricky hand-offs embedded within the pipeline design.

Again, for simple pipelines, this may not be a hard task. But, for complicated pipelines, there are a lot of issues to plan for. Here are a few (a short sketch after the list shows how a couple of them map onto Pipeline steps):

  • Multiple stages – In large organizations, you may have a long list of stages to accommodate, with some of them occurring in different locations, involving different teams.
  • Forks and loops – Pipelines aren’t always linear. Sometimes, you’ll want to build in a re-test or a re-work, assuming some flaws will creep in at a certain stage.
  • Outages – They happen. If you have a long pipeline, you want to have a workflow engine ensure that jobs get saved in the event of an outage.
  • Human interaction – For some steps, you want a human to check the build. Workflows should accommodate the planned – and unplanned – intervention of human hands.
  • Errors – They also happen. When errors crop up, you want an automated process to let you restart where you left off.
  • Reusable builds – In the case of transient errors, the automation engine should allow builds to be used and re-used to ensure that processes move forward.
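
To make a couple of these concerns concrete, here is a minimal, hypothetical fragment using Jenkins Pipeline (discussed further below); the stage names and the deploy.sh script are placeholders rather than part of any particular setup:

    node {
        stage('Deploy to staging') {
            // Errors / reusable builds: retry a transient failure without
            // restarting the whole pipeline from scratch
            retry(3) {
                sh './deploy.sh staging'   // placeholder script
            }
        }
        stage('Manual check') {
            // Human interaction: pause the pipeline until a person approves
            input message: 'Does the staging deployment look good?'
        }
    }

In a real pipeline the input step would normally sit outside of a node block so that waiting for a human does not tie up an executor.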

In the past, software teams have automated parts of the pipeline process using a variety of tools and plugins. They have combined the resources in different ways, sometimes varying from job to job. Pipelines would get defined, and builds would move from stage to stage in a chain of jobs — sometimes automatically, sometimes with human guidance, with varying degrees of success.

As the pipeline automation concept has advanced, new tools are emerging that program in many of the variables that have thrown wrenches into more complex pipelines over the years. Some of the tools are delivered by vendors with big stakes in the continuous delivery process – known names like Chef, Puppet, Serena and Pivotal. Other popular continuous delivery tools have their roots in open source, such as Jenkins.

While we are mentioning Jenkins, the community recently introduced functionality specifically to help automate workflows. Jenkins Pipeline (formerly known as Workflow) gives a software team the ability to automate the whole application lifecycle – simple and complex workflows, automation processes and manual steps. Teams can now orchestrate the entire software delivery process with Jenkins, automatically moving code from stage to stage and measuring the performance of an activity at any stage of the process.
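
As a purely hypothetical sketch (the stage names, shell scripts and Maven goals below are placeholder assumptions, not a recommended layout), a pipeline like the one described earlier - performance tests, Selenium tests across browsers, Sonar analysis, user acceptance and staged deployments - could be outlined in a declarative Jenkinsfile along these lines:

    // Placeholder outline; adapt the agents, stages and commands to your own pipeline.
    pipeline {
        agent any
        stages {
            stage('Build')             { steps { sh 'mvn -B clean package' } }
            stage('Performance tests') { steps { sh './run-performance-tests.sh' } }
            stage('Selenium tests') {
                parallel {
                    stage('Chrome')  { steps { sh './run-selenium-tests.sh chrome' } }
                    stage('Firefox') { steps { sh './run-selenium-tests.sh firefox' } }
                }
            }
            stage('Sonar analysis')       { steps { sh 'mvn -B sonar:sonar' } }
            stage('User acceptance')      { steps { sh './run-uat.sh' } }
            stage('Deploy to staging')    { steps { sh './deploy.sh staging' } }
            stage('Deploy to production') { steps { sh './deploy.sh production' } }
        }
        post {
            // Collect whatever test reports the stages produced
            always { junit testResults: '**/target/*-reports/*.xml', allowEmptyResults: true }
        }
    }

Each stage only starts when the previous one has completed successfully, which is exactly the "move the build from stage to stage, automatically, based on the successful completion of a process" behaviour described above.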

Conclusion
Over the last 10 years, continuous integration has brought tangible improvements to the software delivery lifecycle – improvements that enabled the adoption of agile delivery practices. The industry continues to evolve. Continuous delivery has given teams the ability to extend beyond integration to a fully formed, tightly wound delivery process drawing on tools and technologies that work together in concert.

Pipeline brings continuous delivery forward another step, helping teams link together complex pipelines and automate tasks every step of the way. For those who care about software, workflow means business.

This blog entry was originally posted on Network World.

Blog Categories: Jenkins
Categories: Companies

Dynatrace partners, HCL Technologies & CSC stand tall in new Gartner report

Here at Dynatrace, we were pretty excited (but not completely surprised) to see a number of our partners making the A-list in Gartner’s newly launched Magic Quadrant report, which identifies the top 20 Public Cloud Infrastructure Managed Service Providers Worldwide.

Accenture, HCL, CSC, Capgemini, Melbourne IT, Bulletproof, Rackspace, Wipro and Infosys are among our partners that made the list. I specifically called out HCL and CSC because it just so happens that we’ve been active in our joint marketing efforts recently.

HCL Technologies CTO – Kalyan Kumar

I recently caught up with Kalyan (@kklive) at their UK office, to get the lowdown on why they chose to partner with Dynatrace.

Check out the video here.

For me, the most exciting takeaway from Kalyan’s interview was hearing just how forward-thinking HCL’s strategy is – they’re adopting AI, robotics and machine learning technologies every day to drive improved services and build better products for their customers.

And I’m proud to say that Dynatrace plays a critical role in HCL’s ability to provide a full stack application monitoring solution through our integration into their service offering DRYiCE. If you listen in the video you’ll hear Kalyan reference one of the world’s leading brands – Manchester United.

Speaking of the future…

Recently at our global Perform event in Las Vegas, I had the pleasure of interviewing both Kalyan Kumar from HCL and JP Morgenthal from CSC, about the big trends impacting digital delivery for businesses tomorrow.

Have a quick look and listen to our on-stage chat here.

For me, the discussion brought home some great points that underscore our unified monitoring mandate here at Dynatrace – to see every user, across every application, everywhere:

Our focus should be on outcomes, not data

“It’s not about the nuts and bolts. Too much data hits operations. Leading them to question, what does it mean? What do you do with it? You need to focus on outcomes. Show me where the issue is. What do I need to focus on?” – Kalyan Kumar

Visibility is of the utmost importance

“Managing distributed apps is really complex and there are very few tools out there that really focus on understanding all of the connection points and the flow of communications and dependencies. That’s critical to being able to understand how to troubleshoot a problem when something occurs and to understand the health of a distributed app.” – JP Morgenthal

Cultural change is here

“We’re departing from infrastructure operations monitoring. As the cloud comes in, and as we get commoditized hardware, what we’re seeing … there is a gradual shift towards an application centric universe and it’s really beginning to change things and the way people think.” – JP Morgenthal

AI is the answer to the complexity challenge

“Application complexity leads to a situation where human IT operations is no longer possible. Bring in artificial, augmented intelligence… let the system handle the complexity and provide the insights.” – Kalyan Kumar

 And the winner is…

With the rate of innovation at HCL exceeding expectation, it’s no wonder this year we awarded Kalyan and his team at HCL the Dynatrace R&D Mover and Shaker award for being the most innovative development partner of 2017.

Big thanks to HCL and CSC for partnering with Dynatrace – we applaud your tireless efforts to innovate and succeed for your customers.

The post Dynatrace partners, HCL Technologies & CSC stand tall in new Gartner report appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

It’s a Small (Testing) World After All

PractiTest - Wed, 03/22/2017 - 16:22

Hello Testers of the Free World

Categories: Companies

DevSecOps: A More Deterministic Approach

Sonatype Blog - Wed, 03/22/2017 - 15:00
Is security an inhibitor to DevOps agility? To answer this question, we need to take a quick look at the differences between DevOps, QA and Security when it comes to automation.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

DevSecOps: In Time for Security

Sonatype Blog - Wed, 03/22/2017 - 15:00
Changing Mindsets. Historically, developers have prioritized functional requirements over security when building software. While secure coding practices are important, they have often fallen into the category of secondary or tertiary requirements for teams building applications against a deadline.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

New Features for HPE Jenkins Plugin v5.1 Including Enhanced Integration with UFT 14.00 & ALM Octane

HP LoadRunner and Performance Center Blog - Wed, 03/22/2017 - 11:14

Introducing the new features of HPE Jenkins plugin version 5.1, including enhanced integration with UFT 14.00 and ALM Octane. Learn more…

Categories: Companies

Automated naming of mobile user-actions & grouping of web requests

Keeping up with new mobile-app features has traditionally been a real challenge when relying on manual instrumentation. Automatic instrumentation has proven to be extremely efficient at quickly instrumenting mobile apps—without requiring manual configuration of source code. With auto-instrumentation, you’re guaranteed to instrument all of your mobile-app’s new features as they come online. One downside of automatic instrumentation is that, in some cases, automatic detection and naming of user actions and grouping of web requests have been less than optimal. This issue is addressed in the latest release of Dynatrace. Extraction rules that automatically group and aggregate web-request metrics can also now be defined using regular expressions.

Set up user-action naming rules

The mobile application page below shows a typical user action captured by Dynatrace OneAgent. The highlighted AppStart user action represents the startup of the app. The name of the app (easyTravel) is included in parentheses.

To create naming rules for mobile user actions
  1. Select Applications from the navigation menu.
  2. Select your mobile application.
  3. Click the Browse (…) button.
  4. Click Edit.
  5. Select User actions.
  6. Click the Add naming rule button.

Three types of naming rules are available to clean or extract specific information from your auto-detected mobile user actions and web requests:
Cleanup rules, naming rules, and extraction rules.

User-action naming example

In this first example, we’ll use a naming rule to rename the auto-generated AppStart (easyTravel) user action to Startup.

The naming rule shown below states that all user action names beginning with the string AppStart are to be renamed and grouped under the name Startup. When you click the Preview button, the actual incoming stream of user actions is retrieved and the effects of the new rule are displayed in a preview further down the page.

Extraction rule example

Another useful approach to automated user-action naming involves setting up extraction rules via regular expressions. Extraction rules are used to replace variable web-request URL elements (for example, session data, product IDs, or GUIDs) with fixed strings. With variable elements replaced with fixed strings, the resulting web requests can be grouped correctly. In the process, all web request response-time and error-rate metrics can also be aggregated correctly.

To group web requests that have variable elements, and therefore to correctly aggregate all their response time and error rate metrics, it’s necessary to define specific extraction rules, as below. An extraction rule can be defined using a regular expression that selects and replaces the variable part of a URL with a fixed string. In the example below, the variable GUID values following the /feeds/ subpath will be replaced with the fixed path /feeds/*/. The asterisk symbol (*) is a wildcard that represents all available GUIDs.

As a result of this rule, all calls to the API endpoint /feeds/ will be grouped into a single group.

Variable API endpoints:

/feeds/42424224343423423432/

/feeds/33453345345353453453/

/feeds/32342423424234234243/

Resulting fixed API endpoint:

/feeds/*/

You can see the results of such a rule in the Preview this rule section.
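
Purely to illustrate the substitution logic (Dynatrace applies the configured rule itself; the digits-only ID pattern below is an assumption based on the sample values above), the same grouping could be written in a few lines of Groovy:

    // Illustration only: the ID pattern is an assumption based on the sample endpoints.
    def groupFeedRequests(String path) {
        // Replace the variable ID segment after /feeds/ with the fixed wildcard path
        return path.replaceAll('^/feeds/[0-9]+/$', '/feeds/*/')
    }

    assert groupFeedRequests('/feeds/42424224343423423432/') == '/feeds/*/'
    assert groupFeedRequests('/feeds/33453345345353453453/') == '/feeds/*/'
    assert groupFeedRequests('/feeds/32342423424234234243/') == '/feeds/*/'

With the extraction rule in place, the response-time and error-rate metrics for all three calls above are aggregated under the single /feeds/*/ group.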

The post Automated naming of mobile user-actions & grouping of web requests appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Seamless ServiceNow CMDB sync & problem detection

Managing highly dynamic service and application infrastructures with a CMDB database can be cumbersome and error prone. Modern microservices infrastructures commonly contain thousands of individual business-critical services and related dependencies. Dynatrace automatically discovers and monitors all such services and applications in real time, detects deviations from normal behavior (availability, performance, and/or errors), and synchronizes this data with your ServiceNow instance.

Dynatrace anomaly detection

The first step toward gaining full technology insight into each individual service request of each of your customers is to install Dynatrace OneAgent on your hosts. Once installed, OneAgent automatically discovers all software services and applications running on your host and detects all communication relationships between your services in real time.
Dynatrace immediately calculates a multi-dimensional baseline with up to 10K cells for each service and application in your environment. This baseline allows Dynatrace to automatically detect degradations from normal behavior and inform you about complex problems and any impact on customer experience. Problems include detail related to all affected services and applications, their relationships, as well as root cause information that is correlated with each individual service.

Dynatrace ServiceNow CMDB synchronization

With the new Dynatrace ServiceNow CMDB synchronization application, all auto-discovered hosts, applications, and services—along with their relationships—can be synchronized with your ServiceNow ITIL CMDB database.
The main benefits of seamless integration between Dynatrace and your ServiceNow instance are:

  • Automatic synchronization of auto-detected services and applications, along with their used_by relationships, in real time.
  • Automatic synchronization of monitored hosts and virtual machines, along with their attributes.
  • Automatic push of Dynatrace-detected problems in your monitored infrastructure to your ServiceNow incidents list.
  • Automatic linking of detected problems with all affected CMDB CIs.

The image below compares how the synchronization of a Dynatrace-discovered application and its relationships to auto-discovered business-critical services looks in Dynatrace Smartscape and how it looks when synchronized within the ServiceNow dependency map.

Another benefit for ServiceNow users is detailed descriptions of all application dependencies within each architectural layer, as shown below.

When Dynatrace discovers an availability, performance, or error-related problem within your environment, the problem is pushed to your ServiceNow instance and automatically mapped with previously synchronized CMDB CI elements, as shown below.

To fully benefit from Dynatrace’s state-of-the-art, AI-powered technology monitoring with ServiceNow, head over to the ServiceNow app store and install the Dynatrace Monitoring and CMDB Integration application.
For more detail on synchronizing Dynatrace monitoring with ServiceNow, see How do I set up ServiceNow problem notifications?

The post Seamless ServiceNow CMDB sync & problem detection appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Improved design & functionality of Service pages

Over the past months, we’ve added numerous enhancements to our Service overview pages. To maximize the value of these enhancements, it became necessary to build an entirely new page design. We’re now proud to present all the new views and features that the enhanced Service overview page design has to offer.

Service overview page

As you can see from the image below, the service overview page has received a major design overhaul. In addition to a redesigned service infographic and new features, the chart area has been reduced to several small trendline charts. In this way, more monitoring data is now visible at a glance, while the details are still accessible by clicking on the trend charts.

To view the new service overview page
  1. Select Transactions & services from the navigation menu.
  2. Select a service from the list.

Improved service infographic

The new service infographic (see below) follows the same logic and includes the same high-level detail that was included in the previous design. Just click an infographic tile to view detail regarding caller and callee services, real user experience, and performance metrics for related services and databases.

The new infographic also provides information about the processes and hosts that a service runs on. Availability status and detail regarding recent calls is included for both Calling services and Processes and hosts.

The service overview infographic now also explicitly states when a service receives traffic over the network that is not explicitly monitored by Dynatrace. The Network clients box (see below) appears when such traffic is detected.

Load balancer and proxy chains

Another major enhancement on the newly designed service overview page is the inclusion of proxy and load balancer data. Just click the calling Applications tile or the Network clients tile to see information about related proxies and load balancers (see below).

Dynatrace detects proxies and load balancers that exist between services—for example, when a web server directs traffic to your application server, but a load balancer operates in front of the web server (as is the case with Amazon Elastic Load Balancer). Dynatrace detects and monitors each of these components and even resolves the processes that perform the load balancing!

This is useful not only for understanding the topology and dependencies in your environment. In an upcoming release, this monitoring functionality will also enable Dynatrace to understand when availability or performance issues in your load balancer impact your environment. Stay tuned for this enhancement.

Trend charts

The new chart section is smaller. It includes trendline charts that turn red when a problem is detected. To view further details, click a trendline chart, or click the View dynamic requests button to access the new service Details page.

Service details page

The service Details page has also been vastly improved and provides much more information (see below). Each of the metric tabs now provides much more detail.

Most significantly, we’ve increased the chart resolution across the entire product. This gives you deeper visibility into small spikes and performance variations. For each request, you can now also view the Slowest 5% (95th percentile), a Failure rate chart, an HTTP errors chart, and a CPU consumption chart. Clicking within a chart displays a vertical line; the numbers in the tables below change accordingly and always reflect the selected point in time.

Improved service-instance support

If you run multiple instances of the same service across separate hosts, you’ll see a funnel view in the Server response time chart (see below). This funnel represents the worst and best of your instances at each point in time. If you see spikes in the funnel, but not in the overall response time, only a minority of the instances (or even a single instance) has experienced a response-time spike. When this is the case, you should take a deeper look at the breakdown of the specific service instance that’s experiencing the issue.

Notice how the top chart in the example below shows a spike in the funnel at 16:10, but not in the overall median response time? When you click the chart at that position and look at the instance breakdown, you see that one of the instances is much slower than the others. Click that instance to view more detail about it.

Client-side response time

Interestingly, many services reveal a totally new perspective in the Server response time chart when viewed from the client side. The example below shows response time and failure rate as perceived by the calling process on the client side.

Much more…

You also have access to all the standard analysis features that Dynatrace provides, in the context of the selected timeframe and metric. Also, notice that you can view all requests that were processed during a selected timeframe in the Top web request list. There is now a separate list here to make your key requests easier to find.

The post Improved design & functionality of Service pages appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Knowledge Sharing

SpiraTest is the most powerful and affordable test management solution on the market today