Feed aggregator

Dynatrace Managed feature update for version 112

Following the release of version 112, here are the latest enhancements that we’ve introduced to Dynatrace Managed.

Improved notifications for infrastructure issues

To ensure the continuous, reliable operation of Dynatrace Managed Server in your environment, Dynatrace Managed now automatically performs periodic checks to confirm the health of your infrastructure. When a problem with the response time or availability of your hosts or other infrastructure components is detected, a problem event is automatically generated and a notification is sent out. Infrastructure-event notifications are now generated for the following problem types:

  • Insufficient CPU or memory
  • Lost connection to Dynatrace Mission Control
  • Unsuccessful or incomplete upgrade
  • Disconnected or non-operational cluster node

SAML 2.0 improvements

We’ve added several enhancements that improve Dynatrace Managed compatibility with various identity providers (including SAML 2.0). Improved error handling and more detailed error messages for identity-provider-related issues have also been introduced.

The post Dynatrace Managed feature update for version 112 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Dynatrace makes life easy for OpenStack admins (EAP starting)

We’re thrilled to announce the Early Access Program for Dynatrace OpenStack integration! This blog post is the first in a two-part series that explores how Dynatrace supports the monitoring of OpenStack environments.

OpenStack has become quite popular in recent years. Organizations are increasingly opting to build public and private OpenStack clouds for their employees and customers. One reason for the rapid adoption of OpenStack is its vibrant user community, which has fueled OpenStack’s growth and spirit of innovation. By joining the OpenStack community you can contribute your ideas related to requirements definition as well as development. This gives you the power to actively shape the features of the next OpenStack release.

OpenStack is indeed powerful, but it’s also complex. As an OpenStack admin, you know perfectly well that there’s no such thing as a flawless OpenStack cloud deployment. Even more challenging is maintaining smooth operation once your OpenStack cloud is used in a production environment.

Troubleshooting performance issues

Regardless of whether you’re working with a public or a private cloud, as an OpenStack administrator you need to be able to contend with a range of challenges. The components that are most likely to present you with challenges are:

  • OpenStack services
  • Supporting technologies like HAProxy, RabbitMQ, and MySQL
  • Network

OpenStack troubleshooting can be complex and time-consuming. This is due to the elusive nature of many OpenStack issues—problems with one OpenStack service can manifest themselves as performance issues within other services. For example, when a user reports an issue with launching a new VM or attaching a Cinder volume, your first thought might be to look into the log files of your Nova and Cinder services. After combing through hundreds of megabytes of log data, you might learn, however, that the root cause of the issue resides within a different OpenStack service or supporting technology (for example, HAProxy, RabbitMQ, or MySQL).

Dynatrace has good news for you OpenStack admins out there. With Dynatrace OpenStack monitoring, you no longer need to spend hours troubleshooting elusive issues within your OpenStack cloud!

Dynatrace provides complete OpenStack monitoring

In contrast to conventional monitoring tools, which typically cover only a single monitoring domain, Dynatrace provides a complete monitoring solution. Dynatrace monitoring covers:

  • OpenStack services
  • Supporting technologies
  • Compute nodes and VMs
  • Log analysis

For each of these components, Dynatrace provides automated root-cause analysis to help you identify the sources of problems and resolve issues in a timely manner.

Analyze OpenStack performance

OpenStack pages provide a holistic overview of your entire OpenStack account (see example images below).

(1) See if key components like compute and controller nodes are healthy.

(2) Gain insight into environment dynamics by tracking how the number of running virtual machines evolves over time. An increasing trend may indicate the need for capacity adjustments. Crucial details regarding the number of VMs that have been spawned and their average launch times are also included. If you notice launch times going up, you may want to investigate the reasons why.

(3) The Events section provides details such as which compute node each VM was launched or stopped on.

(4) The Compute section shows you how well your compute nodes are performing, which virtual machines are currently running on those nodes, and how the VMs contribute to overall resource usage.

You can slice and dice your OpenStack monitoring data with filters—compute nodes and virtual machines can be filtered based on Region, Security group name, Compute node name, Availability zone, and more. Such filtering is particularly useful for tracking down elusive performance issues within large environments.

Smartscape analysis (see below) shows you how your VMs interact with one another and gives you an understanding of the vertical dependencies between your application components—virtual machines, processes, and services.

Performance analysis of OpenStack services

Let’s explore Dynatrace’s automated problem detection and root-cause analysis capabilities with a Keystone use case. In the example below, the Keystone service began to respond slowly to TCP requests due to memory saturation on one of the controller nodes. Dynatrace has automatically identified the underlying root cause of this issue and the impact of the problem.

Let’s drill down into the Keystone metrics to better understand what’s going on here. Click the Keystone process tile to analyze this process within the context of the detected performance problem.

Here on the Keystone process page we see that the response time of the Keystone service has increased significantly, from 200 ms to 2 s.

By clicking the View all log entries button, you can explore all of the log data that’s been generated by this process.

The Log viewer has uncovered numerous warnings within the Keystone.log file indicating that the authentication process has been failing.

Now let’s take a look at the controller node that caused the issue. As you can see below, memory was indeed exhausted; it reached almost 100% saturation.

Note further down in the Processes section that all OpenStack services running on the controller are listed. Click any of these individual processes to analyze their connections and understand their relationship to other processes.

Dynatrace reports an outage event when Keystone becomes completely unavailable (see below). Outages are a major concern because they prevent users from performing any operations (each API request requires a Keystone token).

Out-of-the-box, Dynatrace automatically monitors your OpenStack environment for a wide range of potential log-based problem patterns. For example, Dynatrace detects when an OpenStack service can’t connect to a database or fails to authenticate.

Monitoring supporting technologies

Another potential problem area that OpenStack admins need to keep an eye on is the technologies that are frequently deployed alongside OpenStack. These include load balancers (e.g., HAProxy), message brokers (e.g., RabbitMQ), and databases (e.g., MySQL).

To illustrate the challenges involved in monitoring the technologies that support OpenStack, here’s a problem we ran into within our own OpenStack environment. The RabbitMQ process in the example below was launched using the default file descriptor limit of 1024. Once this limit was exceeded, RabbitMQ stopped accepting new connections. This resulted in a Connectivity problem.

We wouldn’t have known about this problem if it weren’t for the RabbitMQ-specific counters that Dynatrace provides. All of this detail is included in the same view, so you don’t need to use multiple tools to get the full picture.

OpenStack dashboard tiles

Dynatrace provides two different OpenStack tiles that you can add to your home dashboard.

The Regions tile displays relevant information related to the health of compute nodes and virtual machines, as well as OpenStack services such as Keystone, Glance, Nova, and more.

The Project tile provides insights into resource usage, taking assigned quotas into consideration. This information enables you to think proactively about resource usage related to critical projects, providing you with early warning of any resource capacity issues that may present themselves.

To add an OpenStack tile to your home dashboard
  1. Click the Home dashboard button in the upper-left corner.
  2. Click the Browse (…) button in the upper-right corner.
  3. Click Add tile.
  4. Select the Infrastructure filter in the left-hand navigation menu.
  5. Select the All regions tile or the Project tile.

Stay tuned for part two of this blog post series, to be published shortly. Part two will cover full-stack monitoring of applications that run in OpenStack clouds.

The post Dynatrace makes life easy for OpenStack admins (EAP starting) appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Security Testing Toolbox

Testing TV - Wed, 02/15/2017 - 14:54
Kali, Veil, Metasploit, BeEF. All tools in an arsenal that exist to break through security barriers of software. This talk introduces the tools available and shows how they are used to get through your defense. It is more a massive demo than a talk and is an exploration of the tools and what they do. […]
Categories: Blogs

Say Hello to the Blue Ocean Pipeline Editor

Back in September 2016 we announced the availability of the Blue Ocean beta and the forthcoming Visual Pipeline Editor. We are happy to announce that you can try the Pipeline Editor preview release today. What is it? The Visual Pipeline Editor is the simplest way for anyone wanting to get started with creating Pipelines in Jenkins. It’s also a great way for advanced Jenkins users to start adopting pipeline. It allows developers to break up their pipeline into different stages and parallelize tasks that can occur at the same time - graphically. The rest is up to you. A pipeline you create visually will produce a Declarative...
Categories: Open Source

Declarative Pipeline: Notifications and Shared Libraries

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. Declare Your Pipelines! Declarative Pipeline 1.0 is here! This is the third post in a series showing some of the cool features of Declarative Pipeline. In the previous post, we converted a Scripted Pipeline to a Declarative Pipeline, adding descriptive stages and post sections. In one of those post blocks, we included a placeholder for sending notifications. In this blog post, we'll repeat what I did in "Sending Notifications in Pipeline", but this time in Declarative Pipeline. First we'll integrate calls to notification services Slack, HipChat, and Email into our Pipeline. Then we'll refactor those calls into a single Step in a...
Categories: Open Source

Refactoring Towards Resilience: Evaluating Coupling

Jimmy Bogard - Tue, 02/14/2017 - 23:25

Other posts in this series:

So far, we've been looking at our options on how to coordinate various services, using Hohpe as our guide:

  • Ignore
  • Retry
  • Undo
  • Coordinate

These options, valid as they are, make an assumption that we need to coordinate our actions at a single point in time. One thing we haven't looked at is breaking the coupling of our actions, which greatly widens our ability to deal with failures. The types of coupling I encounter in distributed systems include (but aren't limited to):

  • Behavioral
  • Temporal
  • Platform
  • Location
  • Process

In our code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Of the coupling types we see here, the biggest offender is Temporal coupling. As part of placing the order for the customer's cart, we also tie together several other actions at the same time. But do we really need to? Let's look at the three external services we interact with and see if we really need to have these actions happen immediately.

Stripe Temporal Coupling

First up is our call to Stripe. This is a bit of a difficult decision - when the customer places their order, are we expected to process their payment immediately?

This is a tough question, and one that really needs to be answered by the business. When I worked on the cart/checkout team of a Fortune 50 company, we never charged the customer immediately. In fact, we did very little validation beyond basic required fields. Why? Because if anything failed validation, it increased the chance that the customer would abandon the checkout process (we called this the fallout rate). For our team, it made far more sense to process payments offline, and if anything went wrong, we'd just call the customer.

We don't necessarily have to have a black-and-white choice here, either. We could try the payment, and if it fails, mark the order as needing manual processing:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    try {
        var payment = await stripeService.PostPaymentAsync(order);
    } catch (Exception e) {
        Logger.Exception(e, $"Payment failed for order {order.Id}");
        order.MarkAsPaymentFailed();
    }
    if (!order.PaymentFailed) {
        await sendGridService.SendPaymentSuccessEmailAsync(order);
    }
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

There may also be business reasons why we can't process payment immediately. With orders that ship physical goods, we don't charge the customer until we've procured the product and it's ready to ship. Otherwise we might have to deal with refunds if we can't procure the product.

There are also valid business reasons why we'd want to process payments immediately, especially if what you're purchasing is digital (like a software license) or if what you're purchasing is a finite resource, like movie tickets. It's still not a hard-and-fast rule; we can always build business rules around the boundaries (treat them as reservations, and confirm when payment is complete).

Regardless of which direction we go, it's imperative we involve the business in our discussions. We don't have to make things technical, but each option involves a tradeoff that directly affects the business. For our purposes, let's assume we want to process payments offline, and just record the information (naturally doing whatever we need to secure data at rest).
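
For illustration, here's a minimal sketch of what the controller could look like with payment taken offline. The MarkForOfflinePayment method and the PaymentToken field are hypothetical placeholders, not part of the original example:

public async Task<ActionResult> ProcessPayment(CartModel model) {
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);

    // Hypothetical: record what we'll need to charge the customer later,
    // instead of calling Stripe synchronously. Secure this data at rest.
    order.MarkForOfflinePayment(model.PaymentToken);
    await dbContext.SaveChangesAsync();

    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}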

SendGrid Temporal Coupling

Our question now is, when we place an order, do we need to send the confirmation email immediately? Or sometime later?

From the user's perspective, email is already an asynchronous messaging system, so there's already an expectation that the email won't arrive synchronously. We do expect the email to arrive "soon", but typically, there's some sort of delay. How much delay can we handle? That again depends on the transaction, but within a minute or two is my own personal expectation. I've had situations where we intentionally delay the email, as to not inundate the customer with emails.

We also need to consider what the email needs to be in response to. Does the email get sent as a result of successfully placing an order? Or posting the payment? If it's for posting the payment, we might be able to use Stripe Webhooks to send emails on successful payments. In our case, however, we really want to send the email on successful order placement, not order payment.

Again, this is a business decision about exactly when our email goes out (and how many, for what trigger). The wording of the message depends on the condition, as we might have a message for "thank you for your order" and "there was a problem with your payment".

But regardless, we can decouple our email from our button click.
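
As a sketch of what that decoupling could look like, the confirmation email moves into a handler for the OrderCreatedEvent we already publish. This assumes an NServiceBus-style endpoint, and the ISendGridService and OrdersDbContext names are illustrative, not from the original post:

public class SendOrderConfirmationEmailHandler : IHandleMessages<OrderCreatedEvent> {
    private readonly ISendGridService sendGridService;
    private readonly OrdersDbContext dbContext;

    public SendOrderConfirmationEmailHandler(ISendGridService sendGridService, OrdersDbContext dbContext) {
        this.sendGridService = sendGridService;
        this.dbContext = dbContext;
    }

    public async Task Handle(OrderCreatedEvent message, IMessageHandlerContext context) {
        // The email now goes out "soon" after the order is placed, on the
        // handler's own retry schedule, rather than inside the button-click request.
        var order = await dbContext.Orders.FindAsync(message.Id);
        await sendGridService.SendPaymentSuccessEmailAsync(order);
    }
}

If sending fails, only this handler retries; the order itself is unaffected.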

RabbitMQ Coupling

RabbitMQ is a bit of a more difficult question to answer. I generally assume that my broker is up. Just the fact that I'm using messaging here means that I'm temporally decoupled from recipients of the message. And since I'm using an event, I'm behaviorally decoupled from consumers.

However, not all is well and good in our world, because if my database transaction fails, I can't un-send my message. In an on-premise world with high availability, I might opt for 2PC and coordinate, but we've already seen that RabbitMQ doesn't support 2PC. And if I ever go to the cloud, there are all sorts of reasons why I wouldn't want to coordinate in the cloud.

If we can't coordinate, what then? It turns out there's already a well-established pattern for this - the outbox pattern.

In this pattern, instead of sending our messages immediately, we simply record our messages in the same database as our business data, in an "outbox" table:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    dbContext.SaveMessage(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Internally, we'll serialize our message into a simple outbox table:

public class Message {  
    public Guid Id { get; set; }
    public string Destination { get; set; }
    public byte[] Body { get; set; }
}

We'll serialize our message and store it in our outbox, along with the destination. From there, we'll create some offline process that polls our table, sends our message, and deletes the original.

while (true) {
    // Grab any messages that haven't been dispatched yet.
    var unsentMessages = await dbContext.Messages.ToListAsync();
    var tasks = new List<Task>();
    foreach (var msg in unsentMessages) {
        // Send each message, then remove it from the outbox once the send completes.
        tasks.Add(bus.SendAsync(msg)
           .ContinueWith(t => dbContext.Messages.Remove(msg)));
    }
    await Task.WhenAll(tasks.ToArray());
    // Persist the removals and back off briefly before polling again.
    await dbContext.SaveChangesAsync();
    await Task.Delay(TimeSpan.FromSeconds(1));
}

With an outbox in place, we'd still want to de-duplicate our messages, or at the very least, ensure our handlers are idempotent. And if we're using NServiceBus, we can quite simply turn on Outbox as a feature.
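
For reference, enabling the Outbox in NServiceBus is roughly a one-liner on the endpoint configuration. The sketch below assumes NServiceBus 6 with the RabbitMQ transport and SQL persistence packages, and the endpoint name is made up:

var endpointConfiguration = new EndpointConfiguration("Sales");

var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
transport.ConnectionString("host=localhost");

// Outbox requires a persistence that supports it (SQL, NHibernate, RavenDB, ...).
endpointConfiguration.UsePersistence<SqlPersistence>();

// Stores outgoing messages alongside the business data and de-duplicates
// incoming messages, giving us the outbox behavior described above.
endpointConfiguration.EnableOutbox();

var endpointInstance = await Endpoint.Start(endpointConfiguration);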

The outbox pattern lets us nearly mimic the 2PC coordination of messages and our database, and since this message is a critical one to send, this approach warrants serious consideration.

With all these options considered, we're now able to design a solution that properly decouples our different distributed resources, still satisfying the business goals at hand. Our next post - workflow options!

Categories: Blogs

People are Strange

Hiccupps - James Thomas - Tue, 02/14/2017 - 19:01

Managers. They're the light in the fridge: when the door is open their value can be seen. But when the door is closed ... well, who knows?

Johanna Rothman and Esther Derby reckon they have a good idea. And they aim to show, in the form of an extended story following one manager as he takes over an existing team with problems, the kinds of things that managers can do and do do and - if they're after a decent default starting point - should consider doing.

What their book, Behind Closed Doors, isn't - and doesn't claim to be - is the answer to every management problem. The cast of characters in the story represent some of the kinds of personalities you'll find yourself dealing with as a manager, but the depth of the scenarios covered is limited, the set of outcomes covered is generally positive, and the timescales covered are reasonably short.

Michael Lopp, in Managing Humans, implores managers to remember that their staff are chaotic beautiful snowflakes. Unique. Individual. Special. Jim Morrison just says, simply, brusquely, that people are strange. (And don't forget that managers are people, despite evidence to the contrary.)

Either way, it's on the manager to care to look and listen carefully and find ways to help those they manage to be the best that they can be in ways that suit them. Management books necessarily use archetypes as a practical way to give suggestions and share experiences, but those new to management especially should be wary of misinterpreting the stories as a how-to guide to be naively applied without consideration of the context.

What Behind Closed Doors also isn't, unlike so much writing on management, is dry, or full of heroistic aphorisms, or preachy. In fact, I found it an extremely easy read for several reasons: it's well-written; it's short; the story format helps the reader along; following a consistent story gives context to situations as the book progresses; sidebars and an appendix keep detail aside for later consumption; I'm familiar with work by both of these authors already; I'm a fan of Jerry Weinberg's writing on management and interpersonal relationships and this book owes much to his insights (he wrote the foreword here); I agree with much of the advice.

What I found myself wanting - and I'd buy Rothman and Derby's version of this like a shot - is more detailed versions of some of the dialogues in this book with commentary in the form of the internal monologues of the participants. I'd like to hear Sam, the manager, thinking through the options he has when trying to help Kevin to learn to delegate and understand how he chose the approach that he took. I'd like to hear Kevin trying to work out what he thinks Sam's motives are and perhaps rejecting some of Sam's premises. I'd also like to see a deeper focus on a specific relationship over an extended period of time, with failures, and techniques for rebuilding trust in the face of them.

But while I wait for that, here's a few quotes that I enjoyed, loosely grouped.

On the contexts in which management takes place:
Generally speaking, you can observe only the public behaviors of managers and how your managers interact with you.

Sometimes people who have never been in a management role believe that managers can simply tell other people what to do and that’s that.

The higher you are in the organization, the more other people magnify your reactions.

Because managers amplify the work of others, the human costs of bad management can be even higher than the economic costs.

Chaos hides problems—both with people and projects. When chaos recedes, problems emerge.

The moral of this fable is: Focus on the funded work.

On making a technical contribution as a manager:
Some first-level managers still do some technical work, but they cannot assign themselves to the critical path.

It’s easier to know when technical work is complete than to know when management work is complete.

The more people you have in your group, the harder it is to make a technical contribution.

The payoff for delegation isn’t always immediate.

It takes courage to delegate.

On coaching:
You always have the option not to coach. You can choose to give your team member feedback (information about the past), without providing advice on options for future behavior.

Coaching doesn’t mean you rush in to solve the problem. Coaching helps the other person see more options and choose from them.

Coaching helps another person develop new capability with support.

And it goes without saying, but if you offer help, you need to follow through and provide the help requested, or people will be disinclined to ask again.

Helping someone think through the implications is the meat of coaching.

On team-building:
Jelled teams don’t happen by accident; teams jell when someone pays attention to building trust and commitment.

Over time they build trust by exchanging and honoring commitments to each other.

Evaluations are different from feedback.

A one-on-one meeting is a great place to give appreciations.

[people] care whether the sincere appreciation is public or private ... It’s always appropriate to give appreciation for their contribution in a private meeting.

Each person on your team is unique. Some will need feedback on personal behaviors. Some will need help defining career development goals. Some will need coaching on how to influence across the organization.

Make sure the career development plans are integrated into the person’s day-to-day work. Otherwise, career development won’t happen.

"Career development" that happens only once a year is a sham.On problem solving:
Our rule of thumb is to generate at least three reasonable options for solving any problem.

Even if you do choose the first option, you’ll understand the issue better after considering several options.

If you’re in a position to know a problem exists, consider this guideline for problem solving: the people who perform the work need to be part of the solution.

We often assume that deadlines are immutable, that a process is unchangeable, or that we have to solve something alone. Use thought experiments to remove artificial constraints,

It’s tempting to stop with the first reasonable option that pops into your head. But with any messy problem, generating multiple options leads to a richer understanding of the problem and potential solutions

Before you jump to solutions, collect some data. Data collection doesn’t have to be formal. Look for quantitative and qualitative data.

If you hear yourself saying, “We’ll just do blah, blah, blah,” Stop! “Just” is a keyword that lets you know it just won’t work.

When the root cause points to the original issue, it’s likely a system problem.

On managing:
Some people think management is all about the people, and some people think management is all about the tasks. But great management is about leading and developing people and managing tasks.

When managers are self-aware, they can respond to events rather than react in emotional outbursts.

And consider how your language affects your perspective and your ability to do your job.

Spending time with people is management work.

Part of being good at [Managing By Walking Around and Listening] is cultivating a curious mind, always observing, and questioning the meaning of what you see.

Great managers actively learn the craft of management.

Image: http://www.45cat.com/record/j45762
Categories: Blogs

Dynatrace Customer Award Winners: Redefining Monitoring in 2017

Every year at the Perform customer event, we recognize individual users and customer organizations that have made great strides—and helped others make such strides—in digital transformation. Last week at Perform 2017, in the Chelsea Theater at the Cosmopolitan Hotel, nine Dynatrace Customer Award Winners were recognized for their outstanding work “Redefining Monitoring in 2017.”

The Dynatrace Community is the largest collection of Digital Performance Management experts on the planet.  That’s a big reason why it’s a special place, but, more importantly, this group is made of extraordinary people, namely all of you.

Dynatrace awarded two individuals from our customer and partner bases who have been vibrant participants in our forums, contributors to product enhancements, and who have generally demonstrated the qualities that make our Community great.

The winners are:

Matt Evanson of Optum for Most Valuable Customer Contributor.

Babar Qayyum of 2P for Most Valuable Partner Contributor.

The Dynatrace R&D Mover and Shaker award is awarded to our most innovative development partner for 2017.

This year’s winner is HCL, a multinational IT services company.

Three Digital Performance Awards were presented to customers who excelled in specific categories and who have achieved extremely positive results using the business driver/use case approach to digital transformation.

The Customer Champion is Graybar, the industrial and electrical supply distributor, for optimizing customer experience.

The Operational Wizard winner is the Australian Government, Department of Defence for excellence in operations for Digital Performance Management (DPM).

And, The Innovation Trailblazer is awarded to AMEX, an industry leader in advanced and revolutionary monitoring.

Overall excellence awards are given to two runners-up and one winner for Digital Transformation Command Performance. These winners use visionary and mature DPM to reach their business goals.

The winner of the Digital Transformation Command Performance Award for 2017 is NRG Energy. A leading integrated power company and a member of the Fortune 200, NRG creates value through best in class operations, reliable and efficient electric generation, and a retail platform serving residential and commercial businesses.

The runners-up are Spark, a communications service provider, and the Australian Department of Defence.

Please help us congratulate these customers who redefine monitoring every day.

The post Dynatrace Customer Award Winners: Redefining Monitoring in 2017 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Automated rule-based tagging for services

Using tags to organize and label all the monitored components in your environment is a great way to filter server-side services for purposes of developing component-specific dashboard charts and assigning responsibility to specific team members or groups. However, managing and applying tagging within large, dynamic environments can be a real challenge. To address this, Dynatrace now provides a rule-based approach to tagging that gives you the flexibility you need to take maximum advantage of Dynatrace Smartscape topology modeling, auto-detection of processes and services, and built-in domain expertise.

To define a rule-based tag
  1. Go to Settings > Tagging > Automatic tagging.
  2. Type a name for the new tag in the Add custom tag field.
  3. Click the Add button to add your new tag.
  4. Select the newly created tag from the list below to edit rules for the tag.
  5. You can define multiple rules for each tag. Rules are executed in order. As soon as one rule meets all conditions, the tag is applied and no further rules are executed.
  6. The first thing to consider is whether or not you want to restrict the rule to a specific process group, technology, or service type. While this step is optional, it provides a quick means of reducing the number of services that a rule applies to.
  7. You can then add one or more conditions to the rule that a service must meet before the tag is applied.
    Conditions can check for specific values within any service property that Dynatrace displays (for example, Web application ID). To find the Web application ID or other properties of your services, go to the service’s page and expand the Properties and tags section (as shown below). The list of conditions also contains properties of processes and hosts. If you select one of these, then the rule will be applied only to those services that run on that specific process or host.
  8. Once you’ve created a rule, click the Preview button to verify the services that are returned by the rule. Note that to be successfully tagged, a service must meet all of the specified conditions of the rule.
  9. Click Done at the top of the page to save your tag. Rule-based tags are applied automatically to all existing and newly detected services.
    Note: It may take up to a minute before your new tag is applied. Once a tag is applied to a service, the tag is listed on that service’s page within the Properties and tags section.

There are numerous ways by which you can combine tags, rules, and conditions when organizing your services. For example, you can define multiple conditions within a single rule, or you can define multiple rules for a single tag. For a rule to be met, all conditions of the rule must be met. In other words, conditions are combined using an implied AND operator. The rules themselves are executed independently from one another, and so are combined using an implied OR operator.
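
As a rough illustration of those semantics (this is not Dynatrace code, and the Service type is just a stand-in), the evaluation boils down to an any-of-rules, all-of-conditions check:

// Illustrative only: a rule matches when ALL of its conditions are met,
// and the tag is applied as soon as ANY rule matches.
public class TagRule {
    public List<Func<Service, bool>> Conditions { get; } = new List<Func<Service, bool>>();
}

public static bool TagApplies(Service service, IEnumerable<TagRule> rules) {
    return rules.Any(rule => rule.Conditions.All(condition => condition(service)));
}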

In this way, you can build tagging rules for all kinds of scenarios.

Define more complex & granular rules

Dynatrace automatically discovers all of your hosts, processes, process groups, and services—in addition to the dependencies between these components. Dynatrace also discovers metadata across each layer in your environment’s topology. This is the same metadata that Dynatrace uses to develop Smartscape topology maps. This same metadata and topology modeling can now be leveraged to define complex and powerful service-tagging rules. The example below shows a rule that filters services that are of type WebService, run on Tomcat, and have detected process group names that include the string BB.

Another powerful service-tagging rule example (shown below) matches all Java services that run in specific process groups within a Cloud Foundry space called Development (in a Cloud Foundry PaaS setup), where the detected process group name includes the string spring.

The example below shows a rule that applies a tag to all non-admin Azure Web Sites services.

Service properties available for tagging

The specific service properties available to you for tagging vary based on technology type.

To find out which properties a service provides
  1. Select Transactions & services from the navigation menu.
  2. Select the service that you want to tag.
  3. Expand Properties and tags to display the available properties.
To find out which properties a process group provides
  1. Select Hosts from the navigation menu.
  2. Select the host that includes the process group you want to tag.
  3. Expand Properties and tags to display the available properties.

Here is the current list of service properties supported by Dynatrace for automated service tagging. This group will be expanded over the coming weeks.

Service properties

  • Custom service class name
  • Database name
  • Database vendor
  • Detected service name
  • Service port
  • Service tags
  • Web application ID
  • Web context root
  • Web server name
  • Web service name
  • Web service namespace

Process properties

  • Apache config path
  • Apache spark master ip address
  • Azure web app host name
  • Azure web app site name
  • Catalina base
  • Catalina home
  • Cloud Foundry application name
  • Cloud Foundry instance index
  • Cloud Foundry space ID
  • Cloud Foundry space name
  • Coldfusion jvm config file
  • Coldfusion service name
  • Detected group name
  • Detected process name
  • Docker container name
  • Docker image name
  • Dotnet command
  • Dynatrace custom cluster ID
  • Dynatrace custom node ID
  • Elastic search cluster name
  • Elastic search node name
  • Exe name
  • Exe path
  • GlassFish domain name
  • GlassFish instance name
  • IIS app pool
  • IIS role name
  • Java jar file
  • Java jar path
  • Java main class
  • Jboss home
  • Jboss mode
  • Jboss server name
  • Kubernetes base pod name
  • Kubernetes container name
  • Kubernetes full pod name
  • Kubernetes namespace
  • Kubernetes pod uid
  • Listen port
  • Nodejs app name
  • Nodejs script name
  • Ruby app root path
  • Ruby script path
  • Varnish instance name
  • Weblogic home
  • Weblogic name
  • Websphere cell name
  • Websphere cluster name
  • Websphere node name
  • Websphere server name

Host properties

  • AWS availability zone
  • Azure SKU
  • Azure compute mode
  • Azure web app host name(s)
  • Azure web app site name(s)
  • Cloud type
  • Detected name of host
  • Host IP address
  • Host tags
  • Instance id of ec2
  • Local host name of ec2
  • Paas type
  • Public host name of ec2

Take full advantage of service tags

Service tags can be leveraged in a number of ways. For example, you can use them within the Services list (Transactions & services > Services) to filter services based on technology type or other criteria (see example below).

Once you’ve selected a tagged group of related services, it’s easy to focus your analysis on those services. For example, click the Chart button at the top of the Services list page to generate tag-specific charts for the selected services (see example below).

Newly deployed services that match your tagging rules are automatically tagged and added to your charts! Tag-specific charts can even be pinned to your home dashboard. This provides a great option for providing responsible teams and staff with performance insights into their specific areas of responsibility.

Tagging for efficient problem-notification routing

You can also use tags for efficient routing of problem notifications to responsible team members. When setting up notification integration (Settings > Integration > Problem notifications), enable the Filter on tags switch (see Slack example below) and assign relevant tags. Once set up, the next time a problem notification is sent out, Dynatrace will check to see if any affected services carry properties that you’ve defined in your service tags. In this way, when critical parts of your environment are affected by a detected problem, the related notification will be delivered to the appropriate teams.

Prerequisites

  • OneAgent version 1.111 or higher is required to generate the correct metadata for Cloud Foundry, OpenShift, and Azure Web Sites processes.
  • Usage of the DT_CLUSTER_ID environment variable leads to the loss of some metadata within specific processes. This issue will be addressed in OneAgent version 1.113.

The post Automated rule-based tagging for services appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Open Letter about Agile Testing Days cancelling US conference

Chris McMahon's Blog - Tue, 02/14/2017 - 03:21
I sent the following via the email contact pages to Senator John McCain, Senator Jeff Flake, and Representative Martha McSally of Arizona in regard to Agile Testing Days cancelling their US conference on 13 February.



Agile Testing Days is a top-tier tech conference about software testing and Quality Assurance in Europe. They had planned their first conference in the USA to be held in Boston MA, with a speaker lineup from around the world. They cancelled the entire conference on 13 February because of the "current political situation" in the USA. Here is their statement: https://agiletestingdays.us/

Although I was not scheduled to attend or to speak at this particular conference, it is conferences such as Agile Testing Days where the best ideas in my field are presented, and it is from conferences such as Agile Testing Days that many of my peers get those ideas, and I rely on conversations from those who do speak and attend in order to stay current in my field.

As a resident of Arizona, cancelling such conferences affects me directly. I have enough expertise and skill to live anywhere I choose. I choose to live in Arizona, but my work absolutely depends on the free flow of people and information across national and state borders.

It is shameful that such a prestigious and respected multi-national software organization finds it necessary to cancel their first ever conference in the USA because of the outrageous policies of the current administration. I urge you to take measures to make organizations such as Agile Testing Days and their attendees and speakers feel safe and welcome, as they should be.

Chris McMahon
Senior Member of Technical Staff, Quality Assurance
Salesforce.org
Tucson, AZ
Categories: Blogs

Discomfort as a Tool for Change

Google Testing Blog - Mon, 02/13/2017 - 18:53
by Dave Gladfelter (SETI, Google Drive)
Introduction

The SETI (Software Engineer, Tools and Infrastructure) role at Google is a strange one in that there's no obvious reason why it should exist. The SWEs (Software Engineers) on a project understand its problems best, and understanding a problem is most of the way to fixing it. How can SETIs bring unique value to a project when SWEs have more on-the-ground experience with their impediments?

The answer is scope. A SWE is rewarded for being an expert in their particular area and domain and is highly motivated to make optimizations to their carved-out space. SETIs (and Test Engineers and EngProd in general) identify and solve product-wide problems.

Product-wide problems frequently arise because local optimizations don't necessarily add up to product-wide optimizations. The reason may be the limits of attention, blind spots, or mis-aligned incentives, but a group of SWEs each optimizing for their own sub-projects will not achieve product-wide maxima.

Often SETIs and Test Engineers (TEs) know what behavior they'd like to see, such as more integration tests. We may even have management's ear and convince them to mandate such tests. However, in the absence of incentives, it's unlikely that the decisions SWEs make in response to such mandates will add up to the behavior we desire. Mandates around methods/practices are often ineffective. For example, a mandate of documentation for each public method on an interface often results in "method foo does foo."

The best way to create product-wide efficiencies is to change the way the team or process works in ways that will (initially) be uncomfortable for the engineering team, but that pay dividends that can't be achieved any other way. SETIs and TEs must work to identify the blind spots and negative interactions between engineering teams and change the environment in ways that align engineering teams' incentives. When properly incentivized, SWEs will make optimal decisions enhanced by product-wide vision rather than micro-management.
Common Product-Wide Problems

Hard-to-use APIs

One common example of local optimizations resulting in cross-team de-optimization is documentation and ease-of-use of internal APIs. The team that implements an internal API is not rewarded for making it easy to use except in the most oblique ways. Clients are compelled to use the internal APIs provided to them, so the API owner has a monopoly and will set the price of using it at "you must read all the code and debug it yourself" in the absence of incentives or (rare) heroes.
Big, slow releases

Another example is large and slow releases. Without EngProd help or external pressure, teams will gravitate to the slowest, biggest release possible.

This makes sense from the position of any individual SWE: releases are painful, you have to ensure that there are no UI and API regressions, watch traffic and error rates for some time, and re-learn and use tools and processes that are complex and specific to releases.

Multiple teams will naturally gravitate to having one big release so that all of these costs can be bundled into one operation for "efficiency." The result is that engineers don't get feedback on features for weeks and versioning of APIs and data stores is ignored (since all the parts of the system are bundled together into one big release). This greatly slows down developer and feature velocity and greatly increases risks of cascading failures when the release fails.
How EngProd fixes product-wide problems

SETIs can nibble around the edges of these kinds of problems by writing tools and automation. TEs can create easy-to-use test environments that facilitate isolating and debugging faults in integration and ambiguities in APIs. We can use fancy technologies to sample live traffic and ensure that new versions of systems behave the same as previous versions. We can review design docs to ensure that they have an appropriate test plan. Often these actions do have real value. However, these are not the best way to align incentives to create a product-wide solution. Facilitating engineering teams' fruitful collaboration (and dis-incentivizing negative interactions) gives EngProd a multiplier that is hard to achieve with only tooling and automation.

Heroes are few and far between so we must turn to incentives, which is where discomfort comes in. Continuity is comfortable and change is painful. EngProd looks at how to change the problem so that teams are incentivized to work together fruitfully and disincentivized (discomforted) to pursue local optimizations exclusively.

So how does EngProd align incentives? Certainly there is a place for optimizing for optimal behaviors, such as easy-to-use integration environments. However, incentivizing optimal behaviors via negative feedback should not be overlooked. Each problem is different, so let's look at how to address the two examples above:
Incentivizing easy-to-use APIs

Engineers will make the things they're incentivized to make. For APIs, make teams incentivized to provide integration help in the form of fakes. EngProd works with team leads to ensure there are explicit objectives to provide Fakes for their APIs as part of the rollout.

Fakes are as-simple-as-possible implementations of a service that still can be used to do pre-submit testing of client interactions with the system. They don't replace integration tests, but they reduce the likelihood of finding errors in subsequent integration test runs by an order of magnitude.
Furthermore, have some subset of the same client-owned and server-owned tests run against the fakes (for quick presubmit testing) as well as the real implementation (for continuous integration testing) and work with management to make it the responsibility of the Fake owner to debug any discrepancies for either the client- or the server-owned tests.
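
To make the idea concrete, here is a small, generic sketch (written in C# with a made-up UserService API, not any real Google service) of a fake that clients could use in presubmit tests:

public class User {
    public string Id { get; set; }
    public string Email { get; set; }
}

public interface IUserService {
    Task<User> GetUserAsync(string userId);
}

// As simple as possible, but honoring the same contract as the real service.
public class FakeUserService : IUserService {
    private readonly Dictionary<string, User> users = new Dictionary<string, User>();

    // Test setup seeds the fake directly instead of calling a real backend.
    public void AddUser(User user) => users[user.Id] = user;

    public Task<User> GetUserAsync(string userId) =>
        users.TryGetValue(userId, out var user)
            ? Task.FromResult(user)
            : Task.FromException<User>(new KeyNotFoundException(userId));
}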

This reverses the pain! API owners, who are in a position to make APIs better, are now the ones experiencing negative incentives when APIs are not easy to use. Previously, when clients felt the pain, they had no recourse other than to file easily-ignored bugs ("Closed: working as intended") or contribute changes to the API owners' codebase, hurting their own performance with distractions.

This will incentivize API owners to design APIs to be as simple as possible with as few side-effects as possible, and to provide high-quality fakes that make it easy for clients to integrate with the API. Some teams will certainly not like this change at first, but I have seen API teams come to the realization that this is the best choice for the larger effort and implement these practices despite their cost to the team in the short run.

Helping management set engineering team objectives may not seem like a typical SETI responsibility, but although management is responsible for setting performance incentives and objectives, they are not well-positioned to understand how the low-level decisions of different teams create harmful interactions and lower cross-team performance, so they need SETI and TE guidance to create an environment that encourages optimal behaviors.
Fast, small releases

Being forced to release more frequently than is required by feature deployment requirements has many beneficial side-effects that make release velocity a goal unto itself. SETIs and TEs faced with big, slow releases work with management to mandate a move to a set of smaller, more frequent releases. As release velocity is ratcheted up, negative behaviours such as too much manual testing or too much internal coupling become more painful, and many optimal behaviors are incentivized.
Less coupling between systems

When software is released together, it is easy to treat the seams between different components as implementation details. The resulting systems become so intertwined (coupled) that responsibilities between them are completely and randomly mixed and their interactions are too complex for any one person to understand. When two components are released separately and at different times, different versions of them must be compatible with one another. Engineers who were previously complacent about this fragility will become fearful of failed releases due to implicit contract changes. They will change their behavior in beneficial ways such as defining the contract between components explicitly and creating regression testing for it. The result is a system composed of robust, self-contained, more easily understood components.
Better/More automated testing

Manual testing becomes more painful as release velocity is ramped up. This will incentivize automated regression, UI and performance tests. This makes the team more agile and able to catch defects sooner and more cheaply.
Faster feedback

When incremental feature changes can be released to dogfood or other beta channels more frequently, user interaction designers and product managers get much faster feedback about what paths lead to better user engagement and experience than in big, slow releases where an entire feature is deployed simultaneously. This results in a better product.
Conclusion

The SETIs and TEs optimize interactions between teams and create fixes for product-wide, cross-team problems in order to improve engineering productivity and velocity. There are many worthwhile projects that EngProd can do using broad knowledge of the system and expertise in refactoring, automation and testing, such as creating test fixtures that enable continuous integration testing or identifying and combining duplicative tests or tools.

That said, the biggest problem that EngProd is positioned to solve is to break the chain of local optimizations resulting in cross-team de-optimizations. To that end, discomfort is a tool that can incentivize engineers to find solutions that are optimal for the entire product. We should look for and advocate for these transformative changes.
Categories: Blogs

Automating Cross-browser JavaScript Unit Testing

Software Testing Magazine - Mon, 02/13/2017 - 18:20
What might seem obvious to some people could be weird to others. This is still the case for applying unit testing to JavaScript code in multiple browsers. In his blog post “Learning How to Set Up Automated, Cross-browser JavaScript Unit Testing”, Philip Walton provides a step-by-step process to create some automated testing of your JavaScript code. Philip Walton thinks that even if there are some JavaScript testing tools and frameworks, like Karma, that claim to make it easier to automate JavaScript tests, his experience is that these tools often create more complexity. The blog starts with a definition of automation as “using machines to off-load the repetitive parts of an existing workflow”. He makes a very interesting and fundamental remark: “If you try to start with automation before fully understanding the manual process, it’s unlikely you’ll understand the automated process either.” The post continues with some examples of testing using Mocha. The problem with this simple approach is that when some of your tests are failing, there is no easy way to reproduce your bug and debug locally. It is also tedious and error prone to open different browsers to run your tests every time you change your code. Philip Walton then describes a process that meets his requirements: running the tests from the command line; debugging failed tests locally; running the tests on a CI machine; and being able to run all the tests automatically anytime somebody commits new changes or makes a pull request. This process [...]
Categories: Communities

The Bug in Lessons Learned

Hiccupps - James Thomas - Fri, 02/10/2017 - 21:52

The Test team book club read Lessons Learned in Software Testing the other week. I couldn't find my copy at the time but Karo came across it today, on Rog's desk, and was delighted to tell me that she'd discovered a bug in it...
Categories: Blogs

Refactoring Towards Resilience: Evaluating RabbitMQ Options

Jimmy Bogard - Fri, 02/10/2017 - 19:50

Other posts in this series:

In the last post, we looked at dealing with an API in SendGrid that basically only allows at-most-once calls. We can't undo anything, and we can't retry anything. We're going to find some similar issues with RabbitMQ (although it's not much different than other messaging systems).

RabbitMQ, like all queuing systems I can think of, offers a wide variety of reliability modes. In general, I try to make my message handlers idempotent, as it enables so many more options upstream. I also don't really trust anyone sending me messages, so anything I can do to ensure MY system stays consistent despite what I might get sent is in my best interest.

Looking back at our original code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

We can see that if anything fails after the "bus.Publish" line, we don't really know what happened to our message. Did it get sent? Did it not? It's hard to tell, but going to our picture of our transaction model:

Transaction flow

And our options we have to consider as a reminder:

Coordination Options

Let's take a look at our options dealing with failures.

Ignore

Similar to our SendGrid solution, we could just ignore any failures with connecting to our broker:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    try {
        await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    } catch (Exception e) {
        Logger.Exception(e, $"Failed to send order created event for order {order.Id}");
    }
    return RedirectToAction("Success");
}

This approach would shield us from connectivity failures with RabbitMQ, but we'd still need some sort of process to detect these failures and retry those sends later on. One way to do this would be simply to flag our orders:

} catch (Exception e) {
    order.NeedsOrderCreatedEventRaised = true;
    Logger.Exception(e, $"Failed to send order created event for order {order.Id}");
}    

It's not a very elegant solution, as I'd have to create flags for every single kind of message I send. Additionally, it ignores the case where the database transaction rolls back after my message is sent. In that case, the message still goes out, and consumers could get events for things that didn't actually happen! There are other ways to fix this - but for now, let's cover our other options.

Retry

Retries are interesting in RabbitMQ because although it's fairly easy to retry my message on my side, there's no guarantee that consumers can support a message if it came in twice. However, in my applications, I try as much as possible to make my message consumers idempotent. It makes life so much easier, and allows so many more options, if I can retry my message.

Since my original message includes the unique order ID, a natural correlation identifier, consumers can have an easy way of ensuring their operations are idempotent as well.
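
Here's a consumer-side sketch of that idea, assuming an NServiceBus-style handler and a hypothetical ProcessedOrders table used as the de-duplication store:

public class OrderCreatedHandler : IHandleMessages<OrderCreatedEvent> {
    private readonly OrdersDbContext dbContext;

    public OrderCreatedHandler(OrdersDbContext dbContext) {
        this.dbContext = dbContext;
    }

    public async Task Handle(OrderCreatedEvent message, IMessageHandlerContext context) {
        // If we've already handled this order Id, a redelivered or retried
        // message becomes a no-op instead of a duplicate side effect.
        if (await dbContext.ProcessedOrders.AnyAsync(o => o.OrderId == message.Id))
            return;

        // ... do the actual work for the order here ...

        dbContext.ProcessedOrders.Add(new ProcessedOrder { OrderId = message.Id });
        await dbContext.SaveChangesAsync();
    }
}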

The mechanics of a retry could be similar to our above example - mark the order as needing a retry of the event to be raised at some later point in time, or retry in the same block, or include a resiliency layer on top of sending.
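
As one example of a resiliency layer on top of sending, a retry policy from the Polly library (my pick for illustration; the post doesn't prescribe a specific tool) could wrap the publish call:

// Retry the publish up to 3 times with exponential backoff before giving up.
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)),
        onRetry: (exception, delay) =>
            Logger.Exception(exception, $"Publish failed, retrying in {delay}"));

await retryPolicy.ExecuteAsync(() => bus.Publish(new OrderCreatedEvent { Id = order.Id }));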

Undo

RabbitMQ doesn't support any sort of "undo" natively, so if we wanted to do this ourselves, we'd have to have some sort of compensating event published. Perhaps an event, "OrderNotActuallyCreatedJustKiddingAboutBefore"?

Perhaps not.

Coordinate

RabbitMQ does not natively support any sort of two-phase commit, so coordination is out.

Next steps

Now that we've examined all of our options around the various services our application integrates with, I want to evaluate each service in terms of the coupling we have today, and determine if we truly need that level of coupling.

Categories: Blogs

Sonatype Nexus Installation Using Docker

Sonatype Blog - Fri, 02/10/2017 - 19:21
1. Download the Docker image using the following command:
# docker pull sonatype/nexus

2. Build an image from a Nexus Dockerfile:
# docker build --rm --tag sonatype/nexus oss/
# docker build --rm --tag sonatype/nexus-pro pro/ (for Nexus Pro)

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Paul Volkman: Why is Sonatype the best solution?

Sonatype Blog - Fri, 02/10/2017 - 18:04
When Paul Volkman was asked "Why is Sonatype the best solution?," he didn't hesitate. Watch and listen as he gives the best, most succinct explanation you'll find anywhere.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Journée Française des Tests Logiciels, Paris, April 11 2017

Software Testing Magazine - Fri, 02/10/2017 - 10:00
The Journée Française des Tests Logiciels (French Software Testing Day) is a one-day conference focused on software testing organized by the Comité français du test logiciel (CFTL), the French chapter of the ISTQB (International Software Testing Qualifications Board). It brings together more than 700 French software testers. Most of the presentations and keynotes are in French, but there are also sessions in English. In the agenda of the Journée Française des Tests Logiciels conference you can find topics like “Test Automation in an Agile Organization at Voyages-sncf.Com”, “Don’t Write the Test… Draw it”, “Testing Software in a Scaling Agile Context”, “Open Source as a Valuable Solution for Software Testing”, “App Testing: Seven Experience Reports”, “Security as a Service”, “Automated Testing: Selenium vs SoapUi”, “Performance Testing”, “Using Lean in Software Testing”, “Load Testing and Agility”, “Optimizing Your Tests with a Simplified Automation Approach”, “Seven Years of Experience in Model-Based Testing”, “Optimizing the Testing Process with a Grey Box Approach”.

Web site: http://www.jftl.org/

Location for the Journée Française des Tests Logiciels conference: Beffroi de Montrouge, 2, place Émile-Cresp, 92121 Montrouge, France
Categories: Communities

Two important lessons for success of Test Automation

Thinking Tester - Fri, 02/10/2017 - 09:06
James Bach wrote this great article on how not to think about test automation way back in 1999. Anyone starting out in automation, and anyone wanting to learn more about it, must read this article. First of all, automation is about testing; if you think narrowly about testing, your automation will be narrow. Even today it is not uncommon for business leaders to say, "We don't have time or resources for testing - do automation." I hope some business leaders in IT, software, and testing read this post and amend that view.

I would like to share two key lessons I have learned over the years that you can use to get the most out of the money you are putting into automation.

If a test (case) can be specified like a rule, it MUST be automated
Automation code is software and is therefore, obviously, built on some kind of specification. Most GUI automation (QTP, Selenium) is typically built from so-called "test cases" written in a human language (say, English); the first question an automation engineer asks when starting automation is "where are the test cases?". In the dev world, automation takes on a different meaning. In TDD-style automation (if you call TDD tests automation), the test is itself the specification: a product requirement is expressed as a failing test to start with. BDD pushes this to the other extreme and specifies tests in the form of expected behavior. So these automated tests are based on a specification that is still written in a human language, but expressed mainly in business terms and in a fixed format (Given-When-Then).
The key lesson here is: if a test can be specified like a rule, with a clearly defined inference to be drawn from it, it should be automated. Automating a test means creating a program to configure, exercise, and infer the result of whatever the test is trying to validate. Michael Bolton calls such a test a check - a meaningful distinction. If a test relies mostly on a human element to draw the inference, you cannot possibly automate it in its full form.
How do you implement this lesson in your daily life as a tester? When designing a test, see if you can specify it like a rule. If you can, explore ways to write a program for it; that test then becomes automated. This way, as you build a suite of tests, some are specified in a way that makes them easy to automate, and some are specified in a way that requires a human tester to apply her intelligence to exercise them and infer the results.
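
As a purely hypothetical illustration (the classes below are not from this post), a rule such as "orders of $100 or more ship free" can be specified precisely enough for a program to configure, exercise, and infer the result - in other words, an automatable check:

using Xunit;

public class ShippingRuleChecks {
    // The rule is the specification; the check configures, exercises, and infers.
    // ShippingCalculator is a made-up class used only to illustrate the idea.
    [Fact]
    public void Orders_of_100_or_more_ship_free() {
        var calculator = new ShippingCalculator(freeShippingThreshold: 100m);

        Assert.Equal(0m, calculator.FeeFor(orderTotal: 150m));
        Assert.NotEqual(0m, calculator.FeeFor(orderTotal: 99.99m));
    }
}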

Automated tests (checks) are like a guard for product code
A child asks his father, "What is the use of the brake in a car?" "It helps to stop the car," says the father. The kid responds, "No... I think the brake helps the driver drive the car as fast as he wants, because he has a means to stop when needed." Along the same lines, having automated tests around a piece of code - literally guarding the code - empowers the developer to make changes to that code faster. More often than not, the biggest speed bump for development is the fear of breaking working code; developers are mostly worried about large chunks of legacy code that are rarely understood fully. With automated tests standing guard, any change to the code's behavior is flagged via a failing test. Armed with that guarded code, developers can make changes faster and depend on the tests to tell them whether any change has broken some other "working" code.

How do you implement this lesson? Work with developers and help them create tests that guard their code. These tests should work like "change detectors". Writing this kind of test automation requires knowledge of the product code and of unit-testing principles - it is not for faint-hearted GUI QTP/Selenium folks.
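
As one purely illustrative way to build such a change detector (again, not code from this post), a characterization test simply pins down the current behaviour of a piece of legacy code so that any change to that behaviour fails the build:

using Xunit;

public class InvoiceNumberFormatterChecks {
    // LegacyInvoiceNumberFormatter is hypothetical; the point is that the test
    // records today's output, so any behavioural change is flagged immediately.
    [Fact]
    public void Formatter_output_has_not_changed() {
        var formatter = new LegacyInvoiceNumberFormatter();

        Assert.Equal("INV-2017-000042", formatter.Format(year: 2017, sequence: 42));
    }
}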

Webinar Slides and Recording - Security Testing: The Missing Link in Information Security

Thanks to everyone who participated in today's webinar. I really enjoyed the time together, even if I did experience a complete system failure and restart in the latter part of the webinar. Just to let you know how the rest of today went, I was checking out this evening at Wal-mart (not self-checkout) and after I scanned my debit card, the pin pad displayed a message, "System shutdown in progress". I don't know what it is about me, but I swear, systems fail in my presence. It has been that way for over 20 years now! Oh, the joys of being a tester!

OK, here we go...

Here is the recording link. I have edited the video so that all slides are shown and discussed.

Here is a PDF with the slides in 2-up format.

Here is a PDF with the slides in full color format.

I hope you find the information helpful. Feel free to share it. I hope it can help you build the awareness of the need for security testing in your organization.

Thanks!

Randy

Categories: Blogs

Declarative Pipeline: Publishing HTML Reports

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Declare Your Pipelines!

Declarative Pipeline 1.0 is here! This is the second post in a series showing some of the cool features of Declarative Pipeline. In the previous blog post, we created a simple Declarative Pipeline. In this blog post, we’ll go back and look at the Scripted Pipeline for the Publishing HTML Reports in Pipeline blog post. We’ll convert that Pipeline to Declarative syntax (including properties), go into more detail on the post section, and then we’ll use the agent directive to switch our Pipeline to run in Docker.

Setup

For this post, I’m going to use the blog/add-declarative/html branch of my fork of the hermann...
Categories: Open Source
