
Feed aggregator

Stop being a NON-Technical Tester!

PractiTest - Thu, 02/23/2017 - 11:25
Editor’s note: This post was originally published in Dec. 2011 and has been updated to take current testing trends into account.

With current QA trends placing increased emphasis on agile and DevOps as a means of coping with accelerated product releases, it seems increasingly important for testers to become more “technical”, and at times even “code savvy”, in their teams and work.

When I again (as in the original post from 2011) posted the open question:

“Do testers need to be as technical as programmers to be successful at their jobs?”

I got plenty of answers. Here are just a few that represent the main opinion threads:

Prashant Hegde: Not necessarily. However, they need to be technical enough to analyze the system under test and carry out effective testing.

Tracy Richardson: No, but they need to understand what a programmer has changed and linked to. I have always thought a good tester comes in from the user perspective, with a lot of tenacity and a touch of “Right, let’s break this!”

Kobi Halperin: No – Testers must be MUCH MORE Technical than programmers !!!
While programmers mostly consider how the product should work, Testers must consider all the adjacent activities and features which might cause it to Fail.

To all the responders above and those I missed, thanks for the great feedback!

But as you surely guessed I have my own opinion on the subject and I want to share it with you, so here it goes…

My definition of a Technical Tester

Here’s how I differentiate a Technical Tester from a Non-Technical Tester. (If you read my previous blogs on “Why are some testers not really Professional Testers” then you should already have an idea…)

A Technical Tester is not afraid of doing most of the following on a regular basis as part of his job (in no particular order):

-Understand the architecture of the product he is testing,

including the pros & cons of the specific design, as well as the risks linked to each of the components and interfaces in the product.

He then uses this information to plan his testing strategy, to execute his tests and find the hidden issues, and also to provide visibility to his team regarding the risks involved in developing a specific feature or making a given change to the system.

 -Review the code he needs to test.

He can do this on a number of levels, from going over only the names of the files that were changed all the way to reviewing the code itself. This information provides valuable input to help decide what needs to be tested and how, as well as to find things about the changes that might have been missed by the developer or the documentation.

BTW, by code I mean SQL queries, scripts, configuration files, etc.

 -Work with scripts & tools to support his work.

A technical tester should be able to create (or at least “play” with) scripts that help him run repetitive tests such as sanity or smoke checks, and tasks such as configuration, installation, setup, etc.

He should also be able to work with free automation tools such as Selenium or WATIR (or any of the paid ones like QTP, SeeTest, TestComplete, etc.) to create and run test scripts that will increase the stability of the product in development and, over time, save time…
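
As an illustration of the kind of script meant here, a minimal Selenium smoke-test sketch using the Python bindings; the URL, page title, and element IDs are all hypothetical:

    # smoke_test.py - a minimal Selenium smoke test
    # (the URL, page title, and element IDs below are hypothetical placeholders)
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        # Load the login page and confirm we reached the right application
        driver.get("https://app.example.com/login")
        assert "Example App" in driver.title

        # Log in with a dedicated test account
        driver.find_element(By.ID, "username").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("test_password")
        driver.find_element(By.ID, "login-button").click()

        # find_element raises if the element is missing, so this line
        # is itself the check that login reached the dashboard
        driver.find_element(By.ID, "dashboard")
        print("Smoke test passed")
    finally:
        driver.quit()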

 -Be up to date with the technical aspects of his infrastructure

(e.g. browsers, databases, languages, etc)

He should read the latest updates on all aspects of his infrastructure that may have an effect on his work, for example new updates to his O/S matrix, known issues with the browsers supported by his product, updates to external products his product integrates with, etc.

With the help of Google Alerts and by subscribing to a couple of newsletters, anyone can do this by reading 5 to 10 emails 2 or 3 times a week. The value gained from becoming an independent source of knowledge greatly exceeds the time invested in the effort.

 -Be able to troubleshoot issues from Logs or other System Feeds.

He is aware of all the logs and feeds available in his system, and uses them to dig deeper into any issue or strange behavior.

This information is helpful during testing, where it lets him provide more information than simply writing “there is a bug with functionality X”. And it will be critical if he is called on to work on a customer bug, where he needs to understand complex issues quickly and without access to all the information.

In addition to the above, a technical tester should also be able to:

– Provide feedback on, and run, the unit tests created by his programmer peers.

– Run SQL queries directly on the DB to help verify his testing results (a minimal sketch follows this list).

– Install and configure the system he is testing.

etc.
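
To illustrate the SQL point above, here is a minimal sketch of verifying a test result directly against the database, using Python’s built-in sqlite3 module; the database file, table, and column names are hypothetical:

    # verify_order.py - check a test result directly against the DB
    # (sqlite3 used for illustration; file/table/column names are hypothetical)
    import sqlite3

    conn = sqlite3.connect("app_under_test.db")
    try:
        cur = conn.cursor()
        # After creating an order through the UI, confirm it actually
        # landed in the DB with the expected status
        cur.execute(
            "SELECT COUNT(*) FROM orders WHERE customer_id = ? AND status = ?",
            (42, "confirmed"),
        )
        (count,) = cur.fetchone()
        assert count == 1, f"expected exactly 1 confirmed order, found {count}"
        print("DB verification passed")
    finally:
        conn.close()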

Sounds like Superman or MacGyver?

It may sound like this, but actually it’s not!

As testers we work on projects that revolve around Software, Hardware, and/or Embedded products. The only way to do a good job in testing them is to have a deep understanding of both angles: technical and functional.

This doesn’t mean that you need to replace your developers or have the same technical depth as they do, or surpass your Product Marketing’s knowledge of your users.

You need to achieve a balance, where you have “enough” knowledge and understanding of both these areas in order to do your job as a tester.

Is it black and white?

There is no standard to define how technical a tester should be on every project and product. Like in many other situations, the best answer to how technical you need to be is: “It Depends…”

You should be at least technical enough to do your job effectively and to speak the same language as the rest of your programming and testing peers.

What do I mean by that?

If you work at a software development firm, then you should understand enough of the languages used by your developers to be able to read the code and understand their changes. If you work on a heavily DB-related project, then you need to understand enough SQL and database management. If you work at a website development firm, then you should know enough CSS, HTML and JS, and so it goes…

So if I am not Technical enough, should I quit testing??? Definitely not!

If you like testing and you are good at it, why should you quit? On the contrary, this is a great opportunity to improve your work and increase your market value as a tester.

Categories: Companies

Browser testing and conditional logic in Declarative Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. Declare Your Pipelines! Declarative Pipeline 1.0 is here! This is the fourth post in a series showing some of the cool features of Declarative Pipeline. In the previous post, we integrated several notification services into a Declarative Pipeline. We kept our Pipeline clean and easy to understand by using a shared library to make a custom step called sendNotifications that we called at the start and end of our Pipeline. In this blog post, we’ll start by translating the Scripted Pipeline in the sample project I worked with in "Browser-testing with Sauce OnDemand and Pipeline" and "xUnit and Pipeline" to Declarative. We’ll make our Pipeline clearer...
Categories: Open Source

Integrating performance testing into your CD pipeline

HP LoadRunner and Performance Center Blog - Wed, 02/22/2017 - 20:30

With ever-shortening sprints, how often do you test for performance – or do you skip it altogether? That is, until one minor failure in production reminds you that you need to move from rarely testing for performance to continuously testing for performance... And this is how the journey of “fitting performance testing into your DevOps practices” begins. Read more about how to fit performance testing into your short release cycles.

Categories: Companies

Vector Informatik Buys Vector Software

Software Testing Magazine - Wed, 02/22/2017 - 19:09
Vector Informatik GmbH, a Germany-based specialist in the development and testing of automotive electronics, has acquired 100% of the US company Vector Software Inc. Vector Software specializes in...

Categories: Communities

Inside the black box – Generated code

With the requirement to deliver new software releases faster and more frequently, many companies choose to automate some, or all, of the code development. Developers can create flows in an integrated development environment (IDE) such as JDeveloper, Eclipse or another tool without seeing a single line of code. Once the process is defined in the flow view, the IDE creates one or more files which are read in by the Java Virtual Machine (JVM) or Common Language Runtime (CLR). Once the JVM/CLR has read in the file(s), it generates the code necessary to execute the flow.

When a performance problem or error occurs within the generated code, it can be very complex and time-consuming to find out which part of the flow is the root cause. While many tools support stepping through the flow to find potential issues during development, how can one ensure that the generated code keeps performing in a production or acceptance environment?

Without any additional configuration within Dynatrace AppMon the PurePath information already informs you about all bottlenecks, but it can be close to impossible to relate the bottlenecks back to the original flow created in the IDE. If one sees in AppMon that a specific query is executed 1000 times, or that a method is using a lot of synchronization or CPU time, how could one relate that back to one specific flow component used in the process?

One example of a solution which uses the above method to describe a process is Oracle BPEL Process Manager. The specific implementation explained below is related to Oracle BPEL Process Manager, but the thought process can be applied to countless other tools which generate code.

Oracle BPEL Process Manager

In the following screenshot one can see all of the exceptions, response times and much more for a transaction, but the data cannot be related back to the developers of the application. The developers never wrote any code; they dragged and dropped elements in a graphical interface.

Out of the box PurePath

Using Application Monitoring from Dynatrace you can link the original flow to the generated code. When an exception is thrown, a query runs slowly, an unexpected endpoint is called, or another problem occurs within the application, it is possible to see which part of the process caused it.

By adding a sensor to a specific method that is part of the Oracle BPEL framework, we are able to follow the request through the different stages (Oracle BPEL terminology for one of the top layers of a flow):

Sensor configuration for stages

Once the sensor is added, the PurePath tree will contain all of the stages the request passes through, making it clear which stage is responsible for which part of the PurePath.

PurePath with stage information

If knowing which stage was executed is insufficient, it is possible to go even deeper, to every individual component of the flow. To get that additional insight, we need to instrument another method with a specific sensor.

Sensor configuration for components

After adding the sensor which captures the component names, we can see each individual part of the flow in the PurePath. If there is a slowdown, an exception, an architectural issue or one of many other problems, we can pinpoint it to a single component in just a couple of clicks. This provides insight similar to stepping through the flow during development, but for every single transaction in production.

PurePath with component information

Break open the black box and take control of your applications, whether they are hand-written or generated!

The post Inside the black box – Generated code appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Are your Mocks Mocking at You?

Testing TV - Wed, 02/22/2017 - 18:41
Ever since J.B. Rainsberger’s ‘integrated tests are a scam’, many developers try to get rid of their massively integrated tests and test their units in isolation. Co-operation of units is tested with mocks and stubs. But – depending on the language used – this mocking can be more or less trustworthy. I present a prototypical […]
Categories: Blogs

Software Testing in a Continuous Delivery World

Software Testing Magazine - Wed, 02/22/2017 - 18:14
A team that releases every commit needs to take software testing seriously. This talk explains what kinds of testing a team needs when working with Continuous Delivery. It will need to evolve new...

Categories: Communities

How to Track the Right User Experience Metrics

Testlio - Community of testers - Wed, 02/22/2017 - 13:35

User experience metrics aren’t just about conversions and retention. They show us behaviors, attitudes, emotions — even confusion.

That’s partially why UX metrics are so complex. They can be both subjective and objective, qualitative and quantitative, analytics-based and survey-based.

Like QA results, UX metrics are aimed at uncovering user experience issues before a customer brings them to your attention so you can adapt and improve your product.

Since 67% of consumers churn due to a bad customer experience, these complicated metrics are extremely valuable. In 2017, Gartner forecasts that a whopping 50% of product investment projects will focus on improving the customer experience.

Leading tech companies use UX metrics to make decisions that put the customer first. Here’s how:

Metrics that deserve your attention

Just as CTR and bounce rate are the peanut-butter-and-jelly of the digital marketing world, there are some basic UX metrics (and metric categories) that can make or break an app’s success.

Engagement

Analyzing how users are engaging with a product is an absolute must. Number of visits per user per day would be a critical engagement metric for a social media app, while number of tasks completed would be a better indicator for any kind of automation platform. Other companies may choose to measure engagement in terms of time. How long are users staying in-app or on-site?
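
To make the metric concrete, here is a minimal sketch of computing average visits per user per day from raw visit events; the (user, date) event format is a hypothetical stand-in for whatever your analytics pipeline emits:

    # engagement.py - average visits per user per day from a simple event log
    # (the (user_id, date) event format is hypothetical)
    from collections import Counter

    events = [
        ("alice", "2017-02-20"), ("alice", "2017-02-20"), ("alice", "2017-02-21"),
        ("bob",   "2017-02-20"), ("bob",   "2017-02-21"),
    ]

    # Count visits per (user, day) pair, then average across pairs
    visits_per_user_day = Counter(events)
    avg = sum(visits_per_user_day.values()) / len(visits_per_user_day)
    print(f"average visits per user per day: {avg:.2f}")  # 5 visits / 4 pairs = 1.25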

Efficiency

Time also comes into play when measuring efficiency. UX teams should know how long it takes to complete key tasks like:

  • Entering billing information
  • Customizing a profile
  • Filling out an in-app support ticket

The entire onboarding process must also be measured. How long does it take users to sign up and complete their first task?

UX teams can use this data to reduce the number of steps for a given task and simplify the design as much as possible.


Efficiency metrics can be especially insightful after pushing out a design update. Is there a dip in efficiency as users get accustomed to a new process for transferring money? Or does the change immediately speed up the task for users?

Performance

We rely on our smartphones every day — even when we don’t have our chargers nearby. If you’ve ever uninstalled an app because you realized it drained your battery life, then guess what — so have your customers.

Performance metrics like load speed and battery drainage are easy for teams to measure. Testing platforms can deliver behind-the-scenes data that really enhances the customer experience.

Usability

Engagement and efficiency are definitely part of usability, but let’s take a second to talk about use and user flows. Are users recognizing the app’s cues? Are they able to follow along quickly with walkthrough steps?

One example of poor menu usability is users routinely relying on search navigation because they can’t find what they’re looking for in the menu.

But usability metrics can be simpler than that. The task success rate is an important metric that shows how many users can achieve what they set out to do.


In 1986, the System Usability Scale (a 10-part questionnaire) was introduced, and it’s still used today. The survey can help support in-app analytics with real psychometric data for a more complete customer view.

Choosing the right UX metrics and understanding signals

Signals are often ambiguous. A long time spent in one feature may be a positive, while in another feature it’s a clear negative. Let’s say a lawyer is uploading files to a firm management platform. There should be a set goal for upload speeds across a range of file sizes.

If file uploads keep taking users longer than desired, it’s worth looking into.

But a media or entertainment app will see long session times as a good sign.

Because metrics mean something different to each product, no team should start with metrics. Instead, you must start with goals. Here’s the flow for tracking metrics strategically:

  1. Start with a goal
  2. Turn the goal into a signal
  3. Turn the signal into a metric


Follow this process for every goal (including important basics like performance or task error rate) to come up with the UX metrics that will have a real impact on your product.

Boosting customer satisfaction with combined metrics

The 10-part SUS survey is only the beginning when it comes to subjective UX data. Because user experience needs to incorporate emotions and satisfaction, more customized surveys are a must.

Without incentives like discounts or prize draws, pop-up surveys have an average response rate of <3% while email surveys have an average of <10%.

Making your survey as unobtrusive as possible helps raise the response rate.

There’s a reason why two-question, two-step surveys are so common inside apps. When users are asked multiple questions, they’ll click away. But if you ask them something simple like, “How likely are you to recommend us to a friend?” and provide a scale of 1 – 10, you’ll get more feedback and can then follow up with an additional question — which may or may not get answered.
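
As an aside, the usual way to turn answers to that recommendation question into a single number is the Net Promoter Score; a minimal sketch follows (note that NPS is conventionally computed on a 0–10 scale, counting 9–10 as promoters and 0–6 as detractors):

    # nps.py - Net Promoter Score from 0-10 survey answers
    # (promoters: 9-10, detractors: 0-6, per the standard NPS definition)
    def nps(scores):
        promoters  = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    answers = [10, 9, 8, 7, 6, 10, 3, 9]   # sample responses
    print(f"NPS: {nps(answers):.0f}")       # 4 promoters, 2 detractors -> 25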


UX analytics and survey data should be combined with QA results for a full view into the user experience. QA gives insight into what improvements should be tackled first. QA results can help explain the “why” behind low-performing metrics and help teams triage.

The most immediate goal for tracking UX metrics is to fix problems before customers complain, so integrating with QA is a no-brainer. A combined analysis (including UX metrics, customer surveys, customer support conversations, and QA data) can impact large, long-term product decisions.

User experience metrics are becoming more sophisticated as companies continue to invest in improving the customer experience. Advancements in tracking are currently aimed at uncovering not just interactions but perceptions. Undoubtedly, adoption and innovation with UX metrics are on the rise.

Testlio provides comprehensive software QA services with a global team of skilled testers who are focused on customer satisfaction. To increase the impact of QA for your team, get in touch with us.

The post How to Track the Right User Experience Metrics appeared first on Testlio.

Categories: Companies

How to Enable Enterprise DevOps with CD as a Service and Distributed Pipelines

This week, we launched CloudBees Jenkins Enterprise to enable enterprise-wide DevOps through CD as a Service.

Why is this important to you? Simply put, the results are in. Organizations which implement continuous delivery (CD) in support of enterprise-wide DevOps see significant improvement in release frequency, cycle time and mean time to recovery. More importantly, such improvements lead to a more agile, more responsive, more competitive overall business.

To successfully implement continuous delivery and DevOps in a large, mature enterprise, there are specific needs and obstacles which must be addressed. Let’s look at them:

  • Support for heterogeneous tools and practices to enable integration across the organization’s entire technology portfolio.
  • Resiliency and high availability to prevent disruptions in the delivery pipeline of business-critical applications.
  • Enterprise security and compliance capabilities to protect valuable intellectual property and ensure adherence to the organization’s established standards.
  • Ability to unify process across multiple disconnected silos so that teams and stakeholders can deliver software rapidly and repeatedly.
  • Scalability to support onboarding all of your teams in a stable, reliable environment.

Traditionally, meeting the requirement for scalability has been the biggest challenge. Much of this has to do with the way continuous delivery has been adopted and the nature of the available CD solutions.

CD and DevOps adoption has often begun within individual teams as grassroots efforts. The tools used for such grassroots implementations fall largely into two categories:

  • Lightweight, single-server web applications not architected for large-scale deployments.
  • Public SaaS solutions which are cloud-based but implemented on the same single-server model as the web applications.

These solutions present issues when growing CD from one team to an entire organization. Common problems are:

  • On a shared single-server instance, the increasing workload overwhelms the server and the result is downtime, slow builds, compromised data and broken pipelines.
  • As teams stand up their own instances, infrastructure costs increase. The ability to share practices is limited, and you have developers acting as tool admins.
  • Single server cloud instances address the infrastructure cost and reduce administration overhead but still suffer from disconnected teams and carry the risk of having critical data and processes off-premise, controlled by a third-party.

CloudBees Jenkins Enterprise enables you to scale without instability by implementing the only solution with a Distributed Pipeline Architecture (DPA). To better understand DPA, it helps to look at what happens when traditional solutions scale.

When we set up CD for a single team, things look good. We can deliver a single service through our CD pipeline with speed:

Distributed Pipeline Architecture 1

But as we add teams, instability of our CD server increases. Our speed decreases. We are unable to update business-critical services. Single server, single point of failure.

Distributed Pipeline Architecture 2

The elasticity of the Distributed Pipeline Architecture distributes teams’ CD workloads across multiple isolated servers, providing high levels of scalability. Now multiple teams using multiple pipelines can deliver multiple business-critical services reliably. Scaling with DPA enables speed AND stability.

Distributed Pipeline Architecture 3

Building on the scalability enabled by DPA, CloudBees Jenkins Enterprise supports enterprise-wide DevOps with other best-in-class features:

  • Integration of all of your tools and processes – Leverages the vast ecosystem of 1,200+ Jenkins integrations; the CloudBees Assurance Program curates and verifies the top ones.
  • Reduced infrastructure costs – Dynamically allocates appropriate resources, providing high-density, efficient use of infrastructure.
  • Secure project isolation – Each team, project or application can have its own execution environment, keeping projects and data fully secured and isolated.
  • Fault tolerant and self-healing – Build services that have stopped are detected and restarted automatically.
  • Business continuity - CloudBees Jenkins Enterprise automatically handles real-time backup of the entire platform and fully automates the recovery process.
  • Centralized management - All management activities can be performed centrally thanks to CloudBees Jenkins Operation Center, providing a very low cost of ownership.

CloudBees Jenkins Enterprise Architecture Graphic

The launch of CloudBees Jenkins Enterprise is important to you because enterprise DevOps built on the practice of continuous delivery is how you remain competitive in today’s market. To do this you require the scalability, security, manageability and resiliency provided by CloudBees Jenkins Enterprise and its Distributed Pipeline Architecture. Deploy CD as a Service in minutes on your existing infrastructure.

Brian Dawson
DevOps Dude and Jenkins Marketing Manager
CloudBees

Blog Categories: Company News
Categories: Companies

The myth of “mission-critical”: Irrational thinking in modern IT management

I was reading an article today that discusses managing “mission-critical” applications. I really dislike that term. It’s trite, it’s dated – even nonsensical. It suggests that applications fall into two groups – mission-critical, and…optional? marginal? unnecessary? one step away from being voted off the island?

Here’s the fallacy with that view – people who run IT organizations are smart, and they invest in stuff that matters to the business. They excel at cost-efficiency, so they don’t run apps that don’t provide value. So the notion that relatively few apps are actually worth managing is illogical. Even email, the poster child for apps at the bottom of the food chain, is essential to the operation of a 21st-century organization – it’s how they stay organized.

Which is why it is surprising to note that, according to industry analysts, the majority of enterprises manage fewer than 25% of their apps. I acknowledge that not all apps are created equal – some have relatively greater value than others. But surely no one would argue that the next most important 10% to 20% of apps don’t merit being managed.

So – why invest in something because it’s important to your business, but stop short of the incremental investment needed to ensure it works well? It seems irrational. Would a trucking company not monitor the oil level in their trucks, or a grocer the temperature of their freezers? At some point intervention will be required, and when not addressed on a timely basis, small problems can become big, costly ones, and business operations can be seriously disrupted.

Sure, these are not perfect metaphors, but I think my point is obvious, and obviously valid. Generally, when assets are important to the successful operation of the business, organizations invest to ensure that they keep operating effectively.

Unless they are software applications. So while people who run IT are smart, in this respect their behavior seems irrational. Attitudes towards application management have always seemed a bit wonky to me. Rather than being seen as additive to app value, application management has been viewed as unwanted overhead, even deleterious. Put another way, the stuff that would make apps go well has often been viewed as detracting from simply making them go.

But is the lack of APM investment irrational? On balance, service delivery is pretty darned good most of the time for most apps, or at least “good enough.” Naturally, problems occur in modern, complex IT environments. So organizations invest in management technology to minimize risk and impact for the most important apps, and handle everything else as well as humanly possible. That is the status quo that seems to work reasonably well, except when it doesn’t, sometimes with serious business impact. And, in those cases, you wrestle the problem to the ground, ask “What are the chances of THAT happening again!”, and return to the status quo until it happens again.

I believe nearly all IT professionals would say that problems are inevitable, including serious business-impacting ones. So I am back to thinking that this doesn’t make good sense. Why would you not make the incremental investment in APM (for more than 25% of your apps) to reduce the incidence and impact of these inevitable events?

I can think of a few potential reasons. It may be difficult to quantify the business risk as input to a cost-justification. It may be difficult to prioritize what applications to invest in, which impedes setting technical criteria for a solution. There may be a diverse set of stakeholders with competing priorities.  Any of these challenges makes it difficult to pick a strategy and move forward with it.

But in my view, none of these is a good enough reason to settle for the current status quo. There’s a bigger picture here. Those inevitable problems are affecting your business every day. Investments in apps aren’t moving in the right direction with regard to your company’s strategic customer experience focus and commitment to digital transformation. Apps are important – managing them cannot be viewed as optional.

Dynatrace has redefined monitoring to establish a new status quo way better than “good enough” for way more than 25% of your apps, regardless of their technology, and for all your stakeholders.  And that makes very good sense.

The post The myth of “mission-critical”: Irrational thinking in modern IT management appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Using Nexus 3 as Your Repository – Part 1: Maven Artifacts

Sonatype Blog - Tue, 02/21/2017 - 18:17
This article is the first in a three-part series by one of our community advocates, Rafael Eyng. You can follow his work at CodeHeaven.io

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Modern Test Management: People First

Software Testing Magazine - Tue, 02/21/2017 - 18:03
Since the publication of Peopleware by Tom DeMarco and Tim Lister in 1987, the importance of people in the success of software development projects cannot be overstated. This is also true in the software testing domain. In this article, Anna Royzman discusses some of the essential skills of modern software testers that managers should nurture and develop. Author: Anna Royzman, Test Masters Academy, http://testmastersacademy.org/ Software testing as a profession is undergoing a lot of change in 2017. The job is becoming more technical, and there is a greater need for good communication skills, higher requirements for cultural fit and overall subject matter expertise. If your job as a manager involves supervising software testers, you need to make sure that they fit well into the changing landscape of software delivery environments. You will have to create opportunities for your software testing talents to grow and excel. When I coach software test managers and leads on how to nurture software testers in their teams, I focus on three talent development vectors: strategy, communication and subject matter expertise. Strategy skills involve the following areas:

  • Know and leverage testing techniques and methodologies
  • Build effective test coverage
  • Adjust testing strategy to changing priorities
  • Learn how to evaluate and use software testing resources like people skills, tools, etc.

Communication skills would include:

  • Ability to explain what testers do and where they can add most value
  • Communicating the status of their work in such a way that stakeholders can evaluate project risks
  • Be an effective quality advocate
  • Make connections with [...]
Categories: Communities

Introducing PHP-FPM monitoring (beta)

We’re happy to announce the beta release of Dynatrace PHP-FPM monitoring! Dynatrace PHP-FPM monitoring provides information about connections, slow requests, and processes. Now you’ll know immediately if your PHP-FPM is underperforming. And when problems occur, it’s easy to see which hosts are affected.

To view PHP-FPM monitoring insights

  1. Click Technologies in the navigation menu.
  2. Click the PHP tile.
  3. To view cluster metrics, expand the Details section of the PHP-FPM process group.
  4. Click the Process group details button.
  5. On the Process group details page, select the Technology-specific metrics tab.
  6. Select a relevant time interval from the Time frame selector in the top menu bar.
  7. Select a metric type from the metric drop list beneath the timeline to compare the values of all nodes in a sortable table view.
  8. To access node-specific metrics, select a node from the Process list at the bottom of the page.
  9. Click the PHP-FPM tab. Here you’ll find the number of Accepted connections (connections accepted by the pool) and the Slow requests count. Please note that the Accepted connections measure is sometimes misunderstood to represent the number of requests. This metric measures exactly what its name suggests—the number of accepted connections.
Additional PHP-FPM node monitoring metrics

More PHP-FPM monitoring metrics are available on individual Process pages. Select the Further details tab to view these metrics.

Additional PHP-FPM metrics

Here you’ll find additional PHP-FPM charts for Requests, Input buffering, and Processes.

Additional PHP-FPM metrics

When the number of total active processes reaches the Total processes limit, new scripts are prevented from running until the problematic processes have completed. The maximum number of Waiting connections defines the maximum number of connections that will be queued. Once this limit is reached, subsequent connections are refused or ignored.
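
Since the status page must be enabled anyway (see the prerequisites below), these limits can also be spot-checked directly. A minimal sketch, assuming the status page is exposed at /status and queried with its JSON output option; the key names follow the status page’s usual output but should be verified against your PHP-FPM version:

    # fpm_watch.py - poll the PHP-FPM status page and warn on queue growth
    # (host and path are hypothetical; ?json selects the status page's JSON format)
    import json
    import urllib.request

    STATUS_URL = "http://localhost/status?json"

    with urllib.request.urlopen(STATUS_URL) as resp:
        status = json.load(resp)

    # Key names as commonly reported by the PHP-FPM status page
    queued = status.get("listen queue", 0)
    max_q  = status.get("max listen queue", 0)
    active = status.get("active processes", 0)
    total  = status.get("total processes", 0)

    print(f"active/total processes: {active}/{total}")
    if queued > 0:
        print(f"warning: {queued} connections waiting (max seen: {max_q})")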

PHP-FPM metrics

  • Accepted connections – The number of connections accepted by the pool
  • Slow requests – The number of requests that have exceeded the request_slowlog_timeout value
  • Waiting connections – The number of requests in the queue of pending connections
  • Max number of waiting connections – The size of the pending connections socket queue
  • Active processes – The number of active processes
  • Total processes – The number of idle + active processes

Prerequisites
  • Linux OS or Windows
  • PHP version 5.5.9+
  • PHP-FPM Status Page must be enabled on all nodes you want to monitor.
Enable PHP-FPM monitoring globally

With PHP-FPM monitoring enabled globally, Dynatrace automatically collects PHP-FPM metrics whenever a new host running PHP-FPM is detected in your environment.

To monitor more than one pool, type the URIs of the individual PHP-FPM status pages (separated by spaces) into the Status page URI field. All PHP-FPM instances must have a correct status page URI reference.

  1. Go to Settings > Monitoring > Monitored technologies.
  2. Set the PHP-FPM switch to On.
  3. Click the ^ button to expand the details of the PHP-FPM integration.
  4. Define the status page URI(s).
  5. Click Save.
Enable PHP-FPM monitoring for individual hosts

Dynatrace provides the option of enabling PHP-FPM monitoring for specific hosts rather than globally.

  1. If global PHP-FPM monitoring is currently enabled, disable it by going to Settings > Monitoring > Monitored technologies and setting the PHP-FPM switch to Off.
  2. Select Hosts in the navigation menu.
  3. Select the host you want to configure.
  4. Click Edit.
  5. Set the PHP-FPM switch to On.
Have feedback?

Your feedback about Dynatrace PHP-FPM monitoring is most welcome! Let us know what you think of the new PHP-FPM plugin by adding a comment below. Or post your questions and feedback to Dynatrace Answers.

Visit our dedicated webpage about PHP monitoring to see how Dynatrace supports PHP.

The post Introducing PHP-FPM monitoring (beta) appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Automatically fetch & leverage process-group attributes via Dynatrace API

The Dynatrace API can now be used to seamlessly integrate the process-group attributes that are discovered by Dynatrace OneAgent—for example, technology overview and topology details—into your existing reporting and operations processes. Process-group properties returned by the API can be leveraged in numerous ways depending on the needs of your DevOps teams.

Leverage technology overview information

The Dynatrace Technology overview presents all of the process-group technology-related information that is detected by Dynatrace OneAgent in your environment. Process group instances are grouped into technology-specific tiles.

To access the Technology overview, click Technologies in the navigation menu. All of this information can now be fetched automatically and utilized within your existing tools and processes!

Use cases for technology-overview data

While your organization can utilize technology-overview data in any way that supports your existing workflows, one use case to consider is the automatic retrieval of topology information for configuration management efforts. For example, real-time topological relationships and dependencies between the components in your environment can be retrieved automatically and used to populate an ITIL CMDB database.

Or, your DevOps teams might create scripts that automatically check and fetch the log files of all available process groups.

To query process-group information with the Dynatrace API, simply call an HTTP GET request on the following Dynatrace endpoint:

https://<YOUR_ENV>.live.dynatrace.com/api/v1/entity/infrastructure/process-groups/?Api-Token=<YOUR_API_TOKEN>

For Dynatrace Managed installations, process-group information can be retrieved using a slightly modified REST endpoint:

https://<YOUR_OWN_DOMAIN>/e/<YOUR_ENV>/api/v1/entity/infrastructure/process-groups/?Api-Token=<YOUR_API_TOKEN>

The resulting JSON payload lists all of the monitored process groups in your environment.

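As a quick illustration, a minimal sketch of such a script in Python using the requests library; the environment ID and token are the placeholders from the URLs above, and the printed field names are assumptions about the payload:

    # fetch_process_groups.py - list monitored process groups via the Dynatrace API
    # (environment ID and API token are placeholders; adjust the URL for Managed)
    import requests

    ENV   = "<YOUR_ENV>"
    TOKEN = "<YOUR_API_TOKEN>"
    url = f"https://{ENV}.live.dynatrace.com/api/v1/entity/infrastructure/process-groups/"

    resp = requests.get(url, params={"Api-Token": TOKEN})
    resp.raise_for_status()

    for pg in resp.json():
        # Field names here are assumptions about the JSON payload
        print(pg.get("displayName"), "-", pg.get("entityId"))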

Get started with the new API endpoint

To get started writing your own scripts and leveraging the new Dynatrace API process-group endpoint, please have a look at the Dynatrace API documentation.

The post Automatically fetch & leverage process-group attributes via Dynatrace API appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Data Synergy scripting to improve load testing: dissect the anatomy of a script

HP LoadRunner and Performance Center Blog - Tue, 02/21/2017 - 12:04

In our last blog we used the Call Flow Designer to design a test call flow and create a test script using the Data Synergy Voice plugin. Now it’s time to learn more about the anatomy of a complex script.

Categories: Companies

Use "Golden Image" to test Big Ball Of Mud software systems

Chris McMahon's Blog - Tue, 02/21/2017 - 01:42

So I had a brief conversation on Twitter with Noah Sussman about testing a software system designed as a "Big Ball Of Mud" (BBOM).

We could talk about the technical definition of BBOM, but in practical terms a BBOM is a system where we understand and expect that changing one part of the system is likely to cause unknown and unexpected results in other, unrelated parts of the system. Such systems are notoriously difficult to test, but I tested them long ago in my career, and I was surprised that Noah hadn’t encountered this approach of using a “Golden Image” to accomplish that.

Let's assume that we're creating an automated system here. Every part of the procedure I describe can be automated.

First you need some tests. And you’ll need a test environment. BBOM systems come in many different flavors, so I won’t specify a test environment too closely. It might be a clone of the production system, or a version of prod with less data. It might be something different than that.

Then you need to be able to make a more-or-less exact copy of your test environment. This may mean putting your system on a VM or a Docker image, or it may be a matter of simply copying files. However you accomplish it, you need to be able to make faithful "Golden Image" copies of your test environment at a particular point in time.

Now you are ready to do some serious testing of a BBOM system using Golden Images (a rough sketch of an automated version follows the steps below):

Step One: Your test environment right now is your Golden Image. Make a copy of your Golden Image.

Step Two: Install the software to be tested on the copy of your Golden Image. Run your tests. If your tests pass, deploy the changes to production. Check to make sure that you don't have to roll back any of the production changes. If your tests fail or if your changes to production get rolled back, go back to Step One.

Step Three: the copy of your first Golden Image with the successful changes is your new Golden Image. You may or may not want to discard the now-obsolete original Golden Image; see Step Five below.

Step Four: Add more tests for the system. Repeat the procedure at Step One.

Step Five (optional) You may want to be able to compare aspects of a current Golden Image test environment with previous versions of the Golden Image. Differences in things like test output behavior, file sizes, etc. may be useful information in your testing practice.
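
Since every step above can be automated, here is one rough sketch of what the loop might look like, assuming Docker images serve as the Golden Image mechanism; every image tag, Dockerfile, and command here is hypothetical:

    # golden_image.py - rough sketch of an automated Golden Image loop using
    # Docker (image tags, Dockerfile, test and deploy commands are hypothetical)
    import subprocess

    GOLDEN = "test-env:golden"        # the current Golden Image
    CANDIDATE = "test-env:candidate"  # Golden Image copy + changes under test

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Steps One and Two: copy the Golden Image by building the changes on top
    # of it (Dockerfile.changes is assumed to start with: FROM test-env:golden),
    # then run the test suite inside the candidate environment.
    run("docker", "build", "-t", CANDIDATE, "-f", "Dockerfile.changes", ".")
    tests = subprocess.run(["docker", "run", "--rm", CANDIDATE, "run-tests"])

    if tests.returncode == 0:
        run("./deploy-to-production.sh")      # hypothetical deploy/rollback check
        # Step Three: the candidate becomes the new Golden Image
        run("docker", "tag", CANDIDATE, GOLDEN)
    else:
        print("Tests failed - keeping the current Golden Image (back to Step One)")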

Categories: Blogs

Help Linnea

“There is a saying that it takes a whole village to raise a child. Now we need a whole village to save our Linnea”

Linnea, Kristoffer Nordström’s daughter, is five and a half years old and comes from Karlskrona in Sweden. Her world revolved, up until recently, around My Little Ponies, riding her bicycle and popcorn… lots of popcorn. She has one best friend: her beloved big brother Kristian.
That was her world – until a few months ago, when she suddenly and shockingly fell ill and underwent emergency surgery for a brain tumor.
After the operation, we hoped that the bad news would end. But now the family lives in the hospital and has been told that the tumor is an aggressive variety called DIPG (Diffuse Intrinsic Pontine Glioma). The short story is that there is a heart-breakingly minimal chance of survival using established treatments.

There is a possible treatment that we are now aiming for: one in which the tumor is treated through catheters implanted directly into it. Studies and reports show that such a direct treatment gives Linnea the best chance of one day becoming healthy. The cost of the treatment and the journeys is very high – higher than the average person can pay: £65,000 for the first operation and then £6,500 for each treatment thereafter. In the current situation, it is unclear how many of these Linnea will need.

Please help Kristoffer and his family!

Categories: Blogs

System Hardening with Ansible

Sonatype Blog - Mon, 02/20/2017 - 15:00
The DevOps pipeline is constantly changing.  Therefore relevant security controls must be applied contextually. We want to be secure, but I think all of us would rather spend our time developing and deploying software. Keeping up with server updates and all of the other security tasks is...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

IQNITE Europe, Cologne, Germany, April 24-26 2017

Software Testing Magazine - Mon, 02/20/2017 - 09:00
IQNITE Europe is a three-day conference taking place in Cologne (Koeln) Germany and focusing on all the aspects of software testing and software quality. All the presentations are in German. It features one leadership day and two days of talks. In the agenda of the IQNITE Europe conference you can find topics like “Cross-functional Teams: Best practices for Better Quality”, “Agile Transformation@Organisation”, “Adaptive Quality Management”, “Unleash the Power of your People”, “Turn Specs into High Quality Apps”, “Are you only just a test manager?”, “Blameless Post-Mortem”, “Continuous Security Testing: A Practical View”, “Mobile Testing @ XING – Does The Release Train Arrive On Schedule?”, “Building and Operating a Successful Test Centre of Excellence”, “Test Center of Excellence, bring it to the next level with ITIL”, “Next Generation QA – How Can We Forecast the Quality Assurance of Tomorrow”, “Are You Still Testing or Are You Already Playing?”, “360 Test Coverage of Mobile Applications”, “Economics of the Test Factory”, “A field report on full automated Pairwise Testing”, “Beta Testing Community – Blessing or Curse?”. Web site: http://www.iqnite-conferences.com Location for IQNITE Europe: Congress-Centrum Nord Koelnmesse, Deutz-Mülheimer-Strasse 111, 50679 Koeln, Germany
Categories: Communities

Before Testing

Hiccupps - James Thomas - Mon, 02/20/2017 - 07:25

I happened across Why testers? by Joel Spolsky at the weekend. Written back in 2010, and - if we're being sceptical - perhaps a kind of honeytrap for Fog Creek's tester recruitment process, it has some memorable lines, including:
what testers are supposed to do ... is evaluate new code, find the good things, find the bad things, and give positive and negative reinforcement to the developers.

Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares.

you really need very smart people as testers, even if they don’t have relevant experience. Many of the best testers I’ve worked with didn’t even realize they wanted to be testers until someone offered them the job.

The job advert that the post points at is still there and reinforces the focus on testing as a service to developers and the sentiments about feedback, although it looks like, these days, they do require test experience.

It's common to hear testers say that they "fell into testing" and I've offered jobs to, and actually managed to recruit from, non-tester roles. On the back of reading Spolsky's blog I tweeted this:
“#Testers, one tweet please. What did you do before testing? What’s the most significant difference (in any respect) between that and now?” — James Thomas (@qahiccupps) February 18, 2017

And, while it’s a biased and also self-selected sample (limited to those who happen to be close enough to me in the Twitter network, those who happened to see it in their timeline, and those who cared to respond) which has no statistical validity, I enjoyed reading the responses and wondering about patterns.

Please feel free to add your own story about the years BT (Before Testing) to either the thread or the comments here.
Image: https://flic.kr/p/rgXeNz
Categories: Blogs
