Feed aggregator

How We Test Software: Chapter Two—Telerik Platform

Telerik TestStudio - Fri, 01/27/2017 - 13:34
Have you wondered how the teams working on Telerik products test software? We continue with the next chapter in our detailed guide, giving you deeper insight into our very own processes. This chapter focuses on Telerik Platform. — Angel Tsvetkov, 2016-04-11
Categories: Companies

Let the New Test Studio Turn You Into a Mobile Testing Hero

Telerik TestStudio - Fri, 01/27/2017 - 13:34
The first major Test Studio product update for this year is now live. Go grab your free evaluation copy, and keep on reading to learn what’s new, including enriched mobile testing with support for web apps and more. — Antonia Bozhkova, 2016-04-01
Categories: Companies

How Our Telerik Teams Test Software: Chapter One

Telerik TestStudio - Fri, 01/27/2017 - 13:34
Have you wondered how the teams working on Telerik products test software? Today we launch a detailed guide, giving you deeper insight into our very own processes. — Daniel Djambov, 2016-03-22
Categories: Companies

Telerik Test Studio and Selenium

Telerik TestStudio - Fri, 01/27/2017 - 13:34
Mixing different frameworks into one automated testing solution can lead to powerful results. Test Studio and Selenium are two frameworks that complement each other well. — Iliyan Panchev, 2016-03-18
Categories: Companies

Product Notifications in Test Studio

Telerik TestStudio - Fri, 01/27/2017 - 13:34
With Product Notifications, Test Studio becomes easier to use than ever, helping you discover new functionality and save time. — Konstantin Petkov, 2016-02-11
Categories: Companies

Looking Back and Ahead as We Recap the Test Studio Webinar

Telerik TestStudio - Fri, 01/27/2017 - 13:34
The latest updates to Telerik Test Studio have arrived. We recap what's new and upcoming here, which we also recently discussed in our webinar. — Iliyan Panchev, 2016-02-09
Categories: Companies

How to create a generation 2 virtual machine using PowerShell

Testing tools Blog - Mayank Srivastava - Fri, 01/27/2017 - 11:53
The script below creates a virtual machine in Hyper-V using PowerShell: open a PowerShell window as admin, paste the script, and hit Enter. The VM will be created; open Hyper-V Manager and verify it. # Set VM name, switch name, and installation media path. $VMName = 'TestVM' # A switch is nothing but a network adapter $Switch = […]
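Since the excerpt above is truncated, here is a rough sketch of what a complete generation 2 VM script typically looks like. The names, paths, and sizes are example values, and it assumes an existing virtual switch and an elevated PowerShell session on a Hyper-V host:

```powershell
# Example values -- adjust the name, switch, paths, and sizes for your host.
$VMName = 'TestVM'
$Switch = 'ExternalSwitch'            # an existing Hyper-V virtual switch
$InstallMedia = 'C:\ISO\install.iso'  # path to the OS installation ISO

# Create a generation 2 VM with a new VHDX, connected to the switch
New-VM -Name $VMName -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\VMs\$VMName\$VMName.vhdx" -NewVHDSizeBytes 40GB `
    -Path 'C:\VMs' -SwitchName $Switch

# Attach the installation media and make it the first boot device
Add-VMDvdDrive -VMName $VMName -Path $InstallMedia
Set-VMFirmware -VMName $VMName -FirstBootDevice (Get-VMDvdDrive -VMName $VMName)

Start-VM -Name $VMName
```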
Categories: Blogs

Blue Ocean Dev Log: January Week #4

As we get closer to Blue Ocean 1.0, which is planned for the end of March, I have started highlighting some of the good stuff that has been going on. This week was 10 steps forward, and about 1.5 backwards... There were two releases this week, b19 and b20. Unfortunately, b20 had to be released shortly after b19 hit the Update Center, because an incompatible API change in a 3rd-party plugin was discovered. Regardless, the latest b20 has a lot of important improvements, and some very nice new features. A first cut of the "Create Pipeline" UX, seen above, allowing you to create Git-based Multibranch Pipelines like you have never...
Categories: Open Source

Notes from the financial services trenches: Know your card payments & users

If you are responsible for marketing digital products in the financial services industry, or any industry, you are undoubtedly interested in reducing the impact of card payment failures on your operations and your business, all while enhancing the user experience. But what indicators should you be monitoring?

I work with many customers from the general insurance industry, and it’s no secret that their product sales are increasingly generated from online platforms rather than through traditional paper-based methods. While this provides immense convenience and better efficiency to both customers and insurance companies, the critical issue is users’ ability to make payments successfully. Fail to ensure this, and your business may be negatively impacted in three main areas:

  1. Unhappy user ends up purchasing policy from competitor
  2. Increased operations costs (identifying a failed payment doubles operations costs)
  3. Loss of recurring revenue (e.g. once a user buys a life / motor insurance they are likely to stick to it for multiple years with the same provider)

Typically, when one thinks about adding a payment feature to an online service, for example the purchase of life insurance, the picture that comes to mind is integrating payment modes, such as VISA, MasterCard, AMEX, PayPal and Discover, into the final payment page, and voila, the job’s done. Now you can sit back and relax while the digital cash register rings.

However, in this article I will not be talking about how payment gateways work (merchant, issuer, acquirer, open network, closed network and so on) but more importantly, I want to highlight what you should be looking out for once integration is completed and how to prepare for the unknown if payment failures do occur.

Prevention is always better than cure:

  1. Is the payment gateway connection up or down?
  2. Don’t just rely on success or failure responses from payment gateway (the lifecycle of card payments is, unfortunately, not that simple). What are the possible success / failure responses of the payment gateway?
    1. Transaction type
    2. Transaction status
    3. Error message e.g. insufficient funds, invalid card number, invalid cvv, etc.
  3. Correlate the user ID with failed payments. If you have this in place, you can proactively reach out to the user in the event of a failure and help to complete the transaction.
  4. What are the mandatory fields that the issuing bank requires while validating user, such as user name, cvv and expiry date? You’d be surprised to know that not all fields are mandatory, depending on the issuing bank.
  5. Check whether the payment gateway sets a maximum limit on the amount a user is allowed to pay. The reason for a failed payment can be something as simple as the user entering the wrong expiry date, or something as bizarre as a maximum amount limit set by the payment gateway. What if a payment is marked as “failed” because the transaction amount is more than, for example, $1,000? (Note that this is not the same as the credit limit when paying by credit card, or the available bank balance of a debit card.)
  6. Campaign performance: a simple trending chart of interest to sales and marketing. Does your payment volume correspond to online / offline campaign timing?
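To make pointers 2 and 3 concrete, here is a minimal Python sketch of classifying gateway responses and correlating failures with user IDs. The field names and error codes are illustrative placeholders, not any particular gateway's actual schema:

```python
# Classify gateway responses and group failures by user so support can
# proactively reach out. Field names and codes below are illustrative;
# every real gateway has its own response schema.

RETRYABLE = {"gateway_timeout", "issuer_unavailable"}
USER_FIXABLE = {"insufficient_funds", "invalid_card_number", "invalid_cvv",
                "expired_card", "amount_over_gateway_limit"}

def classify(response):
    """Map a raw gateway response to an action for the ops dashboard."""
    if response.get("transaction_status") == "success":
        return "ok"
    error = response.get("error_code", "unknown")
    if error in RETRYABLE:
        return "retry"
    if error in USER_FIXABLE:
        return "contact_user"
    return "investigate"

def failed_payments_by_user(events):
    """Group failed payment events by user ID for proactive follow-up."""
    failures = {}
    for e in events:
        if classify(e["response"]) in ("contact_user", "investigate"):
            failures.setdefault(e["user_id"], []).append(
                e["response"].get("error_code"))
    return failures
```

With this in place, the "contact_user" bucket feeds the proactive outreach from pointer 3, while "retry" and "investigate" drive alerts for operations.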

Armed with these pointers, digital product owners can set up dashboards, reports and alerts before a product launch to achieve a better online user experience.

The post Notes from the financial services trenches: Know your card payments & users appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Continuous Performance Testing Using Jenkins CI / CD Pipelines with LoadRunner

HP LoadRunner and Performance Center Blog - Thu, 01/26/2017 - 13:48


Learn how to create your first Pipeline script in Jenkins using the HPE Jenkins plugin that runs LoadRunner tests and lets you integrate them into your current build pipeline. 

Categories: Companies

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 01/26/2017 - 08:00

This month's Lean Coffee was hosted by Redgate. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

Why 'test' rather than 'prove'?
  • The questioner had been asked by a developer why he wasn't proving that the software worked.
  • What is meant by proof? (The developer wanted to know that "it worked as expected")
  • What is meant by works?
  • What would constitute proof for the developer in this situation?
  • Do we tend to think of proof in an absolutist, mathematical sense, where axioms, assumptions, deductions and so on are deployed to make a general statement?
  • ... remember that a different set of axioms and assumptions can lead to different conclusions.
  • In this view, is proof 100% confidence?
  • There is formal research in proving correctness of software.
  • In the court system, we might have proof beyond reasonable doubt 
  • ... which seems to admit less than 100% confidence. 
  • Are we happier with that?
  • But still the question is proof of what?
  • We will prioritise testing 
  • ... and so not everything will be covered (if it even could be)
  • ... and so there can't be empirical evidence for those parts.
  • How long is enough when trying to prove that a program never crashes?

What would you be if not a tester? What skills cross over?
  • One of us started as a scientist and then went into teaching. Crossover skills: experimentalism, feedback, reading people, communicating with people, getting the best out of people.
  • One of us is a career tester who nearly went into forensic computing. Crossover skills: exploration, detail, analysis, presentation.
  • One of us was a developer and feels (he thinks) more sympathetic to developers as a result.
  • One of us is not a tester (Boo!) and is in fact a developer (Boo! Hiss! etc)
  • I am a test manager. For me, the testing mindset crosses over entirely. In interactions with people, projects, software, tools, and myself I set up hypotheses, act, observe the results, interpret, reflect.
  • ... is there an ethical question around effectively experimenting "on" others e.g. when trying some approach to coaching?
  • ... I think so, yes, but I try to be open about it - and I experiment with how open, when I say what I'm doing etc.

Pair Testing. Good or Bad?
  • The question starts from Katrina Clokie's pairing experiment.
  • The questioner's company is using pairing within the test team and finds it brings efficiency, better procedures, and skill transfer.
  • It's good not to feel bad about asking for help or to work with someone.
  • It helps to create and strengthen bonds with colleagues.
  • It breaks out of the monotony of the day.
  • It forces you to explain your thinking.
  • It allows you to question things.
  • It can quickly surface issues, and shallow agreements.
  • It can help you to deal with rejection of your ideas.
  • Could you get similar benefits without formal pairing?
  • Yes, to some extent.
  • What do we mean by pairing anyway?
  • ... Be at the same machine, working on something.
  • ... Not simply a demonstration.
  • ... Change who is driving during the session.
  • We have arranged pairing of new hires with everyone else in the test team.
  • ... and we want to keep those communication channels open.
  • We are just embarking on a pairing experiment based on Katrina's.

Reading Recommendations.
  • Edward Tufte, Beautiful Evidence, Envisioning Information: data analysis is about comparison so find ways to make comparisons easier, more productive etc on your data.
  • Michael Lopp, Managing Humans: we all have to deal with other people much of the time.
  • Amir Alexander, Infinitesimal: experimental vs theoretical (perhaps related to the idea of proof we discussed earlier).
Edit: Karo Stoltzenburg has blogged about the same session and Sneha Bhat wrote up notes from her group too.
Categories: Blogs

Listens Learned in Software Testing

Hiccupps - James Thomas - Wed, 01/25/2017 - 22:50

I'm enjoying the way the Test team book club at Linguamatics has been using our reading material as a jumping-off point for discussion about our experiences, about testing theory, and about other writing. This week's book was a classic, Lessons Learned in Software Testing, and we set ourselves the goal of each finding three lessons from the book that we found interesting for some reason, and making up one lesson of our own to share.

Although Lessons Learned was the first book I bought when I became a tester, in recent times I have been more consciously influenced by Jerry Weinberg. So I was interested to see how my opinions compare to the hard-won suggestions of Kaner, Bach and Pettichord.

For this exercise I chose to focus on Chapter 9, Managing the Testing Group. There are 35 lessons here and, to the extent that it's possible to say with accuracy given the multifaceted recommendations in many of them, I reckon there's probably only a handful that I don't practice to some extent.

One example: When recruiting, ask for work samples (Lesson 243). This advice is not about materials produced during the interview process. It is specifically about bug reports, code, white papers and so on from previous employments or open source projects. I can't think of an occasion when I've asked for that kind of thing from a tester.

Of the 30 or so lessons that I do recognise in my practices, here are three that speak to me particularly today: Help new testers succeed (Lesson 220), Evaluate your staff as executives (Lesson 213), The morale of your staff is an important asset (Lesson 226).

Why these? Well, on another day it might be three others - that's what I've found with this book over the years - but in the last 12 months or so our team has grown by 50% and we've introduced a line management structure. This triplet speaks to each of those changes, and to the team as a whole as we work through them.

When bringing new people into our team we have evolved an induction process which has them pair with everyone else on the team at least a couple of times, provides a nominated "first point of contact", a checklist of stuff to cover in each of the first days, weeks and months, a bunch of introductions to the Test team and company (from within the team) and to other parts of the company (from friendly folk in other groups). We also have the new person present to the team informally about themselves and their testing, to help us to get to know them, develop some empathy for them, to give them some confidence about talking in front of the established staff. Part of the induction process is to provide feedback on the induction process (this is not Fight Club!) which we use to tune the next round.

Until this year, I have conducted all annual reviews for all of the testers in my team. This affords me the luxury of not having to have externalised the way in which I do it. That's not to say that I haven't thought about it (I have, believe me, a lot) or that I haven't evolved (again, I have) but more that I haven't felt the need to formally document it. Now that there are other line managers in the team I have begun that process by trying to write it down (almost always a valuable exercise for me) and explaining it verbally (likewise; particularly when doing it to a bunch of intelligent, questioning testers).

How to assess the performance and development needs of your team (fairly) is tricky. The notion of executive in Lesson 213 is from Drucker - "someone who manages the value of her own time and affects the ability of the organisation to perform" as Lesson 211 puts it - and essentially cautions against simple metrics for performance in favour of a wide spread of assessments, at least some of which are qualitative, that happen regularly and frequently. It recommends paying attention to your staff, but contrasts this with micromanagement.

Past experience tells me that there is almost never consensus on a course of action within my teams and I rarely get complete agreement on the value of an outcome either. However, it's important to me to be as inclusive and open and transparent as possible about what I'm doing, in part because that's my philosophical standpoint but also because I think that it contributes to team morale, and that is crucial to me (Lesson 226) because I think a happy team is in a position to do their best work.

When planning and going through the kind of growth and restructuring that our team has in the last 12 months, morale was one of my major concerns - of the new hires, of the existing team members, and also of myself. It's my intuition that important aspects of building and maintaining morale in this kind of situation include providing access to information, opportunity to contribute, and understanding of the motivation.

I didn't want anyone to feel that changes were just dropped on them, that they weren't aware that changes were coming, that they had no agency, that they hadn't had chance to ask questions or express their preferences or suggestions, and that I hadn't tried to explain what options were being considered and why I made the particular changes that I did, and what my concerns about it are, and that my full support is available to anyone who wants or needs it.

The lesson of my own that I chose to present is one I've come back to numerous times over the years:

  If you can, listen first.

Listening here is really a proxy for the intake of information through any channel, although listening itself is a particularly important skill in person-to-person interactions. Noticing the non-verbal cues that come along with those conversations is also important, and likewise remembering that the words as said are not necessarily the words as meant.

Since reading What Did You Say? I have become much more circumspect about offering feedback. Feedback is certainly part of the management role - any manager who is not willing to give it when requested is likely not supporting their staff to the fullest extent they could - but feeling that it's the manager's role to dispense feedback at (their own) will is something I've come to reject.

These days I try to practice congruent management and a substantial part of that is that it requires understanding - of the other person, the context and yourself. Getting data on the first two is important to aid understanding, and listening - really listening, not just not speaking - is a great data-gathering tactic. The Mom Test, which I read recently, makes essentially the same point over and over and over. And over.

Listening isn't always pleasant - I think, for example, of a 30-minute enumeration of things a colleague felt I hadn't done well - but I try to remember that the other person - if being honest - is expressing a legitimate perspective - theirs - and understanding it and where it comes from - for them - is likely to help me to understand the true meaning of the communication and help me to deal with it appropriately.

And that's a lesson in itself: managing is substantially about doing your best to deal with things appropriately.
Image: Wiley
Categories: Blogs

New Year, New Blog

Jimmy Bogard - Wed, 01/25/2017 - 21:10

One of my resolutions this year was to take ownership of my digital content, and as such, I've launched a new blog. I'm keeping all my existing content on Los Techies, which I've been humbled to be a part of for almost 10 years. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems: it's been a great ride.

If you're still subscribed to my FeedBurner feed - nothing to change, you'll get everything as it should be. If you're only subscribed to the Los Techies feed... well, you'll need to subscribe to my new feed now.

Big thanks to everyone at Los Techies who's put up with me over the years, especially our site admin Jason, who has become far more knowledgeable about WordPress than he probably ever wanted to be.

Categories: Blogs

Business Innovation through APM Metrics-Driven DevOps

Innovating faster to meet end-user demand is one of the challenges addressed by DevOps. DevOps bridges the knowledge gap between business and application teams. Application teams have to better understand the impact their code or deployment changes have on the business. On the other hand, the business wants to better understand the impact on current development commitments when it comes up with new requirements and a tight schedule.

The following is an example I keep using in my presentations when I talk about the importance of metrics-driven continuous delivery. It shows the success and failure of marketing campaigns as well as deployments. Making this data visible and accessible to both the business (who drove users through a marketing campaign) and the application team (who deployed a change on May 2nd causing a spike in frustrated users) allows us all to better understand the impact we have.

Dynatrace AppMon & UEM bridges the gap between business and application teams, highlighting the impact of events such as campaigns or deployments on the bottom line: conversion rate.

Precisely bridging this gap is what motivated the story behind this blog post, shared with me by Jose Miguel Colella, one of the Lead Consultants in our Dynatrace Expert Services. Jose worked with one of the larger European travel agencies. The holiday booking season is the most critical time to do business in the travel industry. Countless examples have shown us all that bad performance and poor user experience impact end-user behavior. In the case of online travel, frustrated users simply open the next travel app or browse to the next online travel portal to get a better deal. It is highly unlikely they’ll ever return to the portal that left them waiting or that crashed their device.

In order to understand the impact of performance, new deployments, or feature changes on the business, the travel agency needed visibility into its application performance: to understand how its customer base was using the various applications, and to give the business insight into key business transactions.

Defining Dashboards and Metrics that benefit Business, App and Ops Teams

The travel agency’s business users worked alongside development and operations to create dashboards that key stakeholders use daily for visibility into the performance of their applications. One of the agency’s subsidiaries is a major Spanish airline. It is leveraging Dynatrace Application Monitoring to bring insight into critical business flows: check-in, purchases, purchase type, as well as revenue. The business communicates with operations and development to bring visibility to the business flows that matter most to customers and the company’s brand.

Business Metrics: Reservation by Payment Type

The business users demanded visibility into the payment types being used for purchases: credit card, transfer, PayPal, etc. A quick session with the development team yielded the following dashboard, showing purchases by payment type alongside a time series of response times. With this, they can visualize the major purchase types and spot any performance issue with a specific payment type.

Dynatrace AppMon’s Business Transactions give a business view on top of the captured application performance data.

The dashboard is used daily by the business to benchmark expected purchases, as well as to obtain performance metrics for a transaction that makes up the majority of the revenue.

Tip: Business Transactions can use context data from any point along the end-to-end transaction flow, whether it is an HTTP parameter, a REST API endpoint, a backend method argument or return value, or a SQL query. Even information presented to the end user in HTML can be captured and used. Learn more in our Advanced Business Transactions YouTube tutorial.

Business and App Metrics: Load, Failure and Response Time

The business users also wanted visibility into the entire flow that leads to a purchase. If a customer is having issues before the purchase, he or she is unlikely to go through with it. With the lines between business and development blurred, a knowledge transfer took place: the business acquired more in-depth knowledge of the entire flow, while development came to understand the needs of the business.

The following dashboard was created to provide visibility into the entire business flow, from home page to purchase of a flight ticket, with failure rate, response time and count. At any time, the business is aware of what is occurring with its customer base and where there might be a bottleneck.

Dynatrace AppMon dashboard bridging the gap between business and application teams. Understand where to optimize to impact the bottom line: conversion rate.

Both these dashboards are indications of something greater at work, that has also come about as a result of Digital Performance Monitoring. Business, Operations and Development are speaking one language, understanding each other’s requirements, and fostering growth and visibility for the company.

Tip: Check out what else is possible with Dynatrace AppMon dashboarding. Watch our BizOps Dashboard as well as our Building Dynatrace Dashboards YouTube Tutorial!

Metrics for A/B Testing: What is faster? And Why?

When websites have multiple channels and user workflows that eventually lead to a reservation, it is good to understand the differences in performance and user experience, and the impact these have on user behavior. Also, when doing A/B testing or blue-green deployments, it is important to compare the behavior. The following is a dashboard used by the application teams to understand the differing performance behavior of similar features. It’s easy to spot the huge performance difference between these two user workflow steps:

Dynatrace AppMon can be used to compare performance behavior between different workflow steps, A/B Testing or Blue-Green Deployments.

From these dashboards it is easy to drill into the actual root cause of why the performance differs so much. Whether it is a poorly-coded method, a slow SQL statement, a misconfigured micro-service, or a faulty or slow JavaScript file, the information is available with a single click in Dynatrace AppMon & UEM.

Tip: Want to learn more about how to identify root cause and diagnose performance issues with Dynatrace? Then check out our YouTube Tutorials on Optimizing Application with Automatic Pattern Detection.

Trustworthy data for everyone!

Internally these dashboards are seen as the single source of truth for business users and development teams. The development teams leverage these dashboards to gauge the performance of both current applications and new versions that are published.

This is the power of Dynatrace Application Monitoring. It allows for different users from different backgrounds to have insight into the application delivery chain in a way that is most relevant to their needs and concerns. For the business user, it is understanding that a spike in response time for the search functionality resulted in loss of revenue. For the development teams, it is understanding that their application is critical to the success of the business.

If you want to try Dynatrace we provide two easy options for you:

  1. Try Dynatrace SaaS for 15 Days
    1. Full Stack AI-Driven Monitoring
    2. Just deploy a single agent on your machine
  2. Try Dynatrace AppMon & UEM for 30 Days
    1. On-Premise Lifecycle Monitoring of your Application
    2. Closing the loop from Dev via Test to Ops and Business

If you want to learn more, please feel free to connect with Jose Miguel Colella or read up on what our Dynatrace Expert Services have to offer.

And keep closing these gaps!

The post Business Innovation through APM Metrics-Driven DevOps appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Refactoring Towards Resilience: A Primer

Jimmy Bogard - Wed, 01/25/2017 - 19:29

Other posts in this series:

Recently, I sat down to help a team put some resiliency into a payment page. This payment page used Stripe as its payment gateway. Along with accepting payment, the action had to perform a number of other steps. Roughly, the controller action looked something like:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

I'm greatly simplifying things, but these were the basic steps of the order:

  1. Find customer from DB
  2. Create order based on customer details and cart info
  3. Post payment to Stripe
  4. Send "payment successful" email to customer via SendGrid
  5. Publish a message to RabbitMQ to notify downstream systems of the created order
  6. Redirect user to a "thank you" page

Missing in this flow are the database transaction commits; those are taken care of in a global filter:

config.Filters.Add(new DbContextTransactionFilter());  

I see this kind of code quite a lot from people working with HTTP-based APIs, where we make a lot of assumptions about the success and failure of requests. Ever since we've had RPC-centric APIs for interacting with external systems, you'll see a myriad of assumptions being made.

In the RESTful-centric world we live in now, it's easy to patch together and consume APIs, but it's much more difficult to reason about the success and failure of such systems. So what's wrong with the above code? What could go wrong?

Remote failures and missing money

Looking closely at our request flow, we have some items inside a DB transaction, and some items not:

Transaction flow

After the first call to the database (inside a transaction) to build up an order, we start to make calls to other APIs that are not participating in our transaction. First, to Stripe to charge the customer, then to SendGrid to notify the customer, and finally to RabbitMQ. Since these other services aren't participating in our transaction, we need to worry about what happens if those other systems fail (as well as our own). One by one, what happens if any of these calls fail?

  1. DB call fails - transaction rolls back and system is consistent
  2. Stripe call fails - no money is posted and transaction rolls back
  3. SendGrid call fails - customer is charged but transaction rolls back
  4. RabbitMQ call fails - customer is charged and notified but transaction rolls back
  5. DB commit fails - customer is charged and notified and downstream systems notified but transaction rolls back

Clearly, any failure after step 2 is a problem, since downstream actions have happened but our own system has no record of them (besides maybe a log entry).

To address this, we have many options, and each will depend on the resiliency options of each different service.

Resiliency and Coffee Shops

In Gregor Hohpe's great paper, Your Coffee Shop Doesn't Use Two-Phase Commit, we're presented with four options for handling errors in loosely-coupled distributed systems:

  1. Ignore
  2. Retry
  3. Undo
  4. Coordinate

Each of these four options coordinates one (or more) distributed activities:

Coordination Options

Which option we choose for each interaction will depend heavily on what's available for each individual resource. And of course, we have to consider the user and what their expectation is at the end of the transaction.

In this series, I'll walk through options on each of these services and how messaging patterns can address failures in each scenario, with (hopefully) our brave customer still happy by the end!
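As a taste of the Retry option, here is a minimal Python sketch of retrying an idempotent operation with exponential backoff. The TransientError type and the operation are hypothetical stand-ins, not Stripe's actual client; a real retry against a payment gateway must also send an idempotency key so a retry cannot double-charge the customer:

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable failure, e.g. a gateway timeout."""

def retry(operation, attempts=3, base_delay=0.1):
    """Call operation, retrying transient failures with exponential backoff.

    Only safe for idempotent operations: the remote side must be able to
    deduplicate (e.g. via an idempotency key) so that a retried charge
    still counts as one logical payment.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * 2 ** attempt)
```

Retry is only one of the four options, of course; each of the Stripe, SendGrid, and RabbitMQ interactions may call for a different one.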

Categories: Blogs

Tricentis Raises $165 Million

Software Testing Magazine - Wed, 01/25/2017 - 18:56
The software testing vendor Tricentis has announced that it has raised $165 million in Series B financing from Insight Venture Partners, a leading global private equity and venture capital firm. Tricentis has more than 400 companies in its client list, including HBO, Whole Foods, Toyota, Allianz, BMW, Starbucks, Deutsche Bank, Lexmark, Orange and UBS. Its software testing solution, Tosca Testsuite, consists of a Model-based Test Automation and Test Case Design approach, encompassing risk-based testing, test data management and provisioning and service virtualization.
Categories: Communities

Zephyr Releases ZAPI Add-On in the Cloud

Software Testing Magazine - Wed, 01/25/2017 - 18:44
Zephyr has announced the release of ZAPI in the Cloud. The deployment delivers requests on the JIRA platform to integrate automation and continuous integration tools. This new feature provides enhanced test execution, a new query language, and REST APIs that advance DevOps testing. Additional release details include:
  • Further advancing its leadership as the top-grossing testing add-on in the Atlassian Marketplace, Zephyr for JIRA takes test management to the next level, now supporting ZAPI deployments in the cloud
  • ZAPI is a Zephyr for JIRA add-on that allows access to Zephyr’s testing data, including the ability to view and upload data programmatically
  • ZAPI provides read/write access to Zephyr for JIRA testing-related data via REST APIs. This can be used to query test cycles, fetch tests and test cycles, update test results, create new tests, and more
  • Integration with any automation and/or continuous integration tools through well-documented and supported REST APIs
“We take our customers’ requests seriously, and we’re committed to delivering the best testing products that complement our Atlassian tools,” said Hamesh Chawla, VP of Engineering for Zephyr. “As teams continue with their Agile and DevOps transformations, seamless integrations will play a key role in enabling the flow of data across systems for continuous testing.” The combination of Zephyr for JIRA and ZAPI creates endless testing possibilities for project teams in the cloud, giving users enhanced abilities to execute tests in a free-form way and leverage the REST APIs to unify testing data across multiple systems. [...]
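To make the CI-integration idea concrete, here is a hedged Python sketch of how a build job might update a test execution's result over REST. The base URL, `/execution/{id}` path, and bearer-token auth are illustrative placeholders, not documented ZAPI endpoints; the sketch only builds the request so the general shape is visible.

```python
import json
import urllib.request

def build_update_execution_request(base_url, token, execution_id, status):
    """Build (but don't send) a PUT request that records a test
    execution's status against a hypothetical ZAPI-style endpoint."""
    url = f"{base_url}/execution/{execution_id}"
    body = json.dumps({"status": status}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",   # placeholder auth scheme
            "Content-Type": "application/json",
        },
    )

# A CI job would send this with urllib.request.urlopen(req)
# after its automated test run finishes.
req = build_update_execution_request(
    "https://example.invalid/rest/zapi/latest", "api-token", 42, "PASS")
```

Consult the vendor's REST API documentation for the actual endpoints, payloads, and authentication before using anything like this.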
Categories: Communities

Introducing ActiveMQ monitoring (beta)

We’re happy to announce the beta release of Dynatrace ActiveMQ monitoring! ActiveMQ server monitoring provides information about queues, brokers, and more.  You’ll know immediately when your ActiveMQ nodes are underperforming. And when problems occur, it’s easy to find out why.

To view ActiveMQ monitoring insights

  1. Click Technologies in the navigation menu.
  2. Click the ActiveMQ tile on the Technology overview page.
    Note: Monitoring of multiple ActiveMQ clusters isn’t supported in this beta release.
  3. To view cluster metrics, expand the Details section of the ActiveMQ process group.
  4. Click the Process group details button. 
    ActiveMQ cluster monitoring
  5. On the Process group details page, select the Technology-specific metrics tab. Here you can identify any problematic nodes.
  6. To access node-specific metrics, select a node from the Process list at the bottom of the page. Drill down into the metrics of individual nodes to find the root causes of any potential bottlenecks or detected problems. 
    Navigate to chosen ActiveMQ node
  7. Click the ActiveMQ metrics tab.
    ActiveMQ node metrics
    Here you’ll find valuable ActiveMQ node-specific metrics. Pay particular attention to the Broker limits chart, which shows memory usage, storage usage, and any temporary storage limits that have been assigned to the broker. Other important metrics include Connections, Number of producers, and Number of consumers
  8. Click the Queues metrics tab.
    ActiveMQ queues
    The Queues metrics tab includes all the information you need to know about your queues. Queue size informs you about traffic, while Average enqueue time increase informs you about degradation of ActiveMQ message processing time. Be aware that Average enqueue time is calculated as an average over all messages since the start of the broker session, so even when waiting time increases rapidly, this metric changes only slightly; it's most helpful for trend analysis. For real-time usage, keep an eye on the Average enqueue time increase metric.
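Because the cumulative average barely moves when waiting times spike, the "increase" variant is the more responsive signal: it is simply the delta between consecutive samples. A small Python sketch of that delta calculation (the sample values are made up):

```python
def enqueue_time_increases(samples):
    """Given periodic samples of the cumulative Average enqueue time
    metric (in ms), return the per-interval deltas, i.e. the
    'Average enqueue time increase' signal."""
    return [round(cur - prev, 3) for prev, cur in zip(samples, samples[1:])]

# The running average drifts slowly, but the deltas expose the spike:
samples = [12.0, 12.1, 12.1, 15.6, 21.0]     # ms, made-up values
increases = enqueue_time_increases(samples)  # [0.1, 0.0, 3.5, 5.4]
```

The last two deltas jump while the raw averages change by under a factor of two, which is why the increase metric is the one to watch in real time.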
ActiveMQ metrics
  • Average enqueue time increase: Increase of average enqueue time, counted as the delta between samples.
  • Average enqueue time: Average time from enqueue to dequeue of messages (the waiting time of messages before they’re consumed), given in milliseconds.
  • Memory usage: Percentage usage of the memory limit for NON_PERSISTENT messages.
  • Store usage: Percentage usage of the storage limit for PERSISTENT messages.
  • Temp usage: Percentage usage of the storage limit for temporary messages.
  • Current connections: Number of currently open connections.
  • Total connections: Number of connections since the last broker restart.
  • Producers: TotalProducerCount
  • Consumers: TotalConsumerCount
  • Queue size: Number of messages in the queue/store that haven’t been ack’d by a consumer.
  • Enqueue count: Number of messages sent to the queue since the last restart.
  • Dequeue count: Number of messages removed from the queue (ack’d by a consumer) since the last restart.
  • Dispatch count: Number of messages sent to consumer sessions (Dequeue + In Flight).
  • Expired count: Number of messages not delivered because they expired.
  • In Flight count: Number of messages sent to a consumer session that haven’t received an ack.
Prerequisites
  • Linux OS
  • ActiveMQ 5.8.0+
  • Compatible with Docker containers
  • Dynatrace OneAgent is required on all nodes
  • Enabled JMX monitoring
Enable ActiveMQ monitoring globally

With ActiveMQ monitoring enabled globally, Dynatrace automatically collects ActiveMQ metrics whenever a new host running ActiveMQ is detected in your environment.

  1. Go to Settings > Monitoring > Monitored technologies.
  2. Set the ActiveMQ JMX switch to On.
Have feedback?

Your feedback about Dynatrace ActiveMQ monitoring is most welcome! Let us know what you think of the new ActiveMQ plugin by adding a comment below. Or post your questions and feedback to Dynatrace Answers.

The post Introducing ActiveMQ monitoring (beta) appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Dynatrace API can now automate tagging of related components

Tags have long offered a great means of organizing lists, notifications, and charts for related components across all Dynatrace views. Now, with the latest release of the Dynatrace Smartscape and Topology API, you can automate the push of relevant tags to monitored components. This new API feature opens the way for all sorts of powerful automation use cases.

Use tags to group related components, services, or applications

Tags allow you to organize related components into groups for monitoring and tracking purposes. They also facilitate searches for related components and the collection of component metrics into meaningful groups for analysis. Tags are particularly useful when you have a high number of infrastructure components, services, or applications.

Tags can also be used to assign responsible teams and individuals to specific components or to simply group components into team-specific views. In addition to the filtering possibilities that tags provide within list views, you can also use tags to create component-specific dashboard charts.

You have a few options for managing tags within Dynatrace. You can manually tag each of the monitored components in your environment individually. Alternatively, you can bulk tag numerous components simultaneously from the Tagging page (Settings > Tagging > Manual Tagging).

Use Dynatrace API to manage tags

With the latest release of Dynatrace you now have a new option for managing tags in your environment, using the Dynatrace Smartscape & Topology API. With the Dynatrace API, DevOps teams can automate the creation of tags using custom automation scripts. With just a few lines of scripting, you can automatically create and assign tags to the components in your environment.

Use tags to assign team responsibilities

One valuable use case is to assign user-specific tags to individual components so that individual personnel can easily find and understand their areas of responsibility. This is done by fetching your personnel database and then pushing user-specific tags onto specific infrastructure components, applications, or services.

For example, imagine that an Ops person in your organization is responsible for all Java services within a group of services that has been tagged with PROD. It's then simple to write a script that fetches all services that Dynatrace has automatically detected. Because the returned list includes each service's pre-existing tags, you can select exactly those services and assign the Ops person responsibility for them by pushing a new tag. The example below shows how to create such a tag.

This scripting example fetches a list of auto-discovered services along with all tags that have been assigned to the detected services.

Once you push a new responsibility tag onto the service, the script includes the new tag.
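The selection step of that flow can be sketched in Python. The response shape below (`entityId`, `technology`, `tags` fields) and the `owner:ops-team` tag are illustrative assumptions, not the exact Dynatrace schema; the function picks the Java services already tagged PROD and builds the payloads a script would then POST back via the Smartscape & Topology API.

```python
def build_responsibility_tags(services, new_tag="owner:ops-team"):
    """From a fetched list of services (parsed API JSON, schema assumed
    for illustration), pick the Java services tagged PROD and return one
    (entityId, payload) pair per service to push back to the API."""
    updates = []
    for svc in services:
        if "PROD" in svc.get("tags", []) and svc.get("technology") == "JAVA":
            updates.append((svc["entityId"], {"tags": [new_tag]}))
    return updates

services = [  # illustrative response, not the real schema
    {"entityId": "SERVICE-1", "technology": "JAVA", "tags": ["PROD"]},
    {"entityId": "SERVICE-2", "technology": "NODEJS", "tags": ["PROD"]},
    {"entityId": "SERVICE-3", "technology": "JAVA", "tags": ["STAGING"]},
]
updates = build_responsibility_tags(services)
# Only SERVICE-1 qualifies; a script would POST each payload to the
# service's entity endpoint, authenticating with an API token.
```

Refer to the Dynatrace API Help mentioned below for the actual endpoint paths and request formats.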

Add version tags for artifactories and build infrastructure

For artifactories and build infrastructure you can write scripts that automatically add version tags to all new service and application deployments. This enables you to generate Dynatrace lists and views that are specific to selected release versions.

All newly added tags appear on the associated service pages, as shown below.

Visit the Tagging page (Settings > Tagging > Manual Tagging) to review the tags that are set up in your environment and the components they are assigned to.

Please refer to Dynatrace API Help for full details on tagging your environment components using the Dynatrace Smartscape & Topology API.  Also, check out this Python example on our Github page to see how you can automatically tag all components using name patterns.

The post Dynatrace API can now automate tagging of related components appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

We Don’t Need Testers! What we Really Need is Testers!

Software Testing Magazine - Wed, 01/25/2017 - 15:54
Software testers are limited by their role. Testers are only allowed to be testers. We need to break the current tester mold and replace it with a new role… the tester. The tester is a much bigger role than it currently is. It's much bigger, it's much more valuable, it's higher status, and it's much more fun. So do you want to be a tester? Or would you rather be a tester? Video producer
Categories: Communities
