
Feed aggregator

What is Customer Success at CloudBees?

In an ongoing effort to provide the best customer experience for all subscribers, consistently and effectively, what does Customer Success mean for me at CloudBees? The practical answer is to reduce churn, grow recurring revenue and increase product adoption. How does this become reality, though? There’s no magical “button of success” to achieve these goals and it certainly doesn’t happen overnight. From the start, I need to understand what matters most to my customers from their point of view to build a relationship with integrity and credibility. Only then can I provide trustworthy and reliable advice while working alongside the customer to meet their goals. I always want to add value in some facet with each interaction, even if it’s a quick email.

At the end of the day it’s about improving the software delivery cycle as a whole. At CloudBees we utilize a customer-centric engagement model to drive value, and in turn, we seek to establish a regular meeting cadence with customers. This allows me to create a customer journey/roadmap that includes specific goals and timelines. I want to know what is going to make you, as the customer, successful and help create a path to success that includes a sense of accountability.

Customer Success is an emerging field, but it’s permeating all sectors of business. It’s becoming even more important for enterprises to adopt a customer engagement model conducive to positive, regular conversations. Implementing change in an organization is a daunting task, especially for larger entities, but it is becoming more evident that an effective CS model, executed consistently, produces positive results.

Lastly, I’ll speak briefly to the factors I believe to be most influential in qualifying customer success.

Trust. Trust is imperative to establish credibility and a positive, professional rapport. You cannot have mutual respect without trust!

Advocate for the customer. I’m not referring to advocacy in the sense that “the customer is always right.” I’m referring to advocacy in the sense of truly understanding your customer’s needs, wants and timelines to achieve their goals. Having this understanding gives CSMs the ability to advocate for the customer on a truly individual level and work together to make sure that the customer is never wrong. When there’s mutual respect and transparency, we’re on the journey to success together.

Know your product well. Do you have to become extremely technical? No. Will it benefit those of us working in customer success to become more technical and understand the product better? Yes.

Hire the right people. Teamwork really does make the dream work. 

Don’t be afraid to fail.  Fail fast. I’m not afraid to fail because I learn from my mistakes. The internal support surrounding this methodology makes it effective and enables continuous growth - personally and professionally.

Parker Ennis
Customer Success Manager
CloudBees, Inc.

Blog Categories: Jenkins
Categories: Companies

QASymphony Raises $40M in Series C Funding

Software Testing Magazine - Thu, 05/18/2017 - 16:09
QASymphony, a provider of software testing solutions, has announced that it has raised $40M in Series C funding, led by New York-based venture capital and private equity firm Insight Venture Partners....

Categories: Communities

ServiceV Pro 2.0 Released

Software Testing Magazine - Thu, 05/18/2017 - 15:53
SmartBear Software has announced ServiceV Pro 2.0, the latest version of the popular service virtualization product. The new version of ServiceV Pro introduces Java Database Connectivity (JDBC)...

Categories: Communities

Pipeline Development Tools

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. I’ve only been working with Pipeline for about a year. Pipeline in and of itself has been a huge improvement over old-style Jenkins projects. As a developer, it has been so great to be able to work with Jenkins Pipelines using the same tools I use for writing any other kind of code. I’ve also found a number of tools that are super helpful specifically for developing pipelines. Some were easy to find like the built-in documentation and the Snippet Generator. Others were not as obvious or were only recently released. In this post, I’ll show how a few of those tools...
Categories: Open Source

How to Write a Test Case for Your Project and Your Team

Testlio - Community of testers - Wed, 05/17/2017 - 19:15

Every team and organization has its protocols. There is no one way that teaches indisputably how to write a test case.

The writing comes down to the project and the team. Actually, QA managers are wise to have a few different methods they can turn to, allowing for the fluid application of the test case format that truly fits.

To that end, here are five different methods with examples, explanations, writing tips, and project pairings.

But first, a quick overview: test cases break down an app or web page into one test at a time. They can be used for an entire test cycle or only for certain functions. While they typically direct testers, they can also be written more loosely.

Structured exploratory test cases

These test cases are as brief as can be. They are “high-level” considerations within each feature or area of a product.

[Image: structured exploratory test cases]

 Who they’re for: Test leads/QA managers who want to give their teams the freedom of the exploratory method while ensuring product coverage

Where they fit: Products with uncomplicated steps and actions and/or testing cycles where user-centric exploration is desired

How to write them:

  1. Break down a product into its areas or functions
  2. Divide each area into tasks
  3. Request a pass/fail result for each task

Because we value the results of exploratory testing (but still want to oversee testing efforts) we use this method often. We call them “task lists” and while pass/fail is the most common input we request from testers, we do have a variety of reporting options built into our platform that we assign ahead of time, including text commentary and multiple choice responses.

[Image: pass/fail reporting methods]
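
For illustration, a task list for one area of a hypothetical shopping app (all names invented) might look like this:

Area: Checkout
  • Add an item to the cart (pass/fail)
  • Apply a discount code (pass/fail)
  • Pay with a saved card (pass/fail)
  • Receive an order confirmation email (pass/fail)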

Classic test cases

While there’s no set one way to do it, the format for test cases that most quickly comes to mind for many in the QA world includes a name, description, preconditions, and steps with a final expected result.

[Image: classic test case with expected result]

Because this is the densest method we’re describing, be sure to practice your skills at brevity. Include only information that is necessary and progresses the tester forward (otherwise you’ll end up with some lengthy test cases).

Who they’re for: Test leads/QA managers who need to tightly manage a testing team, whether for time or project restrictions.

Where they fit: Critical functions of an app—anything that requires clear, perfect testing.

How to write them:

  • Identify a critical function to test
  • Name it and describe it clearly using verbs where possible
  • Break the test down into no more than 8 directive steps
  • Describe the expected result
  • Depending on how/where your testers report, add fields for “actual result,” “pass/fail,” and “comments”
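
As a sketch, a classic test case for a hypothetical login feature (all details invented) might read:

Name: Log in with valid credentials
Description: Verify that a registered user can log in from the login screen
Preconditions: The account test@example.com exists and is active
Steps:
  1. Open the login screen
  2. Enter test@example.com in the email field
  3. Enter the matching password
  4. Select Log in
Expected result: The dashboard loads and shows the user as logged in
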
Valid/invalid test cases

Typically written in Excel, the valid/invalid format for test cases has the goal of cramming the maximum amount of information in the minimum amount of space by doing away with steps. Instead, QA managers create columns for each data set, tool, or object and rows for each test case. Testers then interpret the information to come up with the logical steps. 

[Image: valid/invalid test cases]

Who they’re for: QA managers with an experienced team who will benefit from the use of quick validation exercises over potentially lengthy test cases

Where they fit: Big, complex projects with multiple steps and multiple preconditions for each test case

How to write them:

  • Strategize which test cases to cover in each sheet (by area or function)
  • Write the headings for your columns—ID, scenario, action, the tools and data types that fit the project, and finally the expected result
  • In rows, create the initial scenarios, such as login, logout, forgot password, and the base critical functions
  • Add more column headings as you go along (new scenarios will make you think of more)
  • For each scenario row, mark whether the data or tool is valid, invalid, or non-applicable
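
A small, hypothetical sheet for a login area might look like this (columns and scenarios invented):

ID | Scenario        | Email   | Password | Expected result
1  | Login           | valid   | valid    | User reaches dashboard
2  | Login           | valid   | invalid  | Error message shown
3  | Forgot password | valid   | n/a      | Reset email sent
4  | Forgot password | invalid | n/a      | Error message shown
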
Verify-using-with-to test cases

These simple test cases break down info into easy-to-grasp language.

[Image: verify-using-with-to test cases]

Who they’re for: QA managers assigning projects to beginning testers or employees in other roles who are temporarily filling in as testers OR QA managers looking for another way to structure exploratory tests

Where they fit: Projects of any size and nature—this test case style is more about language (either not scaring away new testers or providing exploratory freedom to experienced testers by leaving out specific steps)

How to write them:

  1. Start with one action
  2. “Verify” serves as both the name and description of the test
  3. “Using” is the tools and data that the tester will use
  4. “With” is a list of any necessary preconditions or givens
  5. “To” is the expected result
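
Put together, a single test case in this format reads almost like a sentence (example invented): Verify login, Using the email test@example.com and a valid password, With an active registered account, To land the user on their dashboard.
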
Testable use cases

Use cases are often written by those outside of QA, such as business analysts. Use cases capture what a software product does, not how it does it. They aren’t necessarily designed for testing, but they can be modified for testing, providing the benefit of user-centric, strategic testing that is focused on business or product goals.

[Image: testable use cases]

Who they’re for: QA managers that want to get a jump start on writing test cases before the product code is even finished and/or who want to structure exploratory testing with not just user personas (AKA user stories) but with more specific user goals

Where they fit: Large, complex projects whose end goals can have multiple pathways, typically enterprise software

How to write them:

  1. Start with a goal
  2. Write a verb-driven name and description
  3. Write in actors (can be job titles or user titles) and preconditions
  4. Write flows NOT STEPS—meaning keep the flow technology-neutral by not directing the tester, but instead writing in terms of the user and the system
  5. Write an end result that reflects what other users or areas (if any) are affected, so they will also be validated by the tester
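
A compact, invented example: Goal: purchase a ticket. Name and description: Purchase a ticket for an upcoming event. Actors: registered customer. Preconditions: the customer is logged in and the event has seats available. Flow: the customer selects an event, the system presents available seats, the customer chooses a seat and confirms payment, the system reserves the seat and issues a ticket. End result: the ticket appears in the customer's account and the seat is no longer offered to other users.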

While some of these methods might stretch your conceptualization of test cases, that’s a good thing. What matters is finding the right format for your project.

Have you tried all of these? I’d be really curious to see where you think they best apply. Let me know in the comments below!

Categories: Companies

Integration Tests are a Scam

Software Testing Magazine - Wed, 05/17/2017 - 16:21
This presentation explores the issues and controversy around integration testing efforts. Then it follows with a discussion of testing outcomes and explores some patterns and techniques to achieve...

Categories: Communities

Why Localization Testing is About More than Language

Testlio - Community of testers - Wed, 05/17/2017 - 14:00

A software product can’t achieve global domination without internationalization (I18n) and localization (L10n).

Internationalization is the first step: this process requires that a product have the ability to support other languages and regional formats. When done well, it makes localization (perfecting a product for each region/language on an individual level) much, much smoother. But regardless of how skilled the internationalization team, localization and localization testing must occur before deployment to a new, previously untouched user base.

Unfortunately, many companies still don’t take this seriously, and international users are faced with obnoxious issues every single day. The English-heavy focus of most products is continually apparent to users in many non-English speaking countries.

Grammatical errors, nonsensical imagery, horrendous responsiveness—all of these come into play. So while accurate translation is essential, there are tons of other factors to be aware of during localization testing.

UI elements affected by language

First off, let’s look at the design elements affected by language that testers always need to watch out for:

Character counts & design space

The most common problem faced by localization is having enough space for translations.

What would take 30 characters to say in one language could be said in 4 in another—meaning there could be too much space, or too little. Words can fall outside of buttons or margins, sometimes becoming obscured or invisible.

Images with text inside

Testing teams are often presented with a product or site that’s been fully translated except for the images.

Certain words might be purposefully kept in the original language (because the intended user base knows them or the inclusion of foreign words is part of the app’s stylistic experience) but other images may have been simply forgotten about. Unless testers have been notified otherwise, any images that have text in the wrong language for that audience need to be logged.

Ungrammatical sentences with UI elements

Here’s an interesting localization conundrum: drop down menus inside of text sentences.

Imagine if a budgeting app’s transaction entry were worded like this:

[Image: a transaction entry sentence assembled from drop-down menus]

It would be a huge challenge to translate. For any mixed UI elements that have passed through the internationalization process, testers need to check that they’ve been properly localized in terms of placement and grammar.

Mobile responsiveness

Localization is a huge conundrum for mobile responsiveness. Changes in string sizes can wreak havoc on layout, to the extent that sometimes different layouts are required. Nothing can be assumed, which is why excellent device coverage is necessary.

Testers should be using devices that are commonly used in that region. Certain brands will have a lot more market presence than others.

Additional issues to hunt for during localization testing

Yes, language has a big impact on UI elements, but there are also issues that stretch beyond language itself:

Imagery and icon meaning

Luckily, there are popular icons that are generally understood around the world:

  • Magnifying glass for search
  • House for home or dashboard
  • Gears for settings
  • Plus sign for add or create

But edit, cancel, go back or go forward can sometimes pose a challenge as there may be existing norms for the meaning of a slash, X, or arrow (especially depending on RTL or LTR reading directionality).

[Image: directional icons]

Only native translators, testers, designers, or other resources can ensure full comprehension with icon and imagery use.

Keyboard shortcuts and swiping functionality

For web-based applications, there has to be a complete understanding of keyboard shortcuts. Commonly used keyboard shortcuts in one language may not even be possible in another (if that key is absent from a native keyboard), or they could serve a different function.

Swiping directionality and norms come into play for mobile apps or web-based apps that are optimized for touch.

Currency and date formats

Even in countries that speak the same language, there will be different currencies and different standards for date formats.

The fact that the month/date order is reversed between the UK and the US isn’t necessarily known to everyone in either country, so if something is presented incorrectly (a pre-scheduled task or version history details), user experience can suffer dramatically.

Currency, on the other hand, isn’t just about presentation, because currency must also be correctly calculated in real time, particularly for banking, retail, and other transaction-heavy sites and products.
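
As a quick illustration of how much presentation changes between locales, here is a minimal C# sketch (invented for this post, not part of any testing tool) that renders the same date and amount under US and UK cultures:

using System;
using System.Globalization;

class LocalizationFormatDemo
{
    static void Main()
    {
        var date = new DateTime(2017, 5, 17);
        var amount = 1234.56m;

        // en-US typically renders 5/17/2017 and $1,234.56
        var us = new CultureInfo("en-US");
        Console.WriteLine(date.ToString("d", us));
        Console.WriteLine(amount.ToString("C", us));

        // en-GB typically renders 17/05/2017 and £1,234.56
        var uk = new CultureInfo("en-GB");
        Console.WriteLine(date.ToString("d", uk));
        Console.WriteLine(amount.ToString("C", uk));
    }
}

The same value (a scheduled transaction, say) can therefore surface a defect in one locale and not another, which is why localized builds need their own verification passes.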

Layout directionality

RTL languages require lots of adaptations beyond backward/forward/redo/undo icons. Layouts, menus, text boxes, dialogue boxes, and edit boxes must be mirrored.

Why localization testing often requires functional testing

Let’s think about currency calculation again.

The ability to pull in live currency data might not have existed before the internationalization process, meaning this new backend function requires fresh functional testing.

Many of the other factors mentioned above also necessitate additional functional testing—which is really at the crux of why localization testing is about more than language.

So many new issues can be introduced in updates made to menus, formats, and layouts.

Basically, things can break. When a product is localized, it likely needs a full round of functional testing to ensure that no new issues have been introduced during the process. A review of translations and design elements could be just the beginning.


Testers will be asking:

Do the translations make sense in context? Do they look right on the screen?

But they will also be asking:

Has localization caused any existing features to break? Do all new tasks and functions required by localization work properly?

Creating a team feedback loop for successful localization

The need for continued communication between design, development, and QA is super apparent during localization testing. QA lessons learned can impact future efforts to localize a different OS and can even strengthen the team’s skill at the early stages of internationalization.

With so much interaction between language and UI, localization mostly begins with design. And when QA is involved early on or can pass on learnings, the process only gets easier over time. Sharing commonalities and root causes for localization issues strengthens the organization’s capability of going global.

Testlio’s testers are always involved in the overall lifetime success of a product. It’s why we have long term testing teams, why testers love to collaborate, and why we submit requests and suggestions (not just bugs).

Categories: Companies

SEETEST 2017 Conference Call for Papers

Software Testing Magazine - Wed, 05/17/2017 - 10:00
The South East European Software Testing (SEETEST) Conference is a conference focused on Software Testing and Software Quality Management in South East Europe that will take place in Sofia, Bulgaria,...

Categories: Communities

Respawn 0.3.0-preview1 released for netstandard2.0

Jimmy Bogard - Wed, 05/17/2017 - 05:57

Respawn, a small library designed to ease integration testing by intelligently clearing out test data, now supports .NET Core. Specifically, I now target:

  • net45
  • netstandard1.2
  • netstandard2.0

I had waited quite a long time because I needed netstandard2.0 support for some SqlClient pieces. With those pieces in place, I can now support running Respawn on full .NET and .NET Core 1.x and 2.0 applications (and tests).

Respawn works by scanning foreign key relationships in your database and determining the correct order to clear out tables in a test database. In my testing, this method is at least 3x faster than TRUNCATE, dropping/recreating the database, or disabling FKs and indiscriminately deleting data.

Since netstandard2.0 is still in preview1 status, this is a preview release for the netstandard2.0 support. The other two TFMs are production ready. To use Respawn, create a checkpoint:

static Checkpoint checkpoint = new Checkpoint  
{
    TablesToIgnore = new[]
    {
        "sysdiagrams",
        "tblUser",
        "tblObjectType",
    },
    SchemasToExclude = new []
    {
        "RoundhousE"
    }
};

And configure any tables/schemas you want to skip. Then, just call "Reset" at the beginning of your test (or in a setup method) to reset your local test database:

checkpoint.Reset("MyConnectionStringName");  
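
In a real test suite, that call usually lives in a setup method so every test starts from a clean database. A minimal sketch, assuming an NUnit-style fixture (the test framework is my assumption, not part of the announcement):

using NUnit.Framework;
using Respawn;

[TestFixture]
public class CustomerRepositoryTests
{
    // The checkpoint configured above, shown again so the snippet
    // is self-contained.
    private static readonly Checkpoint checkpoint = new Checkpoint();

    [SetUp]
    public void ResetDatabase()
    {
        // Deletes rows from every non-excluded table, in an order
        // that respects foreign keys, before each test runs.
        checkpoint.Reset("MyConnectionStringName");
    }

    // ... tests that insert and assert against the test database ...
}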

I support SQL Server (any version this millennium), SQL Server Compact Edition, and PostgreSQL, but the schema provider is pluggable (and since no one really does ANSI schema views the same way, has to be).

Enjoy!

Categories: Blogs

9 Ways to Become a First Class Noticer

Gurock Software Blog - Tue, 05/16/2017 - 23:13


This is a guest posting by Simon Knight. Simon Knight works with teams of all shapes and sizes as a test lead, manager & facilitator, helping to deliver great software by building quality into every stage of the development process.

Albert Einstein famously asked the question: “How would it feel to ride on a beam of light?” Why don’t you take a moment and just think about that yourself: how would it feel?

Nobody knows the answer. We can speculate. Scientists can synthesize information and hypothesize possible answers based on available data. Still, we don’t know. Maybe we never will. You might reasonably argue that the answer doesn’t matter anyway. I mean, who cares – right?

But isn’t it fun to think about anyway?

“Imagination is more important than knowledge.” – Albert Einstein

The point of the exercise isn’t necessarily to find the answer. Using your imagination to explore unknown realms brings rewards of a different kind. That’s not to say you should give up on the idea of trying to find the answer. After all, your own thought experiment may be an order of magnitude more solvable than knowing what it feels like to ride on a beam of light. Einstein was just a child when he pondered what would turn out to be one of his most fascinating lines of inquiry. It certainly didn’t hurt him or his career.


Amazing things happen in the brain when you let it roam free. When you allow it to explore the boundaries of the known, and take excursions into things yet to be known.

This is one of the joys of my work, in the field of professional testing. The exploration of things. Software, systems, artifacts. Always questioning. Always seeking answers. Always trying to dig a little bit deeper.

“I have always loved things. Just things in the world. I love trying to find the shape of things.” – Leonard Cohen

One of the challenges of software testing is that the focus of your efforts is often somewhat abstract. However, the specific thing you’re currently exploring (or testing) may have elements of physicality. It may have an interface purposely designed for users to interact with, including buttons and displays. It may even be a physical device like a phone, a watch, a robot or a headset.

If you want to sharpen your noticing powers, it’s worth paying the physical realm some attention. The things you explore with your mind don’t have to be abstract. They can be things right in front of you. Everyday things. Things you sit on. Things you work or play with. Things you put on or into your body. When you explore, you should use as many senses as possible to determine the shape, weight, texture and substance of a thing.

You need to take an interest in every part to test what you’re working on effectively, and to its fullest extent. You need to show a deep concern for each component and interface. A passion for every detail.

“Look with all your eyes. Look.” – Jules Verne

First Class Noticer


If you’re a tester, you’re basically being paid to observe and to gather information of importance for people who care about a facet of a product. Different aspects of what you do will be of different levels of interest to the people you report to. Ultimately, the whole product is important to somebody, somewhere. For example, the wearer of a smart watch isn’t going to care about the clear, clean user interface if the clasps of their watch are so fragile they can’t leave their home without the device falling into pieces. Nor will they care that it wasn’t your job to test that part. The whole experience is what ultimately counts. In order to give people the experience they’re searching for, you need to become a Noticer of the highest order. An explorer of every aspect of your software, your device, your project.

First-Class Noticer – noun (coined by Saul Bellow in his novel, The Actual)
Someone with the ability to spot important details among noise.

As a tester, it’s your job to become a First Class Noticer. Your ability to identify key issues of concern, separate them from the surrounding noise, and bring them to the attention of others clearly and persuasively is what separates First Class Noticers from ordinary, or Second Rate, Noticers. Testers often joke about having OCD. The ability to notice or observe important details that others miss may come naturally, or it may not. Even if it doesn’t come naturally to you, you can still build your noticing skills from a standing start.

Here are nine ways you can develop First Class Noticing skills:


  1. Make sure you are always looking – keep your eyes open and attentive always. Be completely open to the possibility that something of interest is happening either right this very moment, or very soon will be. If you don’t pay attention – you might miss it!

    Think about ways you can capture more information from the software or systems you’re testing, so you can look in more places at once. Can you monitor the logs while you’re testing? Can you observe network traffic? What about resource utilization on your servers? What other information might you have missed? Did you read all the documentation? Is it up to date? Does it cover everything it needs to?

  2. Deem everything interesting – stay curious! Try to cultivate a sense of wonder, go far beyond the surface. Be deeply interested in the object of your attention. Look at all aspects, elements, components and sub-systems. Keep building upon your understanding. Make detailed notes. Drop and return to them later if you get bored. Slow your work pace down if necessary (and justifiable) so that you can follow a line of inquiry to a conclusion. You never quite know what you may discover and learn along the way.

    In my experience, it can be useful to occasionally put time into something that isn’t directly related to what you are working on. For example, investigating a new tool may not help you make progress on a deliverable in the moment. However, if it proves to be useful, it is likely to realize many benefits in the future, outweighing the cost of the time originally spent. That’s what makes the time invested justifiable.

  3. Change course often – don’t allow your brain to get stuck in a rut. Allow your mind to wander and explore many different paths. Focus for a period, then refocus and change your approach. Use heuristics to guide your thinking. Randomize them if this is helpful or necessary to move forward.

    Your brain is lazy and will happily settle into a groove (see my eBook: How to Bust a Testing Groove). It can be difficult to break out of a rut if you allow yourself to stay there for too long. The best thing to do is not to allow your mind to settle in the first place. Using heuristics to guide your thinking down avenues that your brain might otherwise resist is a great way to disrupt its natural tendency towards idleness.

  4. Observe for long durations – what might you notice if you just maintained your attention a little bit longer? Does the state change over time? Are there details you may have missed previously?

    Buddhists and other traditions have methods of meditation that lead to a condition they call Jhana – a “state of profound stillness and concentration in which the mind becomes fully immersed and absorbed in the chosen object of attention.” It’s surprisingly easy to enter this state of profound concentration:

    1. Stare really hard at a thing for 5-10 minutes
    2. Repeat step 1 until everything in your peripheral vision gets dark, and only the center of your vision is bright.

    Whether you use this method or not is up to you. Personally, I’ve found mindfulness meditation to be a useful and reasonably effective tool for practicing increasingly longer periods of focused attention. It’s like the steps above, but without the staring. Or the tunnel vision.

    Sometimes though, it’s just difficult to separate out the distractions, and for those occasions a Pomodoro timer is a useful accessory.

  5. Pay attention to the stories around you – consider the narrative of the situation you’re in, or the application you’re testing. What might be going on in the end-users’ world? How might their story affect the use of the software?

    Everything around you has a story, a context of some sort. Your job, as a tester, as a First Class Noticer, is to act as an exegete for the story of your software under test, and for your project. To interpret events, issues, bugs, threats and risks and explain to the people who care about them what they mean, in the context of their occurrence.

  6. Look for patterns and connections – where are the dots and how are they joined? The very act of attempting to identify the patterns and connections between different items is likely to expose gaps in your own understanding, and potentially omissions in the product that has been delivered.

    This is an area where you can add significant value as a tester. With many teams and projects focusing hard on trying to automate everything, it’s easy to forget that machines, at least for the moment, only do what we’ve programmed them to. They only look where they’ve been told to look, and only see what they’ve been programmed (or trained) to see. The application of your human intelligence, though imperfect, is nonetheless a powerful tool for separating signals from noise.

  7. Document your findings – be a compulsive note taker! Take the mechanics of note-taking seriously. Invest in a quality notebook (I tend to have numerous Moleskine notebooks lying around for various kinds of notes) and take one everywhere with you. Use it judiciously.

    Don’t forget to review them occasionally either. Look for insights. See what jumps out at you. What inspired you? What confused or irritated you? When you re-encounter an anomalous behavior, did you make a note of it? Was it timestamped? If you use a tool like Evernote, you can add screenshots, videos, snippets, etc., and have access to them from whatever device you’re working on!

  8. Don’t judge, be indeterminate – when you judge, what you’re really saying is “I already know as much as I need to about [the thing in question]”. You’re closing your mind to possibilities that lie outside the realm of your current worldview.

    And that’s fine, I won’t judge you for that.

Categories: Companies

User action resource timings can now be grouped by domain

Waterfall analysis view of user actions enables you to see which resources are loaded with each user action and the impact that individual resources have on overall user action duration. For some time now, we’ve provided you with the option of viewing summaries of action duration analysis based on resource type.

As you can see in the example below, by selecting group by type from the drop list, you can view summary averages of the timings for images, JavaScript files, 3rd party resources, CSS resources and more.

[Image: waterfall analysis grouped by resource type]

Group resource timings based on domain

Sometimes, however, it’s more valuable to view summaries of resources based on domain type. This approach to organizing resource analysis provides you with a quick overview of the load order, detailed timings for specific domains, and summaries of loaded resources, including how the loading of those resources affected overall action duration.

As you can see in the example below, by selecting group by domain from the drop list, all resource timings are now grouped based on the domains from which they originate (CDN domains, 3rd party domains, and 1st party domains).

[Image: waterfall analysis grouped by domain]

The post User action resource timings can now be grouped by domain appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Automatic user-action naming based on page metadata

Dynatrace automatically generates user action names based on either underlying HTML page file names (for example, Loading of page index.html)  or XHR URLs (for example, http://example.com/api/login). An earlier article explains your options for automated user action naming and explores how you can configure the default naming of user actions. This article focuses on recent enhancements to user-action naming and extraction rules.

To add a new user action naming rule

  1. From the navigation menu, select Applications.
  2. Select the application you want to configure.
  3. Click the Browse (…) button.
  4. Select Edit.
  5. Select the User actions tab.
  6. Click the Add naming rule button.
    Basic user action naming rules are static. Extraction rules are used for dynamic user action naming. For full details about user action naming rules, see How do I create custom names for user actions?
    [Image: adding a user action naming rule]
User action names based on page metadata

With the latest release of Dynatrace, you can now configure user action naming rules based on page metadata. Dynatrace supports three metadata elements for user action naming: CSS selector, JavaScript variable, and meta tag. To define a user action naming rule, type a Resulting user action name and select a metadata type from the Rule drop list.

[Image: user action naming rule configuration]

In many cases, the available page metadata already contains usable action names that you can build extraction rules for. For example, consider a JavaScript variable, implemented for monitoring with Dynatrace Advanced Synthetic, that contains a page group ID (for example, gomez.pgId). You can configure an extraction rule that uses the value of the page group ID as the user action name. See example below.

[Image: extraction rule based on the gomez.pgId JavaScript variable]

As you can see, user action naming rules provide a powerful and flexible means of generating consistent and readable user action names.

The post Automatic user-action naming based on page metadata appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Embedding Ownership: A DevOps Best Practice

Sonatype Blog - Tue, 05/16/2017 - 14:00
From where I sit in the DevOps community, there is often more focus on dev than on ops. Damon Edwards (@damonedwards) of SimplifyOps sought to change that with his talk, Ops Happens: DevOps Beyond Deployment, at the All Day DevOps conference.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Test Automation for Microservices

Software Testing Magazine - Tue, 05/16/2017 - 11:23
A microservices architecture is a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services. In her article...

Categories: Communities

The Daily Grind

Hiccupps - James Thomas - Tue, 05/16/2017 - 05:01

Houghton Mill is an 18th-century water mill, full of impressive machinery and, last weekend, actually grinding flour by the power of the river Great Ouse. Although I am not knowledgeable about these kinds of buildings or this technology I found myself spellbound by a small, but crucial, component in the milling process, the slipper.

The slipper is a kind of hopper that feeds grain into the millstones for grinding. Here's a short film I took of it in operation when I was there with some friends and our kids:


It has a system of its own, and also it is intimately connected to other systems.

It has inputs: a gravity feed brings grain into the top of the slipper; energy is supplied by the vertical axle which is in turn driven indirectly from the water wheel.

It has outputs: grain is dropped into the centre of the millstones immediately below it.

It is self-regulating: as the flow of the river varies, the speed of the wheel varies, the rotation of the axle varies, and the extent to which the slipper is agitated varies. Slower water means less grain supplied to the millstones, which is what is required, as they are also grinding more slowly. A second form of self-regulation occurs with the flow of grain into the slipper.

It has balance: there is a cam mechanism on the axle which pushes the slipper to the left, and a taut string which pulls it back to the right, providing the motion that encourages grain to move.

It can be tuned: the strings that you can see at the front provide ways to alter the angle of the slipper, and the tension of the string to the right can be adjusted to change the balance.

Tuning is important. If properties of the grain change (such as size, or stickiness, or texture, ...) then the action of the slipper may need to change in step. If the properties of the millstones change (e.g. they are adjusted to grind more coarsely or finely, or they are replaced for cleaning, or the surface roughness alters as they age, ...) then the rate of delivery of grain will need to adjust too.

Although the system is self-regulating, these are examples of variables that it does not self-control for. It has no inherent feedback mechanism for them, and so requires external action to change its behaviour.

Further, beyond the skilled eye and ear (and fingers, which are used to judge the quality of the flour) of the miller, I could see no means of alerting that a change might even be required. In a mill running at full tilt, with multiple sets of stones grinding in parallel, with the noise, and dust, and cramped conditions, this must have been a challenge.

Another challenge would be in setting the system up at optimum balance for the conditions that existed at that point. I found no evidence of gauges, or markers, or scales that might indicate standard settings. I noted that the tuning is analogue, there are infinite fine variations that can be made, and the ways in which the system can be tuned no doubt interact.

The simplicity of the self-regulation is beautiful to me. But I wondered why not regulate the power coming from the water wheel instead and so potentially simplify all other systems that rely on it. There are technologies designed to implement this kind of behaviour, such as governors and flywheels.

I wondered also about the operational range of the self-regulation. At what speeds would it become untenable: too slow to shake any grain, or too fast to be stable? There didn't seem to be scope for an automatic cut-out.

So that was an enjoyable ten minutes - while the kids were playing with a model mill - to practice observation, and thought experiments, and reasoning in a world unfamiliar to me.

I doubt you'll find it difficult to make an analogy to systems that you operate within and with, so I won't labour any points here. But I will mention the continual delight I find in those trains of thought in one domain that provoke trains of thought in another.
Image: https://flic.kr/p/6eRBPi
Categories: Blogs

GTAC 2017 - Registration is open!

Google Testing Blog - Mon, 05/15/2017 - 23:40
by Diego Cavalcanti on behalf of the GTAC 2017 Committee
The Google Test Automation Conference (GTAC) is an annual test automation conference hosted by Google. It brings together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It is a great opportunity to present, learn, and challenge modern testing technologies and strategies.

We are pleased to announce that this year, GTAC will be held in Google's London office on November 14th and 15th, 2017.

Registration is currently OPEN for attendees and speakers. See more information here.

The schedule for the upcoming months is as follows:
  • May 15, 2017 - Registration opens for speakers and attendees, including applicants for the diversity scholarship.
  • July 1, 2017 - Registration closes for speaker submissions.
  • July 15, 2017 - Registration closes for attendee submissions.
  • August 15, 2017 - Selected speakers and attendees will be notified.
  • November 13, 2017 - Rehearsal day for speakers (not open for attendees).
  • November 14-15, 2017 - GTAC 2017!
As part of our efforts to increase diversity of speakers and attendees at GTAC, we will again be offering travel scholarships for selected applicants from traditionally underrepresented groups in technology. Please find more information here.

Please do not hesitate to contact gtac2017@google.com if you have any questions. We look forward to seeing you in London!

Categories: Blogs

Automatic identification of users based on page metadata

One of the key features of Dynatrace Real User Monitoring is our ability to uniquely identify your users across different browsers and devices. This enables you to analyze the user experience of individual users during user session analysis. Dynatrace initially assigns a unique, random ID to each new user. There are, however, a few different ways to assign more meaningful user tags to your users. You may already know how to use the JavaScript API to assign custom tags to users. This article explains another approach to tagging your users that works by capturing available data in your application’s page source—there’s no need to add additional code to your application.

[Image: identify users]

Locate usernames in page source

If you take a close look at your application’s page source, you’ll likely find that usernames are already included somewhere. Usernames may be included in the text of a DOM element, a meta tag, a JavaScript variable, or even a cookie attribute. For example, easyTravel, the Dynatrace demo application, includes the user name in a welcome message in the upper-right corner of the home page. Using the development tools that are built into most browsers, we can easily generate a unique CSS selector for this particular element.

[Image: generating a CSS selector for the username element]

Create a user tag

Once you’ve identified where usernames are located in your page source, you can create user tags based on the usernames.

  1. From the navigation menu, click Applications.
  2. Select the application you want to configure.
  3. Click the Browse (…) button and select Edit.
  4. Click the User tags tab.
  5. From the Capture expression type drop list, select CSS selector.
  6. Type the CSS selector value into the CSS selector field.
  7. To ensure that there is a clean extraction of the username value, you can apply a regex cleanup rule. In this example, the text of the DOM element is Hello <username>! See example below.

[Image: user tag configuration with regex cleanup rule]
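
To see what such a cleanup rule needs to accomplish, here is a minimal C# sketch (an illustration with an invented username, not the Dynatrace implementation) that strips the greeting down to the username:

using System;
using System.Text.RegularExpressions;

class UserTagCleanupDemo
{
    static void Main()
    {
        // Hypothetical text captured by the CSS selector.
        var captured = "Hello jane.doe!";

        // Keep only the username between "Hello " and the trailing "!".
        var match = Regex.Match(captured, @"Hello\s+(.+?)!");
        if (match.Success)
        {
            Console.WriteLine(match.Groups[1].Value); // prints: jane.doe
        }
    }
}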

That’s all there is to it!

Verify your user tag

To verify that your user tag configuration has been applied correctly, take a look at the injected JavaScript tag in your application’s updated page source. As you can see in the example below, a property called md= is now listed in this page’s metadata expressions.

Additional notes
  • If you’re using the JavaScript API to identify your users, any metadata rules that you configure may be overruled.
  • All configured user tags will be captured on every page. So keep the list short.
  • The last user action in your session that contains a tag will be used as the tag for the entire session.
  • You can also report user names for native mobile apps.

The post Automatic identification of users based on page metadata appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

What’s new in Dynatrace University 1 of 3: Our own digital transformation story

Recently, we’ve made some significant improvements to Dynatrace University (DTU), based on customer feedback, on information gathered electronically from actual customer experiences, and on our own digital transformation. We had three goals in mind: innovate, optimize customer experience, and modernize our operations. Of course, since these are the same pillars that Dynatrace itself is built on, it made sense that this is also how we should align DTU.

The question was: “Can we redefine digital performance management training and education one app team at a time?” The answer was a resounding “yes.”

The story behind this transformation is a valuable study in applying the same standards of excellence across our entire organization. In this blog series, I’ll explain how we adapted our learning platform and educational materials to fit our customers’ needs. You’ll see how we accelerated innovation, optimized customer experience and modernized operations to bring a new learning platform to life.

It all started with the assembly of a small, agile team to build the next generation education app based on what we’ve learned from DTU over the last four years. We analyzed user behavior, learning patterns and performance expectations. We mapped out the digital voice of our customer within our original DTU application—and we did it using Dynatrace technology.

Understand our customers with real user monitoring

Dynatrace gives us visibility into every Dynatrace learner, everywhere. With this detailed and in-depth perspective on user experience, we can see meaningful patterns that show different learning behaviors. This helps us redefine how we organize and deliver our content. With Dynatrace, what we see is far more than just performance metrics. Each customer visit to DTU unfolds as a unique story. As we look at these stories, we start to see patterns that show us how we can better serve our students.

One of the most telling patterns we observed gave us insights into what our students want to accomplish. We generally identified two types of learners within the University—reactive and proactive learners.

A reactive learner is one who searches for a few specific answers. Maybe they want to know how to manage business transactions using application monitoring capabilities, or they want to quickly learn how to configure a web check with the Dynatrace Recorder. These kinds of learners are very easy to spot in Dynatrace by analyzing their user action PurePaths through our application monitoring capabilities. Just by using the Action Count metric, you get a feel for this behavior and what it means.

I discovered that reactive learners are usually those with fewer than 10 user actions per visit. A quick look at some of the most recent visits to Dynatrace University, using the User Analytics filters, shows you how I came to this conclusion.

As you can see in the dashboard above, Joseph from @dynatrace logged six user actions on April 19. By inspecting the visit, I can see that he’s a prime example of a reactive learner. He has a relatively low user action count, extended visit duration, and acceptable user experience index. He came to the University for specific information. He found his answers by spending some quality time on the specific pages and, then, went on his way. In this case, Joseph provided his credentials, selected our new Dynatrace training and watched a video on how to analyze mobile app crashes with Dynatrace. Just another one of our best solution engineers brushing up on his knowledge.

We can also identify proactive learners with a Purepath view. Proactive learners spend more time exploring Dynatrace University’s great features. They typically take advantage of our easy-to-consume prescriptive learning paths, and have a desire for broader education—not just a few answers.

In this visit, we see that the user entered the University through an external link. Next, they were redirected to our authentication service. Most likely this was from a marketing or Twitter social campaign for Dynatrace and our ability to monitor VMWare. Once they entered DTU, they watched some videos and went down the path of learning more about our platform and how to monitor New Stack technologies with Dynatrace. Thanks to the magic of PurePath and our Visit Duration metric, we can see the evolution of the visit over the course of the next 45 minutes.

Visibility means everything

Thanks to our ops team, this proactive learner had a great experience during this 45-minute visit, with sub-second response times. Thanks to the clear view of this visitor’s user activity, we also know that this student is now well informed on how to enable monitoring on PaaS, Azure, VMWare, and Mobile Monitoring. This is great information for us in DTU—so that we know what people are looking for and using. It’s also a super insight for our sales and support teams, who can use this information to approach this visitor to see how we can help. Could it lead to a new opportunity? Is this our next customer? If it’s an existing customer, is this a sign that they’re expanding their APM initiatives?

Being able to tag visits with an email address using our real user monitoring capability gives us the insight we need to help answer these questions. A look at this information tells us that this specific user re-visited the University 11 more times over the next seven days, and spent nearly nine hours in the University. With this much training and exposure to best-practice Dynatrace knowledge, we can reach out to this learner and see if they’re interested in an Associate Certification or even more. At the very least, we know that they’re serious about learning Dynatrace.

It’s great that our own technology helps us continue to build a better University by improving our understanding of our own customers and learners. Every day, we use it to understand different learning behaviors, and redefine how we organize and deliver our content at the edge.

Don’t miss my next post to learn more about the way we use Dynatrace solutions to help our customers achieve their goals and keep reaching higher for success.

The post What’s new in Dynatrace University 1 of 3: Our own digital transformation story appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

8 Challenges of Hybrid Application Testing

Testlio - Community of testers - Mon, 05/15/2017 - 19:30

For a time, desktop applications were declared dead. They became too costly to develop and deploy compared to web apps, which allowed businesses to easily onboard new users, release updates instantly and introduce recurring pricing models rather than one-off purchases.

But then, ironically, the web actually saved desktop apps. With advances to JavaScript and HTML5, developers can create desktop apps with the same languages used to develop on the web and then adapt them for desktop or devices with a tool like MacGap, Electron, or Cordova.

Businesses can offer their customers the best of both worlds (online and offline access) without taking on massive development projects.

Now hybrid apps are increasingly used by all ages in all industries. Spotify, Slack, Skype…all of these are hybrids.

Hybrids pose some unique challenges when it comes to QA, though. Here, I’m giving more insight into how the nature of hybrid apps requires additional consideration during testing strategy.

What are hybrid apps, exactly?

A hybrid app could refer to just about anything.

  • It could refer to an app that works in a web browser and as a desktop application
  • An app that works in a web browser and as a mobile application
  • An app that works in web, mobile, and desktop
  • OR…mobile applications that don’t have a web access component but use web coding languages to create a unified code base across iOS, Android, and/or Windows Phone

For the most part, I’ll be discussing web-desktop hybrids, but there are certainly hybrid mobile testing challenges worth considering too.

Ideally, hybrid mobile apps don’t actually require separate versions for different operating systems, since they run on CSS, JavaScript and HTML5. But in practice, some native coding is often required for various platforms.

As you can see from the bulleted list above, hybrid really just means “combo.” Some take a broader view of the term “hybrid.” It’s not about how the app was built, but the fact that it’s integrated into local files and accessible via web, and that it gives users more options for access, like certain features or files being available offline.

But because HTML is the easiest, fastest language for developing hybrids, the fact is that these types of apps are increasingly built this way.

Why hybrid apps keep growing in popularity

We talk a lot about how picky customers are these days (and how testing can help). The only real reason for developing a hybrid app is because it’s what your customers demand.

On mobile, allowing a user to check out your app and use certain features without downloading anything can be a major advantage. As for desktop, having a docking icon and a separate window whose automatic size fits the app features can increase user adoption of an originally web-based app.

So the number one reason a business will decide to go hybrid should be because of the features and type of access their customers demand.

However, there are certainly some plusses for developers that have made this an increasingly popular choice. As mentioned, matured web coding languages have allowed developers to cut costs of native app development by allowing more unification of the underlying code.

The ease with which hybrid apps can now be developed makes it less of a “why?” question and more of a “why not?” It’s not that native apps are dying, but rather that capabilities for seamless coding are growing, and along with them hybrid apps.

Testing challenges with hybrid apps and how to overcome them

Because hybrid apps can be accessed in different devices and environments, they’re naturally a little trickier to test. Unless built with completely unified code, they require unique test cases for automation, and of course require unique sessions of manual testing in each platform regardless of the development process.

  • Seamless notifications: The notification process for all major functions will need to be tested. If a user is logged into a messaging app on their desktop and chatting in real time, they’ll likely not want to receive pings on their phone for responses they’re currently reading. If the user is logged out on all platforms and views a notification on their mobile app, will it show on the desktop app later as well, or will that notification be cleared? Testers will need to understand the various requirements for notifications, how they work within one user account and across user accounts.
  • Intuitive navigation: There may be shifts in the UX of different platforms. The desktop app may pull a small chunk of functionality from the web app and have a very different appearance. Testers will need to explore each platform within its own context to validate the intuitiveness of the navigation, as well as examine how differing designs compare. Does it all add up to a seamless user experience, or are some design differences simply confusing?
  • Data syncing: With some data hosted on local files and some data on servers, accurate syncing can be a major concern with hybrid apps. Testers must check that for each function, the app is pulling in the right information.
  • Integration with outside apps: Particularly for B2B products, integrations can be a major draw for customers. But even for B2C apps, integrations can be important, like with popular email providers or social media networks for example. First off, desktop apps have to integrate with their own web app counterparts’ APIs and then with the APIs of external apps as well. Testers can execute tasks that result in an action in an external app in each supported platform to verify integrations across user behavior.

  • Offline storage: A music streaming app might allow a user to download certain playlists for offline listening. An audiobook app might allow downloads of certain chapters. A word processing app might allow offline edits that are supposed to sync up later. Whatever the feature, and whatever the environment, hybrid apps are likely to have some features affected by a mix of offline and online storage. Any such feature is a great candidate for exploratory testing.
  • Connectivity: The issue of connectivity can play a part with notifications, data syncing, and offline storage, but it warrants its own mention because connectivity is often why hybrid apps are developed in the first place. What needs for offline functionality do users have, how is this being supported, and is the application delivering on those needs? Depending on whether the components of the hybrid app are desktop or mobile, testers will need to test in 4G, 3G, WiFi and offline environments.
  • Automation and test case writing: Hybrid apps either make automation really easy or really hard. If built only on web-based languages and adapted with as little native coding as possible, it’s very easy for QA engineers to write automated scripts that function across various OSes, which is truly a novelty. But if the app is created with completely different languages and/or has very different navigation features, then writing automated scripts will need to be accomplished in each supported platform.
  • Device and desktop security: One reason that enterprises will choose to develop desktop apps over web apps is because the internet is notoriously insecure. When native desktop or mobile apps pull data from the internet, the access can conceivably go the other direction. Apps can be used to access or take over computers and phones. Depending on the product, risk analysis might not be enough. Actually attempting attacks might be required.

The good news is that all excellent testers love a challenge, and hybrid apps are certainly challenging. They provide a fun puzzle for QA managers when it comes to test design and strategy and create lots of opportunities for testers to come up with creative cases.

For strategic testing of hybrid applications, get in touch with us!

Categories: Companies

Automating the Automation Tools at Capital One

Sonatype Blog - Mon, 05/15/2017 - 15:14
Listening to his talk, it seems like George Parris and his team at Capital One aren’t keeping “banker’s hours.” George is a Master Software Engineer, Retail Bank DevOps at Capital One. At the All Day DevOps conference, George gave a talk, entitled Meta Infrastructure as Code: How Capital One...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies
