
Feed aggregator

WebLogic Integrations with UrbanCode Deploy

IBM UrbanCode - Release And Deploy - Wed, 06/07/2017 - 19:57

Oracle WebLogic Server is an application server for building and deploying enterprise Java applications.
IBM UrbanCode provides six plugins that integrate UrbanCode Deploy (UCD) with WebLogic.

In this post I will describe each of the plugins and their uses, and then go into detail on the WLDeploy plugin.

Oracle WebLogic Scripting Tool (WLST)

The Oracle WebLogic Scripting Tool (WLST) plugin automates the deployment and management of applications on Oracle WebLogic Server.
This plugin uses the WLST command-line scripting interface in the WL_HOME/common/bin folder.
Full documentation for the plugin is located here.
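
To give a sense of what this plugin drives under the covers, here is a minimal, hypothetical WLST (Jython) script of the kind you could run with WL_HOME/common/bin/wlst.sh; the credentials, admin URL, application name, and path are placeholders, not values the plugin requires:

# Connect to the admin server, deploy and start an application, then disconnect.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
deploy('myApp', '/path/to/myApp.war', targets='AdminServer')
startApplication('myApp')
disconnect()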

Steps
  • Check Server Status
  • Deploy
  • Execute Script
  • Resume Server
  • Shutdown Server
  • Start Application
  • Start Server
  • Suspend Server
  • Undeploy
Compatibility

This plugin is compatible with WebLogic Server versions 11g and later, and IBM UrbanCode Deploy 6.0.1 and later.

Oracle WebLogic Application Deployment

The Oracle WebLogic Application Deployment plug-in provides processes to manipulate applications on a WebLogic server. The plug-in includes a component template to assist in setting up deployment automation.
This plugin uses the weblogic.jar in the /WL_HOME/server/lib folder.
Full documentation for the plugin is located here.

Steps
  • Deploy
  • Redeploy
  • Start
  • Stop
  • Undeploy
Compatibility

This plugin is compatible with WebLogic Server versions 10.3 and later, and IBM UrbanCode Deploy 6.0.1 and later.

Oracle WebLogic Integration Resource Management

The Oracle WebLogic Integration Resource Management plug-in provides processes to create, update and delete WebLogic Integration objects.
This plugin uses the WLST command-line scripting interface in the WL_HOME/common/bin folder.
Full documentation for the plugin is located here.

Steps
  • Create FTP EG
  • Create File EG
  • Create JMS EG
  • Create XML Cache Entry
  • Delete FTP EG
  • Delete File EG
  • Delete JMS EG
  • Delete XML Cache Entry
  • Update FTP EG
  • Update File EG
  • Update JMS EG
  • Update XML Cache Entry
Compatibility

This plugin is compatible with WebLogic Server versions 10g and later, and IBM UrbanCode Deploy 6.0.1 and later.

Oracle WebLogic Server Resource Management

The Oracle WebLogic Server Resource Management plug-in provides steps to support automated deployment of various WebLogic server resources, such as connection factories, quotas, templates, queues, JDBC data sources, file stores, and subdeployments.
Each step is self-contained; that is, the connection credentials required to connect to the WebLogic server are contained in each step. Each step's properties include location information for the following required files:

  • A JMX JAR file that contains connection information for accessing the WebLogic server
  • JMX properties files that define the objects

This plugin uses the wljmxclient.jar in the /WL_HOME/server/lib folder.
Full documentation for the plugin is located here.

Steps
  • Create Capacity
  • Create Connection Factory
  • Create Distributed Queue
  • Create Distributed Topic
  • Create File Store
  • Create JDBC Data Source
  • Create JDBC Store
  • Create JMS Server
  • Create Max Threads Constraint
  • Create Min Threads Constraint
  • Create Module
  • Create Queue
  • Create Quota
  • Create SAF Imported Destination
  • Create SubDeployment
  • Create Template
  • Create Topic
  • Create Work Manager
  • Delete Capacity
  • Delete Connection Factory
  • Delete Distributed Queue
  • Delete Distributed Topic
  • Delete File Store
  • Delete JDBC Data Source
  • Delete JDBC Store
  • Delete JMS Server
  • Delete Max Threads Constraint
  • Delete Min Threads Constraint
  • Delete Module
  • Delete Queue
  • Delete Quota
  • Delete SAF Imported Destination
  • Delete SubDeployment
  • Delete Template
  • Delete Topic
  • Delete Work Manager
  • Update Capacity
  • Update Connection Factory
  • Update Distributed Queue
  • Update Distributed Topic
  • Update File Store
  • Update JDBC Data Source
  • Update JDBC Store
  • Update JMS Server
  • Update Max Threads Constraint
  • Update Min Threads Constraint
  • Update Module
  • Update Queue
  • Update Quota
  • Update SAF Imported Destination
  • Update SubDeployment
  • Update Template
  • Update Topic
  • Update Work Manager
Compatibility

This plugin is compatible with WebLogic Server versions 10.3 and later, and IBM UrbanCode Deploy 6.0.1 and later.

Oracle WebLogic Server Security Management

The Oracle WebLogic Server security features provide end-to-end security for applications on the WebLogic server.
The Oracle WebLogic Server Security Management plug-in provides processes to work with WebLogic Server security configurations.
This plugin uses the wljmxclient.jar in the /WL_HOME/server/lib folder.
Full documentation for the plugin is located here.

Steps
  • Create Role Mapper
  • Create or Update Authentication Provider
  • Create or Update Realm
  • Manage Users and/or Groups
  • Manage Roles
  • Update Authentication Provider
  • Update Realm
Compatibility

This plugin is compatible with WebLogic Server versions 10g and later, and IBM UrbanCode Deploy 6.0.1 and later.

Oracle WebLogic WLDeploy

The wldeploy Ant task is used to complete weblogic.Deployer functions by using attributes that are specified in an Ant XML file. You can use the wldeploy Ant task with other WebLogic Server Ant tasks to create a single Ant build script. This plug-in also provides steps for deployment actions such as deploy, redeploy, and undeploy, and it can be used to start and stop WebLogic servers and clusters.
The Oracle WebLogic WLDeploy plug-in allows you to run a wldeploy Ant task as part of a deployment process.
This plugin uses the weblogic.jar in the /WL_HOME/server/lib folder, as well as the weblogic.Deployer command-line interface.
Full documentation for the plugin is located here.
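
As a rough sketch of the kind of Ant XML the wldeploy task consumes (the task definition relies on weblogic.jar being on the Ant classpath, and the admin URL, credentials, application name, and file path below are placeholders):

<taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy"/>
<target name="deploy-app">
  <wldeploy action="deploy" name="myApp" source="dist/myApp.war"
            user="weblogic" password="welcome1"
            adminurl="t3://adminhost:7001" targets="AdminServer"/>
</target>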

Compatibility

Versions 14 and later of the plugin require UrbanCode Deploy 6.1.1.2 or later; versions before 14 require UrbanCode Deploy 6.0.1 or later.

Steps

Check Application Status

The Check Application Status step compares a desired status for a given application to that application’s current status.

The step UI for the Check Application Status step.


Check Target Status

The Check Target Status step compares a desired status for a given target to that target’s current status. A target may be a server, list of servers, or cluster.

The step UI for the Check Target Status step.


List Applications on Targets

The List Applications on Targets step lists the applications that are deployed to a given target or targets.

The step UI for the List Applications on Targets step.


Run WLDeploy

This step can be used to deploy, undeploy, redeploy, distribute, start, or stop an application.

The step UI for the Run WLDeploy step.


Start Targets

This step can be used to start a server, a list of servers, or a cluster.

The step UI for the Start Targets step.


Stop Targets

This step can be used to stop a server, a list of servers, or a cluster.

The step UI for the Stop Targets step.


Wait for Application on Targets

The Wait for Application on Targets step waits for a specified application to reach a desired state on a target or targets.

The step UI for the Wait for Applications step.


WL Auto-Configure

The WL Auto-Configure step discovers and creates resources in UCD for an existing WebLogic installation.

The step UI for the WL Auto-Configure step.


WL Discovery

The WL Discovery step checks common installation paths to determine whether WebLogic is installed on an agent. If it is, the step assigns the WebLogic role to the resource.

The step UI for the WL Discovery step.

Please feel free to ask any further questions regarding the WebLogic plugins.

Categories: Companies

Testing in Future Space with ScalaTest

Testing TV - Wed, 06/07/2017 - 18:28
ScalaTest is a popular open source testing tool in the Scala ecosystem. In ScalaTest 3.0’s new async testing styles, tests have a result type of Future[Assertion]. Instead of blocking until a future completes, then performing assertions on the result, you map assertions onto the future and return the resulting Future[Assertion] to ScalaTest. The test will […]
Categories: Blogs

Testing Angular Applications

Software Testing Magazine - Wed, 06/07/2017 - 17:56
The best reason for writing tests is to automate your testing. Without tests, you will likely perform software testing manually. This manual testing will take longer and longer as your codebase...

Categories: Communities

Top Programming Skills for Software Testers

Gurock Software Blog - Wed, 06/07/2017 - 17:25

Top Programming Skills for Software Testers

This is a guest posting by Jess Ingrassellino. Jess is a software engineer in New York. She has pursued interests in music, writing, teaching, technology, art, and philosophy. She is the founder of TeachCode.org.

With literally thousands of programming languages and new technologies being created daily, figuring out what technical skills to learn can be overwhelming for testers. Fortunately, many web and mobile apps tend to work with a similar tech stack. This means that learning some core skills can help testers work in a variety of environments, large and small. Let’s talk about the most useful skills for software testers, and where to find information and training.

“Do the thing you think you cannot do.” – Eleanor Roosevelt

Front End Skills: HTML, CSS, JavaScript


Any tester wishing to know how to write automated UI tests will need to be familiar with HTML/CSS/JavaScript. Understanding these languages, and how they work together, enables a software tester to learn more about how a web application is built.

Knowing HTML and CSS helps testers to interpret code from a browser console as they investigate a page. It is a good starting point as HTML is the foundation behind all web pages. It’s used to add structure, text, images, and other types of media. CSS is the language used to style HTML content to create visually appealing web pages.

Knowing JavaScript is also helpful because it can be executed from within Selenium scripts, if needed.
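
For instance (assuming a driver object has already been created, as in the sample further below), a single hypothetical line in a Selenium Python script can run JavaScript directly in the page:

driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")  # scroll to the bottom of the page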

Let’s say you are a software tester and you are testing a form submission on a webpage. When you submit the form, you see an error. A tester who knows HTML/CSS/JS can open up the development tools in their browser, select the console option, and repeat their actions to reproduce the error. In the console, they will be able to see the JavaScript error that is thrown, and then use this information to either further investigate the issue or make a thorough bug report to developers.

Skills to Automate Regression: WebDriver, Ruby, Java, Python


Increasingly, testers are being asked to create automated UI tests for web applications. While there are many options, Selenium WebDriver (WebDriver) tends to be the most popular API for driving a browser. Ruby, Python, and Java are popular language choices for people wishing to work with Gherkin-style syntax and WebDriver. Another reason these languages tend to be popular is that many web apps use them in other capacities, making it easier for the tester to get help from developers in creating and maintaining the test suite.

For example, a tester may be asked to design an end-to-end regression test of a website that includes logging into the site. Using HTML and CSS skills, the tester can open the Elements panel in the browser's developer tools, locate the login box on the page, and inspect the element to find its id. The tester can then use Ruby, Java, or Python to write commands for the WebDriver API. Some sample code in Python might look like this:

from selenium import webdriver  # assumes the Selenium Python bindings are installed

driver = webdriver.Firefox()    # any browser with a WebDriver implementation works here
driver.get("https://www.website.com")
element = driver.find_element_by_id("login-id")
element.send_keys("yourname")

The above code goes to the website, locates the text box that accepts the login, and then enters the login name into the text box. Learning the WebDriver API, along with a language like Ruby, Java, or Python, is a key skill for building automated regression tests.

Working with Code: SVN, Git, and BASH


Sometimes, a tester is lucky enough to work in a large or mature company where they can develop their tests against an environment that just "exists". More typically, however, some environment setup is required. Also, testers who are writing UI automation will need to check in their own code. For maximum independence and flexibility, testers should also know a version control system. Two popular version control systems are SVN and Git, and both can easily be learned online with free resources.

In addition, testers should get to know the command prompt/command line and some simple Bash commands to move around both their computer and their code base. Bash commands can help the tester navigate files easily while writing tests. Additionally, it is generally necessary to use the command prompt to run automated UI tests and investigate the results of failed scenarios or features.

For example, a tester might be starting a job at a company whose code is kept in GitHub. They can run a 'git clone' command to get the code onto their computer. Bash commands would help the tester move files around, save changes to their test code, and push test changes up to Git for review. Some sample commands might look like this:

git clone <repository-url> (gets a copy of the repository onto your computer)
cd <repository-folder> (the Bash command "cd" means "change directory"; here, change into your company's repository)
git checkout -b your_shiny_new_branch (creates a new local branch just for your changes)
git add . (stages your changes; follow with "git commit" to save them)
git push origin your_shiny_new_branch (sends your branch to the repository)

There are, of course, many tech skills beyond this that software testers can learn, and there are no rules about where to start. Different companies can and will employ different technologies. However, these top tech skills can help software testers build a core understanding of web technologies that they can use to branch out into more varied technologies during their careers.

Places to Learn Programming Skills Online for Free


Find out more at TeachCode.Org.

HTML and CSS
Learn HTML and CSS – This course takes approximately 10 hours to complete.

JavaScript
Learn JavaScript – This course will teach you the most fundamental concepts in programming JavaScript. It takes approximately 10 hours to complete.

Computer Programming – Learn the basics, starting with Intro to programming. With the Khan Academy you can learn how to use the JavaScript language and the Processing.js library to create fun drawings and animations. There are also courses available that will enable you to combine HTML and JS for interactive webpages.

WebDriver
Online Selenium Tutorial for beginners in Java – Learn Selenium WebDriver automation step by step hands-on practical examples.

Ruby
Learn Ruby – In this course you can gain familiarity with Ruby and basic programming concepts, including variables, loops, control flow, and object-oriented programming. You will also get the opportunity to test your understanding in a final project which you'll build locally. The course takes approximately 9 hours.

Java
Learn Java – In this course you’ll learn fundamental programming concepts, including object-oriented programming in Java. You will also get the opportunity to build 7 Java projects, like a basic calculator, to help you practice along the way. This course takes approximately 4 hours to complete.

Python
Learn Python – This course is a great introduction to both fundamental programming concepts and the Python programming language. It takes approximately 13 hours.

Learn Python the Hard Way – This is the 3rd Edition of Learn Python the Hard Way. You can visit the companion site to the book at http://learnpythonthehardway.org/ where you can purchase digital downloads and paper versions of the book. The free HTML version of the book is available at http://learnpythonthehardway.org/book/.

Command Line
Learn the Command Line – Learning to use the Command Line will help you to discover all that your computer is capable of and accomplish a wider set of tasks more effectively and efficiently. This course takes approximately 3 hours.

SVN
Learn SVN – Apache Subversion, often abbreviated as SVN, is a software versioning and revision control system distributed under an open-source license.

Git
Learn Git – This course teaches you to save and manage different versions of code projects. It takes approximately 2 hours to complete.

Categories: Companies

7 Ways to Prioritize More Successfully During Test Planning

Testlio - Community of testers - Wed, 06/07/2017 - 17:00

Test planning is an essential part of software testing no matter the size of the project or the team. A test plan ensures product coverage, keeps testing in line with overall project goals, and keeps everything on schedule. A solid test plan is likely to include:

  • Business requirements
  • Features to be tested
  • Features NOT to be tested
  • Testing approach, criteria & deliverables
  • Properties of the test environment
  • Duties & tasks assigned to each tester

Creating a test plan in a single document is a straightforward way to keep testing accountable to a schedule, a budget, and client requirements.

Test planning should be strategic—never arbitrary. Luckily, there are some creative ways to break up this seemingly impossible task. Here’s how to prioritize more meaningfully during test planning, to get a next-level bird’s eye view.

Adopt a team-based approach during risk analysis

Testers naturally tend to conduct risk analysis, but often in an unstructured way. Ever run through a mental checklist of customer priorities, features prone to failure, and any changes since the last build? That’s a good way to start a test cycle.

But putting some structure around the process of risk analysis is even better. To review risks in a way that’s fully comprehensive, try bringing in people from other teams. Hold a risk analysis meeting with a business analyst, a customer support representative, and a developer. Pulling in other viewpoints in an organized way gives the maximum insight into the system to be tested.


Together, you can determine the likelihood of failure for each feature and its impact. Organize the results of the risk analysis around features most likely to break and the ones whose failure would be the most detrimental.
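
One simple way to organize that output (an illustrative convention, not the only one) is to score each feature on, say, a 1-5 scale for likelihood and a 1-5 scale for impact and multiply the two: a checkout flow rated likelihood 4 and impact 5 scores 20 and gets attention before a rarely used settings page rated 2 and 2, which scores 4.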

Ensure that test planning is prioritized around client concerns

The QA department can receive client concerns in a variety of mediums:

  • Forwarded emails
  • Recorded calls of the client and dedicated support
  • Input from developers
  • Direct communication

One of the most important things you can do during test planning is to consolidate and visualize all client concerns. Get yourself in the habit of tracking any input that comes your way (in any format). During the test planning phase, translate relevant client concerns into one master document, bulleted list, or virtual bulletin board—whatever works. Once consolidated and translated into the language of the project, it’ll be that much smoother to work into your plan.

Track the discovery process

Any QA manager designing a test plan must incorporate the existing documentation of a project with an initial discovery process of the application.

With agile projects, you may have little to no documented information, and instead learn about the project during various touch points along its lifecycle or end up exploring it nearly blind. It’s critical to track this discovery process of a new application, regardless of how much you know up-front.

Try recording your initial exploration in a way that will be easy to share with your testing team. You can create a mind map with different features and their functions, record video and/or audio, or simply take notes of your first impressions and then format them. This can be used to inform how you segment tasks, and can later help you introduce new testers to the project.

Create product mind maps

Mind maps can be used to keep track of client concerns, track the discovery process, and understand the product as a whole.

Plus they’re fun.

You probably learned how to make mind maps in grade school. Simply grab a large sheet of paper and start connecting various elements and forming relationships between features and functions. Mind maps, whether digital or analog, are a highly visual way to organize any sort of work, but they really come in handy in software testing, which can quickly become overwhelming.


Mind maps can help you outline a product so that it’s easier to assign individual testing tasks. More importantly, mind maps allow you to track coverage of an app.

Separate creative from repetitive tasks

Most apps, particularly mobile, benefit from a mixed approach to testing. Regression tests, scripted tests, and manual exploration come together to provide full coverage that ensures quality and meets business objectives.

Striking the right balance will depend on each project, but it’s always necessary to determine which functions require scripting and clear pass/fail criteria, and which should be hard cases—creative issues that are harder to find.

Depending on skill level, you can assign exploratory and scripted cases to each tester. This allows testers a creative break from sometimes boring scripts and provides the maximum number of viewpoints in terms of usability and reliability. In short, give testers room to really try and break things, while making sure that all the basic steps are covered.

Organize test planning around user personas

User personas deserve a whole mind map of their own. Instead of breaking the product up by features, break it up by users (and what those users are trying to accomplish).

A photo editing app, for example, could produce a variety of user stories: someone who simply needs to resize an image, someone who wants to crop, someone who needs to create and print out a birthday card, and someone making an infographic. And on and on.

Next, you take those stories and break them down into steps, layering those user-based steps on top of other approaches for complete product coverage.

Work backward: write the most complex test cases first

Quick timeline? Tight budget? Small team?

Maybe you don’t have time for scripted AND manual tests to overlap each other. Maybe you don’t have time for testing that’s organized around features AND user personas.

When you need to cover an entire product on a tight timeline, you need to work backward. Take the above photo editing example. Let’s say your user wants to crop, resize, and download an image as a different file type than what was originally uploaded.

When you start off with the most complicated use cases (and track each step), you'll find that most of the app gets covered. You can then search for any gaps and cover those in shorter, simpler test cases.

Scripting out the longest test cases first and having them covered early on in the test schedule allows for the entire process to move that much quicker.

All of these prioritization tips can be combined during test planning to make the cycle a success. When you're focused on full use of the product, know the client's concerns, and are aware of the risks, how to make the best use of your time and resources becomes that much clearer.

Categories: Companies

Why “Try another browser” is no longer acceptable

Users can be demanding, and you won’t find a more demanding group than social savvy millennials. These punks have grown up believing they pretty much know everything, are never wrong, and demand immediate, accurate responses when interacting on social.

If websites are slow or apps crash, it’s not just lost revenue – but also damage to your brand.

Last year we ran a survey and found 51% of millennials would turn to social media to complain if a website or app performs badly. Reading between the lines, application and digital experiences are no longer just an issue for IT.

These millennials believe, and rightly so, they are not a transaction.

When digital experiences fail, it's comical for the social observer, but it's no laughing matter for the brand or the social media team. Put simply, reply with accuracy; reply with speed; reply with intelligence…or else.

It’s no longer OK to reply with “try another browser”.

But we keep doing this:

I ran a simple search query and found that the phrase “try another browser” is mentioned quite frequently. And it’s not just small businesses, but brand and industry leaders, too. And, at this point, I should mention that I post this image with some trepidation because several of these companies are good customers of Dynatrace.

To me this suggests two things:

  1. There is an opportunity for some of these brands to take a competitive advantage
  2. Some of these social media and marketing teams need to head over to IT and have a chat about some of the crazy user-experience insights they can gain. Sorry Marketing, Google Analytics isn’t the answer.

Here is a snippet from a presentation I did last year that talks about why social media teams, and marketing teams, need access to digital experience data.

One of the unique capabilities of Dynatrace is the ability to see every user and every click, tap, and swipe on every single device. This gives you a distinct advantage by being able to see a user who is struggling to check out, failing to log in, or having trouble on their digital journey. If user experience is the single biggest differentiator, and social media is one of the most critical marketing communication channels, then aligning Dynatrace data to your social arsenal is really a no-brainer.

The post Why “Try another browser” is no longer acceptable appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Now on DevOps Radio: Perficient’s Ronda Kiser-Oakes on Hitting a Homerun with DevOps

Episode 19 of DevOps Radio features Ronda Kiser-Oakes, director of the North America DevOps practice at Perficient, a digital transformation consulting firm serving leading companies. Besides being an avid sports fan, Ronda is passionate about the business of IT and how DevOps is changing the way businesses innovate. Unlike her beloved Cubs, Ronda says companies don’t have to wait 108 years to see DevOps success.

While sitting down with host Andre Pino, Ronda shares her industry observations, saying that while many companies have started their DevOps journey, most do not know which route to take. She notes this is a "MapQuest" scenario where companies know they have to get from point A to point B, but there are too many different routes, and they want to ensure they reach milestones along the way. Ronda's approach is to assess the situation and bring vision to tactical strategy by identifying where to start, how to get to the next point, and which endpoint to reach first. Many of the processes that organizations look to employ, such as continuous integration, continuous delivery, continuous deployment, shift left, and agile, are really the endpoint of their DevOps initiatives.

Ronda also outlines the three elements every DevOps game plan should include: people, process and technology. With the DevOps journey now spanning so much of the organization - through DevSecOps, security, QA testing and operations - and different roles within the company investing in it, it’s important to keep the people aspect in mind. At Perficient, to get all parties aligned, they outline five ways to deliver services, which Ronda shares.

Like a true champion, Ronda fields Andre’s question about DevOps politics by saying that anytime you are changing the world in IT, there’s going to be politics but it is important to understand the culture and adjust to it, to stay neutral. She then compares DevOps and sports, highlighting the competitive edge in both. Ronda says while we all want to get to the Super Bowl and the World Series - or seamlessly deliver a successful product - there’s always competitiveness and naysayers, but by continuing to work hard, we can see how we can do better to stay true to the team. Her coaching advice for the ultimate DevOps dream team is to put the customer first and everybody wins.

Want more MVP DevOps advice? Swing on over to the CloudBees website, search us on iTunes or subscribe to DevOps Radio via RSS feed. Extra game points if you Tweet @CloudBees and include #DevOpsRadio in your post!

Categories: Companies

Telerik and Testdroid Webinar Bonus Q&A

Telerik TestStudio - Tue, 06/06/2017 - 15:04
In this blog, Ville-Veikko Helppi tackles some of the unanswered questions from the webinar.
Categories: Companies

How To Survive as a QA in a Software Development Team

Software Testing Magazine - Tue, 06/06/2017 - 10:47
It is not always easy to be a software tester in a software development team. Developers will often consider software quality assurance (QA) people as inferior and would wonder how they could...

Categories: Communities

Integrate Automated Testing into Jenkins

Ranorex - Tue, 06/06/2017 - 10:00

In software engineering, continuous integration means the continuous application of quality control processes — small units of effort, applied frequently.

In this blog we'll show you how to set up a CI job with Hudson/Jenkins that automatically builds and executes your Ranorex automation, and automatically sends out the generated test reports, for every change committed to a Subversion repository.

Advantages of Continuous Integration Testing

Continuous integration has many advantages:

  • When tests fail or bugs emerge, developers can revert the codebase to a bug-free state without wasting time debugging
  • Developers detect and fix integration problems continuously – and thus avoid last-minute chaos at release dates
  • Early warning of broken/incompatible code
  • Early warning of conflicting changes
  • Immediate testing of all changes
  • Constant availability of a “current” build for testing, demo, or release purposes
  • Immediate feedback to developers on the quality, functionality, or system-wide impact of their written code
  • Frequent code check-ins push developers to create modular, less complex code
Infrastructure Continuous Integration Tool

You can find download links and installation instructions for Hudson and Jenkins on their respective project websites.

In this blog post we are going to use Jenkins as the CI tool. There shouldn't be much of a difference when using Hudson.

Because Jenkins and the nodes executing the CI jobs are normally started as Windows services, they do not have sufficient rights to start UI applications.

Please make sure that the Jenkins master and the slave nodes on which the Ranorex automation should be triggered are not started as services.

For the Jenkins master, open the "Services" tool (which is part of the "Administrative Tools" in the control panel), choose the "Jenkins" service, stop it, and set its "Startup type" to "Disabled":

disable start as service

Use the following command to start Jenkins manually from the installation folder:

java -jar jenkins.war

manually start jenkins

After starting Jenkins, use this address to access the web interface:

http://localhost:8080/

To configure your Jenkins server, navigate to the Jenkins menu and select “Manage Jenkins” -> “Configure System”:

Configure System

Note: It is necessary to have the Ranorex main components – and a valid Ranorex license – installed on each machine on which you want to build and execute Ranorex code.

Source Code Management

As mentioned before, we are going to use a Subversion repository as base of our continuous integration process.

In this sample, we have two solutions in our repository: the application under test and the automated Ranorex tests.

Repository

To start the application under test from your test project, simply add a new "Run Application" action to your action table in Ranorex Studio; the action starts the application under test using a path relative to the repository root:

Run Application Action

Plugins

As we want to build our code for each committed change within our SVN repository, we need a Subversion plugin as well as an MSBuild plugin for Jenkins. An additional mail plugin will make sure that an email is sent with each build.

Install Plugins

Open the "Manage Plugins" section ("Manage Jenkins" -> "Manage Plugins"), choose the plugins mentioned above (Subversion, MSBuild, and the email notification plugins) from the list of available plugins, and install them if they are not already installed:

Configure Plugins

The installed plugins also need to be configured. To do so:

  • Open the "Configure System" page and configure the "Extended E-Mail Notification" plugin: set the recipients and alter the subject and content (adding the environment variable $BUILD_LOG to the content will add the whole console output of the build and the test to the sent mail).
    Configure Mails
  • Configure the "E-mail Notification" plugin by setting the SMTP server.
  • Navigate to "Global Tool Configuration" and configure your "MSBuild" plugin by choosing the "msbuild.exe" installed on your machine.
    Configure MSBuild
Add New Job

Now that the system is configured, we can add a new Jenkins job that will update the checked-out files from the SVN repository, build both the application under test and the Ranorex automation project, execute the application under test along with the automation code, and send an email with the report file attached.

Start by creating a new item. Choose “Build free-style software project” as job type and enter a job name:

Add New Item

Configure Source Code Management

Next, we have to check out the source of both the application under test and our test automation project. Start by choosing Subversion as the source code management tool. Then, enter the repository holding your application under test as well as your test automation project. Finally, choose "Use 'svn update' as much as possible" as the check-out strategy:

Configure SVN

With this configuration, the application under test as well as the test automation project will be checked out and updated locally.

Add Build Steps

Now that the source code management is configured, we can start processing the updated files.
First of all, let's add MSBuild steps for both projects:

Add MSBuild Buildstep

Choose your configured MSBuild version and enter the path of the solution file relative to the repository root (which is the workspace folder of the Jenkins job) for both the application under test and the test automation project:

Added MSBuild Buildsteps

Add “Run a Ranorex Test Suite” Step

With these two build steps in place, the executables will be built automatically. Next, the newly built application should be tested.

This can be accomplished by adding a new “Run a Ranorex test suite” build step that starts the test suite executable:
Add-Ranorex-Build-Step
Configure-Ranorex-Build-Step

How to set up the "Run a Ranorex test suite" build step

  • Ranorex test suite file: Enter the path to the test suite file (*.rxtst) located in the output folder of your solution.
  • Ranorex run configuration: Enter the exact name of the run configuration you want to use. By default, the run configuration currently selected in the test suite is used. If you want to create or edit run configurations, please use Ranorex Studio or the Ranorex Test Suite Runner.
  • Ranorex report directory: Specify the directory that your report (and accordingly the JUnit report file) will be saved to. If you don’t specify a path, the directory where your test executable is located will be used.
  • Ranorex report file name: Specify the file name of the generated report. By default, the file name specified in the test suite settings is used.
  • JUnit-compatible report: If checked, Ranorex will create both a JUnit-compatible report and a Ranorex report.
  • Compressed copy of Ranorex report: Compresses the report and the associated files into a single archive with the extension .rxzlog. The following additional input fields will appear when this option is enabled:
    • Compressed report directory: Allows you to specify the directory that your compressed report will be saved to. If you don’t specify a path, the directory where your test executable is located will be used.
    • Compressed report file: Allows you to specify the file name of the compressed report file. By default, the file name specified in the test suite settings is used.
  • Global parameters: Allows you to create or override values for global parameters set in the test suite. Enter parameters according to the following pattern: “ParameterName=Value”. Separate parameters with semicolons or newlines.
  • Command line arguments: Allows you to add additional test suite command line arguments. Please refer to our user guide to get a list of all available command line arguments.

The test suite executable returns “0” on success and “-1” on failure. Based on this return value, Jenkins will mark the build as successful or failed.

Add Post-Build Action

After executing the automated tests, we will publish a JUnit test result report to the Jenkins build. For this reason, we created a JUnit-compatible copy of the report file by checking the corresponding option. Now we can add a "Publish JUnit test result report" post-build action to the build job and define the test report XML pattern that points to the JUnit report file.

Added JUnit Action

Additionally, we will send an email that informs us about the success of the triggered build.
This email should include the zipped report file mentioned before as an attachment.
To do so, add the new post-build action "Editable Email Notification", choose the report file location defined before as the attachment, and add triggers for each job status you want to be informed about. In this sample, an email will be sent if a job has failed or succeeded.

Added Mail Action

Run Job

Once you’ve completed these steps and saved your changes, check if everything works as expected by clicking “Build now”:

Build Now

After running the generated job, you will see all finished builds within the build history. Icons indicate the status of the individual builds. If you click on a specific build, you will see the detailed JUnit test results. You can view the zipped report files of all builds by opening them in the local workspace (“Workspace/Reports”):

Build History

As configured before, an email will be sent to the specified email address(es), including the console output in the email text as well as the generated zipped report file as attachment.

Add Repository Hook

Now we can trigger a build manually. As we are working with Subversion, it would be beneficial to trigger the job for each commit.
To do so, you can add a server-side repository hook, which automatically triggers Jenkins to start a new build for each committed change, as described in the Subversion plugin documentation.

Alternatively, you can activate polling of the source code management system as build trigger in your Jenkins job configuration.

As shown in the following picture, you can define the interval at which the source code management system will be polled (e.g. 5 minutes after every full hour):

Added Build Trigger
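
For example (Jenkins poll schedules use cron-style syntax), a schedule of 5 * * * * would poll the repository five minutes past every full hour and start a build only if a change is found.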

Conclusion

Following the steps above, you will be able to easily set up a continuous integration process that performs automated testing of the application you develop. Each commit will now trigger an automated test run. Once the test run has finished, you'll instantly receive an email with the Ranorex test report.

Note: This blog was originally published in July 2012 and has been revised to reflect recent technical developments.

Download Free 30-Day Trial

The post Integrate Automated Testing into Jenkins appeared first on Ranorex Blog.

Categories: Companies

Book Review: Scaling Teams

thekua.com@work - Mon, 06/05/2017 - 23:31

This weekend I finished reading Scaling Teams by Alexander Grosse & David Loftesness.

I know Grosse personally and was looking forward to reading the book, knowing his own personal take on dealing with organisations and the structure.

tl;dr Summary

A concise book offering plenty of practical tips and ideas of what to watch out for and do when an organisation grows.

Detailed summary

The authors of the book have done a lot of extensive reading and research, and have talked to lots of people in different organisations to understand their take on how they have grown. They have taken their findings and opinions and grouped them into five different areas:

  • Hiring
  • People Management
  • Organisational Structure
  • Culture
  • Communication

In each of these different areas, they describe the different challenges that organisations experience when growing, sharing a number of war stories, warning signs to look out for, and different approaches to dealing with them.

I like the pragmatic "there's no single answer" approach to a lot of their advice, as they acknowledge in each section the different factors that might lead you to favour one option over another, and the trade-offs you will want to think about. In doing so, they make some of these trade-offs a lot more explicit, and equip new managers with different examples of how companies have handled some of these situations.

There are a lot of links to reading materials (which, in my opinion, were heavily web-centric content). The articles were definitely relevant and up to date in the context of the topics being discussed, but I would have expected that for a freshly published book. A small improvement would have been to have them all grouped together at the end in a references section, or perhaps (hint hint) they might publish all the links on their website.

What I really liked about this book is its wide-reaching, practical advice. Although the book is aimed at rapidly growing start-ups, I find the advice useful for many of the companies we consult for, which are often already considered very successful businesses.

I’ll be adding it to my list of recommended reading for leaders looking to improve their technology organisations. I suggest you get a copy too.

Categories: Blogs

High-Impact Usability Testing That’s Actually Doable

Testlio - Community of testers - Mon, 06/05/2017 - 19:30

Testlio is all about helping our customers make their customers happy. We believe that when it comes down to it, that’s the point of QA.

Getting as much real feedback as possible is critical to ensuring the highest quality user experience. The exploratory method helps to confidently cover apps while mimicking true user behavior, but only 29% of mobile developers actually conduct exploratory testing. Due to budget and time constraints, customer-focused QA gets cut, leading to poor user experiences and ultimately low retention rates.

If QA performed by testers gets easily cut, you can believe that few companies invest in usability testing, which is conducted by actual users.

Getting real users in a room and recording their activity while they use your app sounds really complicated, time-consuming, expensive, and a little nerve-wracking, right?

It is.

Traditional usability testing

Usability testing traditionally involves a set of users and a researcher. The researcher will work with the users one by one, sitting in an informal lab or room where the user’s behavior can be recorded, such as with screen and voice recordings of a website.

Here’s an example of a website under usability testing:

Video: Toshiba – usability test clip from Experience UX on Vimeo: https://vimeo.com/114778650

The researcher will give small prompts or ask questions to move the user along. On a shopping app or site, for example, the researcher may ask them, "How would you find a pair of pants?" Then the researcher could watch whether they use the search bar or navigate from the menu to find clothes by category.

Keeping the language neutral and asking "how" questions is important to avoid directing the user too much. Sometimes, the researcher may simply hand them a mobile phone loaded with an app and say, "What do you make of this?" and let the user take it from there.

The usability report delivered by the researcher is seen as a huge learning experience for developers. This method can reduce the risk of building the wrong products for the wrong people, help uncover current quality enhancement opportunities, and impact future decision making.

For most companies, user experience metrics are a much more affordable, viable option for collecting user inputs. That’s why 84% of companies plan to increase their focus and investment on user experience metrics.

While UX metrics can impact day-to-day decisions for both devs and QAs, measurements can’t replace the raw, qualitative inputs of real users. There’s a pretty big difference between knowing how many seconds a user was logged on and what they were thinking during that time.

But there’s a simpler way to conduct usability testing, no lab required.

Why the “guerrilla method” is a valid option

In a nutshell, guerrilla usability testing means asking your friends, family, and complete strangers to use your app.

It doesn't matter whether you're baking brownies, building an app, or writing a book: everyone has a different opinion on whether or not it makes sense to request input from your friends.

Will your neighbor, your best buddy, and your mom really be able to give you the feedback you need? Perhaps not.

But the guerrilla method isn't necessarily unstrategic or unstructured. It also doesn't have to be so personal. QA analysts for medical office software need not ask their yoga teachers to explore the product.

By getting the answers to four basic questions, a usability testing plan can be developed:

  • What
  • Where
  • Who
  • How

What – Users are not testers. You cannot assume that they will explore the product in its entirety. For that reason, QAMs strategizing usability testing need to identify critical areas. Maybe these are users' favorite features or tasks. The product should be broken down into a few different themes. Each user may or may not test each theme.

Where – With guerrilla testing, you have to pounce on people in the right location. A Starbucks cafe, a college campus, an e-commerce conference? What makes sense? The point is finding the right people and also working with them in the right environment. B2B products should be explored in an office setting to elicit life-like use. Desktop or web apps (as opposed to mobile) will require comfortable seating for participants.

Who – Choosing the "where" goes hand in hand with choosing the "who." Simply picking the location might knock out the "who" question altogether, as anyone in that location who doesn't appear to be in a rush is worth approaching. The "who" question might be one of demographics, or, if the product is for the mass market, it could be anyone in a public space.

How – Most importantly, you have to keep the testing open-ended. Don’t tell them to complete such-and-such task. Ask them what they think about something, how they would accomplish something, or what they would even do with your app in general. It’s also important to record the session.

What about when many of your users are out of reach? You may have users spread around the globe, using different devices and speaking different languages. Buy a plane ticket. Just kidding!

Testlio’s testing community is a huge asset to companies with a global user base. Our QAMs pair up the right testers in the right places, to incorporate language and cultural differences as well as technical environment changes.

Testers may not be users, but they can “put on their user hats” as Testlio Head of QA Kristi Kaljurand likes to say.

Tips for succeeding at informal usability testing

Usability testing is very different from any other QA project. It will seem exciting to the extroverts and daunting to the introverts. To make sure that a day or two spent on usability testing truly benefits the organization, consider the following:

  • Customize a release form template and have participants sign it, so they agree to how the recordings will be internally or externally shared and used.
  • Use a screen recorder like UX recorder or a similar tool. Use a voice recorder for verbal feedback.
  • Elicit as much verbal feedback as possible by asking users to talk you through what they are doing.
  • Be polite: don't waste too much of anyone's time, and word questions in a way that participants don't feel as if they, rather than the product, are being tested.
  • Create a finalized report by consolidating and categorizing similar points of feedback, identifying unexpected workflows, and including particularly valuable quotes.

A quick, relatively informal guerrilla usability testing project can produce invaluable information for an organization: the knowledge of how users really respond. The organization can find out: What are we getting wrong? What are we getting right?

Combined with UX metrics and functional testing, companies can feel confident that they not only provide excellent user experiences but that they are armed to make the right product decisions in the future.

Insight into real user behavior should never feel out of reach.

For testing focused on customer delight, contact us for a demo.

Categories: Companies

Medical Device Security: A New Look at Open Source Software

Sonatype Blog - Sun, 06/04/2017 - 22:15
We all do it. When we sense something wrong with our health, we often go to the internet, plug in our symptoms and try to diagnose the issue.   In our ever-connected world, we are not the only ones using the internet.  In order to improve the effectiveness and safety of our...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Taking Note

Hiccupps - James Thomas - Sat, 06/03/2017 - 07:40

In Something of Note, a post about Karo Stoltzenburg and Neil Younger's recent workshop on note-taking, I wrote:
I am especially inspired to see whether I can distil any conventions from my own note-taking ... I favour plain text for note-taking on the computer and I have established conventions that suit me for that. I wonder are any conventions present in multiple of the approaches that I use?

Since then I've been collecting fieldstones as I observe myself at work, talking to colleagues about how they see my note-taking and how it differs from theirs, and looking for patterns and lack of patterns in that data.

Conventions

I already knew that I'd been revising and refining how I take notes on the computer for years. Looking back I can see that I first blogged about it in The Power of Fancy Plain Text in 2011 but I'd long since been crafting my conventions and had settled on something close to Mediawiki markup for pretty much everything. And Mediawiki's format still forms the basis for much of my note-taking, although that's strongly influenced by my work context.

These are my current conventions for typed notes:
  • * bullet lists. Lots of my notes are bullets because (I find) it forces me to get to "the thing"
  • ... as a way to carry on thoughts across bullets while preserving the structure
  • > for my side of a conversation (where that is the context), or commentary (in other contexts)
  • / emphasis
  • " for direct quotes
  • ---- at start line and end line for longer quoted examples, code snippets, command line trace etc
  • ==, ====, ==== etc for section headers
  • +,-,? as variant bullet points for positive, negative, questionable
  • !,? as annotations for important and need-to-ask

These are quick to enter, being single characters or repeated single characters. They favour readability in the editor over strict adherence to Mediawiki, e.g. I use a slash rather than repeated single quotes for emphasis because it looks better in email and can be search-replaced easily.

I am less likely to force a particular convention on paper and I realise that I haven't put much time into thinking about the way I want to take notes in that medium. Here's what I've come up with by observation:
  • whole sentences or at least phrases
  • quotation marks around actual quotes
  • questions to me and others annotated with a name
  • starring for emphasis
  • arrows to link thoughts, with writing on the arrows sometimes
  • boxes and circles (for emphasis, but no obvious rhyme or reason to them)
  • structure diagrams; occasional mind map
  • to-do lists - I rarely keep these in files
  • ... and I cross out what I've done
  • ... and I put a big star next to things I crossed out that I didn't mean to

Why don't I care to think so hard about hand-written notes? Good question. I think it's a combination of these factors: I don't need to, I write less on paper these days, the conventions I've evolved intuitively serve me well enough, it is a free-form medium and so inventing on the fly is natural, information lodges on paper for a short time - I'll type up anything I want to keep later.

Similarities and Differences

I want to get something of that natural, intuitive spirit when typing too, although I'm not expecting the same kind of freedom as a pen on paper. What I can aim for is less mediation between my brain and the content I'm creating. To facilitate this I have, for example:
  • practised typing faster and more accurately, and without looking at my fingers
  • learned more keyboard shortcuts, e.g. for navigating between applications, managing tabs within applications, placing the cursor in the URL bar in browsers, and moving around within documents
  • pinned a set of convenient applications to the Windows taskbar in the same order on all of the computers I use regularly
  • set up the Quick Access Toolbar in Office products, and made it the same across all Office products that I use
  • made more use of MRU (most recently used) lists in applications, including increasing their size and pinning files where I can

With these, for example, I can type Windows-7, Alt-5 to open Excel and show a list of recently-used and pinned files. Jerry Weinberg aims to record his fieldstones within five seconds of thinking of them. I don't have such strict goals for myself, but I do want to make entering my data as convenient as possible, and as much like simply picking up a notepad and turning to the page I was last working on as I can.

That's one way I'm trying to bring my hand and typed note-taking closer together in spirit, at least. There are also some content similarities. For instance, I tend to write whole sentences, or at least phrases. Interestingly, I now see that I didn't record that in my list of conventions for typed notes above. Those conventions concentrate solely on syntax and I wonder if that is significant.

I don't recall an experiment where I tried hard not to write in sentences. The closest I can think of is my various attempts to use mind maps, where I find myself frustrated at the lack of verbal resolution that the size of the nodes encourages - single words for the most part. Again, I wonder whether I don't trust myself enough to remember the points that I had in mind from the shorter cues.

In both hand and typed notes, I overload some of the conventions and trust context to distinguish them. For example, on paper I can use stars for emphasis or specifically to note that something needs to be considered undeleted. On screen I'll use ? for questions and also uncertainty. I also find that I rarely start numbered lists because I don't want the overhead of going back and renumbering if I want to insert an item into the list.

Something else that I do in both cases is "layering". In Something of Note I mentioned that I'd shown my notes to another tester and we'd observed that I take what I've written and add "layers" of emphasis, connections, sub-thoughts, and new ideas on top of them. (Usually I'll do this with annotations, or perhaps sidebars linked to content with arrows.)

Similarly, one of my colleagues watched me taking notes on the computer during a phone call and commented on how I will (mostly unconsciously) take down points and then go back and refine or add to them as more information is delivered, or I have commentary on the points I've recorded.

There are some differences between the two modes of note-taking. One thing that I notice immediately is that there is no equivalent to doodling in my computer-based notes, whereas my hand-written notes are covered in doodles. I don't know what to conclude from that.

Also, I will use different textual orientations in my written notes, to squeeze material into spaces which mean it is physically co-located with text that is related to it in some way. I don't have that freedom on screen and so any relationships have to be flagged in other ways, or rely on e.g. dynamically resizing lists to add data - something that's less easy on paper.

Where I am aggregating content into a single file over time - as I do with my notes in 1-1 meetings - I almost always work top-down so that the latest material is at the bottom and I can quickly scroll up to get recent context. (I find this intuitive, but I know others prefer latest material at the top.)

Because I don't aggregate content over time in the same way on paper, I don't have quite the same option. I write all of my notes into the same notebook, regardless of context (though I may start a new page for a new topic) so I don't have lots of places to look for a particular note that I made.

Within a notebook, I can flick back through pages to look for related material. I date-stamp my notebooks with a sticker on the front so that I can in principle go back to earlier books, but I rarely do either over periods anything longer than a handful of days.

One other major difference - a side-effect, but a significant one - is that I can easily search my computer notes.

Choosing

I found that there are situations where I'll tend to use one or other of the note-taking techniques, given free choice. I prefer hand-written notes for:
  • technical meetings
  • meetings where it's less important that I maintain a record
  • meetings where typing would be intrusive or colleagues have said they find it distracting
  • informal presentations, our Team Eating brown bag lunches, local meetups
  • face-to-face job interviews
  • team meetings
  • to-do lists
  • when I need to make diagrams
  • when I don't have access to my computer

Whereas computer-based notes tend to be used for:
  • 1-1 (whether I'm the manager or the report)
  • writing reports
  • writing testing notes (including during sessions)
  • writing blogs
  • where I'm trying to think through an idea
  • when I want to copy-paste data from elsewhere or use hyperlinks
  • when I want to not have to write up later
  • when I want to be able to continue adding content over an extended period of time 

And there are occasions where I use both in tandem. For example, when engaged in testing I'll often record evidence in screenshots and drop the file location into my notes.

I might sketch a mind map on paper to help me to explore a space, then write it up in an editor because that helps me to explore the nature of the relationships.  This is probably a special case of a more general approach where I'll start on paper and switch to screen when I feel I have enough idea - or sometimes when I don't - because editing is cheaper on the computer. From Tools: Take Your Pick:
Most of my writing starts as plain text. Blog posts usually start in Notepad++ because I like the ease of editing in a real editor, because I save drafts to disk, because I work offline ... When writing in text files I also have heuristics about switching to a richer format. For instance, if I find that I'm using a set of multiply-indented bullets that are essentially representing two-dimensional data it's a sign that the data I am describing is richer than the format I'm using. In particular, I will aggressively move to Excel for tabular data. (And I have been refining the way I use Excel for quick one-off projects too; I love tables.)

Reflections

I am an inveterate note-taker and I think I'll always prefer to record more rather than less. But when it comes to the formatting, I'll always prefer less over more. For me, the form should serve the content and nothing else, and a simpler format is (all other things being equal) a more portable format.

It appears that I'm happy to exploit differences where it serves me well, or doesn't disadvantage me too much - I clearly am not trying to go to only hand-written or only computer-based notes. But I do want to reduce variation where it doesn't have value because it means I can switch contexts without having to switch technique and that means a lower cost of switching, because I might already be switching domain, task, type of reasoning etc. In a similar spirit, I am interested in consolidating content. I want related notes in the same place by default.

But I'm not a slave to my formatting conventions: something recorded somehow now is better than nothing recorded perfectly later. I will tend to do the expedient over the consistent, and then go back and fix it if that's merited. I very deliberately default to sticking to my conventions but notice when I find myself regularly going against them, because that indicates that I probably need to change something.

Right now I am in the process of considering whether to change from ---- at the start and end of blocks to using three dashes and four dashes at start and end respectively. Why? Because sometimes I need to replace the blocks with <pre> and </pre> tags for the wiki. Marking up the start and end with the same syntax doesn't aid me in search-replacing.
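
As a minimal sketch of why distinct markers help (assuming the markers sit alone on their own lines, with three dashes opening a block and four dashes closing it), each marker can then be mapped unambiguously to its own tag in a single pass:

    import re

    # Minimal sketch: distinct start (---) and end (----) markers each map to their own tag.
    def to_pre(text):
        text = re.sub(r"^-{4}$", "</pre>", text, flags=re.M)  # end marker first
        text = re.sub(r"^-{3}$", "<pre>", text, flags=re.M)   # then the start marker
        return text

With identical ---- markers at both ends, no simple substitution can tell whether it is looking at the start or the end of a block.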

When I am trying to introduce some new behaviour, I will force myself to do it. If I fail, I'll go back and redo it to help to build up muscle memory. I think of this as very loosely like a kata. For example, I was slower at typing for a while when I started to type in a more traditional way, but I put up with that cost in the belief that I would ultimately end up in a better place. (And I did.)

I think that my computer note-taking is influencing the way that I write non-note content. To give one illustration: over the years I have evolved my written communications (particularly email) to have a more note-like structure. I am now likely to write multiple one-sentence paragraphs, pared back to the minimum I think is necessary to get across the point or chain of reasoning that I want to deliver.

Likewise, I try to write more, shorter paragraphs in my blog, because research I've read, and my own experience, is that this is a more consumable format on screen.  (After seeing how much content I'd aggregated for this blog post, I considered splitting it too.)

I use text files as repositories of related information, but I also sometimes have a level of organisation above the file I'm working in. I'm recruiting as I write this. If, after I review a CV, I want to talk to the candidate, I start a text file in the folder I'm maintaining for this round of recruitment. My notes on the CV go there, as do questions I'll ask when we speak. On the phone I'll type directly into the file, recording their answers, my thoughts on their answers, new questions I want to ask and so on. At the end of the interview, I'll briefly review and note down my conclusions in the file too.

The same technique applies to my team. I have a weekly 1-1 with each member of my team and an annual review cycle. I make a folder per person, inside that a folder per cycle, and inside that a text file called Notes.txt. In 1-1 I will enter notes while we talk. Outside of 1-1 I'll drop thoughts, questions, suggestions and so on into the file in preparation for our next meeting. Over time, this becomes an historical record too, so I can provide longitudinal context to discussions.

This stuff works for me - or at least, is working for me right now better than anything else I've tried recently and given the kinds of assessments I've made of it - but none of it is set in stone. My overarching goal is to be efficient and effective and I'm always interested in other people's conventions in case I can learn something that helps me to improve my own.
Image: https://flic.kr/p/iXmQCZ
Categories: Blogs

Request attributes: Simplify request searches & filtering

Dynatrace tracks all requests from end-to-end and automatically monitors the services that underlie each transaction. The performance and attributes of each request can be analyzed in detail. You can even create custom, multi-faceted filters that enable you to analyze call sequences from multiple angles. With such advanced request filtering, Dynatrace enables you to slice and dice your way through your requests to find the proverbial “needle in a haystack.” Until now such filtering was only possible on certain predefined attributes. With the latest Dynatrace release, you can now configure custom request attributes that you can use to improve filtering and analysis of problematic web requests.

What are request attributes?

Request attributes are essentially key/value pairs that are associated with a particular service request. For example, if you have a travel website that tracks the destinations of each of your customers’ bookings, you can set up a destination attribute for each service request. The specific value of the destination attribute of each request is populated for you automatically on all calls that include a destination attribute (see the easyTravel destination attribute example below).
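
For example, a hypothetical booking request might carry the destination as a simple query parameter, which is exactly the kind of key/value pair a request attribute captures (the URL and parameter name below are illustrative only):

    from urllib.parse import urlencode

    # Hypothetical booking request; "destination" is the kind of parameter a
    # request-attribute rule can capture as a key/value pair on the request.
    url = "https://easytravel.example.com/book?" + urlencode({"destination": "Berlin"})
    print(url)  # https://easytravel.example.com/book?destination=Berlin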

Request attributes

If an attribute exists on multiple requests within a single PurePath then the attribute is applied to each request. In such instances, a single attribute may have varying values for each transaction request in the PurePath. You can even have multiple attributes on the service calls within a single PurePath. This makes request attributes a powerful and versatile feature when combined with Dynatrace advanced filtering.

In the image below you can see that the easyTravel User attribute exists on the triggering request (/services/AuthenticationService/authenticate) as well as on the authenticate request of the AuthenticationService that is being called. In this example the value is the same on both; however, in your application the values might differ.

Request attributes

Create a request attribute

To configure a request attribute:
  1. Go to Settings > Server-side monitoring > Request attributes.
  2. Click the Create new request attribute button.
  3. Provide a unique Request attribute name. You can rename an attribute at any point in the future.
  4. Request attributes can have one or more rules. Rules define how attribute values are fetched.
Request attribute rules

Have a look at the example request attribute rule below. Note that the request attribute destination can obtain its value from two different sources: either an HTTP POST parameter (iceform:destination) or an HTTP GET parameter (destination). Rules are executed in order. If a request meets the criteria for both rules, its value is taken from the first rule.
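
Conceptually, this first-match-wins ordering works like the sketch below (a simplification that treats each rule as a parameter lookup; it is not Dynatrace's internal implementation):

    def resolve_attribute(request_params, ordered_sources):
        """Return the value from the first source that is present on the request."""
        for source in ordered_sources:            # rules are evaluated in order
            if source in request_params:
                return request_params[source]     # first matching rule wins
        return None

    # Prefer the POST parameter, fall back to the GET parameter
    value = resolve_attribute({"destination": "Berlin"},
                              ["iceform:destination", "destination"])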

Each rule needs a source. In the example below, the request attribute source is a web request HTTP GET parameter (destination).

This GET parameter will be captured on all monitored processes that support code-level insight, and it will be reported on all requests that are monitored by Dynatrace.

While this is convenient, it’s not always what’s needed. This is why you can restrict rules to a subset of process groups and services. To do this, select process group and service names from the four drop-lists above to reduce the number of process groups and services that the rule applies to.

You may not be interested in capturing every value. In other cases, a value may contain a prefix that you want to check against. To do this, specify that the designated parameter should only be used if its value matches a certain value. You can also opt not to use an entire value, but instead extract a portion of it. The example below is set up to only consider iceform:destination HTTP POST parameters that begin with the string Journey:. This approach extracts everything that follows the string Journey: and stores it in the request attribute.
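
In effect, the prefix check and extraction behave like the sketch below (the parameter and prefix come from the example above; the function name is illustrative only):

    def extract_destination(raw_value, prefix="Journey:"):
        # Only use values that start with the prefix; store what follows it.
        if raw_value is not None and raw_value.startswith(prefix):
            return raw_value[len(prefix):].strip()
        return None                              # value is ignored

    extract_destination("Journey:Zurich")   # -> "Zurich"
    extract_destination("Hotel:Zurich")     # -> None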

Requests can have as many attributes as you want.

Request attributes on service pages

Once you’ve defined your attributes, go to any service page where you expect to see your defined request attributes. Have a look at the Top requests section (see example below). The requests now feature attribute labels indicating that at least some of the respective requests contain the new request attribute. Click any request attribute to filter the entire page view down to only those requests that carry the selected attribute.

This includes both the chart at the top of the page and the request table further down the page. Any further analysis you do is likewise focused on these same requests.

Service flow only shows those requests that contain the easyTravel destination request attribute.

A new Request attributes tab has been added next to the Top requests tab. This tab lists the request attributes that occur on the requests of the page. The table reflects the current filter settings and shows the same metrics as the request table.

There are four request attributes included in the example below. The Median response time is the median response time of all requests that contain the request attribute. Total time consumption represents the sum of response times of all requests in the selected timeframe that have the selected request attribute.
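
In other words, both metrics are derived directly from the response times of the matching requests, roughly as follows (the response times below are made-up figures for illustration):

    from statistics import median

    # Hypothetical response times (ms) of all requests carrying a given attribute
    response_times = [120, 95, 310, 150, 98]

    median_response_time = median(response_times)    # "Median response time"
    total_time_consumption = sum(response_times)     # "Total time consumption"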

You can also view the corresponding throughput metrics. In the example below, there were 2,400 requests that dealt with easyTravel JourneyId and the current throughput is 16/min.

Request attributes do of course have values. You can see the values by expanding any attribute row. The table below shows the throughput numbers for all requests that contain the easyTravel destination attribute, broken out into the Top 18 values.

Here again, click any request attribute key/value pair to narrow down the results on the page to just those requests that include the selected attribute value. For example, the chart below only shows those requests that have the attribute key/value pair easyTravel destination = Zurich.

Request attributes in service analysis

Request attributes can be leveraged across all service analysis views. The service flow below shows the transaction flow of 52 Requests. 73% of the requests make about 10 calls to JourneyService. The service flow is filtered with the request attribute key/value pair destination = Berlin. This means that all 52 requests on the easyTravel Customer frontend service have a request attribute destination with the value Berlin!

We can add additional filters on JourneyService for other attributes that exist on these requests. The following service flow only shows requests that have the attribute destination = Berlin on the easyTravel Customer Frontend request and also make POST requests to JourneyService.

request attribute

This filtering approach works across all levels of all service analysis views.

Protect confidential attribute values

Because request attributes can include confidential values, Dynatrace makes it possible to hide sensitive data from certain user groups and restrict who can define the data items that are captured within request attributes. To define or edit a request attribute, users must have the Configure capture of sensitive data permission.

If you select the Request attribute contains confidential data check box (see below), only users who have the View sensitive request data permission will be able to see the values of the attribute and use the attribute as a filter. The attribute values are hidden from all other users.
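
Conceptually, the value masking behaves like this sketch (the permission name is abbreviated here; this is not the actual Dynatrace code):

    def displayed_value(value, user_permissions):
        # Only users with the "View sensitive request data" permission see the value;
        # everyone else sees that the attribute exists, but not its content.
        if "view_sensitive_request_data" in user_permissions:
            return value
        return "*****"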

The request attribute table still indicates to unauthorized users that the attribute exists and provides overall request numbers, but the values are hidden (see example below).

Looking at the PurePath, you can see that the actual JourneyId is hidden because this user doesn’t have permission to view confidential data.

What’s next?

At the moment, we only allow the capture of web request headers and parameters. Soon we’ll extend the functionality to make it even more versatile. We also plan to further expand the use of request attributes across Dynatrace. So, please stay tuned.

The post Request attributes: Simplify request searches & filtering appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Hot from the lab: Latest releases center on unification, scalability and powerful analytics that tie back to business metrics

Innovation has always been at the core of the Dynatrace culture. We invest heavily in our product development so that our 500+ global R&D experts continue to break new ground in APM.

This is an exciting post for me, as I get to share highlights of the most recent advancements.

I even shot a quick video that hits some of the big changes.

Visually Complete

Visually complete puts you in the user’s seat and captures the precise visual experience of all your real users. Combined with Speed index, which shows how fast your page loads, and synthetic transaction data, you can now see exactly how digital performance is impacting revenue, bounce rates and conversions. Available July.

Enhanced visuals – make everyone a performance expert

The Dynatrace experience has never been more equipped to unite teams around the metrics that matter. Fresh dashboards and our AI-powered analytics give everyone in your business precise answers to complex problems – stay high level or dive into the detail – everyone can be a performance expert.

Unifying enterprise monitoring

Heterogeneous IT landscapes continue to surge in complexity and scale. On the flip side, our customers are simplifying, taking advantage of our enterprise-wide, full-stack solution that does away with monitoring in silos. From microservices to APIs, mobile to mainframe, Dynatrace is the only one that can support the depth and scale of our customers’ digital business.

So now let’s look at some techs and specs.

Dynatrace 

  • Business impact reports with every problem discovered, so you can see precisely how your customers were affected and why. More here at our blog.
  • Map and position your custom network device within our Smartscape topology using AI, to capture important custom metrics in the broader topology context. Read more here.
  • Auto-discover all hosts, applications, and services, along with their relationships, and synchronize with your ServiceNow ITIL CMDB database. More to read here.

To stay up with the latest, head here.

AppMon

  • Extended time and deployment-based PurePath problem pattern detection that fully automates analysis of millions of PurePaths across multiple deployments so you get instant feedback on common issues, reducing the chance of quality degradation.
  • Deep insight into every visit and user action, including W3C metrics and JavaScript error diagnostics, that delivers insight into every browser and app from the customer perspective.
  • Full PurePath, method hotspots, exceptions and database diagnostics in the Web UI to open up the power of Dynatrace to everyone in your company and foster collaboration.

Heaps more to read about here.

Advanced Synthetics

  • Filter error analysis across time, location, and error type to quickly pinpoint availability issues.

  • Emulate any mobile connection across our global performance network to optimize the digital experience for mobile devices.
  • New interactive waterfall analysis enables automatic filtering by third party service category, analyzing W3C browser timing events and more to reveal the greatest impact on user experience.

Lots more updates to read up on here

DC RUM

  • Auto-discovery of new – or recently inactive – services and servers informs you of important changes in your environment and the impact on user experience.
  • New explorer views for DNS, Network and Citrix deliver increased interactive analysis for speedier insights.
  • One-minute data collection intervals expedite alert triggering and let you view micro-trends with enhanced granularity.

 Read about the rest of the advancements here.

The post Hot from the lab: Latest releases center on unification, scalability and powerful analytics that tie back to business metrics appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Agile Testing Essentials – LiveLessons video course

Agile Testing with Lisa Crispin - Fri, 06/02/2017 - 15:04
Agile Testing Essentials video courseAgile Testing Essentials video course

Janet Gregory and I offer our new five-hour introduction to agile testing, based on our books Agile Testing: A Practical Guide for Testers and Agile Teams, and More Agile Testing: Learning Journeys for the Whole Team. “Agile Testing Essentials” is for anyone working on or with a software delivery team who wants to learn the basic principles and practices for building quality into your software product.

In the course, we spend 5-10 minutes explaining some agile testing concept, technique or practice, then give you an exercise to help you practice it yourself. Then we discuss and show you how we would approach the exercise. Janet and I share our personal experiences and give lots of examples to help you learn.

Read Lisi Hocke’s review and Mike Talks’ review to learn more about the course and whether it will fit your needs. Please email me with any questions.

The post Agile Testing Essentials – LiveLessons video course appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Italian Software Testing Forum, Milan, Italy, June 19-20 2017

Software Testing Magazine - Fri, 06/02/2017 - 10:00
The Italian Software Testing Forum is a three-day conference dedicated to Software Testing that takes place in Milan. International experts from the industry and the academia will share experiences,...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Waterfall findings and JavaScript error details included in web checks

Web check analysis just got even better. Following the recent introduction of user-action waterfall analysis and JavaScript error analysis into Dynatrace real user monitoring, we’ve now made both features additionally available for web checks.

Web checks are synthetic user visits that monitor the availability of a URL or any number of business-critical transactions in your environment 24/7. At Dynatrace, we’re convinced that synthetic monitoring (i.e., “web checks”) and real user monitoring are most effective when they’re used in combination with one another. This is why both Dynatrace real user monitoring and Dynatrace web checks are built on top of common technology that enables easy comparison of monitoring results when you need it.
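
Stripped of the product machinery, a synthetic check of this kind boils down to fetching a business-critical URL on a schedule and recording whether it responded, and how quickly (a conceptual sketch only, not how Dynatrace web checks are implemented):

    import time
    from urllib.request import urlopen

    def web_check(url, timeout=10):
        """One synthetic 'visit': record availability and response time for a URL."""
        start = time.time()
        try:
            status = urlopen(url, timeout=timeout).getcode()
            available = 200 <= status < 400
        except Exception:
            available = False                    # unreachable or server error
        return available, time.time() - start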

Waterfall analysis findings for web checks

To drill down to the waterfall finding of a specific web check execution

  1. Select Web checks & availability from the navigation menu.
  2. Select a web check.
  3. Expand the Details of one of the monitoring locations and click the Analyze button to open Web check analysis view.
    Waterfall findings
  4. From the Analyze a specific web check run drop list, select the web check run you’re interested in.
  5. You’ll find the waterfall findings for the selected run in the Detailed breakdown section of the page (see example below). These findings help you quickly identify potential problems with your application, such as uncompressed resources and slow 3rd party providers. For a detailed list of available findings, please have a look at our recent user-action waterfall analysis blog post.
  6. Click one of the findings at the top of the waterfall analysis to highlight the related Resources in the waterfall.
    Note: Resource timings can now be grouped by domain.
JavaScript error analysis for web checks

You can now also analyze the details of JavaScript errors that are detected during Web check runs. JavaScript errors appear as red markers in the waterfall timeline at the point in time they occurred during the run (see example below). Simply click a marker to analyze the details of that error (see the next example further down this page).

The Error details page for each detected JavaScript error includes a complete stacktrace that identifies the exact line of code that’s responsible for the error. This insight can dramatically accelerate the time it takes to resolve such errors.

To learn more about the JavaScript error analysis capabilities of Dynatrace, please see the blog post Source map support for JavaScript error analysis.

The post Waterfall findings and JavaScript error details included in web checks appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Rapise 5.1 Provides Scriptless Test Automation

Software Testing Magazine - Thu, 06/01/2017 - 16:25
Inflectra has announced the release of Rapise 5.1, the latest version of its Rapise test automation platform. Rapise is the most comprehensive and powerful automated testing solution on the market....

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Knowledge Sharing

SpiraTest is the most powerful and affordable test management solution on the market today