FixStream Meridian with DC RUM

I’m excited to co-author this blog with Bishnu Nayak from our new partner FixStream. I’ll write the Dynatrace perspective to set the stage, then let Bishnu add more details on how FixStream’s Meridian offering extends the definition and implementation of comprehensive performance visibility. At Dynatrace, we – along with our […]
The post Closing Gap between EUE, Application & Infrastructure Performance appeared first on about:performance.
I pair on all my conference sessions. It’s more fun, participants get a better learning opportunity, and if my pairs are less experienced at presenting, they get to practice their skills. Big bonus: I learn a lot too!
I’ve paired with quite a few awesome people. Janet Gregory and I have, of course, been pairing for many years. In addition, I’ve paired during the past few years with Emma Armstrong, Abby Bangser, and Amitai Schlair, among others. I’ve picked up many good skills, habits, ideas and insights from all of them!
The Ministry of Testing published my article on what I learned pairing with Abby at a TestBash workshop about how distributed teams can build quality into their product. If you’d like to hone your own presenting and facilitating skills, consider pairing with someone to propose and present a conference session. It’s a great way to learn! And if you want to pair with me in 2017, let me know!
Surround SCM 2016 is now available and it includes some great new features you don’t want to miss out on. Here’s a quick look at what’s new.

Capture electronic signatures and run audit trail reports for compliance purposes
To meet Title 21 CFR Part 11 compliance requirements, you can enable electronic signatures for workflow states to track when files move to the state and who signed off on the files in that state. Signature records are stored in the mainline database, and you can run audit trail reports to validate and review the records during an audit or for other compliance purposes.
Perform the following tasks to capture electronic signatures and create audit trail reports.
1. Configure workflow states that require an electronic signature. You can add new states or edit existing ones to change the signature requirements. More info
2. Configure compliance server options to specify the electronic signature settings, such as the signature components and the certification message to include with signatures. More info
3. Users responsible for signing off on files set states on files and enter their electronic signature. More info
4. Create and run audit trail reports to review electronic signature information, such as when files entered specific states and the users who entered signatures. More info
Administrative users can configure workflow states to automatically reset to the default state if the file version changes. This helps ensure files are not left in an incorrect state when they are updated. For example, you can configure the Reviewed state to reset to the default state to automatically restart a review workflow when additional changes are checked in. More info

Run reports from the Source Tree window
There are two new ways to run reports on files from the Source Tree window.
You can quickly create and run one-time use reports from the Tools > Quick Reports menu. You no longer need to open the Reports dialog box first to create reports you don’t need to save. More info
You can also create custom shortcut menu items to run saved reports when you right-click repositories. Before you can run reports this way, you need to create a plug-in for the menu item and add it to the repository shortcut menu. More info
Enable syntax highlighting to show specific text in different styles and colors in the built-in file viewer and editor. This makes file contents easier to read when you view and edit files, perform code reviews, and search for text in files. More info
We hosted this month's Lean Coffee at Linguamatics. Here are some brief, aggregated comments on topics covered by the group I was in.
Is it fair to refer to testers as "QA"?
- One test manager talked about how he has renamed his test team as the QA team
- He has found that it has changed how his team are regarded by their peers (in a positive way).
- Interestingly, he doesn't call it "Quality Assurance" just "QA"
- His team have bought into it.
- Role titles are a lot about perception: one colleague told him that "QA" feels more like "BA".
- Another suggestion that "QA" could be "Quality Assisting"
- We covered the angle that (traditional) QA is more about process and compliance than what most of us generally call testing.
- We didn't discuss the fairness aspect of the original question.
What books have you read recently that contributed something to your testing?
- The Linguamatics test team has a reading group for Perfect Software going on at the moment.
- Although I've read the book several times, I always find a new perspective on some aspect of it when I dip in. This time around it's been meta testing.
- The book reinforces the message that a lot of testing (and work around the actual testing) is psychology.
- But also that there is no simple recipe to apply in any situation.
- We discussed police procedural novels and how the investigation, hypotheses, and data gathering in them might relate to our day job.
When should we not look at customer bugs?
- When your product is a platform for your customers to run on, you may find bugs in customer products when testing yours.
- How far should you go when you find a bug in customer code?
- Should you carry on investigating even after you've reported it to them?
- In the end we boiled this question down to: as a problem-solver, how do you leave an unresolved issue alone?
- Suggestions: time-box, remember that your interests are not necessarily the company priorities, automate (when you think you need lots of attempts to find a rare case), take the stakeholder's guidance, brainstorm with others, ...
- If the customer is still screaming, you should still be working. (An interesting metric.)
In chemical reactions, a catalyst lowers the activation energy so the reaction occurs faster. In performance testing, HPE LoadRunner is the catalyst. Keep reading to find out how to maximize its capabilities.
JMeter is a very popular open source load testing tool with great flexibility thanks to its Java-based extension points. What it lacks is the ability to analyze the results in combination with metrics from your application and your infrastructure. As mentioned in a recent PerfBytes podcast, because JMeter itself doesn’t provide good data visualization, most users stream […]
If you’re testing a web application, it’s naturally best to test it not just in one browser, but in all of the most popular browsers (cross-browser testing).
This blog post will show you how to record your automated website browser tests and then automatically execute the recorded tests in different browsers for browser compatibility testing. Ranorex is a cross-browser testing tool which can run tests in Microsoft Internet Explorer, Mozilla Firefox, Google Chrome, Chromium, Apple Safari and Microsoft Edge.
To demonstrate how to perform a multiple-browser test, we will create a small sample which enters data in our VIP Database Test Web Application.
First of all, we’ll create a Test Case holding two Recordings, one for opening and one for closing the browser, as setup and teardown modules.
Now we add an “OpenBrowser” action to the OpenBrowser Module, with “http://www.ranorex.com/web-testing-examples/vip/” as the URL and, for example, “IE” as the browser.
As a next step, we add a recording module validating the status string on connecting and disconnecting.
The recording module simply
- validates that the status text equals “Online”,
- validates that the status text equals “Offline”,
- connects again,
- confirms the connection in the pop-up window
- and validates that the status text equals “Online” again.
Make sure to have two repository items representing the connection status text, one for “Online” and one for “Offline”. This allows you to overcome issues with delayed validation steps. In our application it takes some time for the status text to change from “connecting…” to “Online”. To make the validation work, we can simply add the actual validation into the RanoreXPath and only validate the existence of the status text in our web page. By doing so, we use the search timeout of the repository item to wait for the status text to change.
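For example, rather than comparing the element's text in a separate validation step, the expected value can be folded into the RanoreXPath itself, so the repository item only matches once the text has changed. The path below is purely illustrative; the element and attribute names will differ in your own repository:

```
/dom[@domain='www.ranorex.com']//span[@innertext='Online']
```

Because this item is only found when the text already reads “Online”, the repository item's normal search timeout doubles as the wait.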
In addition to the TestConnection recording, we will generate a recording for adding VIPs to the database. This recording will be added to a new Test Case because we want to add VIPs in a data-driven way, and we do not want to open and close the browser and test the connection with each iteration of adding a new VIP.
The recording might look something like this:
As we want to make our test data driven, we have to add variables which can be bound to the data from our data source.
The key sequences for first and last name contain the variables $FirstName and $LastName. To select the category, we have to add a SetValue action and set the TagValue to the variable $Category. The gender can be set by adding a variable to the RanoreXPath of the corresponding repository item. Additionally, we validate the VIP count against a variable $VIP_Count.
After generating the recording, we create a data source for the Test Case Add_VIP’s and bind the data tables to the variables of the recording AddVIP.
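Outside the Ranorex UI, the underlying pattern is simply one recorded routine re-run per data row, with a count validation after each iteration. Here is a minimal sketch in plain Python, with a hypothetical add_vip function and made-up sample rows standing in for the recorded steps and the data table (none of these names are Ranorex API):

```python
import csv
import io

# Hypothetical data table; column names mirror the recording's variables
# $FirstName, $LastName, $Category and $VIP_Count.
DATA = """FirstName,LastName,Category,VIP_Count
Ada,Lovelace,Science,1
Grace,Hopper,Science,2
Elvis,Presley,Music,3
"""

def add_vip(first, last, category):
    """Stand-in for the recorded steps that enter one VIP in the web form."""
    return (first, last, category)

vips = []
for row in csv.DictReader(io.StringIO(DATA)):
    vips.append(add_vip(row["FirstName"], row["LastName"], row["Category"]))
    # The recording validates the running VIP count after each iteration.
    assert len(vips) == int(row["VIP_Count"])

print(len(vips))  # prints 3
```

Ranorex does this binding for you: each data-table column is wired to a recording variable, and the Test Case loops once per row.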
As a last step, we add a Close Application action to the CloseBrowser Module, with the application folder of the web application as the repository item.
Now we can execute our Test Suite Project, which:
- opens the web application in Internet Explorer in the setup region,
- performs connection tests,
- adds 3 VIPs following the data driven approach (the data for the 3 VIPs are stored in a simple data table),
- validates the count of the VIPs stored in the web application
- and closes the browser in the tear down region.
To run the test in other browsers, open the recording “OpenBrowser” and edit the browser which should be started: choose “As new Variable…” instead of “IE” and add a new variable called BrowserName.
After that, add a new simple data table to the Test Case “Add_VIP_and_Validate”, holding the names of the different browsers and bind the data connector to the variable “BrowserName”.
After making the browser a variable in this way and binding it to a table holding all supported browser names, you can execute your test script for all supported browsers.
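The same idea generalizes beyond Ranorex: treat the browser name as just another data row and re-run one test routine per entry. A minimal sketch in plain Python, where run_vip_test is a hypothetical stand-in for the whole recorded Test Suite (not a Ranorex call):

```python
# Browser names act as the data table bound to the BrowserName variable.
BROWSERS = ["IE", "Firefox", "Chrome", "Chromium", "Safari", "Edge"]

def run_vip_test(browser_name):
    """Stand-in for: open browser, test connection, add VIPs,
    validate the VIP count, close browser."""
    return f"passed in {browser_name}"

results = {b: run_vip_test(b) for b in BROWSERS}
for browser, outcome in results.items():
    print(browser, outcome)
```

Each iteration gets a fresh browser session, which is why opening and closing the browser live in the setup and teardown regions rather than inside the recording itself.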
To read more, visit our blog at www.sonatype.org/nexus.
Have you ever worked on a web-based test team and switched to a mobile team and wondered if your life is about to get easier or harder? There are significant differences between testing mobile vs. web, and yes, one is MUCH harder than the other. Want to guess which one? Read on and see if you guessed correctly.

Let’s Compare
For each facet of testing below, we’ll note where its execution is most challenging: web is harder, mobile is harder, or it’s a draw.

Feature Functionality Testing
For the case where you have a web application with a supporting mobile app, it is likely the app will have a subset of features that the web side does. When first developing a feature, it is new to the web development team. They have to go through the process of designing and building out the new feature, and training their team on the concept.
When the supporting app is created, it would most likely rely on the existing web feature set, so the design and concepts should be easier for the team to comprehend. Verdict: web is harder.

Feature Parity
Speaking of features, when you are building a web app you only have to account for the features in one user world, such as the browser. The feature only needs to be developed once. When you create features on a mobile app, you now have to take into account not only each device platform’s capabilities, but even the user communities’ expectations.
Apple and Droid users have different expectations for how their apps will work. And if you want to have feature parity in your apps, then the development process management becomes all the more difficult. From a test perspective you now have double the work. Verdict: mobile is harder.

Deploying the Application
When using CI, your test environments can constantly be updated for your use with the latest builds. You can also do this on the mobile side if you are using simulators and emulators, or a device cloud such as the one provided by Sauce Labs.
But what if you want to test on live, local devices? Most likely your team is using them throughout the day and they are not connected to any CI environment. You have to manually install the apps when you want to test the latest, which can be time-consuming, especially when you have multiple devices and platforms. Verdict: mobile is harder.

Integration Tests
Testing needs to account for backend services. Both web and mobile development teams need to run integration tests against their respective APIs. The time and effort to accomplish this are similar. Verdict: it's a draw.

Automating with Page Object Locators
When writing automation on a web application, you need to find the page object locators. You only need to write code to support one set of locators.
If you are developing automated tests for mobile apps, you now need to work with two different dev teams to determine the locators. (Note: this is an ideal place to have cross-team standards to build consistency between the apps.) And as stated previously, the features might not be in sync, causing testers to write multiple tests for similar features. Verdict: mobile is harder.

Tool and Support Community
Web automation is a very mature industry. Not only are the tools in place to support most automation interactions, but the user community is available to answer any question for any level of user. While the mobile world of automation has come a long way, it still doesn’t have the amount of support that web does. Verdict: mobile is harder.

Platform Complexities
When talking about the test infrastructure, mobile is much more fragile. The different deployment rules between Apple and Droid can make for some tricky problems to solve. Not only that, but the multiple-platform issue comes into play again because now you are supporting two sets of infrastructure that can push updates at any time, causing your tests to unexpectedly break. Verdict: mobile is harder.

Test Strategy
When you think of testing on the Web, your strategy usually takes into account the different supported browsers and maybe the underlying operating systems (OS). With mobile, you have to take into account the OS versions for each platform, as well as device types. While Apple is pretty stable and their user community is up-to-date on the OS, Droid can have a ton of different configurations that your user community supports. Verdict: mobile is harder.

Did You Guess Right?
When I first posed the mobile vs. web question to an automation architect who has worked in both arenas, he immediately replied “mobile is 110% harder.” I have to agree.
But that is not bad news. First, mobile tooling is still evolving, and does not have the maturity that web tooling does. Secondly, there are already plenty of resources available that can help, such as real-device clouds and increasingly robust testing frameworks like Appium and Robotium. While you can’t do much to change the nature of app stores, testing and CI tools for mobile are only getting better.
Joe Nolan (@JoeSolobx) is a Mobile QA Team Manager with over 12 years of experience leading multi-nationally located QA teams, and is the founder of the DC Software QA and Testing Meetup.
TestTrack 2016 is coming soon—and we’re offering a live, 30-minute What’s New in TestTrack 2016 sneak peek on Wednesday, April 27!
Join Gordon Alexander, Seapine Solutions Consultant, for a preview and live demo of the key new features, including how to:
- Export Items to Word. You can now export all TestTrack items to Word! The completely rewritten export functionality makes it easy to produce documents that include the exact data you need, in the exact Word format you want.
- View and Navigate to Linked Items. To quickly see links between items, you can now add columns with linked item information to list windows. Want to see more? Click a linked item and go straight to it to see more details.
- Use Note Widgets with Dashboards. Want to easily share getting started information with new team members, provide links to important information for a sprint, or provide team contact information? Put a note, which can be seen by everyone, right on a dashboard.
Can’t make it to the live event? Not to worry—register and we’ll email you the recording as soon as it’s available.