
Feed aggregator

Dynatrace Perform 2017 Digital Performance Management conference date announced: February 7 – 9, Las Vegas

The fourth annual Perform 2017 conference takes place February 7 – 9, 2017 at the Cosmopolitan Hotel in Las Vegas! Two-and-a-half days, 60+ sessions, 75+ industry thought leaders, and over 100 hours of training. We are scheduling speakers and creating an agenda that you don’t want to miss! Last year’s presenters included executives from over 60 Dynatrace customers like Nordstrom, […]

The post Dynatrace Perform 2017 Digital Performance Management conference date announced: February 7 – 9, Las Vegas appeared first on about:performance.

Categories: Companies

Agile in Regulated and Quality-Critical Industries

The Seapine View - Tue, 06/28/2016 - 15:06

After spending much of my time on the road the last two months visiting various companies from a wide range of industries, I noticed a common thread starting to appear for those in quality-critical and highly regulated industries.

Can we do Agile even if we are regulated or quality-critical?

The answer is YES, it’s possible. “But how?” you may ask.

To break it down to its core, Agile is a method for managing Work In Progress. You can still have your process that enforces compliance with various checkpoints to ensure quality, adherence to standards, and more. That does not have to change at all.

Here is an example:
Traditional requirements and specification documents are created for the project, setting out what needs to happen: design items as you are used to today.

When it comes to breaking those down into tasks (things you actually have to do), you can use Scrum or Kanban. You take these requirements and break them into tasks. Scrum can now be used to help you manage a backlog of the work that needs to be completed: ranking it, having the team size the items, and setting out a plan of attack.

Then, just like any other Scrum team, you can start to work on the tasks. As these tasks are completed, they give you an accurate picture of progress toward the requirements or specifications you are trying to implement.

Essentially, what happens is that we are taking a better approach to managing work in progress. The goal is faster delivery by making the work in progress easier to manage and breaking it down into smaller items that are easier to track.

Based on what I have seen from a number of teams, it seems this approach gives them what they need to get the work done faster.

The key goal should be to deliver better quality work in better ways. I think this approach could be a great way forward for many companies.

Learn More about Agile in Regulated Industries

Seapine has a good collection of white papers and guides on Agile topics.

Categories: Companies

Gurock & TestRail Acquired by IDERA

Gurock Software Blog - Tue, 06/28/2016 - 14:47


We have some exciting news to share today. When Tobias and I founded Gurock Software more than 10 years ago, and when we released our now flagship test management tool TestRail in 2010, we never envisioned that we would get the opportunity to work with so many amazing customers to help them build better software. Since its release, TestRail has become the #1 modern test management tool, with thousands of companies using it every day.

We have been able to accomplish this with a relatively small team by being laser focused on customer service and heavily investing in TestRail’s product development and our cloud infrastructure. When IDERA approached us earlier this year to discuss a possible acquisition, we thought a lot about the additional resources this would bring to the product, team and our customers.

Especially with our fast growing world-wide customer base, becoming part of a US & international software company gives us access to important infrastructure we didn’t have before. We are excited to announce that as of last week, Gurock Software has joined and is now part of the IDERA family! We couldn’t be happier about the decision and the many benefits this will bring.

I’m also happy to say that the entire existing Gurock team will stay with the company and will continue working on our products SmartInspect and TestRail, as well as with our customers. Being part of IDERA will allow us to benefit from the additional resources a larger company provides, while staying focused on our existing approach to product management and customer service.

What does this mean for our existing customers? For now there won’t be too many changes. You will continue working closely with our existing team to get the most out of our products. And we will continue to release big product updates to help you work more efficiently (we have some big product announcements planned for later this year, so stay tuned). In fact, one big reason we decided to join IDERA is that the team shares our long-term vision for our product roadmap and our focus on customer success.

Behind the scenes we will start on-boarding additional team members and integrating additional resources as soon as possible. Our longer-term goal is to further accelerate TestRail’s product development for even faster release cycles. We will also add more members to our customer success team, and we will benefit from a larger partner network to help more teams adopt TestRail. If you have any questions, feedback or comments, please feel free to reach out to Tobias, Lara or myself!

Randy Jacops, CEO of IDERA, also wanted to take the opportunity to welcome all customers to the IDERA family; please see his message below.

Cheers,
Dennis & Tobias
Gurock Co-Founders

Welcome to IDERA

Hello Everyone,

My name is Randy Jacops and I am the CEO of Idera, Inc (IDERA). We recently acquired Gurock Software GmbH (Gurock) and it is my privilege to introduce IDERA and welcome you to the IDERA family. IDERA has more than 20,000 successful customer relationships, and we have a history of world-class retention levels emphasizing our commitment to customer success. IDERA actually consists of two businesses: IDERA, focused on database lifecycle management solutions, and Embarcadero, focused on development tools to accelerate solutions. Gurock complements our capability in both areas and will continue as a stand-alone entity within the IDERA family.

I became CEO in 2013 and during my tenure have focused the company on long-term customer relationships built on innovation, quality, and ease of use. I strongly believe a successful software company starts with a focus on driving customer success via:

  1. High quality software – complete testing with significant automated code coverage
  2. Ease of use – simplified user experience, particularly the install/upgrade process
  3. Application speed – minimal wait time, robust scalability, and real-time analytics

A software company that delivers these metrics will have generally happy customers. Delivering prioritized innovation and features on a reliable schedule advances the relationship from generally happy to customer success. IDERA prioritizes investment to reflect these goals, and I am delighted to note Gurock shares the same philosophy.

We are excited to partner with Gurock’s founders to invest in enhancing the product and expanding the customer base. Gurock has a proven record of delivering exceptional product and value to customers. We expect to maintain the best of Gurock by retaining the founders and employees, maintaining the product family focused on testing solutions, and continuing the simple, elegant user interface cherished by customers.

Over the past several years, Gurock grew to a customer base exceeding 5,000 with high retention and organic customer growth. We attribute this to the founders’ relentless focus on reducing friction in the trial experience, sales process, customer support, and test integration process. We are committed to continuing this focus and delivering the value you expect from Gurock. To confirm our commitment, we plan to continue the Gurock brand with the same leadership team focused on test management solutions that help many of the world’s best software teams build better products.

As a large global company, we can provide global support, sales, and development teams to enhance your experience with Gurock software. Put simply, our goal is to continue the best of Gurock while providing the capital and resources to expand the products and grow customer relationships.

From an ongoing communication standpoint, we believe in maintaining dialogue with customers interested in contributing. We believe user communities deliver significant customer value and will continue investing appropriately. We also believe that change facilitates innovation and increases value for our customers. We will remain committed to these principles and look forward to working with you to improve every day and help you get the most out of our products and solutions.

If you have questions, please feel free to contact me directly (you can reach me at randy (dot) jacops (at) idera.com). With more than 25,000 customers, it’s not practical to maintain a direct dialogue with every customer, but we will respond to questions. Most importantly, we will communicate future product roadmap reviews, webinars, and related content you will find interesting. I encourage you to participate in all of these sessions so you have an opportunity to provide feedback on our plans.

Thank you for your business and I look forward to the future!

Regards,
Randy Jacops
CEO, Idera, Inc.

Categories: Companies

Your Tests Are not Flaky

Software Testing Magazine - Tue, 06/28/2016 - 14:47
Flaky tests are the bugbear of any automated test engineer; as someone once said, “insanity is running the same tests over and over again and getting different results”. Flaky tests cause no end of despair, but perhaps there is no such thing as a flaky or non-flaky test; perhaps we need to look at this software testing problem through a different lens. We should spend more time building more deterministic, more testable systems than building resilient and persistent tests. This presentation shares some examples of when test flakiness hid real problems in the underlying system, and how it is possible to solve test flakiness by building better systems. Video producer: https://developers.google.com/google-test-automation-conference/
Categories: Communities

Automatically Improve the Quality of your Code

Testing TV - Tue, 06/28/2016 - 14:39
This talk presents some tools that can help you automatically check the code style and quality of your team’s CSS, Sass, and JavaScript to improve the project without slowing anyone down. When working in a team, you deal with people at different levels, from juniors to seniors. If you’re the lead of […]
Categories: Blogs

The Rat Trap

Hiccupps - James Thomas - Tue, 06/28/2016 - 13:55

Another of the capsule insights I took from The Shape of Actions by Harry Collins (see also Auto Did Act) is the idea that a function of the value some technology gives us is the extent to which we are prepared to accommodate its behaviour.

What does that mean? Imagine that you have a large set of data to process. You might pull it into Excel and start hacking away at its rows and columns, you might use a statistical package like R to program your analysis, you might use command line tools like grep, awk and sed to cut out slices of the data for narrower manual inspection. Each of these will have compromises, for instance:
  • some tools have possibilities for interaction that other tools do not have (Excel has a GUI which grep does not)
  • some tools are more specialised for particular applications (R has more depth in statistics than Excel)
  • some tools are easier to plug into pipelines than others (Linux utilities can be chained together in a way that is apparently trickier in R)

These are clear functional benefits and disbenefits, and surely many others could be enumerated, although they won't be universal but dependent on the user, the task in hand, the data, and so on.

In this book, Collins is talking about a different dimension altogether. He calls it RAT or Repair, Attribution and all That. As I read it, the essential aspect is that users tend to project unwarranted capabilities onto technology and ignore latent shortcomings.

For example, when a cheap calculator returns 6.9999996 for the calculation (7/11) x 11 we repair its result to 7. We conveniently forget this, or just naturally do not notice it, and attribute powers to the calculator which we are in fact providing, e.g. by translating data on the way in (to a form the technology can accept) and out (to correct the technology's flaws).
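To make the repair concrete, here is a minimal sketch in JavaScript of a hypothetical calculator that truncates every intermediate result to 7 significant digits; the trunc7 helper is invented for illustration, and the exact digits a real calculator displays depend on its internal precision, but the effect is the same:

    // Hypothetical 7-significant-digit calculator that truncates (does not round)
    // every intermediate result. The helper is illustrative, not from the post.
    const trunc7 = (x) => {
      const scale = Math.pow(10, 6 - Math.floor(Math.log10(Math.abs(x))));
      return Math.trunc(x * scale) / scale;
    };

    const result = trunc7(trunc7(7 / 11) * 11);
    console.log(result); // prints 6.999999 rather than 7, and the user quietly "repairs" it to 7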

The “all that” is more amorphous but constitutes the kinds of things that need to be done to put the technology in a position to perform. For example, entering the data into a small display which can be hard to read under some lighting conditions, using very fiddly rubber keys with multiple functions represented by indiscernible graphics.

Because these skills are ubiquitous in humans (for the most part), we think nothing of them. But imagine how useful a calculator would be if a human was not performing those actions.

I had some recent experience of this with a mapping app I bought to use as a Satnav when driving in the USA. I had some functional requirements, including:
  • offline maps (so that I wasn't dependent on a phone or network connection)
  • usable in the UK and the USA (so that I could practise with it at home)
  • usable on multiple devices (so that I can walk with it using my phone, or drive with it on a tablet)

I tried a few apps out and found one that suited my needs based on short experiments done on journeys around Cambridge. Despite accepting this app, I observed that it had some shortcomings, such as:
  • its built-in destination-finding capacity has holes
  • it is inconsistent in notifications about a road changing name or number while driving along it
  • it is idiosyncratic about whether a bend in the road is a turn or not
  • it is occasionally very late with verbal directions
  • its display can be unclear about which option to take at complex junctions

In these cases I am prepared to do the RAT by, for instance, looking up destinations on Google, reviewing a route myself in advance, asking a passenger for assistance in some cases. Why? Because the functionality I want wasn't as well-satisfied by other apps I tried; because in general it is good enough; because overall it is a time-saver; because even flawed it provides some insurance against getting lost; because recovery in the case of taking the wrong turning was generally very efficient; because human navigators are not perfect or bastions of clarity either; because my previous experience of a Satnav (a dedicated piece of hardware) was much, much worse; because while interacting with the software more I started to get used to the particular behaviours that the app exhibits and was able to interpret its meaning more accurately.

Having just read The Shape of Actions, this was an interesting experience and meta-experience and user experience. A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.
Image: https://flic.kr/p/x2M76w
Categories: Blogs

The Bestselling Software Intended for People Who Couldn’t Use It.

James Bach's Blog - Mon, 06/27/2016 - 17:26

In 1983, my boss, Dale Disharoon, designed a little game called Alphabet Zoo. My job was to write the Commodore 64 and Apple II versions of that game. Alphabet Zoo is a game for kids who are learning to read. The child uses a joystick to move a little character through a maze, collecting letters to spell words.

We did no user testing on this game until the very day we sent the final software to the publisher. On that day, we discovered that our target users (five-year-olds) did not have the ability to use a joystick well enough to play the game. They just got frustrated and gave up.

We shipped anyway.

This game became a bestseller.


It was placed on a list of educational games recommended by the National Education Association.


Source: Information Please Almanac, 1986

So, how to explain this?

Some years later, when I became a father, I understood. My son was able to play a lot of games that were too hard for him because I operated the controls. I spoke to at least one dad who did exactly that with Alphabet Zoo.

I guess the moral of the story is: we don’t necessarily know the value of our own creations.

Categories: Blogs

ASQT Conference for Software Quality, Test and Innovation, Klagenfurt, Austria, September 21-23 2016

Software Testing Magazine - Mon, 06/27/2016 - 12:00
The ASQT Conference for Software Quality, Test and Innovation is a two-day conference focused on software testing and software quality that takes place in Austria. Most of the talks are in German, but there are also some English sessions. The ASQT Conference for Software Quality, Test and Innovation mixes contributions from academics and industrial software testers. In the agenda you can find topics like “Continuous Testing in a Java Migration Project”, “Test Automation for Mobile Apps”, “A Model-based Combinatorial Testing Approach of Web Applications”, “Quality through Model-based Testing – When does it pay off?”, “Building Software that Matters – The Lean Startup Approach” or “Modularity of Javascript Libraries and Frameworks in Modern Web Applications”. Web site: http://www.asqt.org/?lang=en Location for ASQT Conference for Software Quality, Test and Innovation: University of Klagenfurt, Universitätsstrasse 65-67, 9020 Klagenfurt am Wörthersee
Categories: Communities

Dangers of Certainty in Realizing Customer Value

Don Quixote was certain he saw giants instead of windmills. In this epic story, he believed he knew the answers and saw what he wanted to see. Unfortunately, in many organizations there is this same phenomenon, a need to act as if we are certain. In fact, the higher up you go in an organization, the greater the compulsion to act with certainty becomes. Statements like “That’s why we pay you the big bucks” are used to imply that the higher you are in an organization, the more you are expected to just “know”.
Some think they must act with “pretend certainty” for the benefit of their career. Others have convinced themselves of “arrogant certainty”, where they believe they know the answer or solution but don’t (or can’t) provide any solid basis for this certainty. Unfortunately, this arrogance can be interpreted as confidence, which can be dangerous to the success of a company. Nassim Nicholas Taleb refers to “epistemic arrogance”, which highlights the difference between what someone actually knows and how much he thinks he knows; the excess implies arrogance. What has allowed certainty within companies to thrive is the distance between the upfront certainty and the time it takes to get to the final outcome. There is little accountability between the certainty at the beginning and the actual results at the end. Often the difference is explained away by the incompetence of others who didn’t build or implement the solution correctly.
Of course, the truth is somewhere in between. The concept of certainty is actually dangerous to an enterprise, since it removes the opportunity to acknowledge the options and to apply a discovery mindset toward real customer value via customer feedback loops and more.
We also want to avoid the inverse: remaining in uncertainty due to analysis paralysis. A way to avoid this is to work in an incremental framework with customer feedback loops that enable more effective and timely decision-making. Customer feedback will provide us with the evidence for making better decisions. Applying an incremental mindset will enable us to make smaller bets that are easier to make and allow us to adapt sooner.
A healthier and more realistic approach is to have leaders who understand that uncertainty is actually a smart starting position, and then apply processes that support gaining certainty. It is, therefore, incumbent upon us to have an approach that admits to limited information and uncertainty, and then applies a discovery process toward customer value. In the end, the beaten and battered Don Quixote forswears all the chivalric false certainty he followed so fervently. Is it time for management to give up the certainty mindset they think they have and instead replace it with a discovery mindset as a better path to customer success?
Categories: Blogs

Enhanced online screen in HPE Performance Center

HP LoadRunner and Performance Center Blog - Sun, 06/26/2016 - 08:22


Our goal in the new online screen graphs in HPE Performance Center was to improve the user experience by adding new functionality that provides the user with more capabilities, and a higher level of convenience.

 

Continue reading to learn about the new capabilities of Performance Center online graphs.

Categories: Companies

Making the Earth Move

Hiccupps - James Thomas - Sat, 06/25/2016 - 09:59

In our reading group at work recently we looked at Are Your Lights On? by Weinberg and Gause. Opinions of it were mixed, but I forgive any flaws it may have for this one definition:

  A problem is a difference between things as desired and things as perceived.

It's hard to beat for pithiness, but Michael Bolton's relative rule comes close. It runs:

  For any abstract X, X is X to some person, at some time.

And combining these gives us good starting points for attacking a problem of any magnitude:
  • the things
  • the perception of those things
  • the desires for those things
  • the person(s) desiring or perceiving
  • the context(s) in which the desiring or perceiving is taking place
Aspiring problem solvers: we have a lever. Let's go and make the earth move for someone!
Image: Wikimedia Commons
Categories: Blogs

AutoMapper 5.0 speed increases

Jimmy Bogard - Fri, 06/24/2016 - 23:43

Just an update on the work we’ve been doing to speed up AutoMapper. I’ve captured times to map some common scenarios (1M mappings). Time is in seconds:

  Version     Flattening   Ctor     Complex    Deep
  Native      0.0148       0.0060   0.9615     0.2070
  5.0         0.2203       0.1791   2.5272     1.4054
  4.2.1       4.3989       1.5608   134.39     29.023
  3.3.1       4.7785       1.3384   72.812     34.485
  2.2.1       5.1175       1.7855   122.0081   35.863
  1.1.0.118   6.7143       n/a      29.222     38.852

The complex mappings had the biggest variation, but across the board AutoMapper is *much* faster than previous versions. Sometimes 20x faster, 50x in others. It’s been a ton of work to get here, mainly from the change in having a single configuration step that let us build execution plans that exactly target your configuration. We now build up an expression tree for the mapping plan based on the configuration, instead of evaluating the same rules over and over again.

We *could* get marginally faster than this, but that would require us to sacrifice diagnostic information or stop handling nulls, etc. Still, not too shabby, and in the same ballpark as the other mappers out there (faster than some, marginally slower than others). With this release, I think we can officially stop labeling AutoMapper as “slow” ;)

Look for the 5.0 release to drop with the release of .NET Core next week!

Categories: Blogs

Results: Performance Engineering and Your End Users [Webinar]

HP LoadRunner and Performance Center Blog - Fri, 06/24/2016 - 20:02


Performance Engineering and Your End Users. Both of these topics continue to grow in importance across organizations and industries. Learn a few new tips from our expert panel in this webinar.

Categories: Companies

Brexit Crunch on Financial Services Sites

With the Brexit decision over, financial markets are reacting to Britain’s decision to leave the European Union. Below is a view showing the performance of various financial services websites, aggregated by industry and country. The most immediate impact has been on UK-based brokerage sites. The team at Dynatrace proactively monitors hundreds of financial services […]

The post Brexit Crunch on Financial Services Sites appeared first on about:performance.

Categories: Companies

Q & A : Design Patterns for Scalable Test Automation

Sauce Labs - Fri, 06/24/2016 - 16:00

Thanks to everyone who joined the webinar given by Sahas Subramanian, “Design Patterns for Scalable Test Automation with Selenium and WebdriverIO”. There were a number of great questions that were posed prior to and during the session, and we asked Sahas to consolidate some of these questions and answer them in this follow-up post. Disclaimer: opinions shared below are Sahas’ and not those of his employer or Sauce Labs.

If you missed the webinar, you can find the video, slides and link to a related blog post here. Should you have any additional questions, send a tweet to @Sahaswaranamam.

Q: How can you best handle security authentication pop-ups from specific browsers? What are the best ways to switch between tabs and to close tabs?

A: Use the getCurrentTabId API to get the handle of the current window. Once you have the pop-up window handle, you could close it using browser.close(popUpHandle)

Reference: http://webdriver.io/api/window/getCurrentTabId.html
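One way to sequence it is sketched below, assuming WebdriverIO v4's synchronous mode; the exact handle juggling depends on where focus is when the pop-up opens.

    // Sketch: remember the main window, locate the pop-up handle, switch to it,
    // then close it. In WebdriverIO v4, browser.close(handle) closes the current
    // window and switches focus to the given handle.
    const mainHandle = browser.getCurrentTabId();
    const popUpHandle = browser.getTabIds().find(h => h !== mainHandle);
    browser.switchTab(popUpHandle);
    browser.close(mainHandle); // the pop-up is gone, focus is back on the main window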

Q: How should I handle SOAP/SOAPUI testing?

A: Generally speaking, Selenium and Webdriver are appropriate for UI testing. If your intention is to test the APIs, I would suggest using tools like JMeter and/or Taurus. Reference: http://gettaurus.org

Q: How do I create my own wrapper? (How can I check for page title?)

A: Check out http://webdriver.io/api/protocol/title.html
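For example, a minimal custom wrapper might look like the sketch below (WebdriverIO v4 sync mode assumed; the command name titleIs is invented for illustration):

    // Register a small wrapper command that checks the current page title.
    browser.addCommand('titleIs', function (expected) {
        return browser.getTitle() === expected;
    });

    // Usage in a test:
    // assert(browser.titleIs('My Account | Example App'));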

Q: What are your thoughts on using a recording IDE versus writing your own automated scripts in terms of time efficiency, maintenance, robustness, and efficiency?

A: While record and replay tools can help you get started faster, they have an inherent weakness that leads to brittle tests: when the UI changes, it is harder to update the generated code, since the team won’t know the architecture and design behind the code. Other limitations:

– They are often proprietary and licensed.

– Some tools are not flexible, in that a small change might force you to regenerate the entire workflow.

Overall, record/replay tools might be a good solution for a UI that doesn’t change. For a changing interface, well-understood hand-crafted code is better from all perspectives.

Q: It was my understanding that there is no guarantee of the order in which unit tests will run. In your example you have unit tests that are running one part of your workflow, but if they did not run in the order you expect they would fail.

A: Mocha’s describe() can handle synchronous and asynchronous execution by passing a callback to the it() block. My example uses Mocha and leverages its synchronous nature to orchestrate the workflow.
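In other words, within a single describe() block Mocha runs the it() blocks in declaration order, and WebdriverIO's sync mode keeps each step blocking, so a workflow can be expressed as an ordered sequence (a sketch, with invented step bodies):

    describe('blog post workflow', () => {
        it('creates a post', () => { /* ... */ });
        it('publishes the post', () => { /* ... */ });
        it('deletes the post', () => { /* ... */ });
    });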

Q: Is there an expected condition that can do page reload?

A: Sometimes a test has to wait for a process to complete, but that element is not updated except on page reload. Webdriver.io offers an API to reload/refresh the page. Check out http://webdriver.io/v3.0/api/protocol/refresh.html. Depending on your workflow, try to reload the page and use one of the waitFor APIs for that specific element to be visible or enabled.
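A minimal sketch of that combination, assuming the WebdriverIO v4 sync API (the selector is an invented example):

    browser.refresh();                             // reload the current page
    browser.waitForVisible('#order-status', 5000); // then wait for the element to appear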

Q: How do you write tests that aren’t fragile? How do you best introduce approaches to limiting brittle integration testing and improving test reliability?

A: Some of my top picks:

  1. Make sure the UI test is the right technique to automate the requirement. If the requirement can be tested through View Testing or API testing, prefer that over Webdriver-driven UI tests.
  2. Prefer declarative over imperative BDD.
  3. Adopt the Page Object pattern and follow a clear chain of responsibility between tests and Page Objects (see the sketch after this list).
  4. Prefer the “Tell, Don’t Ask” pattern: build the logic in Page Objects and keep tests lean.
  5. Avoid Thread.Sleep and handle asynchronous behavior via code in the Page Object logic.
  6. Share test logic and engineer the automation with applicable coding patterns & principles.
  7. Constantly review and refactor test automation code, much like production code.
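A rough sketch of the Page Object and “Tell, Don’t Ask” points above (WebdriverIO v4 sync mode assumed; the LoginPage name and selectors are invented for illustration):

    // login.page.js: the page object owns the interaction logic.
    class LoginPage {
        open() {
            browser.url('/login');
        }
        loginAs(user, password) {
            browser.setValue('#username', user);
            browser.setValue('#password', password);
            browser.click('#login-button');
        }
        isLoggedIn() {
            return browser.isVisible('.dashboard');
        }
    }
    module.exports = new LoginPage();

    // The test stays lean and just "tells" the page object what to do:
    // loginPage.open();
    // loginPage.loginAs('demo-user', 'demo-password');
    // assert(loginPage.isLoggedIn());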

 

Q: We have an old legacy app and no team adoption of test automation. Changes break things all the time. How would you suggest we talk to the team about adding test automation to our process?

A: I would suggest beginning by visualizing the value stream map for your delivery process to understand the current engineering cycle time and bottlenecks, and to make waste visible. Additionally, try to quantify feature development vs. bug-fix effort and the number of bugs found in production.

Typically, lack of automation will indicate high cycle time, long (manual) test effort, high defect rate and/or high bug fixing effort. With that initial measure you could work with the product and technology leadership to improve the situation.

Q: What are some good “quality” measurements we can use to demonstrate project success?

A: IMO, I don’t see quality/test automation as a separate effort; rather, it’s part of development. It should help to ship the product faster with greater quality. Each type of test should help to increase confidence. Given that, a value stream map before and after the effort should expose the benefits (if test automation was the bottleneck).

In addition, measure:

  • Total test automation (#unit tests, #view test, #api workflow tests, #UI workflow tests, #A/B tests). Expectation – overall trend should be up (we should be adding more automation), individual test automation technique trend should align to a pyramid.
  • Customer reported bugs. Expectation: this should be trending down
  • Automation success rate over time. Expectation: should be trending up and stay close to 100%
  • Automation execution time over time. Expectation: should be trending down

Q: How do I avoid Thread.sleep()? I’ve put in all kinds of waits in my code, but I still get periodic failures because some element or another can’t be found. Is that just something you have to live with when doing browser testing? Or is there ever a reliable method that you can trust every time? How can we tell Webdriver to wait until Ajax is done?

A: Generally, thread.sleep or waits are used when the UI is waiting on an asynchronous request from the back-end. The easiest way to handle the situation is to use the Webdriver-provided expected conditions class.

If your language of choice doesn’t have anything like the ExpectedConditions class, I would suggest looking at how Webdriver.io implements the same logic and building your own, if necessary. Soon we’ll have another blog post walking through this.
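In WebdriverIO itself, waitUntil can serve as a home-grown expected condition; a rough sketch (v4 sync mode assumed; the selector and message are invented):

    // Poll until the Ajax-driven status text reaches the expected value,
    // failing with a descriptive message after 10 seconds.
    browser.waitUntil(
        () => browser.getText('#status') === 'Done',
        10000,
        'status never reached "Done" after the Ajax call'
    );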

Q: How can you speed up tests with Sauce Labs?

A: If your test is slow due to asynchronous behavior on the app, I’m not sure the “test” can run faster than the app. We need to look at the application performance to improve the situation.

Given that the app is faster but tests are running slow, there could be many reasons. Some of the common things I would try:

  • CRUD flow – try to combine scenarios to be meaningful end-user behaviors. For example, let’s assume that you are testing the WordPress blogging app (create blog post, view the blog post, verify visitors, view by geography, delete the post etc). If each one of them is an independent scenario, potentially some steps are repeated (e.g., launching the browser, navigating to the website, logging in, navigating to posts page, etc). Instead of separate scenarios, if we combine them to be logical workflow for a given persona, repeated steps can be optimized and as a result tests complete faster.
  • Scaled infrastructure – If all your UI tests are essential, run them in parallel. Leverage Sauce Labs or a Selenium grid as appropriate
  • Test category & parallel execution – Split the tests by different category, run them in parallel
  • Limit browser mix – From the utilization metrics, learn the most widely used browsers by your customers and prioritize that browser mix. We can’t test all scenarios across all different browsers, all the versions overtime.
  • Logging and visibility – integrate logging with some time series database, create visibility, measure flakiness, slow tests trends to focus and improve.

Q: How difficult is it to integrate Webdriver.io with a CI server like Jenkins?

A: We’ll have another blog post on this soon. However, it’s fairly simple to integrate with any CI/CD system. In my example project, all you need to do is:

  • Checkout the source from your repo
  • Navigate to *_tests directory
  • npm install
  • npm run test-Sauce Labs

This last command above will return zero exit code on success. Configure your system to fail on non-zero exit code.

Q: Are view tests part of the product code base, the same as unit tests?

A: Yes. We tag them as “View specs” and run them as part of the unit tests.

Q: Can Selenium support shared object repositories (a concept of UFT)? Just like LeanFT can we build upon Selenium tests using such repos?

A: I’ve not used either of the above-mentioned products. However, a UI map can be considered a repository of UI elements and can be shared via package management.

Q: What is the best way to handle timeout issues with complex UI scripts from Jenkins, e.g. a timeout occurring after 300 sec (randomly)?

A: IMO, this has less to do with Jenkins or Sauce Labs. This can be handled using the test runner (e.g., Mocha) and your testing framework (e.g., Webdriver.io)

Q: Is Webdriver.IO a part of Webdriver or a different product? In other words, can Webdriver.IO work with Webdriver? Is Webdriver.IO the same as Selenium Webdriver?

A: Webdriver.IO is a wrapper on top of Webdriver to control browser and mobile applications efficiently.

Q: Is there a way to do step-by-step debugging with Webdriver.io?

A: Yes:

  1. Configure your IDE for debugging node js. For example, I use VSCode – here is a reference: https://code.visualstudio.com/Docs/editor/debugging
  2. Try “pause” API. http://webdriver.io/api/utility/pause.html

 

Q: How should logic be passed to the configuration in WebdriverIO? For a variety of modes, e.g., multiple brands, environments, resolutions, and local vs. cloud runners?

A: Webdriver.io leverages JavaScript to receive configuration parameters. You can create a master config (generic one), environment specific configs, and merge them at execution time. Reference: https://github.com/sahas-/webdriverio-examples/tree/master/googleSearch_tests/config
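A sketch of that layering is below; the file names and the deepmerge dependency are assumptions, not taken from the example repo.

    // wdio.staging.conf.js: extend a shared master config with environment overrides.
    const merge = require('deepmerge');
    const master = require('./wdio.master.conf').config;

    exports.config = merge(master, {
        baseUrl: 'https://staging.example.com',
        capabilities: [{ browserName: 'chrome' }]
    });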

Q: What is the best way to implement TDM (Test Data Management)? Is it a good practice to hardcode my data in the test itself or use an external resource like Excel, CSV?

1. Using Test DB separately.
2. Generating data on the fly in code and using it.
3. Use Excel spreadsheet.
4. Use a TDM tool(Need Free Tool).
5. Using SQL Inserts directly into appDB.

A: IMO, I try to do #2 as much as possible: as part of test setup, call the back-end service and create the necessary data, then delete it as part of clean-up. Secondly, if we follow a CRUD-workflow-based approach, create the data as the first step in the workflow, test other operations such as edit and update, and finally delete the record as the last step of the process.

Q: Which is the most commonly recommended framework to use with Sauce Labs?

A: It’s hard to say. IMO, your choice of tool depends on your:

  • Goal (i.e., we need ONE framework to test legacy app + web app + mobile app + native mobile app)
  • Development stack – it should align with your development stack so developers can contribute to and maintain the tests; ultimately, this supports the “team owns quality” principle.

Q: Why use WebdriverIO instead of Protractor? For testing an Angular-based website, how much can Selenium help? Or should I just use Protractor by itself? How different is this framework than Nightwatch.js? Is Webdriver.io a replacement/alternative to Nightwatch.js?

A: The webinar’s intention was to look at some practical patterns that can help stabilize and scale test automation. These patterns are applicable to almost any language of choice and it’ll be great if the framework of choice helps implementing these patterns. I use WebdriverIO for several reasons mentioned in the talk. You should evaluate the choice of language/framework based on your goals.

Q: What is the best way to run just specific tests within our test suite?

A: It depends on the testing framework that you have chosen. For example, I use Mocha in my example; its --grep option allows me to run specific tests matching a regular expression.
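For instance, when running Mocha through WebdriverIO, the filter can be passed in the config; a sketch, where the mochaOpts passthrough and the pattern are assumptions rather than part of the original answer:

    // wdio.conf.js (excerpt): only run tests whose titles match the pattern.
    exports.config = {
        mochaOpts: {
            ui: 'bdd',
            grep: /checkout/
        }
    };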

Q: What is the ROI for automation specialists spending time explaining the advantages of programming unique ID and NAME tags to web developers?

A: Unique IDs and names do offer stable ways to locate and act on an element. However, there could be some implementations where providing a unique ID is impossible. For example, a grid component populates data dynamically based on the response from the back-end, and it’s not easy to provide a unique ID for every single cell. Given these situations, the best bet is to collaborate with the UI/HTML developer who develops the component and let them provide you with the UI map class, since they know the best technique to locate the elements. This is one of the reasons I recommend separating the UI map.
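A separated UI map can be as simple as a module of locators owned by the component's developer; the selectors here are invented for illustration:

    // grid.uimap.js: locators for the grid component, maintained by its developer.
    module.exports = {
        searchBox: '#grid-search',
        headerCell: (name) => `.grid th[data-column="${name}"]`,
        cell: (row, col) => `.grid tr:nth-child(${row}) td:nth-child(${col})`
    };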

Q: Does this technique work with Appium for Mobile App testing?

A: Yes

Q: Should we automate all scenarios included in US? If Yes, why? If no, why not?

A: You should automate as much as possible and leverage the machine to help boost confidence in your app. However, you should pick the appropriate automation technique to achieve your goal.

Q: Do you prefer a monolithic structure for mobile application and website automation? Or is the best practice to create and develop as separate projects?

A: IMO, it would be great to leverage common code as much as possible and drive the web/mobile workflow based on the configuration. Less code, less maintenance.

Q: What is the best way to shorten Selenium code except POM and PF?

A: I need a bit more context. However, at a high level

  1. Implement possible OOP concepts to cut down redundant code and reduce maintenance.
  2. Leverage the Agile testing quadrants to rationalize the automation and adopt the appropriate technique. For example, view testing can be leveraged to increase automation confidence and reduce the Webdriver-based automation footprint.

 

Q: How best to use Selenium or WebdriverIO for microservices?

A: Microservices architecture doesn’t change the UI test automation paradigm. However, I would strongly suggest you reference the book “Building Microservices”, in which the author dedicates Section 7 to testing and briefly explains the different techniques for stable, maintainable test automation.

 

Categories: Companies

[Webinar Recording] Closing the Gap Between Risk and Requirements

The Seapine View - Fri, 06/24/2016 - 13:30


If you’re managing risk in Excel but your requirements are in a separate tool, you are creating a gap that takes significant time to manage and increases the possibility of errors.

  • How hard is it to keep your risk register updated as requirements change?
  • Do you know the current status of your mitigating actions?
  • Can you assign actions, see who is responsible for each action, and see if they are making progress?
  • And, most importantly, can you relate all this information back to your requirements?

Watch this webinar recording to learn more about closing the gap between risk and requirements. Using an FMEA as an example, Gordon Alexander, Seapine Software solutions consultant,  explains how an integrated risk and requirements management solution can save you time and help you systematically and continuously manage risks and requirements.

Categories: Companies

Reviewing "Context Driven Approach to Automation in Testing"

Chris McMahon's Blog - Fri, 06/24/2016 - 01:45


I recently had occasion to read the "Context Driven Approach to Automation in Testing". As a professional software tester with extensive experience in test automation at the user interface (both UI and API) for the last decade or more for organizations such as Thoughtworks, Wikipedia, Salesforce, and others, I found it a nostalgic mixture of FUD (Fear, Uncertainty, Doubt), propaganda, ignorance and obfuscation. 

It was weirdly nostalgic for me: take away the obfuscatory modern propaganda terminology and it could be an artifact directly out of the test automation landscape circa 1998 when vendors, in the absence of any competition, foisted broken tools like WinRunner and SilkTest on gullible customers, when Open Source was exotic, when the World Wide Web was novel. Times have changed since 1998, but the CDT approach to test automation has not changed with it. I'd like to point out the deficiencies in this document as a warning to people who might be tempted to take it seriously.

The opening paragraph is simply FUD. If we take out the opinionated language

poorly applied
terrible waste
confusion
pain
hard
shallow, narrow, and ritualistic
pandemic, rarely examined, and absolutely false

what's left is "Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests". This is of course trivially true, if not an outright tautology. The authors then proceed to demonstrate how little they know about such complexities.

The sections that follow down to the bits about "Invest in..." are mostly propaganda with some FUD and straw-man arguments about test automation strewn throughout. ("The only reason people consider it interesting to automate testing is that they honestly believe testing requires no skill or judgment" Please, spare me.) If you've worked in test automation for some time (and if you can parse the idiosyncratic language), there is nothing new to read here, this was all answered long ago. Again, much of these ten or so pages for me brought strong echoes of the state of test automation in the late 1990s. If you are new to test automation, consider thinking of this part of the document as an obsolete, historical look into the past. There are better sources for understanding the current state of test automation.

The sections entitled (as of June 2016) "Invest in tools that give you more freedom in more situations" and "Invest in testability" are actually all good basic advice, I can find no fault in any of this. Unfortunately the example shown in the sections that follow ignores every single piece of that advice.

Not only does the example that fills the final part of the paper ignore every bit of advice the authors give, it is as if the authors have chosen a project doomed to fail, from the odd nature of the system they've chosen to automate, to the wildly inappropriate tools they've chosen to automate it with.

Their application to be tested is a lightweight text editor they've gotten as a native Windows executable. Cursory research shows it is an open source project written in C++ and Qt, and the repo on github  has no test/ or spec/ directory, so it is likely to be some sort of cowboy code under there. I assume that is why they chose this instead of, say, Microsoft Word or some more well engineered application.

Case #1 and Case #2 describe some primitive mucking around with grep, regular expressions, and configuration. It would have been easier just to read the source on github. If this sort of thing is new to you, you probably haven't been doing this sort of work long, and I would suggest you look elsewhere for lessons.

Case #3 is where things get bizarre. First they try automating the editor with something called "AutoHotKey", which seems to be some sort of ad-hoc collection of Windows API client calls, which according to the AutoHotKey project history is wildly buggy as of late 2013 but has had some maintenance off and on since then. I would not depend on this tool in a production environment.

That fails, so then they try some Ruby libraries. Support for Windows on Ruby is notoriously bad, it's been a sticking point in the Ruby community for years, and any serious Ruby programmer would know that. Ruby is likely the worst possible language choice for a native Windows automation project. If all you have is a hammer...

Then they resort to some proprietary tool from HP. You can guess the result.

Again, assuming someone would want to automate a third-party Windows/Qt app at all, anyone serious about automating a native Windows app would use a native Windows language, C# or VisualBasic.NET, instead of some hack like AutoHotKey. C# and VisualBasic.NET are really the only reasonable choices for such a project.

It is as if this project has been deliberately or naively sabotaged. If this was done deliberately, then it is highly misleading; if naively, then it is simply sad.

Finally I have to point out (relevant to the article section "Invest in testability", and again strong shades of 1998) that this paper completely ignores the undeniable fact that the vast majority of modern software development takes place on the web, with the UI appearing in a web browser and APIs offered from servers over a network.  This article makes no mention that selenium/webdriver is a UI automation standard adopted by the World Wide Web Consortium (W3C), that the webdriver automation interface is fully supported by every major browser vendor:  Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, Opera, and most recently Apple Safari, or that the Selenium API is fully supported in five programming languages: C#, Java, Ruby, Python, and Javascript, and partially supported in many more.

Ultimately, this article is mostly FUD, propaganda, and obfuscation. The parts that are not actually wrong or misleading are naive and trivial. Put it like this: if I were considering hiring someone for a testing position, and they submitted this exercise as part of their application, I would not hire them, even for a junior position. I would feel sorry for them.



Categories: Blogs

Automatic Problem Detection with Dynatrace

Can you imagine automatic problem detection being a reality?! What would it take to make it possible, practical and functional? Over the years we at Dynatrace have seen a lot of PurePaths being captured in small to very large applications showing why new deployments simply fail to deliver the expected user experience, scalability or performance. Since I started my […]

The post Automatic Problem Detection with Dynatrace appeared first on about:performance.

Categories: Companies

Don’t miss the latest in load testing and performance testing at this webinar

HP LoadRunner and Performance Center Blog - Thu, 06/23/2016 - 21:26


Keep reading to better understand the new capabilities of the new LoadRunner, Performance Center, and Network Virtualization v12.53, and attend the complete webinar.

Categories: Companies

Agile Hiring, Load Testing & Goal Management in Methods & Tools Summer 2016 issue

SQA Zone - Thu, 06/23/2016 - 16:09
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Summer 2016 issue that discusses hiring for agility, load testing scripts errors, managing with goals on every level a ...
Categories: Communities
