
Companies

Browser Calibration for Remote Machines in the Latest Test Studio

Telerik TestStudio - Tue, 06/27/2017 - 23:08
With the second major release for 2017, the Test Studio team delivers some great, customer-requested features and focuses on making the product even more stable. By Iliyan Panchev
Categories: Companies

Latest Test Studio Brings New Reports and HTML Results

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Test Studio continues to make test automation even easier with the first major release of 2017. By Iliyan Panchev
Categories: Companies

How We Test Software: Chapter Four Part II—Developer Tools

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Have you wondered how the teams working on Telerik products test software? In the final chapter of our detailed guide, we give you deeper insight into the processes of our Web Division. By Antonia Bozhkova
Categories: Companies

How We Test Software: Chapter Four—Telerik Developer Tools

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Have you wondered how the teams working on Telerik products test software? In the next chapter of our detailed guide, we give you deeper insight into the processes of our Dev Tools division. By Antonia Bozhkova
Categories: Companies

How We Test Software: Chapter Three Part III—Telerik Business Services

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Have you wondered how the teams working on Telerik products test software? In the next chapter of our detailed guide, we give you deeper insight into the processes of our Business Services. By Anton Angelov
Categories: Companies

Test Studio R3 Release Webinar Recording and Q&A

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Just a few weeks ago, the third major release of Telerik Test Studio for 2016 ushered in loads of new features like Angular 2 support, Fiddler integration for API testing, support for NativeScript and iOS 10, and more. These were all demoed at our usual post-release webinar last week. Here’s a recap of the Q&A session. By Antonia Bozhkova
Categories: Companies

Introducing Fiddler for OS X Beta 1

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Over the years, we have received numerous requests from our user community to provide a Fiddler build for OS X. So we have ported the latest version of Fiddler to the Mono Framework which in turn supports OS X—and you can grab the beta bits today. By Tsviatko Yovtchev
Categories: Companies

How We Test Software: Chapter Three Part II—Telerik Business Services

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Have you wondered how the teams working on Telerik products test software? We continue with the next chapter in our detailed guide, giving you deeper insight into our very own processes. This chapter focuses on Telerik Business Services. By Anton Angelov
Categories: Companies

Test Studio Goes Big with Angular, Mobile and API Testing

Telerik TestStudio - Tue, 06/27/2017 - 23:08
The third major Test Studio update of the year just came out today and it adds loads of new bits to our API testing and Mobile testing solutions. By Antonia Bozhkova
Categories: Companies

How We Test Software: Chapter Three—Telerik Business Services

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Have you wondered how the teams working on Telerik products test software? In the next chapter of our detailed guide, we give you deeper insight into the processes of our Business Services. By Anton Angelov
Categories: Companies

How We Test Software: Chapter Two—Telerik Platform Part Two

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Have you wondered how the teams working on Telerik products test software? We continue with the next chapter in our detailed guide, giving you deeper insight into our very own processes. By Angel Tsvetkov
Categories: Companies

Test Studio R2 Release Webinar Wrap Up and Q&A

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Last week we hosted a release webinar on the latest Test Studio features, including a new API tester. Here's a recap of some of the interesting questions we got during the live webcast. By Antonia Bozhkova
Categories: Companies

Be an API Testing Hero with the New Test Studio

Telerik TestStudio - Tue, 06/27/2017 - 23:08
The new Test Studio release is here! We are now offering Git integration, MS Edge support, and provisioning for Android devices, along with the Test Studio for APIs beta. By Antonia Bozhkova
Categories: Companies

Webinar Follow-up: New Testing Battlefields

Telerik TestStudio - Tue, 06/27/2017 - 23:08
Four testing experts recently explored testing beyond the "traditional" desktop and web, including the new battlefields of Mobile and IoT. Read on for a recap or to watch a webinar replay. By Jim Holmes
Categories: Companies

Are You Ready for the New Testing Battlefields?

Telerik TestStudio - Tue, 06/27/2017 - 23:08
A software-defined way of living and the digital transformation of traditional businesses are not the future. They are already here. This brings challenges to testers and developers alike. Join this one-hour roundtable discussion with industry experts to hear what’s new in testing today. By Antonia Bozhkova
Categories: Companies

The Difference Between DevOps and Everything Else

Sonatype Blog - Tue, 06/27/2017 - 16:09
In my role I get to attend several conferences, meet with customers, give talks, and sit on a lot of panel discussions where the main topic is DevOps. I can report that while there has been a decline in folks asking, "what is DevOps," it is a question that still lingers. For many, the...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

OpenStack monitoring beyond the Elastic Stack – Part 2: Monitoring tool options

This article is the second part of our OpenStack monitoring series. Part 1 explores the state of OpenStack, and some of its key terms. In this post we will take a closer look at what your options are in case you want to set up a monitoring tool for OpenStack.

The OpenStack monitoring space: Monasca, Ceilometer, Zabbix, Elastic Stack – and what they lack

Monasca

Monasca is the OpenStack Community’s in-house project for monitoring OpenStack. Defined as “monitoring-as-a-service”, Monasca is a multi-tenant, highly scalable, fault-tolerant open source monitoring tool. It works with an agent and it’s also easily extendable with plugins. After installing it on the node, users have to define what should be measured, what statistics should be collected, what should trigger an alarm, and how they want to be notified. Once set, Monasca shows metrics like disk usage, CPU usage, network errors, ZooKeeper average latency, and VM CPU usage.

Ceilometer

Even though it’s a bit far-fetched to say that Ceilometer is an OpenStack monitoring solution, I decided to put it in this list because many people refer to it as a monitoring tool. The reality is, Ceilometer is the telemetry project of the OpenStack Community, aiming to measure and collect infrastructure metrics such as CPU, network, and storage utilization. It is a data collection service designed for gathering usage data on objects managed by OpenStack, which are then transformed into metrics that can be retrieved by external applications via APIs. Also, Ceilometer is often used for billing based on consumption.

Zabbix

Zabbix is an enterprise open source monitoring software for networks and applications. It’s best suited to monitor the health of servers, network devices, and storage devices, but it doesn’t collect highly granular or deep metrics. Once installed and configured, Zabbix provides availability and performance metrics of hypervisors, service endpoints, and OpenStack nodes.

Elastic Stack

Perhaps the most widely used open source monitoring tool which also works well with OpenStack is the Elastic Stack (aka ELK Stack). It consists of three separate projects – Elasticsearch, Logstash, and Kibana – and is driven by the open source vendor Elastic.

The Elastic philosophy is simple: it couples good search capabilities with good visualization, which results in outstanding analytics. Their open source analytics tool – which now rivals big players like Microsoft, Oracle, and Splunk – supports OpenStack too.

Monitoring OpenStack with Elastic starts by installing and configuring the Elastic Stack’s log collector tool, Logstash. Logstash is the server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to Elasticsearch for indexing. Once installed and configured, Logstash starts to retrieve logs through the OpenStack API.

Through the API, you get good insights into OpenStack Nova, the component responsible for provisioning and managing the virtual machines. From Nova, you get the hypervisor metrics, which give an overview of the available capacities for both computation and storage. Nova server metrics provide information on the virtual machines’ performance. Tenant metrics can be useful in identifying the need for change with quotas in line with resource allocation trends. Logstash also monitors and logs RabbitMQ performance.
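
If you want to spot-check the figures that Logstash ships, the same Nova data can be pulled by hand with the OpenStack command-line client. The following is only a minimal sketch: it assumes python-openstackclient is installed and the usual OS_* credentials are exported, the project name is a placeholder, and exact command names can vary between OpenStack releases.

# Hypervisor inventory and aggregate capacity (vCPUs, RAM, local disk)
$ openstack hypervisor list
$ openstack hypervisor stats show

# Per-instance view across all projects (tenants)
$ openstack server list --all-projects --long

# Quota usage for a single project, useful for spotting allocation trends
$ openstack quota show my-project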

Finally, you want to visualize all the collected OpenStack performance metrics. Kibana is a browser-based interface that allows you to build graphical visualizations of the log data based on Elasticsearch queries. It allows you to slice and dice your data and create bar, line or pie charts and maps on top of large volumes of data.
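
Kibana visualizations are ultimately just Elasticsearch queries, so before building dashboards it can be worth confirming from the command line that the OpenStack logs are actually indexed and searchable. Here is a minimal curl sketch; the Elasticsearch host, the logstash-* index pattern, and the “program” field used to pick out the service are assumptions that depend on how your Logstash pipeline is configured.

# Count nova-api log events indexed during the last hour
$ curl -s -X POST "http://localhost:9200/logstash-*/_search?size=0" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"bool": {"must": [{"match": {"program": "nova-api"}}], "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}]}}}'

If the returned hit count is non-zero, the same query can be saved in Kibana and turned into a chart over @timestamp.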

What open source OpenStack monitoring tools lack

Monitoring OpenStack is not an easy task. Getting a clear overview of the complex application ecosystem built on OpenStack is even more difficult. The above-mentioned tools provide good visibility into different OpenStack components and use cases. However, they clearly have several disadvantages:

  • They are unable to see the causation of events
  • They fail at understanding data in context
  • They rely heavily on manual configuration

Because they are missing the big picture, companies often implement different monitoring tools for different silos. However, they quickly realize that with dozens of tools they are unable to identify the root cause of a performance issue. In these circumstances, how could they reduce MTTR and downtime? And with a number of separate tools, how could they ever see performance trends or predict capacity needs?

By using different monitoring tools for different use cases, companies miss out on exactly the monitoring capabilities that today’s complex business applications require.

Okay, so how is all of this possible with OpenStack? Is there any intelligent OpenStack monitoring tool? In the next part we investigate this by focusing on the Dynatrace way of monitoring OpenStack. Stay tuned!

The post OpenStack monitoring beyond the Elastic Stack – Part 2: Monitoring tool options appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Integrating Ranorex Test Cases into Jira

Ranorex - Tue, 06/27/2017 - 10:20

Jira is issue and project tracking software from Atlassian. The following article describes how you can integrate Ranorex test cases into Jira. That way, you enable Ranorex to submit or modify testing issues within Jira in an automated way.

As Jira offers a REST web service (API description available here), it is possible to submit issues automatically. This is achieved using the JiraRestClient and RestSharp. Please note that these libraries are not part of Ranorex and are therefore not covered by Ranorex support services. The libraries are wrapped with Ranorex functionality, forming reusable modules. Starting with Ranorex 7.0, they are already included in the Ranorex Jira NuGet package. The integration of these Jira testing modules into Ranorex test automation is described below.
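
For a sense of what the wrapped libraries do under the hood, here is a minimal sketch of the same kind of call made directly against Jira's standard create-issue endpoint with curl. The server URL, credentials, summary, and description are placeholders; the project key and issue type simply reuse the examples from the parameter list further down.

# Create a new Jira issue through the REST API (basic authentication)
$ curl -s -u "jira-user:jira-password" \
  -X POST -H "Content-Type: application/json" \
  -d '{"fields": {"project": {"key": "MYP"}, "issuetype": {"name": "Bug"}, "summary": "Automated Ranorex test case failed", "description": "Created automatically from a failed test run."}}' \
  "https://your-jira-server/rest/api/2/issue"

A successful call returns JSON containing the key of the newly created issue (for example MYP-25), which is exactly the kind of key the modules described below store and update.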

The following steps need to be performed:

Step 1 – Adding the Libraries to Ranorex for Jira Automation:

A NuGet package with predefined Ranorex modules is available from the “Manage Packages” dialog for Ranorex 7.x and higher. To add the NuGet package to your Ranorex project:

  • right-click on “References” in the Ranorex project view,
  • select “Manage Packages…”,
  • search for “Ranorex” and add the “Ranorex Jira Reporter” package.

Add NuGet Package

Predefined modules for Ranorex 6.x (for x86 architecture and .NET 4.0) are available here, and modules for Ranorex 5.x (for x86 architecture and .NET 3.5) are available here. The assemblies in the zip file just need to be added to the Ranorex project. Afterwards, the modules (as shown below) will appear in the module browser under “JiraReporter” (demonstrated with the Ranorex KeePass sample):

AddReference

Step 2 – Using the Modules in the Ranorex Test Suite

Individual modules are available in the “JiraReporter” project. These modules merely need to be used within the Ranorex Test Suite, as shown below:

Modules_TestSuite

The modules interact with Jira based on the results of the related test cases. Except for the initialization module, which should be part of the Ranorex setup region, it is recommended to place the modules in the test case’s teardown.

Available modules for Jira automation:

  • InitializeJiraReporter — This module establishes the connection to the Jira server. It is mandatory for the following modules to be functional.
  • AutoCreateNewIssueIfTestCaseFails — If the test case fails, an issue is automatically created on the Jira server defined in “InitializeJiraReporter”. The issue number is assigned automatically by the server.
    A compressed Ranorex report is automatically uploaded as well.
  • ReOpenExistingIssueIfTestCaseFails — If the test case fails, an existing and already closed issue gets re-opened.
  • ResolveIssueIfTestCaseSuccessful — If the test case is successful, an existing and already open issue is set to “resolved”.
  • UpdateExistingIssueIfTestCaseFails — If a test case fails, attributes of an existing issue are updated.
  • AutoHandleJiraIntegration — This module combines the modules AutoCreateNewIssueIfTestCaseFails, ReOpenExistingIssueIfTestCaseFails and ResolveIssueIfTestCaseSuccessful.

 

Step 3 – Configure Parameters for the Modules

The modules have different configurable variables. Each module accepts different parameters, but they are all used in the same way across the modules. Which module accepts which parameters can be seen when using the modules in the Ranorex project.

  • JiraUserName: The username used to connect to the Jira server.
  • JiraPassword: The password for the specified user.
  • JiraServerURL: The URL of the Jira server.
  • JiraProjectKey: The project key as specified in Jira (e.g., MYP).
  • JiraIssueType: An issue type, as available in Jira (e.g., Bug).
  • JiraSummary: Free summary text for the issue.
  • JiraDescription: Free description text for the issue.
  • JiraLabels: Labels for the issue, separated by “;” (e.g., Mobile; USB; Connection).
  • JiraIssueKey: The key of the respective issue (e.g., MYP-25).
  • StateClosed: The state that will be set when an issue is closed (e.g., Done).
  • StateReopened: The state that will be set when an issue is reopened (e.g., In Progress).
  • RxAutomationFieldName: The name of the custom field in which the test case name is stored. This field is used to identify one or more issues.
  • jqlQueryToConnectIssue: As an alternative to RxAutomationFieldName, a JQL query can be used to identify one or more issues.

The configuration of the modules is then done with common Ranorex data binding:

DataBinding

… and you’re done:

Ranorex will now automatically interact with Jira whenever one of the modules is executed. The issues can then be processed in Jira. The following screenshot shows an automatically created issue and its attached report:

JiraPic

 

Advanced usage: OnDemandCreateNewIssueIfTestCaseFails

This module enables you to create a new Jira issue directly from the Ranorex report. A new issue is only created when the link provided within the report is clicked, so the user or tester can decide whether or not an issue is created.

The compressed Ranorex report will also be automatically uploaded to the newly created issue.

rxReport

Note: This functionality relies on a batch file created by Ranorex in the output folder and on the execution of the Jira Command Line Interface (CLI). It does not depend on a prior initialization from “InitializeJiraReporter”.

The module exposes the same variables as the modules mentioned above. An additional parameter is essential for this module:

  • JiraCLIFileLocation: The full path to the “jira-cli-<version>.jar” file, provided by the Jira CLI.

Please note that the following requirements need to be met to use this module:

  • Remote API must be enabled in your JIRA installation
  • The mentioned batch file needs to be accessible under the same file path where it was initially created. If the file is moved to a new location, the link will no longer work.
    In this case, the batch file needs to be started manually.

 

JiraReporter Source Code:

The whole project which contains the code for the JiraReporter is available on GitHub under the following link:

https://github.com/ranorex/Ranorex-Jira-Integration

Please feel free to modify the code according to individual needs and/or upload new modules.
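
If you want to adapt the modules, a simple starting point is to clone the repository and build the JiraReporter code alongside your own Ranorex solution:

$ git clone https://github.com/ranorex/Ranorex-Jira-Integration.git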

 

Troubleshooting:
  • Assembly can’t be loaded
    If an error similar to the following occurs, you have to enable loading from “RemoteSources”.
    Error message: Could not load file or assembly ‘file:///C:\Ranorex\JiraOldTest\JiraOldTest\bin\Debug\JiraReporter.dll’ or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)
    An attempt was made to load an assembly from a network location which would have caused …

    • Add the following lines to the app.config of the Ranorex project:
      <runtime>
        <loadFromRemoteSources enabled="true" />
      </runtime>

      Jira_AssemblyNotLoaded

    • “Unblock” all .dll files
      (open the properties of each of them and click “Unblock”); an example of how this is done is available here.
  • Exception — “JIRA returned wrong status”
    If you’re encountering an error like
    ‘{“errorMessages”:[], “errors”:{“summary”:”Field ‘summary’ cannot be set. It is not on the appropriate screen, or unknown.”, …}}‘,
    it is very likely that your Jira installation is customized in some way. This message means that the library is trying to set a field that is not available when performing the desired action. Please check your Jira installation as well as the respective action for all necessary and available fields. To overcome this issue, you have to extend and recompile the underlying library. A potential starting point for modifications would be the JiraReporter class.

Please note that this blog post was originally published on November 26, 2014, but has been revised to reflect recent technical developments.

Download free trial

The post Integrating Ranorex Test Cases into Jira appeared first on Ranorex Blog.

Categories: Companies

A Tale of Four Bugs

This is a post about a recent chain of interconnected bugs and mistakes that we found. I feel there is learning in this tale of many interconnected bugs/mistakes…even if I cannot quite place my finger on what exactly that learning is.

So our story all begins with the great UI refactoring that is JENKINS-43507…

Ideally, any change should be small. Code review works best when the changes are small…but we also have to balance that with ensuring that the changes are complete, so while the refactoring started off as a series of small changes, there came a point of The Great Switcheroo™ where it was necessary to swap everything over, with new tests to cover the switch-over.

Ideally a lot of the preparation code could have been merged in small change requests one at a time, but it can be hard to test code until it can be used, and adding a change request that consists of new code that isn’t used (yet) and cannot be tested (yet) can make things hard to review… anyway that is my excuse for a collection of change requests that clock in at nearly 40k LoC.

If we take the GitHub Branch Source PR as an example of the code: github-branch-source-plugin#141

GitHub reports this as:

GitHub diffstats for github-branch-source-plugin#141

This is the part of the story wherein I provide some excuses as to why the PR is not really that big; if you are not interested in excuses you can safely skip it!

Scary…well let’s take a look at that in more detail:

# number of lines of code in new files
$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;') ; \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p' ; done | wc -l
   11343
# number of lines of code in deleted files
$ for removed in $(git diff master --name-status | sed -n -e 's/^D.//gp;') ; \
do git diff master -- "$removed" | sed -n -e '/^--- a/d;/^-/p' ; done | wc -l
     155
# number of lines of code removed from existing files
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;') ; \
do git diff master -- "$modified" | sed -n -e '/^--- a/d;/^-/p' ; done | wc -l
    1320
# number of lines of code added to existing files
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;') ; \
do git diff master -- "$modified" | sed -n -e '/^+++ a/d;/^+/p' ; done | wc -l
    2739

Still looking like a lot of code… but we can dig further

# number of lines of test code in new files
$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;' | grep 'src/test') ; \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p' ; done | wc -l
    8405
# number of lines of test code in deleted files
$ for removed in $(git diff master --name-status | sed -n -e 's/^D.//gp;' | grep 'src/test') ; \
do git diff master -- "$removed" | sed -n -e '/^--- a/d;/^-/p' ; done | wc -l
       0
# number of lines of test code removed from existing files
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;' | grep 'src/test') ; do \
git diff master -- "$modified" | sed -n -e '/^--- a/d;/^-/p' ; done | wc -l
       5
# number of lines of test code added to existing files
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;' | grep 'src/test') ; \
do git diff master -- "$modified" | sed -n -e '/^+++ a/d;/^+/p' ; done | wc -l
      14

So of the 14082 lines “added” (the CLI count), 8405 of those lines were test code and test data…

$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;' | grep 'src/test/java'); \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p'  ; done | wc -l
    6355
$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;' | grep 'src/test/resources'); \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p'  ; done | wc -l
    2050

OK, at least half of the new code is actually new tests. We can do a similar analysis on the production code new files:

$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;' | grep 'src/main/java'); \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p' ; done | wc -l
    2735
$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;' | grep 'src/main/resources'); \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p' ; done | wc -l
     203

I tend to add a lot of comments and Javadoc to production code… so what is left if we strip that out (and blank lines):

$ for created in $(git diff master --name-status | sed -n -e 's/^A.//gp;' | grep 'src/main/java'); \
do git diff master -- "$created" | sed -n -e '/^+++ a/d;/^+/p' ; done | sed -e 's/^+//' | \
sed -e '/^ *\*/d;/^ *\/\*/d;/^ *$/d;/^ *\/\//d' | wc -l
    1327

So more than half of the new production code that I wrote is actually comments…

What about the files that I changed:

# including comments (added lines)
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;' | grep 'src/main/java'); \
do git diff master -- "$modified" | sed -n -e '/^+++ a/d;/^+/p'  ; done | sed -e 's/^+//' | wc -l
    2651
# including comments (removed lines)
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;' | grep 'src/main/java'); \
do git diff master -- "$modified" | sed -n -e '/^--- a/d;/^-/p'  ; done | sed -e 's/^-//' | wc -l
    1214

# excluding comments (added lines)
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;' | grep 'src/main/java'); \
do git diff master -- "$modified" | sed -n -e '/^+++ a/d;/^+/p'  ; done | sed -e 's/^+//' | \
sed -e '/^ *\*/d;/^ *\/\*/d;/^ *$/d;/^ *\/\//d' | wc -l
    1934
# excluding comments (removed lines)
$ for modified in $(git diff master --name-status | sed -n -e 's/^M.//gp;' | grep 'src/main/java'); \
do git diff master -- "$modified" | sed -n -e '/^--- a/d;/^-/p'  ; done | sed -e 's/^-//' | \
sed -e '/^ *\*/d;/^ *\/\*/d;/^ *$/d;/^ *\/\//d' | wc -l
    1040

So what this means is that the 13-14k LoC PR is about 3k of new production code (I would argue it is about 1k lines moved and 2k lines added)…which is a lot…but not as bad as the diffstat initially said…and we got about twice that amount of new tests.

So yeah, the pull request should not be that big, but we reached the point where small refactorings could not continue while being reviewed in context.

This is the end of the excuses.

First off, in this refactoring of the GitHub Branch Source I made a simple mistake:

Mistake 1

In all the changes in the PR, I was refactoring old methods such that they called through to the corresponding new methods (to retain binary API compatibility).

Pop-Quiz: Can you spot the mistake in maintaining binary API compatibility in this code?

New code with mistake

For comparison, here is what the old code looked like:

Old code

The mistake is that the effective behavior of the code is changed. I had maintained binary compatibility. I had deliberately not maintained source compatibility (so that when you update the dependency you are forced to switch to the new method) but I was missing behavioral compatibility.

The fix is to add these four lines:

Fixed code

So, I hear you ask, with 6k LoC of new tests, how come you didn’t catch that one?

Mistake 2

The existing tests all called the now deprecated setBuildOriginBranch(boolean), setBuildOriginBranchWithPR(boolean), etc. methods in order to configure the branch and pull request discovery settings to those required for the test. Those methods were changed. Previously they were simple setters that just wrote to the backing boolean field. One of the points of this PR is to refactor away from 6 boolean fields with 64 combinations and replace them with more easily tested traits, so the setters will add or update the traits as necessary:

Legacy setter adds traits when missing

So because the tests were setting up the instance to test explicitly, they were not going to catch any issues with the legacy constructor’s default behavior settings, though they did catch some issues with my migration logic.

I used code coverage to verify that I had tests for all of the new methods containing logic…so of course I had added tests like:

Test of legacy setter

Which were checking the branch point in the constructor…so when self-reviewing the code I looked at the 100% code coverage for the method and said Woot! (This was Mistake 2)

Can't have low coverage... if you don't have the code for the missing tests

I had not got any tests that verified the behavioral contract of the legacy constructor.

Mistake 3

Now these plugins have a semi-close coupling with BlueOcean, so one of our critical acceptance criteria is verification against BlueOcean.

Acceptance criteria

The first step in all that was to bump the dependency versions in BlueOcean to run the acceptance tests…

Now you may remember that I said I had explicitly broken source compatibility for the legacy methods. This was in order to catch cases where people assume that the old getters/setters are exclusively the entirety of the configuration. If you are copying or re-creating a GitHubSCMNavigator instance via code and you use the legacy methods, the new instance will be invalid; your code needs to upgrade to the new traits-based API to function correctly.

So when I bumped the dependencies in BlueOcean without changing the code, my expectation was that the build would blow up because of the source incompatibility, and it would then be a matter of compiler-assisted replacement of the legacy methods… oh but little did I count on this subtle behavioural change between ASM4 and ASM5…

ASM5 needed a signature change so we can enforce @Restricted

Back in early May, Jesse spotted and fixed this issue with the ASM upgrade… 

Without blaming anyone, the mistake here is that the BlueOcean code had not picked up the fix, so there were no compiler errors. The code compiled correctly.

This turns out to be fortuitous…

Mistake 4

BlueOcean’s create flow for GitHub needs to reconfigure the GitHubSCMNavigator to add each repository that is “created” into the regex of named repositories to discover.

Now, in hindsight, there is a lot wrong with that… but the mistake was to recreate a new instance of the GitHubSCMNavigator each time, rather than reconfigure the instance.

In fact the original code even had a setter for the regex field:

Old code had setPattern(...)

So to some extent there was no need to replace the existing instance with every change:

Replacing the instance always (last line)

But, in principle there should be nothing wrong with replacing it each time…and in any case the new repository may require the credentialsId to be updated and the pre-JENKINS-43507 code used a final field and did not provide a setter…

The mistake here was not to replicate the rest of the configuration. In effect, every time you created a pipeline on GitHub using the BlueOcean creation flow, you blew away any configuration that had been applied via the classic UI: JENKINS-45058…the code should really have looked like this:

Pre-JENKINS-43507 fix for JENKINS-45058

So how did we discover all four of these mistakes?

Well my PR #1186 that bumped the plugin versions had test failures:

OMG! Failing tests, it is the end of days

The fact that the compilation succeeded rather than failing as expected (because of the @Restricted annotation) exposed Mistake 3.

Then Mistake 4 actually exposed Mistake 1… if BlueOcean had been preserving the configuration on creation, these tests might not have failed…certainly a manual verification of the test scenario might have resulted in the test failure being chalked up as a bad test, but because the configuration was continually being reset to the constructor default, the manual verification forced Mistake 1 to the surface:

Manual verification escalates attention

So Mistake 1 would have been caught if we didn’t have Mistake 2…

Once you catch a mistake in production code, you typically add tests, so part of fixing Mistake 1 was to also fix Mistake 2.

If it were not for Mistake 3, running the BlueOcean tests with the updated plugin versions would have required code changes that would probably have bypassed Mistake 4 and we would have missed Mistake 1 in making those code changes…

Without Mistake 4 we might not have found Mistake 1…

Without Mistake 1 we might not have found Mistake 4…

Four interrelated mistakes, and without any one of them we might not have found any of the others.

As I said at the beginning, I feel there is learning in this tale of many interconnected bugs/mistakes…even if I cannot quite place my finger on what exactly that learning is.

Hopefully you have enjoyed reading this analysis!

 

Blog Categories: Developer Zone
Categories: Companies

You Don’t Have Test Cases, Think Again

PractiTest - Mon, 06/26/2017 - 14:00

NOTICE:
We, at the QABlog, are always looking to share ideas and information from as many angles of the Testing Community as we can.  

Even if at times we do not subscribe to all the points and interpretations, we believe there is value in listening to all sides of the dialogue, and in allowing this to help us think about where we stand on the different issues being discussed by our community.

We want to invite people who would like to share their own opinion on this or any other topic to get in touch with us with their ideas and articles. As long as we believe an article is written in good faith and provides valid arguments for its views, we will be happy to publish it and share it with the world.

Let communication make us smarter, and productive arguments seed the ideas to grow the next generation of open-minded testing professionals!

*The following is a guest post by Robin F. Goldsmith, JD, Go Pro Management, Inc. The opinions stated in this post are his own. 

 

You Don’t Have Test Cases, Think Again

Recently I’ve been aware of folks from the Exploratory Testing community claiming they don’t have test cases. I’m not sure whether it’s ignorance or arrogance, or yet another example of their trying to gain acceptance of alternative facts that help aggrandize them. Regardless, a number of the supposed gurus’ followers have drunk the Kool-Aid, mindlessly mouthing this and other phrases as if they’d been delivered by a burning bush.

What Could They Be Thinking?

The notion of not having test cases seems to stem from two mistaken presumptions:

1. A test case must be written.
2. The writing must be in a certain format, specifically a script with a set of steps and lots of keystroke-level procedural detail describing how to execute each step.

Exploratory Testing originated with the argument that the more time one spends writing test cases, the less of one's limited test time is left for actually executing tests. That's true. The conclusions Exploratory claims flow from it are not so true, because they are based on the false presumptions that the only alternative to Exploratory is such mind-numbingly tedious scripts and that Exploratory is the only alternative to such excessive busywork.

Exploratory’s solution is to go to the opposite extreme and not write down any test plans, designs, or cases to guide execution, thereby enabling the tester to spend all available time executing tests. Eliminating paperwork is understandably appealing to testers, who generally find executing tests more interesting and fun than documenting them, especially when extensive documentation seems to provide little actual value.

Since Exploratory tends not to write down anything prior to execution, and especially not such laborious test scripts, one can understand why many Exploratory testers probably sincerely believe they don’t have test cases. Moreover, somehow along the way, Exploratory gurus have managed to get many folks even beyond their immediate followers to buy into their claim that Exploratory tests also are better tests.

But, In Fact…

If you execute a test, you are executing and therefore have a test case, regardless of whether it is written and irrespective of its format. As my top-tip-of-the-year “What Is a Test Case?” article explains, at its essence a test case consists of inputs and/or conditions and expected results.

Inputs include data and actions. Conditions already exist and thus technically are not inputs, although some implicitly lump them with inputs; and often simulating/creating necessary conditions can be the most challenging part of executing a test case.

Exploratory folks often claim they don’t have expected results; but of course they’re being disingenuous. Expected results are essential to delivering value from testing, since expected results provide the basis for the test’s determination of whether the actual results indicate that the product under test works appropriately.

Effective testing defines expected results independently of and preferably prior to obtaining actual results. Folks fool themselves when they attempt to figure out after-the-fact whether an actual result is correct—in other words, whether it’s what should have been expected. Seeing the actual result without an expected result to compare it to reduces test effectiveness by biasing one to believe the expected result must be whatever the actual result was.

Exploratory gurus have further muddied the expected results Kool-Aid by trying to appropriate the long-standing term “testing,” claiming a false distinction whereby non-Exploratory folks engage in a lesser activity dubbed “checking.” According to this con, checking has expected results that can be compared mechanically to actual results. In contrast, relying on the Exploratory tester’s brilliance to guess expected results after-the-fact is supposedly a virtue that differentiates Exploratory as superior and true “testing.”

Better Tests?

Most tests’ actual and expected results can be compared precisely—what Exploratory calls “checking.” Despite Exploratory’s wishes, that doesn’t make the test any less of a test. Sometimes, though, comparison does involve judgment to weigh various forms of uncertainty. That makes it a harder test but not necessarily a better test. In fact, it will be a poorer test if the tester’s attitudes actually interfere with reliably determining whether actual results are what should have been expected.

I fully recognize that Exploratory tests often find issues traditional, especially heavily-procedurally-scripted, tests miss. That means Exploratory, like any different technique, is likely to reveal some issues other techniques miss. Thus, well-designed non-Exploratory tests similarly may detect issues that Exploratory misses. What can’t be told from this single data point is whether Exploratory tests in fact are testing the most important things, how much of importance they’re missing, how much value actually is in the different issues Exploratory does detect, and how much better the non-Exploratory tests could have been. Above all, it does not necessarily mean Exploratory tests are better than any others.

In fact, one can argue Exploratory tests actually are inherently poorer because they are reactive. That is, in my experience Exploratory testing focuses almost entirely on executing programs, largely reacting to the program to see how it works and try out things suggested by the operating program’s context. That means Exploratory tests come at the end, after the program has been developed, when detected defects are hardest and most expensive to fix.

Moreover, reacting to what has been built easily misses issues of what should have been built. That’s especially important because about two-thirds of errors are in the design, which Exploratory’s testing at the end cannot help detect in time to prevent their producing defects in the code. It’s certainly possible an Exploratory tester does get involved earlier. However, since the essence of Exploratory is dynamic execution, I think one would be hard-pressed to call static review of requirements and designs “Exploratory.” Nor would Exploratory testers seem to do it differently from other folks.

Furthermore, some Exploratory gurus assiduously disdain requirements; so they’re very unlikely to get involved with intermediate development deliverables prior to executable code. On the other hand, I do focus on up-front deliverables. In fact, one of the biggest-name Exploratory gurus once disrupted my “21 Ways to Test Requirements Adequacy” seminar by ranting about how bad requirements-based testing is. Clearly he didn’t understand the context.

Testing’s creativity, challenge, and value are in identifying an appropriate set of test cases that together must be demonstrated to give confidence something works. Part of that identification involves selecting suitable inputs and/or conditions, part of it involves correctly determining expected results, and part of it involves figuring out and then doing what is necessary to effectively and efficiently execute the tests.

Effective testers write things so they don’t forget and so they can share, reuse, and continually improve their tests based on additional information, including from using Exploratory tests as a supplementary rather than sole technique.

My Proactive Testing™ methodology economically enlists these and other powerful special ways to more reliably identify truly better important tests that conventional and Exploratory testing commonly overlook. Moreover, Proactive Testing™ can prevent many issues, especially large showstoppers that Exploratory can’t address well, by detecting them in the design so they don’t occur in the code. And, Proactive Testing™ captures content in low-overhead written formats that facilitate remembering, review, refinement, and reuse.

About the Author

Robin F. Goldsmith, JD helps organizations get the right results right. President of Go Pro Management, Inc., a Needham, MA consultancy which he co-founded in 1982, he works directly with and trains professionals in requirements, software acquisition, project management, process improvement, metrics, ROI, quality, and testing.

Previously he was a developer, systems programmer/DBA/QA, and project leader with the City of Cleveland, leading financial institutions, and a “Big 4” consulting firm.

Author of the Proactive Testing™ risk-based methodology for delivering better software quicker and cheaper, numerous articles, the Artech House book Discovering REAL Business Requirements for Software Project Success, the forthcoming book Cut Creep—Put Business Back in Business Analysis to Discover REAL Business Requirements for Agile, ATDD, and Other Project Success, and a frequent featured speaker at leading professional conferences, he was formerly International Vice President of the Association for Systems Management and Executive Editor of the Journal of Systems Management. He was Founding Chairman of the New England Center for Organizational Effectiveness. He belongs to the Boston SPIN and served on the SEPG’95 Planning and Program Committees. He is past President and current Vice President of the Software Quality Group of New England (SQGNE).

Mr. Goldsmith chaired the attendance-record-setting BOSCON 2000 and 2001, ASQ Boston Section’s Annual Quality Conferences, and was a member of the working groups for the IEEE Software Test Documentation Std. 829-2008 and IEEE Std. 730-2014 Software Quality Assurance revisions, the latter of which was influenced by his Proactive Software Quality Assurance (SQA)™ methodology. He is a member of the Advisory Boards for the International Institute for Software Testing (IIST) and for the International Institute for Software Process (IISP). He is a requirements and testing subject expert for TechTarget’s SearchSoftwareQuality.com and an International Institute of Business Analysis (IIBA) Business Analysis Body of Knowledge (BABOK v2) reviewer and subject expert.

He holds the following degrees: Kenyon College, A.B. with Honors in Psychology; Pennsylvania State University, M.S. in Psychology; Suffolk University, J.D.; Boston University, LL.M. in Tax Law. Mr. Goldsmith is a member of the Massachusetts Bar and licensed to practice law in Massachusetts.

www.gopromanagement.com
robin@gopromanagement.com

 

The post You Don’t Have Test Cases, Think Again appeared first on QA Intelligence.

Categories: Companies