
Feed aggregator

When You Should Choose Manual vs. Automated Testing

uTest - Tue, 12/16/2014 - 20:54

The following is a guest post to the uTest Blog by Eli Lopian of Typemock.

QA analysts and IT firms are often confronted with the same question when testing a software project — whether to go with manual software testing options or to try out new automated techniques.

In certain situations, there are clear advantages to working with automated software testing solutions, and other times the automated software technology is too leading-edge and could wind up costing you way more than it’s worth. That’s why it’s essential to weigh the costs and benefits according to each project.

Manual Software Testing 

Manual software testing is the process of running each individual program or series of tasks by hand and comparing the results against expectations in order to find defects. Essentially, manual testing means using the program as a user would, under all possible scenarios, and making sure all of its features behave appropriately.

Selecting every setting within a software package by hand can be rather tedious. For instance, you might be testing global software with hundreds of country and region settings, so you need to make sure each country is paired with the appropriate currency. To test this manually, you would select a country and check that it shows the right currency. If a program has only a few options, however, it is much more manageable to run through each selection and its outcome by hand.

Usually, when you’re working for a small company with few financial resources, this is going to be your best option. A big advantage of manual testing is the ability to see real user issues: unlike an automated script, a human working through the software manually will notice the kinds of bugs a real user could face.

Manual testing also gives you a lot more flexibility. With automated tools, it’s difficult to change values in the program once you’ve begun testing. When testing manually, you can quickly try a change, check the result, and see which ideas work best.

In general, automated testing wouldn’t make sense for short-term projects because the upfront cost is too high. In addition, if you’re testing for things that require a human touch, like usability, it’s better to have a human tester. Companies with little expertise in the area are also advised to begin with manual testing. Once the team has mastered testing risks and test coverage, it can then move toward automation.

Automated Software Testing

Automated software testing uses tools to run scripted tests that compare the developing program’s expected outcomes with its actual outcomes. If the outcomes match, your program is behaving properly and is most likely free of that class of bug. If they don’t match, you have to take another look at your code, alter it, and run the tests again until the outcomes align.
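As a minimal sketch of that expected-vs-actual comparison (the add() function and its values here are purely illustrative, not from any real project):

```groovy
// Hypothetical function under test
int add(int a, int b) { return a + b }

// An automated check compares the actual outcome to the expected one.
// A failing assert flags a defect; a passing one means this behavior is correct.
assert add(2, 3) == 5
```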

According to John Overbaugh, a senior SDET lead at Microsoft, “It only makes sense to use automated testing tools when the costs of acquiring the tool and building and maintaining the tests is less than the efficiency gained from the effort.”

Automated testing is best used when you’re working on a large project with many system users. Its biggest advantages are speed and repeatability: once your initial tests are set up, repeating them is an easy process; the same information is filled in and everything is checked for you automatically.

Richard Fennel, the engineering director at Black Marble, explains the significance of using automated tools: “The addition of automated testing has helped to shorten the delivery cycle, as we are no longer limited to the slow and complex SharePoint development experience. It has not removed the need for the traditional development cycle completely, but much of the validation, particularly for web parts, has been made far easier.”

Automated testing also keeps developers’ minds engaged. Typing the same information into the same forms over and over again during manual testing is tedious work, whereas designing test cases for automation takes a technical mind and keeps you on your toes. The results are also more accessible to the rest of the team: when using automated tests, any member of the team can see the results published to the testing system. This makes for better team collaboration and a better overall product.

There are quite a few project-specific considerations to weigh when deciding between manual and automated testing. Make sure you reach an educated, well-informed decision that best fits your project’s needs.

Do you have any other suggestions on whether to choose automated tools or just go manually? What’s worked for you in the past? Feel free to sound off in the comments below.

Eli Lopian is the Founder and Head of Products & Technology at Typemock. He enjoys a white board, code and transforming developing environments. Secretly, his one true love is Unit Testing and he has dedicated his life to making unit testing easier for everyone else.

Categories: Companies

Getting Started Testing in Python

Testing TV - Tue, 12/16/2014 - 20:34
If you’ve never written tests before, you probably know you *should*, but view the whole process as a bureaucratic paperwork nightmare to check off on your ready-to-ship checklist. This is the wrong way to approach testing. Tests are a solution to a problem that is important to you: does my code work? This presentation shows […]
Categories: Blogs

10 White Papers to Round Out 2014

The Seapine View - Tue, 12/16/2014 - 18:43

Our industry experts published several great white papers this year, on a wide range of topics—from reducing risk to managing regulatory compliance to best practices for improving your development process.

Here are 10 of our most popular white papers and guides from 2014.

9 Tips for Writing Useful Requirements
If you work for a company that has explicitly defined standards for writing product requirements, consider yourself lucky. Most organizations don’t have the benefit of documented standards, which may result in poorly written requirements. This guide discusses nine tips for writing better requirements, including why it’s important to know your audience, how much information to include in a requirement, and why you’ll want to conduct a postmortem after each release.

5 Practices for Reducing Requirements Churn
Requirements churn—or changes to a product’s requirements—is inevitable, but there are ways to keep it from becoming excessive. Learn how to reduce unnecessary churn and keep your project on time and on budget with the five practices included in this guide.

Effective Traceability for Embedded Systems Development
For embedded systems developers, documenting and sharing requirements and changes among team members can be complex and costly when traditional, manual methods are used. An integrated product development solution can automate traceability for even complex relationships and artifacts, giving your team the ability to easily link product requirements back to stakeholders’ rationales and forward to corresponding design artifacts, code, and test cases.

Achieving IEC 61508 Compliance with Seapine Software
Safety-critical companies can quickly and cost-effectively prove compliance with the IEC 61508 standard by using an integrated solution to manage product development. Read this guide for a brief overview of IEC 61508 and to learn how to make proving compliance easier, less error prone, and more cost effective by automating the creation, management, maintenance, and documentation of requirements traceability.

Managing ISO 26262 Compliance with Seapine Software
Seapine’s integrated product development management solutions, which include TestTrack and Surround SCM, offer significant productivity and cost benefits for companies seeking to comply with the ISO 26262 standard. Together, TestTrack and Surround SCM make compliance verification easier, less error prone, and more cost effective by automating the creation, management, maintenance, and documentation of requirements traceability. Learn how Seapine Software’s product development solutions can help you prove ISO 26262 compliance.

Reducing Risk With Exploratory Testing
Scripted testing alone often fails to find hidden or divergent risks in a product under development. Exploratory testing, however, can expose these risks because it incorporates human intuition and experience into the testing process. This white paper examines a few ways that exploratory testing can improve your test coverage and help reduce risk.

Risk Management Is Easier Than You Think
Identifying, assessing, and tracking risk is a complex and time-consuming process, and even after all of that effort, many companies fail to sufficiently expose and address serious potential harms. Read this white paper to learn how automated traceability can improve safety while reducing the time and cost of your risk management process.

Managing FMEAs with TestTrack
Failure modes and effects analysis (FMEA) helps companies discover the risks that could occur with a product, including both built-in risks and issues that might arise because of the way users interact with the product. By conducting FMEAs early, companies can make more informed decisions about which risks to mitigate, eliminate, or accept. Learn 7 ways TestTrack improves risk management by making FMEAs more visible.

Six Tips for System Integration Testing
The six best practices outlined in the white paper include tips on improving test data and testing environments. The paper shows how a centralized, integrated test management solution, when combined with an efficient triage process, can help improve visibility and avoid errors during testing. The benefits of automatic reporting are also discussed.

Using Root Cause Analysis and TestTrack for Powerful Defect Prevention
Many product development teams employ root cause analysis (RCA) and root cause corrective action (RCCA) to identify the true origin of defects in their development processes and prevent them from recurring. These processes can be complicated and time-consuming when done manually, however. In this white paper, you will learn how TestTrack makes the RCA process faster and easier by putting vital data just a few clicks away and automating traceability matrices and other key reports.


Categories: Companies

RadView Launches Load Testing Web Dashboard

Software Testing Magazine - Tue, 12/16/2014 - 18:41
RadView Software has launched WebLOAD 10.2, featuring a Web dashboard for improved performance test results analysis. The new Web Dashboard enables analyzing performance test results from any web browser or mobile device. Test engineers and managers can collaborate and share load test results, view real-time results as load tests are running, and easily personalize dashboard views. “With this version we are further extending WebLOAD’s load test capabilities, providing performance engineers with yet more power, flexibility and efficiency,” said Eyal Shalom, RadView CEO. “For enterprise companies focused on large-scale web load testing, ...
Categories: Communities

Microsoft acquires HockeyApp

Software Testing Magazine - Tue, 12/16/2014 - 18:19
Microsoft has acquired HockeyApp, a service for mobile crash analytics and app distribution for developers building apps on iOS, Android and Windows Phone. Microsoft will integrate HockeyApp into the Application Insights service in Visual Studio Online to expand Application Insights support for iOS and Android. Based in Stuttgart, Germany, HockeyApp offers a range of mobile development services enabling developers to develop, distribute and beta test great mobile applications. This includes: * Crash reporting. Fast and precise crash reporting with easy app integration, rich crash analysis and support for connecting ...
Categories: Communities

LDRA Launches New Software Verification for VxWorks 7 Platform

Software Testing Magazine - Tue, 12/16/2014 - 18:12
LDRA has fully integrated the LDRA tool suite with the next generation Wind River VxWorks 7 real-time operating system (RTOS) to achieve full compliance with industry safety- and security-critical standards. The reduced overhead and comprehensive support of the LDRA tool suite for a wide range of target architectures, regardless of their respective footprints, ensures that VxWorks-based systems can be built and verified faster and at lower cost. LDRA brings advanced software testing capabilities to the VxWorks platform. LDRA stands as the sole software verification provider capable of delivering object code verification ...
Categories: Communities

Announcing Test Automation Bazaar Jan 16-17 in Austin

Watir - Web Application Testing in Ruby - Tue, 12/16/2014 - 17:38

Originally posted on Testing with Vision:

I am pleased to announce that the Test Automation Bazaar will be held in Austin, Texas on  January 16-17, 2015 (Fri – Sat). I am convening this event with Zeljko Filipin and the Austin Homebrew Testers, and we are pleased that the event will be sponsored by the Open Information Foundation, a non-profit which we have recently joined and which also sponsors the Citcon conferences. This is a follow up to the 2012 Test Automation Bazaar, also held in Austin. Like all OIF events, this conference will be free and open to the public, but we also will be asking for donations and sponsors to cover the expenses of the event. We are currently confirming a location in the Domain. We invite people to come, share their experiences with test automation and learn from others. The organizers have a bias for Ruby, Webdriver (Watir/Selenium), and open-source tools, but we…


Categories: Open Source

[Part 3] Code, Cars, and Congress: A Time for Cyber Supply Chain Management

Sonatype Blog - Tue, 12/16/2014 - 16:30
  On December 4th, 2014, U.S. Congressional Representatives Ed Royce (R-CA) and Lynn Jenkins (R-KS) introduced H.R. 5793, the “Cyber Supply Chain Management and Transparency Act of 2014.” The legislation will ensure all contractors of software, firmware or products to the federal government...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

YUI Library Support Now Improved by Ranorex

Ranorex - Tue, 12/16/2014 - 13:10
With Ranorex 5.2 the support of the Yahoo! User Interface (YUI) library has been improved. Start testing your web application based on the JS/CSS Framework.




Download latest Ranorex version and start YUI testing  

Upgrade for free with your valid subscription (You'll find a direct download link to the latest version of Ranorex on the Ranorex Studio start page.)
Categories: Companies

EuroSTAR – Star – Alex Schladebeck

The Social Tester - Tue, 12/16/2014 - 13:00

This is a short series leading up to Christmas where I feature a tester that was at EuroSTAR 2014, who I personally believe you, my readers, will benefit from knowing. Next up is Alex Schladebeck. I’ve known Alex for a while now after meeting her at Agile Testing Days about 5 years ago. It’s always … Read More →

The post EuroSTAR – Star – Alex Schladebeck appeared first on The Social Tester.

Categories: Blogs

Webinar Q&A: Orchestrating the CD Process in Jenkins with Workflow

Thank you to everyone who joined us on our webinar, the recording is now available.
And the slides are here.

Below are the questions we received during the webinar Q&A:


Table Of Contents
  • Workflow
      • General Workflow Questions
      • Workflow, SCM and Libraries
      • Workflow Visualization
      • Workflow and Plugin Ecosystem
  • Webinar & Demo Questions
  • Jenkins Dev Questions
  • Generic Jenkins Plugin Ecosystem Questions

Workflow

General Workflow Questions

Q: Where can I find docs on workflow and on samples for complex builds where multiple plugins/ build steps and post build actions are required?
A: See this webinar and the Workflow tutorial.

Q: Where are the docs on the integration of plugins with Jenkins Workflow?
A: See https://github.com/jenkinsci/workflow-plugin/blob/master/README.md

Q: Is the workflow functionality only available in the enterprise version?
A: No, the Jenkins Workflow Engine is part of Jenkins Open Source (see install here). Jenkins Enterprise by CloudBees adds additional workflow features such as the Stage View Visualisation or CheckPoints to resume the workflow from an intermediate point.

Q: Do you offer transition services to help adopt the solution?
A: Please contact sales@cloudbees.com, we will be pleased to introduce you to our service partners.

Q: Do the workflows run on slaves, and across multiple/different slaves for each step?
A: Yes, workflows run on slaves and can span multiple slaves, with each part allocated by a statement such as "node('my-label')".
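For example (the slave labels and commands below are illustrative assumptions), a single flow can place different steps on different slaves:

```groovy
node('linux') {      // runs on any slave carrying the 'linux' label
    sh 'make build'
}
node('windows') {    // a later step can be allocated to a different slave
    bat 'run-tests.bat'
}
```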

Q: *nix only slaves, or Windows/MacOS also?
A: Jenkins workflows can run on any Jenkins slave including *nix, Windows and MacOS.

Q: Does Jenkins have a way to block some processes from executing if the prerequisites have not yet fired?
A: A flow could wait for some conditions, if that is what you are asking. There is also a Request For Enhancement named “Wait-for-condition step”.

Q: We have some scenarios where 9 prerequisites need to happen before 5 other processes can fire off.
A: Parallel step execution could be a solution. Otherwise, there is a Request For Enhancement named "Wait-for-condition step".

Q: How does one implement control points i.e. 'Gating' in Jenkins?
A: The “input” step for human interaction allows you to do it. You can even apply Role Based Access Control to define who can “move” to the next step.
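A minimal gating sketch (the message text, submitter group, and deploy script are hypothetical; restricting approvers is typically backed by Role Based Access Control in Jenkins Enterprise):

```groovy
stage 'production'
// Execution pauses here until an authorized user approves
input message: 'Deploy to production?', submitter: 'release-managers'
node {
    sh './deploy.sh production'   // hypothetical deployment script
}
```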

Q: Can we trigger a workflow build using Gerrit events?
A: Any job trigger is supported for Workflow jobs generally. This one currently has a bug.

Q: Can we restrict also a step to run on a particular slave?
A: Yes, with a statement such as "node('my-label')" or "node('my-node-name')". You can restrict execution to a particular node or to any node matching a particular label.

Q: Does Jenkins support automatic retries on tasks that fail rather than just failing out right?
A: Yes there is a retry step (“retry(int maxRetries)” ).
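For instance (the test script name is an assumption for illustration):

```groovy
node {
    retry(3) {   // re-runs the enclosed block up to 3 times if it fails
        sh './flaky-integration-test.sh'
    }
}
```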

Q: Is it possible to do conditional workflow steps?
A: Yes, absolutely. Jenkins workflow supports standard Groovy conditional expressions such as "if then else" and "switch case default".
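A short sketch of a conditional step (the branch variable and Maven goals are illustrative assumptions):

```groovy
node {
    def branch = 'master'   // in practice this might come from a build parameter
    if (branch == 'master') {
        sh 'mvn deploy'     // publish only from the main branch
    } else {
        sh 'mvn verify'     // other branches just build and test
    }
}
```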

Q: Does it work okay with the folders plugin?
A: Yes, folders can be used.

Q: Is there a way to call a job or a workflow from a workflow? ie. can I call an existing (complex) freestyle job that is taking care of build and call other jobs as part of the workflow?
A: Yes there is a 'build' step for this purpose.

Q: Is there a special syntax for creating a Publisher in workflow groovy? For example to chain workflows.
A: No special syntax for publishers. The 'build' step can be used to chain workflows, or you can use the standard reverse build trigger on the downstream flow.

Q: How do you handle flows where I may have 3 builds to Dev with only 1 of them going to QA?
A: The stage step can take an optional "concurrency: 1" (e.g. "stage 'qa', concurrency: 1") so that only one build at a time passes that point.
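Sketched out (the stage names and commands are illustrative), the concurrency limit lets several builds run through Dev while only one at a time enters QA, with newer builds superseding older waiting ones:

```groovy
stage 'dev'                      // any number of builds may run here
node { sh 'mvn package' }

stage 'qa', concurrency: 1       // at most one build at a time past this point
node { sh './deploy-to-qa.sh' }
```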

Q: Can Jenkins support multiple java and ant versions - we have a need to compile programs with java 1.5, 1.6, and 1.7 simultaneously?
A: Yes, using the tool step you can select a particular tool version.
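A sketch of selecting tool versions (the installation names 'jdk-1.6' and 'ant-1.9' are assumptions; they must match tool installations configured in Jenkins):

```groovy
node {
    def jdkHome = tool 'jdk-1.6'   // returns the home path of the named installation
    def antHome = tool 'ant-1.9'
    sh "JAVA_HOME=${jdkHome} ${antHome}/bin/ant build"
}
```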

Q: And support 32 bit and 64 bit compilations simultaneously?
A: Sure, if you run on appropriate slaves in parallel.

Sample:

parallel(build32bits: {
    node('linux32') { // slave matching label 'linux32'
        // ...
    }
}, build64bits: {
    node('linux64') { // slave matching label 'linux64'
        // ...
    }
})

Q: Is it possible to have some human approval on workflow step? Like creating externally accessible web callback inside workflow, and continuing once this callback was called?
A: Yes, with the "input" step as shown in the webinar.

Q: When you have a build waiting for human input, can you specify which users have permission to continue the build?
A: Yes you can specify a list of approvers. This list of approvers is typically defined with the CloudBees RBAC authorisation (part of Jenkins Enterprise by CloudBees).

Q: In a step can we have a dropdown where they select an option, for example, can we take user input to indicate which feature test environment to deploy to?
A: Yes the input step can accept any parameter just like build parameters, including pulldowns.

Q: What about text input or dropdown in human interaction part? Is that there?
A: Yes you can specify any parameters you like for the input step.

Q: If multiple builds are waiting on the same User Input message (say builds 1, 2 and 3) and the user responds positively to build 3, do builds 1 and 2 continue waiting or do they automatically abort?
A: They would continue waiting, though there are ways that a newer build can abort an earlier build, mainly by using the stage step.

Q: Is this workflow plugin available for current linux LTS release?
A: Yes, available for 1.580.1, the current LTS.

Q: One of the concerns I had was with troubleshooting efforts. I was curious to how that is handled in terms of documentation or support to resolve issues related to Jenkins?
A: There is a publicly available tutorial. If you are a CloudBees customer we offer support, and other providers may be available as well.

Q: Is there documentation for that right now or does CloudBees support troubleshooting efforts particular to Jenkins, that a developer and/or OPS representative might not be familiar with?
A: CloudBees offers support for any Jenkins operational issues.

Q: Does this plugin have access to Jenkins project model? I mean can it be used as a replacement for Jenkins script console?
A: It does have access to the Jenkins project model, yes, so you could use it for that purpose, though it is not intended to replace (say) Scriptler.

Q: I may have missed this being answered but, is that catchError on the card able to parse the log or does it just look for an exit code?
A: Just checks the exit code. There is a known RFE to capture shell command output and let you inspect its contents in Groovy as you wish.

Q: It appears that some of this feature set is in open-source Jenkins, and some in Cloudbees Enterprise. Is there a clear feature matrix that details these differences?
A: The stage view, and checkpoints, are currently the Enterprise additions. All else is OSS.

Q: Is the DSL extensible and available OSS?
A: All steps are added by plugins, so yes it is certainly extensible (and yes the DSL is OSS).

Q: Is it a full fledged Groovy interpreter, e.g., can I @Grab some modules?
A: @Grab grapes is not yet supported, though it has been proposed. But yes it is a full Groovy interpreter.

Q: Is it possible to install an app to the /usr directory on the Jenkins in the cloud master or slave?
A: There is not currently any special step for app deployment but I expect to see some soon. In the meantime you would use a shell/batch script step.

Q: What mechanism does archive/unarchive use? Do you define your own revision system for it?
A: No this just uses the artifact system already in place in Jenkins.

Q: If I cannot @Grab is there any other possibility to extend the plugin? Can I access APIs of other plugins?
A: Yes you can access the API of other plugins directly from the script (subject to security approval); and you can add other steps from plugins.

Q: How does the workflow plugin interact with multi master systems?
A: There is no current integration with Jenkins Operations Center.

Q: How do you manage security access to trigger jobs or certain steps? (Integration LDAP and so on)
A: Controlling trigger permission to jobs is a general Jenkins feature.

Q: Does Jenkins workflow support parallelization of steps?
A: Yes, using the “parallel” step.

Q: Is there a way to promote jobs or manually trigger a job after job completion? I saw the wait for input but it looked the job was in a running state for that to work
A: The preferred way is to wait for some further condition. The build consumes no executor while waiting (if you are outside any node step).

Q: Can we run arbitrary Groovy code similar to groovy build steps from within the workflow?
A: Yes you can run arbitrary Groovy code, though Workflow is not optimized for this.

Q: We use tests that based on failures invoke more detailed tests and capture more detailed logs...and could jump out (maybe?) out of the Workflow context...
A: Your script could inspect partial test results and decide what to do next on that basis.

Q: When I trigger a Workflow from a normal "Freestyle project" an error occurs: "Web-Workflow is not buildable."
A: There is a known bug in the Parameterized Trigger plugin in this respect.

Q: If the Jenkins master is rebooted for some reason midway through workflow / job build, does the last good state stay in cache and restart automatically once Jenkins is back online? Or does last job require manually invoking the last build step?
A: The workflow resumes automatically once Jenkins comes back online.

Q: Is it possible to reuse the same workspace between builds, e.g. for incremental builds?
A: Yes the workspace is reused by default, just as for Jenkins freestyle projects. If you run node {...} in your flow, and it grabs a workspace on a slave in build #1, by default build #2 will run in the same directory. However, workspaces are not shared between different workflows / jobs.
Workflow, SCM and Libraries


Q: Would it be possible to pull the workflow configuration script from SCM?
A: Yes, you can store the workflow definition in an SCM and use the 'load' step.

Q: Can Jenkins directly access the SCM system for the workflow.groovy script?
A: Sort of. You can either check out and load() from workspace, or you can store script libraries in Jenkins master. Directly loading the whole flow from SCM is a known RFE.
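A sketch of the check-out-then-load pattern (the repository URL and script path are hypothetical; this assumes helpers.groovy ends with "return this" so its functions can be called from the flow):

```groovy
node {
    git url: 'https://example.com/flow-scripts.git'  // hypothetical repo
    def helpers = load 'jenkins/helpers.groovy'      // evaluate a script from the workspace
    helpers.runBuild()                               // call a function it defines
}
```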

Q: How do we reuse portions of scripts between different pipelines?
A: The Groovy language helps you extract functions and data structures into libraries that can be stored in an SCM and loaded into workflows with the 'load' step. There are other options as well, including the Templates integration in Jenkins Enterprise.

Q: Are all of the Groovy functions generic, or are these pre-defined functions?
A: Workflows are written in standard Groovy, so you can use standard programming constructs such as functions, classes, variables, and control structures (for loops, if/else, switch, and so on).

The domain-specific part of the workflow syntax comes from the Jenkins workflow engine (e.g. "parallel", "node") and from plugin steps (e.g. "git", "tool").

In addition, you can write custom workflow libraries as Groovy scripts, and we can expect to see people sharing libraries of workflow scripts.


Q: If I am satisfied with a particular build, can I have optional steps in the workflow to, e.g., add an SCM tag to the respective source, or stage the artifacts to a different (higher level) artifact repository?
A: Yes, you can use simple if-then blocks, etc.

Q: So if I break workflow script into multiple part I have to use 'load' to compose them into the workflow?
A: Yes, or there is already support for loading Groovy classes by name rather than from the workspace. Other options in this space may be added in the future.

Q: How can I reuse similar functions in many workflows? For example right now we are using Build Flow plugin and we have on unittest job and we trigger it by many different workflows with different parameters.
A: You can use the step “load()” to share a common script from each flow; or store Groovy class libraries on Jenkins; or use Templates integration in Jenkins Enterprise.

Q: If we want to automatically start workflow build after someone pushes to a Git repo, can we set up that inside workflow definition?
A: Yes, just use the git step to check out your flow the first time, and configure the SCM trigger in your job.
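For example (the repository URL and build command are illustrative):

```groovy
node {
    // Checking out with the git step also lets the job's SCM trigger
    // (polling or push notification) start future builds of this flow.
    git url: 'https://example.com/app.git'
    sh 'mvn -B verify'
}
```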


Workflow Visualization

Q: Any plans for a combined visualization of multiple, related pipelines (for example, the build pipelines for an applications UI WAR and Web service WARs)?
A: CloudBees is working on a “release centric visualisation”, your idea could fit in it. We don’t have any ETA for the release-centric view.

Q: Is that "workflow view" only available on the main job page? I want to know if we could see multiple applications' workflows in one place, like the Continuous Delivery Pipeline plugin.
A: Currently only on the job main page, though we are considering other options too.

Q: Is there any other visualizations possible, other than the tomcat based one? Like build-graph or build-flow?
A: The Jenkins Build Graph View Plugin and other pre-existing job flow views do not let you visualize the internals of a workflow execution.

Q: So the nice UI is available only in Jenkins Enterprise, right?
A: Yes the table view of stages is available only in Jenkins Enterprise by CloudBees.



Workflow and Plugin Ecosystem

Q: Can existing build step plugins be accessed from the Groovy script - for example, the Copy Artifact plugin?
A: Some can, though minor changes need to be made to the plugin to support it. In this case, see JENKINS-24887.

Q: How do we figure out the generic step interface for the existing plugins?
A: The plugin must implement the SimpleBuildStep interface.

Q: As far as I saw, it's more for Java development. How good Workflow will do for Windows development environment with Visual Studio and TFS source control environment?
A: Windows development is already well supported by Jenkins through various plugins, including a Jenkins TFS plugin and a Jenkins MSBuild plugin. Jenkins workflow supports executing Windows batch scripts, just as the freestyle "Windows Batch script" build step does.

Q: Is it possible to integrate Jenkins with MSBuild from Microsoft to build .net applications?
A: The msbuild step probably does not yet support the newer APIs needed for Workflow integration but this is likely possible to add. In the meantime you can use the bat step to run msbuild.exe.

Q: Is it possible to invoke HTTP/S API through the workflow?
A: There is no specific step for this yet, but one could be created. In this demo, we invoke "curl" in an "sh" step.

Q: Does it support the job history plugin?
A: Yes, the Jenkins Job Configuration History plugin applied to a workflow job will track the history of the workflow definition.

Q: Can the workflow perform JIRA JQL workflow changes (i.e. use functions from the JIRA plugin) and update relevant JIRA tickets?
A: There is probably no support from the JIRA plugin for the moment but it is probably not hard to add. In the meantime, raw http calls with “curl” may be a solution.

Q: I'm interested in how JIRA is integrated with workflow. Do you have any info?
A: I am not aware of any particular integration yet. This would just be an RFE for the JIRA plugin, either to integrate with the SimpleBuildStep API in 1.580.x, or to add a dedicated Workflow step if that is more pleasant.

Q: I saw a question about Jenkins workflow integration with JIRA, having Jenkins update JIRA tix, etc. Is the opposite possible - can you incorporate JIRA workflows into Jenkins workflow? like abort deployment if ticket has not passed code rev. in JIRA workflow?
A: This would also need to be a Workflow step defined in the JIRA plugin to query the status of a ticket, etc. In the meantime you could access the JIRA remote API using a command-line client, perhaps.

Q: Does Jenkins have a way to start a job via email notification?
A: Not sure if such a plugin exists, but if it does, it should work the same with workflows. Otherwise, you can invoke a standard shell “mail” command on a linux slave.

Q: “mvn” is always started using “sh” in the sample scripts, so are the scripts always OS-dependent?
A: Certainly the sh step is (you would use bat on Windows). In the future we expect to have a neutral Maven step.

Q: Are there any additional DSLs available/planned?
A: Additional DSLs, or additional steps? There is no plan for any other DSL other than the Groovy one, but the architecture supports the possibility.

Q: Are we able to add JARs to the workflow Groovy scripts classpath without having to restart Jenkins?
A: You may not add precompiled JARs to the classpath currently. There are security implications to this, and also any function called in a JAR could not survive Jenkins restart. There may in the future be a way of using common libraries like this.

Q: But I am the Jenkins user, not root, it does not give me access to copy files to /usr.
A: Well somehow there must be a privileged script (setuid?) to deploy things; out of scope for Jenkins.

Q: How do you publish to Artifactory?
A: A step to publish to Artifactory could be added, or you can simply use any other external command which accomplishes this, such as sh 'mvn deploy'.

Q: Any potential gotchas/problems with Workflow and Ruby/Cucumber...?
A: Not that I know of. What kind of problems do you foresee?

Q: Do you have support for Docker containers? What about LBs? Say I have a service deployed to service machines with an LB in front of it. Is this nothing more than using shell constructs in the workflow, where I “sh” to the LB’s CLI? I want to be able to install to new boxes and test on the new boxes (which I can do), but then I want to put those boxes into an LB pool and take the old ones out.
A: Jenkins supports integration with Docker to do the following:

  • Slave provisioning
  • Building and deploying Docker artifacts

You may have to look at application deployment solutions for your orchestration needs.

Webinar & Demo Questions

Q: What is deploy staging?
A: In this demo, the “staging environment” is the environment that mimics the production environment; it is often called a “pre-production environment.”

Q: Can you provide the Workflow Groovy script, other example and sites for review and learning?
A: See:
Q: Can you add to the demo project a deployment step using Puppet? How do you capture the success/fail status of a deployment via Puppet?
A: Please look at this presentation of Jenkins Workflow combined with Puppet:
Slides: http://www.slideshare.net/cloudbees/devopscomcloudbeeswebinar

Workflow script:
https://github.com/cyrille-leclerc/spring-petclinic/blob/master/src/main/jenkins/workflow-with-puppet.groovy


Jenkins Dev Questions

Q: Is there currently any (or are there plans) for a testing harness so that the Groovy Workflow scripts can be evaluated for correctness?
A: No current plans for a workflow testing harness, beyond what is available for Jenkins generally such as the acceptance testing harness.

Q: Is there some metamodel for Workflow script? That you can walk through it programmatically?
A: There is an API for accessing the graph of steps run during a flow, if that is what you are asking for.

Q: For plugins developers, should one develop a DSL for a plugin?
A: You just need to implement the Step extension point, defined in the workflow-step-api plugin.


Generic Jenkins Plugin Ecosystem Questions

Q: Is support only for Chef / Puppet? Is there support (planned) for SaltStack, Rundeck and similar tooling?
A: There are more than 1,000 open source plugins for Jenkins, including plugins for Chef, Puppet, Rundeck and SaltStack. Please note that the tracking of artifacts is not yet implemented in the Jenkins Rundeck plugin and the Jenkins SaltStack plugin.

Q: Does Jenkins integrate with OpenStack?
A: Yes, the Jenkins JClouds plugin allows you to provision slaves “on demand”. We are not aware of plugins to ease the scripting of OpenStack or the packaging of OpenStack artifacts from Jenkins (e.g. automatic install of the OpenStack CLI on Jenkins slaves …)

Q: Does Jenkins have built-in database? If not, does Jenkins have DB plugins?
A: There is no built-in database. There may be plugins to work with your existing database.

Q: Can I get the Jenkins workflow to integrate with Heat orchestration on OpenStack?
A: We are not aware of such an integration at the moment.





Steven Harris is senior vice president of products at CloudBees.
Follow Steve on Twitter.





Jesse Glick is a developer for CloudBees and is based in Boston. He works with Jenkins every single day. Read more about Jesse in his Meet the Bees blog post.



Cyrille Le Clerc is an elite architect at CloudBees, with more than 12 years of experience in Java technologies. He came to CloudBees from Xebia, where he was CTO and architect. Cyrille was an early adopter of the “You Build It, You Run It” model that he put in place for a number of high-volume websites. He naturally embraced the DevOps culture, as well as cloud computing. He has implemented both for his customers. Cyrille is very active in the Java community as the creator of the embedded-jmxtrans open source project and as a speaker at conferences.
Categories: Companies

Is Scripted Testing Just for the Newbie Tester?

uTest - Mon, 12/15/2014 - 21:51

Scripted testing naturally seems like it’s a match made in heaven just for the novice tester.

After all, you have steps and directions clearly defined — wouldn’t the inviting structure of scripted testing compensate for a lack of experience on the part of the tester? Not necessarily, if you ask our uTesters, who recently approached the topic in a lively Analyze This testing debate in our uTest Forums.

Most of our community members found that while experienced testers may be spending their time creating test cases and junior testers executing them, there were several notable reasons as to why executing these important steps can’t just be left exclusively to the novice.

Junior Testers Can’t (Yet) Add Value

There may be improvements that need or could be made to streamline the test case process, according to one tester, that the novice couldn’t even yet imagine:

Given sufficient time, it is desirable for senior testers to execute scripted tests to 1) identify test cases which are already obsolete (have them deleted) or are no longer effective (rewrite or reorganize some test cases so they will be tested differently from that point on) and 2) catch new test cases which were not yet scripted.

Their Abilities are a Wildcard

Forrest Gump was all about the philosophy that ‘life is like a box of chocolates. You never know what you’re going to get.’ Color the novice tester as one of those chocolates:

In the end, these are brand new testers… what are they missing? You just really don’t know. Maybe not because they are junior, but because you just don’t know their ability yet. Or their reliability, either. It is always good to have that more experienced eye to give your scripted testing a run to make sure that you have the coverage you need.

It Involves Going Above and Beyond the Test Case

Does the junior tester have the ability to go above and beyond the written test case and cover things outside of the explicitly written instructions? If not, they may not be doing their job as a tester:

If they are really new to the field of testing, then whatever the script, even if well-written and detailed, will be of no help to stakeholders if the tester cannot think and test outside of the steps. It’s not all about what is on the script that matters.

What do you think? Should the work of the scripted test case fall into the hands of the junior tester exclusively? We’d like to hear from testers outside the realm of the Forums in the comments below.

Categories: Companies

.NET Developers Who Keep Learning

NCover - Code Coverage for .NET Developers - Mon, 12/15/2014 - 19:20

We sometimes forget that IT as we know it changes dramatically every few months. We wanted to pause and give a special shout-out to these two .NET veterans who have been in the field with a combined 50 years of experience. They have been keeping up and changing and leading worldwide companies on their .NET efforts. Keep learning and coding!

Tom Cabanski

Tom is a software developer and entrepreneur with a passion for building quality software. How can you go wrong with that as a description? He is a professional agile developer, leader and entrepreneur with a successful 30-year track record of using technology to enhance business success. Tom has worked across a variety of industries including telecommunications, oil and gas, finance, aerospace, heavy manufacturing, commercial printing, distribution, e-commerce, mortgage and retail in technology development, consulting and IT management roles. Whew! Over the last ten years he has been a hands-on leader in software product and solution development of hosted web applications on the .NET platform using agile methods including Scrum and XP. Keep up with what he is up to through his blog http://tom.cabanski.com/ or on twitter @tcabanski.

 

André Kraemer

André is a self-employed software developer, instructor and consultant. He’s been in the IT business since 1997 and spent most of those years working with Microsoft technologies. He started with a focus on commercial client / server applications for well-known companies and collaboration solutions based on Microsoft Outlook and Exchange. He then took an interest in software solutions for productivity management in global operating software companies. There he worked as a software developer, and later as Chief Software Architect and Developer. His current main interests are Windows Store Apps, ASP.NET MVC, JavaScript and SharePoint. Stay tuned with his current discoveries on twitter @codemurai or check out his blog.

The post .NET Developers Who Keep Learning appeared first on NCover.

Categories: Companies

TestTrack 2015 Launched by Seapine

Software Testing Magazine - Mon, 12/15/2014 - 18:53
Seapine Software has launched TestTrack 2015, the latest release of its core product development management solution. Interactive task boards, which enable real-time project visibility, take center stage in this release. TestTrack 2015 features task boards to help development teams communicate and measure progress during a sprint, release, or other milestone. With task boards, teams can:

  • Organize and visualize work with cards, columns, and swimlanes
  • Plan and collaborate during stand-ups, retrospectives, issue triage, and other team meetings
  • Gain real-time visibility into work at the project, sprint, and user level

Additional updates in TestTrack ...
Categories: Communities

Application Security Testing Gets Tasty With Sauce Labs And NT OBJECTives

Sauce Labs - Mon, 12/15/2014 - 18:00

Finally, a win-win-win for development, QA, and security! If your development team is looking for ways to incorporate security earlier that are simple, easy, and understandable for your team, we may have a solution for you. Security defects are like any other defect: finding them early saves money and time. There are tools that execute security tests for security professionals, like NT OBJECTives’ NTOSpider. NTOSpider can use the application knowledge defined in Selenium scripts to execute a better, more comprehensive security test on an application.

The problem has always been that developers and testers know the application, and security teams know security. It’s been hard for the two teams to collaborate to build security earlier into the development lifecycle. This solution combines the development team’s knowledge about the application, captured in the Selenium scripts, with the security team’s expertise, built into its security tests.

It has long been known that fixing defects earlier in the software development lifecycle is less expensive and easier than fixing them later. The same is true for security defects – it is easier and less expensive to fix them when they are found earlier, before they are replicated across the application. To that end, integrating security testing earlier into the lifecycle makes perfect sense.

So, why wait for the security team to find defects toward the end of development when you can build it into your process (especially your CI process) so it’s automatic and early? It will make your life easier. Security defects will be reported alongside all of your other Selenium/Sauce Labs defects. With this integration, you can incorporate a security test with very little additional work.

Now development and security can form an effective partnership, with development creating test scripts to make sure the application works and security teams adding in the security auditing. Encourage your security team to combine your team’s Selenium scripts with their security tests!

How NTOBJECTives’ NTOSpider Works with Sauce

Development & QA teams typically record a series of Selenium scripts to test specific application functionality (e.g. create an account, select a product, purchase your items). The aggregation of these scripts guarantees that the application is tested in its entirety. Our partnership allows security and QA groups to leverage these scripts to test the applications for security vulnerabilities.

NTOSpider integrates with both the cloud version of Selenium that Sauce Labs offers as well as local installations of Selenium. [More on how NTOSpider works with Selenium in another blog.]

All an enterprise has to do is configure the addition of the Selenium script into NTOSpider, NTO’s automated vulnerability assessment tool, and start a scan.

NT OBJECTives

NT OBJECTives offers an array of scalable web application security services and solutions designed to meet the unique needs of our clients. These days, finding an accurate, comprehensive web application security scanner is difficult, as many scanning solutions are only capable of scanning HTML – leaving you with less coverage and less accurate results.

However, NTO’s fully automated NTOSpider dynamic application security scanner does what many scanning solutions do not: we interpret and attack today’s modern applications built with rich clients, mobile clients and web services (using technologies like REST, AJAX, JSON and GWT), providing full coverage of your mobile and web applications, because we understand that coverage is the first step of accuracy. We also offer the same extensive scanning solution, NTOSpider On-Demand, in one convenient, easy-to-use SaaS/cloud offering that can be leveraged without purchasing or installing scanning software.

What does this mean?

The NTOSpider and Selenium integration enables you to automatically detect security defects earlier in the software development lifecycle, such as during the nightly build process.

The benefits of leveraging the combined solution are:

  • Find security defects early – Build security testing processes early into the lifecycle to find security defects early and save money.

  • Streamline defect reporting – Report security defects like any other defects reported in Selenium.

  • Integrate with CI – Many development teams are using Continuous Integration solutions (such as Hudson, Jenkins or home-grown solutions) to streamline testing and speed time to market. Developers, testing teams and security teams are looking for ways to plug their work into the CI pipeline so that all relevant testing processes are automated. With Sauce Labs and NT OBJECTives, developers, testers and security experts can automatically integrate re-usable, pre-defined tests into nightly builds.

  • Speed up development – By adding NTOSpider security testing into your SauceLabs Selenium testing, you can speed development by avoiding late stage discoveries of security defects.

  • Make security testing easy – This combined solution is designed to enable you to execute repeatable, comprehensive tests automatically. It’s designed to make life easier for development teams.

  • Streamline reporting – Security and functional testing use the same Selenium scripts so that all defects are reported in the same way.

  • Mobile testing supported – Both NT OBJECTives and Sauce Labs are committed to supporting the technologies used in today’s applications, including mobile applications. Both NTO and Sauce Labs support testing your mobile applications.

Combine Sauce Labs and NTO – so simple, yet it makes so much sense!

More information

More about Sauce Labs

Categories: Companies

CITCON Asia, Hong Kong, February 6-7 2015

Software Testing Magazine - Mon, 12/15/2014 - 16:27
CITCON (Continuous Integration and Testing Conference) is a series of conferences that brings together software developers and software testers to discuss continuous integration and software testing. CITCON Asia takes place this year in Hong Kong. The agenda of CITCON is based on the Open Space principle, which lets participants build their own conference program according to their needs. Topics discussed include Test-Driven Development (TDD), Continuous Deployment, code metrics and post-release monitoring. Attendees include developer-testers, tester-developers, devops and other people looking for cross-functional solutions to make software better. Web site: http://citconf.com/hongkong2015/ Location for 2014 ...
Categories: Communities

EuroSTAR – Star – Amy Phillips

The Social Tester - Mon, 12/15/2014 - 13:00

This is a short series leading up to Christmas where I feature a tester that was at EuroSTAR 2014, who I personally believe you, my readers, will benefit from knowing. Next up is Amy Phillips. I didn’t get a chance to see Amy’s talk at this years conference but I did see her at last … Read More →

The post EuroSTAR – Star – Amy Phillips appeared first on The Social Tester.

Categories: Blogs

Agile and Medical Device Development: Is It a Good Fit?

The Seapine View - Mon, 12/15/2014 - 12:00

As part of the 2014 State of Medical Device Survey, we asked respondents if their product development team was considering or already using Agile. The results were surprising; almost half are either successfully using Agile or are planning to adopt Agile practices within the next 12 months. Another third said they are working to understand how or if Agile can help their development efforts.

using-agile 600

Why Agile?

Why are so many medical device companies becoming more agile? Agile development methodologies improve the economics of product development by reducing costly and unnecessary project overhead. A major advantage of Agile is the reduction in the wasted time and effort that Waterfall developers spend designing or documenting functionality that is never implemented or that changes before implementation.

Agile has been widely adopted by software development teams to improve the quality of software, rapidly develop and deliver working software, and minimize risk by incrementally developing requirements as they evolve. In an Agile process, verification and validation is performed after each sprint, rather than at the end of the development cycle.

agile vs waterfall

Why Now?

Although Agile has been popular for years, it only recently caught fire in the medical device industry. Part of the reason for this may be that the FDA embraced Agile in January 2013, adding AAMI TIR45:2012, “Guidance on the use of Agile practices in the development of medical device software,” to its list of approved standards.

Another reason could be that it took this long for Agile buy-in to trickle up from the developer level to management. There is always a resistance to trying new things when “we’ve always done it this way,” and this is especially true in compliance-heavy industries like medical device development.

Agile Doesn’t Mean Purely Agile

Pure Agile teams are the unicorns of the software development world; rumored to be beautiful, but they probably don’t exist. Among Seapine’s medical device customers, most who have adopted Agile are using a hybrid approach. In other words, the software teams use agile practices to iterate quickly and respond to change, but they’re also maintaining all the traceability and documentation that’s required to get the product through regulatory approvals. It’s a little bit Agile, a little bit Waterfall.

Because of its emphasis on working software and incremental development cycles, which reduce requirements and software changes, Agile can be an excellent way for medical device companies to reduce risk while bringing high-quality products to market faster than their competitors. While the reduced amount of documentation may seem problematic from a traceability standpoint, integrated software tools can make up for this by automatically tracking changes, linking artifacts, and generating the reports needed to meet the requirements of the FDA and other regulators.

We explore this topic in depth in our free white paper, Agile in FDA-Regulated Environments. And to learn more about this year’s survey results, download the 2014 State of Medical Device Development report.


Categories: Companies

Agile Retrospective – the Gift that keeps on Giving

As we approach the holiday season and the new year, it may be a good time to celebrate our past success, understand where we may need to improve, and commit to where we want to go in the future.  Applying a “Year-in-Review Retrospective” can be a good way to do this.  This type of Retrospective asks you to reflect on the year (e.g., 2014), embrace what you are thankful for, and help you build meaningful and realistic resolutions for the new year. 
Borrowing a page from an Agile retrospective, here is a personal way to conduct a "Year-in-Review Retrospective".  It can be done individually, with your family, or any size cohort of people.   The 12th Agile Principle asks us “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”  This can be adapted to read “At regular intervals, we celebrate our successes, reflect on areas for improvement, and commit to resolutions to improve. “
Let’s begin by establishing a framework for applying the retrospective.  Let’s call this the Retrospective Starter Kit.   We start with creating a reflection (or retrospective) board.  This board should have a place to indicate “What I am Thankful for” (or “What we are Thankful for”), “What can be Improved”, and “Actions for Improvement”. 
Start with reflecting on what went well over the past year.  Document the successes and events and then put them in order of significance or importance.  Then take a moment and celebrate what went well (maybe over a drink, with some music in the background).
Next, reflect on what was problematic over the past year.  Write each statement as a problem.  Avoid jumping to solutions (that will come next).  Once all problems are listed, prioritize which problems may be the most significant or challenging. 
Lastly, take the top 2 or 3 problems and consider actions for improvement.  You want to identify the root cause for each problem so that you establish a solution that addresses the cause behind the problem.  For example, if the problem is that “I am out of shape”, suggesting a solution of “I need to work out” may not be the only action.  Instead, applying root cause techniques may uncover that you need to create time in your schedule first.  Otherwise simply saying that you should “work out” will be unrealistic.  Once you have identified actions, then you must commit to your resolutions (or the actions that you identified).
Over time, if you find the retrospective valuable, you may want to conduct this practice more frequently, possibly every month.  Within an Agile context, this occurs at the end of each iteration of work, which could be weekly, bi-weekly, every three weeks, or every four weeks.
Finally, in the spirit of giving, may I gift you with the following items:
  • Agile Adoption Roadmap – this blog offers many helpful ways to implement and practice Agile to achieve an Agile mindset and receive the business benefits of being Agile. 
  • Being Agile: Your Roadmap to a Successful Adoption of Agile – in 24 concise chapters, this book focuses on the business benefits of Agile and then introduces you to the Ready, Implement, Coach, and Hone (RICH) deployment model as a pathway to help you in your Agile transformation.     
Happy Holidays, and Celebrate, Reflect, and Improve!
Categories: Blogs

A systematic approach to finding performance regressions using overweight analysis

Rico Mariani's Performance Tidbits - Sun, 12/14/2014 - 04:14

I have been using this approach to do systematic analysis of performance regressions for several years now. I came up with it while looking at some tricky problems in Internet Explorer about three years ago and it’s served me well since then. The idea is a pretty simple one but it gives surprisingly good results in many cases.

I’ll be giving examples that talk about CPU as the metric but of course the same procedure works for any metric for which you can compute inclusive costs.

Nomenclature: Inclusive cost (e.g. time) is the cost in a method and everything it calls. Exclusive cost (e.g. time) is the cost from only the method itself, not counting anything it calls. Both are interesting but this technique really relies on inclusive cost.

Now the usual situation: You have some test that used to take say 50ms and now it takes 55ms. That’s a 10% growth. You want to know where to start looking and you’re fortunate enough to have a summary of costs from before and after. But there could be thousands of symbols and the costs could be spread all over the place. Also some symbols might have been renamed or other such inconvenient things. You could try staring at the traces in call-tree outlining but that gets very tedious especially if the call stacks are 50 levels deep or so. It’s when things get big and messy that having an analysis you can automate is helpful. So here’s how I do it.

First I consider only symbols that appear in both traces, that’s not everything but it’s a lot and is typically enough to give you a solid hint. For each symbol I know the inclusive cost in the base case and test case, from this I can compute the delta easily enough to tell me how much it grew. Now the magic. Since I know how much the overall scenario regressed (10% in this example) I can easily compute how much any particular symbol should have gotten slower if we take as our null hypothesis that “bah, it’s all just evenly slower because it sucks to be me”, so we compute that number. A symbol that had a previous cost of 10 in my example here should have a growth of 10%, or a delta of 1. We compute the ratio of the actual delta to the expected delta and that is called the “overweight percentage”, and then we sort on that. And then stuff starts popping out like magic.

I’ll have more examples shortly but let’s do a very easy one so you can see what’s going on. Suppose main calls f and g and does nothing else. Each takes 50ms for a total of 100ms. Now suppose f gets slower, to 60ms. The total is now 110, or 10% worse. How is this algorithm going to help? Well let’s look at the overweights. Of course main is 100 going to 110, or 10%; it’s all of it, so the expected growth is 10 and the actual is 10. Overweight 100%. Nothing to see there. Now let’s look at g, it was 50, stayed at 50. But it was “supposed” to go to 55. Overweight 0/5 or 0%. And finally, our big winner, f, it went from 50 to 60, gain of 10. At 10% growth it should have gained 5. Overweight 10/5 or 200%. It’s very clear where the problem is! But actually it gets even better. Suppose that f actually had two children x and y. Each used to take 25ms but now x slowed down to 35ms. With no gain attributable to y, the overweight for y will be 0%, just like g was. But if we look at x we will find that it went from 25 to 35, a gain of 10, and it was supposed to grow by merely 2.5, so its overweight is 10/2.5 or 400%. At this point the pattern should be clear:

The overweight number keeps going up as you get closer to the root of the subtree which is the source of the problem. Everything below that will tend to have the same overweight. For instance if the problem is that x is being called one more time by f you’d find that x and all its children have the same overweight number.

This brings us to the second part of the technique. You want to pick a symbol that has a big overweight but is also responsible for a largish fraction of the regression. So we compute its growth and divide by the total regression cost to get the responsibility percentage. This is important because sometimes you get leaf functions that had 2 samples and grew to 3 just because of sampling error. Those could look like enormous overweights, so you have to concentrate on methods that have a reasonable responsibility percentage and also a big overweight.
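The arithmetic above can be sketched in a few lines. This is my own illustration, not code from any shipping profiler; it assumes you already have maps of inclusive cost per symbol for the base and test runs:

```python
def overweight_report(base, test, root):
    """Rank symbols by overweight %, given inclusive costs per symbol
    for a base run and a test run, plus the name of the root symbol."""
    total_delta = test[root] - base[root]
    growth = total_delta / base[root]            # e.g. 0.10 for a 10% regression
    rows = []
    for sym in base.keys() & test.keys():        # only symbols in both traces
        delta = test[sym] - base[sym]
        expected = base[sym] * growth            # null hypothesis: uniform slowdown
        overweight = 100.0 * delta / expected if expected else 0.0
        responsibility = 100.0 * delta / total_delta if total_delta else 0.0
        rows.append((sym, delta, responsibility, overweight))
    rows.sort(key=lambda r: r[3], reverse=True)  # biggest overweight first
    return rows

# The main/f/g example from the text: f slows from 50ms to 60ms.
for sym, delta, resp, ow in overweight_report(
        {"main": 100, "f": 50, "g": 50},
        {"main": 110, "f": 60, "g": 50}, "main"):
    print(f"{sym}: delta={delta} responsibility={resp:.0f}% overweight={ow:.0f}%")
# f sorts first with overweight 200%, main is 100%, g is 0%
```

Extending the base/test maps with f’s children x and y reproduces the 400% overweight for x described above.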

Below are some examples as well as the sample program I used to create them and some inline analysis.
 
Example 1, baseline

The sample program uses a simulated set of call-stacks and costs for its input.  Each line represents a call chain and the time in that call chain. So for instance the first line means 5 units in main. The second line means 5 units in f when called by main. Together those would make 10 units of inclusive cost for main and 5 for f. The next line is 5 units in j when called by f when called by main. Main's total goes up to 15 inclusive, f goes to 10, and j begins at 5. This particular example is designed to spread some load all over the tree so that I can illustrate variations from it.

main, 5
main/f, 5
main/f/j, 5
main/f/j/x, 5
main/f/j/y, 5
main/f/j/z, 5
main/f/k, 5
main/f/k/x, 5
main/f/l, 5
main/f/l/x, 5
main/g/j, 5
main/g/j/x, 5
main/g/j/y, 5
main/g/j/z, 5
main/g/k, 5
main/g/k/x, 5
main/g/k/y, 5
main/g/k/z, 5
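As a quick sketch of how such call-chain lines roll up into per-symbol inclusive costs (written in Python rather than the sample program's own harness, with a helper name of my choosing):

```python
from collections import defaultdict

def inclusive_costs(lines):
    """Credit each chain's cost to every frame on the stack,
    yielding the inclusive cost per symbol."""
    costs = defaultdict(int)
    for line in lines:
        chain, cost = line.rsplit(",", 1)
        for frame in chain.strip().split("/"):
            costs[frame] += int(cost)
    return dict(costs)

# The first three lines of the walkthrough: main ends up at 15 inclusive,
# f at 10, and j at 5, just as described in the text.
print(inclusive_costs(["main, 5", "main/f, 5", "main/f/j, 5"]))
```

Fed the full baseline above, this yields main at 90 inclusive, matching the first summary report.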

Example 2, in which k costs more when called by f

This one line is changed. Other appearances of k are not affected, just the one place.

main/f/k, 10

Example 3, in which x always costs a little more

All the lines that end in x became 6 instead of 5. Like this:

main/f/j/x, 6
 
Example 4, in which f calls j more so that subtree gains cost

All the lines under f/j got one bigger like so:

main/f/j, 6
main/f/j/x, 6
main/f/j/y, 6
main/f/j/z, 6
 
And finally example 5, in which x gets faster but k gets a lot slower

All the x lines get a little better:

main/f/j/x, 4

But the k line got worse in two places

main/f/k, 15
main/g/k, 15

Let's see how we do with automated analysis of those things:

Summary of Inclusive times for example 1, baseline

Symbol   Inclusive Cost   Exclusive Cost
main     90               5
f        45               5
g        40               0
j        40               10
k        30               10
x        25               25
y        15               15
z        15               15
l        10               5


This gives us the baseline of 90 units for main and you can see how all the "5" costs spread throughout the tree.

Summary of Inclusive times for example 2, in which k costs more when called by f

Symbol   Inclusive Cost   Exclusive Cost
main     95               5
f        50               5
g        40               0
j        40               10
k        35               15
x        25               25
y        15               15
z        15               15
l        10               5

You can see that k has gone up a bit here but not much.  A straight diff would show you that.  However there's more to see.  Let's look at the first overweight report.

Overweight Report

Before: example 1, baseline
After: example 2, in which k costs more when called by f

Before Time: 90
After Time: 95
Overall Delta: 5

Analysis:

Name   Base Cost   Test Cost   Delta   Responsibility %   Overweight %
k      30.0        35.0        5.0     100.00             300.00
f      45.0        50.0        5.0     100.00             200.00
main   90.0        95.0        5.0     100.00             100.00
j      40.0        40.0        0.0     0.00               0.00
x      25.0        25.0        0.0     0.00               0.00
y      15.0        15.0        0.0     0.00               0.00
z      15.0        15.0        0.0     0.00               0.00
l      10.0        10.0        0.0     0.00               0.00
g      40.0        40.0        0.0     0.00               0.00


OK, the report clearly shows that k is overweight and so is f. So that gives us a real clue that it's k when called by f that is the problem. And it's k's exclusive cost that is the problem, because all of its normal children have 0% overweight. Note that there is a clear difference between methods with otherwise equal deltas.
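The numbers in these reports follow a simple ratio, consistent with the tables: Responsibility % is a symbol's delta as a share of the overall delta, and Overweight % is the symbol's relative growth divided by the overall relative growth. A sketch of that computation (names are mine; I assume the root symbol is `main`, as in all the examples here):

```python
def overweight_rows(base, test):
    """Given inclusive-cost dicts for the base and test runs,
    compute (name, base, test, delta, responsibility %, overweight %)
    per symbol, most overweight first."""
    total_base = base["main"]          # "main" is the root in these examples
    overall_delta = test["main"] - total_base
    rows = []
    for sym, base_cost in base.items():
        delta = test.get(sym, 0) - base_cost
        # share of the total regression this symbol accounts for
        responsibility = 100.0 * delta / overall_delta
        # symbol's relative growth vs. the overall relative growth
        overweight = 100.0 * (delta / base_cost) / (overall_delta / total_base)
        rows.append((sym, base_cost, test.get(sym, 0), delta, responsibility, overweight))
    rows.sort(key=lambda r: r[-1], reverse=True)
    return rows
```

Run on the example 2 inclusive costs, this reproduces the 300% for k and 200% for f shown above.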

Summary of Inclusive times for example 3, in which x always costs a little more

Symbol   Inclusive Cost   Exclusive Cost
main     95               5
f        48               5
g        42               0
j        42               10
k        32               10
x        30               30
y        15               15
z        15               15
l        11               5


Our second comparison: again you could see this somewhat because x is bigger, but it doesn't really pop here, and many methods seem to have been affected. A straight diff wouldn't tell you nearly as much.

Overweight Report

Before: example 1, baseline
After: example 3, in which x always costs a little more

Before Time: 90
After Time: 95
Overall Delta: 5

Analysis:

Name   Base Cost   Test Cost   Delta   Responsibility %   Overweight %
x      25.0        30.0        5.0     100.00             360.00
l      10.0        11.0        1.0     20.00              180.00
f      45.0        48.0        3.0     60.00              120.00
k      30.0        32.0        2.0     40.00              120.00
main   90.0        95.0        5.0     100.00             100.00
j      40.0        42.0        2.0     40.00              90.00
g      40.0        42.0        2.0     40.00              90.00
y      15.0        15.0        0.0     0.00               0.00
z      15.0        15.0        0.0     0.00               0.00

Well now things are leaping right off the page.  We can see that x was the best source of the regression and also that l and k are being implicated.  And f and k are bearing equal cost.  We can also see that some branches are underweight.  The j path is affected more than the k path because of the distribution of calls.

Summary of Inclusive times for example 4, in which f calls j more so that subtree gains cost

Symbol   Inclusive Cost   Exclusive Cost
main     94               5
f        49               5
j        44               11
g        40               0
k        30               10
x        26               26
y        16               16
z        16               16
l        10               5

Again, with so few symbols a straight analysis does reveal the problem; however, it's much clearer below...

Overweight Report

Before: example 1, baseline
After: example 4, in which f calls j more so that subtree gains cost

Before Time: 90
After Time: 94
Overall Delta: 4

Analysis:

Name   Base Cost   Test Cost   Delta   Responsibility %   Overweight %
j      40.0        44.0        4.0     100.00             225.00
f      45.0        49.0        4.0     100.00             200.00
y      15.0        16.0        1.0     25.00              150.00
z      15.0        16.0        1.0     25.00              150.00
main   90.0        94.0        4.0     100.00             100.00
x      25.0        26.0        1.0     25.00              90.00
k      30.0        30.0        0.0     0.00               0.00
l      10.0        10.0        0.0     0.00               0.00
g      40.0        40.0        0.0     0.00               0.00


The j method is the worst offender; y and z are getting the same impact due to extra calls from j, and the extra calls to j apparently come from f.

Summary of Inclusive times for example 5, in which x gets faster but k gets a lot slower

Symbol   Inclusive Cost   Exclusive Cost
main     105              5
f        52               5
g        48               0
k        48               30
j        38               10
x        20               20
y        15               15
z        15               15
l        9                5


Now we have some soup.  It is worse but things are a bit confused.  What's going on?

Overweight Report

Before: example 1, baseline
After: example 5, in which x gets faster but k gets a lot slower

Before Time: 90
After Time: 105
Overall Delta: 15

Analysis:

Name   Base Cost   Test Cost   Delta   Responsibility %   Overweight %
k      30.0        48.0        18.0    120.00             360.00
g      40.0        48.0        8.0     53.33              120.00
main   90.0        105.0       15.0    100.00             100.00
f      45.0        52.0        7.0     46.67              93.33
y      15.0        15.0        0.0     0.00               0.00
z      15.0        15.0        0.0     0.00               0.00
j      40.0        38.0        -2.0    -13.33             -30.00
l      10.0        9.0         -1.0    -6.67              -60.00
x      25.0        20.0        -5.0    -33.33             -120.00

Now again things are a lot clearer.  Those negative overweights are showing gains where there should be losses.  x is helping.  And k jumps to the top with a big 360%.  And it's 120% responsible for this mess, meaning not only did it cause the regression it also wiped out gains elsewhere.

In practice negatives are fairly common because sometimes costs move from one place to another. Sometimes it's due to normal things: in IE, for instance, a layout could be caused by a timer for paint rather than by an explicit request from script, but we still get one layout, so the cost just moved a bit. The overweights would show nothing new in the layout space but a big motion in timer events vs. script cost.

In practice this approach has been very good at finding problems in deep call stacks. It even works pretty well if some of the symbols have been renamed, because usually you'll find some symbol just above or below the renamed one as your starting point for investigation.

Finally, you can actually use this technique recursively. Once you find an interesting symbol ("the pivot") that has a big overweight, you regenerate the inclusive costs but ignore any stacks in which the pivot appears. Then search what's left for new interesting symbols in the same way, and repeat.
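In the stack-line representation, "ignore any stacks in which the pivot appears" is just a filter applied before re-summarizing. A sketch, with a hypothetical `remove_pivot` name:

```python
def remove_pivot(stack_lines, pivot):
    """Keep only the stacks that never pass through the pivot symbol;
    the survivors can then be re-summarized and re-compared."""
    kept = []
    for line in stack_lines:
        path = line.rsplit(",", 1)[0]
        # drop the whole stack if the pivot appears anywhere on it
        if pivot not in path.strip().split("/"):
            kept.append(line)
    return kept
```

For instance, pivoting on k drops `main/f/k, 5` entirely while leaving the j path untouched, which is exactly how the example 6 baseline below loses k's 30 units of inclusive cost.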

The code that generated these reports is here.

Appendix

As an afterthought I ran an experiment where I did the "recursion" on the last test case.  Here are the results:

Summary of Inclusive times for example 6, baseline with k removed

Symbol   Inclusive Cost   Exclusive Cost
main     60               5
j        40               10
f        35               5
g        20               0
x        15               15
l        10               5
y        10               10
z        10               10

Note k is gone.

Summary of Inclusive times for example 6, in which x gets faster and k is removed

Symbol   Inclusive Cost   Exclusive Cost
main     57               5
j        38               10
f        33               5
g        19               0
x        12               12
y        10               10
z        10               10
l        9                5

Note k is gone.

Overweight Report

Before: example 6, baseline with k removed
After: example 6, in which x gets faster and k is removed

Before Time: 60
After Time: 57
Overall Delta: -3

Analysis:

Name   Base Cost   Test Cost   Delta   Responsibility %   Overweight %
x      15.0        12.0        -3.0    100.00             400.00
l      10.0        9.0         -1.0    33.33              200.00
f      35.0        33.0        -2.0    66.67              114.29
g      20.0        19.0        -1.0    33.33              100.00
j      40.0        38.0        -2.0    66.67              100.00
main   60.0        57.0        -3.0    100.00             100.00
y      10.0        10.0        0.0     0.00               0.00
z      10.0        10.0        0.0     0.00               0.00

Overweight analysis leaves no doubt that x is responsible for the gains.
