
Feed aggregator

Vote Now to Send One Lucky uTester to STAREAST

uTest - 2 hours 2 min ago

One uTester is a step away from some testing fun in the sun.

uTesters have been busy filming their most compelling reasons why they should be sent to STAREAST 2015, and the time has now come to view and vote for your favorite entries. The uTest & TechWell STAREAST contest voting is officially live!

With the voting period now open through 11:59 p.m. ET on April 4, testers will have their chance to vote on the most compelling and creative entries, and send one of their peers to STAREAST 2015 in sunny Orlando, Florida, May 3-8. The grand prize of over $5,000 includes admission to the show, airfare, and all accommodations and meals for the duration of the conference.

Judges from uTest and TechWell will select from among the most liked/voted videos to pick the lucky winner, so be sure to vote for your favorites now.

Do you have a favorite, or did you participate in the contest yourself? Be sure to share your submission on your favorite social media networks to get the exposure your video deserves…and all the votes! After you vote, follow the results in real time every step of the way with our Leaderboard.

VOTE NOW!

The post Vote Now to Send One Lucky uTester to STAREAST appeared first on Software Testing Blog.

Categories: Companies

Appium Version 1.3.7 Released on Sauce Labs

Sauce Labs - Fri, 03/27/2015 - 01:04


We’re pleased to announce that Appium version 1.3.7 is available on Sauce. This small release includes two hotfixes:

General

  • fixed a failure to remap session id in proxied responses

iOS

  • fixed intermittent failure to find Xcode
Categories: Companies

Codehaus & Ben: Thank You and Good Bye

Sonar - Thu, 03/26/2015 - 20:55

It seems very natural today that SonarQube is hosted at Codehaus, but there was a time when it was not! In fact joining Codehaus was a big achievement for us; you might even say it was one of the project’s first milestones, because Codehaus didn’t accept just any project. That may seem strange today, when you can get started on Github in a matter of minutes, but Codehaus was picky, and just being accepted was a big deal.

It was also a big deal because being accepted by Codehaus gave us access to a full suite of best-of-breed tools: IntelliJ, JProfiler, and Nexus, plus Jira, Confluence, and the rest of the Atlassian suite… This, coupled with the fact that Codehaus took on the burden of hosting and maintaining that infrastructure, allowed us to focus on the SonarQube platform and ecosystem. It enabled us to make what we think is a great product – a product that wouldn’t be what it is today without Codehaus.

The first ticket ever created for the SonarQube (née Sonar) project was SONAR-1, entered in the Codehaus Jira on Dec. 17th, 2007. The project was just under a year old at the time (SonarSource hadn’t even been founded yet). Over the next 7+ years, that ticket was followed by nearly 14,000 more across 42 projects, more than 60,000 emails across two mailing lists, and countless documentation revisions over the many versions of SonarQube and its plugins.

Of course, “Codehaus” really boils down to one guy: Ben Walding, who has been running the 1,000-project forge on his own time and his own dime from the beginning. No matter what was going on in Ben’s life, Codehaus was up. And he wasn’t just “keeping the lights on”, either; Ben always made things not just possible, but easy. So when he told us a couple of months ago that Codehaus was shutting down, it wasn’t really a surprise. In fact, as he said, the writing had been on the wall for a while. But it was saddening. Because no matter how many other options there are today for open source projects, Codehaus will always have a special place in the history of the open source movement and in our hearts.

We’ll announce what Life After Codehaus will look like in May, but in the meantime, we say: Merci beaucoup, Большое спасибо, Heel erg bedankt, Grazie mille, vielen Dank, Suur aitäh, Nagyon köszönöm, and Thank you, Ben. Goodbye to Codehaus, and thank you very much.

Categories: Open Source

Software Testing With a Spoon (And Other Interesting Interview Topics)

uTest - Thu, 03/26/2015 - 19:05

Frequent uTest contributor Daniel Knott recently penned a nice piece on ‘how to test a spoon,’ a question he stumbled upon during a software testing job interview.

OK, so you’re probably asking: why in the heck would I test a spoon in the first place? But according to Knott, if you’re asking that, you’re just in the wrong mindset when this curveball is thrown your way.

Knott argues that it’s less about the process of actually breaking the spoon, and more about the thoughts the question elicits:

  • What is the purpose of this spoon?
    • Will it be used as a normal spoon for soup or will it be used in a chemical environment for acid liquids?
  • What is the area of operation of this spoon?
    • Will it be used in a hot or cold environment?
  • What material is it made of?
    • Is it made of plastic, metal or wood?

Have you ever been asked to test a spoon during a job interview? Have you ever been asked any other unusual brain teasers or questions while interviewing for a job? How did you approach the question? We’d love to hear from you in the comments below!

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

 

The post Software Testing With a Spoon (And Other Interesting Interview Topics) appeared first on Software Testing Blog.

Categories: Companies

Orasi Expands Software Testing Utilities

Software Testing Magazine - Thu, 03/26/2015 - 18:15
Orasi Software, an Atlanta-based quality assurance software reseller and professional services company, has announced that it is expanding its line of software testing utilities. With the addition of the upcoming OPTIC (Orasi Performance Test Intelligence Connector), Orasi will offer seven products targeted to supporting or enhancing test automation, team communication, reporting and process validation, and other important facets of software development. OPTIC will expand this portfolio by connecting two industry-leading solutions: HP LoadRunner and AppDynamics. (Further details will be released at a later date.) Orasi also places strong emphasis on test ...
Categories: Communities

Webinar Q&A: Facilitating Continuous Delivery Pipelines with Jenkins Workflow

Thank you to everyone who joined us for the webinar with eSynergy--you can view the recording here.

Below are the links found in the slides from the webinar:

Get The Code:
https://github.com/harniman/workflow-demos
https://github.com/harniman/spring-petclinic
https://github.com/jenkinsci/workflow-plugin/blob/master/COMPATIBILITY.md

Tutorial:
https://github.com/jenkinsci/workflow-plugin/blob/master/TUTORIAL.md

Jenkins Enterprise by CloudBees Workflow-Related Functionality:
http://jenkins-enterprise.cloudbees.com/docs/user-guide-docs/workflow.html

Following are answers to the questions we received during the webinar:
________________________________________________________________

Q: Can 'pause for input' be accepted via API?
A: Yes, you can POST to a URL to proceed.

Q: Any support for the Android Emulator in Workflow?
A: No specific support yet that we are aware of. You can use `sh` steps to launch and tear down the emulator.

Q: Are you developing any plugins/workflows that use Puppet/Chef?
A: Not currently, though of course you can run such commands from shell scripts. There is a plugin that allows tracking of Puppet deployments.

Q: Pending JENKINS-27295, Booleans end up as Strings in the Groovy script. Is there a workaround?
A: Currently you would use Boolean.parse or just check MYPARAM == 'true'.

Q: Can the DSL be extended with new commands? For example, to call things specific to plugins and not present in core, like running an Xcode build.
A: DSL steps are contributed by plugins.

Q: Is there a list of plugins compatible with Workflow?
A: See https://github.com/jenkinsci/workflow-plugin/blob/master/COMPATIBILITY.md

Q: Can DSL functions be reused across jobs?
A: You can `load` a file of Groovy functions you wrote, or commit a Groovy class to a Jenkins-managed Git repository and then import it from as many flows as you like.

Q: Can we restrict a checkpoint to a particular access group who can click OK?
A: Not currently; it is just based on Job/Build permission.

Q: If we have 10 commits, would all 10 builds pause at Staging, using up 10 executors, until someone clicks yes or no?
A: It is not using up executors while waiting (unless you are inside `node`). If you want to automatically cancel older prompts, that would be: https://issues.jenkins-ci.org/browse/JENKINS-27039

Q: Do we have the ability to integrate with JIRA? Like opening a task in JIRA when the tests fail.
A: It is possible the JIRA plugin for Jenkins may implement this. Or if there is a way to access the JIRA REST API, e.g. from curl/wget, you could do this today.

Q: Do you plan to implement a “task list” screen for a user to see all tasks awaiting their input?
A: Not at present. This would be a great RFE.

Q: How can I enable my own JARs in the workflow lib? Is it possible to download the workflow lib directory from an external Git (SCM) repo instead of using "static" content?
A: External JARs in the classpath (other than plugins) are not currently supported. You can load *.groovy sources (interpreted) from the `load` step, or use a Jenkins-managed Git repo to save common classes/functions.

Q: How can I capture the standard output/error and exit code of a shell script step (`sh` command)?
A: See the RFE https://issues.jenkins-ci.org/browse/JENKINS-26133 which also contains workaround idioms.

Q: Can you propose interfaces for some commands using not only String but also arrays? I.e. it is not possible to send mail to many recipients, or set more than one submitter in the input step, because of the String (not String[]) interface.
A: For the `submitter` option to `input` you can set the name of an external group, for example an LDAP group. Then anyone in that group may approve. In general, yes, steps can and do take arrays/maps where needed. Possible RFE, but I think https://issues.jenkins-ci.org/browse/JENKINS-27134 would be the better approach in general (it integrates nicely with authorization strategy configuration for Jenkins overall).

Q: How can I add an RFE request?
A: See https://github.com/jenkinsci/workflow-plugin#development

Q: When I run the code on a slave, the current dir and context are set to the workspace directory. Is there a way to automatically set it to the job-numbered dir, or do I have to do it manually?
A: You can use the `dir` step to temporarily change directory (like pushd in shell scripts), or you can ask for a specific _locked_ workspace with the `ws` step.

Q: Is it possible to put links into the workflow report screen and/or workflow step screen (i.e. to show Sonar or JUnit links)?
A: JUnit result archiving is supported. For Sonar, use the https://wiki.jenkins-ci.org/display/JENKINS/Sidebar-Link+Plugin

Q: Do you plan to provide the ability to group steps in the step list (now each `sh` is a separate step) to improve readability?
A: See https://issues.jenkins-ci.org/browse/JENKINS-26107

Q: Does `input` send an email to alert the responsible party?
A: No, but you could use the distinct `mail` step to do so.

Q: Does Workflow support multiconfiguration jobs? E.g. can I define configurations within the workflow to run multiple concurrent builds on different platforms and have each of these follow a workflow?
A: You can do so, although there is not currently a good way to separate displayed results by configuration. This is something we are thinking about adding.

Q: How hard is it to implement the GitHub triggers?
A: See https://issues.jenkins-ci.org/browse/JENKINS-27136 You would need Java development experience and knowledge of Jenkins plugin APIs.

Q: Is build promotion a feature of Workflow? What does build promotion look like in Workflow?
A: The equivalent of the Promoted Builds plugin for Workflow would be build stages.

Q: Does Workflow provide functionality to capture test result output of test stages and aggregate it, similar to the JUnit test result reports in the build stage?
A: You can run test result archiving multiple times per build. Currently all such results are simply aggregated. https://issues.jenkins-ci.org/browse/JENKINS-27395 suggests refinements.

Q: Is there more than the simple proceed/cancel functionality to the transitions between stages?
A: https://issues.jenkins-ci.org/browse/JENKINS-27039 suggests other behaviors.

Q: Are checkpoints and resuming from checkpoints a feature reserved only for Jenkins Enterprise?
A: Yes. OSS Workflow does let builds survive simple Jenkins restarts (or slave disconnections and reconnections).

Q: Can Workflow accommodate a deployment model that is asynchronous? We use Puppet to deploy. The handshake is via MQ, so the deployment status is not available immediately.
A: I would suggest using the `waitUntil` step to wait for a deployment flag to be set. In the future there might be a step designed to wait for MQ events, or specifically for Puppet, etc.

Q: Is waitUntil asynchronous?
A: Yes, it checks for the condition repeatedly, at increasingly long intervals if still false.

Q: If we are using a separate tool for deploying apps, something like IBM uDeploy, how does it integrate into the workflow?
A: You would use the `sh` (or `bat`) step to run it as an external process.

Q: If you have jobs which now continue past a Jenkins reboot, how do you restart jobs that are stuck in a poor state or need some attention?
A: You can cancel stuck builds if you need to.

Q: Is it competing with XebiaLabs XL Release and/or Nolio Release Automation? Does it have infrastructure abstractions? Does it have easy keystores for passwords where needed?
A: Not really competing with that kind of product; more complementary. There is integration with the Credentials system in Jenkins.

Q: Is it possible to just get an enterprise version of this plugin, or does it only come as part of a package?
A: Currently only as part of the Jenkins Enterprise package.

Q: Is the entire pipeline defined in a single file?
A: Yes. (Or you could define parts in a different file if that made things more readable.)

Q: Is the Job DSL plugin compatible with it? Or are they sort of competing plugins doing the same thing? (Though I know Job DSL does not do workflows.)
A: Job DSL supports Workflow as of a recent release.

Q: Can the Job DSL plugin generate workflow DSL?
A: Yes, it can.

Q: Are there other ways to create workflow jobs?
A: Yes, you can use the Jenkins Enterprise Templates capability.

Q: Is there a way to have a particular user be able to authorize the proceed/cancel upon reaching an input point?
A: Yes: `input message: '...', submitter: 'thatUserID'`

Q: Can Jenkins be used as an artifact repository, similar to Artifactory?
A: You can archive artifacts from the build, though if you have big files, or want to use them from other tools, you are recommended to use Artifactory/Nexus/etc.

Q: Is there a way to send an email to solicit input from non-technical users?
A: There is a `mail` step.

Q: Is there an option to launch Hadoop workflow jobs?
A: If there is a cloud provider for Jenkins generally, it can be used to run Workflow `node` steps on that kind of slave.

Q: I wanted to see if Jenkins can be a single place to launch Hadoop jobs; maybe internally the Oozie scheduler could be used to launch them. Any option?
A: If there is a shell command you can run to launch the job, then you can do it from Workflow.

Q: Nowadays we rely on the MultiBranch Project plugin to support our development process based on pull requests and code review. Is it possible to achieve the same result using the Workflow plugin?
A: Not currently, but I would like to add such support: https://issues.jenkins-ci.org/browse/JENKINS-26129

Q: Is there any source of Workflow plugin job configurations aside from the GitHub page?
A: A new tag for Stack Overflow has been proposed.

Q: Is there Perforce integration?
A: The new Perforce plugin (from Perforce) recently added Workflow support. The older one does not have it.

Q: If you want to create a Groovy script that contains functions that can be reused in other workflow builds, does it need to be a plugin?
A: It need not be a plugin; it can just live in your SCM, etc.

Q: What is the recommended way to have reusable Groovy parts shared among different jobs?
A: You can use the `load` step, or you can define classes in a Git repo hosted by Jenkins, which are then available for immediate import. Other options may be added in the future.

Q: Why is 'Surviving Restarts' implemented in a plugin, and not in Jenkins core?
A: Technical reasons too involved to go into here. Basically, Jenkins was not originally designed to make that possible. Jenkins Enterprise provides a plugin to support long-running builds.

Q: Is console output of parallel executions in any way grouped, or are all lines just mixed up?
A: The lines are intermingled, but prefixed by branch name.

Q: I have my workflow script in one SCM repo, but want to trigger the job from all changes in a different repo. Can I do that, or would I need to keep the workflow script with my sources?
A: Yes, you can use as many repositories as you like from a single build, which may or may not be where the Groovy script is kept.

Q: Will there be an option to download the example workflow source?
A: Yes: https://github.com/harniman/workflow-demos

Q: How is the Jenkins workflow kept in a source code control system of some kind? For example, to see what variables were defined at some point in the past?
A: You can load your whole flow script, or just a part of it, from source control.
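Several of the steps mentioned in the answers above can be seen working together in a short flow script. This is a sketch only: the repository URL, deploy commands, flag file, and submitter group are invented for illustration, but the step names (`node`, `dir`, `stage`, `sh`, `input`, `waitUntil`) are real Workflow steps:

```groovy
node {
    // hypothetical repository
    git url: 'https://github.com/example/app.git'
    dir('build') {            // like pushd in shell scripts
        sh 'make package'     // placeholder build command
    }
    stage 'Staging'
    sh './deploy.sh staging'  // placeholder deploy command
}

// No executor is consumed while waiting here, since we are outside `node`.
// Only members of the (hypothetical) 'release-managers' group may approve.
input message: 'Deploy to production?', submitter: 'release-managers'

node {
    sh './deploy.sh production'
    // Poll for an asynchronous deployment to finish; `sh` throws on a
    // nonzero exit code, so the try/catch turns that into false and
    // waitUntil retries at increasingly long intervals.
    waitUntil {
        try {
            sh 'test -e /tmp/deploy-complete'  // placeholder flag check
            return true
        } catch (e) {
            return false
        }
    }
}
```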

--Nigel Harniman
www.cloudbees.com

Nigel is a Senior Solution Architect at CloudBees.



Categories: Companies

Repost: Testing in a Real Browser with Sauce Labs + Travis CI

Sauce Labs - Thu, 03/26/2015 - 17:01

This post comes from our friend Sam Saccone, who wrote a nice how-to on using Sauce Connect and Travis CI. Check out the original post on his blog.

I recently found myself implementing a basic set of JavaScript integration tests on an open source project. I quickly came to the realization that there is a serious lack of good documentation on how to get a basic browser test running on Travis CI using Sauce Labs and Sauce Connect.

After stumbling across the barren desert of outdated doc pages, incorrect Stack Overflow answers, and ancient Google Groups postings, I present to you the spoils of my quest to the underbelly of the web.

Let’s approach this in the context of a real problem: testing JavaScript in a real web browser.
(view the complete project here)

 


(view the full diff)
To get going, we are going to set up Travis CI (https://travis-ci.org) to run on our repo.

Setting up our repo to work with Travis

Once Travis is enabled, we will want to set up a basic .travis.yml file to run a basic test on our repo.

language: node_js
node_js:
    - "0.12"

We are telling Travis to use node_js and specifying the version of Node. By default, Node projects will run npm install and npm test, so everything should be good to go, and we will see our basic test run.
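Since `npm test` is the entry point, the repo needs a test script defined. A minimal `package.json` (the name and version numbers here are hypothetical) might wire `npm test` to mocha like this:

```json
{
  "name": "travis-sauce-demo",
  "version": "1.0.0",
  "scripts": {
    "test": "mocha test/"
  },
  "devDependencies": {
    "mocha": "^2.2.0"
  }
}
```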

Creating our first browser test.


(commit diff)

To take advantage of Sauce Labs and Sauce Connect, we are going to need to use Selenium WebDriver. In case you are not familiar with Selenium, it allows you to remotely control a web browser through code (pretty awesome stuff).

The documentation for it is a bit hard to find, but I will link it here so you do not have to go hunting: http://selenium.googlecode.com/git/docs/api/javascript/class_webdriver_WebElement.html

We will want to install Selenium and set up our basic tests.

Instead of going over the entire diff, let’s look at a few critical pieces of the code.

var assert = require("assert");
var webdriver = require("selenium-webdriver");

describe("testing javascript in the browser", function() {
  beforeEach(function() {
    this.browser = new webdriver.Builder()
    .withCapabilities({
      browserName: "chrome"
    }).build();

    return this.browser.get("http://localhost:8000/page/index.html");
  });

  afterEach(function() {
    return this.browser.quit();
  });

  it("should handle clicking on a headline", function(done) {
    var headline = this.browser.findElement(webdriver.By.css('h1'));

    headline.click();

    headline.getText().then(function(txt) {
      assert.equal(txt, "awesome");
      done();
    });
  });
});

At a high level we are setting up selenium to launch a chrome instance, visiting the page we want to test, clicking on an element, and then asserting that the clicked element’s text has changed to an expected value.

There is a bit of async complexity going on that mocha and selenium for the most part abstract for us, so no need to fret about that for now.
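The trick is that every webdriver call returns a promise, and mocha waits on any promise a hook or test returns. Here is a stripped-down sketch of that chaining with no Selenium involved; `click` and `getText` are stand-ins for the real webdriver methods:

```javascript
// Each "browser command" returns a promise; chaining .then() guarantees
// the assertion only runs after the click has taken effect.
function click(element) {
  // pretend this sends a click to a remote browser
  return Promise.resolve().then(() => { element.text = "awesome"; });
}

function getText(element) {
  // pretend this fetches the element's text from the browser
  return Promise.resolve(element.text);
}

const headline = { text: "hello" };

click(headline)
  .then(() => getText(headline))
  .then(txt => {
    console.log(txt); // prints "awesome"
  });
```

The real test above reads almost identically; webdriver's queueing just hides most of the explicit chaining.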

If you noticed, we are visiting http://localhost:8000, which means we need to boot up a server to serve our content. Travis makes this easy enough via a before_script task. In the task we will just start a Python simple server and give it a few seconds to boot. The ampersand at the end of the Python line runs the process in the background instead of blocking execution, allowing us to run tasks at the same time.

language: node_js
before_script:
  - python -m SimpleHTTPServer &
  - sleep 2
node_js:
    - "0.12"

Believe it or not that is all we need to run our tests in a real browser (or at least it seems to satisfy the requirements to run things locally).

And in Comes Sauce Labs

(view the full diff)
When we push our work from before, we are rudely awoken by the unfortunate fact that nothing is working on travis.


https://travis-ci.org/samccone/travis-sauce-connect/builds/54796160

This is where we will lean on Sauce. Sauce Labs provides its services for free for open source projects. We can get our access keys via https://docs.saucelabs.com/ci-integrations/travis-ci/ and just add the encrypted keys to our .travis.yml file; these keys will allow us to connect to Sauce’s VM cluster to run our tests against.


From here we just need to enable the sauce_connect addon for Travis; again, only a minor change to the .travis.yml file is needed.
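The resulting stanza in `.travis.yml` looks roughly like the following; the `secure` values are placeholders standing in for the encrypted credentials generated by the `travis encrypt` CLI:

```yaml
addons:
  sauce_connect: true
env:
  global:
    # placeholders for the encrypted SAUCE_USERNAME and SAUCE_ACCESS_KEY
    - secure: "<encrypted SAUCE_USERNAME=...>"
    - secure: "<encrypted SAUCE_ACCESS_KEY=...>"
```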


The next step is to tweak our Selenium browser build step to use Sauce Labs VMs instead of the local machine.

 beforeEach(function() {
    if (process.env.SAUCE_USERNAME != undefined) {
      this.browser = new webdriver.Builder()
      .usingServer('http://'+ process.env.SAUCE_USERNAME+':'+process.env.SAUCE_ACCESS_KEY+'@ondemand.saucelabs.com:80/wd/hub')
      .withCapabilities({
        'tunnel-identifier': process.env.TRAVIS_JOB_NUMBER,
        build: process.env.TRAVIS_BUILD_NUMBER,
        username: process.env.SAUCE_USERNAME,
        accessKey: process.env.SAUCE_ACCESS_KEY,
        browserName: "chrome"
      }).build();
    } else {
      this.browser = new webdriver.Builder()
      .withCapabilities({
        browserName: "chrome"
      }).build();
    }

    return this.browser.get("http://localhost:8000/page/index.html");
  });

Our beforeEach step gets a tiny bit more complex. We first detect whether there is a Sauce environment variable, and if so, we set up the browser with the parameters required for it to connect to Sauce’s infrastructure. After that setup is done, we can be blind to the change we made, since we will not have to worry about it again.

Once we push our changes, we can see everything works like a charm!



We even get an awesome playback video from Sauce to help us debug if there were any problems: Sauce Test Playback.

Hopefully this helps to codify the path to adding integration tests on your javascript projects.
If you still have questions please reach out via a github issue or on twitter.
Sam Saccone @samccone

Categories: Companies

Guiding Principles for Building a Performance Engineering-Driven Delivery Model

While recently attending a Dynatrace User Group in Hartford, I had the opportunity to sit in on a great presentation from a leading US insurance company as they explained their 3 year APM journey. I see a lot of these success stories, but this one was especially impressive. To see how they have refined their […]

The post Guiding Principles for Building a Performance Engineering-Driven Delivery Model appeared first on Dynatrace APM Blog.

Categories: Companies

Reach the next level - NEW Ranorex Advanced Training Course

Ranorex - Thu, 03/26/2015 - 11:00
Have you been looking for ways to increase the effectiveness of your automated tests using the Ranorex tools but haven’t had the time or experience to take advantage of some of the more sophisticated functionality Ranorex has to offer?

We are happy to announce a new offering to help you reach the next level of Ranorex competency with our advanced training course:

What: This 2-day course is geared towards existing Ranorex users who would like to gain a deeper understanding of key aspects like object recognition using RanoreXpath and Ranorex Spy, Ranorex Object Repository optimization, Ranorex API usage, plus much more! You will reduce maintenance and increase functionality while learning best practices to customize Ranorex and collaborate in a team environment. The content is technology agnostic, and you will benefit from this training regardless of the type of UI you are automating with Ranorex.

When: The course dates are April 28-29, 2015 and June 16-17, 2015. Stay tuned for additional dates later in 2015.

Where: At the Ranorex North American headquarters in Clearwater, FL. This will allow us to provide more personal interaction and attention to each attendee while covering advanced topics. The advanced training course will not be offered online. Instead, we offer you the opportunity to visit the Sunshine State and are hosting the training at our corporate office. This allows you to engage with the Ranorex team for a more individual learning experience through a combination of presentations, hands-on exercises and group discussions.

Cost: Registration for the 2-day training is $1,395 per person. Space is limited, so reserve your seat now!

Registration: Right this way, please, or contact our sales team for a quote.
Additional information can be found in the course description and curriculum. Please be aware of the necessary prerequisites for attending this training course.



Please contact us at sales.us@ranorex.com with any questions you may have.
We look forward to seeing you in one of our advanced classes soon!

Look at the schedules for additional workshops in the next few months:

Categories: Companies

Pulse Roadmap Update

a little madness - Thu, 03/26/2015 - 05:30

Long time users of our Pulse Continuous Integration Server would know that we don’t believe in posting long-term roadmaps. They just never reflect a changing reality! But we have always been happy to discuss features with customers, including keeping our issue tracker (creaky old version of Jira that it is) completely open for all to see and contribute. In that spirit I’d like to talk a little about where we’re heading with Pulse in the near term, the bit that can be predicted, in a format more digestible than disparate issues.

The next version of Pulse (as yet unnamed), will have updates focused on a few areas:

  1. Upgrades of underlying libraries including Equinox, Spring, Spring Security, Hibernate, Jetty, Quartz, EhCache and more. If you haven’t seen a lot of visible changes reported recently this is why: these upgrades have occupied the first part of this development cycle. These are truly the most boring of all changes, which we hope you won’t notice directly at all! What you will notice, though, is a payoff of this strong foundation over time.
  2. Major updates to the administration interface. The interface works well enough at the moment but could be improved in a couple of key areas: discoverability and efficiency. Key goals for these updates include:

    • Improving the visibility of the most commonly-used configuration via overview pages.
    • Making it easier to discover what is overridden (via templating) and where.
    • More efficient navigation, especially through the template hierarchy.
    • Modernisation to take advantage of HTML 5 (which the current interface predates).

    These changes are big enough to warrant a dedicated blog post at a future point.

  3. Improved visibility of the build environment. When builds fail in curious ways the culprit is often a small difference in the environment. Pulse currently publishes environment information via implicit env.txt artifacts, but these haven’t kept up to date with the variety of options Pulse now gives for specifying build properties.
  4. Improvements to the Windows experience. In 2.7 work was done to improve Windows service support, but more could be done to streamline the setup process in particular.

As always we will also be working on dozens of smaller improvements and suggestions from our user base, most of which fall under one of:

  • UI polish, especially in the reporting interface.
  • Increased flexibility of project and build configuration.
  • Updated support for modern versions of build tooling.

Customers are more than welcome to connect with us via our support email, support forum, or issue tracker to discuss these and other changes you’d like to see in Pulse!

Categories: Companies

Registration is Open for JUC 2015!

Attend THE conference for Jenkins users, by Jenkins users. Register and learn more. The Early Bird rate ends May 1!
In the past, the Jenkins User Conference has been a one-day event, but this year, for the first time ever, it will be a two-day event in three cities, providing you with more content and more networking opportunities with more Jenkins users! (See the attendance numbers below.)

The main focus of JUC is the use of Jenkins for continuous integration (CI) and continuous delivery (CD) as the fundamental best practice for enterprise software delivery. All JUC presenters are experienced Jenkins developers, build managers, QA, DevOps practitioners, IT managers/executives, architects and IT operations who are luminaries within the Jenkins community. They represent the many organizations around the world that are leveraging the use of Jenkins within the software delivery lifecycle.
In 2014, the community saw an 80% increase in attendance over 2013. This year, 800-1000 attendees are expected in each city!
We welcome you and other leading Jenkins developers, QA, DevOps and operations personnel to the Jenkins User Conference World Tour. As the organizing sponsor of the Jenkins User Conferences, CloudBees has helped the community grow the Jenkins User Conferences worldwide over the last four years.

In 2015, the World Tour will bring together the full strength of the Jenkins community—now over 100,000 installations strong—and the ever expanding Jenkins partner ecosystem, allowing attendees to learn, explore, network face-to-face and to shape the next evolution of Jenkins development. Kohsuke Kawaguchi will kick off the event with a keynote address and lead us into the two-day conference. Attend a JUC to get the knowledge you need to make your current and future Jenkins projects a success.
Categories: Companies

Testing Angular Applications

Software Testing Magazine - Wed, 03/25/2015 - 17:47
Ari Lerner believes that testing is a core aspect of development: the two cannot be separated from one another; they are one and the same. This talk is about Angular, and specifically about testing applications built with the Angular JavaScript framework, but the approaches discussed apply to front-end applications generally. The talk tries to answer the following questions about testing Angular.js applications: What does it mean to test? Why test? What is good testing technique? How can we test every component of our application? Conference producer: http://forwardjs.com/ Video producer: https://thenewcircle.com/
Categories: Communities

uTest Announces Winning Testers of the Quarter for Q1 2015

uTest - Wed, 03/25/2015 - 17:12

uTest is proud to announce the first 2015 Testers of the Quarter for Q1!

Our quarterly community recognition program exists solely to recognize and award the rock stars of our global community. Testers recently concluded voting for their peers and mentors, recognizing their dedication and quality work in various facets of uTest participation including test cycle performance, course writing and blogging, and Forums participation.

In addition to the winners below, you can also view their names now in our uTest Hall of Fame. Without further ado, here are the Q1 2015 Testers of the Quarter at uTest:

Outstanding Forums Contributors
David Petura, Czech Republic
David Shakhunov, United States
Bhudev Dalal, India

Outstanding Content Contributors
George McConnon, United Kingdom
Evan Hjelmstad, United States

Outstanding TTLs
George McConnon, United Kingdom
Nadezda Jerjomina, Latvia
Linda Peterson, United States

Outstanding Testers, TTLs’ Choice
Matthew Duval, United States
Milos Dedijer, Serbia

A big congratulations to all of those who had the distinction of being recognized by their peers for the first 2015 edition of Tester of the Quarter. We even had some multiple-quarter-and-category winners lighting it up this quarter, and continuing to be rock stars in the eyes of their peers! Additionally, while their names may not be here, there were also countless other testers that got individual praise along the way — their hard work did not go unnoticed.

Leave your congratulations in the Comments below, or visit the Forums to see the full announcement…along with some of the tester praise that led to these distinctions!

The post uTest Announces Winning Testers of the Quarter for Q1 2015 appeared first on Software Testing Blog.

Categories: Companies

Master the Essentials of UI Test Automation Series: Chapter Six

Telerik TestStudio - Wed, 03/25/2015 - 14:00
Chapter 6: Automation in the Real World So here you are: ready and raring to get real work done. Hopefully, at this point, you're feeling excited about what you've accomplished so far. Your team has set itself up for success through the right amount of planning, learning and prototyping. Now it's time to execute on what you've laid out. Remember: your best chance for success is focusing on early conversations to eliminate rework or waste, and being passionate about true collaboration. Break down the walls wherever possible to make the mechanics of automation all that much easier...
Categories: Companies

Making Sure New TestTrack Items are Immediately Assigned

The Seapine View - Wed, 03/25/2015 - 09:00

Have you ever wondered if there was a way to make sure new items are assigned to a user as soon as they are added to TestTrack, so your project does not contain a large number of items that are not assigned? Here are a couple of solutions that can help you do just that.

Note: The following solutions use trigger rules to make sure new items added are assigned to users. Issues are used in the example configurations, but the information can also be used to assign new requirements, documents, and test cases.

Solution 1: Create a trigger that assigns items based on the value set in a custom assignment field

This solution uses a trigger rule to automatically assign new issues based on the value set in a custom field.

First, create a custom assignment field that lets users select who to assign new issues to. Because issues will be assigned to different users as they move through the workflow, consider using a field name that clearly indicates the field is only used to set the initial assignment for new issues. After the field is added, you will update the security privileges to make sure it is only available when users add issues. More on that later.

The assignment field should be a Pop-Up Menu field that uses the Users value list. Display the field on the Main Issue Window so users can easily see it and be sure to select Supports multiple selection if you want users to be able to assign new issues to more than one person.

[Screenshot: AddCustomFieldAssignedTo]

After the assignment field is added, create a trigger that enters an Assign event when new issues are created and uses the value set in the assignment field to determine who to assign them to. In the Add Trigger Rule dialog box, leave the Precondition setting as Not Filtered so the trigger applies to all new issues. Click the Trigger When tab to specify that the trigger should run when a new issue is created and before it is saved. Click the Actions tab to add an Enter event action. The trigger should enter an Assign event and assign the issue to the list of users selected in the assignment field. The following screenshot shows the complete trigger summary.

[Screenshot: EnterAssignEventTriggerSummary]

Next, update field security for the assignment field in all security groups. Users should have read/write privilege to the field when adding new issues, but the field should be hidden when editing issues to prevent users from using the field instead of a workflow event to assign updated items.

Finally, make the assignment field required so users must select who to assign new issues to. If users do not set the field when adding issues, they are prompted to set it before saving.

When all the changes are saved, add a new issue to test the required assignment field and trigger. Enter all the required issue information except the assignment setting and click Add. You should see a message indicating that you cannot save the issue because the assignment field is required.

[Screenshot: AddIssueAssignToRequired]

Select a user in the assignment field and then click Add again. The issue is added and automatically assigned to the selected user. When you view or edit the issue, you can see the Assign event was added to the issue and the initial assignment field is hidden.

[Screenshot: ViewIssueAssignTrigger]

Solution 2: Create a trigger that prevents users from adding new items without assigning them first

This solution uses a trigger rule to make sure new issues are assigned through the workflow to the appropriate user before they are saved.

First, create a trigger that applies only to issues that are not assigned. In the Add Trigger Rule dialog box, on the Precondition tab, click Create Filter. The filter should use the Currently Assigned To restriction to select issues that have an unknown assignment.

[Screenshot: AddFilterNotAssigned]

After selecting the precondition filter for the trigger, click the Trigger When tab to specify that the trigger should run when a new issue is created and before it is saved. Finally, click the Actions tab to add a prevent action that displays a message to users instructing them to enter the Assign event before saving new issues. The following screenshot shows the complete trigger summary.

[Screenshot: PreventAddingWithAssignmentTriggerSummary]

Next, make sure the initial state in the issue workflow allows users to add an Assign event. To check this, choose Tools > Administration > Workflow. Select Issues as the Type and click the Transitions tab. The initial state for new issues should have Assign set as an available transition.

[Screenshot: ConfigureWorkflowIntitalStateAssignTransition]

When all the changes are saved, add a new issue to test the trigger. Enter all the required issue information and then click Add. You should see the message indicating that you cannot save the issue because you did not enter an Assign event.

[Screenshot: AddIssueCannotSaveUnassignedIssue]

Assign the issue and then click Add again. The issue is added and assigned to the selected user.

Thanks to Gordon Alexander, Seapine Software solutions specialist, for providing the second solution mentioned in this post.


Categories: Companies

Learn How to Find Highly Valuable Bugs

uTest - Tue, 03/24/2015 - 20:01

uTest University recently hosted a “How to Find Highly Valuable Bugs” live webinar, led by Test Team Leads (TTLs) Dave D’Amico and Todd Miller. Dave’s experience in software support/system administration and Todd’s perspective as a former test manager on the customer side provided some unique insights into the bug hunting and bug reporting process.

Some tester tips from the webinar include:

  • It is important to know the customer’s product life cycle and where your testing fits in to that life cycle.
  • Know the scope and known issues for your test cycle so that you recognize a high value bug when you encounter one.
  • Monitor the cycle and see what other testers are submitting. Are there traits of the approved bugs that you can adopt to improve your own reporting?
  • Every cycle is unique, so a tester needs to adapt based on the information given in each new cycle.
  • Context builds value! Bug reports often get sent to people on the customer side who were not part of the test cycle. Make sure your bug report is written so that everyone can understand it.

In this excerpt from the webinar, Dave and Todd talk about the concept of “high value” and why it can differ between customers and test cycles. You can also view the full recorded webinar.

 

For more courses, how-tos and webinars, check out uTest University, your source for free software testing training.

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

The post Learn How to Find Highly Valuable Bugs appeared first on Software Testing Blog.

Categories: Companies

This Is The Only Time To Ask For App Reviews

Testlio - Community of testers - Tue, 03/24/2015 - 19:33

Do you like me?

Unless you know me or are a steady reader of this blog I can’t imagine you would be ready to answer that question.

In fact, you’re likely leaning towards no.

It’s obvious why people don’t ask that question when they first meet someone.

Yet despite how obvious that is, I see apps asking me to review them in the app store within the first 10 minutes.

New users are delicate. The slightest nudge in the wrong direction can turn them off from your app forever. In fact, 80-90% of users delete an app after using it just once.

However, it is important to ask your users to rate your app on the app store. Few of your users will be proactive enough to go to the app store and rate it on their own unless you have a standout app.

Your users need to be pushed to rate your app. Otherwise it will never show up in results and potential users will be reluctant to try it out.

Whenever you ask your users to review your app, the timing needs to make complete sense. If you don’t give them enough time to play with your app and learn how it works, you end up rushing them. When you rush your users to make a decision, it leaves a bad taste in their mouth. And when you leave a bad taste in your user’s mouth, they will translate that foul taste into an app review.

 

 

[Image from Kissmetrics]

There is only one situation where it makes sense to ask for an app review from your user.

After they experience the core value of your app.

This is the only time I have found where it makes sense to ask a user to rate your app if you’re looking for constructive feedback and/or positive reviews.

This is important if you want to sustain high app store ratings and improve your app.

If you’re creating a messaging app, ask your users to rate your app after they’ve sent at least 20 messages. By then it should be clear that your app generates some sense of value for them.

If you’re creating a social network, then ask people to review your app after they’ve added a few of their friends and interacted with them. You wouldn’t ask someone to rate your app if they’ve never used it.
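That milestone-based timing is easy to express in code. A minimal sketch, assuming a messaging app and the 20-message threshold mentioned above (the state shape and names are illustrative, not from any particular SDK):

```javascript
// Hedged sketch: gate the rating prompt on a usage milestone rather
// than on elapsed time since install. Threshold and storage are
// assumptions for illustration.
var RATING_THRESHOLD = 20; // messages sent before we ever ask

var state = { messagesSent: 0, promptShown: false };

function recordMessageSent() {
  state.messagesSent += 1;
}

function shouldAskForReview() {
  // Ask only once, and only after the user has experienced core value.
  return !state.promptShown && state.messagesSent >= RATING_THRESHOLD;
}

for (var i = 0; i < 20; i++) recordMessageSent();
console.log(shouldAskForReview()); // true

state.promptShown = true;
console.log(shouldAskForReview()); // false
```

The milestone you count (messages sent, friends added, tasks completed) is whatever proves the user has seen your app's core value; the one-time guard keeps the prompt from nagging.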

 

“But what if I want to use the app store to see how I can improve my app?”

Your app store review page is not a support center.

If your users have a problem with your app, don’t give them the option to release their frustration on your app store rating. Instead, make a prompt come up asking them to explain their frustrations. Keep it private. Not everyone in the world needs to know that your app is buggy.

In fact, the best customers are the ones with the problems. By giving your users an avenue to express their frustrations, you open an opportunity to turn a frustrated user into an evangelist by providing exceptional customer service.

If you’re looking to turn frustrated users around I recommend using Helpshift. Their product creates a very simple instant communication avenue with your users.
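The routing described above, where happy users go to the store and frustrated users go to a private channel, can be sketched as a simple two-branch prompt. The `ui` object and its methods are placeholders for whatever in-app UI and support tool (Helpshift or otherwise) you actually use:

```javascript
// Sketch: route a user's sentiment before ever exposing the public
// store rating. The ui object is a stand-in for real app UI calls.
function handleFeedbackPrompt(enjoyingTheApp, ui) {
  if (enjoyingTheApp) {
    // Happy users are the only ones sent to the public store rating.
    ui.openStoreReview();
  } else {
    // Frustration stays private, where support can respond to it.
    ui.openPrivateFeedbackForm();
  }
}

// Exercise both branches with a recording fake.
var calls = [];
var fakeUi = {
  openStoreReview: function () { calls.push('store'); },
  openPrivateFeedbackForm: function () { calls.push('private'); }
};
handleFeedbackPrompt(true, fakeUi);
handleFeedbackPrompt(false, fakeUi);
console.log(calls); // [ 'store', 'private' ]
```

The design choice is the point: the store rating dialog is opt-in for satisfied users only, so frustration is captured where you can act on it instead of on your rating.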

 

Those who like you give you five stars. Those who love you give you four.

A common mistake I see when founders are reading their app reviews is focusing too much on one and five star ratings.

[Screenshot: App reviews on the app store]

I have a problem with anything using the Likert scale because you end up with U-shaped results: a lot of five star ratings, a lot of one star ratings, and very few of everything in between.

One star ratings are almost useless. The only time they make sense is when there is a huge flaw, such as frequent crashing. Other times they come from a user who had a bad experience through no fault of your app. For example, this person gave a waterfall mapping app a one star review because they got stung by wasps.

One star ratings generally aren’t helpful. Five star ratings aren’t much better.

Thoughtful five star reviews are great for finding out what users love about your app. While this may be great for your team’s focus, it doesn’t give you much in the way of improvements or new information.

Users who rate your app between 2-4 stars want to see you succeed. These are the reviewers who give thoughtful, constructive criticism.

They will tell you what they like, what they don’t like, and what they think could be done for improvement.

 

Conclusion

If you’re asking your users to rate your app, make sure you’re doing it in a timely manner. Don’t rush them to form an opinion when they haven’t had enough time to experience the benefit of the app.

One star and five star app reviews will not be as helpful for information to improve your app. Instead focus on the 2-4 star reviews.

When do you ask your users to rate your app? How has it worked out for you so far? Tweet your answer to www.twitter.com/willietran_ or reply in the comments below.

The post This Is The Only Time To Ask For App Reviews appeared first on Testlio.

Categories: Companies

Vector Software Adds Covered By Analysis Capability

Software Testing Magazine - Tue, 03/24/2015 - 18:20
Vector Software, a provider of solutions for embedded software quality, has announced the availability of VectorCAST/CBA (Covered By Analysis) which allows users in regulated industries to augment measured coverage with manual analysis to achieve the mandated 100% code coverage. VectorCAST/CBA is available as an add-on for all VectorCAST products, and provides an intuitive editor which allows users to provide analysis for statements, branch outcomes, or MC/DC pairs depending on the coverage level. The ability to combine Coverage Analysis data sets with measured code coverage from unit, integration, and system testing, provides ...
Categories: Communities

Clean Tests: Isolation with Fakes

Jimmy Bogard - Tue, 03/24/2015 - 17:58

Other posts in this series:

So far in this series, I’ve walked through different modes of isolation – from internal state using child containers to external state with database resets and Respawn. In my tests, I try to avoid fakes/mocks as much as possible. If I can control and isolate the state, then I’ll leave the real implementations in my tests.

There are some edge cases in which there are dependencies that I can’t control – web services, message queues and so on. For these difficult to isolate dependencies, fakes are acceptable. We’re using AutoFixture to supply our mocks, and child containers to isolate any modifications. It should be fairly straightforward then to forward mocks in our container.

As far as mocking frameworks go, I try to pick the one with the simplest interface and the fewest features; more features just means more headache. For me, that would be FakeItEasy.

First, let’s look at a simple scenario of creating a mock and modifying our container.

Manual injection

We’ve got our libraries added, now we just need to add a way to create a fake and inject it into our child container. Since we’ve built an explicit fixture object, this is the perfect place to put our code:

public T Fake<T>()
{
    var fake = A.Fake<T>();

    Container.EjectAllInstancesOf<T>();
    Container.Inject(typeof(T), fake);

    return fake;
}

We create the fake using FakeItEasy, then inject the instance into our child container. Because we might have some existing instances configured, I use “EjectAllInstancesOf” to purge any configured instances. Once we’ve injected our fake, we can now both configure the fake and use our container to build out an instance of a root component. The code we’re trying to test is:

public class InvoiceApprover : IInvoiceApprover
{
    private readonly IApprovalService _approvalService;

    public InvoiceApprover(IApprovalService approvalService)
    {
        _approvalService = approvalService;
    }

    public void Approve(Invoice invoice)
    {
        var canBeApproved = _approvalService.CheckApproval(invoice.Id);

        if (canBeApproved)
        {
            invoice.Approve();
        }
    }
}

In our situation, the approval service is some web service that we can’t control and we’d like to stub that out. Our test now becomes:

public class InvoiceApprovalTests
{
    private readonly Invoice _invoice;

    public InvoiceApprovalTests(Invoice invoice,
        SlowTestFixture fixture)
    {
        _invoice = invoice;

        var mockService = fixture.Fake<IApprovalService>();
        A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

        var invoiceApprover = fixture.Container.GetInstance<IInvoiceApprover>();

        invoiceApprover.Approve(invoice);
        fixture.Save(invoice);
    }

    public void ShouldMarkInvoiceApproved()
    {
        _invoice.IsApproved.ShouldBe(true);
    }

    public void ShouldMarkInvoiceLocked()
    {
        _invoice.IsLocked.ShouldBe(true);
    }
}

Instead of using FakeItEasy directly, we go through our fixture instead. Once our fixture creates the fake, we can use the fixture’s child container directly to build out our root component. We configured the child container to use our fake instead of the real web service – but this is encapsulated in our test. We just grab a fake and start going.

The manual injection works fine, but we can also instruct AutoFixture to handle this a little more intelligently.

Automatic injection

We’re trying to get out of creating the fake and root component ourselves – that’s what AutoFixture is supposed to take care of, creating our fixtures. We can instead create an attribute that AutoFixture can key into:

[AttributeUsage(AttributeTargets.Parameter)]
public sealed class FakeAttribute : Attribute { }

Instead of building out the fixture items ourselves, we go back to AutoFixture supplying them, but now with our new Fake attribute:

public InvoiceApprovalTests(Invoice invoice, 
    [Fake] IApprovalService mockService,
    IInvoiceApprover invoiceApprover,
    SlowTestFixture fixture)
{
    _invoice = invoice;

    A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

    invoiceApprover.Approve(invoice);
    fixture.Save(invoice);
}

In order to build out our fake instances, we need to create a specimen builder for AutoFixture:

public class FakeBuilder : ISpecimenBuilder
{
    private readonly IContainer _container;

    public FakeBuilder(IContainer container)
    {
        _container = container;
    }

    public object Create(object request, ISpecimenContext context)
    {
        var paramInfo = request as ParameterInfo;

        if (paramInfo == null)
            return new NoSpecimen(request);

        var attr = paramInfo.GetCustomAttribute<FakeAttribute>();

        if (attr == null)
            return new NoSpecimen(request);

        var method = typeof(A)
            .GetMethod("Fake", Type.EmptyTypes)
            .MakeGenericMethod(paramInfo.ParameterType);

        var fake = method.Invoke(null, null);

        _container.Configure(cfg => cfg.For(paramInfo.ParameterType).Use(fake));

        return fake;
    }
}

It’s the same code as inside our context object’s “Fake” method, made a tiny bit more verbose since we’re dealing with type metadata. Finally, we need to register our specimen builder with AutoFixture:

public class SlowTestsCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var contextFixture = new SlowTestFixture();

        fixture.Register(() => contextFixture);

        fixture.Customizations.Add(new FakeBuilder(contextFixture.Container));
        fixture.Customizations.Add(new ContainerBuilder(contextFixture.Container));
    }
}

We now have two options when building out fakes – manually through our context object, or automatically through AutoFixture. Either way, our fakes are completely isolated from other tests but we still build out our root components we’re testing through our container. Building out through the container forces our test to match what we’d do in production as much as possible. This cuts down on false positives/negatives.

That’s it for this series on clean tests – we looked at isolating internal and external state, using Fixie to build out how we want to structure tests, and AutoFixture to supply our inputs. At one point, I wasn’t too interested in structuring and refactoring test code. But having been on projects with lots of tests, I’ve found that tests retain their value when we put thought into their design, favor composition over inheritance, and try to keep them as tightly focused as possible (just like production code).

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs
