
Feed aggregator

Conference organizers, try harder. Conference participants: shop!

Agile Testing with Lisa Crispin - Sun, 03/01/2015 - 01:25

I just received a flyer in my snail mail for yet another conference where four out of the five keynote speakers are white men and only one is a woman. Are you kidding me? And this is a testing conference. Testing is a field that does indeed have lots of women, I would guess a significantly higher percentage than, say, programming.

I know the organizers of this conference and they are good people who aren’t purposely discriminating against women (or minorities, for that matter). But they aren’t trying hard enough, either. I’ve personally sent long lists of women I recommend to speak at their conferences. True, most of these women aren’t “known” keynote speakers – maybe because nobody ever asks them to keynote. These women are highly experienced testing practitioners who have valuable experience to share.

This same company has an upcoming testing conference with no female keynoters, so I guess this is an improvement. But I’m not letting them off the hook, and you shouldn’t either.

What do you value more: a highly entertaining, “big name” keynote speech? Or an experienced practitioner who competently helps you learn some new ideas to go and try with your own teams, but maybe isn’t as well known or flashy?

You probably don’t get to go to many conferences, so be choosy. Choose the ones with a diverse lineup of not only keynoters but presenters of all types of sessions. In fact, choose conferences that have lots of hands-on sessions where you get to learn by practicing with your peers. We have the choice of these conferences now. And I hope you will leave your favorites in comments here. I don’t want to make my friends unhappy by naming names here, but email me and I’ll give you my own recommendations. (Another disclaimer – I’m personally not looking for keynoting gigs, so these are not sour grapes. I don’t like doing keynotes, and I know my limitations as a presenter).

The organizations sponsoring and organizing conferences are pandering to what they think you, their paying audience, want to see. If you’re going to conferences to see big names and polished speakers, and you don’t care whether the lineup is diverse, go ahead. If you want a really great learning experience, do some more research about where your time and money will reap the most value for you.

I’m not trying to start a boycott, but I am saying: we are the market. Let’s start demanding what we want, and I know these conference organizers will then have to step up and try harder.

The post Conference organizers, try harder. Conference participants: shop! appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Summary of five years

Thought Nursery - Jeffrey Fredrick - Sat, 02/28/2015 - 11:05

“What have you been doing these last few years?” was the question Péter Halácsy asked me during my visit to Prezi. I was there for the CTO equivalent of a developer exchange: learning how things were done at Prezi, sharing my observations, and then speaking at the Budapest Jenkins meetup. Prior to my visit Péter had come to this blog to learn more about me, only to learn that I’d not been blogging. I’m resolved to get back into the blogging habit this year and I decided I’d take the time to fill in the gap for any future Péters. In part this will recapitulate my LinkedIn profile, but it will also describe some of what I felt was most significant.

The primary reason I only posted a single post after 2009 was that I joined Urbancode in a marketing/evangelism role and I posted almost everything I had to say under their masthead. In my two and a half years there I had a great time spreading the word about build automation, continuous delivery and Devops. I was able to visit a wide range of companies, learn firsthand about the challenges of enterprise organizations, and then turn this information into new content. At Urbancode we developed a very good flow of information and almost every month we had a new webinar, a newsletter, and maybe a white paper. My primary content collaborator was Eric Minick and he has kept up those evangelizing ways at IBM following their acquisition of Urbancode.

After I left Urbancode we made a family decision to try living in London for a few years. I reached out to Douglas Squirrel and he brought me into TIM Group to do continuous delivery, infrastructure and operations. In my time there I’ve become CTO and Head of Product and I’ve really enjoyed the opportunity to apply what I know, both about product development and about organizational change. I’ve been nearly as absent from the TIM Group development blog, but I have managed to share some of our experiences and learning at a few conferences including GOTO Conference 2012 (talk description & slides: A Leap from Agile to DevOps), London Devops Days 2013 (video of talk: Crossing the Uncanny Valley of Culture through Mutual Learning), and XPDay London 2014.

During my time in London Benjamin Mitchell has been one of the biggest influences on my thinking and approach to organizational change. Benjamin has been a guide to the work of Chris Argyris and Action Science. It has been what I’ve learned from and with Benjamin that has inspired me to start the London Action Science Meetup.

Finally, I couldn’t recap the last few years without also mentioning Paul Julius and CITCON. Since I last mentioned CITCON North America in Minneapolis on this blog in 2009 we’ve gone on to organize 16 additional CITCON events worldwide, most recently in Auckland (CITCON ANZ), Zagreb (CITCON Europe), Austin (CITCON North America), and Hong Kong (CITCON Asia). For PJ and me this is our 10th year of CITCON (and OIF, the Open Information Foundation) and it has been fantastic to continue to meet people throughout the world who care about improving the way we do software development.

Categories: Blogs

How To Add Visual Testing To Existing Selenium Tests

Sauce Labs - Fri, 02/27/2015 - 22:00

Thanks again to those of you who attended our recent webinar with Applitools on automated visual testing. If you want to share it or if you happened to miss it, you can catch the audio and slides here. We also worked with Selenium expert Dave Haeffner to provide the how-to on the subject. Enjoy his post below.

 

The Problem

In previous write-ups I covered what automated visual testing is and how to do it. Unfortunately, based on the examples demonstrated, it may be unclear how automated visual testing fits into your existing automated testing practice.

Do you need to write and maintain a separate set of tests? What about your existing Selenium tests? What do you do if there isn’t a sufficient library for the programming language you’re currently using?

A Solution

You can rest easy knowing that you can build automated visual testing checks into your existing Selenium tests. By leveraging a third-party platform like Applitools Eyes, this is a simple feat.

And when coupled with Sauce Labs, you can quickly add coverage for those hard-to-reach browser, device, and platform combinations.

Let’s step through an example.

An Example

NOTE: This example is written in Java with the JUnit testing framework.

Let’s start with an existing Selenium test. A simple one that logs into a website.

// filename: Login.java

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Login {

    private WebDriver driver;

    @Before
    public void setup() {
        driver =  new FirefoxDriver();
    }

    @Test
    public void succeeded() {
        driver.get("http://the-internet.herokuapp.com/login");
        driver.findElement(By.id("username")).sendKeys("tomsmith");
        driver.findElement(By.id("password")).sendKeys("SuperSecretPassword!");
        driver.findElement(By.id("login")).submit();
        Assert.assertTrue("success message should be present after logging in",
                driver.findElement(By.cssSelector(".flash.success")).isDisplayed());
    }

    @After
    public void teardown() {
        driver.quit();
    }
}

In it we’re loading an instance of Firefox, visiting the login page on the-internet, inputting the username & password, submitting the form, asserting that we reached a logged in state, and closing the browser.

Now let’s add in Applitools Eyes support.

If you haven’t already done so, you’ll need to create a free Applitools Eyes account (no credit-card required). You’ll then need to install the Applitools Eyes Java SDK and import it into the test.

// filename: pom.xml

<dependency>
  <groupId>com.applitools</groupId>
  <artifactId>eyes-selenium-java</artifactId>
  <version>RELEASE</version>
</dependency>
// filename: Login.java

import com.applitools.eyes.Eyes;
...

Next, we’ll need to add a variable (to store the instance of Applitools Eyes) and modify our test setup.

// filename: Login.java
...
public class Login {

    private WebDriver driver;
    private Eyes eyes;

    @Before
    public void setup() {
        WebDriver browser =  new FirefoxDriver();
        eyes = new Eyes();
        eyes.setApiKey("YOUR_APPLITOOLS_API_KEY");
        driver = eyes.open(browser, "the-internet", "Login succeeded");
    }
...

Rather than storing the Selenium instance in the driver variable, we’re now storing it in a local browser variable and passing it into eyes.open — storing the WebDriver object that eyes.open returns in the driver variable instead.

This way the Eyes platform will be able to capture what our test is doing when we ask it to capture a screenshot. The Selenium actions in our test will not need to be modified.

Before calling eyes.open we provide the API key (which can be found on your Account Details page in Applitools). When calling eyes.open, we pass it the Selenium instance, the name of the app we’re testing (e.g., "the-internet"), and the name of the test (e.g., "Login succeeded").

Now we’re ready to add some visual checks to our test.

// filename: Login.java
...
    @Test
    public void succeeded() {
        driver.get("http://the-internet.herokuapp.com/login");
        eyes.checkWindow("Login");
        driver.findElement(By.id("username")).sendKeys("tomsmith");
        driver.findElement(By.id("password")).sendKeys("SuperSecretPassword!");
        driver.findElement(By.id("login")).submit();
        eyes.checkWindow("Logged In");
        Assert.assertTrue("success message should be present after logging in",
                driver.findElement(By.cssSelector(".flash.success")).isDisplayed());
        eyes.close();
    }
...

With eyes.checkWindow() we specify when in the test’s workflow we’d like Applitools Eyes to capture a screenshot (along with some description text). For this test we want to check the page before logging in, and then the screen just after logging in — so we call eyes.checkWindow() twice.

NOTE: These visual checks are effectively doing the same work as the pre-existing assertion (e.g., where we’re asking Selenium if a success notification is displayed and asserting on the Boolean result) — in addition to reviewing other visual aspects of the page. So once we verify that our test is working correctly we can remove this assertion and still be covered.

We end the test with eyes.close. You may feel the urge to place this in teardown, but in addition to closing the session with Eyes, it acts like an assertion. If Eyes finds a failure in the app (or if a baseline image approval is required), then eyes.close will throw an exception, failing the test. So it’s best suited to live in the test itself.

NOTE: An exception from eyes.close will include a URL to the Applitools Eyes job in your test output. The job will include screenshots from each test step and enable you to play back the keystrokes and mouse movements from your Selenium tests.

When an exception gets thrown by eyes.close, the Eyes session will close. But if an exception occurs before eyes.close can fire, the session will remain open. To handle that, we’ll need to add an additional command to our teardown.

// filename: Login.java
...
    @After
    public void teardown() {
        eyes.abortIfNotClosed();
        driver.quit();
    }
}

eyes.abortIfNotClosed() will make sure the Eyes session terminates properly regardless of what happens in the test.

Now when we run the test, it will execute locally while also performing visual checks in Applitools Eyes.

What About Other Browsers?

If we want to run our test with its newly added visual checks against other browsers and operating systems, it’s simple enough to add in Sauce Labs support.

NOTE: If you don’t already have a Sauce Labs account, sign up for a free trial account here.

First we’ll need to import the relevant classes.

// filename: Login.java
...
import org.openqa.selenium.Platform;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import java.net.URL;
...

We’ll then need to modify the test setup to load a Sauce browser instance (via Selenium Remote) instead of a local Firefox one.

// filename: Login.java
...
    @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "8");
        capabilities.setCapability("name", "Login succeeded");
        String sauceUrl = String.format(
                "http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                "YOUR_SAUCE_USERNAME",
                "YOUR_SAUCE_ACCESS_KEY");
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        driver = eyes.open(browser, "the-internet", "Login succeeded");
    }
...

We tell Sauce what we want in our test instance through DesiredCapabilities. The main things we want to specify are the browser, browser version, operating system (OS), and name of the test. You can see a full list of the available browser and OS combinations here.

In order to connect to Sauce, we need to provide an account username and access key. The access key can be found on your account page. These values get concatenated into a URL that points to Sauce’s on-demand Grid.
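Seen in isolation, that concatenation is just a String.format call. Here is a minimal sketch using placeholder credentials rather than real account values:

```java
// Sketch of the Sauce connection URL construction used in the test above.
// The username and access key below are placeholders, not real credentials.
public class SauceUrlExample {

    static String sauceUrl(String username, String accessKey) {
        // The credentials are embedded directly into the URL
        // of Sauce's on-demand Grid endpoint.
        return String.format(
                "http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                username, accessKey);
    }

    public static void main(String[] args) {
        System.out.println(sauceUrl("demo-user", "demo-access-key"));
    }
}
```

java.net.URL accepts this embedded user-info form, which is how the credentials reach RemoteWebDriver in the setup code.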

Once we have the DesiredCapabilities and concatenated URL, we create a Selenium Remote instance with them and store it in a local browser variable. Just like in our previous example, we feed browser to eyes.open and store the return object in the driver variable.

Now when we run this test, it will execute against Internet Explorer 8 on Windows XP. You can see the test while it’s running in your Sauce Labs account dashboard. And you can see the images captured on your Applitools account dashboard.

A Small Bit of Cleanup

Both Applitools and Sauce Labs require you to specify a test name. Up until now, we’ve been hard-coding a value. Let’s change it so it gets set automatically.

We can do this by leveraging a JUnit TestWatcher and a public variable.

// filename: Login.java
...
import org.junit.rules.TestRule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
...
public class Login {

    private WebDriver driver;
    private Eyes eyes;
    public String testName;

    @Rule
    public TestRule watcher = new TestWatcher() {
        protected void starting(Description description) {
            testName = description.getDisplayName();
        }
    };
...

Each time a test starts, the TestWatcher starting function will grab the display name of the test and store it in the testName variable.

Let’s clean up our setup to use this variable instead of a hard-coded value.

// filename: Login.java
...
    @Before
    public void setup() throws Exception {
        DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
        capabilities.setCapability("platform", Platform.XP);
        capabilities.setCapability("version", "8");
        capabilities.setCapability("name", testName);
        String sauceUrl = String.format(
                "http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                System.getenv("SAUCE_USERNAME"),
                System.getenv("SAUCE_ACCESS_KEY"));
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        driver = eyes.open(browser, "the-internet", testName);
    }
...

Now when we run our test, the name will automatically appear. This will come in handy with additional tests.

One More Thing

When a job fails in Applitools Eyes, it automatically returns a URL for it in the test output. It would be nice if we could also get the Sauce Labs job URL in the output. So let’s add it.

First, we’ll need a public variable to store the session ID of the Selenium job.

// filename: Login.java
...
public class Login {

    private WebDriver driver;
    private Eyes eyes;
    public String testName;
    public String sessionId;
...

Next we’ll add an additional function to TestWatcher that will trigger when there’s a failure. In it, we’ll display the Sauce job URL in standard output.

// filename: Login.java
...
    @Rule
    public TestRule watcher = new TestWatcher() {
        protected void starting(Description description) {
            testName = description.getDisplayName();
        }

        @Override
        protected void failed(Throwable e, Description description) {
            System.out.println(String.format("https://saucelabs.com/tests/%s", sessionId));
        }
    };
...

Lastly, we’ll grab the session ID from the Sauce browser instance just after it’s created.

// filename: Login.java
...
        WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
        sessionId = ((RemoteWebDriver) browser).getSessionId().toString();
...

Now when we run our test, if there’s a Selenium failure, a URL to the Sauce job will be returned in the test output.

Expected Outcome
  • Connect to Applitools Eyes
  • Load an instance of Selenium in Sauce Labs
  • Run the test, performing visual checks at specified points
  • Close the Applitools session
  • Close the Sauce Labs session
  • Return a URL to a failed job in either Applitools Eyes or Sauce Labs
Outro

Happy Testing!

 

About Dave Haeffner: Dave is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Categories: Companies

Can You Hack Into Google Chrome? It Could Net You an ‘Infinity Million’

uTest - Fri, 02/27/2015 - 20:55

Google once again is holding its annual hackathon for participants to search for holes and major flaws in its Chrome OS. Last year, the bounty was $2.71828 million in prizes.

However, this year, they’ve totally upped the ante — to the infinite degree. In fact, according to Entrepreneur, “Google has changed the nature of the prize money at stake…It now goes all the way up to $∞ million.”

Prizes in the hackathon range from $500 up to a new high of $50,000, and there’s no limit on the reward pool, but that could always be scrapped at the drop of a hat. Google says that the changes “are meant to lower the barrier of entry, and remove the incentive for hackers to sit on discovered bugs until the annual competition.”

This certainly sweetens the pot for hackers everywhere, although I could totally see that blank check of an “infinity million” being a very temporary experiment when competition gets out of hand (and Google’s bank accounts…low).

What would you do with an infinity million?

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

Categories: Companies

Nexus Reaches 50,000

Sonatype Blog - Fri, 02/27/2015 - 17:34
Active Nexus instances have grown 100% within the past 18 months. Just awesome. And, YOU, our user community made it happen. As of today, we surpassed the milestone of 50,000 active Nexus installs! Thank you.

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Code Coverage with NCover and UFT / QTP

NCover - Code Coverage for .NET Developers - Fri, 02/27/2015 - 16:00
Code Coverage with NCover and UFT / QTP

Automated testing is key to any productive QA team. We at NCover often work with teams using any number of test automation solutions to simplify their test processes. One of the solutions that we encounter frequently is HP’s Unified Functional Testing (UFT), the updated version of another HP product, Quick Test Professional (QTP). In fact, many teams still use the older QTP solution today, and the steps that we’ll share in this post work well with either UFT or QTP.

Background On Working with UFT / QTP

While UFT/QTP is helpful for increasing the efficiency of the testing process, these solutions can often interfere with coverage collection. The landscape of plugins and configurations possible with UFT/QTP can impose obstacles to the profiling methodology used by NCover, which performs in-memory profiling via the CLR profiler interface provided by .NET. The results are mixed when customers tackle this integration: certain scenarios work well, but frequently the level of integration required by both tool sets is difficult to orchestrate. The conflicts are varied and at times fleeting, making this an integration with the potential for significant maintenance.

The good news is that the introduction of pre-instrumentation in NCover 5 makes these conflicts a thing of the past. With pre-instrumentation you can insert coverage instrumentation into the assemblies of an application on disk, rather than in memory, to avoid the CLR Profiler orchestration. Teams using this approach have achieved positive results and improved performance.

How Pre-Instrumentation Works

Pre-instrumenting an assembly is a simple process and can be done at the command line or as part of a script with the following command:

c:> ncover instrument myassembly.dll

or

c:> ncover instrument myassembly.dll myapp.exe myassembly2.dll

or

c:> ncover instrument *.dll *.exe

Wildcards are available for this command, but it is important to be judicious about instrumentation. Avoid instrumenting third-party libraries and system assemblies, and focus on instrumenting only what is actionable. Selective pre-instrumentation becomes a form of pre-coverage filtering.

Pre-instrumentation works for a single assembly in a solution or for multiple assemblies simultaneously. Through this approach, selective pre-instrumentation becomes an additional coverage filter, allowing you to instrument only the assemblies for which coverage collection is needed. It is not necessary to pre-instrument an .exe file in order to collect coverage on a specific .dll file. Instrument only the .dll file and execute testing on the application as usual.

The execution of a pre-instrumented assembly collects coverage counters into a mapped file on disk while the application testing is underway. When the process exits, the binary coverage file remains on disk and can be imported via NCover Desktop, Code Central or Collector. These files carry the .ncprof extension and are raw coverage data. By making this file a raw binary counter file, the file remains valid for import even if your application crashes before you complete testing.

The coverage file follows a naming convention that includes the process name and a timestamp. All pre-instrumented assemblies in the same process store their coverage counters in the same data file. This mirrors the behavior of in-memory profiling, which places all of the coverage data in a single execution. Each .ncprof file represents a single execution of a pre-instrumented application.
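Because of that naming convention, the coverage files for a given process can be gathered programmatically before import. A small sketch (the directory layout and method names here are illustrative assumptions, not part of NCover):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class NcprofFinder {

    // Collect the coverage files written by one process, assuming the
    // "<processName>-<timestamp>.ncprof" naming convention described above.
    static List<Path> coverageFiles(Path dir, String processName) throws IOException {
        List<Path> matches = new ArrayList<>();
        try (DirectoryStream<Path> stream =
                     Files.newDirectoryStream(dir, processName + "-*.ncprof")) {
            for (Path p : stream) {
                matches.add(p);
            }
        }
        return matches;
    }
}
```

A build script could hand each matched file to `ncover import` in turn, or simply pass the `--file=*.ncprof` wildcard shown below.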

Importing a coverage file is accomplished simply with the following command:

c:> ncover import --project=Project1 --file=MyAppName-[timestamp].ncprof

or

c:> ncover import --project="My Coverage Project" --file=*.ncprof

Additional Resources

If you would like to learn more about instrumentation, we would encourage you to visit our posts on Pre-Instrumentation For Windows Store Apps and Integrating NCover Pre-Instrumentation Into Your Build Process.

The post Code Coverage with NCover and UFT / QTP appeared first on NCover.

Categories: Companies

uTest Takes: Best Software Testing Blogs From the Week of Feb. 23

uTest - Fri, 02/27/2015 - 15:30

From time to time, the uTest Blog highlights some of the recent blog entries that uTesters have crafted on their own personal blogs, along with some standouts from the outside testing world.

Here are some such notables from the week of Feb. 23, 2015:

Blogs This Week from uTesters & uTest Contributors
  • Aspects of a Good Session: Any testers out there presenting at an upcoming conference, or hoping to down the line? uTester Stephan Kämper penned this list of what he values in a “good” session, from humor and pain points, to not overdoing it on the slides, and telling good stories.
  • A Tester’s Portfolio: uTest contributor Stephen Janaway’s latest post from his own blog takes on the fact that while devs may have a robust portfolio, testers usually don’t — after all, they don’t have a final creation to show for their efforts. What does that mean? They have to create this portfolio themselves through arenas like blogging, sharing presentations online and speaking at testing conferences.
  • Less Eeyore, More Spock: I didn’t grow up a Star Trek fan (I was always a Star Wars guy, myself), but I do know Spock, and frequent contributor Michael Larsen’s view on why testers should aspire to be Spock is thought-provoking. Live long and prosper!
Others That Caught Our Eye
  • Letter to a Starting Tester: This recent post from Joel Montvelisky of PractiTest (whom uTest partnered with for the State of Testing survey) is in the form of a letter, writing back to the ‘1998 Joel’ just starting out. It’s a very cool read, and especially hammers home advice a lot of context-driven testers would be proud of — seeking out fellow testers within your own organization and always questioning/standing your ground. For the tester just starting out — read Joel’s advice!
  • These Chicks Were O.G. (Original Geeks): Why do men get all the love in programming? Statistically there may be more males in the industry, but that downplays all of the important contributions women have made to programming and testing. Nice post from the Testy Engineer that pays homage to Ada Lovelace and Grace Hopper — two female pioneers in the field.

Have ideas or blogs of your own that you haven’t yet shared with the world? Become a contributor to the uTest Blog today.


Categories: Companies

Testing alert/confirm/prompt and touch mocking, BugBuster v3.5.0 is out!

BugBuster - Fri, 02/27/2015 - 10:30

Today, we release BugBuster v3.5.0 with 2 new features:

  • Support for alert/prompt/confirm dialog boxes
    • It is now possible to decide how a scenario will react to alert, confirm or prompt dialog boxes. As an example, providing an input to a prompt is as easy as:
    // Click a button to trigger the prompt, and then enter some text
    wnd.click('[name="showDialog"]', { promptDialog: 'This text will be provided to the prompt dialog!' });
  • Support for touch event mocking
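Under the hood, handling these dialogs in an automation tool typically amounts to stubbing the window dialog functions before the page code invokes them. A minimal sketch of the general technique (the function and option names are illustrative, not BugBuster’s actual internals):

```javascript
// Replace a window's dialog functions with stubs so a scenario can run
// unattended. All names here are illustrative, not BugBuster internals.
function stubDialogs(win, options) {
  const seen = [];                        // record of every dialog triggered
  win.alert = (msg) => {
    seen.push({ type: 'alert', msg });    // alerts are simply swallowed
  };
  win.confirm = (msg) => {
    seen.push({ type: 'confirm', msg });
    return Boolean(options.accept);       // simulate clicking OK or Cancel
  };
  win.prompt = (msg) => {
    seen.push({ type: 'prompt', msg });
    // Supply the scripted answer, mirroring the promptDialog option above
    return options.promptDialog != null ? options.promptDialog : null;
  };
  return seen;
}
```

With the stubs in place, clicking the button that triggers the prompt proceeds without blocking, and the recorded entries can be asserted on afterwards.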

Let us know what you think about it!

The post Testing alert/confirm/prompt and touch mocking, BugBuster v3.5.0 is out! appeared first on BugBuster.

Categories: Companies

Big Sale on e-Learning Courses in Software Testing!

For a limited time, I am offering discounted pricing on ALL my e-learning courses, including ISTQB Foundation Level and Advanced Level certification courses!

Just go to http://www.mysoftwaretesting.com to see the savings on each course.

Remember, as always, you get:
  • LIFETIME access to the training
  • In-depth course notes
  • Exams included on the certification courses
  • Direct access to me, Randy Rice, to ask any questions during the course
Hurry! This sale ends at Midnight CST on Friday, March 20th, 2015.
Questions? Contact me by e-mail or call 405-691-8075.
Categories: Blogs

uTest Platform Updates Focus on Bug Reports

uTest - Thu, 02/26/2015 - 18:47
[Testers] require the effective integration of technologies to simplify their workflow and boost efficiency.

- Anne M. Mulcahy

uTesters on paid projects: We’re happy to announce some new uTest Platform functionality, with this week’s release, that enhances the bug reporting experience.

Save Your Bug Reports

It’s now easier than ever to create and save your bug templates. You may remember that in the previous release, we added a field that allowed you to configure a custom bug report template. We’ve simplified this process by allowing you to save the bug report you entered as your bug report template by adding a “Save as Template” button in the lower right-hand corner of the bug report form.


We hope that this will enable you to create bug report templates faster and more efficiently.

Custom Bug Report Fields

Customers will often require that testers provide specific details in their bug reports. Until now, testers had to refer to the scope of the cycle to remember which information to include. Going forward, however, a customer or PM can add the following template fields directly in a bug report form:

  • Device Make and Model (their app is on phones, tablets, set top boxes and game consoles)
  • Browsers with versions (Some customers need “exact browser build version”)
  • OS (Service pack version)
  • URL where issue occurs
  • Does issue occur on production (for staging cycles)
  • Login details
  • Does issue occur for multiple login providers
  • Number of times reproduced
  • Other pages with the same issue
  • Reproducibility (x of y times)

Additionally, customers and project managers will have the ability to create other custom inputs to ensure flexibility across all cycle types.


We hope that this will help streamline the bug reporting process for our testers, resulting in higher-quality reports for our customers.

If you like what you see, feel free to drop a note in the Forums to share your ideas on these and other recent platform updates!

Categories: Companies

How Manual Testers Can Break into Automation without Programming Skills – Free Special Webinar

Ranorex - Thu, 02/26/2015 - 17:06
We are pleased to inform you about a special webinar presented by Ranorex Professional Jim Trentadue entitled "How Manual Testers Can Break into Automation without Programming Skills".



Adoption of test automation has not happened as quickly as organizations need. As more companies move toward implementing agile development as their software development lifecycle, more features are being released more quickly. This leaves less time for full regression testing of the system, which should nonetheless still be done. Manual testers need to transform into test automation testers as well.

Many manual testers believe they have to learn a development language, in addition to the functionality of a specific tool, to be effective. Add to that the in-depth or SME knowledge one must have about the system under test, along with the development and management support required, and it may not seem at all clear where to start.
Jim will cover the following in this session:
  • The challenges faced by many organizations beginning the test automation journey
  • Early stages of adoption and adding to the value of work handled by a manual test team with little programming knowledge.
Find out how traditional manual testers can break into test automation without having in-depth scripting or programming skills. Learn how to make this jump as a manual tester and focus on the right areas first, e.g. automation test structure, object recognition, and results interpretation.

Jim Trentadue has more than fifteen years of experience as a coordinator/manager in the software testing field. As a speaker, Jim has presented at numerous industry conferences, chapter meetings, and at the University of South Florida's software testing class.

Register for the free webinar "How Manual Testers Can Break into Automation without Programming Skills" to be held on March 4, 2015, at 11:00 AM EST.
Categories: Companies

Cross Browser Testing using CodedUI Test

Testing tools Blog - Mayank Srivastava - Thu, 02/26/2015 - 16:18
Before starting this topic, I would like to make clear that Visual Studio 2013 currently uses the Selenium WebDriver component to achieve cross-browser testing. To integrate the WebDriver component with Visual Studio, follow the steps below: start Visual Studio, go to the Tools menu, and click Extensions and Updates… System […]
Categories: Blogs

Now on Amazon – Remaining Relevant and Employable

The Social Tester - Thu, 02/26/2015 - 16:00

I’m really proud to announce that I launched my Remaining Relevant book on Amazon this week. This release is my new non-testing edition (so it’s also suitable for those not working in IT or testing). It contains loads of advice...
Read more

The post Now on Amazon – Remaining Relevant and Employable appeared first on The Social Tester.

Categories: Blogs

Omni-Channel Monitoring in Real Life

In September, Macy’s announced that it will invest $1 billion in its omni-channel strategy. When spending that much money, the question that immediately comes up is: how do you measure success? Key questions such as “Are conversion rates increasing as planned?” and “How good is the user experience for each channel?” need answers. Since my first […]

The post Omni-Channel Monitoring in Real Life appeared first on Dynatrace APM Blog.

Categories: Companies

Introducing Ranorex 5.3

Ranorex - Thu, 02/26/2015 - 11:20
We are pleased to announce that Ranorex 5.3 is now available for download!
This latest release considerably extends the object recognition capabilities of Ranorex for 3rd party controls, introducing a brand new native WPF plug-in.

In addition, both iOS instrumentation and deployment can now be set up on Windows machines, which means more independence from app development. The brand new iOS service app also allows you to start and stop apps under test. Finally, Ranorex 5.3 introduces a guided start for recording, increasing the simplicity and robustness of your test automation.
For an overview of all the new features, check out the release notes.



Upgrade for free with your valid subscription. (You can find a direct download link for the latest version of Ranorex on the Ranorex Studio start page.)

iOS Instrumentation on Windows Machines

Ranorex 5.3 introduces a whole new process for instrumenting your iOS app directly on the Windows machine you are using for testing. OS X and Xcode are no longer required to instrument your apps.

Native WPF Plug-In

With Ranorex 5.3, support for many 3rd-party controls has been added – it's handled by the brand new native WPF plug-in. This takes object recognition to the next level.

Guided Recording

Our brand new version of Ranorex provides a guided start for first-time users, allowing a quick start in test automation by choosing the technology your test is based on and automatically preparing your system under test. Now it's even easier to start a robust test automation project.
Categories: Companies

100K Celebration Podcast

As a part of the Jenkins 100K celebration, Dean Yu, Andrew Bayer, R. Tyler Croy, Chris Orr, and I got together late Tuesday evening to go over the history of the project, how big the community was back then, how we grew, where we are now, and maybe a bit about the future.

We got carried away and the recording became longer than we had all planned. But it has some nice sound bites, backstage stories, and stuff even some of us didn't know about! I hope you'll enjoy it. The MP3 file is here, or you can use your favorite podcast app and subscribe to http://jenkins-ci.org/podcast.

Categories: Open Source

How to get the most out of Given-When-Then

Gojko Adzic - Wed, 02/25/2015 - 18:36

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

Behaviour-driven development has become increasingly popular over the last few years, and with it the Given-When-Then format for examples. In many ways, Given-When-Then seems to be the de facto standard for expressing functional checks using examples. Introduced by JBehave in 2003, this structure was intended to support conversations between teams and business stakeholders, but also to lead those discussions towards a conclusion that would be easy to automate as a test.

Given-When-Then statements are great because they are easy to capture on whiteboards and flipcharts, but also easy to transfer to electronic documents, including plain text files and wiki pages. In addition, there are automation tools for all popular application platforms today that support tests specified as Given-When-Then.

On the other hand, Given-When-Then is a very sharp tool and unless handled properly, it can hurt badly. Without understanding the true purpose of that way of capturing expectations, many teams out there just create tests that are too long, too difficult to maintain, and almost impossible to understand. Here is a typical example:

    Scenario: Payroll salary calculations

    Given the admin page is open
    When the user types John into the 'employee name'
    and the user types 30000 into the 'salary'
    and the user clicks 'Add'
    Then the page reloads
    And the user types Mike into the 'employee name'
    and the user types 40000 into the 'salary'
    and the user clicks 'Add'
    When the user selects 'Payslips'
    And the user selects employee number 1
    Then the user clicks on 'View'
    When the user selects 'Info'
    Then the 'salary' shows 29000
    Then the user clicks 'Edit'
    and the user types 40000 into the 'salary'
    When the user clicks on 'View'
    And the 'salary' shows 31000

This example might have been clear to the person who first wrote it, but its purpose is unclear – what is it really testing? Is the salary a parameter of the test, or is it an expected outcome? If one of the later steps of this scenario fails, it will be very difficult to understand the exact cause of the problem.

Spoken language is ambiguous, and it’s perfectly OK to say ‘Given an employee has a salary …, When the tax deduction is…, then the employee gets a payslip and the payslip shows …’. It’s also OK to say ‘When an employee has a salary …, Given the tax deduction is …’ or ‘Given an employee … and the tax deduction … then the payslip …’. All those combinations mean the same thing, and they will be easily understood within the wider context.

But there is only one right way to describe those conditions with Given-When-Then if you want to get the most out of it from the perspective of long-term test maintenance.

The sequence is important. ‘Given’ comes before ‘When’, and ‘When’ comes before ‘Then’. Those clauses should not be mixed. All parameters should be specified with ‘Given’ clauses, the action under test should be specified with the ‘When’ clause, and all expected outcomes should be listed with ‘Then’ clauses. Each scenario should ideally have only one ‘When’ clause that clearly points to the purpose of the test.
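Rewritten along those lines, the payroll example above might look something like this sketch. (Only the 30000 salary and the 29000 payslip figure come from the original scenario; the tax deduction of 1000 is an assumption added to make the example self-contained.)

    Scenario: Payslip shows net salary after tax deduction

    Given an employee John was added with a salary of 30000
    And a tax deduction of 1000 was configured
    When John's payslip is generated
    Then the payslip 'salary' will show 29000

There is a single ‘When’ clause naming the action under test, the preconditions are in the past tense, and the expectation is in the future tense – so a failure immediately points to either a broken precondition, a broken action, or a wrong outcome.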

Given-When-Then is not just an automation-friendly way of describing expectations, it’s a structural pattern for designing clear specifications. It’s been around for quite a while under different names. When use cases were popular, it was known as Preconditions-Trigger-Postconditions. In unit testing, it’s known as Arrange-Act-Assert.

Key benefits

Using Given-When-Then in sequence is a useful reminder of several important test design ideas. It suggests that pre-conditions and post-conditions need to be identified and separated. It suggests that the purpose of the test should be clearly communicated, and that each scenario should check one and only one thing. When there is only one action under test, people are forced to look beyond the mechanics of test execution and really identify a clear purpose.

When used correctly, Given-When-Then helps teams design specifications and checks that are easy to understand and maintain. As tests will be focused on one particular action, they will be less brittle and easier to diagnose and troubleshoot. When the parameters and expectations are clearly separated, it’s easier to evaluate if we need to add more examples, and discover missing cases.

How to make it work

A good trick that prevents most accidental misuse of Given-When-Then is to use the past tense for ‘Given’ clauses, the present tense for ‘When’ and the future tense for ‘Then’. This makes it clear that ‘Given’ statements are preconditions and parameters, and that ‘Then’ statements are postconditions and expectations.

Make ‘Given’ and ‘Then’ passive – they should describe values rather than actions. Make sure ‘When’ is active – it should describe the action under test.

Try having only one ‘When’ statement for each scenario.

Categories: Blogs

Eating the dog food

Sonar - Wed, 02/25/2015 - 17:36

The SonarQube platform includes an increasingly powerful lineup of tools to manage technical debt. So why don’t you ever see SonarSourcers using Nemo, the official public instance, to manage the debt in the SonarQube code? Because there’s another, bleeding-edge instance where we don’t just manage our own technical debt, we also test our code changes, as soon as possible after they’re made.

Dory (do geeks love a naming convention, or what?) is where we check our code each morning, and mid-morning, and so on, and deal with new issues. In doing so, each one of us gives the UI – and any recent changes to it – a thorough workout. That’s because Dory doesn’t run the newest released version, but the newest milestone build. That means that each algorithm change and UI tweak is closely scrutinized before it gets to you.

The result is that we often iterate many times on any change to get it right. For instance, SonarQube 5.0 introduced a new Issues page with a powerful search mechanism and keyboard shortcuts for issue management. Please don’t think that it sprang fully formed from the head of our UI designer, Stas Vilchik. It’s the result of several months of design, iteration, and Continuous Delivery. First came the bare list of issues, then keyboard shortcuts and inter-issue navigation, then the wrangling over the details. Because we were each using the page on a daily basis, every new change got plenty of attention and lots of feedback. Once we all agreed that the page was both fully functional and highly usable, we moved on.

The same thing happens with new rules. Recently we implemented a new rule in the Java plugin based on FindBugs, "Serializable" classes should have a version id. The changes were made, tested, and approved. Overnight the latest snapshot of the plugin was deployed to Dory, and the next morning the issues page was lit up like a Christmas tree.

We had expected a few new issues, but nothing like the 300+ we got, and we (the Java plugin team and I) weren’t the only ones to notice. We got “feedback” from several folks on the team. So then the investigation began: which issues shouldn’t be there? Well, technically they all belonged: every class that was flagged either implemented Serializable or had a (grand)parent that did. (Subclasses of Serializable classes are Serializable too, so for instance every Exception is Serializable.) Okay, so why didn’t the FindBugs equivalent flag all those classes? Ah, because it has some exclusions.
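As a minimal illustration of why the issue count exploded (this is a sketch, not code from the SonarQube codebase; the class names are invented): in Java, Serializable is inherited, so a subclass is serializable – and gets flagged by the rule – even though it never mentions the interface itself.

```java
import java.io.Serializable;

// A base class that opts into serialization and declares a version id,
// satisfying the rule "'Serializable' classes should have a version id".
class BaseEvent implements Serializable {
    private static final long serialVersionUID = 1L;
}

// This subclass never mentions Serializable, yet it is Serializable by
// inheritance. Without its own serialVersionUID, the rule flags it too.
class ClickEvent extends BaseEvent {
}

public class SerializableDemo {
    public static void main(String[] args) {
        // Every ClickEvent instance is Serializable via its parent.
        System.out.println(new ClickEvent() instanceof Serializable); // prints "true"
    }
}
```

The same mechanism is why every Exception in a codebase is Serializable: Throwable implements the interface, and every subclass inherits it.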

Next came the debate: should we have exclusions too, and if so which ones? In the end, we slightly expanded the FindBugs exclusion list and re-ran the analyses. A few issues remained, and they were all legit. Perfect. Time to move on.

When I first came to SonarSource and was told that the internal SonarQube instance was named Dory, I thought I got it: Nemo and Dory. Haha. Cute. But the more I work on Dory, the more the reality sinks in. We rely on Dory on a daily basis; she’s our guide on the journey. But since our path isn’t necessarily a straight line, it’s a blessing for all of us that she can forget the bad decisions and only retain the good.

Categories: Open Source

SharePoint Lessons Learned from SPTechCon Austin

Yee-haw! Leading up to their first show in Texas, this was the message that adorned SPTechCon’s website in large letters made of rope. At first, this reference to a caricature of Texas culture made me cringe, being a native Texan and all. However, to SPTechCon’s credit, there is some significance to the phrase being used […]

The post SharePoint Lessons Learned from SPTechCon Austin appeared first on Dynatrace APM Blog.

Categories: Companies

Q&A: ‘Let’s Test’ Leader Talks Global Reach of Context-Driven Testing, Previews Conference

uTest - Wed, 02/25/2015 - 15:30

Johan Jonasson is one of the organizers of the Let’s Test conferences, which celebrate the context-driven school of thought. In addition to co-founding the testing consulting firm House of Test, Johan is a contributing delegate at the SWET peer conferences and has spoken at several national and international software testing conferences. He is also an active member of the Association for Software Testing (AST). Follow him on Twitter @johanjonasson.

Let’s Test 2015 is slated for May 25-27, 2015, in Stockholm, Sweden, and uTest has secured an exclusive 10% discount off new registrations. Email testers@utest.com for this special discount code, available only to registered uTest members.

In this interview, we talk with Johan on the global, inclusive context-driven testing community, and get a sense of what testers can expect at the 2015 edition of Let’s Test.

uTest: You have a lot of crossover with the CAST conference in the US — both are context-driven testing conferences featuring content by testers, for testers. There are also a lot of sessions driven by folks who were at CAST. What does it mean to you to have a fervent following that travels the world for these shows?

Johan Jonasson: The fact that many speakers and attendees are willing to travel lengthy distances to both events is, I think, a great testament to the fact that context-driven testing is something that excites a lot of people, and that there is a truly global and inclusive community eager to meet and share experiences. Last year, we had attendees from literally all parts of the world come to Let’s Test.

At the same time, there is a fairly large number of local testing experts on this year’s program too, which I think shows how much context-driven testing has grown in Europe since the first conference in 2012. The context-driven approach to testing is definitely gaining ground, even in some European markets that are traditionally considered ‘factory testing’ school strongholds, which is great.

uTest: Has there been a common theme in terms of tester pain points coming out of the conferences from the last couple of years?

JJ: There seem to be two main challenges that keep coming up. One is how to package and communicate the qualitative information that our testing reveals in a way that can be readily understood by our stakeholders, who might be used to making release decisions based on traditional and flawed quantitative metrics like bug counts and pass/fail ratios. In other words, how can you perform good test reporting that stands up to scrutiny?

The other one is how to convince managers and companies buying testing services to move away from wasteful and dehumanizing testing practices sold by the lowest bidder, and adopt approaches that focus on the value of information, and the demonstrable skill of the professionals delivering that value.

uTest: Speaking of pain points, ISO 29119, and its attempt to standardize testing, has been a pain for many in the context-driven community. It’s also the subject of one of the sessions this year at Let’s Test. What are your own views on 29119?

JJ: I actively oppose the work being done on ISO 29119. I think it is flawed thinking in the first place to even try to standardize adaptive, intellectual and creative work like testing.

Assuming for the sake of argument that it were a good idea: in order to standardize testing, we would have to agree on at least the fundamental aspects of testing, and for the longest time, the global testing community has never been in agreement about those. Don’t get me wrong, I think that’s a good thing. Consensus is highly overrated. Argument and disagreement are crucial if we are to move forward. That is another reason not to try to create a one-size-fits-all standard in a field that is still highly innovative and developing.

Those are just a couple of issues I have with ISO 29119, and that’s before we even start talking about the archaic and long-since-discredited models of testing that this standard has presented so far, or the motivations behind the standard.

uTest: Which were some of the most impactful or memorable sessions for you personally from the 2014 edition of Let’s Test?

JJ: I very much appreciate all the experience-based talks as well as the inspirational or innovation-focused ones, and it wouldn’t be Let’s Test without them, but my favorite sessions are the experiential workshops and the learning that comes from doing and experiencing situations firsthand in those simulated environments.

Because of that, my favorite session last year was Steve Smith’s experiential keynote session where the entire conference participated in the keynote. So it was a 150+ person simulation which ended with fascinating presentations from the participants, and observations from Steve Smith. I don’t think either Steve or us organizers really knew if it would work to have a simulation that big before we tried it, but we’re never content with just doing what we’ve been doing the year before. We try to constantly raise the bar for both the content and format of Let’s Test.

uTest: You have had Let’s Test in Europe for several years now, and piloted Let’s Test Oz in Australia last year. Are there other areas where you want to bring context-driven testing or see it emerge more?

JJ: Absolutely! We’ve done smaller Let’s Test events (called Tasting Let’s Test) in both the Netherlands and South Africa during 2014, and we’re planning another Let’s Test Oz, and trying to find a good date that would fit well with other things going on in the near future.

The next big thing though is an upcoming three-day Let’s Test event in South Africa in November 2015 that we’ll be releasing more information on in the coming weeks at our site and on Twitter @Letstest_conf.

uTest: Could you give us a preview of what may be different at Let’s Test this year?

JJ: We’ve really tried to turn up the number of workshops for Let’s Test 2015. Like I said previously, there’s always a need for great talks and experiences at Let’s Test. However, given the residential, almost retreat-like format of the Let’s Test conference, we felt that what really gets people talking are hands-on sessions where we spend more than an hour on a certain topic.

So for Let’s Test 2015, we have an unprecedented 26 workshops and tutorials of different sizes lined up during the three days of the conference. Several of these are three-plus hours in length to make sure there’s enough to listen to, experience, debrief and discuss for everyone participating.

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, testing events, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

Categories: Companies
