
Feed aggregator

Redesigning BugBuster.com

BugBuster - Fri, 02/06/2015 - 17:14

“I looked at your site but I’m still not exactly sure what you do.”

Unfortunately, we were hearing this sort of comment far too often from our users. Our site wasn’t doing a very good job of explaining what we do here at BugBuster.

We decided to fix it.

Over the past few weeks, we have been working on a complete redesign of BugBuster.com. Our goal was to make it clearer what BugBuster is and what we have to offer.

What is BugBuster?

First, we had to answer this ourselves in the simplest way possible. Trying to tell someone that we are a “cloud-based end-to-end functional testing and site monitoring solution for web applications and ecommerce sites” is a tough pill to swallow. We needed to break it down into simpler terms.

  1. Build tests
  2. Manage test plans
  3. Monitor live sites

These three core functionalities cover most everything we do here at BugBuster. They became the basis for the content of the new site.

Crafting a story

We made the “build – manage – monitor” mantra the basis of the new BugBuster story. Our goal here at BugBuster is to make web testing accessible to everyone, so we cut back on technical jargon to keep the message clear.

Lightening the tone of the content also improved the readability of our site. We used a tool called Hemingway to reduce the clutter in our messaging. As an example, our previous homepage read at an 11th-grade level, while our new homepage reads at a 7th-grade level. Hemingway was an invaluable tool for improving our content.

Watching the numbers

Before starting on the visual design, we went back and looked at our analytics to see what we could learn from our previous site. We saw that certain pages had unusually high bounce rates, the highest being the Pricing page. A high bounce rate can mean lots of things, but this information helped us to decide where to spend the most time improving the site.

We also looked at our browser statistics to determine which browsers we needed to support for our redesign. Here is the breakdown of our users’ browser usage for Jan 2015:

  • Chrome – 73.26%
  • Firefox – 13.13%
  • Safari – 6.86%
  • IE 11 – 2.69%
  • IE 9 – 0.74%
  • IE 10 – 0.68%
  • IE 8 – 0.68%

We decided to support all current browsers minus one version. Even so, we made an effort to ensure the site at least renders well in IE9.

And the new site is fully responsive, despite the fact that mobile + tablet traffic only made up ~5% of our total traffic last month.

Designing a better experience

Finally, we had to create an experience fitting of the new content. We took a cleaner, more minimalist approach to the design than before. Our old site had content hidden in carousels and tabbed menus that was almost never seen by users.

The new site tells a simpler story of BugBuster. We reduced the amount of content a user sees at any given time. The content is easier to understand, and by telling a shorter but clearer story across fewer pages, we hope to reduce our users’ confusion about what exactly we have to offer.

We will write a follow-up to our redesign in a few weeks and provide some insight into how our users responded, based on both analytics and user feedback.

Shoot us a tweet and let us know what you think about our new site.

The post Redesigning BugBuster.com appeared first on BugBuster.

Categories: Companies

uTest Announces Behavior-Driven Testing Webinar with Anand Bagmar

uTest - Fri, 02/06/2015 - 16:00

uTest is excited to announce another live webinar opportunity. Registration is now open for the webinar Build the “right” regression suite using Behavior-Driven Testing (BDT), with Anand Bagmar. In this webinar, participants can learn:

  • How to build a good and valuable regression suite for the product under test
  • Different styles of identifying / writing scenarios that will validate the expected business functionality
  • How automating tests identified using the BDT approach also automates your business functionality
  • Advantages of identifying regression tests using the BDT approach

Anand is a familiar guest blogger on the uTest Blog. His recent post Selenium: 10 Years Later and Still Going Strong takes a look at the ecosystem that Selenium has nurtured over the past decade.

Webinar Details

  • What: A live webinar presented by Anand Bagmar called, “Build the ‘right’ regression suite using Behavior-Driven Testing (BDT)”
  • When: Wednesday, February 18, from 2-3 p.m. ET
  • How: Register now. Seats are limited!

About Anand Bagmar

Anand is a hands-on and results-oriented software quality evangelist with 17 years in the IT field. Passionate about shipping quality products, Anand specializes in building automated testing tools, infrastructure, and frameworks. He writes testing-related blogs and has built open-source tools related to software testing: Web Analytics Automation Testing Framework (WAAT), TaaS (for automating integration testing in disparate systems), and Test Trend Analyzer (TTA). Anand is the lead organizer of vodQA, the popular testing conference in India. Follow him on Twitter or read his Essence of Testing blog.

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

Categories: Companies

ThoughtWorks London Opening Party

thekua.com@work - Fri, 02/06/2015 - 12:05

Last night ThoughtWorks had a welcoming party to celebrate the opening of our new London office, located in the heart of Soho.

I didn’t take as many photos as I would have liked, but it was a fun event with a couple of musicians, Emily Lee and Scott McMahon, and an amazing spread of food prepared by Ed Baines (chef of Randall and Aubin).

Categories: Blogs

Clean Tests: Building Test Types

Jimmy Bogard - Thu, 02/05/2015 - 23:38

Posts in this series:

In the primer, I described two types of tests I generally run into in my systems:

  • Arrange/act/assert fully encapsulated in a single method
  • Arrange/act in one place, assertions in each method

Effectively, I build tests in a procedural mode or in a context/specification mode. In xUnit Test Patterns language, I’m building execution plans around:

  • Testcase Class per Class
  • Testcase Class per Fixture

There’s another pattern listed there, “Testcase Class per Feature”, but I’ve found it to be a version of one of these two – AAA in a single method, or split out.

Most test frameworks have some extension point for you to be able to accomplish both of these patterns. Unfortunately, none of them are very flexible. In my tests, I want to have complete control over lifecycle, as my tests become more complicated to set up. My ideal would be to author tests as I do everything else:

  • Method arguments for variation in a single isolated test
  • Constructor arguments for delivering fixtures for multiple tests

Since I’m using Fixie, I can teach Fixie how to recognize these two types of tests and build individual test plans for both kinds. We could be silly and cheat with things like attributes, but I think we can be smarter, right? Looking at our two test types, we have two kinds of test classes:

  • No-arg constructor, methods have arguments for context/fixtures
  • Constructor with arguments, methods have no arguments (shared fixture)

With Fixie, I can easily distinguish between the two kinds of tests. I could do other things, like key off of namespaces (put all fast tests in one folder, slow tests in another) or separate by assemblies; it’s all up to me.

But what should supply my fixtures? With most other test frameworks, the fixtures need to be plain – a class with a no-arg constructor or similar. I don’t want that. I want to use a library that lets me control and build out my fixtures in a deterministic, flexible manner.

Enter AutoFixture!

I’ll teach Fixie how to run my tests, and I’ll teach AutoFixture how to build out those constructor arguments. AutoFixture is my Arrange, my code is the Act, and for assertions I’ll use Shouldly (I don’t care as much about this one; anything should-based is enough).

First, let’s look at the simple kinds of tests – ones where the test is completely encapsulated in a single method.

Testcase Class per Class

For Testcase Class per Class, my Fixie convention is:

public class TestcaseClassPerClassConvention : Convention
{
    public TestcaseClassPerClassConvention()
    {
        Classes
            .NameEndsWith("Tests")
            .Where(t => 
                t.GetConstructors()
                .All(ci => ci.GetParameters().Length == 0)
            );

        Methods.Where(mi => mi.IsPublic && mi.IsVoid());

        Parameters.Add(FillFromFixture);
    }

    private IEnumerable<object[]> FillFromFixture(MethodInfo method)
    {
        var fixture = new Fixture();

        yield return GetParameterData(method.GetParameters(), fixture);
    }

    private object[] GetParameterData(ParameterInfo[] parameters, Fixture fixture)
    {
        return parameters
            .Select(p => new SpecimenContext(fixture).Resolve(p.ParameterType))
            .ToArray();
    }
}

First, I need to tell Fixie what to look for in terms of test classes. I could have gone a lot of routes here, like existing test frameworks do: things with a class attribute, things with methods that have an attribute, a base class, or a namespace. To keep things simple, I look for classes whose names end with “Tests”. Next, because I want to target a workflow where AAA is in a single method, I make sure that the class has only no-arg constructors.

For test methods, that’s a bit easy – I just want public void methods. No attributes.

Finally, I want to fill method parameters from AutoFixture, so I tell Fixie to add a parameter source that resolves each parameter value, one at a time, from AutoFixture.

For now, I’ll leave the AutoFixture configuration alone, but we’ll soon be layering on more behaviors as we go.

With this in place, my test becomes:

public class CalculatorTests
{
    public void ShouldAdd(Calculator calculator)
    {
        calculator.Add(2, 3).ShouldBe(5);
    }

    public void ShouldSubtract(Calculator calculator)
    {
        calculator.Subtract(5, 3).ShouldBe(2);
    }
}
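
For completeness, a minimal Calculator that these tests could run against might look like the sketch below (it isn’t part of the original sample code; it’s just enough for the tests above to compile and pass):

// Hypothetical system under test for the CalculatorTests above.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }

    public int Subtract(int a, int b)
    {
        return a - b;
    }
}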

So far so good! Now let’s look at our Testcase Class per Fixture example.

Testcase Class per Fixture

When we want a single Arrange/Act but multiple assertions, our test lifecycle changes. We no longer want to re-run the Arrange/Act for every assertion; we want it to run once, with each Assert working off the results of that single Act. This means we want the test class instantiated and run only once, with the asserts happening afterward. This is different from parameterized test methods, where we want the fixture recreated for every test.

Our Fixie configuration changes slightly:

public class TestcaseClassPerFixtureConvention : Convention
{
    public TestcaseClassPerFixtureConvention()
    {
        Classes
            .NameEndsWith("Tests")
            .Where(t => 
                t.GetConstructors().Count() == 1
                && t.GetConstructors().Count(ci => ci.GetParameters().Length > 0) == 1
            );

        Methods.Where(mi => mi.IsPublic && mi.IsVoid());

        ClassExecution
            .CreateInstancePerClass()
            .UsingFactory(CreateFromFixture);
    }

    private object CreateFromFixture(Type type)
    {
        var fixture = new Fixture();

        return new SpecimenContext(fixture).Resolve(type);
    }
}

With Fixie, I can create as many configurations as I like for different kinds of tests. Fixie layers them on each other, and I can customize styles appropriately. If I’m migrating from an existing testing platform, I could even configure Fixie to run the existing attribute-based tests!
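
As a rough illustration of that last point, a migration convention might look something like the sketch below. This is not from the original post; it assumes NUnit-style [TestFixture]/[Test] attributes and reuses only the Classes/Methods filtering already shown above (usings omitted, as in the other snippets):

// Hypothetical convention for running legacy attribute-based (NUnit-style) tests under Fixie.
public class LegacyAttributeConvention : Convention
{
    public LegacyAttributeConvention()
    {
        // Test classes: anything marked with [TestFixture]
        Classes
            .Where(t => t.GetCustomAttributes(typeof(TestFixtureAttribute), true).Any());

        // Test methods: public void methods marked with [Test]
        Methods
            .Where(mi => mi.IsPublic
                && mi.IsVoid()
                && mi.GetCustomAttributes(typeof(TestAttribute), true).Any());
    }
}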

In the TestcaseClassPerFixtureConvention above, I’m looking for test classes ending with “Tests” that also have a single constructor, and that constructor takes arguments. I don’t know what to do with classes that have multiple constructors, so I’ll just ignore those for now.

The test methods I’m looking for are the same – except now I won’t configure any method parameters. It would be weird to combine constructor arguments with method parameters for this style of test, so I’m ignoring that case for now.

Finally, I configure test execution to create a single instance per class, using AutoFixture as my test case factory. This is the piece that starts to separate Fixie from other frameworks – you can completely customize how your tests are created and executed. Opinionated frameworks are great – but if I disagree, I’m left to migrate tests. Not a fun proposition.

A test that uses this convention becomes:

public class InvoiceApprovalTests
{
    private readonly Invoice _invoice;

    public InvoiceApprovalTests(Invoice invoice)
    {
        _invoice = invoice;

        _invoice.Approve();
    }

    public void ShouldMarkInvoiceApproved()
    {
        _invoice.IsApproved.ShouldBe(true);
    }

    public void ShouldMarkInvoiceLocked()
    {
        _invoice.IsLocked.ShouldBe(true);
    }
}

The constructor is invoked by AutoFixture, filling in the parameters as needed. The Act, inside the constructor, is executed once. Finally, I make individual assertions on the result of the Act.

With this style, I can build up a context and incrementally add behavior via assertions. This is a fantastic approach for lightweight BDD, since I’m focusing on behaviors and adding them one at a time.
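
Likewise, a minimal Invoice that would satisfy these assertions might look like the following sketch (again, not part of the original sample code):

// Hypothetical system under test for the InvoiceApprovalTests above.
public class Invoice
{
    public bool IsApproved { get; private set; }
    public bool IsLocked { get; private set; }

    // Approving an invoice also locks it against further changes.
    public void Approve()
    {
        IsApproved = true;
        IsLocked = true;
    }
}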

Next up, we’ll look at going one step further and integrating the database into our tests and using Fixie to wrap interesting behaviors around them.


Categories: Blogs

QASymphony Adds Agile Features to qTest eXplorer

Software Testing Magazine - Thu, 02/05/2015 - 22:08
QASymphony has announced new functionality for qTest eXplorer, bringing their leading testing solution to the enterprise with web testing, mobile app testing, and cloud storage integration. This is a huge step for the Atlanta-based company, which also just closed a $2.5MM Series A. In addition to qTest eXplorer’s cloud storage solution, the revolutionary recording and annotation tool now supports web testing. This makes qTest eXplorer available to users on all operating systems, from Windows to Mac to Linux. Mobile app testing functionality for both Android and iOS is the other exciting new addition in ...
Categories: Communities

Sauce Labs Gets $15 Million in New Funding

Software Testing Magazine - Thu, 02/05/2015 - 21:57
Sauce Labs, a cloud-based web and mobile application testing platform, today announced it has secured an additional $15 million in Series D expansion funding from current investor, Toba Capital. The news comes as Sauce Labs experiences overwhelming growth with over 200 million total tests run on its platform, and 145 percent revenue growth in 2014. The new round of funding will extend both the company’s core web application testing offering and its mobile testing platform, as well as provide continued backing of Appium, the premier open source mobile automation tool developed ...
Categories: Communities

21 of the Most Popular Test Automation Blogs

uTest - Thu, 02/05/2015 - 19:09

We’ve put a lot of stock into the most popular blogs in the QA/testing circuit, but some of these focus exclusively on functional testing concepts.

Steven Machtelinckx, who runs the blog TestMinded, pointed out a great list from TestBuffet of the 21 most popular blogs from 2014 that focus exclusively on automation, an especially hot area in testing right now.

You can check out the full list right here. It’s a great roster, and one of the blogs on it is actually headed up by uTester Stephan Kamper.

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

Categories: Companies

How to Write Beautiful JavaScript Tests

Software Testing Magazine - Thu, 02/05/2015 - 17:50
This talk shares some experience with JavaScript tests and shows the most important patterns you can rely on to write simple, beautiful, maintainable and incredibly fast tests. Testing JavaScript is hard. When the presenter’s team first started writing JavaScript tests they almost gave up several times. They struggled with topics such as the DOM, Ajax requests and asynchronicity. After months of practice they had written hundreds of unit and end-to-end tests. The problem was that they were slowing down. Their tests had become a burden. They were complex, difficult to understand ...
Categories: Communities

Sauce Labs Secures $15 Million For Geographic And Infrastructure Expansion

Sauce Labs - Thu, 02/05/2015 - 17:30

SAN FRANCISCO – February 5, 2015 – Sauce Labs, Inc., the leading cloud-based web and mobile application testing platform, today announced it has secured an additional $15 million in Series D expansion funding from current investor, Toba Capital. The news comes as Sauce Labs experiences overwhelming growth with over 200 million total tests run on its platform, and 145 percent revenue growth in 2014.

The new round of funding will extend both the company’s core web application testing offering and its mobile testing platform, as well as provide continued backing of Appium, the premier open source mobile automation tool developed by Sauce Labs and a thriving community of open source contributors. Additionally, it will be used to expand the company’s infrastructure, development team, and international operations.

“Automated testing is critical to modern software development,” said Jim Cerna, CEO of Sauce Labs. “We’ve seen tremendous adoption among web developers, and as mobile developers refine their continuous integration and continuous delivery processes, there is growing demand for scalable, reliable testing for mobile apps, as well. This new round of funding will help us continue to build out the infrastructure to meet this demand.”

“As software teams embrace continuous integration and delivery processes, automated testing plays a crucial role in improving quality and time to market, for both web and mobile applications. Enterprises and organizations of all sizes turn to us because we help them make their delivery pipelines efficient and reliable, while improving software quality along the way,” said Steve Hazel, co-founder and chief product officer of Sauce Labs.

Sauce Labs provides an instantly scalable testing cloud that is optimized for continuous integration (CI) and continuous delivery (CD). When tests are automated and run in parallel on multiple virtual machines across multiple browsers and platforms, testing time is reduced and developer time is freed up from managing infrastructure. When paired with a CI system, developers can easily test web, hybrid and native applications early on in their development cycles, continuously and affordably. Sauce Labs currently supports more than 450 browser, operating system and device platform combinations.

“The application development market is rapidly shifting with the adoption of continuous integration (CI) and continuous delivery (CD),” said Tyler Jewell, partner at Toba Capital and Sauce Labs board member. “CI and CD eliminate the time constraints that inhibit teams from achieving their full market potential, making development, support and release instantaneous and ongoing. We are thrilled to support Sauce Labs as they continue to make these CI/CD processes more achievable and effective for every software development team.”

Helpful Links

 

 

About Sauce Labs
Sauce Labs is the leading cloud-based web and mobile application automated testing platform. Its secure and reliable testing infrastructure enables users to run JavaScript unit and functional tests written with Selenium and Appium, eliminating the time and expense of maintaining a test grid. With Sauce Labs, organizations can achieve success with continuous integration and delivery, increase developer productivity and reduce infrastructure costs for software teams of all sizes.

Sauce Labs is a privately-held company funded by Toba Capital, Salesforce Ventures, Triage Ventures and the Contrarian Group. For more information, please visit http://saucelabs.com.

Categories: Companies

uTest Platform Update for the Week of February 2, 2015

uTest - Thu, 02/05/2015 - 16:30
“Perfection is not attainable, but if we chase perfection we can catch excellence.”

– Vince Lombardi

In our continued chase of perfection in our tester platform, we’re pleased to preview this week’s latest platform release for uTesters on paid projects.

New Feature: Customizable Bug Report Template

Oftentimes a test cycle has specific requirements for how bug reports need to be completed. If a tester is filing multiple bug reports, this can mean entering the same information again and again.

To help make bug reporting more efficient, we have launched a new feature in the bug report form that lets testers create customizable bug templates (per test cycle), allowing users to configure:

  • Bug title prefixes or standard bug title content
  • “Actions performed” prefixes, such as prerequisites or common steps to perform before your steps to reproduce
  • Additional environment information as requested by the Project Manager or Customer

To use this feature, you will need to access the bug report form. In the top right, you will now see two new links:  “Clear Form” and “Configure Template.”


To create a template, click the “Configure Template” link. On this screen, enter the data you wish to see displayed in each bug report for that cycle and then click “Save Template.”


To go back to the bug report, select the “Report Issue” button at the top right. The form will now be pre-filled with the template you configured for this test cycle.

If you like what you see, feel free to drop a note on the Forums to share your ideas on these and other recent platform updates.

Categories: Companies

Simple Software Testing Prevents Critical Failures

Testing TV - Thu, 02/05/2015 - 16:12
Large, production quality distributed systems still fail periodically, and do so sometimes catastrophically, where most or all users experience an outage or data loss. We present the result of a comprehensive study investigating 198 randomly selected, user-reported failures that occurred on Cassandra, HBase, Hadoop Distributed File System (HDFS), Hadoop MapReduce, and Redis, with the goal […]
Categories: Blogs

Top DevOps Tools We Love

The word DevOps is a portmanteau of “development” and “operations”. However, anyone who knows “The Phoenix Project” by Gene Kim et al. will agree that its radius is much wider than what the term suggests: it’s a melting pot that combines principles from Agile Software Development and Lean Manufacturing with the aim to reduce friction […]

The post Top DevOps Tools We Love appeared first on Dynatrace APM Blog.

Categories: Companies

C/C++/Objective-C: Dark past, bright future

Sonar - Thu, 02/05/2015 - 14:03

We’ve just released version 3.3 of the C/C++/Objective-C plugin, which features increased scope and precision of analysis for C, as well as detection of real bugs in C code, such as null pointer dereferences and type-related bugs. These improvements were made possible by the addition of semantic analysis and symbolic execution, which is the analysis not of the structure of your code, but of what the code is actually doing.

Semantic analysis was part of the original goal set for the plugin about three years ago. Of course, the goal was broader than that: develop a static analyzer for C++. The analyzer needed to continuously check your code’s conformance with your coding standards and practices, and more importantly detect bugs and vulnerabilities to help you keep technical debt under control.

At the time, we didn’t think it would be hard, because many languages were already in our portfolio, including Java, COBOL, PL/SQL. Our best engineers, Freddy Mallet and Dinesh Bolkensteyn, were already working on C, the natural predecessor of C++. I joined them, and together we started work on C++. With the benefit of hindsight, I can say that we all were blind. Totally blind. We had no idea what a difficult and ambitious task we had set ourselves.

You see, a static analyzer is a program which is able to precisely understand what another program does. And, roughly speaking, a bug is detected when this understanding is different from what the developer really wanted to write. Huh! Already, the task is complex, but it’s doubly so for C++. Why is automatic analysis of C++ so complicated?

First of all, both C and C++ have the concept of preprocessing. For example consider this code:

struct command commands[] = { cmd(quit), cmd(help) };

One would think that there are two calls to the “cmd” function with the parameters “quit” and “help”. But that might not be the case if just before this line there’s a preprocessing directive:

#define cmd(name) { #name, name ## _command }

That directive completely changes the meaning of the original code, literally turning it into:

struct command commands[] = { { "quit", quit_command }, { "help", help_command } };

The existence of the preprocessor complicates many things at many different levels of analysis. Most importantly, the correct interpretation of preprocessing directives is crucial for the correctness and precision of an analysis. We rewrote our preprocessor implementation from scratch three times before we were satisfied with it. And it’s worth mentioning that in the market of static analyzers (both commercial and open source) you can easily find tools that don’t do preprocessing at all, or do it only imprecisely.

Let’s move to the next difficulty. I’ve mentioned in the past that C and C++ are hard to parse. It’s time to talk a little bit about why. Roughly speaking, parsing is the process of recognizing language constructions – i.e. seeing what’s a statement, what’s an expression, and so on. Let’s take some example code and try to figure out what it is.

T * a

If this were Java code, the answer would be straightforward: most probably this is a multiplication, part of a bigger expression. But the answer isn’t that simple for C/C++. In general, the answer is “it depends…” This could indeed be an expression statement, if both “T” and “a” are variables:

int T, a;
T * a;

But it could also be the declaration of variable “a” with a type of pointer to “T”, if “T” is a type:

typedef int T;
T * a;

In other words, the context can completely change the meaning of code. This is called ambiguity.

Like natural languages, the grammars of programming languages can be ambiguous. While the C language has just a few ambiguous constructions, C++ has tons of them. And as you’ve seen, correct parsing is not possible without information about types. But getting that information is a difficulty in and of itself, because it requires semantic analysis of language constructs before you can understand their types and relations. And that’s where it starts to get really complex: to parse we need semantic analysis, and to do semantic analysis we need to parse. A chicken-and-egg problem.

We had hit a wall, and when we looked around, we realized we weren’t alone. Many tools don’t even try to parse, get information about types or distinguish between ambiguous and unambiguous cases.

And then we found GLL, a relatively new theory of generalized parsing. It was first published in 2010, and there still aren’t any ready-to-use, publicly available implementations for Java. Implementing a GLL parser wasn’t easy and took us quite a while, but the ROI was high. This parser is able to preserve information about encountered ambiguities without actually resolving them. That allows us to do precise analysis of at least the unambiguous constructions without producing false positives on the ambiguous ones.

The GLL parser was a win-win, and a game changer! After 2 years of development from the first commit, we released precise preprocessing and parsing in version 2.0 of the C++ Plugin (approximately a year ago).

With the original goal well on the way to being met, we started to dream again, raised our expectations even higher, and were ready to welcome new developers. Today, I still work on the plugin, but it’s maintained primarily by Massimo Paladin and Samuel Mercier. They solved the analysis configuration problem and added support for Objective-C and Microsoft Component Extensions to the plugin.

Our next goal is to apply semantic analysis and symbolic execution to Objective-C, and of course after that to C++, and to use them to cover more MISRA rules. So this is probably not the end of the story about the difficulties of developing a static analyzer for C/C++/Objective-C – who knows what else we will encounter along the way. But we are no longer blind as we were before: now we know that this is difficult. Based on the past, though, I can say that we at SonarSource are unstoppable and that even the most incredible dreams come true! So keep dreaming! And just never ever give up!

Categories: Open Source

Winner of the Best Quality Tool Award 2015

Ranorex - Thu, 02/05/2015 - 11:00
Ranorex convinced both the audience and the jury at the Software Quality Days Tool Challenge and was voted winner of the "Best Quality Tool Award 2015".

This year's challenge was focused on mobile testing – the goal was to automate a single test case for a mobile application live on stage in under 7 minutes! We would like to thank everyone in the audience as well as the jury for all the positive support.

About Software Quality Days
The seventh Software Quality Days took place from 20-23 January 2015 in the Austria Trend Hotel Savoyen in Vienna. This year the focus of the conference was on the topic of "Software and Systems Quality in Distributed and Mobile Environments". It was organized by Software Quality Lab GmbH. Around 350 participants from over 20 countries once again used the event as a platform for exchanging information, meeting other professionals in the field and networking. Read more about the software testing event
Categories: Companies

How to change test scenario inputs per environment

BugBuster - Thu, 02/05/2015 - 10:00
Placeholders in scenarios, or how to change scenario inputs based on the environment

We have just introduced placeholders in any text field of the scenario visual editor and the explorers (deep explorer, shallow explorer…).

It’s now possible to insert a placeholder to retrieve values from your environments, from the random module or from the current session. You can even execute arbitrary JavaScript code.

Simply use the {{ … }} placeholder syntax to add these variables or actions in any text field of the visual scenario editor.


  • Let’s take a typical use case: imagine you want to test the login form of your app. First, record the login from your default environment and add a check at the end.

  • This scenario will always use testuser as the user name and 123456 as the password. The password usually changes from one environment to another (for instance, you will use 123456 for development and NFj?vr86N?kVch for production).

    In order to conditionally choose the user name and password based on the environment, you should use the {{ … }} placeholder. Start by going to the Environments view by clicking on Environments in the lower left corner of the screen. In the Environment data section, add your development credentials.

  • Then select your production environment from the top of the screen and overwrite the credentials with your production credentials.

  • Now, in your scenario, you can refer to your Environment data by using {{ username }} and {{ password }}.

    Note that steps 2, 3, and 5 of the scenario have been modified to retrieve variables from the current environment. When executed within the default environment, the user name and password will be testuser and 123456 respectively. In the production environment they will be johndoe and NFj?vr86N?kVch respectively.

There you have it: a test scenario that checks your login process using the right credentials, whether you are targeting your development environment or your production environment. Let us know what you think about placeholders in scenarios!

The post How to change test scenario inputs per environment appeared first on BugBuster.

Categories: Companies

Chevy and DevOps: What the Wi-Fi?

Sonatype Blog - Wed, 02/04/2015 - 22:48
I'm sure you saw it too. During the Super Bowl, Chevy Trucks announced that they were adding 4G LTE wi-fi. How cool. I want that (and so would my kids). I can only imagine the possibilities. But, this is not all about my needs. Chevy and every other vehicle maker wants this too. And not for the...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

DZone: Guide To Continuous Delivery [DOWNLOAD]

Sauce Labs - Wed, 02/04/2015 - 21:36

DZone 2015 Guide to Continuous Delivery

We are very excited to be a research partner for DZone’s Guide to Continuous Delivery, a premium resource focused on continuous integration and DevOps management trends, strategies, and tools.

Readers of the guide will get an overview of continuous delivery practices and how continuous delivery affects many aspects of an organization. This guide includes:

  • Articles written by continuous integration and deployment experts – including Sauce Labs
  • Detailed profiles of 35+ continuous delivery tools and solutions
  • “Continuous Delivery Maturity Checklist” that gauges where your continuous delivery skills rank
  • “Continuous Delivery: Visualized” infographic that details the tools developers use at every stage of the pipeline

DZone’s continuous delivery guide offers key insights into continuous integration and delivery through a survey of 750+ developers and experts, allowing readers to learn about trends from practitioners in the technology community. Additionally, the guide’s solutions directory compares various tools for continuous integration, application release automation, and configuration management to help readers wisely choose the solutions they need.

Download a free copy of the guide HERE.

About DZone

DZone provides expert research and learning communities for developers, tech professionals, and smart people everywhere.  DZone has been a trusted, global source of content for over 15 years.

Categories: Companies

Answering What’s New in Test Studio Webinar Q1 2015 Questions

Telerik TestStudio - Wed, 02/04/2015 - 15:00
This blog post will provide you with answers to many interesting questions that were asked during our recent release webinar for Test Studio Solution, "What's new in Test Studio."
Categories: Companies

Thoughts on OOP2015

thekua.com@work - Wed, 02/04/2015 - 11:15

I spent the first half of last week in Munich, where I was speaking at OOP Conference 2015. I missed last year, when Martin Fowler was a keynote speaker, but I had presented in both 2013 and 2012.

The conference still seems to attract more seasoned people like architects and decision makers, and I am still constantly surprised at the number of suits I see for a technical conference – I do not know if that is a German cultural difference. There seemed to be significantly more German-speaking sessions than English ones, and I sat in a number of them, which expanded my vocabulary.

I was only there for three of the five days of the conference, and was lucky enough to be invited to a special dinner on Monday evening where Dr Reinhold Ewald (a former German astronaut) gave a presentation about what it was like being an astronaut, what they do in space, and some of the interesting challenges.

I saw a number of the keynotes and talks which I’ll briefly summarise here:

  • Challenges and Opportunities for the Internet of Things (IoT) by Dr Annabel Nickels – A relatively introductory session on what the Internet of Things actually means. The talk explained the IoT well, what is not yet possible, and what people are experimenting with. It was clear that the security and privacy aspects had not advanced much and that there was still a lot of work to do, as there were lots of questions from the audience but no clear answers in this space – more “it’s something we’re looking into”-style answers
  • Coding Culture by Sven Peters – Sven is an entertaining and obviously well-practiced presenter who knows how to engage the audience with pictures and stories. His talk focused on coding culture – more particularly, the coding culture of Atlassian, the company Sven works for. An entertaining talk about how they work inside the company, but not particularly surprising for me since I already know a lot about that company.
  • Aktives Warten für Architekten by Stefan Toth (Active Waiting for Architects) – A nice introduction to the Last Responsible Moment, or what is more popular in the Agile community these days, Real Options.
  • Ökonomie und Architektur als effektives Duo by Gernot Starke, Michael Mahlberg (Economics and Architecture as an effective pair) – From my understanding, the talk focused on bringing the idea of calculating ROI to an architectural context. The pair spent a lot of time introducing financial terms and then a number of spreadsheets with a lot of numbers. Although well-intentioned, I wasn’t sure about the “calculations” they made, since a lot of it was based on estimates of “man-days” needed and “man-days” spent/saved – it all looks very good when calculated out, but they didn’t really spend much time explaining how they arrive at those estimates. They spent a lot of time introducing Aim42, which I wasn’t familiar with but will now look into.

I ran two talks that had both good attendance and great feedback (like the one below):

OOP2015 - Best Talk

The first was “The Geek’s Guide to Leading Teams”, where I focused on exploring the responsibilities and remit of what a Tech Lead does and how it’s quite different from being a developer.

The Geek's Guide to Leading Teams from Patrick Kua

The second was “Architecting for Continuous Delivery” which focused on the principles and considerations for when people build systems with Continuous Delivery in mind.

Architecting For Continuous Delivery from Patrick Kua

I had a great time visiting the conference and an interesting time expanding my German vocabulary as I tried to explain in German what I do and what my company does – something I didn’t really do a lot of when I was living in Berlin.

Categories: Blogs

The Latest in VuGen Performance and Usability Tips

HP LoadRunner and Performance Center Blog - Wed, 02/04/2015 - 09:06

How much time do you spend on script development? According to a recent customer survey, 70 percent of the time invested by performance testers is spent on script development in VuGen.

 

Keep reading to find out how improvements to time- and memory-consuming features have improved the VuGen experience. I will also provide tips on how to improve your day-to-day work with VuGen.

 

(This post was written by Yuriy Kipnis from the LoadRunner R&D Team)

Categories: Companies
