
Blogs

9 Steps to publishing a book on Amazon Kindle

The Social Tester - Tue, 03/03/2015 - 16:00

Last week I published my first book on Amazon – Remaining Relevant. It was a great sense of achievement. I learned a lot about the self-publishing industry and I learned a lot about myself. I never realised quite how...
Read more

The post 9 Steps to publishing a book on Amazon Kindle appeared first on The Social Tester.

Categories: Blogs

Book Review: Waltzing with Bears

thekua.com@work - Tue, 03/03/2015 - 10:54

I read this book very early in my career, and I thought it would be useful to read it again as a refresher on risk management. Waltzing with Bears is a book that focuses on the identification and management of risks within the software industry. It’s filled with a number of stories answering the question, “Why should we care about risk?” as well as providing a number of strategies and tools for handling risk.

Waltzing with Bears

Although a lot of the advice is still applicable, some of the recommendations have since become commonplace. They recommend, for instance, an incremental delivery approach to software (hello, agile software development!) without reference to a specific methodology, as a way of mitigating risk.

They also present the idea of estimate (un)certainty and using it to make better commitments to others. They also warn against default behaviour that is still rife in our industry: e.g. making a commitment to others based on an optimistic estimate, hoping for a series of breaks when the plan is clearly high risk, which later leads to a project death march or a huge loss of quality.

Reading the book the second time around, I picked up a few other interesting models I don’t remember so clearly from the first time, including the use of probability profiles (a graph that shows the shape of our estimates and their uncertainties) to understand when and where to give estimated dates. There’s a useful section on risk-storming for people who have never been to a risk brainstorming session, but that wasn’t particularly new for me. Another is the use of Monte Carlo simulation for calculating the impact of risks. After reading that section, however, I feel I still don’t fully understand it; I would need to practise applying it to fully grasp how it works.
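For readers who, like me, find the Monte Carlo idea slippery, here is a minimal sketch of how it can be applied to schedule risk. This is my own illustration, not an example from the book, and the risk numbers are invented:

```python
import random

def simulate_schedule(base_weeks, risks, trials=10_000):
    """Monte Carlo sketch: each trial samples which risks materialise
    and adds their delays onto the base estimate, building up a
    probability profile of completion times."""
    outcomes = []
    for _ in range(trials):
        total = base_weeks
        for probability, delay_weeks in risks:
            if random.random() < probability:
                total += delay_weeks
        outcomes.append(total)
    outcomes.sort()
    # Read dates off the profile instead of quoting a single number.
    return outcomes[trials // 2], outcomes[int(trials * 0.9)]

# Invented risks: (chance of occurring, delay in weeks if it does).
risks = [(0.5, 4), (0.2, 10), (0.1, 20)]
median_weeks, p90_weeks = simulate_schedule(20, risks)
```

The median suggests a realistic commitment point, while the 90th percentile shows how much contingency a low-risk commitment would need.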

Although the examples used in the book are relatively old now, there are still powerful stories about how people failed to manage risk, what they could have done instead and what the consequences were (e.g. the Denver Airport baggage system!). I also like the practical experience they share when talking about the cultural side of risk (e.g. how transparent or open an organisation is) and when it’s dangerous to share your view of risk. This sort of advice is often missing from books whose authors take a one-sided, dogmatic approach without considering context.

Overall this is an easily digestible book that introduces the idea and the importance of managing risk together with tools to help you achieve it, all set in the software industry context.

Categories: Blogs

Test Automation for Behavioral Models

Testing TV - Mon, 03/02/2015 - 19:57
Model-based testers design abstract tests in terms of models such as paths in graphs. The abstract tests then need to be converted to concrete tests, which are defined in terms of the implementation. The transformation from abstract tests to concrete tests has to be automated. Existing model-based testing techniques for behavioral models use many additional diagrams […]
Categories: Blogs

Clean Tests: Isolating the Database

Jimmy Bogard - Mon, 03/02/2015 - 19:35

Other posts in this series:

Isolating the database can be pretty difficult to do, but I’ve settled on a general approach that allows me to ensure my tests are built from a consistent starting point. I prefer a consistent starting point over something like rolled-back transactions, since a rolled-back transaction assumes that the database is in a consistent state to begin with.

I’m going to use my tool Respawn to build a reliable starting point, and integrate it into my tests. In my last post, I walked through creating a common fixture which my tests use to build internal state. I’m going to extend that fixture to also include my Respawn project:

public class SlowTestFixture
{
    private static IContainer Root = IoC.BuildCompositionRoot();
    private static Checkpoint Checkpoint = new Checkpoint
    {
        TablesToIgnore = new[]
        {
            "sysdiagrams",
            "tblUser",
            "tblObjectType",
        },
        SchemasToExclude = new[]
        {
            "RoundhousE"
        }
    };

    public IContainer Container { get; private set; }

    public SlowTestFixture()
    {
        Container = Root.CreateChildContainer();
        Checkpoint.Reset("MyConnectionStringName");
    }
}

Since my SlowTestFixture is used in both styles of organization (fixture per test class/test method), my database will either get reset before my test class is constructed, or before each test method. My tests start with a clean slate, and I never have to worry about my tests failing because of inconsistent state again. The one downside I have is that my tests can’t be run in parallel at this point, but that’s a small price to pay.

That’s pretty much all there is – because I’ve created a common fixture class, it’s easy to add more behavior as necessary. In the next post, I’ll bring all these concepts together with a couple of complete examples.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Next Gen DevOps and Software Testing

Yet another bloody blog - Mark Crowther - Mon, 03/02/2015 - 15:18
In his book, Next Gen DevOps, Grant outlines the historic path along which DevOps emerged and then describes how the way it is currently performed is fundamentally flawed. He describes a number of commonly experienced frustrations and inhibitors, both internal and external to the DevOps team, which people in other IT areas will unfortunately recognise. He shows how these impact the ability of the DevOps movement and its practitioners to drive forward their vision of what DevOps would ideally evolve into. Common issues other practice areas will recognise include: a lack of understanding of DevOps resourcing profiles by HR; verbal agreement to the concepts of DevOps by senior management, coupled with a fear of committing to the corporate and operational change needed to realise the vision; and a continuing siloed approach that prevents the establishment of the cross-functional, integrative, product-based teams that are central to Grant’s view of modern DevOps practice.
Much of what Grant outlines in his book will ring familiar to software test practitioners. I and others have long espoused the value, and indeed the criticality, of positioning software test practitioners as an embedded part of the ‘application development team’, ensuring cross-team process synergy [1]. That is a term I use in preference to names such as the Dev team, the Test team, the Ops team, the Support team, etc., which serve only to reinforce the siloed, us-and-them perspective. The concept of teams that operate in a non-integrated way is less and less meaningful in the context of today’s perspectives on efficient development approaches. Clearly defined practice domains remain important; the sheer scale of today’s IT profession requires a level of specialisation, but this is not the same as being siloed.

Taking this further, as we mature the adaptive, pragmatic, delivery-focused approaches that are justifiably popular at this time, and possibly emerge into a post-Agile paradigm, approaches that were established in an era where predictive development models were overlaid onto functionally siloed teams are, I would suggest, as good as irrelevant.
Services or Products?
Except for the most trivial of application development related work, it simply isn’t possible to deliver anything meaningful, from a technology, business or market perspective, without cross-domain collaboration. This is true because of two key factors: a) large-scale application complexity is now so high that no single person or team can perform all practices effectively and, b) the integrated nature of technology and the cross-over of practice areas mean that practitioners in one field will already be working in a cross-domain manner.
However, we also need to shift perspectives and come back to another key point in Grant’s Next Gen DevOps book, that rings true for us as software test practitioners and ask; are we delivering a discrete set of software testing services in support of some application development work or are we providing a suite of testing practices, alongside other practice areas, in broader support for the delivery of a software product requested by the business?
If we think more broadly than our technology domains, considering also other domains across the wider business and reflecting on why we’re employed by the business, it will be evident that we’re not really engaged in testing some development output, but instead we’re testing an aspect of a product the business has requested. With even a trivial reflection, it’s apparent that all of these practice areas for given domains need to be drawn together to support not just application development, but to support product development from conception to retirement, in context not of the Software Development Life Cycle, but instead of the Product Development Life Cycle2.
Conclusion
In conclusion on these limited points: while Next Gen DevOps is proposed as a model for DevOps, it discusses many concepts that run parallel to our area of concern, that of the role of software testing practice in the broader business context when delivering software products requested by the business.
Mark.
--------------------------
Learn More...
You can get a copy of Grant's book on Amazon at: http://amzn.to/Zn2SKY
If you're on twitter, follow Grant at: https://twitter.com/grjsmith with hashtag #DevOps
While you're about it, pay a visit to his website over at: http://nextgendevops.com/
--------------------------


References
[1] Crowther, Mark, (2005) “Cross Team Process Synergy” [online] Available at: http://cyreath.co.uk/papers/Cyreath_Cross_Team_Process_Synergy.pdf [Accessed 02-Mar-15]
[2] Crowther, Mark, (2009) “Life Cycles – Course 1, Session 1”, pp. 3-4

Categories: Blogs

Conference organizers, try harder. Conference participants: shop!

Agile Testing with Lisa Crispin - Sun, 03/01/2015 - 01:25

I just received a flyer in my snail mail for yet another conference where four out of the five keynote speakers are white men and only one is a woman. Are you kidding me? And this is a testing conference. Testing is a field that does indeed have lots of women, I would guess a significantly higher percentage than, say, programming.

I know the organizers of this conference and they are good people who aren’t purposely discriminating against women (or minorities, for that matter). But they aren’t trying hard enough, either. I’ve personally sent long lists of women I recommend to speak at their conferences. True, most of these women aren’t “known” keynote speakers – maybe because nobody ever asks them to keynote. These women are highly experienced testing practitioners who have valuable experience to share.

This same company has an upcoming testing conference with no female keynoters, so I guess this is an improvement. But I’m not letting them off the hook, and you shouldn’t either.

What do you value more: a highly entertaining, “big name” keynote speech? Or an experienced practitioner who competently helps you learn some new ideas to go and try with your own teams, but maybe isn’t as well known or flashy?

You probably don’t get to go to many conferences, so be choosy. Choose the ones with a diverse lineup of not only keynoters but presenters of all types of sessions. In fact, choose conferences that have lots of hands-on sessions where you get to learn by practicing with your peers. We have the choice of these conferences now. And I hope you will leave your favorites in comments here. I don’t want to make my friends unhappy by naming names here, but email me and I’ll give you my own recommendations. (Another disclaimer – I’m personally not looking for keynoting gigs, so these are not sour grapes. I don’t like doing keynotes, and I know my limitations as a presenter).

The organizations sponsoring and organizing conferences are pandering to what they think you, their paying audience, want to see. If you’re going to conferences to see big names and polished speakers, and you don’t care if the lineup is diverse, go ahead. If you want a really great learning experience, maybe do some more research about where your time and money will reap the most value for you.

I’m not trying to start a boycott, but I am saying: we are the market. Let’s start demanding what we want, and I know these conference organizers will then have to step up and try harder.

The post Conference organizers, try harder. Conference participants: shop! appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Summary of five years

Thought Nursery - Jeffrey Fredrick - Sat, 02/28/2015 - 11:05

“What have you been doing these last few years?” was the question Péter Halácsy asked me during my visit to Prezi. I was there for the CTO equivalent of a developer exchange: learning how things were done at Prezi, sharing my observations, and then speaking at the Budapest Jenkins meetup. Prior to my visit Péter had come to this blog to learn more about me, only to find that I’d not been blogging. I’m resolved to get back into the blogging habit this year, and I decided I’d take the time to fill in the gap for any future Péters. In part this will recapitulate my LinkedIn profile, but it will also describe some of what I felt was most significant.

The primary reason I posted only a single post after 2009 was that I joined Urbancode in a marketing/evangelism role and posted almost everything I had to say under their masthead. In my two and a half years there I had a great time spreading the word about build automation, continuous delivery and DevOps. I was able to visit a wide range of companies, learn first-hand about the challenges of enterprise organizations, and then turn this information into new content. At Urbancode we developed a very good flow of information, and almost every month we had a new webinar, a newsletter, and maybe a white paper. My primary content collaborator was Eric Minick, and he has kept up those evangelizing ways at IBM following their acquisition of Urbancode.

After I left Urbancode we made a family decision to try living in London for a few years. I reached out to Douglas Squirrel and he brought me into TIM Group to do continuous delivery, infrastructure and operations. In my time there I’ve become CTO and Head of Product and I’ve really enjoyed the opportunity to apply what I know, both about product development and about organizational change. I’ve been almost equally as absent from the TIM Group development blog, but I have managed to share some of our experiences and learning at a few conferences including GOTO Conference 2012 (talk description & slides: A Leap from Agile to DevOps), London Devops Days 2013 (video of talk: Crossing the Uncanny Valley of Culture through Mutual Learning),  and XPDay London 2014.

During my time in London Benjamin Mitchell has been one of the biggest influences on my thinking and approach to organizational change. Benjamin has been a guide to the work of Chris Argyris and Action Science. It has been what I’ve learned from and with Benjamin that has inspired me to start the London Action Science Meetup.

Finally, I couldn’t recap the last few years without also mentioning Paul Julius and CITCON. Since I last mentioned CITCON North America in Minneapolis on this blog in 2009, we’ve gone on to organize 16 additional CITCON events worldwide, most recently in Auckland (CITCON ANZ), Zagreb (CITCON Europe), Austin (CITCON North America), and Hong Kong (CITCON Asia). For PJ and me this is our 10th year of CITCON (and OIF, the Open Information Foundation), and it has been fantastic to continue to meet people throughout the world who care about improving the way we do software development.

Categories: Blogs

Big Sale on e-Learning Courses in Software Testing!

For a limited time, I am offering discounted pricing on ALL my e-learning courses, including ISTQB Foundation Level and Advanced Level certification courses!

Just go to http://www.mysoftwaretesting.com to see the savings on each course.

Remember, as always, you get:
  • LIFETIME access to the training
  • In-depth course notes
  • Exams included on the certification courses
  • Direct access to me, Randy Rice, to ask any questions during the course
Hurry! This sale ends at Midnight CST on Friday, March 20th, 2015.
Questions? Contact me by e-mail or call 405-691-8075.
Categories: Blogs

Cross Browser Testing using CodedUI Test

Testing tools Blog - Mayank Srivastava - Thu, 02/26/2015 - 16:18
Before starting this topic I would like to make clear at the beginning that Visual Studio 2013 uses the Selenium WebDriver component to achieve cross-browser testing as of now. To integrate the WebDriver component with Visual Studio, follow the steps below: Start Visual Studio, go to the Tools menu and click on Extensions and Updates… System […]
Categories: Blogs

Now on Amazon – Remaining Relevant and Employable

The Social Tester - Thu, 02/26/2015 - 16:00

I’m really proud to announce that I launched my Remaining Relevant book on Amazon this week. This release is my new non-testing edition (so it’s also suitable for those not working in IT or testing). It contains loads of advice...
Read more

The post Now on Amazon – Remaining Relevant and Employable appeared first on The Social Tester.

Categories: Blogs

How to get the most out of Given-When-Then

Gojko Adzic - Wed, 02/25/2015 - 18:36

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

Behaviour-driven development has become increasingly popular over the last few years, and with it the Given-When-Then format for examples. In many ways, Given-When-Then seems to be the de facto standard for expressing functional checks using examples. Introduced by JBehave in 2003, this structure was intended to support conversations between teams and business stakeholders, but also to lead those discussions towards a conclusion that would be easy to automate as a test.

Given-When-Then statements are great because they are easy to capture on whiteboards and flipcharts, but also easy to transfer to electronic documents, including plain text files and wiki pages. In addition, there are automation tools for all popular application platforms today that support tests specified as Given-When-Then.

On the other hand, Given-When-Then is a very sharp tool and, unless handled properly, it can hurt badly. Without understanding the true purpose of this way of capturing expectations, many teams out there just create tests that are too long, too difficult to maintain, and almost impossible to understand. Here is a typical example:

    Scenario: Payroll salary calculations

    Given the admin page is open
    When the user types John into the 'employee name'
    and the user types 30000 into the 'salary'
    and the user clicks 'Add'
    Then the page reloads
    And the user types Mike into the 'employee name'
    and the user types 40000 into the 'salary'
    and the user clicks 'Add'
    When the user selects 'Payslips'
    And the user selects employee number 1
    Then the user clicks on 'View'
    When the user selects 'Info'
    Then the 'salary' shows 29000
    Then the user clicks 'Edit'
    and the user types 40000 into the 'salary'
    When the user clicks on 'View'
    And the 'salary' shows 31000

This example might have been clear to the person who first wrote it, but its purpose is unclear – what is it really testing? Is the salary a parameter of the test, or is it an expected outcome? If one of the bottom steps of this scenario fails, it will be very difficult to understand the exact cause of the problem.

Spoken language is ambiguous, and it’s perfectly OK to say ‘Given an employee has a salary …, When the tax deduction is…, then the employee gets a payslip and the payslip shows …’. It’s also OK to say ‘When an employee has a salary …, Given the tax deduction is …’ or ‘Given an employee … and the tax deduction … then the payslip …’. All those combinations mean the same thing, and they will be easily understood within the wider context.

But there is only one right way to describe those conditions with Given-When-Then if you want to get the most out of it from the perspective of long-term test maintenance.

The sequence is important: ‘Given’ comes before ‘When’, and ‘When’ comes before ‘Then’. Those clauses should not be mixed. All parameters should be specified with ‘Given’ clauses, the action under test should be specified with the ‘When’ clause, and all expected outcomes should be listed with ‘Then’ clauses. Each scenario should ideally have only one ‘When’ clause that clearly points to the purpose of the test.

Given-When-Then is not just an automation-friendly way of describing expectations, it’s a structural pattern for designing clear specifications. It’s been around for quite a while under different names. When use cases were popular, it was known as Preconditions-Trigger-Postconditions. In unit testing, it’s known as Arrange-Act-Assert.

Key benefits

Using Given-When-Then in sequence is a useful reminder of several great test design ideas. It suggests that pre-conditions and post-conditions need to be identified and separated. It suggests that the purpose of the test should be clearly communicated, and that each scenario should check one and only one thing. When there is only one action under test, people are forced to look beyond the mechanics of test execution and really identify a clear purpose.

When used correctly, Given-When-Then helps teams design specifications and checks that are easy to understand and maintain. As tests will be focused on one particular action, they will be less brittle and easier to diagnose and troubleshoot. When the parameters and expectations are clearly separated, it’s easier to evaluate if we need to add more examples, and discover missing cases.

How to make it work

A good trick that prevents most accidental misuse of Given-When-Then is to use the past tense for ‘Given’ clauses, the present tense for ‘When’ and the future tense for ‘Then’. This makes it clear that ‘Given’ statements are preconditions and parameters, and that ‘Then’ statements are postconditions and expectations.

Make ‘Given’ and ‘Then’ passive – they should describe values rather than actions. Make sure ‘When’ is active – it should describe the action under test.

Try having only one ‘When’ statement for each scenario.
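Putting these guidelines together, the payroll example above might be restructured along these lines (an illustrative sketch – the step wording and values are my own, not taken from a real suite):

    Scenario: Payslip shows salary after tax deduction

    Given an employee John had a salary of 30000
    And the tax deduction for that salary was 1000
    When the employee's payslip is generated
    Then the payslip will show a salary of 29000

The parameters sit in past-tense ‘Given’ steps, a single active ‘When’ names the action under test, and the future-tense ‘Then’ carries the expectation.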

Categories: Blogs

Very Short Blog Posts (26): You Don’t Need Acceptance Criteria to Test

DevelopSense Blog - Tue, 02/24/2015 - 22:08
You do not need acceptance criteria to test. Reporters do not need acceptance criteria to investigate and report stories; scientists do not need acceptance criteria to study and learn about things; and you do not need acceptance criteria to explore something, to experiment with it, to learn about it, or to provide a description of […]
Categories: Blogs

How To Manage Time with people who make stuff

The Social Tester - Tue, 02/24/2015 - 13:30

Paul Graham has a great article on Manager schedules and Maker schedules on his blog. It’s a post I revisit every week as I strive to work out how to balance my management tendencies with the NVM Dev team’s desire...
Read more

The post How To Manage Time with people who make stuff appeared first on The Social Tester.

Categories: Blogs

The Good, the Bad, and the Ugly of A/B Testing

Testing TV - Mon, 02/23/2015 - 17:49
A/B testing is a great technique to experiment with changes to your product. At Etsy we make extensive use of them to test out ideas; we’ve got 30+ running right now. Although the concept is simple, the execution is a bit trickier than you’d think. This covers the common, and a few of the not […]
Categories: Blogs

Reducing Teamicide with Lightning Bolt shaped Teams

Teamicide is the act of purposefully disbanding a team after they are done with a task or project.  While this may not sound particularly negative at first glance, an organization loses the benefit of team productivity and team cohesion each time it disbands a team.  When teams form, they take time to gel. This is an organizational investment that often isn't realized.
To gain some perspective, let’s take a moment to review Tuckman's model, which describes the gelling process.  Established by Bruce Tuckman in 1965, this model has four sequential phases (Forming, Storming, Norming, and Performing) that teams go through to function effectively as a unit, know each other's strengths, and self-organize around the work with optimal flow and reduced impediments.  In relation to teamicide, if a team hasn't yet reached the performing stage, it will have invested the time and team-building effort without actually gaining the benefits of a performing team.   The irony is that while companies focus a lot on return on investment (ROI) in relation to the product, they inadvertently achieve no ROI on team building, since they disband teams before allowing them to reach performing.
The next question is, why does management disband teams?  Do they not understand the harm they are doing to their organization when they disband teams?  Do they not respect the benefits of a performing team?  Or maybe they apply a “move the team to the work” method when they really should be applying a “move the work to the team” method.   Exploring the “move the team to the work” method, this may occur either because there is a “form a team around a project” mindset or because of a belief that teams don’t have all of the skills or disciplines needed to handle the new types of work.
So how do we solve this problem and gain the most from performing teams?  The first change that must be made is to move to (or experiment with) the “move the work to the team” method.   This assumes that we have teams with the skills and disciplines to handle a variety of work.  Therefore, the second change is to invest in building lightning-bolt-shaped teams. These are teams where each team member has a primary skill, a secondary skill, and even a tertiary skill.
The shape of a lightning bolt has one spike going deep (the primary skill) and at least two additional spikes of lesser depth (the secondary and tertiary skills).   The purpose of having various depths of skills is for the team to be able to handle a broad range of work and for team members to be able to step up and fill gaps that other team members cannot cover or need help with.  Note: some have used the term “T-shaped” teams, but I find that the lightning bolt shape is more apropos to the several spikes of skills and the various depths that are needed.
Creating a lightning-bolt-shaped team takes an investment in education: a commitment to educate each team member in both a secondary and a tertiary skill.  As an example, let’s say that a developer has a primary skill of programming.  As a secondary skill, they can also learn how to build database schemas, and as a tertiary skill, they can write unit tests and run test cases.  The long-term benefit is that if team members develop additional skills, there is a greater likelihood that a team can work on a much wider range of work, and the team can then be kept together, allowing the organization to gain the benefits of a high-performing team.   This can reduce teamicide and increase the organization’s ability to produce more high-quality product.
Have you seen teamicide occurring in your organization?  Have you seen the benefit of allowing a team to remain together long enough to become a high performing team?  If so, what level of skills were or are prevalent on the team? 
Categories: Blogs

The Rule of Three and Me

Hiccupps - James Thomas - Sat, 02/21/2015 - 10:20
You can find Weinberg's famous Rule of Three in a variety of formulations. Here's a couple that showed up when I went looking just now (1, 2):
  • If you can't think of at least three things that might be wrong with your understanding of the problem, you don't understand the problem.
  • If I can’t think of at least three different interpretations of what I received, I haven’t thought enough about what it might mean.
At work I challenge myself to come up with ideas in threes and, to keep that facility oiled, when I'm not testing I challenge myself to turn the crap joke I've just thought of into a triad.
By providing constraints on the problem I find the intellectual joking challenge usefully similar to the intellectual testing challenge. Here's an example from last week where, after I happened onto the first one, I constrained the structure of the others to be the same and restricted the subject matter to animals:
  • If I had three pet lions I would call them Direct, Contour and What's My.
  • If I had three pet ducks I would call them Via, Aqua and Paroled On The Basis Of Good Con.
  • If I had three pet mice I would call them Enor, Hippopoto and That Would Be Like Turkeys Voting For Chris.
As an additional motivational aid I've taken to punting the gags onto Twitter. You can see some of them here ... but remember: I never said they were good.

Image: De La Soul
Categories: Blogs

Reliable database tests with Respawn

Jimmy Bogard - Thu, 02/19/2015 - 19:21

Creating reliable tests that exercise the database can be a tricky beast to tame. There are many different sub-par strategies for doing so, and most of the documented methods talk about resetting the database at teardown, either using rolled back transactions or table truncation.

I’m not a fan of either of these methods – for truly reliable tests, the fixture must have a known starting point at the start of the test, not rely on something to clean up after itself. When a test fails, I want to be able to examine the data during or after the test run.

That’s why I created Respawn, a small tool to reset the database back to its clean beginning. Instead of using transaction rollbacks, database restores or table truncations, Respawn intelligently navigates the schema metadata to build out a static, correct order in which to clear out data from your test database, at fixture setup instead of teardown.
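The ordering idea can be sketched as follows – a toy illustration of the concept, not Respawn's actual code, with invented table names. The foreign-key pairs are the kind of metadata exposed by INFORMATION_SCHEMA views:

```python
def delete_order(tables, fk_refs):
    """Derive a safe DELETE order from foreign-key metadata.

    fk_refs is a set of (child_table, parent_table) pairs.
    Referencing (child) tables must be cleared before the
    tables they reference."""
    order, remaining = [], set(tables)
    while remaining:
        # A table is safe to clear once no other remaining table
        # still holds a foreign key pointing at it.
        safe = {t for t in remaining
                if not any(c in remaining and p == t and c != t
                           for c, p in fk_refs)}
        if not safe:
            # Circular references: a real tool needs a smarter strategy here.
            safe = set(remaining)
        order.extend(sorted(safe))
        remaining -= safe
    return order

tables = ["Customers", "Orders", "OrderLines"]
fk_refs = {("Orders", "Customers"), ("OrderLines", "Orders")}
# Children first: OrderLines, then Orders, then Customers.
print(delete_order(tables, fk_refs))
```

Because the order depends only on the schema, it can be computed once and the resulting DELETE statements replayed before every test.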

Respawn is available on NuGet, and can work with SQL Server or Postgres (or any ANSI-compatible database that supports INFORMATION_SCHEMA views correctly).

You create a checkpoint:

private static Checkpoint checkpoint = new Checkpoint
{
    TablesToIgnore = new[]
    {
        "sysdiagrams",
        "tblUser",
        "tblObjectType",
    },
    SchemasToExclude = new []
    {
        "RoundhousE"
    }
};

You can supply tables to ignore and schemas to exclude for tables you don’t want cleared out. In your test fixture setup, reset your checkpoint:

checkpoint.Reset("MyConnectionStringName");

Or if you’re using a database besides SQL Server, you can pass in an open DbConnection:

using (var conn = new NpgsqlConnection("ConnectionStringName"))
{
    conn.Open();

    var checkpoint = new Checkpoint {
        SchemasToInclude = new[]
        {
            "public"
        },
        DbAdapter = DbAdapter.Postgres
    };

    checkpoint.Reset(conn);
}

Because Respawn stores the correct SQL in the right order to clear your tables, you don’t need to maintain a list of tables to delete or recalculate the order on every checkpoint reset. And since table truncation won’t work with tables that have foreign key constraints, DELETE turns out to be faster than table truncation for test databases anyway.

We’ve used this method at Headspring for the last six years or so, battle tested on a dozen projects we’ve put into production.

Stop worrying about unreliable database tests – respawn at the starting point instead!


Categories: Blogs

GTAC 2014 Coming to Seattle/Kirkland in October

Google Testing Blog - Thu, 02/19/2015 - 15:22
Posted by Anthony Vallone on behalf of the GTAC Committee

If you're looking for a place to discuss the latest innovations in test automation, then charge your tablets and pack your gumboots - the eighth GTAC (Google Test Automation Conference) will be held on October 28-29, 2014 at Google Kirkland! The Kirkland office is part of the Seattle/Kirkland campus in beautiful Washington state. This campus forms our third largest engineering office in the USA.



GTAC is a periodic conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse the presentation abstracts, slides, and videos from last year on the GTAC 2013 page.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Blogs

GTAC 2014: Call for Proposals & Attendance

Google Testing Blog - Thu, 02/19/2015 - 15:21
Posted by Anthony Vallone on behalf of the GTAC Committee

The application process is now open for presentation proposals and attendance for GTAC (Google Test Automation Conference) (see initial announcement) to be held at the Google Kirkland office (near Seattle, WA) on October 28 - 29th, 2014.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend, you’ll be able to watch the conference from your computer.

Speakers
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations are 45 minutes and lightning talks are 15 minutes. Speakers should be prepared for a question-and-answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 300 applicants for the event.

Deadline
The due date for both presentation and attendance applications is July 28, 2014.

Fees
There are no registration fees, and we will send out detailed registration instructions to each invited applicant. Meals will be provided, but speakers and attendees must arrange and pay for their own travel and accommodations.

Update: Our contact email was bouncing - this is now fixed.



Categories: Blogs

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Thu, 02/19/2015 - 15:21
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:
  developers.google.com/gtac

For those that have already signed up to attend or speak, we will contact you directly in mid August.

Categories: Blogs