
Feed aggregator

Reviewing "Context Driven Approach to Automation in Testing"

Chris McMahon's Blog - Fri, 06/24/2016 - 01:45


I recently had occasion to read the "Context Driven Approach to Automation in Testing". As a professional software tester with extensive experience in test automation at the interface level (both UI and API) over the last decade or more, for organizations such as Thoughtworks, Wikipedia, Salesforce, and others, I found it a nostalgic mixture of FUD (Fear, Uncertainty, Doubt), propaganda, ignorance, and obfuscation.

It was weirdly nostalgic for me: take away the obfuscatory modern propaganda terminology and it could be an artifact directly out of the test automation landscape circa 1998, when vendors, in the absence of any competition, foisted broken tools like WinRunner and SilkTest on gullible customers, when Open Source was exotic, and when the World Wide Web was novel. Times have changed since 1998, but the CDT approach to test automation has not changed with them. I'd like to point out the deficiencies in this document as a warning to people who might be tempted to take it seriously.

The opening paragraph is simply FUD. If we take out the opinionated language:

poorly applied
terrible waste
confusion
pain
hard
shallow, narrow, and ritualistic
pandemic, rarely examined, and absolutely false

what's left is "Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests". This is of course trivially true, if not an outright tautology. The authors then proceed to demonstrate how little they know about such complexities.

The sections that follow, down to the bits about "Invest in...", are mostly propaganda, with some FUD and straw-man arguments about test automation strewn throughout. ("The only reason people consider it interesting to automate testing is that they honestly believe testing requires no skill or judgment." Please, spare me.) If you've worked in test automation for some time (and if you can parse the idiosyncratic language), there is nothing new to read here; this was all answered long ago. Again, much of these ten or so pages brought strong echoes of the state of test automation in the late 1990s. If you are new to test automation, consider this part of the document an obsolete, historical look into the past. There are better sources for understanding the current state of test automation.

The sections entitled (as of June 2016) "Invest in tools that give you more freedom in more situations" and "Invest in testability" are actually all good basic advice; I can find no fault in any of it. Unfortunately, the example shown in the sections that follow ignores every single piece of that advice.

Not only does the example that fills the final part of the paper ignore every bit of advice the authors give; it is as if they have chosen a project doomed to fail, from the odd nature of the system they've chosen to automate to the wildly inappropriate tools they've chosen to automate it with.

Their application to be tested is a lightweight text editor they've gotten as a native Windows executable. Cursory research shows it is an open source project written in C++ and Qt, and the repo on github has no test/ or spec/ directory, so it is likely to be some sort of cowboy code under there. I assume that is why they chose it instead of, say, Microsoft Word or some better engineered application.

Case #1 and Case #2 describe some primitive mucking around with grep, regular expressions, and configuration. It would have been easier just to read the source on github. If this sort of thing is new to you, you probably haven't been doing this sort of work long, and I would suggest you look elsewhere for lessons.

Case #3 is where things get bizarre. First they try automating the editor with something called "AutoHotKey", which seems to be some sort of ad-hoc collection of Windows API client calls, and which, according to the AutoHotKey project history, was wildly buggy as of late 2013 and has had only intermittent maintenance since then. I would not depend on this tool in a production environment.

That fails, so then they try some Ruby libraries. Ruby's support for Windows is notoriously bad; it has been a sticking point in the Ruby community for years, and any serious Ruby programmer would know that. Ruby is likely the worst possible language choice for a native Windows automation project. If all you have is a hammer...

Then they resort to some proprietary tool from HP. You can guess the result.

Again, assuming someone would want to automate a third-party Windows/Qt app at all, anyone serious about automating a native Windows app would use a native Windows language such as C# or VisualBasic.NET, not some hack like AutoHotKey. C# and VisualBasic.NET are really the only reasonable choices for such a project.

It is as if this project has been deliberately or naively sabotaged. If this was done deliberately, then it is highly misleading; if naively, then it is simply sad.

Finally, I have to point out (relevant to the article section "Invest in testability", and again with strong shades of 1998) that this paper completely ignores the undeniable fact that the vast majority of modern software development takes place on the web, with the UI appearing in a web browser and APIs offered from servers over a network. This article makes no mention that Selenium/WebDriver is a UI automation standard adopted by the World Wide Web Consortium (W3C), that the WebDriver automation interface is fully supported by every major browser vendor (Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, Opera, and most recently Apple Safari), or that the Selenium API is fully supported in five programming languages (C#, Java, Ruby, Python, and JavaScript) and partially supported in many more.
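For anyone who has never seen it, here is a minimal sketch of what driving a browser through the Selenium API looks like, using the Ruby bindings. It assumes the selenium-webdriver gem and a local Firefox install; the page and element names are illustrative only.

```ruby
require 'selenium-webdriver'

# Start a browser session through the WebDriver interface.
driver = Selenium::WebDriver.for :firefox
begin
  driver.get 'https://www.wikipedia.org'
  # Locate the search field, type a query, and submit the form.
  search = driver.find_element(name: 'search')
  search.send_keys 'test automation'
  search.submit
  puts driver.title # observe the result
ensure
  driver.quit # always end the session, even on failure
end
```

The same script, give or take the constructor argument, runs against Chrome, Firefox, Internet Explorer, Opera, or Safari; that portability is the point of the standard.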

Ultimately, this article is mostly FUD, propaganda, and obfuscation. The parts that are not actually wrong or misleading are naive and trivial. Put it like this: if I were considering hiring someone for a testing position, and they submitted this exercise as part of their application, I would not hire them, even for a junior position. I would feel sorry for them.



Categories: Blogs

Automatic Problem Detection with Dynatrace

Can you imagine automatic problem detection being a reality?! What would it take to make it possible, practical and functional? Over the years we at Dynatrace have seen a lot of PurePaths being captured in small to very large applications showing why new deployments simply fail to deliver the expected user experience, scalability or performance. Since I started my […]

The post Automatic Problem Detection with Dynatrace appeared first on about:performance.

Categories: Companies

Don’t miss the latest in load testing and performance testing at this webinar

HP LoadRunner and Performance Center Blog - Thu, 06/23/2016 - 21:26


Keep reading to better understand the new capabilities of LoadRunner, Performance Center, and Network Virtualization v12.53, and attend the complete webinar.

Categories: Companies

Agile Hiring, Load Testing & Goal Management in Methods & Tools Summer 2016 issue

SQA Zone - Thu, 06/23/2016 - 16:09
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Summer 2016 issue that discusses hiring for agility, load testing scripts errors, managing with goals on every level a ...
Categories: Communities

Catching Bugs Too Late

Sauce Labs - Thu, 06/23/2016 - 16:00

Putting quality first is critical. Teams must take ownership of quality, but to do so they have to create an environment that allows them to build quality in, instead of testing it out much further down the road to delivery. Finding bugs late is too costly if you aren’t yet at the point of being able to prevent them (implementing BDD). Ensure you can find them early.

Staying green is hard work!

I’ve seen many things change this year. My daughter began kindergarten. (How did THAT happen so fast?!) I also began blogging, and our department is trying to shift from Waterfall to Agile and Continuous Delivery, with teams shifting to own quality instead of tossing code over to QA… all great changes. But one thing has remained the same. We were still finding bugs late.

I’ve written many times about the importance of quality first. But how did our team take action on that? First, we HAD to have automation. Purely manual testing was just not going to cut it anymore. Don’t get me wrong, I still very much value human-based testing. But frankly, it can catch things too late. So, enter our automated tests. We began with what we called our pre-commit tests. These must be run — you guessed it — before you commit code! Yes, they are slower than unit or integration tests. But they take around 7-8 minutes (allowing time to go grab some coffee, stretch, whatever). They are our most critical features and workflows. Aside from running locally before committing, they are also scheduled and running many times over during the day with all the commits going on. Once we established that set of tests, we began our work on more user acceptance tests – still hero workflows, but trying to keep in mind the fine line between useful UI tests and too many tests (think of the testing pyramid).
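Ashley doesn't name her team's tooling, but the shape of a tagged pre-commit suite is easy to sketch. Here is one hypothetical way to wire it up with RSpec and Rake; the :precommit tag and the task name are assumptions for illustration, not her team's actual setup.

```ruby
# Rakefile - a hypothetical "pre-commit" task: run only the specs tagged
# as critical-path, keeping the feedback loop in the 7-8 minute range.
require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:precommit) do |t|
  t.rspec_opts = '--tag precommit' # run only examples tagged :precommit
end
```

Developers run `rake precommit` before every commit, and the same task can be scheduled on the CI server so the suite also runs many times a day as commits land.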

Unfortunately we entered what I call the “dark period” where our once green tests were failing. The reasons are many. (That’s another story for another day.) Resources were shifted (or flat-out gone), and priorities changed. Long story short, we had no one available to either write tests or tend to them. It felt like we were going back to square one. People didn’t trust the tests. If you can’t trust the tests, what’s the point?

Fast forward several months, and everyone recognizes we need the automated tests. We are in the process now of stabilizing our tests. We focused on those pre-commits first and got them green again – yay! They are so green now, that when there’s a failure, we know it is something in the code (and we don’t automatically assume that it’s the test). Now we are moving on to the other tests.

It works! It really works!

Once we were stable on those critical tests, we had to figure out how to get people to care. I was suddenly in the business of sales!

First, we had to show the tests were stable – show everyone they weren’t flaky, suffering from timing issues, etc. We had about twenty solid builds of GREEN. Pretty! But even better than the nice soothing green on our Jenkins dashboard, we had stability.

Then, we had to show they were catching things. (It seems counterintuitive, wanting to see your test suite fail, but stay with me a minute). My team (consisting of one other person) was constantly running the tests – even locally, between the scheduled runs on Jenkins. We recruited a few engineers to run these tests prior to committing their code. Then came the bugs – and our tests caught them! At first, we held our breath as we debugged to see if it was the test before alerting the engineering managers. (It wasn’t! We found a bug!) Since teams had originally deemed the workflows as critical, these bugs were prioritized quickly, fixed, and we were back to green.

Don’t find them too late—or you’ll pay

Automation has been critical to our success. While we are still working on it, having a set of useful tests (even a small set) has proven its worth. We have caught several bugs that otherwise would not have been found until up to two weeks later. Why does this matter? (Greg Sypolt discusses the cost of a bug depending on when it is found in this blog post, based on research presented by IBM at the AccessU Summit 2015.) Say you find a bug when running locally as you’re still working on that feature – that’s $25. Wait until a test cycle? $500. Find the bug in production? You’re looking at a cool 15 grand. That’s right. $15,000.
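To make the arithmetic concrete, here is a toy sketch using the figures above; the bug count is invented purely for illustration.

```ruby
# Cost of the same bug, by the phase in which it is found (figures above).
COST_BY_PHASE = {
  local_run:  25,      # caught while still working on the feature
  test_cycle: 500,     # caught in a later test cycle
  production: 15_000,  # caught by a customer
}

bugs = 10 # hypothetical: ten bugs caught locally instead of in production
savings = bugs * (COST_BY_PHASE[:production] - COST_BY_PHASE[:local_run])
puts "Catching #{bugs} bugs locally rather than in production saves $#{savings}"
# => Catching 10 bugs locally rather than in production saves $149750
```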

As we stabilize our tests, we are reducing the risk of finding bugs in later cycles (whether testing or in production). That adds up. The reality is that you will introduce bugs as you code. It happens. But WHEN you find them is the game changer. Sure, it takes a lot of effort—an effort that many underestimate—but it will save you in the end.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and will preach the value of Test-Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

Categories: Companies

Fall Selenium Conf, Save the Date & Call for Speakers!

Selenium - Thu, 06/23/2016 - 14:09

We’re excited to announce that we’ve finally determined where and when Selenium Conf will be happening this Fall.

Our initial goal was to bring the event to a new country, but for a number of reasons that proved more challenging than we’d hoped. In 2012 we held the 2nd annual Selenium Conf in London, and we’re pleased to be bringing the conference back there this year!

The conference will be held at The Mermaid in downtown London on November 14-16:

  • The 14th will be all-day pre-conference workshops
  • The 15th-16th will be the conference

Go here to sign up for the email list for conference updates (e.g., when tickets go on sale) and to submit a talk. The call for speakers is open from now until July 29th.


Categories: Open Source

Language Plugins Rock SonarQube Life!

Sonar - Thu, 06/23/2016 - 13:43

SonarAnalyzers are fundamental pillars of our ecosystem. The language analyzers play a central role, but the value they bring isn’t always obvious. The aim of this post is to highlight the ins and outs of SonarAnalyzers.

The basics

The goal of the SonarAnalyzers (packaged either as SonarQube plugins or in SonarLint) is to raise issues on problems detected in source code written in a given programming language. The detection of issues relies on the static analysis of source code and the analyzer’s rule implementations. Each programming language requires a specific SonarAnalyzer implementation.

The analyzer


The SonarAnalyzer’s static analysis engine is at the core of source code interpretation. The scope of the analysis engine is quite large. It goes from basic syntax parsing to the advanced determination of the potential states of a piece of code. At minimum, it provides the bare features required for the analysis: basic recognition of the language’s syntax. The better the analyzer is, the more advanced its analysis can be, and the trickier the bugs it can find.

Driven by the will to perform ever more advanced analyses, the analyzers are continuously improved. New ambitions in terms of validation require constant development effort on the SonarAnalyzers. In addition, regular updates are required to keep up with each programming language’s evolution.

The rules



The genesis of a rule starts with the writing of its specification. The specification of each rule is an important step. The description should be clear and unequivocal in order to be explicit about what issue is being detected. Not only must the description of the rule be clear and accurate, but code snippets must also be supplied to demonstrate both the bad practice and its fix. The specification is available from each issue raised by the rule to help users understand why the issue was raised.

Rules also have tags. The issues raised by a rule inherit the rule’s tags, so that both rules and issues are more searchable in SonarQube.

Once the specification of a rule is complete, next comes the implementation. Based on the capabilities offered by the analyzer, rule implementations detect increasingly tricky patterns of maintainability issues, bugs, and security vulnerabilities.
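The real rule implementations are written against the SonarQube plugin API, which is Java; but the basic shape of a rule, a specification plus detection logic that raises issues with a file and line, can be illustrated with a deliberately toy analyzer. This Ruby sketch is purely illustrative and not the actual API.

```ruby
# A toy "rule": flag empty rescue clauses, reporting file, line, and the
# rule's key and description, the way an analyzer reports an issue.
RULE = {
  key:         'toy:empty-rescue',
  description: '"rescue" clauses should not be left empty',
  pattern:     /^\s*rescue\b[^\n]*\n\s*end\b/
}

def analyze(path)
  source = File.read(path)
  source.scan(RULE[:pattern]) do
    # Translate the match offset into a 1-based line number.
    line = source[0, Regexp.last_match.begin(0)].count("\n") + 1
    puts "#{path}:#{line} [#{RULE[:key]}] #{RULE[:description]}"
  end
end

analyze('example.rb') if File.exist?('example.rb')
```

A real analyzer works on a parsed syntax tree (and, for the advanced rules, on the possible states of the code) rather than on regular expressions, which is exactly why the engine described above matters.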


Continuous Improvement


By default, SonarQube ships with three SonarAnalyzers: Java, PHP, and JavaScript.
The analysis of other languages can be enabled by the installation of additional SonarAnalyzer plugins.

The SonarQube community officially supports 24 language analyzers. Currently, about 3,500 rules are implemented across all SonarAnalyzers.

More than half of SonarSource developers work on SonarAnalyzers. Thanks to the efforts of our SonarAnalyzer developers, there are new SonarAnalyzer versions nearly every week.

A particular focus is currently placed on the Java, JavaScript, C#, and C/C++ plugins. The target is to deliver a new version of each one every month, and each delivery embeds new rules.

In 2015, we delivered a total of 61 new SonarAnalyzer releases, and so far this year, another 30 versions have been released.


What it means for you


You can easily benefit from the regular delivery of SonarAnalyzers. At each release, analyzer enhancements and new rules are provided. But you don’t need to upgrade SonarQube to upgrade your analysis; as a rule, new releases of each analyzer are compatible with the latest LTS.

When you update a SonarAnalyzer, the static analysis engine is replaced and new rules are made available. At this step, though, you are not yet benefiting from those new rules: during the update of your SonarAnalyzer, the quality profile remains unchanged, and the rules executed during analysis are the same ones you previously configured in your quality profile. This means that if you want to benefit from the new rules, you must update your quality profile to add them.

Categories: Open Source

DevOps delivers greater speed and quality with Performance built-in

HP LoadRunner and Performance Center Blog - Thu, 06/23/2016 - 05:25


The evolution of DevOps and Performance Engineering has accelerated to an intersection. As a result, some questions have popped up on how they come together. Keep reading to learn more about this relationship.

Categories: Companies

Fixing SQL Server Plan Cache Bloat with Parameterized Queries

Developers often believe that database performance and scalability issues they encounter are issues with the database itself and, therefore, must be fixed by their DBA. Based on what our users tell us, the real root cause, in most cases, is inefficient data access patterns coming directly from their own application code or by database access frameworks […]

The post Fixing SQL Server Plan Cache Bloat with Parameterized Queries appeared first on about:performance.

Categories: Companies

Nexus Repository 3.0: Most Frequently Asked Questions - Answered

Sonatype Blog - Wed, 06/22/2016 - 21:52
Nexus Repository 3.0 has hit the streets and continues to spur insightful discussions on where we're headed with the platform. We recently held a one-hour demonstration where we had off-the-chart community engagement with interactive Q&A. If you missed the demonstration, watch the recording here....

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Meeting Minutes from Velocity 2016

It’s Velocity time, and the people who care about Performance, Continuous Delivery and DevOps are gathered in sunny Santa Clara, California. Thankful to be here, I want to share my notes with our readers who don’t have the chance to experience it live. Let’s dig right into it! Interview with Steve Souders himself

Categories: Companies

NEW EBOOK: Discover 20 TestTrack Superpowers

The Seapine View - Wed, 06/22/2016 - 13:00

In March of this year, we marked TestTrack’s 20th birthday. To celebrate this milestone birthday, we’re showcasing 20 of TestTrack’s “superpowers” in…


It’s no secret TestTrack has grown more powerful in the past 20 years. We’ve added more features, evolving TestTrack into a leading application lifecycle management (ALM) tool and a true Champion of Quality.

TestTrack: Champion of Quality is a fun, informative ebook that explores 20 of those features, such as email tracking, enhanced testing, item mapping rules, and more. We’ve even included a look at TestTrack’s newest muscular feature, Word export!

Inside, you’ll learn how to:

  • Beat tasks into submission with task boards!
  • Alert your team to danger with field value styles!
  • Slice through your data with filters!

Do you know everything TestTrack is capable of? Download your free copy of TestTrack: Champion of Quality and see what you’ve been missing!

Categories: Companies

Who I am and where I am June 2016

Chris McMahon's Blog - Wed, 06/22/2016 - 07:09


From time to time I find it helpful to mention where I am and how I got here. I have been pretty quiet since 2010 but I used to say a lot of stuff in public.

For the past year I have worked for Salesforce.org, formerly the Salesforce Foundation, the independent entity that administers the philanthropic programs of Salesforce.com. My team creates free open source software for the benefit of non-profit organizations. I create and maintain automated browser tests in Ruby, using Jeff "Cheezy" Morgan's page_object gem. I'm a big fan.
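For readers unfamiliar with the page_object style, here is a minimal sketch of what such a test's building block looks like; it assumes the page-object and watir gems, and the page, URL, and element names are invented for illustration, not taken from the actual suite.

```ruby
require 'watir'
require 'page-object'

# A page object: element declarations generate accessors, and the class
# wraps them in an intention-revealing workflow method.
class DonationPage
  include PageObject

  page_url 'https://example.org/donate' # illustrative URL

  text_field(:amount, id: 'amount')
  button(:donate, id: 'donate-button')

  def donate_amount(dollars)
    self.amount = dollars.to_s # setter generated by text_field
    donate                     # click method generated by button
  end
end

browser = Watir::Browser.new :firefox
page = DonationPage.new(browser)
page.goto
page.donate_amount(25)
browser.close
```

The test talks to `donate_amount`, not to element locators, so when the page changes only the page object needs updating.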

My job title is "Senior Member of the Technical Staff, Quality Assurance". I have no objection to the term "Quality Assurance"; that term accurately describes the work I do. I am known for having said "QA Is Not Evil".

Before Salesforce.org I spent three years with the Wikimedia Foundation, working mostly with Željko Filipin on a similar browser test automation project, but a much larger one.

I worked for Socialtext, well known in some circles for excellent software testing. I worked for the well-known agile consultancy Thoughtworks for a year, just when the first version of Selenium was being released. I started my career testing life-critical software in US 911 telecom systems, both wired/landline and wireless/mobile.

I have been 100% remote/telecommuting since 2007. Currently I live in Arizona, USA.

I used to give talks at conferences, including talks at Agile2006, Agile2009, and Agile2013. I've been part of the agile movement since before the Manifesto existed. I attended most of the Google Test Automation Conferences held in the US. I have no plans to present at any open conferences in the future.

I wrote a lot about software test and dev, mostly around 2006-2010. You can read most of it at stickyminds and TechTarget, and a bit at PragProg.

I hosted two peer conferences, in 2009 and 2010, in Durango, Colorado, called "Writing About Testing". They had some influence on the practice of software testing at the time, and still resonate from time to time today.

I create UI test automation that finds bugs. Before Selenium existed I was user #1 for WATIR, Web Application Testing In Ruby. I am quoted in both volumes of Crispin and Gregory's Agile Testing, and I am a character in Marick's Everyday Scripting.
Categories: Blogs

NeoSense 2.0 Released

Software Testing Magazine - Tue, 06/21/2016 - 18:06
Neotys has announced NeoSense 2.0, an enhanced version of its synthetic monitoring solution for application performance and availability. The release of NeoSense 2.0, available with a free trial, helps teams increase the level of automation in their continuous performance validation workflow and pinpoint the root cause of application performance issues with greater accuracy. NeoSense 2.0 also adds an infrastructure monitoring dimension to its advanced synthetic monitoring capabilities so that users can get a deeper understanding of the underlying causes of an application’s performance problems.

NeoSense 2.0 key enhancements:

  • Infrastructure Monitoring Dimension – NeoSense’s current synthetic transaction monitoring (STM) dimension provides insight on application performance and availability from the end-user perspective. NeoSense 2.0 adds a new infrastructure monitoring (IM) dimension to help identify the source of performance problems. IM works in conjunction with STM to alert teams of performance, health and availability issues on their applications. Every NeoSense subscription includes the new infrastructure monitoring dimension at no extra cost.
  • Automated User Path Deployment – NeoSense 2.0 automates user path deployment, even in team collaboration environments. User paths designed in NeoLoad or the Neotys Design Studio can now be sent to NeoSense with a single click. If Continuous Integration servers like Jenkins and Bamboo are used to run automated NeoLoad performance tests, the user paths from those tests can be automatically deployed to NeoSense when tests pass. Finally, teams that collaborate on user path designs can now pull those latest designs from the Neotys Team Server (or any SVN server) to NeoSense [...]
Categories: Communities

CloudBees and ClicTest Form Partnership

Software Testing Magazine - Tue, 06/21/2016 - 17:48
ClicTest has announced a partnership with CloudBees. ClicTest is now a CloudBees Platinum Reseller and Services Partner. The partnership allows ClicTest to deliver a continuous testing platform, helping organizations to mitigate operational risks while enhancing planning, ensuring quality in software testing and, ultimately, accelerating software delivery. Continuous delivery with the CloudBees Jenkins Platform automates the release of software from the development environment through to the production environment, supports agile practices and can accelerate time-to-release dramatically – from weeks to hours. As part of the software delivery process, the ClicTest automation testing solution and cloud platform helps IT teams to identify functional, performance and security issues. “CloudBees and ClicTest together aim to deliver much higher software quality, reduce the time-to-release and help organizations to successfully adopt continuous testing as part of a continuous delivery process. With ClicTest and the CloudBees Jenkins Platform, our customers can fully automate build and testing processes and beyond – all the way through to production,” said Venkat Akula, managing director at ClicTest.
Categories: Communities

Usability Testing at the Cafe

Testing TV - Tue, 06/21/2016 - 16:16
Surprisingly, up to 85% of core usability problems can be found by observing just 5 people using your application. Conducting quick usability testing at a cafe is very effective, cheap, and doesn’t require any special tools.

Resources:

  • Why You Only Need to Test with 5 Users
  • Usability testing questionnaire
Categories: Blogs

The Sauce Journey – Emergent Leaders

Sauce Labs - Tue, 06/21/2016 - 15:00

In my last blog post I wrote about the way in which moving to SCRUM teams fosters communication, transparency, and trust, both internally among team members, and externally with customers. Achieving open communication like this is one of the main goals of Agile, but just as important is the development of leadership within the SCRUM teams.

Ideally, every SCRUM team is self-managing in regards to their own work. The Product Owner determines what will get done; the tactical decisions about how it gets done should be left up to the team. There is a simple philosophy behind this: those whose work focuses on a specialized area of the product know better how to improve it, and how much work will be involved, than anyone from outside of that group. The product owner within the team is there to advocate for the customer, and to decide when a minimum viable product is ready for release, but they don’t tell the team what to do or how to do it.

Open communication, transparency, and trust are essential for teams to become self-managing, because these are the foundational conditions that are necessary for the emergence of leaders. Leadership in SCRUM teams is not about titles, it’s about ideas. It’s about contributing to team communications, making decisions based on those communications, and then being able to execute. Because SCRUM leadership is based on an individual’s ability to listen, exercise judgement, and communicate, anyone can emerge as a leader, regardless of whether they have been doing software development for two years or twenty.

When I started at Sauce Labs, there were clear leaders in the Engineering organization, but their efforts were spread thin because they were the obvious leaders, and everyone turned to them for solutions and expertise. One of my top architects was the “official” owner of one major infrastructure component of our service. He was also the “unofficial” owner of a second service. In his “spare time” he had developed a customer-facing app, so he was de facto owner of that. And, since he had knowledge about other components of the service, he was constantly interrupted with questions from junior developers. For us to move further down the road with our development goals, we not only needed a way to give these leaders focus in their work, but we needed to develop new leaders, and provide the junior members of our organization with opportunities for growth. This was one of the main reasons for implementing SCRUM; while one goal was to bring a more rationalized approach to our development efforts, which we could quantify to management, the larger qualitative goal was to create an environment that would foster innovation and the emergence of a new cadre of leadership.

Naturally, not everyone adapts to this kind of cultural change. Those who have worked in a Waterfall, or even a Fast Waterfall, methodology are used to being handed instructions, executing on those instructions, and moving on to the next task. If this is the way you have been trained to do things, SCRUM can seem like chaos: where are the functional specs, the technical specs, how am I supposed to know what to do? When we implemented SCRUM at Sauce there were a lot of questions, some resistance, and even some defections. This is all to be expected. Some personalities work better as individuals than as members of a team, and some are more comfortable with self-direction than others. What’s important is that implementing SCRUM helped all of us learn where we are as individuals when it comes to our professional activities, what gives us satisfaction and purpose in our work, and what we are really like within our teams.

Joe Alfaro is VP of Engineering at Sauce Labs. This is the fourth post in a series dedicated to chronicling our journey to transform Sauce Labs from Engineering to DevOps. Read the first post here.

Categories: Companies

Save 30% on Ranorex Runtime Floating Licenses

Ranorex - Tue, 06/21/2016 - 14:24

Don’t miss out on this fantastic offer: only until June 30, 2016 can you save 30% on Ranorex Runtime Floating Licenses! This offer celebrates our much-requested and long-awaited feature Ranorex Remote, which is available with our latest major software release, Ranorex 6.0.

A Ranorex Runtime Floating License enables you to run tests on additional physical or virtual machines. Now, Ranorex Remote takes remote test execution a step further. Using this new feature, you can:

  • deploy tests to Ranorex Agents for remote test execution directly out of Ranorex Studio with just a few clicks. This makes it easier to simultaneously run multiple automated tests in different test environments and configurations.
  • continue using your local machine during remote test execution, as remote testing won’t block your machine. You’ll receive an automatic notification once the report is ready.
  • share Ranorex Agents with your team.

Remote test execution has never been this easy! All you need is a Ranorex Runtime Floating License to set up a Ranorex Agent and use Ranorex Remote. So don’t just let this offer pass by, and order your Ranorex Runtime Floating License today!

Order a Ranorex Runtime Floating License

The post Save 30% on Ranorex Runtime Floating Licenses appeared first on Ranorex Blog.

Categories: Companies

Delivering High Quality Applications in a Mobile World

Telerik TestStudio - Tue, 06/21/2016 - 13:45
Testing mobile applications is not an easy process. There are many common challenges that must be considered before testing a mobile application. This blog post will help you get started. (Shravanthi Alimilli)
Categories: Companies

Too controversial?

On May 11, 2016, TestNet (*) held its spring conference with “Strengthen your foundation: new skills for testers” as the central theme. The call for papers that was sent out made me frown. It said:

“In the final keynote of the TestNet autumn event, speaker Rini van Solingen referred to the end of software testing as we know it. ‘What one can learn in merely four weeks, does not deserve to be called a profession’, he stated. But is that true? Most of our skills, we learn on the job. There are many tools, techniques, skills, hints and methods not typical for the testing profession but essential for enabling us to do a good job nonetheless. Furthermore the testing profession is constantly evolving as a result of ICT and business trends. Not only functional testing, but also performance, security or other test varieties. This presses us to expand our knowledge, not just the testing skills, but also of the contexts in which we do our jobs. The TestNet Spring Event 2016 is about all topics that are not addressed in our basic testing course, but enable us to do a better job: knowledge, skills, experience.”

I think that there are a lot of skills that are not addressed in our “basic testing course” but should have been. I am talking about basic testing skills! So I wrote an abstract for a keynote for the conference:

The theme for the spring event is “Strengthen your foundation: new skills for testers”. My story takes a step back: to the foundation! Because I think that the foundation of most testers is not as good as they think. The title would then be: “New skills for testers: back to basics!”

Professional testers are able to tell a successful story about their work. They can cite activities and come up with a thorough overview of the skills they use. They are able to explain what they do and why. They can report progress, risk, and coverage at any time. They will gladly explain what oracles and heuristics they use, know everything about the product they are testing, and deliberately try to learn continuously.

It surprises me that testers regularly can’t give a proper definition of testing, let alone describe what testing is. A large majority of people who call themselves professional testers cannot explain what they do when they are testing. How can anyone take a tester seriously if he or she cannot explain what he or she is doing all day? Try it: go to one of your testing colleagues and ask what he or she is doing and why it contributes to the mission of the project. Nine out of ten testers I’ve asked this simple question start to stutter.

What exactly do you do when you use a “data combination test” or a “decision table”? What skills do you use? “Common sense” does not answer the question in this context, because it is not a skill, is it? I think of: modeling, critical thinking, learning, combining, observing, reasoning, and drawing conclusions, just to name a few. Looking in detail at the skills you are actually using helps you recognize which skills you could or should train. A solid foundation is essential to build on in the future!
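To make that first question concrete, here is an invented decision table for a login rule, expressed as data driving test cases; the conditions and outcomes are purely illustrative. Even in this toy form, notice how much skill hides in it: modeling the conditions, choosing which combinations matter, and judging each observed result.

```ruby
# An illustrative decision table: each row pairs a combination of
# conditions with the expected outcome, and each row becomes a test case.
#        valid_user  valid_password  account_locked  expected outcome
TABLE = [
  [true,  true,  false, :logged_in],
  [true,  false, false, :bad_credentials_error],
  [false, true,  false, :bad_credentials_error],
  [true,  true,  true,  :account_locked_error]
]

TABLE.each do |user_ok, pass_ok, locked, expected|
  puts format('user_ok=%-5s pass_ok=%-5s locked=%-5s -> expect %s',
              user_ok, pass_ok, locked, expected)
end
```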

How can you learn the right skills if you do not know what skills you are using in the first place? In this presentation I will take the audience back to the core of our business: skills! By recognizing these skills and training them, we become able to think and talk about our profession with confidence. The ultimate goal is to tell a good story about why we test and the value it adds.

We need a solid foundation to build on!

My keynote wasn’t selected. So I sent it in as a normal session, since I am really bothered by the lack of insight in our community. But it didn’t make it onto the conference program as a normal session either. Why? Because it is too controversial, they told me. After I applied for the keynote, the chairman called me to tell me that they weren’t going to ask me to do a keynote because they did not want a “negative” sound on stage. I guess I can imagine that you do not want to start the day with a keynote that destroys your theme by saying that we need to strengthen our foundation first before moving on.

But why is this story too controversial for the conference at all? I guess it is (at least in the eyes of the program committee) because we don’t like to admit that we lack skills, that we don’t really know how to explain testing. I wrote about that before here. It bothers me that we think our foundation is good enough, while it really isn’t! We need to up our game, and being nice and ignoring this problem isn’t going to help us. A soft and nice approach doesn’t wake people up. That is why I wanted to shake this up a bit: to wake people up and give them some serious feedback… I wrote about serious feedback before here. But the Dutch Testing Community (represented by TestNet) finds my ideas too controversial…

 

(*) TestNet is a network of, by and for testers. TestNet offers its members the opportunity to maintain contacts with other testers outside the immediate work environment and share knowledge and experiences from the field.

Categories: Blogs
