As manager of a test team I try to get everyone on the team presenting regularly, to the team itself at least. A significant part of the tester role is communication and practising this in a safe environment is useful. There's double benefit, I like to think, because the information presented is generally relevant to the rest of the team. We most often do this by taking turns to present on product features we're working on.
I encourage new members of the team to find something to talk about reasonably early in their time with us too. This has some additional motivations, for example to help them to begin to feel at ease in front of the rest of us, and to give the rest of us some knowledge of, familiarity with and empathy for them.
I invite (and encourage the team to invite) members of staff from other teams to come to talk to us in our weekly team meeting as well. Again, there are different motivations at play here but most often it is simple data transfer. I used to do this more, and with a side motivation of building links across teams and exposing us to new and perhaps unexpected ideas, or generating background knowledge. But at one of our retrospectives it became clear that some of the testers felt that some of the presentations were not relevant enough to them and they'd rather get on with work.
It pays to listen to your team.
So, along with Harnessed Tester, I set up Team Eating which is a cross-company brown bag lunch. And it's just reached its first anniversary! And, yes, I know its name is a terrible pun. (But I love terrible puns.)
Here's a list of the topics we've had in the first year:
- Acceptance Testing and Benefits of BDD in Agile Methodology
- Easing the Pain of Legacy Tests
- IPython Notebook Demo
- Proxy Servers
- You're Having a Laugh
- So what is UX anyway? (And why does it matter to me?)
- Your Testing is a Joke
- Say "hi" to the User Personas
- Testing Responsive Websites
- Why DisplayLink need (Lean) Coffee with their test meetings
- CEWT #2
- Storytelling workshop
We've had three guest speakers (Chris George, Neil Younger and Gita Malinovska) and, as you can see, there's been a bit of a bias towards testing topics, although that reflects the interests of the speakers more than anything else. There are no constraints on the format (beyond practical ones) so we've had live demos, more traditional talks and, this week, an interactive storytelling workshop.
The response from the company has been good and we've had attendees from all teams and presenters from most. The more popular talks were probably those by Roger, our new (and first) UX specialist. He's done two: first to introduce the company to some ideas about what UX is and then later on how he was beginning to apply his expertise to a flagship project.
I've been really pleased with the atmosphere. There's a positive vibe from people who want to be there listening to their colleagues who have something that they want to share.
One surprise to me has been a reluctance from some of the audience to have their lunch in the meetings. Some people, I now find, consider it impolite to be eating while the presenter is talking. Given the feedback from my team which prompted us to start Team Eating, I was keen that it shouldn't take time away from participants' work and so fitting it into a lunch break seemed ideal.
But although eating is in the name, the team part is much the more important to me and I feel like it's serving the kind of purpose that I wanted in that respect. Quite apart from anything else, I'm personally really enjoying them and so here's to the next 12 months of rapport-building, information-sharing, fun-having Team Eating.
Are you fascinated by the results you have delivered by utilizing DevOps principles, but still trying to figure out how to build in performance? Are you looking to have performance automated throughout your continuous delivery and continuous testing lifecycle? Keep reading to learn how to accomplish this.
Yet, just like Security testing, we can break Performance testing down into a set of discrete test types under the overall label of Performance. In doing this we give the test team more opportunity to perform a level of Performance testing that draws on their understanding of the system or application under test. Let's take the example of Performance testing a website, as it's easy to get access to those and practise the techniques described. Most Performance testing is either benchmark, because the site is new, or comparative, because some changes have been made and we want to ensure the site is as performant as before. However, that covers performance from the user-facing perspective. To get a complete picture we need to do Performance testing of the infrastructure too. This testing would include both the underlying infrastructure and connected network devices, plus the site exposed to users and the actions they perform. In summary, then, we could break down Performance testing into the following types:

Comparative Performance
- Response Time
- Throughput
- Resource Utilisation

Full System Performance
- Load
- Stress
- Soak

For the purposes of this post, I'm going to ignore Full System Performance and suggest that in this scenario we need to get a third party in to help us out. The comparative Performance testing of the website, however, is perfectly doable by the test team. Let's see what and how.
Response Time Comparison
The user's perception of the time it takes the service to respond to a request they make, such as loading a web page or responding to a search, is the basis for Response Time comparison testing.
Measuring Response Time
Response time should be measured from the start of an action a user performs to when the results of that action are perceived to have completed, for some singular task. The measurement must be taken from when a state change is triggered by the start of an action, such as clicking a link to navigate from one page to another, submitting a search string or confirming a filter they have just configured on data already returned. For services with a web front end, use the F12 developer tools in IE (for example) to monitor timings from request to completion.
1. Open IE, hit F12 and select 'Network', then click on the green > to record
2. Enter the target URL and capture the network information
3. Click on 'Details' and record the total time taken
Test Evidence
A timing in seconds should be taken and recorded as the result in the test case. Multiple time recordings are advisable to ensure there were no lulls or spikes in performance that skew the average result.
---
Throughput Comparison
This measure is the time it takes to perform a number of concurrent transactions. This could be performing a database search across multiple tables or generating a series of reports.
Measuring Throughput
Measuring Throughput from the user's perspective is very similar to measuring Response Time, but in this case Throughput is concerned with measuring the time taken to perform several tasks at once. As with Response Time, the measurement should be taken from the start of an action to its perceived end. A suitable action for Throughput might include the generation of weekly/monthly/yearly reports where data is drawn from multiple tables or calculations are performed on the data before a set of reports is produced. Monitor system responses in the same way as for Response Time comparison above, but also include checks of the dates and timings on artefacts or data produced as part of the test. In this way the user-facing timings plus the system-level timings can be analysed and a full end-to-end timing derived.
Test Evidence
Careful recording of the time taken to complete the task is needed, as with Throughput tests it may not always be obvious when a task has completed. For example, if outputting a series of files, check the created date and time for the first and last files to ensure the total duration is known. Record the results in the relevant test cases, ideally over several runs as suggested for Response Time.
---
Resource Utilisation
When the service is under a certain workload, system resources will be used, e.g. processor, memory, disk and network I/O. It's essential to assess what the expected level of usage is to ensure there is no unacceptable degradation in performance.
Measuring Resource Utilisation
Unlike Response and Throughput comparisons, Resource Utilisation measurement can only be done with tools on the test system that can capture the usage of system resources as tests take place. As testing will not generally need to prove the ability of the service to use resources directly, it's expected this testing will be combined with the execution of other test types, such as Response and Throughput, to assess the use of resources when running agreed tests. Given this, the testing would ideally be done at the same time as Response and Throughput. One example way to monitor resource usage is by using the Performance Monitoring tools in the Windows OS. To allow us to go back to the configuration of the monitors we set up, it's actually best to use the Microsoft Management Console. Here's how:
1. Open the Start/Windows search field and enter MMC to open the Microsoft Management Console
2. In MMC, add the Performance Monitor snap-in via File > Add/Remove Snap-in...
3. Load up the template .msc file that includes the suggested monitors by going to File > Open and adding the .msc file
To do this, save a copy of the file from GitHub: https://github.com/MarkCTest/script-bucket/blob/master/peformance-config.msc
4. The monitoring of system resources will start straight away.
5. To change the time scale being recorded, right-click on 'Performance Monitor', select 'Properties' and change the duration to slightly beyond the length of the test you're running.
Test Evidence
Where possible, extracted logs and screenshots should be kept and added to the test case as test evidence. Some analysis of the results will need to be done and, as with the other comparative test types, several runs are suggested.
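Performance Monitor is the right tool for system-wide measurement on Windows, but the sampling idea behind it is simple. As an illustration only (this sketch samples the Node process running it, not the service under test), the before/after snapshot technique looks like this:

```javascript
// Take before/after snapshots of this process's CPU time and memory, then
// derive the fraction of wall-clock time spent on CPU over the interval.
// (Performance Monitor does the equivalent system-wide, for many counters.)
function snapshot() {
  return { cpu: process.cpuUsage(), rss: process.memoryUsage().rss, t: Date.now() };
}

function utilisation(before, after) {
  const wallUs = (after.t - before.t) * 1000; // wall-clock time in microseconds
  const cpuUs = (after.cpu.user - before.cpu.user) +
                (after.cpu.system - before.cpu.system);
  return { cpuFraction: cpuUs / wallUs, rssBytes: after.rss };
}
```

As noted above, samples like these only mean something when taken while the agreed Response or Throughput tests are running.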
So there we go, it’s easy to do simple performance checks that can then inform the full system performance testing or stand on their own if that’s all you need. Mark.
Customer expectations for their mobile applications are on the rise and meeting them is a demanding task. Keep reading to find out more about Hewlett Packard Enterprise’s comprehensive performance engineering solution for mobile applications to meet your customer demands.
In our new TestRail Highlight blog series we will take an in-depth look at select features of our test management software TestRail. As software projects evolve, every testing team needs to think about how they will handle changing requirements and test cases for their projects and releases over time. Today we will take a look at TestRail's unique features that help teams archive test runs, track the history of test case changes and easily manage baselines for different project release branches.
Not every team needs to use the full capabilities of these features. One of our main design goals for TestRail is to make the application as easy to use as possible, while at the same time providing all the advanced features large teams need to grow with TestRail. For example, basically all teams using TestRail will benefit from TestRail's test case history functionality or the ability to archive testing efforts. But not all (or even most!) teams need to use TestRail's advanced baseline project mode. Please see below for an overview of TestRail's rich project and test case versioning support!
Archiving and Closing Test Runs/Plans
In addition to managing and organizing all current testing efforts, many teams adopt a test management tool so they can easily store, track and review past test results. TestRail makes it very easy to start many test runs against your test cases over time (e.g. for different iterations and project releases), and even start test runs in parallel (e.g. to test different platforms and configurations at the same time). This makes it easy to reuse the same case library without duplicating or synchronizing any test cases.
But if you are changing and improving your test cases for new project versions, wouldn't old test runs show the wrong test case details? Isn't it critical that old test runs show the exact test case details they were tested against? Enter TestRail's Close Run/Plan feature. TestRail provides a unique feature to easily archive and close test runs and test plans. When you close a test run or plan, TestRail actually archives and copies all related test case details behind the scenes. So if you move, update or even delete test cases in the future, closed test runs won't be affected and will still always show the exact test case details your team tested against.
TestRail also prevents testers from making any additional changes to a closed test run and its results, so you can always review your previous test results and be sure that the archived runs weren’t modified. While at first a seemingly simple feature, TestRail’s Close Run/Plan option requires a lot of work behind the scenes to accomplish our goal of making closed test runs immutable. This functionality is critical for basically any testing team and surprisingly few test management tools actually offer similar features that come close to TestRail’s implementation.
Close test runs & plans for accurate test case and result archives
Full Test Case Change History
TestRail makes it easy to update your test cases at any time, so you can directly improve and change your test cases during testing or when you review your case library. This is especially helpful when your application and development are under constant change and you need to adapt to changed requirements quickly. Test case changes are also automatically reflected in all active test runs. So especially teams that use exploratory testing and other agile testing techniques benefit from live updating and improving test cases during testing.
With all the changes teams make to test cases over time, it’s often helpful to know what changes were made, who made those changes and when a test case was updated. TestRail keeps a detailed log of all changes that were made to a test case and you can easily review this log. When you open a test case in TestRail, simply switch to the History tab from the sidebar. Not only will you be able to see when a test case was updated and who made the changes, but you will also see a detailed diff of all changed test case attributes. This also makes it easy to revert any changes by copying previous test case details to the latest version.
Full test case history to track test changes by person, date and content
Project Suite Modes and Baseline Support
When we originally introduced TestRail 4.0 about 18 months ago, we added new suite modes to manage your test cases. In earlier TestRail versions we enabled test suites by default to organize your test cases in top level test suites and sections. As we invested a lot of resources over the years to make test suites much more scalable and as we introduced new view modes, we decided to default to a single test case repository per project. Teams can still enable and use test suites, but for most projects using a single case repository per project and organizing test cases via sections and sub sections works much better.
But this is not the only option we introduced at that time. Lesser known to many TestRail users, we also added new baseline support to projects. Baselines allow you to create multiple branches and test case copies within a project. Internally a baseline is similar to a test suite, and it includes all sections and test cases of the original baseline or master branch it was copied from. You can switch a project to baseline mode under Administration > Projects > edit a project.
Should your team use and enable baseline support for your projects? Probably not! Teams should only use baselines in a specific situation: if you have to maintain multiple major project versions for a long time in parallel. That is, your team needs to maintain and release multiple main branches (e.g. 1.x and 2.x) in parallel for many months or years, and the test case details will be quite different for each baseline over time, as each version will need different test cases. We designed baseline support specifically for this scenario and it’s a great way to manage multiple branches and versions of a project’s test case repository for multiple parallel development efforts.
Using baselines to maintain multiple parallel test case repository branches
The above mentioned features make it easy to manage, track and review different test case versions and changes over time. Not every team will need all of the above features though. Especially the baseline mode should be limited to projects where this is really required. But TestRail offers advanced versioning and history tracking features for all kinds of scenarios and configurations.
In addition to the mentioned tools to manage your test case versions, TestRail also comes with various helpful options to track your releases, iterations and sprints via milestones and test plans. Make sure to try TestRail free for 30 days to improve your testing efforts if you aren’t using it yet, and check out TestRail’s great versioning support.
I've been on a client site where we're using Visual Studio, Selenium WebDriver and C# for not only web front-end but also more system-level automation testing.
As part of getting tooled up and informed as to how our favourite tools work with C# the team and I put together a Cheat Sheet to get us all started quickly. I thought I'd share that with you in a brief post.
Be warned, completing the below takes maybe an hour to get set up and then about 2 to 3 days full on to go through the material. If you're working, have family commitments, etc., expect it to take a week even with great focus.
One of the biggest challenges with adopting a new technology set is simply getting started. How often do we wish for a guiding hand to get us through the first baby steps and off building tests? Well, if you're a Test Architect like me, pretty much all the time! I hope the below helps.
1. Install Selenium IDE on Firefox
No really. As I've said before, the IDE is great for doing web page node discovery and grabbing those names, IDs, CSS classes, etc. quickly and easily. This allows you to do a rough proof-of-concept script to prove the automation flow and then export the Selenese commands, as C# in this case.
You'll then strip out of the code the elements you want and discard the rest. The alternative is you can right-click, Inspect element and read the code. Just use the IDE.
Get it from the Firefox add-ons site here: https://addons.mozilla.org/en-US/firefox/addon/selenium-ide/
2. Get Visual Studio
In order to structure and build out your C# code you'll want to grab a copy of Visual Studio. There are many flavours and if your company is a Microsoft house, go get IT or whomever to provide you a copy. Failing that, or if you're suffering budget restrictions, you can grab a free version.
The best I've found is Visual Studio Community Edition. Once installed you'll need to sign-in with a Microsoft email, part of the universal account / ID approach they now use.
Get Community Edition from here: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
3. Learn C# Basics
If you're new to C# then you'll need to learn a little. There's a great resource over on the Microsoft Virtual Academy which you can take for free: https://mva.microsoft.com/en-US/training-courses/c-fundamentals-for-absolute-beginners-16169?l=Lvld4EQIC_2706218949
I've been told the link can sometimes say the course has expired. If you see that, just hit YouTube: https://www.youtube.com/watch?v=bFdP3_TF7Ks
4. Practice Selenium C#
If you want to jump straight in and start building out a framework rather than mastering C# first, then the site you want is this one: http://toolsqa.com/selenium-c-sharp/
Or possibly better still, watch the Learning Selenium Testing channel on YouTube: https://www.youtube.com/watch?v=qKUfnvG0VHU&feature=youtu.be
5. Practice, Practice, Practice
Once you're set up and running with your first basic tests, be sure to practice, practice and practice some more. Here are some great sites to practice against:
http://the-internet.herokuapp.com/
http://www.seleniumframework.com/demo-sites/
http://www.allthingsquality.com/2011/11/sites-for-practicing-your-web-testing.html
If you need a book then get the only book out there that has pretty much all the answers you need in one place: Selenium Recipes by Zhimin Zhan
How do you test your AngularJS applications? With Protractor. Protractor is an end-to-end testing framework for AngularJS applications. This getting started guide is for new software developers in test who are interested in learning about Protractor testing. By following this getting started guide, you'll understand how to build a firm foundation and learn fundamental Protractor testing for AngularJS applications.
Build a solid foundation
- Online Courses: codeschool.com (monthly fee), codecademy.com (free) and jstherightway.org have excellent interactive courses with reading material, video tutorials, screencasts and programming challenges.
NodeJS is for server-side, and I only suggest taking a quick crash course.
- Online Course: codeschool.com/courses/real-time-web-with-node-js (monthly fee)
- Online Courses: codecademy.com/learn/learn-angularjs (free) and codeschool.com/courses/shaping-up-with-angular-js (monthly fee)
- Book: ng-book.com
Protractor supports AngularJS-specific locator strategies, which allow you to test AngularJS applications without much effort. Protractor is a Node program, which is a wrapper around WebDriverJS. I recommend skimming through the WebDriverJS Users Guide, Protractor API and Protractor Style Guide before writing any tests. Protractor uses Jasmine or Mocha for its test syntax.
- Tutorial: angular.github.io/protractor/#/tutorial (free) and egghead.io/series/learn-protractor-testing-for-angularjs (monthly fee)
How Protractor works and interacts with AngularJS (workflow)
Spec and Configuration Files – Protractor needs two files to run, the test or spec file, and the configuration file. The spec file (test) is written using the syntax of the test framework you choose, such as Jasmine or Mocha along with Protractor API. The configuration file is simple. It tells Protractor how to configure the testing environment – Selenium server, which tests to run, browsers and more.
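To make the two files concrete, here is a minimal, hypothetical pair. The file names, URL and locators are illustrative only; they follow the shape of the official Protractor tutorial rather than any real project.

```javascript
// conf.js - tells Protractor how to configure the testing environment
exports.config = {
  framework: 'jasmine',                            // Jasmine (or Mocha) syntax
  seleniumAddress: 'http://localhost:4444/wd/hub', // a running Selenium server
  specs: ['example-spec.js'],                      // which tests to run
  capabilities: { browserName: 'chrome' }          // which browser
};

// example-spec.js - a spec written with Jasmine plus the Protractor API
describe('todo list', function() {
  it('should add a todo', function() {
    browser.get('https://example.test/app');             // placeholder app URL
    element(by.model('todoList.todoText')).sendKeys('write a Protractor test');
    element(by.css('[value="add"]')).click();            // CSS only as a fallback
    expect(element.all(by.repeater('todo in todoList.todos')).count())
      .toBeGreaterThan(0);
  });
});
```

Running `protractor conf.js` from the command line wires the two together.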
AngularJS directives – When searching for elements in the AngularJS app using Protractor, taking advantage of AngularJS directives will save you hours of pain and frustration. Use CSS selectors like IDs and classes only as a last resort when writing Protractor tests. You heard me! This is a best practice of Protractor development. Avoid relying on CSS selectors.
Mocking – One of the main reasons for mocking is to prevent flaky tests. Everyone executing end-to-end tests comes to this crossroad. We can mock some or all of our services, HTTP backend, module and more.
Control Flow – WebDriverJS maintains a queue of pending promises, called the control flow, to keep execution organized.
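The queueing idea is easy to see in isolation. This is not WebDriverJS's actual implementation, just a stand-alone sketch of a control-flow-style queue:

```javascript
// A minimal control-flow-style queue: each scheduled step runs only after the
// previous one finishes, so asynchronous steps execute in scheduling order.
function makeFlow() {
  let tail = Promise.resolve();
  return function schedule(step) {
    tail = tail.then(step); // chain the step onto the pending queue
    return tail;            // a promise for this step's result
  };
}
```

This is why Protractor tests read sequentially even though every WebDriver call is asynchronous: each command is scheduled onto the pending queue rather than executed immediately.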
elementExplorer and elementor – used for debugging or when first writing a Protractor test. You can enter a Protractor locator or expression and elementExplorer/elementor will test it against a live Protractor instance. Elementor is considered an improved element finder for Protractor.
Conclusion
Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality — concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.
So there you are: you’ve finally decided to install the SonarQube platform and run a couple of analyses on your projects, but it unveiled so many issues that your team doesn’t know where to start. Don’t be tempted to start fixing issues here and there! It could be an endless effort, and you would quickly be depressed by the amount of work that remains. Instead, the first thing you should do is make sure your development team fixes the leak. Apply this principle from the very beginning, and it will ensure that your code is progressively cleaned up as you update and refactor over time. This new paradigm is so efficient at managing code quality that it just makes the traditional “remediation plan” approach obsolete. Actually, so obsolete that related features will disappear in SonarQube 5.5: action plans and the ability to link an issue to a third party task management system.
“Why the heck are you dropping useful features? Again!?…”
Well, we've tried to dogfood and really use those features at SonarSource ever since we introduced them – but never managed to. Maybe the most obvious reason we never used them is that long before conceptualizing the "Leak" paradigm, we were already fixing the leak thanks to appropriate Quality Gates set on every one of our projects. And while doing so, nobody felt the need to rely on action plans or JIRA to manage his/her issues.
There are actually other reasons why those features never got used. First, action plans live only in the SonarQube server, so they don't appear in your favorite task management system. Because of that, chances are that you will eventually miss the related deadlines. This is why you might be tempted to "link issues" to your task management system. But this "Link to" feature isn't any better. Let's say you're using JIRA in your company. When you link an issue to JIRA, the SonarQube integration automatically creates a ticket for that issue. So if you want to keep track of 100 issues, you'll end up with 100 JIRA tickets that aren't really actionable (you just have a link back to SonarQube to identify every single issue) polluting your backlog. What's even worse is that when an issue gets fixed in the code, it will be closed during the next SonarQube analysis, but the corresponding ticket in JIRA will remain open! Anyway, issues in the SonarQube server and tickets in JIRA just don't have the same granularity.
“Still, there are cases when I really want to create a remediation plan. How can I do that?”
As discussed previously, you should really avoid defining a remediation plan, and instead take the opportunity to spend the energy on "fixing the leak". Still, occasionally, you might be forced to do so. The main case we can think of is when you absolutely want to fix critical bugs or vulnerabilities found in legacy code that might really affect your business if they pop up in production. In that scenario, indeed you might want to create a dedicated remediation plan so that your development team gets rid of this operational risk.
The good thing is that SonarQube already has everything you need to clearly identify all those issues and plan a task to make sure they get fixed – whatever task management system you're using:
- In the SonarQube UI:
- Start tagging issues you want to fix with a dedicated and specific tag, like "must-fix-for-v5.2"
- Create a public "issue filter" that displays only issues tagged with "must-fix-for-v5.2"
- In your task management system:
- Create a ticket in which you reference the URL of the issue filter
- Set a due date or a version
- You’re done! You have a remediation plan that you can manage like any other task and your team won’t forget to address those issues.
“I don’t need anything more then?”
Well, no. Defining remediation plans this way gives the best of both worlds: identifying issues to fix in the SonarQube UI, and planning the corresponding effort in your own beloved task management system.
And once again, remember that if your team fixes the leak, chances are you will not need to create a remediation plan any longer. So yes, even if I’m the one who initially developed Action Plans and the “Link to” features a long time ago, I think it’s really time to say bye bye…
That got me thinking about how I arrived here, at a 16+ year long career in software testing. Now clearly I arrived a bit later than Gerald so I can tell you I have never and no doubt will never use a slide rule. In truth I doubt I even know what one is really.
Being amazed by a calculator (and the amazing things you could do with it: 2318008) aside, the earliest computing thing I remember was getting an Oric Atmos. I can't even recall how it was programmed. I do remember plugging it in and nothing appearing on screen. Then discovering we had to tune in the portable TV my Mum had bought me to see the stunning output this thing could generate.
The next marvel I encountered in junior school: the world-changing ZX81. How many of you remember those things? My two friends Chris Duignan and Shweb Ali and I formed the CAD computer club and blasted our way through many lunchtimes typing in the printed programmes we got from computer magazines. The problem was they were copies of printouts done on thermal paper. Consequently, they never worked first time. A ; or : is very hard to see on copied thermal printouts! Larger programmes went onto the 48K RAM pack, so long as it didn't move accidentally and lose all your work.
Now at this point the home computing market started to introduce serious competition. Vying for attention at the same time were the Amstrad CPC 464, Sinclair ZX Spectrum and the Commodore 64, if I remember correctly. My friends and I switched to the Spectrum camp pretty solidly.
Mine was a 48K Spectrum with those dandy rubber keys, so special.
I also recall at some point getting my hands on a VIC 20. Can't remember what I ever did with this one though!
From here on it was all "IBM" PCs, as they used to get called. That was the way forward. Many a DOS disc load later, and productivity was sky high. The time of course being spent on my first video game: Alone in the Dark.
Ah, those were the days. I'm just glad they're over and I can hardly remember what I ever did with these things or used them for. Give me my Win 7 and 10 boxes, MS tech and Office with ethernet connection any day!
Last night I was watching a lecture by Harry Collins in which he talks about the relationship between science and democracy and policy. The slide below shows how the values of science and democracy overlap (evidence-based, preference for reproducible results, clarity and others) but how science's results are wrapped by democracy's interests and perspectives and politics to create policies.
I spent some time thinking about how these views can serve as analogies for testing as a service to stakeholders.
But Collins says more that's relevant to that relationship in the lecture - much of it from his book Are We All Scientific Experts Now? In particular he argues that non-scientists' opinions on scientific matters should not generally be taken as seriously as those of scientists, those people who have dedicated their lives to the quest for deep understanding in an area.
He stratifies the non-scientific populus, though, making room for classes of expertise that are hard-earned - he terms them interactional expertise and experience-based expertise - that non-scientists can achieve and which make conversation with scientists, and even meaningful contribution to scientific debate, possible.
I find this a useful layer on the tester-stakeholder picture. Sure, most of our stakeholders might not know as much as we (think we) do about testing, about the craft and techniques of testing. But that doesn't mean that there aren't those who can talk to us knowledgeably about it. This kind of expertise might be from, say, reading (interactional) or perhaps from past testing practice or knowledge of the domain in which testing is taking place (experience-based) and I like to think that I am, and that we should be, open to it being valuable to us and whatever testing effort we're engaged in.
I recently heard an analogy about DevOps that I think describes it to a tee—pipelines.
Keep reading to find out why we need to rethink our thoughts about silos and instead embrace the pipelines that connect them.
“What do you want to do when you grow up?”
I don’t think that many of us would have answered this question by saying, “I want to be a Tester when I grow up!”
Still, as we can see from the recent State of Testing Report, many of us today feel proud to tell our friends and family that we work as Testers and are recognized for our contribution to technology and innovation.
What is “The State of Testing”?
The State of Testing is the largest testing survey worldwide (conducted by this QA Intelligence blog together with Tea Time with Testers). Now in its third year, with over 1,000 participants from more than 60 countries, and in collaboration with over 20 bloggers, the survey aims to provide the most accurate overview of the testing profession and the global testing community. Held yearly, the survey also captures current and future trends.
- Testing and development have become distributed tasks, with 70% of companies working with distributed teams in two, three or more locations. This requires adapting workflow habits and skills to maintain high productivity without close proximity to the other teams involved in each project.
- There is an increase in the percentage of organizations where the testing function reports to Project Management rather than to a VP or Director of Quality (in comparison to last year's report). This could be due to the trend of testing groups becoming part of the organic development teams, for those implementing Agile or Scrum.
- Formal training and certification are on the rise. This trend holds mostly for India and Western Europe, but it reflects the regard for testing as a profession that requires more formal training. While you might not agree that there is such a need for certification and formal training, we can still take it as a compliment to our professional recognition.
- Communication is still key. With nearly 80% of the responses, the leading "very important" skill a tester needs is good communication skills (for the 3rd year in a row, by the way). In fact, only 2% of all respondents regarded this as non-important! I have touched on this point before in a previous blog post – Using your Kitchen as a Communication Channel.
In a nutshell….
The accelerating pace of development is making our work more challenging than ever. And overall we are seeing a more serious approach towards quality and testing in our work-ecosystem.
Today, we feel that testing is seen as a critical activity by many of the same people who used to see testers as “unskilled individuals” doing the least important tasks in the end and slowing down delivery.
I mean, we always knew we had an important role in any successful product or application release, but it is becoming apparent that everyone else knows this as well.