Interviewing is like dating. You really just wish the right person would fall into your lap sometimes. Instead you spend hours picking and choosing the right one from a pile of candidates. You get to know them. Maybe you even take them on full time. Only then you realize that they’re not cut out for […]
Although NGINX conf 2015 took place back in September, not a day passes without someone asking me to summarize highlights from the event, future NGINX plans, and what’s coming up, so I prepared this blog as an update on “what’s up with NGINX”. But first, I would like to once again thank the NGINX team for […]
No More Over-the-air Installation of Mobile Applications for iOS

Over-the-air (OTA) installation offered a great way for testers to simply scan a QR code and then install an iOS or Android mobile application on their smartphone, skipping the tedious process of syncing the app over iTunes via a USB cable and enabling testers to test […]
From time to time, we tap our large community of experts to contribute in-depth pieces of content to the uTest blog. Although the diverse backgrounds and experiences of these experts are usually reflected in their writing, we thought that the community as a whole might be interested in a little background on these mysterious wordsmiths. Without further […]
Whenever I go to a QA conference, I’m struck by just how many managers relate how the training is well and good, but they can’t get their companies to implement it. The problem is usually a resource constraint, company culture, or lack of management buy-in.
I wanted to understand this a little more, so using my role as founder of the DC Software QA and Testing Meetup, I reached out to my members and found two QA managers interested in discussing their teams.

About the Interviewees
Brian works for a federal government contractor on a development team consisting of 30 software and 5 QA engineers. The QA team has a manual testing background, with 2 of them having 3 years of UI automation experience. They all have a beginner to intermediate level of understanding of Agile processes, and largely worked in a Waterfall SDLC prior to their recent Agile adoption project. 3 of the members are aligned to Agile development teams while the other 2 are functioning in an extended fashion.
Sue works on a smaller team composed of both in-house and 3rd party development and QA team members, giving a total of 8 developers and 4 QA (plus a business analyst acting as QA). The QA team is able to conduct both front- and back-end tests for a Web-based product that serves a mix of commercial and government customers. The team members are dedicated to projects and cross-trained to back each other up.
Both managers are passionate, long-term QA managers. They have worked with a variety of companies and projects and are both hands-on managers.

Describe your Development Cycle
Sue: Sprints are 2 weeks long, followed by a 1 week regression, and then a release. During the sprint, the team runs targeted tests around the scope of the changes and exploratory tests when the feature is integrated.
Continuous Integration (CI) exists only on the in-house team, and only from the developer’s local machine to the deployment tool; there is no CI to a test environment.
Brian: Sprints are 2 weeks long, and the number of sprints depends on the size of the development project. The development team wants CI, but it has not been implemented.

Talk About Automation.
Brian: We are about 80% manual. Our automation tools consist of mostly UI-based automation through Rational Functional Tester (RFT). We have done a little work with Selenium IDE and are exploring a rollout of Selenium Webdriver as a possible replacement for RFT. We also just started using JMeter for performance testing of one application. Developers do not help with automation.
Sue: Right now automation is about 25% and growing. We are using Test Studio by Telerik. Tests are developed using the recorder, and if things change, then the team updates the actual code. While developers haven’t used the tool yet, they do run the scripts for integration testing.

What are Your Biggest Pain Points?
- QA attends scrums, but they do not feel like part of the team. They raise the issues that block them, but the developers don’t seem to take those issues seriously. Quite often the QA manager must push the developers to react.
- QA isn’t given enough time to do their work.
- The requirements don’t have enough detail up front for scripts to be written. QA needs to be more involved upfront.
- Manual testing is tedious and redundant. It’s hard to keep morale and interest up.
- Interactions with the 3rd party team members can be tough. Poor communication plays a big part in this.
Brian: The biggest challenge is the security protocols used by our customers and how they limit access to our applications being tested. This is a major roadblock with regard to unattended automation. On top of this is the amount of busy-work and compliance with regulations, which can be limiting.

Tell Me About Your Successes
Brian: We do great in that we’ve initiated the first professional QA operation that the customer has experienced. We’ve introduced automation, standardized reporting of defects, capture of critical metrics, and improved the way work is handled across our program.
Sue: Through strong training and mentorship, the QA team has become a ‘well oiled machine’. The team feels as one, and since they are cross-trained, they are able to support one another and have some variety in their tasks. Implementing “QA Guidelines”, and teaching the team how to be strong communicators, has provided a strong foundation from which the team can build.

Is There Anything Else You’d Like to Bring Up?
Sue: It’s tough to get good QA analysts. While the focus in the industry seems to be on hiring for automation skills, having a QA mindset is the real key.
Manual testing can be a bore. You need to find ways to be creative and make it fun — show your team new and interesting ways to test to keep them motivated and interested in testing. Teach them to think outside the box. Help them keep up-to-date with the latest tools, standards, and techniques in the industry.
Brian: Get your developers to think of things from a quality perspective. Each engineer should discuss system requirements, specs, problems, and so on with their QA engineer on a daily basis as part of an Agile team. As the developer becomes familiar with the work the QA team does, they gain a different perspective on quality. Unfortunately, in Waterfall shops that are highly siloed, engineers don’t get that perspective.

Summary
These interviews mimic what I have been hearing on the street. Companies are doing their best to be agile, and to implement automation. This is working to varying degrees of success.
Joe Nolan is the Mobile QA team lead at Blackboard. He has over 10 years of experience leading multi-nationally located QA teams, and is the founder of the DC Software QA and Testing Meetup.
Update: November 27th, 2015 at 6:25pm EST What does Black Friday look like for the retailers? Here is a last look at some real customer traffic being seen by Dynatrace UEM. This is what retailers are seeing on their end: visits, landing pages, exit pages, conversions, bounces, user actions, client errors, and so on. Everything they need […]
The post Black Friday/Cyber Monday Live Blog: Retailers’ Mobile & Web Performance appeared first on Dynatrace APM Blog.
MVPs help showcase just how powerful our .NET community can be. Today we wanted to celebrate two long-term members of the MVP community. Thank you for all your contributions.

Agus Kurniawan
Recognized as a C# MVP since 2004, Agus Kurniawan spends his days as a lecturer in the Computer Science faculty at Bogor Agricultural University in Bogor, Indonesia. He is currently pursuing a PhD in Computer Science, having already obtained an MS in Computer Science from Bogor U. Prior to his latest academic pursuits, he worked as a software engineer, solution architect, and consultant.

Adil Mughal
Adil considers himself a software craftsman in his role as Senior Developer at Nintex. A seven-time MVP, he has extensive experience in designing and developing enterprise scale applications on Microsoft .NET Framework. Lately he has been focusing on cross platform mobile app development on Windows, Android and iOS.
When he’s not developing, you can find Adil maintaining an active presence in offline and online technical communities, often participating as a speaker at technical events, and enjoying life as a husband and father. Read about his latest projects on his blog and on Twitter @adilamughal.
I've found it interesting to read the recent flurry of thought on the testing/checking question, thoughts referring back to Testing and Checking Refined, by James Bach and Michael Bolton, in which the following definitions for the terms are offered:
- Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modelling, observation, inference, etc.
- Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
The definitions are accompanied by a glossary which attempts to clarify some of the other terms used. These chaps really do sweat the semantics but if I could be so presumptuous as to gloss the distinction I might say: testing is an activity which requires thought; checking is simply a rigid comparison of expectation to observation.
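To make the distinction concrete, here is a hypothetical minimal “check” in code. The function, page title, and expected value are all invented for illustration; the point is only the shape of the thing: a rigid, algorithmic decision rule applied to an observation.

```python
# A "check" in the Bolton/Bach sense: an algorithmic decision rule
# applied to a specific observation of the product.
# All names here are hypothetical illustrations.

def check_login_page_title(observed_title: str) -> bool:
    """Rigidly compare an observation to an expectation: no judgement,
    no exploration, just a boolean decision rule."""
    expected_title = "Login"
    return observed_title == expected_title

# The check itself is narrow. Deciding WHAT to check, and interpreting
# a failure, is where testing (human evaluation) comes in.
print(check_login_page_title("Login"))
```

Everything outside the function, by these definitions, is testing: choosing the expectation, deciding the title matters at all, and working out what a mismatch would mean.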
There has been much misunderstanding about the relationship between the two terms, not least that they are often seen in opposition to one another while at the same time testing is said to include checking, as in Why the testing/checking debate is so messy – a fruit salad analogy, where Joep Schuurkes laments:
What we’re left with are only two concepts, "testing" and "checking", and the non-checking part of testing is gone.

In Exploring, Testing, Checking, and the mental model, Patrick Prill proposes that
When a human is performing a check, she is able to evaluate many assertions, that often are not encoded in the explicit check.

For me, by the definitions above, this means that the human is not simply performing a check. The check is narrow while the human perspective can be broad (and deep, and lateral, ...) which, for Bolton and Bach, as I interpret it, means that this is testing. I think that this is also what Anders Dinsen is getting at in Why the dichotomy of testing versus checking is the core of our craft:
As a tester, I carry out checks when I test, but when I do, the checks I am doing are elements in the testing and the whole activity is testing, not checking.

One thing that I find missing from the conversation that I've seen and heard is the notion of intent. I'm wondering whether it's useful to think about some action as a check or test depending on the way that the result is expected to be, or actually, used in the context of the person who is doing the interpretation.
Here's an example: in Migrate Idea I talked about some scripts that I designed to help me to gain confidence in the migration and upgrade of a service across multiple servers. The scripts consisted of a series of API requests followed by assertions, each of which had a very narrow scope. Essentially they were looking to see that some particular question gave particular answers against the pre- and post-migrated service on a particular server.
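The actual scripts aren't reproduced in the post, but purely as a hypothetical sketch of their shape (the server names, the API "question", and the canned answers below are all made up), they might look like this:

```python
# Sketch of the kind of migration check described above: ask the
# pre- and post-migration services the same question and assert
# that the answers agree. Servers, questions, and answers here are
# hypothetical stand-ins; the real scripts made HTTP API requests.

def query_service(server: str, question: str) -> str:
    """Stand-in for one API request to one server."""
    canned = {
        ("old-server", "record-count"): "12345",
        ("new-server", "record-count"): "12345",
    }
    return canned[(server, question)]

def migration_check(old: str, new: str, question: str) -> bool:
    """Narrow, algorithmic comparison with a binary outcome:
    all clear vs some problem."""
    return query_service(old, question) == query_service(new, question)

# Aggregating such results across servers and runs is, per the post,
# where checking starts to shade into testing.
results = {q: migration_check("old-server", "new-server", q)
           for q in ["record-count"]}
print(results)
```

Each such assertion is deliberately narrow; the interesting question is what happens to the results afterwards.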
At face value, they appear to be checks, and indeed I used them like that when evaluating the migration server-by-server. After putting in the (testing) effort to craft the code, I expended little effort interpreting the results beyond accepting the result they gave (all clear vs some problem).
However, by aggregating the results across servers, and across runs, I found additional value, and this is something that I would happily call testing by the definitions at the top. So perhaps these scripts are not intrinsically a check or a test; they simply gather data.
Could it be that what is done with that data can help to classify the actions as (part of) a check or test? At different times, I conjecture that the same instance of some action can be checking, testing, both or neither. If I don't exercise thought, that (at best) means checking. If I do, that means testing.
One of the biggest hurdles in getting to Continuous Delivery and truly being Agile does not lie within the development team itself. Change requires a mindset that all people (managers and executives too) must adopt. Just as it takes a village to raise a child, it also takes a (corporate) village to raise a new culture. The movement away from Waterfall to Agile and Continuous cannot be handled just by one person.
Does the following situation sound familiar? Someone asks you for an estimate for the Level of Effort (LOE) of a feature. Not just a t-shirt size, but in days — something to be delivered sometime that year. You don’t have much time to really dive into it (it could even be during the same meeting that you first hear about the new feature), but you have to know all the details up front. So you give a number that you aren’t supposed to be held to because of all sorts of caveats you listed. But, someone remembers that number. And since someone said you are Agile (and Continuous Delivery), now you have to not only meet that estimate you gave up front, you are now tied to an Agile sprint with no room for error. This goes against everything you say you are!
To truly get to Continuous Delivery, you may need to step out of your comfort zone. Step away from promising a feature in a particular timeframe. Think priority. Think small chunks. Think iteratively. Get used to the unknown, but plan for it. Forecasting is not accurate and results in broken promises across the board. It certainly doesn’t help to hold someone’s feet to the fire if what they built today doesn’t match what they thought four months ago. The inexact nature of estimation doesn’t matter as much when you are delivering frequently and iteratively. Release early and often, and you can be dynamic in responding to customer feedback.
It is so important to have this ingrained not just in the team doing the work, but also in the people making decisions, prioritizing features, and talking to clients. If they are stuck in a Waterfall mentality, how can you progress to a Continuous model? I think everyone needs to know what it really means to be Continuous and Agile. It is a culture and a mindset — not just a process — that everyone needs to understand and believe in, from the individual contributor to the CEO.

Turning the Tides
Let’s take a look at some ways teams may be struggling, and how they can adapt to turn the tide and get to Continuous Delivery. (1)

The Struggle: One person (or a small group) pre-defines granular stories and decides what is in or out of scope before the work even gets to the team.
The Agile (or Continuous Delivery) Way:
- The team gets an epic, and maybe milestones.
- The team discusses the epic together, working through open questions and the overall definition of the functionality.
- The team defines milestones.
- The team defines the stories for the first milestone and starts the backlog.
- The team defines technical or planning spikes for subsequent milestones.
- The team determines granular estimates for the initial work, and rough estimates for later work where possible.
- Iterate.

The Struggle: Telling someone WHAT to build (example: “Just build this feature”).
The Agile Way: Understanding WHY you are building it and WHO you are building it for (example: “As a <role>, I want to be able to do <action> so that I can <goal>”).

The Struggle: Planning and scoping everything (all stories, all tasks) in advance.
The Agile Way: Iterative planning that happens every sprint, in priority order. Maybe you don’t get to a few stories. THAT’S OK. You don’t know the effort for everything up front, just for what you are taking into the sprint. Use backlog grooming and story mapping to flesh out stories and assign story points. Focus on the higher-priority things; lower-priority things can wait. Iterate.

The Struggle: Dev managers prioritize backlogs based on ad hoc conversations with various people.
The Agile Way: Priorities need to be clearly communicated by the PM, with daily (or at least weekly) involvement in backlog grooming. Prioritization cannot happen in a vacuum. Overall feature/theme/epic priorities need to be set at the cross-tribe level and be visible to all.

The Struggle: Accepting things into a sprint without knowing the acceptance criteria.
The Agile Way: Don’t start work until everyone on the team knows what is needed to be successful.

The Struggle: Driving toward feature complete, and not releasing until an epic is complete.
The Agile Way: Release early and often. Get customer feedback as soon as possible. The Minimum Viable Product (MVP) needs to be small; think of it as a preview. That way you can pivot based on customer feedback, and work on epics doesn’t drag on for long periods of time.

The Struggle: Keeping feature branches alive for long periods of time, creating a maintenance, merge, and regression headache.
The Agile Way: Leverage feature switches so that you can merge feature work into the release branch as soon as it’s done, even if it won’t be visible to customers.

The Struggle: No Agile training or prescribed process for team members.
The Agile Way: PM and management need Agile training and need to commit to changing along with the dev teams. All engineers, UID, QA, and PM involved in scrum teams need Agile training.
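The feature switches mentioned above can be sketched very simply. This is a minimal hand-rolled illustration (the flag name and functions are invented); real teams typically pull flags from configuration or a dedicated flag service rather than a dict.

```python
# Minimal feature-switch sketch: code for a new feature is merged to
# the release branch but stays hidden behind a flag until it's ready.
# The flag store here is a plain dict for illustration only.

FEATURE_FLAGS = {
    "new_checkout_flow": False,  # merged, deployed, but switched off
}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so half-finished work stays dark."""
    return FEATURE_FLAGS.get(flag, False)

def checkout() -> str:
    if is_enabled("new_checkout_flow"):
        return "new checkout"
    return "legacy checkout"

print(checkout())  # "legacy checkout" until the flag is flipped
```

Because the new path ships dark, the feature branch can be merged as soon as it compiles and passes its tests, avoiding the long-lived-branch merge and regression headache described above.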
Let’s be clear, though — the Continuous model is not something that just happens overnight. It takes hard work, discipline, and an understanding of what each aspect requires in order to be adopted successfully. If your leadership understands what it truly takes to be Agile, they can help you realize your goals. Without their transition to the Agile mindset, you are still doing Waterfall compressed into the pressures of an Agile timeframe.
(1) The opinions in this column are a combination of experience and research derived from Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory, Chapter 3 – Cultural Challenges.
Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.
Profiling IIS and collecting coverage on .NET web apps continues to be a popular topic for both existing customers and organizations new to NCover. If you or your team are interested in profiling IIS, here are a few resources we have available this month that you may want to take advantage of. Please let us know if you have any questions. Enjoy!

Webinar – Wednesday, December 2, 2015, 11:00 AM Eastern
Register now to learn the specific steps you can follow to collect code coverage on IIS and to ensure your web applications are ready to be deployed to customers. This webinar will cover how to quickly set up a project for covering IIS and all of your .NET web apps, show how to collect code coverage from any type of automated or manual test, provide details on how to ensure accurate code coverage in unique scenarios, and offer an opportunity to ask questions specific to your situation. Register now for Covering IIS & Your .NET Web Apps.

Resource Article on Covering IIS & Your .NET Web Apps
This resource article walks you through both the “auto-configure” method and the “manual” method for setting up an NCover project to cover IIS.

Support Documentation on Covering IIS
This support article provides additional detail on covering IIS: it walks through finding the process you want to cover and shows how you can use pre-coverage filters to focus coverage on only the desired areas of code. And, as always, if you have any questions about the topics covered here, or just want to work with someone to help you and your team start collecting coverage on IIS, contact us and we would be happy to help!
The post Profiling IIS and Collecting Code Coverage on .NET Web Apps appeared first on NCover.
The 11/20 Weekend Trivia is now closed. The answer is TRUE: the most important aspect of usability for the web is accessibility. Make sure to check out our social platforms this Thursday, November 26, at 6pm (EST) for the winner! How to Win: Populate the form below with the correct answer for a chance to win. […]
There’s an App For That

We all know how it works, and we’ve all done it. We’re with our friends, family, or coworkers, and one person mentions this cool new app they’ve downloaded. They tell you how easy it makes accomplishing whatever task they set out to do, and […]
The post “Okay, Google” Looks to Make Downloading Apps Obsolete With Newest Venture appeared first on Software Testing Blog.
Note: The following is a guest submission to the uTest Blog from Sanjay Zalavadia of Zephyr. Quality assurance teams are made up of professionals who are able to identify and mitigate issues within application builds. While their role hasn’t changed much, the profile of the average tester has shifted dramatically from traditional standards. […]