(Thank you, Anne-Marie Charrett, for reviewing my work and helping with this post.)
One of the reasons I obsessively coach other testers is that they help me test my own expertise. Here is an especially nice case of that, from my work with a particularly bright and resilient student, Anita Gujrathi (whose full name I am using here with her permission).
The topic was integration testing. I chose it from a list of skills Anita made for herself. It stood out because integration testing is one of those labels that everyone uses, yet few can define. Part of what I do with testers is help them become aware of things that they might think they know, yet may have only a vague intuition about. Once we identify those things, we can study and deepen that knowledge together.
Here is the start of our conversation (with minor edits for grammar and punctuation, and commentary in brackets):
What do you mean by integration testing?
[As I ask her this question I am simultaneously asking myself the same question. This is part of a process known as transpection. Also, I am not looking for “one right answer” but rather am exploring and exercising her thought processes, which is called the Socratic Method.]
Integration test is the test conducted when we are integrating two or more systems.
[This is not a wrong answer, but it is shallow, so I will press for more details.
By shallow, I mean that it leaves out a lot of detail and nuance. A shallow answer may be fine in a lot of situations, but in coaching it is a black box that I must open.]
What do you mean by integrated?
That means kind of joining two systems such that they give and take data.
[This is a good answer but again it is shallow. She said “kind of” which I take as a signal that she may not be quite sure what words to use. I am wondering if she understands the technical aspects of how components are joined together during integration. For instance, when two systems share an operating space, they may have conflicting dependencies which may be discovered only in certain situations. I want to push for a more detailed answer in order to see what she knows about that sort of thing.]
What does it mean to join two systems?
[This process is called “driving to detail” or “drilling down”. I just keep asking for more depth in the answer by picking key ideas and asking what they mean. Sometimes I do this by asking for an example.]
For example, there is an application called WorldMate which processes the itineraries of the travellers and generates an XML file, and there is another application which creates the trip in its own format to track the travellers using that XML.
[Students will frequently give me an example when they don’t know how to explain a concept. They are usually hoping I will “get it” and thus release them from having to explain anything more. Examples are helpful, of course, but I’m not going to let her off the hook. I want to know how well she understands the concept of joining systems.
The interesting thing about this example is that it illustrates a weak form of integration, so weak that if she doesn’t understand the concept of integration well enough, I might be able to convince her that no integration is illustrated here.
What makes her example a case of weak integration is that the only point of contact between the two programs is a file that uses a standardized format. No other dependencies or modes of interaction are mentioned. This is exactly what designers do when they want to minimize interaction between components and eliminate risks due to integration.]
I still don’t know what it means to join two systems.
[This is because an example is not an explanation, and can never be an explanation. If someone asks what a flower is and you hold up a rose, they still know nothing about what a flower is, because you could hold up a rose in response to a hundred other such questions: what is a plant? what is a living thing? what is botany? what is a cell? what is red? what is carbon? what is a proton? what is your favorite thing? what is advertising? what is danger? Each time, the rose answers some specific aspect of the question, but not every aspect. So how do you know which aspect the example of a rose actually refers to? Without an explanation, you are just guessing.]
I am coming to that. So, here we are joining WorldMate (which is a third-party application) to my product so that when a traveller books a ticket from a service and receives the itinerary confirmation email, it then goes to WorldMate which generates XML to give it to my product. Thus, we have joined or created the communication between WorldMate and my application.
[It’s nice that Anita asserts herself, here. She sounds confident.
What she refers to is indeed communication, although not a very interesting form of communication in the context of integration risk. It’s not the sort of communication that necessarily requires integration testing, because the whole point of using XML structures is to cleanly separate two systems so that you don’t have to do anything special or difficult to integrate them.]
I still don’t see the answer to my question. I could just as easily say the two systems are not joined, but rather independent. What does join really mean?
[I am pretending not to see the answer in order to pressure her for more clarity. I won’t use this tactic as a coach unless I feel that the student is reasonably confident.]
Okay, basically when I say join I mean that we are creating the communication between the two systems.
[This is the beginning of a good answer, but her example shows only a weak sort of communication.]
I don’t see any communication here. One creates an XML, the other reads it. Neither knows about the other.
[It was wrong of me to say I don’t see any communication. I should have said it was simplistic communication. What I was trying to do is provoke her to argue with me, but I regret saying it so strongly.]
It is a one-way communication.
[I agree it is one-way. That’s part of why I say it is a weak form of integration.]
Is Google integrated with Bing?
[One major tactic of the Socratic method is to find examples that seem to fit the student’s idea and yet refute what they were trying to prove. I am trying to test what Anita thinks is the difference between two things that are integrated and two things that are simply “nearby.”]
According to you, they are! Because I can Google something, then I can take the output and feed it to Bing, and Bing will do a search on that. I can Google for a business name and then paste the name into Bing and learn about the business. The example you gave is just an example of two independent programs that happen to deal with the same file.
So, if I test the two independent programs, haven’t I done all the testing that needs to be done? How is integration testing anything more or different or special?
At this point, Anita seems confused. This would be a good time to switch into lecture mode and help her get clarity. Or I could send her away to research the matter. But what I realized in that moment was that I was not satisfied with my own ideas about integration. When I asked myself “what would I say if I were her?” my answers sounded not much deeper than hers. I decided I needed to do some offline thinking about integration testing.
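To see why the confusion is warranted, here is a hypothetical sketch (the names, formats, and dates are mine, not from the conversation) of two programs that each pass their own tests, yet fail as an integrated pair. Even the weak, file-style handoff discussed above can hide this kind of bug, and testing each program on its own will never show it:

```python
from datetime import date

def export_date(d):
    """Producer's formatter (hypothetical): writes day/month/year.
    Correct against its own spec; its own unit tests pass."""
    return "%02d/%02d/%04d" % (d.day, d.month, d.year)

def import_date(s):
    """Consumer's parser (hypothetical): assumes month/day/year.
    Also correct against its own spec, with passing unit tests."""
    month, day, year = s.split("/")
    return date(int(year), int(month), int(day))

# Each program is fine alone, but the integrated pair silently swaps
# day and month whenever the day is 12 or less:
round_trip = import_date(export_date(date(2016, 3, 4)))
assert round_trip == date(2016, 4, 3)  # March 4th came back as April 3rd
```

Each side honors its own spec; the bug lives only in the pair. That gap is what integration testing, even between weakly coupled systems, has to cover.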
Lots of things in our world are slightly integrated. Some things are very integrated. This seems intuitively obvious, but what exactly is that difference? I’ve thought it through and I have answers now. Before I blog about it, what do you think?
We are sometimes asked, “What is the best configuration for my code review process?” That topic has even been debated internally at Seapine. There is no right way, but there are pros and cons to the different approaches. You might consider the following four strategies and determine which is best for your specific business needs.

Review by File Owner
In this approach, each code file is assigned an owner who is an expert in that area. Every code change made to that file is reviewed by the owner regardless of what feature the code change belongs to. The file owner can create a code review container and conduct the code review at their convenience, regardless of whether the feature has been fully implemented. The advantage of this approach is that the reviewer is more likely to catch issues specific to that component and take ownership to keep that code file clean. For example, the owner would more easily identify newly added methods that duplicate existing functionality, misuse of a lock or mutex, incorrect usage of a specific library or toolkit, or not following UI or database standards. The disadvantage of this approach is that the file owner might not see the bigger picture for the feature and how the changes interact with other components.

Review by Feature
In this approach, each feature or requirement is assigned a code reviewer. Every code change made as part of that feature is reviewed by a single person regardless of what code file is modified. The code review container is generally created and code reviews conducted once the feature is fully implemented, but the reviews could be done in smaller chunks if the feature has multiple sprints or milestones. The advantage of this approach is that the reviewer will fully understand the feature from top to bottom. For example, they would more easily identify poor flow of information between components or data that is gathered at the UI level but not saved to the database. The disadvantage of this approach is that the reviewer is not an expert in some areas of the code and might not catch component-specific bugs.

Review Both Ways
If quality is critical, you could approach code reviews both by file owner and by feature. All code changes made for a feature would be reviewed by a single feature code reviewer and potentially by multiple file owners (depending on how many code files were modified). From a quality perspective, this incorporates the advantages of both review approaches above. The disadvantage is higher cost since time spent on code review activities is doubled.

Hybrid Approach
If you have certain files that are especially important and some features that are especially important, then you could consider a hybrid of the review by file owner and review by feature approaches. In this approach, only critical files are assigned a file owner, which means only those files go through a review by file owner. Critical files may have characteristics such as requiring high speed/scalability, low tolerance for bugs, or brittle code that often breaks when modified. Similarly, only a subset of features goes through a code review. Features may be considered critical based on visibility, contractual obligations, or consequences of functionality failure. An advantage of this strategy is lower cost since less time is spent on code reviews, yet all critical changes are still reviewed. The disadvantage is that some code changes are reviewed twice (critical files modified as part of a critical feature), while other changes are not reviewed at all.

Surround SCM Features
If you are using Surround SCM, functionality is available to support any of these code review approaches. Here are some Surround SCM features you may want to take advantage of in your code review process. If you need help configuring these in the way best suited for your company, Seapine’s professional services team would be happy to assist you.
- Code Reviews – Surround SCM has built-in code review containers that manage the review process.
- Custom Fields – File owners can be identified via a custom field. The field format of the custom field should be SCM User, which automatically includes all Surround SCM users in the field’s dropdown list.
- Triggers – File owners can be notified by email if a file they own has been modified and needs to be reviewed. Configure a trigger to send an email to the user identified by the Owner custom field.
- Add to Code Review from History window – Highlight a file in the Source View window and then select the History command to display the file’s History window. Then highlight a range of file versions and select the Add to Code Review command to populate a code review container.
- Workflow states – When using the review by file owner approach, one way to track which versions have been code reviewed is via workflow states. Configure a trigger to change the state to Needs Review each time a file is modified. Each time the user performs a code review, they can review all changes since the last file version with a Reviewed state. After the code review is completed, change the file’s state to Reviewed.
- Filters – Set up a filter to easily see files you currently need to review. The filter criteria might have Owner set to <current user> and state set to Needs Review.
- Code Review Coverage Report – This report identifies what code changes have not yet been reviewed. For more details read this blog post.
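The workflow-states idea above amounts to a tiny state machine: any modification flips a file to Needs Review, and completing a review flips it back. A minimal sketch of that cycle (illustrative code only, not Surround SCM's trigger API; the class and method names are mine):

```python
class ReviewedFile:
    """Toy model of the review-by-file-owner workflow: a file's state
    tracks whether its latest versions have been code reviewed."""

    def __init__(self, name):
        self.name = name
        self.state = "Reviewed"        # nothing outstanding yet
        self.unreviewed_versions = []

    def modify(self, version):
        # like the trigger: any modification puts the file in Needs Review
        self.unreviewed_versions.append(version)
        self.state = "Needs Review"

    def review(self):
        # the owner reviews all changes since the last Reviewed version
        reviewed = self.unreviewed_versions
        self.unreviewed_versions = []
        self.state = "Reviewed"
        return reviewed
```

With this shape, a filter for "files I need to review" is just every file whose state is Needs Review, and the coverage report is every version still sitting in the unreviewed list.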
I was inspired by Denali Lumma (@denalilumma) when she delivered a glimpse of the future in her talk about 2020 testing at the Selenium 2015 conference. The session was an excellent introduction that compared many scenarios of the minority testing elite versus the more common development team. The elite companies consider infrastructure FIRST, and the majority thinks about infrastructure LAST. It got my wheels turning regarding the future of software development. I don’t have all the answers right now, but I want to be part of the movement to plan and build architecture with quality in mind. A few words come to mind when thinking about quality architecture: automation, scalability, recoverability, and analytics.

Build a culture
When building a culture, avoid too much control. You want a culture that embraces freedom, responsibility, and accountability. Why is building a culture like this important? It allows passionate employees to innovate and find big-time solutions. You can’t plan for innovation. It naturally happens. When you give passionate employees an inch, they’ll take a mile. The future team culture needs to push the envelope and step outside their comfort zone.
This is slowly happening across the software development industry. The team makeup is being reshaped by removing specialized task silos (code, tests, continuous integration) and bridging the gaps between developers, QA, and DevOps, allowing them to move quickly and build quality up front.
The team needs to share tasks and responsibilities, but what does that mean? By increasing the team’s skill set and talent, everyone on the team can share specialized tasks and own quality. Here is an example of a team’s primary focus and shared responsibilities for every sprint:

| Team Members | Write / Review Code | Write / Review Automation Tests (Unit > UI) | DevOps |
| --- | --- | --- | --- |
| Developer | 75% | Share | Share |
| QA | Share | 75% | Share |
| DevOps | Share | Share | 75% |
The key is that everyone needs to embrace the new culture — one where QA and DevOps team members are embedded with developers and share responsibilities.

Continue to focus on automation strategies
To improve the efficiency and reliability of a development project, the future needs minimal human involvement for all committed code. Teams will want to ship as soon as the code is ready, and no later. The objective of automation is to simplify as much of the infrastructure as possible with code that generates trustworthy reporting, allowing confidence in shipped features and bug fixes. The standard for every company must be: build, test, deploy, and recover the infrastructure when things go wrong. The future of automation strategies should focus on testing pre-production and production environments. Remove the FEAR, and inject some chaos into your production infrastructure. Evaluate any failures that occur and find solutions to prevent them the next time.

Everything needs to be SCALABLE
The year 2020 seems like a lifetime away for technology. I have learned one thing since Test Automation and DevOps entered the scene and took over the world’s software development — You’d better be ready to evolve and scale up quickly when change occurs. How do we prepare? Scale comes in many forms. (It doesn’t always mean cloud infrastructure.) Here is a list of ideas that come to mind when we need to be scalable without affecting quality:
- Onboard new employees
- Cross-team training
- Deploying a process or policy change
- Adopting cutting-edge technologies (new ones are born every day)
- Application redesign
- Environment (machine-as-code, cloud-as-code)
Everything needs to be scalable. Are you prepared to evolve and scale when change occurs?

Repeatable and recoverable
Cloud computing is here to stay (for a while). Moving quickly and reliably requires infrastructure-as-code. You should build environments for development, pre-production, and production that are identical. There are a lot of technologies in this area, such as configuration management and containerization tools. Puppet and Chef are the most popular configuration management tools out there. They allow you to keep all your servers configured from a central place, and identical.
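The core idea behind configuration management tools like Puppet and Chef is to declare a desired state and converge every server to it idempotently, so running the same configuration twice changes nothing the second time. A toy sketch of that idea (not either tool's actual API; the keys and values are made up):

```python
def converge(current, desired):
    """Idempotent convergence: compute and apply only the changes
    needed to reach the desired state. Applying the same desired
    state twice is a no-op the second time."""
    changes = {k: v for k, v in desired.items() if current.get(k) != v}
    current.update(changes)
    return changes

server = {"ntp": "off", "ssh_port": 22}
desired = {"ntp": "on", "ssh_port": 22, "firewall": "on"}
converge(server, desired)  # first run applies only the drifted keys
converge(server, desired)  # second run returns {} - nothing to do
```

Because every run converges toward the same declared state, your dev, pre-prod, and production servers stay identical, which is exactly the "repeatable and recoverable" property described above.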
Cloud computing services will become the NORM for many reasons. They allow flexibility, disaster recovery, automated software updates, the ability to work from anywhere, security, and many other benefits. If you haven’t moved to cloud computing yet, it is only a matter of time before your company realizes that the benefits are substantial enough to move its business into the cloud. The best defense against failures is cloud computing combined with configuration management tools.

We need more ANALYTICS
Lastly, the future needs to focus on analytics. They will allow us to evaluate and recalibrate to improve processes, testing, applications, infrastructure, and more, with instantaneous analytics alerting the team when things go wrong.

Takeaways
- Build a culture that embraces freedom and responsibility
- Automation will continue to be part of the future
- Tools and processes power how changes move from developers to production
- Computers will be waiting for humans — humans won’t be waiting on computers
- Real-time analytics
Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality — concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.
Back in the days when smartphones began to dominate the market, customers started to access information on the go: emails, weather, news, sports, etc. Many companies, including Walgreens, realized the need to address the specific requirements of the mobile user and typically introduced a mobile version of their primary website. Either it was a completely different […]
Last year, I interviewed Jerry Weinberg on Agile Software Development for the magazine that we produce at it-agile, the agile review. Since I translated it to German for the print edition, I thought why not publish the English original here as well. Enjoy.
Jerry, you have been around in software development for roughly the past 60 years. That’s a long time, and you certainly have seen one or another trend passing by in all these years. Recently you reflected on your personal impressions on Agile in a book that you called Agile Impressions. What are your thoughts about the recent up-rising of so called Agile methodologies?
My gut reaction is “Another software development fad.” Then, after about ten seconds, my brain gets in gear, and I think, “Well, these periodic fads seem to be the way we advance the practice of software development, so let’s see what Agile has to offer.” Then I study the contents of the Agile approach and realize that most of it is good stuff I’ve been preaching about for those 60 years. I should pitch in and help spread the word.
As I observe teams that call themselves “Agile,” I see the same problems that other fads have experienced: people miss the point that Agile is a system. They adopt the practices selectively, omitting the ones that aren’t obvious to them. For instance, the team has a bit of trouble keeping in contact with their customer surrogate, so they slip back to the practice of guessing what the customers want. Or, they “save time” by not reviewing all parts of the product they’re building. Little by little, they slip into what they probably call “Agile-like” or “modified-Agile.” Then they report that “Agile doesn’t make all that much difference.”
I remember an interview that you gave to Michael Bolton a while ago where you stated that you learned from Bernie Dimsdale how John von Neumann programmed. The description appeared to me to be pretty close towards what we now call test-driven development (TDD). In fact, Kent Beck always claimed that he simply re-discovered TDD. That made me wonder, what happened in our industry between 1960s and the 2000s that made us forget the ways of smart people. As a contemporary witness of these days, what are your insights?
It’s perfectly natural human behavior to forget lessons from the past. It happens in politics, medicine, conflicts—everywhere that human beings try to improve the future. Jefferson once said, “The price of liberty is eternal vigilance,” and that’s good advice for any sophisticated human activity.
If we don’t explicitly bolster and teach the costly lessons of the past, we’ll keep forgetting those lessons—and generally we don’t. Partly that’s because the software world has grown so fast that we never have enough experienced managers and teachers to bring those past lessons to the present. And partly it’s because we don’t adequately value what those lessons might do for us, so we skip them to make development “fast and efficient.” So, in the end, our development efforts are slower and more costly than they need to be.
The industry currently talks a lot about how to bring lighter methods to larger companies. Since you worked on Project Mercury – the predecessor for Project Apollo from the NASA – you probably also worked on larger teams and in larger companies. In your experience, what are the crucial factors for success in these endeavors, and what are the things to watch out for as they may do more harm than good?
In the first place, don’t make the mistake of thinking that bigger is somehow automatically more efficient than smaller. You have to be much more careful with communications, and one small error can cause much more trouble than in a small project.
For one thing, when there are many people, there are many ways for new or revised requirements to leak into the project, so you need to be extra explicit about requirements. Otherwise, the project grows and grows, and troubles magnify.
It is very difficult to find managers who know how to manage a large project. Managers must know or learn how to control the big picture and avoid all sorts of micromanagement temptations.
A current trend we see in the industry appears to evolve around new ways of working, and different forms to run an organization. One piece of it appears to be the learning organization. This deeply connects to Systems Thinking for me. Recognizing you published your first book on Systems Thinking in 1975, what have you seen being crucial for organizations to establish a learning culture?
First of all, management must avoid building or encouraging a blaming culture. Blame kills learning.
Second, allow plenty of time and other resources for individual learning. That’s not just classes, but includes time for reflecting on what happens, visiting other organizations, and reading.
Third, design projects so there’s time and money to correct mistakes, because if you’re going to try new things, you will make mistakes.
Fourth, there’s no such thing as “quick and dirty.” If you want to be quick, be clean. Be sure each project has sufficient slack time to process and socialize lessons learned.
Finally, provide some change artists to ensure that the organization actually applies what it learns.
What would you like to tell to the next generation(s) of people in the field of software development?
Study the past. Read everything you can get your hands on, talk to experienced professionals, study existing systems that are doing a good job, and take in the valuable lessons from these sources.
Then set all those lessons aside and decide for yourself what is valuable to know and practice.
Thank you, Jerry.
Years ago I chucked a faulty video recorder and bought a cheap and compact PC to use as a PVR. (I run MythTV on Ubuntu, for those interested in such things.) Because me and Mrs Thomas don't watch telly that much, and record less, and because we're interested in not wasting electricity, we only have the box on when we're watching something on it or when we've scheduled something to record.
Of course, sometimes that means that we have to remember to leave it on. And we kept forgetting. But being a problem-solver, and interested in proportionate solutions, I implemented a quick fix. In fact it was more an initial trial, just a simple little sign that we stick next to the telly. It says VIDEO and has served us so well that we found no need for anything more sophisticated.
Until now. Our kids have come along and control the telly, operate the computer and so on. We're helping them to become interested in not wasting electricity too, and so their habit is to turn appliances off when they're done with them.
Do you see where this is going?
The word video means little to them. If it's anything at all it's something they watch on YouTube and nothing to do with recording, although it's not as alien as when I talk about taping something... And so our sign doesn't work any more; the girls just keep turning everything off as we have asked them to. Explaining carefully to them what the sign means, many times, hasn't helped.
Being a problem-solver, and aware that solutions can date and the problems they address can shift, and interested in meta-aspects of problem solving, I took a step back. Was I looking at this in the right way? What was really the problem here today? And whose problem is it?
The answers? Simply: No. The sign. Mine.
And so I've changed the sign. It now says Please don't turn the computer off.

Image: https://flic.kr/p/drNBvr
Many organizations are familiar with exponential growth. Performance Engineering is a top priority for nearly every organization today—which is creating exponential growth for our Special Interest Group. In this group, practitioners are collaborating and sharing their stories so others can learn from and leverage these experiences. Keep reading to find out how you can get involved.
It’s time for you to stop being content with the status quo and re-energize your QA career with Automation and DevOps — otherwise, you might find yourself fading away like Marty McFly! I’m talking to YOU, manual tester! And YOU, QA manager! Oh, and YOU TOO, automation engineer! Every one of you who has a vested interest in your career growth needs to familiarize yourself with automation and DevOps tools.

Of Course You Need to Understand Automation
Let’s face it: In this day and age of software development, speed is the key to survival. In order to achieve clean builds, Continuous Integration, Continuous Delivery, and Agile development, manual testing just ain’t gonna cut it.
Everyone with the QA title needs to continuously build on their skill set, just like a developer. Even if you aren’t actively writing automation code, you still need to understand the capabilities and benefits of each type of automated test, especially the ones written by your development team. The team is relying on your expertise to guide them with acceptance criteria for stories, while bringing QA concepts to the table.

Ok, Automation Yes, But Why DevOps?
How often are you left at the mercy of your DevOps team? DevOps is pulled in all directions with higher and higher priority tasks. Even worse, top tech companies are constantly raiding DevOps teams, so resources are quite often scarce. You can sit there and twiddle your thumbs, or you can learn some basic DevOps tasks to expedite your work, and leave the more complicated stuff to them.
QA teams constantly need special server and data setup. So many of these tasks are redundant. Server crashed unexpectedly? Don’t wait on DevOps, fix it yourself! Problems with certificates? Firewalls? A few quick lessons from your local DevOps team and you will be on your way to self-reliance! And when you have the confidence, you can start learning even more, such as Docker. (Here is a good link to a presentation by Chris Riley about What DevOps Means for QA.)

But How?
If you’re asking yourself this, good! You are asking the right question. So how do you go about reinventing your career? I recommend these three tools as a jumpstart:
- Talk to a Recruiter – Wait, what? YES! You need all the incentive you can get! When you talk to a recruiter and discuss available jobs, you might find that your skill set is not in demand. But the best part is that a recruiter can tell you what is. This will help you define the direction you want to proceed in. Are you intimidated at the thought of talking to a recruiter? Then do some research online for jobs in your field. Don’t wait until you need a job to discover this information; by then it is too late!
- Take Classes – The best thing about modern technology is that you don’t even need to leave your house to take a class. Start small. Learn a simple tool using the tutorial provided with it to gain confidence, and add it to your skill set and knowledge base. You can then work your way up to more structured courses, even at the college level. And it is probably free! (Check out this blog I wrote to give you an idea of what is available to you online.)
- Networking – This can be the most effective (and most daunting) tool at your disposal. Networking is powerful, so really consider it! Networking among your peers allows you to talk to people in the real world, see tool demos in action, and make mutually beneficial contacts.
Because this is the most important tool to help your career, it gets its own section.
If you live in a decent-sized city, you should be able to find a networking group for practically any technology field you are interested in. A simple search online should bring you just a few clicks from finding your next meeting.
If that fails, try Meetup.com. This site allows you to join for free, and find a local “meetup.” A meetup group is like a club for people who have a common interest. (Like to play chess? There is probably a group dedicated to it.) Meetups span all topics, and there are regularly scheduled get-togethers. If someone determines there is a need for a group, that person will start one. Or, if you can’t find one, start your own. (You will be surprised how quickly you have members.)
I belong to a couple of QA-related meetups in the DC area, which range from 250 to 500-plus members. The organizers attempt to meet the needs of the majority, running anything from simple networking happy hours to tool demos and actual study sessions. A simple, recent search on the site using ‘QA’ showed eight meetups within 50 miles. Changing the search to ‘DevOps’ yielded at least 30, from a broad range to a specific focus. (Want to learn about Docker? There’s a meetup for that.)
Many networking group meetings, including meetups, are sponsored by technology companies eager to show off the latest and greatest tool. The cooler the tool, the better the turnout. DevOps meetups regularly have over 100 participants. It’s easy to network when you know you are all there for the same purpose. Don’t be intimidated and think you are a stranger in a strange land.

Make the Time!
Perhaps the hardest part of this whole exercise is finding the TIME! You work late hours on projects. You have a family that needs attention. Your social calendar is fully booked.
Ideally, you can participate in any of the activities covered here at any time. But to gain what you need, commit a minimum of three hours a week to your cause. It really is worth it.
Joe Nolan is the Mobile QA team lead at Blackboard. He has over 10 years’ experience leading QA teams located in multiple countries, and is the founder of the DC Software QA and Testing Meetup.
This joint initiative by the QA Intelligence blog and Tea-Time With Testers started 3 years ago and has gained quite a lot of clout in the testing community, with participation growing by hundreds of testers each year.
The survey seeks to identify and quantify trends and properties of the field of testing on a global scale. Repeating the survey annually makes it possible to track changes from one year to the next, with the goal of provoking fruitful discussion, and ultimately improvement, in the worldwide QA community.
How can you be a part of this? Fill out the survey! Then share it on your social networks:
Join other bloggers and industry leaders in helping spread the word about the survey, and be part of its success.
Become a Collaborator
– Receive survey results before they go public
– Linked listing on QA blog as a Survey collaborator
– Your blog or site name and logo on the bottom of the survey results report
– Reference the survey results on your own website
Starting with Sauce Connect v4.3.13, new features have been added that aim to increase stability when testing websites behind a firewall.
1) Prevent Sauce Connect from shutting down when actively being used by job(s)
Adding the above argument prevents users from shutting down their tunnels while jobs are still running. Should the user attempt to close the tunnel, they will receive a warning, along with a count of the tests currently using the tunnel (see the example output below). Once the tests using the tunnel complete, the tunnel will close itself.
05 Jan 09:08:20 - Sauce Connect is up, you may start your tests.
05 Jan 09:11:13 - Cleaning up.
05 Jan 09:11:13 - Removing tunnel 3a248df76eb14145ad0401c6c4aaf690.
05 Jan 09:11:13 - Waiting for any active job using this tunnel to finish.
05 Jan 09:11:13 - Press CTRL-C again to shut down immediately.
05 Jan 09:11:13 - Number of jobs using tunnel: 1.
05 Jan 09:11:19 - Number of jobs using tunnel: 1.
05 Jan 09:11:25 - Number of jobs using tunnel: 1.
05 Jan 09:11:33 - Number of jobs using tunnel: 1.
05 Jan 09:11:41 - All jobs using tunnel have finished.
05 Jan 09:11:41 - Waiting for the connection to terminate...
05 Jan 09:11:42 - Connection closed (8).
05 Jan 09:11:42 - Goodbye.
If the user wants to force the tunnel to shut down immediately, they can do so by sending a second close command (e.g., CTRL-C).
2) Prevent Sauce Connect from shutting down due to colliding tunnels
When this argument is added, any tunnels started with the same username and tunnel identifier will be pooled together, creating failover/load balancing for Sauce Connect. Once a pool of tunnels is established, newly started tests are assigned to an active tunnel in the pool at random.
Note: In order to join a pool, each tunnel must be started with this argument.
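As a rough sketch of how pooling might look in practice, the commands below start two tunnels that would join the same pool. The credentials and the tunnel name `my-pool` are placeholders, not values from this post; the flag names are taken from the release notes as described here.

```shell
# First tunnel: placeholder username/access key, named tunnel.
sc -u YOUR_USERNAME -k YOUR_ACCESS_KEY \
   --tunnel-identifier my-pool \
   --no-remove-colliding-tunnels

# Second tunnel, started on another machine (or terminal) with the
# SAME username and tunnel identifier. Instead of replacing the first
# tunnel, it joins the pool, and newly started tests are distributed
# across both tunnels.
sc -u YOUR_USERNAME -k YOUR_ACCESS_KEY \
   --tunnel-identifier my-pool \
   --no-remove-colliding-tunnels
```

Note that, per the release notes, every tunnel in the pool must be started with `--no-remove-colliding-tunnels`; a tunnel started without it will still collide with (and replace) an existing tunnel of the same name.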
Q: What happens when one tunnel in the pool crashes?
A: It is removed from the pool.
Q: What happens if tunnels have differing values for arguments such as --pac?
A: The --no-remove-colliding-tunnels argument only enforces that the username and tunnel identifier be the same; all other options can differ from one tunnel in the pool to the next.
Q: What happens when one or more (but not all) of the tunnels in the pool have the --shared argument?
A: If a sub-account goes to use a parent tunnel from the pool, it will be connected only to those tunnels in the pool that have --shared enabled.
Is ITIL (Information Technology Infrastructure Library) still relevant in the digital world? The short answer is…yes! The longer answer is: it depends on your organisation’s understanding and application of ITIL. In order to best answer the question it is important to take a step back and examine the goals and intent of ITIL. Quick refresher on ITIL […]