Get ready for the "Deep Dive" webinar on HP Network Virtualization. We deliver a 1.5-hour recorded session that shows you the capabilities of HP Network Virtualization and how you can get started with them now.
Just released are NV Freemium and the LoadRunner 12.50 Community Edition with Network Virtualization; both are free, and both give you results now.
Learn more now.
I’ve added some new ones and need to take one out. At some point, there should probably be a bunch of Scaled Agile haiku.
See the agile haiku page.
After recording, interviewing, editing and mixing, it’s finally here! Welcome to the first episode of Working As Designed: the uTest Podcast. This will be a place where you can find special interviews with helpful testing tips, the latest uTest news, and where you can meet the Community team! Feel free to leave a comment here, […]
The post Official Release of Working As Designed: the uTest Podcast, Episode 1 appeared first on Software Testing Blog.
Missed our earlier webinar, “Beyond the Release – CI That Transforms”? Check out the recap below.
In this webinar, we discussed the power of CI and possible considerations for the future. One of the more interesting aspects of the webinar was seeing the modern pipeline with the new Sauce CI Dashboard alongside CloudBees’ automation templates. We also conducted a small CI survey at the beginning of the webinar, and ended with a Q&A.

The Power of CI
Continuous Delivery and Deployment (CD) steal the show in DevOps conversations, but the reality is that Delivery and Deployment are not for every organization, nor are they yet widely adopted. In order to move on to delivery and deployment, organizations must get Continuous Integration (CI) right — unless they were built from day one on the DevOps framework and never had to fit the processes into existing environments.
The reason CI is so powerful is that it allows you to dip your toe into the modern delivery pipeline without the risk, complexity, and potential for failure of building out delivery and deployment all at once. You can consider CI the on-boarding for DevOps. And from a process and tool standpoint, CI is nearly identical to delivery and deployment, which means that once you get it right you can easily move on.

Webinar – CI Survey Results
I’m pleased to say the results of the Sauce Labs CI survey were almost exactly what I expected, served with a side of surprise. For me, the most interesting aspect of the survey results is how they appear to be in conflict with the perceived high CI adoption and success rates already existing in the market. Let’s look at the results among 500+ attendees:
What Types of Automated Tests do you run?
- Unit 28%
- Functional 40%
- Integration 27%
- None 6%
6% of the attendees are not running automated tests at all! This was astonishing to me. I expected 1% at most, especially given this audience, which is already familiar with automation. At a minimum, I would expect all companies to automate unit testing. However, there is a high likelihood that what this 6% is really saying is that they have automated tests that are simply initiated manually.
The results also showed that many are running functional tests. This is great! However, only 27% are running integration tests. This is troubling, because it would seem to contradict the 45% who report already doing CI. I suspect this is a definition problem: some may define CI as simply a shared testing environment, rather than the full CI process as described in the webinar.
Do you have an Integration environment?
77% of the audience reported having an integration environment, yet only 27% have automated integration testing. This could be an indication that infrastructure was the focus over process.
And the theme continued.
Where are you with CI?
- Thinking about it – 16%
- Just getting started – 37%
- Got it down; ready to take the next step – 45%
45% have it down! This also surprises me, as the earlier numbers indicate that 74% have not yet completed automation of the entire stack. Without full CI automation, it is a stretch to move to CD, where you have no choice but to automate everything. But the fact that 53% of organizations have not started or are just starting is consistent with my observations.

Webinar Q&A – Follow-Up Answers
Now I’d like to showcase my favorite part of the webinar, the Q&A. We received a great set of questions and were able to address most during the event, but here are the ones I want to highlight with additional responses:
Q: What are the challenges of using CI in a Cloud environment?
A: One of the misconceptions about CI infrastructure is that it is one location, and static. The idea of integration infrastructure is that it can be spun up and torn down on demand, with the best system used for each stage of the process. For example, it is not necessarily true that all functional tests run alongside the unit tests. The biggest challenge of using Cloud solutions is integration and morphing existing processes. Where CI fails most often is in the lack of planning around the process. If the process is solid, then it is relatively easy to introduce any number of integration environments and have oversight of their results and repeated usage.
Q: What cultural factors are absolutely necessary for CI to (a) happen and (b) sustain?
A: Shared motivation for results, no barriers, and no ownership. Integration environments are like the application/code café. Everyone comes together. Which means that the creation of these environments needs to be autonomous, and open. For example, there cannot be a ticketing process to obtain CI VMs or access to Cloud solutions. And there needs to be flexibility in who can do what. So this means there cannot be any barriers between IT, Dev, and QA. QA should be able to suggest changes to the entire team, for example. You can achieve this by defining a shared objective that is all about results. The results equal finding bugs and resolving them faster. With this goal, more commits and more iterations in the integration environment will happen naturally. This drives more releases per commit, more automation in the CI environment, and more interaction among the entire team.
The beauty of the CI process is that failure is inevitable, and when issues occur, they have little impact on anything but time. So adding sensitivity to the environment only limits the ability to use it. What needs to be well established are configurations on the environment, such as framework versions, etc. This is where using a Cloud solution is nice because it ensures that consistency. Deploy, deploy, test, test as much and as fast as you can.
Q: How does CI help maintenance of a small application when the cost of maintaining CI is a bit high for the application?
A: It is true that the larger the team and application, the more justifiable the cost of infrastructure and tooling around CI. But on the flipside, setting up API calls and webhooks is easier for smaller applications than for large ones, generally because there are fewer dependencies and fewer integration points. For small applications, the goal should be CI that is 100% PaaS-based in Cloud testing environments, so that the ONLY effort is integration. This comes at a low cost when it is done by developers while they are coding.
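The "API calls and webhooks" integration mentioned above is small enough to sketch. Below is a minimal, vendor-neutral illustration of the receiving side of a repository webhook: verifying a GitHub-style HMAC signature before triggering a build. The secret and payload values are invented for the example.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub-style 'sha256=<hexdigest>' webhook signature.

    Returns True only when the HMAC of the raw payload matches the header,
    using a constant-time comparison to avoid timing attacks.
    """
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Illustrative values -- in practice the payload and header come from the POST request.
secret = b"ci-shared-secret"
payload = b'{"ref": "refs/heads/master"}'
good_sig = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

print(verify_webhook(secret, payload, good_sig))        # valid signature
print(verify_webhook(secret, payload, "sha256=bogus"))  # tampered signature
```

Once the signature checks out, the handler can kick off the build; everything else stays in the PaaS.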
Q: How effectively can we integrate a test automation suite into CI?
A: The basis of this question is a bit concerning because it would imply that the automation was developed in a silo. Good automation should be transportable from ad-hoc boxes to a standardized CI environment. And the engine that drives the combined set of automation is fairly easy to wire up. What is missing in this question is the type of testing.
One of the biggest benefits Sauce brings to functional testing specifically is the offloading of massive testing grids that come at a large dollar and opportunity cost. And here, like many other Cloud solutions, the integration point is an API with a secure tunnel to your on-prem or Cloud IaaS testing environment which interprets the results, runs scripts from your testing suite, and manages the rinse and repeat.
Q: Does optimized and good usage of CI actually have much of an effect on quality or release schedule? It’s only an automation of the checkout-build-deploy-test process, so it should technically not have any effect on quality of product or release schedule, right?
A: Ouch. This one hurt my soul a little. If CI is not substantially increasing the quality of your application and the number of releases you are working on, you’re doing it wrong. It should not just be a matter of automating existing processes. Instead, it should go like this:
Developers’ commits end up in fresh CI environments, therefore there will be more frequent releases to CI, so it follows that there will be more frequently run automated tests.
With more tests, more bugs are caught before delivery, thus the cost per bug is less because it happens earlier on in the process and there are fewer bugs in production.
It follows that the cost of resolved bugs is less, both in dollar and opportunity cost to the backlog … and all this means that your customers are happier.
If you consider volume of releases only, you can find ROI. But you can go beyond that: if you have a great CI environment, you can fail forward with higher risk functionality. This means product improvements come much faster, so both the real and opportunity ROI are tremendous.
We often forget that proactive is far better than reactive. Similarly, we forget that the more bugs you have, the more bugs you will have, so without CI you are increasing the cost of all downstream processes.

Conclusion
When you get CI right, you can move downstream to high-speed testing on mock applications, service virtualization, and pipelines driven by containers. On the flipside, if you do not get CI right, you cannot expect to move on to delivery and deployment.
Based on the survey and questions, it seems that there is a lot of confusion both on the definition of CI, and where the market actually is with CI maturity. It would indicate to me that there is a lot to learn when it comes to CI, and that there are a ton of possibilities for improvement as well.
Chris Riley is a technologist who has spent 12 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling. In addition to being a research analyst, he is an O’Reilly author, regular speaker, and subject matter expert in the areas of DevOps Strategy and culture and Enterprise Content Management. Chris believes the biggest challenge faced in the tech market is not tools, but rather people and planning.
Throughout his career, Chris has crossed the roles of marketing, product management, and engineering to gain a unique perspective on how the deeply technical is used to solve real-world problems. By working with both early and late adopters, he has watched technologies mature from rough solutions into essential and transparent ones. In addition to spending his time understanding the market, he helps ISVs selling B2D and practitioners of DevOps Strategy. He is interested in machine learning, and the intersection of Big Data and Information Management.
Through my Share Your PurePath program I can confirm that many software companies are moving towards a more service-oriented approach. Whether you just call them services – or Micro-services doesn’t really matter. If you want to get a quick overview of the concepts I really encourage you to read Benjamin Wootton’s blog and the comments […]
The post Monolith to MicroServices: Key Architectural Metrics to Watch appeared first on Dynatrace APM Blog.
And why wouldn't it be? After all, if industry analysts and virtualization experts are to be believed then cloud based computing and business solutions are going to be the NEXT BIG thing of this decade.
So I guess it is only natural if you find yourself asking questions like 'what is cloud testing?', 'how do I test on the cloud?', 'how can we use the cloud to improve our testing?', 'how does the cloud change how we used to test?' and so on.
However, since these queries pose different questions, the answers to them are unique. For starters, cloud testing simply means a testing environment that utilizes cloud infrastructure for performing software testing.
How to leverage Cloud to Transform Software Testing?
If you are someone who uses tools heavily while testing, then IBM (IBM Cloud) and Hewlett-Packard have already jumped into the market for software testing in the cloud. Thankfully, if done smartly, cloud-based computing can prove to be a great value-addition for both software development and testing. The reason is simple -- the very nature of a cloud-based infrastructure allows for great team collaboration.
As an added advantage, cloud-based testing (as well as programming) environments are easy to set up on demand. In today's tightly budgeted IT world, this can be a much bigger advantage than it appears at first. It is no secret that IT managers operate under very tight budgetary constraints, and when it comes to the testing phase, the budget is even smaller.
Traditional approaches to setting up a test environment involve the high cost of setting up multiple servers with various OSes, hardware configurations, browser versions, and so on. And if you are going to simulate user activity from different geographic locations, you will have to set up test servers with localized regional-language OSes, which in turn adds to the cost. But with cloud-based infrastructure, the team doesn't have to set up expensive physical servers -- setting up a new testing environment is fast and efficient, and VMs (virtual machines) and test servers can be launched and decommissioned as needed.
On the other hand, as a tester you might also be required to test one of those ever-emerging cloud-based SaaS applications that aim to cater, on demand, to customer bases large and small. If you are testing such a cloud-based application, then your challenges are twofold, because testing all the layers - from your application down to the cloud service provider - is something that you as a tester will have to become efficient at.
As a closing note, if you are a tester intrigued by all this buzz surrounding cloud testing, here are two main reasons why you might consider trying it out -- cloud-based software testing infrastructure greatly helps in reducing capital expenditure, and these testing setups are highly scalable, allowing your team to expand or decommission test servers on demand, as needed.
Are you someone already using cloud testing? Share your experience with me and other readers by leaving your comment below.
I started writing this blog when I began my software testing career (exactly nine years ago today), and I don't know about you, but I have run into plenty of software testing traps while working on various testing projects at various stages of my career. And every time I ran into them, it gave me a chance to look for magic spells, ways, methods, techniques, tricks, tips and anything and everything that could help me get out of such situations. Today's article is a compilation of the top five traps I've run into in my software testing career and some of the ways that helped me overcome them, in my context. The following case points and suggested solutions can help you overcome many common real-life software testing problems.
#1 Running Out of Testing Ideas?

This is by far the most common problem that a tester can run into on a project. How many times have you been in a situation where you didn't know what else to test, or how? I call this phenomenon "tester’s block syndrome" [a condition, associated with testing as a profession, in which a tester may lose the ability to find new bugs and defects in the software that (s)he is testing]. If you're curious, which you should be (if you are or aim to become a good tester), then you can read more about it in the article titled The Se7en Deadly Sins in "Software Testing" that I wrote a while back.

How to overcome this trap?

Pair Testing: You can use pair testing to your advantage to generate test ideas that seem to have dried up when you work alone. Pair testing is simply a technique in which two testers work as a pair to test the software under test.
BCA (Brute Cause Analysis): Testers can employ this unique brainstorming technique in which one tester thinks about a bug while the other thinks of all the possible functions and areas where that bug could manifest.
Think 'Out of the Box': Instead of thinking only about the feature/function/application in front of you, try thinking in the opposite direction. Take a step back and reassess the situation. Have you been trying to run functionality tests when you ran out of ideas? How about performance, load and stress tests? How about tests involving data, structures, platforms, browsers, devices, operations?

#2 Missing the Testing Goal?

How many times were you in a team meeting where your manager or someone from the dev team was talking about a cool new/enhanced feature that needs testing, and everybody else in the room appeared to be 'getting it' while you alone had no idea what it was? In such a situation, nodding your head as if you understand everything may seem like the natural (easy) path, but trust me, it is not the best path to take unless you want to end up in trouble later, in the test planning and execution phases of this feature!

How to overcome this trap?

Ask Relevant Questions: The importance of good questioning skills cannot be stressed enough if you plan to be an excellent tester. And this very skill can come to your rescue when you are trapped in a situation like the above. It is better to admit you don't understand something and get it clarified than to not admit it and remain ignorant for the rest of your life.
Brainstorm: Okay, so you have asked tons of relevant questions about the upcoming feature/application/product that needs testing and have taken notes. Now what? Now is the time to pull your testing team together and brainstorm all sorts of possible test ideas, strategies and plans for this test project, gathering the ideas that come spontaneously from your teammates.
Read between the lines: More often than not, when you start working on a new product, technology or even a tool, you can find some level of documentation to help you get started. But a word of advice -- take everything that you read there with a pinch of salt. I'm not saying not to read it at all. But when you do, be alert to all the things that might not have been put down in words but are implied. Sometimes, proactively finding and understanding these implied messages in the project documents can help you in a big way to understand the testing goal.

#3 Suffering from Inattentional Blindness?

How many times have you missed a very obvious bug or defect or error that was right there on the screen, staring right back at you, and yet you missed it because you were busy ticking off the other test items from the testing checklist or executing the test case document? Situations like these can be very embarrassing, not only because you missed something so basic and so obvious, but because it happened while you were busy religiously following the test cases to find exactly such things!

How to overcome this trap?

Stop Blindly Following the Test Case and Test Matrix: Before starting to use a test case for your testing, always ask yourself the following questions, and then adjust your test cases to fill any missing links.
- "What are the things that are covered by this test case? What are not?"
- "What portion of the product functionality does this test case cover?"
- "Can this test case be tested in any other methods or ways? If yes, how?"
Change the Focal Length of Your Testing Approach: When following the test cases and test matrix to test something, keep an open eye for anything else that might be going on during test execution. Explore other related areas even though they are not mentioned in your test case/matrix. A control that flickers a little when you save your inputs in another section of the form, a ding from the speaker when a certain button is clicked, a slight change in the color of a Submit button when you click inside another text area -- any of these subtle-looking behaviors may be an indication of an approaching catastrophic system failure.

#4 Not Sure if 'It' is Really Working... or Not?

How many times have you come across issues that you didn't report as errors and bugs because you were not sure whether it was really a bug or something that you did wrong, and later those same issues were found by a coworker or your manager or, god forbid, your clients or customers?

How to overcome this trap?

Trust Your Tester's Instinct: If your instinct is telling you that something is fishy and what you're observing and experiencing could very well be a bug, then follow your instinct and report it to the devs. After all, what is the worst-case scenario? The devs might come back and say it is something you did wrong (misconfiguring certain settings, misunderstanding the actual feature, etc.) and not a bug. That is still much better than ignoring it on the assumption that it might not be a bug, only to have your manager or a customer find it later.
Start with a fresh set of eyes: Fresh eyes find bugs. If you are still unsure, take a short break, then retest and confirm whether what you're seeing really is a bug.
Have it tested by a fellow tester: Pick one of your fellow testers and ask them to go through the same test scenario and see what they come up with.

#5 What to Test and What can be Skipped... Safely?

How many times have you been in a situation where you felt overwhelmed by the number of possibilities and choices in approaching testing? With software and technology becoming more complex day by day, the number of things that a tester needs to consider while testing can often be overwhelming. And with the project deadline approaching fast, it can be very challenging to decide what to test, where to begin, how to begin and what can be skipped.
How to overcome this trap?

Gather Intelligence Data: First of all, look at the existing bugs in your bug tracker and make a note of the critical ones. Talk to developers and ask them to think of the top 10 most critical things in the product that affect the majority of end-user functions, and make a list of those too. Go through the review docs, user manuals, implementor's guide and basically anything that can give you an idea of the things that are going to matter most to your customers and end users.
DIQ approach (Dive In/Quit): Now that you have the list of all these important things that need testing, let me introduce you to the magical DIQ (Dive In/Quit) approach. In this approach, pick any of the most critical test items and just dive in and test. While testing, if an item proves too hard, quit and take another item; dive in and test until you have exhausted all your test ideas on it. Repeat! So basically you take an item > dive in > quit when you can't test it any further > repeat with another item > come back to the initial item when you have finished all the others.

#And finally... Learn to Accept FAILURE, once in a while!

Due to the intrinsic complexity of modern-day software and communications systems, software testing is becoming more complicated, and more efficient and effective testing heuristics, techniques and methodologies need to emerge as a result. If you are not evolving fast enough as a tester, the chance of failure is high, and you should be prepared to face failure once in a while. After all, we are testers, not magicians! But as long as you are learning from your past mistakes, upgrading your testing skills and updating your testing heuristics to accommodate those mistakes so they never happen again, I think you should be fine.
I realize that it may be hard to rank these (dumb) reasons that people use for not doing enough testing (and not hiring enough testers), but here are the top five stupid reasons people don't hire testers. Read on...

My Product isn't Finished Yet

In today's rapid development age, when methodologies like agile scrum and sprints are mainstream, how much more absurd could your excuse be than this one? Even if you work in an environment where several beta versions are released before the final product, are you willing to risk losing your customers' trust and your reputation by releasing versions that are laced with defects? And are you willing to bet that your star programmer won't get fed up reading through and fixing hundreds of customer-reported bugs every day, and leave to join another organization with a dedicated testing team and proper QA methodologies in place? The sooner you realize the importance of finding and fixing bugs early in the product cycle, the more it will save you -- not only revenue but also reputation.

Quality is Everyone's Responsibility; No Dedicated Testers are Needed

Such excuses usually come from teams that (at least believe they) follow the mantra "Quality is everyone's responsibility", and hence jump to the misconception that you can get great results without dedicated testers. Theoretically, this all works. But the problem begins when everyone starts assuming that the folks in the other cubicles are already testing the product, and hence it is okay to skip it. An extension of this excuse that I hear frequently is that programmers will become lazy and write buggy code if they know there is a testing team responsible for finding the defects. But let's face it; programmers are either lazy or they're not!
A programmer who takes pride in his work will rigorously test his code whether or not you have a dedicated team of testers.

We have Budget/Time Constraints

Who doesn't? Do-it-yourself testing by your programmers can save you some dollars and can even be effective (if they’re imaginative). Also, it is still cheaper to hire an average tester than an average programmer. And if you don't hire testers, you're going to end up having your programmers do the testing. From my own experience, not only are programmers mediocre when it comes to testing, they also tend to overlook errors in their own code more readily than a tester would. Everyone has budget constraints. But great product teams are good at realizing the importance of having a dedicated QA team on board, and they know it is more of an investment than an unnecessary expense. And here are some things to consider if you're worried about testing on a tight schedule.

My Product is Perfect. It doesn’t need Testing.

Actually, NO! If your product is perfect, then either it is not a product or it isn't actually perfect. Either way, all products need testing as long as they are complex enough to qualify as good, usable products (software, websites, web applications, etc.).

A separate QA team can build an 'Us vs Them' mentality, which is not Healthy

I've worked in teams where test and dev reported to the same manager, and also in teams where the testers reported to dedicated test managers. In my experience, both of these can work well, provided office politics is kept under control and the team's manager takes responsibility for ensuring so.
Good teams realize that a dedicated testing team is essential to the team's overall success. The QA team not only saves the programmers a lot of time (and credibility) by helping them fix defects before those defects find their way to customers, but also saves the stakeholders the substantial revenue that would otherwise be spent fixing bugs post-release and shipping subsequent patches; not to mention the frustrated customers and angry investors. As for the 'Us vs Them' mentality, it is in the hands of the team's managers and the stakeholders, and how they manage their resources. There is a reason people still use the old saying -- 'garbage in, garbage out'!
Let me know if I missed any more stupid excuses that people make to justify their decision not to hire more testers and not to do more testing. Happy Testing...
Hunter Industries’ product line includes patented gear-driven rotors, water-efficient sprinklers, weather sensors, valves, and controllers, as well as high-quality LED and low-voltage outdoor lighting. Many of their products have internal controllers that manage user settings and commands.
Prior to 2010, the engineers who worked on electronics and coding were also doing the testing for the controllers, and requirements, test cases, and defect tracking were being managed with a combination of Word documents and Excel spreadsheets. This process was sufficient for years, but as the company grew, manual review of the development and testing process to ensure quality and completeness became a burden for the controller division team.
While the team was already using TestTrack for defect tracking, their new software QA manager had extensive experience with competitor HP Quality Center.
The Seapine team worked closely with the new QA manager during both the assessment and budget development processes, providing temporary licenses so he could fully test the functionality of TestTrack. In the end, TestTrack won Samara over.
“Both [HP Quality Center and Seapine’s TestTrack] had the main features we needed, but Seapine as a whole offered such incredible customer service that it really tipped the scales for us,” said Kifah Samara, Software QA Manager.

TestTrack Ensures Accountability and Confident Results
Soon after expanding their use to the full TestTrack suite, the controller division realized a new level of certainty. Samara no longer has to spend hours manually reviewing everyone’s work. With just a click or two, he can see if all the requirements are covered, easily identify any significant turning points, and confirm all the defects are linked to a test case.
From a management perspective, there is more accountability at all levels of the development process, with much less effort required from the quality assurance manager and project leads. Responsibilities and workflows are crystal clear, and results are logged every step of the way. To learn more, read the customer story.
The post Hunter Industries Strengthens QA Process with TestTrack appeared first on Blog.
The key aspect of this talk was the extension of the “code-as-configuration” model to nearly the entire Jenkins installation. Starting from a chaotic set of hundreds of hand-maintained jobs, corresponding to many product versions tested across various environmental combinations (I suppose beyond the abilities of the Matrix Project plugin to handle naturally), they wanted to move to a more controlled and reproducible definition.
Many people have long recognized the need to keep job configuration in regular project source control rather than requiring it to be stored in $JENKINS_HOME (and, worse, edited from the UI). This has led to all sorts of solutions, including the Literate plugin a few years back, and now various initialization modes of Workflow that I am working on, not to mention the Templates plugin in CloudBees Jenkins Enterprise.
In the case of Camunda they went with the Job DSL plugin, which has the advantage of being able to generate a variable number of job definitions from one script and some inputs (it can also interoperate meaningfully with other plugins in this space). This plugin also provides some opportunity for unit-testing its output, and interactively examining differences in output from build to build (harking back to a theme I encountered at JUC East).
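For readers unfamiliar with the Job DSL plugin, a seed script in this spirit might look like the following sketch, generating one build job per product version from a single list. The version numbers, job names, and repository URL here are invented for illustration.

```groovy
// Seed job script: one definition, many generated jobs.
def versions = ['7.1', '7.2', '7.3']   // illustrative example inputs

versions.each { version ->
    job("product-${version}-build") {
        scm {
            git('https://example.com/scm/product.git', "release/${version}")
        }
        triggers {
            scm('H/15 * * * *')        // poll SCM roughly every 15 minutes
        }
        steps {
            maven('clean verify')
        }
    }
}
```

Because the job definitions are ordinary code, they can live in source control, be reviewed like any other change, and regenerate the whole job matrix on every seed build.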
They took the further step of making the entire Jenkins installation be stood up from scratch in a Docker container from a versioned declaration, including pinned plugin versions. This is certainly not the first time I have heard of an organization doing that, but it remains unusual. (What about Credentials, you might ask? I am guessing they have few real secrets, since for reproducibility and scalability they are also using containerized test environments, which can use dummy passwords.)
As a nice touch, they added Elasticsearch/Kibana statistics for their system, including Docker image usage and reports on unstable (“flaky”?) tests. CloudBees Jenkins Operations Center customers would get this sort of functionality out of the box, though I expect we need to expand the data sources streamed to CJOC to cover more domains of interest to developers. (The management, as opposed to reporting/analysis, features of CJOC are probably unwanted if you are defining your Jenkins environment as code.)
One awkward point I saw in their otherwise impressive setup was the handling of Docker images used for isolated build environments. They are using the Docker plugin’s cloud provider to offer elastic slaves according to a defined image, but since different jobs need different images, and cloud definitions are global, they had to resort to using (Groovy) scripting to inject the desired cloud configurations. More natural is to have a single cloud that can supply a generic Docker-capable slave (the slave agent itself can also be inside a Docker container), where the job directly requests a particular image for its build steps. The CloudBees Docker Custom Build Environment plugin can manage this, as can the CloudBees Docker Workflow plugin my team worked on recently. Full interoperation with Swarm and Docker Machine takes a bit more work; my colleague Nicolas de Loof has been thinking about this.
The other missing piece was fully automated testing of the system, particularly Jenkins plugin updates. For now it seems they prototype such updates manually in a temporary copy of the infrastructure, using a special environment variable as a “dry-run” switch to prevent effects from leaking into the outside world. (Probably Jenkins should define an API for such a switch to be interpreted by popular plugins, so that the SMTP code in the Mailer plugin would print a message to some log rather than really sending mail, etc.) It would be great to see someone writing tests atop the Jenkins “acceptance test harness” to validate site-specific functions, with a custom launcher for their Jenkins service.
All told, a thought-provoking presentation, and I hope to see a follow-up next year with their next steps!
We hope you enjoyed JUC Europe!
Here is the abstract for Christian's talk "From Virtual Machines to Containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability."
Here are the slides for his talk and here is the video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
We're currently setting up a program to support community members' travel to Jenkins community events. Our goal is to enable more members of the community to meet each other and exchange ideas in person.
We're still hashing out the details, but it'll be available to every Jenkins community member. Apply, telling us what Jenkins-related event you'd like to attend and how awesome you are, and we may support your travel with up to 500 USD. For details on how this will work, see the current draft of the travel grant program.
The first person to be supported in this way is Pradeepto Bhattacharya from Pune, India. He was a speaker at this year's JUC Europe in London, and will give two talks at JUC US West next week—and we're helping him get there! He asked us a few weeks back whether the Jenkins project could support his trip to the US. We came to the conclusion that this would be a good idea—so good, in fact, that we decided to build a regular program from it.
Are you planning to attend a JUC or similar event, but worry about the cost of travel? We may be able to help you out!
As a product manager, some days it seems like half of my time is spent calming down team members who are freaking out about the project. And on those days, most of the other half seems to be spent re-establishing and maintaining team alignment on a product release. (Usually this is because the people who need to calm down have spread their uncertainty around to others.)
I used to feel sidetracked, because I felt I should be spending my time on market research, monetization plans, supporting sales and marketing, planning and prioritizing features, and helping the team decide on a UX improvement or re-prioritize a feature because of some unforeseen event. I didn’t expect to spend so much time keeping people calm, repeating the product vision and priorities, and keeping the team on track. I’m a collaborative person, so I always have buy-in before proceeding. Why, then, do some days feel like most of what I do is listen, talk, repeat myself, listen, talk and repeat myself some more? Didn’t we all agree on this stuff? What’s going on?
At first, when someone freaked out over something we had settled on and agreed to as a team, it was tempting to just point them to the product vision statement written in huge letters and posted above our development board. Or to open the product roadmap and point at it as the person ranting to me insisted we should be doing something else. What happened to the agreement we had as a team, and more importantly, to your own agreement and endorsement? What happened?
What I learned is, a lot of times people are just freaking out. Some people freak out more than others, and we all freak out in different ways. As a product manager, I am often the first person they come to when they are freaking out. And, an important part of my job is to deal with freak outs. If I can keep people calm, focused and productive, that helps us reach our shipping targets.
The truth is, we are all emotional creatures and software development projects are incredibly difficult. We all freak out a bit on projects from time to time if we look at the big picture and think of all the work we have to do in such a short time. At any point in time during a software development project, it can be overwhelming if you think about it too much. However, people sometimes also freak out for good reasons.
Techies may get worried that we can’t meet commitments, or implementing a new technology isn’t going as well as planned. Or we may just get bored of the technology, learn about something new, and feel that there is a better way to move ahead than our current roadmap and plan. As business people, we get all kinds of tantalizing offers from potential customers, and we are very swayed by the people who are willing to spend money. It can take what seems like forever for a technical team to deliver a release (because it is so labour intensive), so we may get distracted away from the current release, and start talking about an idea for some time down the road. To be honest, I get freaked out a bit too. Here are some recent minor freakouts:
- Did we pick the correct web framework? What if it doesn’t support the browser that our first client uses in their organization!??
- Did I do enough research with our monetization model? What if my research and recommendations and my interpretations of consultations with experts are wrong?
- Did I misinterpret the usage statistics and engagement metrics from our apps which could potentially mess up our priorities for the next release? At worst, what if we change something that the majority of people are using and annoy our best customers?
See? I can be as neurotic as the next person. I deal with my own freakouts by having my own personal support network. Sometimes I just need to vent to someone else who isn’t directly involved with the team. Other times, I need to bounce ideas off of senior team members to re-evaluate our current path. What if I am missing something important? I also research any freakouts when others agree. And sometimes, we just have to find out and adjust accordingly when we release. If I guided us towards the wrong monetization model and the wrong priorities, we just need to be on top of it and adjust based on market feedback.
Once in a while, someone is right to freak out and disrupt our current path. They had some doubts, researched and realized we are off track and we need to change now. How do I tell the difference between someone who is freaking out and needs a bit of reassurance, and a real issue we need to deal with right now?
Usually, we get unsettled for reasons that we can’t quite explain. Some people say our subconscious or our intuition is at work here. When someone brings up an issue but can’t provide a clear explanation of what is wrong, let alone an alternate solution, it is important not to dismiss it. So I always ask for proof driven by research. Can you spend some time looking into the problem to see if it’s a real issue, or just a minor freakout? If we need to change, why? What is the business case? Where is the evidence I could use to show stakeholders we are off track? Once a team member starts to make their freak out defensible, one of two things usually happens:
- once they start to research, they realize their freak out is unfounded, and they calm down by looking at evidence that supports that we are on the right track
- as they gather evidence, they are able to reinforce their misgivings by forming a better idea of what the problem is, and are able to communicate it much more convincingly
In the first case, we just let it go and move forward. In the second case, the freakout turns into a strategic business or technical decision.
Sometimes, freakouts are symptoms of communication problems, poor tools (or poor use of tools) or other issues unrelated to the project itself. These are important to watch for too, because even simple issues can cause people to spend time freaking out instead of working because something is wrong.
As my colleague Aaron West points out, it’s important to provide a safe environment and provide permission for people to freak out. If I am the person they feel safe freaking out to, and we can deal with their feelings in a healthy way, that minimizes them having to go to others and sidetracking them. If the environment is repressive, and there aren’t healthy outlets, people will undermine the mission of the project by releasing that tension in other ways.
When I presented for the first time at a large international software development conference, I was really nervous. The facilitator could tell, and tried to calm me down with a little pep talk. She told me about her office, how outnumbered the technical IT members were, and how they had insane deadlines with multimillion dollar projects all the time. The projects were heavily publicized, so any delay brought embarrassment to the company, as well as the potential to lose money. It sounded like a very stressful environment to work in, but she said she had learned to thrive there. It was a really easy place to get freaked out in, and a small team could waste time and effort if they got freaked out over the wrong things. So, she printed out a huge poster that said DON’T FREAK OUT! and hung it in the development team bullpen. That didn’t stop people from freaking out, but it helped calm people down, and reduced the freakouts over trivial issues. If someone came to her freaking out, they were freaking out for the right reasons. The message was that I really didn’t need to freak out about presenting my talk: there are a lot more important things in life to freak out over than public speaking.
Sometimes I have to tell myself not to freak out and I remember that pep talk and that DON’T FREAK OUT! poster. Is it really worth freaking out, or am I just stressed and worried? Late at night on a stressful project, all kinds of problems seem large and insurmountable. I have to ask myself, is this worth being freaked out over? Do I need to ask for another opinion? When others come to me freaking out, I need to help them either channel that energy into something productive, or decide that they are freaking out about something important, and we do need to change what we are doing.
Different projects have different levels of freak outs, and they tend to occur during the more stressful times on a project. I have to remember that helping people navigate freak outs is just as important as the cool tasks in my job description.
New technologies are often built upon the successes and ideas that have come before them. Always building off and taking advantage of previous technology, development and advancement becomes an iterative process. Leap Motion has taken this process to heart recently, using a Hackathon in San Francisco to augment the Oculus Rift and turn it into an Augmented […]
I get lots of questions from people who want to initiate change in an organisation that doesn’t want to change. It’s a very common question…
The post 10 Ways To Initiate Change In An Organisation That Doesn’t Want To Change appeared first on Rob Lambert.
Get firsthand training with Ranorex professionals and learn how to get the most out of Ranorex Studio and the Ranorex Test Automation Tools at one of these workshops.
Look at the schedules for additional workshops in the next few months.
We look forward to seeing you there!
Following last month’s news about support for Windows 10, we’re tickled to announce that Sauce Labs now also supports automated testing on Microsoft Edge. As part of this update, we have upgraded our version of Edge from v.11 to v.20, adding more stability for both manual and automated tests.
In order to run a test on Edge, you would specify the following desired capabilities (or build the code, including advanced capabilities, using our Automated Test Configurator):
"platform": "Windows 10",
Log in to get started – happy testing!
A collaboration by David Grabel and Mario Moreira
You’ve just been given a plum assignment, heading up a major new application development project. Congratulations! Your boss just got off the phone with a large off-shore contracting firm. At the labor prices they are quoting, we’ll save a fortune and come in under budget. He knows that you’ve been experimenting with virtual teams; it’s time to kick this into high gear and really cut our labor costs. DON’T DO IT!

By the time you factor in the extra costs for travel, the high cost of the locally based support personnel (project managers, architects, etc.), the increased systems and telecommunications costs, the miscommunication caused by the lack of face-to-face conversations, and the rewrites this will require, the cost savings will evaporate. They will never complete it on time, and the missed revenue alone will eat up all of your savings.
There are good reasons to rely on virtual teams. Cost savings is not one of them. Real world constraints can make virtual teams unavoidable. Your company might have a liberal “work from home” policy. Your development centers could be scattered about a large campus, across the country or around the world. You may have a strong relationship with an off-shore development company that has delivered high quality software on time in the past. You might be partnering with a company from another state. All virtual teams are distributed, whether it’s a single member working from home or dozens of teams scattered around the world. Co-located teams will almost always be more efficient and effective than virtual or distributed teams. When virtual teams are unavoidable, the key to success is to Be Agile. If you follow the agile values and principles you can successfully deliver valuable working software, quickly, with high quality, even with virtual teams. Let us explore how the Agile values and principles can be put into action to help with virtual teams.
- Value people and interactions over processes and tools by enabling virtual and physical face-to-face conversations. If team members are in different time zones, encourage flexible hours and provide high quality video conferences for stand-ups and other ceremonies. Supplement the teams with collaboration tools. Bring the teams together periodically to learn each other’s business contexts, cultures, and individual needs. Virtual teams necessarily need to rely more on electronic tools like agile project managers and collaboration software. These tools help, but virtual teams need to nurture the interpersonal relationships that allow trust to develop. Trust within and across teams is vital to agile success.
- Value working software over comprehensive documentation by writing stories about the user’s experience and by delivering small increments quickly based on those stories. The traditional wisdom has been that virtual teams require very detailed requirements and design documents. These heavyweight artifacts don’t exist on agile projects. Very detailed requirements create the illusion of completeness and accuracy. All those details about what the system shall do obscure the problems we are trying to solve.
- Value customer collaboration over contract negotiations by scheduling regular demos with customers to get their feedback and deepen the understanding of the business problems to be solved. This is more important than checking the boxes on a requirements document. This is a case where virtual meeting tools can bring remote teams and customers together even when they are physically apart.
The agile values and principles are the best guiding lights available today to make virtual teams work. If you have to use virtual teams, please consider using agile practices and staying true to the values and principles. To learn more about virtual teams and best practices to make them successful consider attending the webinar “Virtual Teams- Future or Fiction”. For more information go to http://www.eckfeldt.com/virtualteams/grabel
We’re so excited to bring you Working As Designed: the uTest Podcast. So excited, in fact, that we decided to give you a sneak peek into what to expect in the first episode of WAD: the uTest Podcast- which is expected to be released later this week. Don’t forget to take the discussions over to […]
The post Here’s A Sneak Peek at Working As Designed: the uTest Podcast appeared first on Software Testing Blog.