Putting quality first is critical. Teams must take ownership of quality, but to do so they have to create an environment that allows them to build quality in, instead of testing it out much further down the road to delivery. Until you reach the point of being able to prevent bugs (for example, by implementing BDD), finding them late is too costly. Ensure you can find them early.

Staying green is hard work!
I’ve seen many things change this year. My daughter began kindergarten. (How did THAT happen so fast?!) I also began blogging, and our department is trying to shift from Waterfall to Agile and Continuous Delivery, with teams shifting to own quality instead of tossing code over to QA… all great changes. But one thing has remained the same. We were still finding bugs late.
I’ve written many times about the importance of quality first. But how did our team take action on that? First, we HAD to have automation. Purely manual testing was just not going to cut it anymore. Don’t get me wrong, I still very much value human-based testing. But frankly, it can catch things too late. So, enter our automated tests. We began with what we called our pre-commit tests. These must be run — you guessed it — before you commit code! Yes, they are slower than unit or integration tests, but they take only around 7-8 minutes (time to go grab some coffee, stretch, whatever). They cover our most critical features and workflows. Aside from being run locally before commits, they are also scheduled to run many times throughout the day as commits come in. Once we established that set of tests, we began our work on more user acceptance tests – still hero workflows, but keeping in mind the fine line between useful UI tests and too many tests (think of the testing pyramid).
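As a sketch of how such a gate might be wired up, here is a hypothetical git pre-commit hook in Python. The marker name, test path, and use of pytest are assumptions for illustration, not details from our actual setup:

```python
#!/usr/bin/env python3
"""Hypothetical git pre-commit hook: run the critical "pre-commit" suite
before allowing a commit. A non-zero exit code blocks the commit."""
import subprocess
import sys

# Assumed convention: critical tests carry a "precommit" pytest marker.
CRITICAL_SUITE = ["pytest", "-m", "precommit", "tests/"]

def run_precommit_suite(runner=subprocess.call):
    """Run the critical tests and return the suite's exit code.

    The runner is injectable so the hook logic can be exercised
    without actually shelling out to pytest.
    """
    return runner(CRITICAL_SUITE)

if __name__ == "__main__":
    # git aborts the commit when the hook exits non-zero.
    sys.exit(run_precommit_suite())
```

Saved as `.git/hooks/pre-commit` (and made executable), a script like this makes "run the pre-commits before you commit" automatic rather than a matter of discipline.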
Unfortunately we entered what I call the “dark period” where our once green tests were failing. The reasons are many. (That’s another story for another day.) Resources were shifted (or flat-out gone), and priorities changed. Long story short, we had no one available to either write tests or tend to them. It felt like we were going back to square one. People didn’t trust the tests. If you can’t trust the tests, what’s the point?
Fast forward several months, and everyone recognizes we need the automated tests. We are in the process now of stabilizing our tests. We focused on those pre-commits first and got them green again – yay! They are so green now, that when there’s a failure, we know it is something in the code (and we don’t automatically assume that it’s the test). Now we are moving on to the other tests.

It works! It really works!
Once we were stable on those critical tests, we had to figure out how to get people to care. I was suddenly in the business of sales!
First, we had to show the tests were stable – show everyone they weren’t flaky, suffering from timing issues, etc. We had about twenty solid builds of GREEN. Pretty! But even better than the nice soothing green on our Jenkins dashboard, we had stability.
Then, we had to show they were catching things. (It seems counterintuitive, wanting to see your test suite fail, but stay with me a minute). My team (consisting of one other person) was constantly running the tests – even locally, between the scheduled runs on Jenkins. We recruited a few engineers to run these tests prior to committing their code. Then came the bugs – and our tests caught them! At first, we held our breath as we debugged to see if it was the test before alerting the engineering managers. (It wasn’t! We found a bug!) Since teams had originally deemed the workflows as critical, these bugs were prioritized quickly, fixed, and we were back to green.

Don’t find them too late—or you’ll pay
Automation has been critical to our success. While we are still working on it, having a set of useful tests (even a small set) has proven its worth. We have caught several bugs that otherwise would not have been found until up to two weeks later. Why does this matter? (Greg Sypolt discusses the cost of a bug depending on when it is found in this blog post, based on research presented by IBM at the AccessU Summit 2015.) Say you find a bug when running locally as you’re still working on that feature – That’s $25. Wait until a test cycle? $500. Find the bug in production? You’re looking at a cool 15 grand. That’s right. $15,000.
As we stabilize our tests, we are reducing the risk of finding bugs in later cycles (whether testing or in production). That adds up. The reality is that you will introduce bugs as you code. It happens. But WHEN you find them is the game changer. Sure, it takes a lot of effort—an effort that many underestimate—but it will save you in the end.
Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and will preach the value of Test-Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.
We’re excited to announce that we’ve finally determined where and when Selenium Conf will be happening this Fall.
Our initial goal was to bring the event to a new country, but for a number of reasons that proved more challenging than we’d hoped. In 2012 we held the 2nd annual Selenium Conf in London, and we’re pleased to be bringing it back there this year!
The conference will be held at The Mermaid in downtown London on November 14-16:
- The 14th will be all-day pre-conference workshops
- The 15th-16th will be the conference
Go here to sign up for the email list for conference updates (e.g., when tickets go on sale) and to submit a talk. The call for speakers is open from now until July 29th.
SonarAnalyzers are fundamental pillars of our ecosystem. The language analyzers play a central role, but the value they bring isn’t always obvious. The aim of this post is to highlight the ins and outs of SonarAnalyzers.
The goal of the SonarAnalyzers (packaged either as SonarQube plugins or in SonarLint) is to raise issues on problems detected in source code written in a given programming language. The detection of issues relies on the static analysis of source code and the analyzer’s rule implementations. Each programming language requires a specific SonarAnalyzer implementation.

The analyzer
The SonarAnalyzer’s static analysis engine is at the core of source code interpretation. The scope of the analysis engine is quite large: it goes from basic syntax parsing to the advanced determination of the potential states of a piece of code. At minimum, it provides the bare features required for the analysis: basic recognition of the language’s syntax. The better the analyzer is, the more advanced its analysis can be, and the trickier the bugs it can find.
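To make the idea of syntax-tree-based rules concrete, here is a toy illustration in Python using its built-in `ast` module. Real SonarAnalyzers are far more sophisticated, and this rule (flagging comparisons to the literal `True`, a common maintainability smell) is only a stand-in for how a rule implementation can walk a parsed tree looking for a pattern:

```python
import ast

def find_comparisons_to_true(source: str) -> list[int]:
    """Return the line numbers of comparisons against the literal True.

    This mimics, in miniature, what a static-analysis rule does:
    parse the source into a syntax tree, walk it, and report the
    locations of a suspicious pattern.
    """
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for comparator in node.comparators:
                # `x == True` should usually just be `x`.
                if isinstance(comparator, ast.Constant) and comparator.value is True:
                    issues.append(node.lineno)
    return issues
```

More advanced engines go beyond this kind of pattern matching to track data flow and the possible states of variables, which is what lets them find the trickier bugs.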
Driven by the will to perform more and more advanced analyses, the analyzers are continuously improved. New ambitions in terms of validation require constant effort in the development of the SonarAnalyzers. In addition, regular updates are required to keep up with each programming language’s evolution.

The rules
The genesis of a rule starts with the writing of its specification, and this is an important step. The description should be clear and unequivocal in order to be explicit about what issue is being detected. Not only must the description of the rule be clear and accurate, but code snippets must also be supplied to demonstrate both the bad practice and its fix. The specification is available from each issue raised by the rule to help users understand why the issue was raised.
Rules also have tags. The issues raised by a rule inherit the rule’s tags, so that both rules and issues are more searchable in SonarQube.
Once the specification of a rule is complete, next comes the implementation. Based on the capabilities offered by the analyzer, rule implementations detect increasingly tricky patterns of maintainability issues, bugs, and security vulnerabilities.

Continuous Improvement
The analysis of other languages can be enabled by the installation of additional SonarAnalyzer plugins.
The SonarQube community officially supports 24 language analyzers. Currently, about 3,500 rules are implemented across all SonarAnalyzers.
More than half of SonarSource developers work on SonarAnalyzers. Thanks to the efforts of our SonarAnalyzer developers, there are new SonarAnalyzer versions nearly every week.
In 2015, we delivered a total of 61 new SonarAnalyzer releases, and so far this year, another 30 versions have been released.

What it means for you
You can easily benefit from the regular delivery of SonarAnalyzers. Each release provides analyzer enhancements and new rules. And you don’t need to upgrade SonarQube to upgrade your analysis; as a rule, new releases of each analyzer are compatible with the latest LTS.
When you update a SonarAnalyzer, the static analysis engine is replaced and new rules are made available. But at this step, you’re not yet benefiting from those new rules. During the update of your SonarAnalyzer, the quality profile remains unchanged. The rules executed during the analysis are the same ones you previously configured in your quality profile.
This means that if you want to benefit from new rules, you must update your quality profile to activate them.
The evolution of DevOps and Performance Engineering has accelerated to an intersection. As a result, some questions have popped up on how they come together. Keep reading to learn more about this relationship.
Developers often believe that database performance and scalability issues they encounter are issues with the database itself and, therefore, must be fixed by their DBA. Based on what our users tell us, the real root cause, in most cases, is inefficient data access patterns coming directly from their own application code or by database access frameworks […]
The post Fixing SQL Server Plan Cache Bloat with Parameterized Queries appeared first on about:performance.
It's Velocity time, and the people who care about performance, Continuous Delivery, and DevOps are gathered in sunny Santa Clara, California. Thankful to be here, I want to share my notes with our readers who don’t have the chance to experience it live. Let’s dig right into it!

Interview with Steve Souders himself
In March of this year, we marked TestTrack’s 20th birthday. To celebrate this milestone birthday, we’re showcasing 20 of TestTrack’s “superpowers” in…
It’s no secret TestTrack has grown more powerful in the past 20 years. We’ve added more features, evolving TestTrack into a leading application lifecycle management (ALM) tool and a true Champion of Quality.
TestTrack: Champion of Quality is a fun, informative ebook that explores 20 of those features, such as email tracking, enhanced testing, item mapping rules, and more. We’ve even included a look at TestTrack’s newest muscular feature, Word export!
Inside, you’ll learn how to:
- Beat tasks into submission with task boards!
- Alert your team to danger with field value styles!
- Slice through your data with filters!
Do you know everything TestTrack is capable of? Download your free copy of TestTrack: Champion of Quality and see what you’ve been missing!
From time to time I find it helpful to mention where I am and how I got here. I have been pretty quiet since 2010 but I used to say a lot of stuff in public.
For the past year I have worked for Salesforce.org, formerly the Salesforce Foundation, the independent entity that administers the philanthropic programs of Salesforce.com. My team creates free open source software for the benefit of non-profit organizations. I create and maintain automated browser tests in Ruby, using Jeff "Cheezy" Morgan's page_object gem. I'm a big fan.
My job title is "Senior Member of the Technical Staff, Quality Assurance". I have no objection to the term "Quality Assurance"; it accurately describes the work I do. I am known for having said "QA Is Not Evil".
Before Salesforce.org I spent three years with the Wikimedia Foundation, working mostly with Željko Filipin on a similar browser test automation project, but a much larger one.
I worked for Socialtext, well known in some circles for excellent software testing. I worked for the well-known agile consultancy Thoughtworks for a year, just when the first version of Selenium was being released. I started my career testing life-critical software in US 911 telecom systems, both wired/landline and wireless/mobile.
I have been 100% remote/telecommuting since 2007. Currently I live in Arizona, USA.
I used to give talks at conferences, including talks at Agile2006, Agile2009, and Agile2013. I've been part of the agile movement since before the Manifesto existed. I attended most of the Google Test Automation Conferences held in the US. I have no plans to present at any open conferences in the future.
I wrote a lot about software test and dev, mostly around 2006-2010. You can read most of it at stickyminds and TechTarget, and a bit at PragProg.
I hosted two peer conferences in 2009 and 2010 in Durango Colorado called "Writing About Testing". They had some influence on the practice of software testing at the time, and still resonate from time to time today.
I create UI test automation that finds bugs. Before Selenium existed, I was user #1 for WATIR, Web Application Testing In Ruby. I am quoted in both volumes of Crispin and Gregory's Agile Testing, and I am a character in Marick's Everyday Scripting.
In my last blog post I wrote about the way in which moving to SCRUM teams fosters communication, transparency, and trust, both internally among team members, and externally with customers. Achieving open communication like this is one of the main goals of Agile, but just as important is the development of leadership within the SCRUM teams.
Ideally, every SCRUM team is self-managing in regards to their own work. The Product Owner determines what will get done; the tactical decisions about how it gets done should be left up to the team. There is a simple philosophy behind this: those whose work focuses on a specialized area of the product know better how to improve it, and how much work will be involved, than anyone from outside that group. The product owner within the team is there to advocate for the customer, and to decide when a minimum viable product is ready for release, but they don’t tell the team what to do or how to do it.
Open communication, transparency, and trust are essential for teams to become self-managing, because these are the foundational conditions that are necessary for the emergence of leaders. Leadership in SCRUM teams is not about titles, it’s about ideas. It’s about contributing to team communications, making decisions based on those communications, and then being able to execute. Because SCRUM leadership is based on an individual’s ability to listen, exercise judgement, and communicate, anyone can emerge as a leader, regardless of whether they have been doing software development for two years or twenty.
When I started at Sauce Labs, there were clear leaders in the Engineering organization, but their efforts were spread thin because they were the obvious leaders, and everyone turned to them for solutions and expertise. One of my top architects was the “official” owner of one major infrastructure component of our service. He was also the “unofficial” owner of a second service. In his “spare time” he had developed a customer facing app, so he was de facto owner of that. And, since he had knowledge about other components of the service, he was constantly interrupted with questions from junior developers. For us to move further down the road with our development goals, we not only needed a way to give these leaders focus to their work, but we needed to develop new leaders, and provide the junior members of our organization with opportunities for growth. This was one of the main reasons for implementing SCRUM; while one goal was to bring a more rationalized approach to our development efforts, which we could quantify to management, the larger qualitative goal was to create an environment that would foster innovation and the emergence of a new cadre of leadership.
Naturally, not everyone adapts to this kind of cultural change. Those who have worked in a Waterfall, or even a Fast Waterfall, methodology, are used to being handed instructions, executing on those instructions, and moving on to the next task. If this is the way you have been trained to do things, SCRUM can seem like chaos – where are the functional specs, the technical specs, how am I supposed to know what to do? When we implemented SCRUM at Sauce there were a lot of questions, some resistance, and even some defections. This is all to be expected. Some personalities work better as individuals than as members of a team, and some are more comfortable with self-direction than others. What’s important is that implementing SCRUM helped all of us learn where we are as individuals when it comes to our professional activities, what gives us satisfaction and purpose in our work, and what we are really like within our teams.
Joe Alfaro is VP of Engineering at Sauce Labs. This is the fourth post in a series dedicated to chronicling our journey to transform Sauce Labs from Engineering to DevOps. Read the first post here.
Don’t miss out on this fantastic offer: until June 30, 2016, you can save 30% on Ranorex Runtime Floating Licenses! This offer celebrates our much-requested and long-awaited feature, Ranorex Remote, which is available with our latest major software release, Ranorex 6.0.
A Ranorex Runtime Floating License enables you to run tests on additional physical or virtual machines. Now, Ranorex Remote takes remote test execution a step further. Using this new feature, you can:
- deploy tests to Ranorex Agents for remote test execution directly out of Ranorex Studio with just a few clicks. This makes it easier to simultaneously run multiple automated tests in different test environments and configurations.
- continue using your local machine during remote test execution, as remote testing won’t block your machine. You’ll receive an automatic notification once the report is ready.
- share Ranorex Agents with your team.
Remote test execution has never been this easy! All you need is a Ranorex Runtime Floating License to set up a Ranorex Agent and use Ranorex Remote. So don’t let this offer pass you by: order your Ranorex Runtime Floating License today!
“In the final keynote of the TestNet autumn event, speaker Rini van Solingen referred to the end of software testing as we know it. ‘What one can learn in merely four weeks, does not deserve to be called a profession’, he stated. But is that true? Most of our skills, we learn on the job. There are many tools, techniques, skills, hints and methods not typical for the testing profession but essential for enabling us to do a good job nonetheless. Furthermore the testing profession is constantly evolving as a result of ICT and business trends. Not only functional testing, but also performance, security or other test varieties. This presses us to expand our knowledge, not just the testing skills, but also of the contexts in which we do our jobs. The TestNet Spring Event 2016 is about all topics that are not addressed in our basic testing course, but enable us to do a better job: knowledge, skills, experience.”
I think that there are a lot of skills that are not addressed in our “basic testing course” where they should have been addressed. I am talking about basic testing skills! So I wrote an abstract for a keynote for the conference:
The theme for the spring event is “Strengthen your foundation: new skills for testers”. My story takes a step back: to the foundation! Because I think that the foundation of most testers is not as good as they think. The title would then be: “New skills for testers: back to basics!“
Professional testers are able to tell a successful story about their work. They can cite activities and come up with a thorough overview of the skills they use. They are able to explain what they do and why. They can report progress, risk, and coverage at any time. They will gladly explain which oracles and heuristics they use, know everything about the product they are testing, and deliberately try to learn continuously.
It surprises me that testers regularly can’t give a proper definition of testing, let alone describe what testing involves. A large majority of people who call themselves professional testers cannot explain what they do when they are testing. How can anyone take a tester seriously if he or she cannot explain what he or she is doing all day? Try it: go to one of your testing colleagues and ask what he or she is doing and why it contributes to the mission of the project. Nine out of ten testers I’ve asked this simple question start to stutter.
What exactly do you do when you use a “data combination test” or a “decision table”? What skills do you use? “Common sense” does not answer the question in this context, because it is not a skill, is it? I think of modeling, critical thinking, learning, combining, observing, reasoning, and drawing conclusions, just to name a few. Looking in detail at which skills you are actually using helps you recognize which skills you could, or should, train. A solid foundation is essential to build on in the future!
How can you learn the right skills if you do not know what skills you are using in the first place? In this presentation I will take the audience back to the core of our business: skills! By recognizing these skills and training them, we are able to think and talk about our profession with confidence. The ultimate goal is to tell a good story about why we test and the value it adds.
We need a solid foundation to build on!
My keynote wasn’t selected. So I submitted it as a normal session, since I am genuinely bothered by the lack of insight in our community. But it didn’t make it onto the conference program as a normal session either. Why? Because it is too controversial, they told me. After I applied for the keynote, the chairman called to tell me that they weren’t going to ask me to do a keynote because they didn't want a “negative” sound on stage. I guess I can imagine that you do not want to start the day with a keynote speaker who destroys your theme by saying that we need to strengthen our foundation first before moving on.
But why is this story too controversial for the conference at all? I guess it is (at least in the eyes of the program committee) because we don’t like to admit that we lack skills. That we don’t really know how to explain testing. I wrote about that before here. It bothers me that we think our foundation is good enough, while it really isn’t! We need to up our game and being nice and ignoring this problem isn’t going to help us. A soft and nice approach doesn’t wake people up. That is why I wanted to shake this up a bit. To wake people up and give them some serious feedback … I wrote about serious feedback before here. But the Dutch Testing Community (represented by TestNet) finds my ideas too controversial…
(*) TestNet is a network of, by and for testers. TestNet offers its members the opportunity to maintain contacts with other testers outside the immediate work environment and share knowledge and experiences from the field.
In Part 1 of this two-part series, I walked through some lessons learned from the first incarnation of our project. I’d still qualify the original project as a success, in that it was delivered on time, within budget, and is still under active development today. But we learned a lot of lessons from that project, and we were lucky enough to have another crack at it, so to speak, when we started a new project in almost exactly the same domain, but this time with quite different constraints.
In the first project, we targeted everyone who could possibly be involved with the overall process. This wound up being a dozen state agencies and countless other groups and sub-groups. That created quite a lot of contention in the model (and is a great example of why you can never have a single master data model for an entire enterprise). We felt good about the software itself – it was modular and easy to extend – but the domain model just couldn’t satisfy all the users involved, only a subset.
The second project targeted only a single aspect of the original overall legal process – the prosecution agency. Targeting just a single group, actually a single agency, brought tremendous benefits for us.

Lesson 6: Cohesiveness brings greater clarity and deeper insight
Our initial conversations in the second project were somewhat colored by our first project. We started with an assumption that the core focus, the core domain would be at least the same as the monolith, but maybe a different view of it. We were wrong.
In the new version of the app, the entire focus of the system revolves around “cases”. I know, crazy that an app built for the day-to-day functions of a prosecution agency focuses centrally on a case.
Once we settled on the core domain, the possibilities then greatly opened up for modeling around that concept. Because the first app only tangentially dealt with cases (there wasn’t even a “Case” in the original model), it was more or less an impedance mismatch for its users in the prosecution agency. It was a bit humbling to hear the feedback from the prosecutors about the first project.
But in the second project, because our core domain was focused, we could spend much more time modeling workflows and behaviors that fit what the prosecution agency actually needed.

Lesson 7: Be flexible where you need to, rigid in others
Although we were able to come to a consensus amongst prosecution agencies about what a case was, what the key things you could DO with a case were and the like, we couldn’t get any consensus about how a case should be managed.
This makes a lot of sense – the state has legal reporting requirements and the courts have a ton of procedural rules, but internal to an agency, they’re free to manage the work any way they wanted to.
In the first system, roles were baked into the system, causing a lot of confusion for counties where one person wore many different hats. In the new system, permissions were hard-coded against tasks, but not roles.
Permissions were modeled as an enum and tied to tasks like “Approve Case”, “Add Evidence”, and “Submit Disposition”. Those were directly tied to actions in our application, and you couldn’t add new permissions without modifying the code.
Roles (or groups, whatever) were not hardcoded, and left completely up to each agency how they liked to organize their work and decide who can do what.
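A minimal sketch of that split, hard-coded permissions but data-driven roles, might look like this (a Python stand-in for the actual system's code; the names are illustrative):

```python
from enum import Enum, auto

class Permission(Enum):
    """Hard-coded: adding a permission means changing the code,
    because each one is directly tied to an action in the app."""
    APPROVE_CASE = auto()
    ADD_EVIDENCE = auto()
    SUBMIT_DISPOSITION = auto()

class Role:
    """Roles are data, not code: each agency defines its own,
    bundling whatever permissions suit how it organizes its work."""
    def __init__(self, name: str, permissions: set):
        self.name = name
        self.permissions = set(permissions)

    def can(self, permission: Permission) -> bool:
        return permission in self.permissions

# A small county might give one person many hats:
clerk_and_paralegal = Role(
    "clerk/paralegal",
    {Permission.ADD_EVIDENCE, Permission.SUBMIT_DISPOSITION},
)
```

The enum stays rigid (the application only knows how to enforce those actions), while the role definitions live in configuration or the database, so a one-person county office and a large agency can both map people to permissions however they like.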
With DDD it’s important to model both the rigid and the flexible; they’re equally important in the overall model you build.

Lesson 8: Sometimes you need to invent a model
While we were able to model the actions one can perform on an individual case quite well, it was immediately apparent when visiting different county agencies that workflows varied significantly inside their departments.
This meant we couldn’t do things like implement a workflow internal to a case itself – everyone’s workflow was different. The only thing we could really embed were procedural/legal rules in our behaviors, but everything else was up for grabs. But we still wanted to manage workflows for everyone.
In this case, we needed to build consensus for a model that didn’t really exist in any single county in isolation. If we had focused on one county, we could have baked its rules about how a case is managed into that individual system. But since we were building a system across counties, we needed a model that satisfied all agencies.
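A configurable workflow of this kind might be sketched as follows (a Python illustration; the state names, actions, and roles are invented for the example, not taken from the actual system):

```python
class Workflow:
    """A configurable case workflow: the states, the transitions, and
    the roles allowed to perform each transition are all data, so each
    agency can define its own without any code changes."""

    def __init__(self, initial: str, transitions: dict):
        # transitions maps (from_state, action) -> (to_state, allowed_roles)
        self.state = initial
        self.transitions = transitions

    def perform(self, action: str, role: str) -> str:
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(f"{action!r} is not allowed from state {self.state!r}")
        to_state, allowed_roles = self.transitions[key]
        if role not in allowed_roles:
            raise PermissionError(f"role {role!r} may not perform {action!r}")
        self.state = to_state
        return self.state

# One county's configuration; another county can define a different one.
county_a = Workflow("intake", {
    ("intake", "screen"): ("screening", {"screener", "supervisor"}),
    ("screening", "file"): ("filed", {"prosecutor"}),
})
```

The application only knows how to execute a workflow definition; what the workflow actually looks like is left to each agency's configuration.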
In this model, we explicitly built a configurable workflow, with states, transitions, and security roles governing who could perform those transitions. While no individual county had this exact model, it was the meta-model we found by looking across all counties.

Lesson 9: Don’t blindly follow pattern advice
In the new app, I performed an experiment. I would only add tools, patterns, and libraries when the need presented itself but no sooner. This meant I didn’t add a repository, unit of work, services, really anything until an actual pain surfaced. Most of the DDD books these days have prescriptive guidance about what your domain model should look like, how you should do repositories and so on, but I wanted to see if I could simply arrive at these patterns by code smells and refactoring.
The funny thing is, I never did. We left out those patterns, and we never found a need to put them back in. Instead, we drove our usage around CQRS and the mediator pattern (something I’d used for years before finally extracting our internal usage into MediatR). Our controllers ended up pretty uniform in their appearance.
And the handlers themselves (as I’ve blogged about many times) were tightly focused on a single action, with no need to abstract anything.
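As a rough illustration of the shape this takes, here is a minimal mediator sketch. The real project used MediatR in C#; this is a Python stand-in, and the request and handler names are invented for the example:

```python
class Mediator:
    """Routes each request object to the single handler registered
    for its type, so controllers never reference handlers directly."""

    def __init__(self):
        self._handlers = {}

    def register(self, request_type, handler):
        self._handlers[request_type] = handler

    def send(self, request):
        return self._handlers[type(request)](request)

class GetCaseQuery:
    """A request is just a plain object carrying the inputs."""
    def __init__(self, case_id: int):
        self.case_id = case_id

def handle_get_case(query: GetCaseQuery) -> dict:
    # Tightly focused on one action; a stand-in for the real DB lookup.
    return {"id": query.case_id, "status": "open"}

mediator = Mediator()
mediator.register(GetCaseQuery, handle_get_case)
```

A controller then collapses to a one-liner along the lines of `mediator.send(GetCaseQuery(case_id))`, which is why they all end up looking uniform: the interesting logic lives in the handlers.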
I’ve extended this to other areas of development too, like front-end development. It’s actually kinda crazy how far you can get without jQuery these days, if you just use lodash and the DOM.

Lesson 10: Microservices and anti-corruption layers are your friend
There is a downside to moving to bounded contexts and away from the “majestic monolith”, and that’s integration. Now that we have an application dealing solely with one agency, we have to communicate between different applications.
This turned out to be a bit easier than we thought, however. This domain existed well before computers, so the interfaces between the prosecution and external parties/agencies/systems were very well established.
This was also the most-skipped section of the book – the one on anti-corruption layers and bounded contexts. We had to crack open that section, dust it off, smell the smell of pages never before read, and figure out how we should tackle integration.
We have quite a bit of experience in this area, it turns out, so it was really just a matter of deciding, for each third party, what kind of integration would work best.
For some 3rd parties, we could create an entirely separate app with no integration. Some needed a special app that performed the translation and anti-corruption layer, and some needed an entirely separately deployed app that communicated to our system via hypermedia-rich REST APIs.
Regardless, we never felt we had to build a single solution for all involved. We instead picked the right integration for the job, with an eye toward not reinventing things as we went.

Conclusion
In the end, I’d say both systems were successful, since they shipped and are both being used and extended to this day. With the more tightly focused domain in the second system, we were able to achieve that “greater insight” the DDD book talks about.
In case anyone wonders, I intentionally did not talk about actors or event sourcing in this series – both things we’ve done and shipped, but found the applicability to be limited to inside a bounded context (or even more typically, a corner of a bounded context). Another post for another day!
We love open source. As advocates and contributors, we benefit from community participation and return the love via Open Sauce – free access to the Sauce Labs testing platform for open source projects. We’ve recently made some improvements to Open Sauce that enhance the UI and make sharing results easier. Here’s the rundown:
- Simplified Badge Sharing Flow. Open Sauce users will see a new badge directly on their dashboards notifying them of the status of their latest build. Clicking on the badge will reveal a new window with inline links.
- New Build Status Badges. We’ve developed two new badges that users can embed on their GitHub pages, or anywhere else the information is needed, to share the status of their latest builds.
- New Build Matrix. In addition to the badges, we’ve revised our existing browser matrix to allow our users to quickly see which browser/OS combinations the build ran against and which platforms experienced a failure.
- Redesigned Public Open Sauce Profiles. Clicking on one of the status badges or build matrices will automatically redirect to a branded Open Sauce UI that allows anyone to see recent jobs performed by the Open Sauce user without needing to log in.
Finally, the UI changes are only targeted at our Open Sauce users and won’t be seen by users on any of our other plans. As before, users with private builds (any of our paid plans) can choose to use the new badges by adding an HMAC token.
If you have any questions, please drop a note to email@example.com.
The Sauce Labs Team