As part of the development process of SonarLint for Visual Studio, we regularly check a couple of open source projects, such as Roslyn, to filter out false positives and to validate our rule implementations. In this post we’ll highlight a couple of issues found recently in the Roslyn project.
Short-circuit logic should be used to prevent null pointer dereferences in conditionals (S1697)
This rule recognizes a few very specific patterns in your code. We don’t expect any false positives from it, so whenever it reports an issue, we know that it found a bug. Check it out for yourself; here is the link to the problem line.
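The pattern in question looks roughly like this (a minimal sketch with illustrative names, not Roslyn’s actual code):

```csharp
using System;

class Block
{
    public int StatementCount;
}

static class Demo
{
    // Bug pattern: when body is null, '&&' still evaluates the second
    // operand, which dereferences body and throws NullReferenceException.
    public static bool IsEmptyBuggy(Block body) =>
        body == null && body.StatementCount == 0;

    // Short-circuiting '||' never evaluates the second operand
    // when body is null.
    public static bool IsEmptyFixed(Block body) =>
        body == null || body.StatementCount == 0;

    static void Main()
    {
        Console.WriteLine(Demo.IsEmptyFixed(null)); // True, no exception

        try { Demo.IsEmptyBuggy(null); }
        catch (NullReferenceException) { Console.WriteLine("NullReferenceException"); }
    }
}
```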
When body is null, the second part of the condition will be evaluated and throw a NullReferenceException. You might think that the body of a method can’t be null, but even in syntactically correct code it is possible. For example, method declarations in interfaces, abstract or partial methods, and expression-bodied methods or properties all have null bodies. So why hasn’t this bug shown up yet? This code is only called in one place, on a method declaration with a body.
The ternary operator should not return the same value regardless of the condition (S2758)
We’re not sure if this issue is a bug or just the result of some refactoring, but it is certainly confusing. Why would you check isStartToken if you don’t care about its content?
“IDisposable” should be disposed (S2930)
Lately we’ve spent some effort on removing false positives from this rule. For example, we no longer report on MemoryStream uses, even though it is an IDisposable. SonarLint only reports on resources that really should be closed, which gives us high confidence in this rule. Three issues (, , ) were found on the Roslyn project, where a FileStream, a TcpClient, and a TcpListener are not being disposed.
Method overloads with default parameter values should not overlap (S3427)
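As a small, hypothetical illustration of this rule (these are not Roslyn’s actual signatures), a one-argument overload can make a default parameter value unreachable except through a named argument:

```csharp
using System;

class SyntaxNodeExample
{
    // With one positional argument, overload resolution prefers this
    // overload, because it needs no default value filled in.
    public string IsEquivalentTo(SyntaxNodeExample other) =>
        "one-arg overload";

    // The default for topLevel is only reachable via a named argument.
    public string IsEquivalentTo(SyntaxNodeExample node, bool topLevel = false) =>
        "two-arg overload, topLevel=" + topLevel;

    static void Main()
    {
        var n = new SyntaxNodeExample();
        Console.WriteLine(n.IsEquivalentTo(n));       // one-arg overload
        Console.WriteLine(n.IsEquivalentTo(node: n)); // two-arg overload, topLevel=False
    }
}
```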
Mixing method overloads and default parameter values can result in cases where the default parameter value can’t be used at all, or can only be used in conjunction with named arguments. These three cases (, , ) fall into the former category: the default parameter values can’t be used at all, so it is perfectly safe to remove them. In each case, whenever only the first two arguments are supplied, another constructor is called. Additionally, in this special case, if you call the method like IsEquivalentTo(node: myNode), then the default parameter value is used, but if you use IsEquivalentTo(myNode), then another overload is called. Confusing, isn’t it?
Flags enumerations should explicitly initialize all their values (S2345)
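As a quick, hypothetical illustration (not Roslyn’s actual enum): without explicit values, the fourth member of a [Flags] enum silently becomes the combination of the previous two.

```csharp
using System;

[Flags]
enum ModuleSource
{
    // Implicit values would be 0, 1, 2, 3; since 3 == 1 | 2, the fourth
    // member would silently equal the combination of the previous two.
    FromSourceModule = 1,
    FromAddedModule = 2,
    FromReferencedAssembly = 4, // explicit, so it stays a distinct flag
}

static class FlagsDemo
{
    static void Main()
    {
        var combined = ModuleSource.FromSourceModule | ModuleSource.FromAddedModule;
        // With explicit values, the combination is no longer mistaken
        // for FromReferencedAssembly.
        Console.WriteLine(combined == ModuleSource.FromReferencedAssembly); // False
    }
}
```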
It is good practice to explicitly set a value for your [Flags] enums. It’s not strictly necessary, and your code might function correctly without it, but better safe than sorry. If the enum has only three members, then the automatic 0, 1, 2 field initialization works correctly, but when you have more members, you most probably don’t want to use the default values. For example, here FromReferencedAssembly == FromSourceModule | FromAddedModule. Is this the desired setup? If so, why not add it explicitly to avoid confusion?
“async” methods should not return “void” (S3168)
As you probably know, async void methods should only be used in a very limited number of scenarios. The reason is that you can’t await async void method calls. Basically, these are fire-and-forget methods, such as event handlers. So what happens when a test method is marked async void? Well, it depends on your test execution framework. For example, NUnit 2.6.3 handles them correctly, but the newer NUnit 3.0 dropped support. Roslyn uses xUnit 2.1.0 at the moment, which does support running async void test methods, so there is no real issue with them right now. But changing the return type to Task would probably be advisable. To sum up, double-check your async void methods; they might or might not work as you expect. Here are two occurrences from Roslyn (, ).
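A minimal sketch of the difference (illustrative method names; the xUnit [Fact] attribute is omitted so the snippet stands alone):

```csharp
using System;
using System.Threading.Tasks;

static class AsyncDemo
{
    // Fire-and-forget: callers get nothing to await, so a test runner
    // can neither wait for completion nor observe exceptions reliably.
    public static async void RunVoid() => await Task.Delay(10);

    // Returning Task lets the framework (or any caller) await completion.
    public static async Task RunTask() => await Task.Delay(10);

    static async Task Main()
    {
        RunVoid();       // nothing to await here
        await RunTask(); // completion is observable
        Console.WriteLine("done");
    }
}
```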
Additionally, here are some other confusing pieces of code that are marked by SonarLint. Rule S2275 (Format strings should be passed the correct number of arguments) triggers on this call, where the formatting arguments 10 and 100 are not used, because there are no placeholders for them in the format string. Finally, here are three cases (, , ) where values are bitwise OR-ed (|) with 0 (Rule S2437).
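Both issues are easy to reproduce in isolation (the values below are illustrative, not the Roslyn code):

```csharp
using System;

static class MiscDemo
{
    static void Main()
    {
        // S2275: the format string has a single placeholder, so the
        // extra argument (100) is silently ignored at runtime.
        Console.WriteLine(string.Format("value: {0}", 10, 100)); // value: 10

        // S2437: OR-ing with 0 is a no-op, so the 0 operand is dead code.
        int flags = 0 | 4; // identical to just 4
        Console.WriteLine(flags); // 4
    }
}
```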
We sincerely hope you already use SonarLint daily to catch issues early. If not, you can download SonarLint from the Visual Studio Extension Gallery or install it directly from Visual Studio (Tools/Extensions and Updates). SonarLint is free and already trusted by thousands of developers, so start using it today!
Selenium Conf India is happening this June 24-26 in Bangalore, India.
Tickets, call for speakers, and sponsorship slots are now available!
The team is proud to announce the release of 5.3, another paradigm-shifting version, with the addition of significant new features and the return of popular functionality that didn’t make it into 5.2:
- New Project Space which puts the focus on the Quality Gate and the Leak Period
- User tokens for authenticated analysis without passwords
- New web services to facilitate a build breaker strategy
- Cross-project duplication is back!
The most striking change in this version is the replacement of the default project dashboard with a new, fixed Project space highlighting the top four data domains: technical debt, coverage, duplications, and structure (which includes both size and complexity):
Because managing technical debt introduced during the Leak Period is so crucial, this streamlined new project home page keeps the leak period (the first differential period, which is now overridable at the project level) at the forefront. Both current and differential values are shown both textually and graphically:
Each of the four domains offers a detailed sub-page, available either through the “more” links on the Project Space or the relevant project menu items:
Technical Debt:
Each domain page offers the same combination of current values (in blue, with clickthroughs) and leak period changes (yellow background) found on the main page, along with detailed numeric and graphical presentations designed to help you quickly zero in on the worst offenders in your projects.
SonarSource feels so strongly about the value of the new Project Space and the domain pages that none of them are configurable. But your old dashboards are still available under the “Dashboards” menu item.
User tokens for authenticated analysis without passwords
In version 5.2, we cut the last ties between analysis and the database. Now an analysis report is submitted to the server and all database updates take place server-side. In 5.3 we take the next step down the road of enhanced analysis security with the introduction of authentication tokens.
Now an administrator can create authentication tokens for any user.
Tokens may be used for analysis and with web services. Simply pass the token as the login and leave the password blank.
The list of user token names (but not values!) is easily visible, and existing tokens can be revoked at any time:
Users can’t generate their own tokens yet, but that’s coming soon.
New web services to facilitate a build breaker strategy
In implementing a Continuous Inspection strategy, many people use Continuous Integration servers, such as Jenkins, to execute their SonarQube scans, and want to mark as broken any run that includes new code that fails the Quality Gate. Because of time constraints, the old hooks for that were removed in 5.2 and not replaced. In 5.3 we made it a priority to close this gap, so the functionality is now available to allow you to implement a build breaker strategy.
When the client-side scanner is done, it writes out a data file with the URL to call for the server-side processing status. Once the processing is successful, you can use the analysis id to get the quality gate status.
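A rough build-breaker sketch of that flow, assuming the 5.3-era report-task.txt file with its ceTaskUrl entry and the api/qualitygates/project_status web service; the server URL and the naive string matching below are placeholders, and real code should use a proper JSON parser:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading;

class BuildBreaker
{
    static int Main()
    {
        // 1. The scanner wrote report-task.txt; ceTaskUrl points at the
        //    server-side processing status for this analysis.
        string ceTaskUrl = File.ReadLines(".sonar/report-task.txt")
            .First(l => l.StartsWith("ceTaskUrl="))
            .Substring("ceTaskUrl=".Length);

        using var http = new HttpClient();

        // 2. Poll until the background task leaves the queue.
        string task;
        do
        {
            Thread.Sleep(2000);
            task = http.GetStringAsync(ceTaskUrl).Result;
        } while (task.Contains("PENDING") || task.Contains("IN_PROGRESS"));

        // 3. Use the analysis id to fetch the quality gate status, and
        //    break the build (non-zero exit code) when the gate is red.
        string analysisId = Between(task, "\"analysisId\":\"", "\"");
        string gate = http.GetStringAsync(
            "http://localhost:9000/api/qualitygates/project_status?analysisId="
            + analysisId).Result;
        return gate.Contains("\"status\":\"ERROR\"") ? 1 : 0;
    }

    public static string Between(string s, string start, string end)
    {
        int i = s.IndexOf(start) + start.Length;
        return s.Substring(i, s.IndexOf(end, i) - i);
    }
}
```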
Also under the heading of returning favorites is cross-project duplication. The changes in 5.2 required serious API updates, which in turn required a rewrite of cross-project duplication detection, another priority in 5.3.
Notably, 5.3 only provides cross-project duplication detection, not the detection of duplications across modules within a project, which is planned for 5.4.
That’s All, Folks!
On December 15, the Toulouse JAM was co-hosted with the Toulouse JUG and Toulouse DevOps. Co-hosting with the JUG made sense since Jenkins is written in Java and makes use of Groovy code in many places (system groovy script, job dsl, workflow...), and co-organizing with the local DevOps community also made sense since Jenkins is a great tool to enable Continuous Integration, Continuous Delivery, and automation in general. There were 103 RSVPs, with 80 to 90 people in attendance.
There were 3 talks planned for the evening:
- Job DSL Intro [fr], by Ghislain Mahieux
- Workflow plugin [fr], by Michaël Pailloncy (co-maintainer of the Build Trigger Badge plugin)
- Feedback on almost 10 years of CI and what's upcoming [fr], demo with Jenkins build scaling with Docker Swarm, by Baptiste Mathus
Note: the presentations were recorded (in French). They are still being processed, and once they are posted we will update this blog.
In our last update we mentioned there would be 2 Selenium Confs in 2016 — one in India, another somewhere else (TBD).
Well, we are pleased to announce the official dates and location for Selenium Conf India!
When: June 24th & 25th, 2016
Where: Bangalore, India (at The Chancery Pavilion Hotel)
Mark your calendars! We’ll have more details as they become available (e.g., call for speakers, ticket sales, etc.). To get the latest updates, be sure to sign up for the Selenium Conf mailing list.
Software projects often publish comparisons with other projects, with which they compete. These comparisons typically have a few characteristics in common:
- They aim at highlighting reasons why one project is superior – that is, they are marketing material.
- While they may be accurate when initially published, competitor information is rarely updated.
- Pure factual information is mixed with opinion, sometimes in a way that doesn’t make clear which is which.
- Competitors don’t get much say in what is said about their projects.
- Users can’t be sure how much to trust such comparisons.
Of course, we’re used to it. We no longer expect the pure, unvarnished truth from software companies – no more than from drug companies, insurance companies, car salesmen or government agencies. We’re cynical.
But one might at least hope that open source projects might do better. It’s in all our interests, and in our users’ interests, to have accurate, up-to-date, unbiased feature comparisons.
So, what would such a comparison look like?
- It should have accurate, up-to-date information about each project.
- That information should be purely factual, to the extent possible. Where necessary, opinions can be expressed only if clearly identified as opinion by their content and placement.
- Developers from each project should be responsible for updating their own features.
- Developers from each project should be accountable for any misstatements that slip in.
I think this can work because most of us in the open source world are committed to… openness. We generally value accuracy and we try to separate fact from opinion. Of course, it’s always easy to confuse one’s own strongly held beliefs with fact, but in most groups where I participate, I see such situations dealt with quite easily and with civility. Open source folks are, in fact, generally quite civil.
So, to carry this out, I’m announcing the .NET Test Framework Feature Comparison project – ideas for better names and an acronym are welcome. I’ll provide at least a temporary home for it and set up an initial format for discussion. We’ll start with MbUnit and NUnit, but I’d like to add other frameworks to the mix as soon as volunteers are available. If you are part of a .NET test framework project and want to participate, please drop me a line.
- Webinar: Solve Performance Bottlenecks and Function Problems In Your – February 22, 2012
- Source Test Workshop for Developers, Testers, IT Ops - Learn how the Open Source Test Tools Make Test Development and Operation Easy – February 23, 2012
- Source Test Workshop for CIOs, CTOs, Business Managers - Learn how to bring Open Source Test tools and methodology into your organization – March 21, 2012
- soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops – March 22, 2012
- Open Source Performance Test Workshop for CIOs, CTOs, Business Managers - Load and performance testing without hassle and cost – March 28, 2012
- Open Source Performance Test Workshop for Developers, Testers, IT Managers - The PushToTest Calibration Test Methodology explained – March 29, 2012
- Selenium, soapUI, Sahi, TestMaker Performance Testing In Your – April 17, 2012
- Open Source Performance Test Workshop for Developers, Testers, IT – April 18, 2012
- Source Test Workshop for CIOs, CTOs, Business Managers – May 2, 2012
- soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops – May 3, 2012
The Selenium Tutorial for Beginners has the following chapters:
- Selenium Tutorial 1: Write Your First Functional Selenium Test
- Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
- Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
- Selenium Tutorial 4: Install and Configure Selenium RC, Grid
- Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
- Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
- Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
- Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
- Selenium Tutorial 9: Debugging Selenium Tests
- Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
- Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
- Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
- Selenium Tutorial 13: Alternative To Selenium
I wrote a Selenium tutorial for beginners to make it easy to get started and take advantage of the advanced topics. Download TestMaker Community to get the Selenium tutorial for beginners and immediately build and run your first Selenium tests. It is entirely open source and free!
Distributing the work of performance testing through an Agile epic, story, and sprints reduces the overall testing effort and informs the organization's business managers about the service's performance. The biggest problem I see is keeping the testing transparent so that anyone - tester, developer, IT Ops, business manager, architect - can follow a requirement down to the actual test results.
With the right tools, methodology, and coaching an organization gets the following:
- Process identification and re-engineering for Test Driven Development
- Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
- Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
- Construction of the organization's end-to-end tests with a team of PushToTest Global Professional Services, using this system and training of the existing organization's testers, Subject Matter Experts, and Developers to build and operate tests
- On-going technical support
The key to high quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization the coaches accomplish the following:
- Test architects and test developers work with the existing team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test
- Technical coaching on test automation to ensure that team members follow defined
Agile, Test Management, and Roles in SOA
The Agile software development process normally focuses first on functional testing - smoke tests, regression tests, and integration tests. In Agile applied to SOA service development, the deliverables support the overall vision and business model for the new software. At a minimum we should expect:
- Product Owner defines User Stories
- Test Developer defines Test Cases
- Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
- Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated with the test management service
- Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
- As tests execute, the test management system creates "Test Execution Records" showing the test results
- To what extent will large organizations dump legacy test tools for open source test tools?
- How big would the market for private cloud software platforms be?
- Does mankind have the tools to make a reliable success of the complicated world we built?
- How big of a market will SOA testing and development be?
- What are the best ways to migrate from HP to Selenium?
The Scalability Argument for Service Enabling Your Applications. I make the case for building, deploying, and testing SOA services effectively. I point out that the weakness of this approach comes at the tool and platform level. For example, it can take 37% of an application's code simply to deploy your service.
How PushToTest Uses Agile Software Development Methodology To Build TestMaker. A conversation I had with Todd Bradfute, our lead sales engineer, on surfacing the results of using Agile methodology to build software applications.
"Selenium eclipsed HP’s QTP on job posting aggregation site Indeed.com to become the number one requisite job experience/skill for on-line posted automated QA jobs (2700+ vs ~2500 as of this writing)," John Dunham, CEO at Sauce Labs, noted.
Run Private Clouds For Cost Savings and Control. Instead of running 400 Amazon EC2 machine instances, Plinga uses Eucalyptus to run its own cloud. Plinga needed the control, reliability, and cost-savings of running its own private cloud, Marten Mickos, CEO at Eucalyptus, reports in his blog.
How To Evaluate Highly Scalable SOA Component Architecture. I show how to evaluate highly scalable SOA component architecture. This is ideal for CIOs, CTOs, Development and Test Executives, and IT managers.
Planning A TestMaker Installation. TestMaker features test orchestration capabilities to run Selenium, Sahi, soapUI, and unit tests written in Java, Ruby, Python, PHP, and other languages in a Grid and Cloud environment. I write about the issues you may encounter installing the TestMaker platform.
Repurposing ThoughtWorks Twist Scripts As Load and Performance Tests. I really like ThoughtWorks Twist for building functional tests in an Agile process. This blog and screencast show how to rapidly find performance bottlenecks in your Web application using ThoughtWorks Twist with the PushToTest TestMaker Enterprise test automation framework.
4 Steps To Getting Started With The Open Source Test Engagement Model. I describe the problems you need to solve as a manager to get started with Open Source Testing in your organization.
Correlation Technology Finds The Root Cause Of Performance Bottlenecks. Use aspect-oriented programming (AOP) technology to surface memory leaks, thread deadlocks, and slow database queries in your Java Enterprise applications.
10 Agile Ways To Build and Test Rich Internet Applications (RIA). Shows how competing RIA technologies put the emphasis on test and deploy.
Oracle Forms Application Testing. Java Applet technology powers Oracle Forms and many Web applications. This blog shows how to install and use open source tools to test Oracle Forms applications.
Saving Your Organization From The Eventual Testing Meltdown of Using Record/Playback Solely. The Selenium project is caught between the world of proprietary test tool vendors and the software developer community. This blog talks about the tipping-point.
Choosing Java Frameworks for Performance. A round-up of opinions on which technologies are best for building applications: lightweight and responsive, RIA, with high developer productivity.
Selenium 2: Using The API To Create Tests. A DZone Refcard we sponsored to explain how to build tests of Web applications using the new Selenium 2 APIs. For Selenium 1 I wrote another Refcard; click here.
Test Management Tools. A discussion I had with the Zephyr test management team on Agile testing.
Migrating From HP Mercury QTP To PushToTest TestMaker 6. HP QTP just can't deal with the thousands of new Web objects coming from Ajax-based applications. This blog and screencast show how to migrate.
10 Tutorials To Learn TestMaker 6. TestMaker 6 is an easier way to surface performance bottlenecks and functional issues in Web, Rich Internet Application (RIA, using Ajax, Flex, Flash), Service Oriented Architecture (SOA), and Business Process Management (BPM) applications.
5 Easy Ways To Build Data-Driven Selenium, soapUI, Sahi Tests. This is an article on using the TestMaker Data Production Library (DPL) system as a simple and easy way to data-enable tests. A DPL does not require programming or scripting.
Open Source Testing (OST) Is The Solution To Modern Complexity. Thanks to management oversight, negligence, and greed, British Petroleum (BP) killed 11 people, injured 17 people, and dumped 4,900,000 barrels of oil into the Gulf of Mexico in 2010. David Brooks of the New York Times became an unlikely apologist for the disaster, citing the complexity of the oil drilling system.
Choosing automated software testing tools: Open source vs. proprietary. Colleen Fry's article from 2010 discusses how software testers decide which type of automated testing tool, or which combination of open source and proprietary tools, best meets their needs. We came a long way in 2011 toward achieving these goals.
All of my blogs are found here.
Your organization may have adopted Agile Software Development Methodology and forgotten about load and performance testing! In my experience this is pretty common. Between Scrum meetings, burn-down sessions, sprints, test-first, and user stories, many forms of testing - including load and performance testing, stress testing, and integration testing - can get lost. And it is normally not only your fault. Consider the following:
- The legacy proprietary test tools - HP LoadRunner, HP QTP, IBM Rational Tester, Microsoft VSTS - are hugely expensive. Organizations can't afford to equip developers and testers with their own licensed copies. These tool licenses are contrary to Agile testing, where developers and testers work side-by-side building and testing
- Many testers still cannot write test code. Agile developers write unit tests in high-level languages (Java, C#, PHP, Ruby). Testers need a code-less way to repurpose these tests into functional tests, load and performance tests, and production service monitors.
- Business managers need a code-less way to define the software release requirements criteria. Agile developers see Test Management tools (like HP Quality Center QC) as a needless extra burden on their software development effort. Agile developers are hugely attracted to Continuous Integration (CI) tools like Hudson, Jenkins, Cruise Control, and Bamboo. Business managers need an integrated CI and test platform to define requirements and see how close to 'shipping' is their
Registration is free! Click here to learn more and register now:
- Writing Load Test Scripts
- Building Functional Tests for Smoke and Regression Testing
- Trying to use Selenium IDE and needing a good tutorial
- Configuring test management tools working with TestMaker, Sahi, and soapUI
- Needing To Compare Selenium Vs HP QuickTest Pro (QTP)
- Stuck While Doing Cloud Computing Testing
- Need Help Getting Starting with Load Testing Tools
Here Is What We Have For You
Bring your best questions, issues, and bug reports on installing, configuring, and using PushToTest TestMaker to our free weekly Workshop via live Webinar. PushToTest experts will be available to answer your questions.
Frank Cohen, CEO and Founder at PushToTest, and members of the PushToTest technical team will answer your questions, show you where to find solutions, and take your feedback for feature enhancements and bug reports.
Every Thursday at 1 pm Pacific time (GMT-8)
At the Webinar:
- Register for the Webinar in advance
- Log in to the Webinar at the given day and time
- Each person that logs in will have a turn to ask their question and hear our response
- You may optionally share/show your desktop for the organizers to see what is going wrong and offer a solution
- The organizers will hear as many questions as will fit in 1 hour. No guarantee that everyone will be served.