If you’ve already taken a look at SonarQube 4.4, the title of this post wasn’t any news to you. The new version introduces two major changes to the way SonarQube presents data: the new rules space and the changes to the source viewer.
If you’ve been keeping up version to version, you’ve noticed new styling creeping into the design. We formed a Web team this year to focus on transforming SonarQube’s interface into something as sexy as the underlying functionality, and the team is starting to hit its stride.
The new rules space is a reimagining of how to interact with rules. Previously, they were accessed only within the context of their inclusion (or not) in a single profile. Want to know if a given rule is present in multiple profiles? Previously, you had to hunker down because it could take a while.
Now rules have their own independent presentation, with multi-axis search.
All the search criteria from the old interface are still available, and several new ones have been added. The rule tags introduced in SonarQube 4.2 become searchable in 4.4, as do SQALE characteristics. And for most criteria you can search for multiple values. For example, it’s now easy to find rules in both “MyFirstProfile” and “MySecondProfile” simply by checking them both off in the profile dropdown.
At the bottom of the rule listing, you’ll see all the profiles it’s included in, along with the severity and any parameters for the profile. If you’re an administrator, you’ll have controls here to change a rule in its current profiles and to add it to new profiles. The search results pane on the left also features bulk change operations for administrators, allowing them to toggle activation in a profile for all the rules in the search results.
It’s also easy now to find clone-able rules such as XPath and Architectural Constraint in Java; they’re called “templates” starting in 4.4, and they get their own search criterion.
I shouldn’t forget to mention the second tier below the search criteria. It represents the categories the search results span: languages, repositories, and tags. The search results can be further filtered by clicking on the entries there. (A second click deselects and unfilters.) For instance, here’s the default search filtered to show C++ rules that have been tagged for MISRA C++:
The point of this radical overhaul is to give you, the user, a better way to explore rules; to see what rules are available, which rules are used where, and which rules you might want to turn on or ask your administrator to turn on.
One interesting aspect of this is the new ability to explore rule activation across languages. For rules that are implemented directly within a plugin, as opposed to coming from third-party tools like FxCop or FindBugs, you’ll see that when the same rule is implemented in multiple languages, it usually has the same key (there are a few historical exceptions).
So, for example, now you can easily see whether the same standards are being enforced across all languages in your organization.
The new rules space is just one piece of our new attitude toward data. Next time I’ll talk about the complete rework of the component viewer. It’s a reimagining that’s just as radical as this one.
My apologies for the last-minute announcement, but there will be a Jenkins user meet-up in Paris on September 10th at 7:00pm, which is just next week. The event is hosted by Zenika. You'll hear from Gregory Boissinot and Adrien Lecharpentier about plugin development, and I'll be talking about workflow.
It's been a while since we've had a meet-up in Paris. Looking forward to seeing as many of you as possible. The event is free, but please RSVP so that we know what to expect.
JUC SF on October 23, 2014 is shaping up to be bigger and better this year.
Here’s what we have in store for you!

Three Tracks
We’ve received a record high of 40 stellar proposals this year. To accommodate the many community proposals, we’ve decided to add a third track to the agenda. JUC SF sessions are now available for you to view. We have speakers from Google, Target, Gap, Cloudera, Ebay, Chicago Drilling Company, and many more. Register now for the early bird price, which is only good until September 21, 2014.

Live Stream
Have a beer while learning how to write a Jenkins plugin. Steve Christou, a Jenkins support engineer, will lead this lecture from 3:30pm to 6:00pm. He will teach everything from how to get started, to techniques like writing a new CLI command, to writing your own builder.

Ask the Experts
Meet the Jenkins creator, committers, support engineers, and developers. We have dedicated time slots for our attendees to get 1-on-1 access to our experts. Exact times are TBD. Ask them anything, from plugins and configuration to technical support and bug fixes.
Our current experts are:
- Andrew Bayer
- Gareth Bowles
- Steve Christou
- Jesse Glick
- Kohsuke Kawaguchi
- Dean Yu
Want to join our panel of experts? Contact Alyssa Tong: email@example.com

Exhibit Mixer
Sixteen technology sponsors will be showcasing their newest technologies during the exhibition hour from 2:25 – 3:30pm. Grab a beer, visit with sponsors and see how they are using Jenkins.
This is just a taste of what you’ll see at JUC SF. We look forward to seeing you there!!
Jesse and I will walk through the source code of the workflow plugin, highlight key abstractions and extension points, and discuss how they are put together.
If you are interested in developing or retrofitting plugins to work with workflows, I think you'll find this session interesting.
(This is a guest post from Michael Neale)
Recently at the Docker Conference (DockerCon) the Docker Hub was announced.
The hub (which includes their image building and storage service) also provides some "official" images (sometimes they call them repositories - they are really just sets of images).
So after talking with all sorts of people, we decided to create an official Jenkins image, which is hosted on the Docker Hub simply as "jenkins".
So when you run "docker pull jenkins", it will grab this image. It is based on the current LTS (and will be kept up to date with the LTS) but does not include the weekly releases (yet). Having a fairly basic Jenkins image (it includes enough to run some basic builds, as well as Jenkins itself), built on the LTS of Jenkins and the latest LTS of Ubuntu, seemed quite convenient, and it is easy to maintain using the official Ubuntu/Debian packaging of Jenkins.
Docker is a great way to try and use server-based systems: it brings all the dependencies needed, and the images really are portable (i.e. anywhere Docker runs, you can run Docker images). There are official images for many popular server platforms (redis, mysql, all the Linux distros, and so on), so it seemed crazy not to include Jenkins in this list. "docker run -p 8080:8080 jenkins" is all you need to get going with LTS Jenkins now. You can also use "docker run jenkins:1.554" to get the latest of that lineage of LTS releases, or pick a specific one: "docker run jenkins:1.554.3" if you like. Leaving off a version assumes the latest. Check the tags page to see what is available.
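Putting the commands above together, a minimal session might look like this (a sketch, assuming Docker is installed locally and the "jenkins" image is available on the Docker Hub):

```shell
# Pull the official image (tracks the current LTS release)
docker pull jenkins

# Run it, mapping the web UI port 8080 from the container to the host;
# Jenkins is then reachable at http://localhost:8080
docker run -p 8080:8080 jenkins

# Pin to an LTS lineage (latest 1.554.x) or to an exact release instead
docker run -p 8080:8080 jenkins:1.554
docker run -p 8080:8080 jenkins:1.554.3
```

Omitting the tag, as in the first two commands, is equivalent to asking for the latest version.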
You can read more and see how you can use it here.
There have been some questions and discussions on how to make use of Jenkins with the Docker Hub for creating new and interesting Docker-image-based workflows for deployment. In fact, Jenkins featured in one of the first slides of the first keynote of DockerCon. To make this dream a reality, some additional plugins had to be created, but this opens up the possibility of combining the Docker Hub (builds, stores images) and Jenkins (workflow, testing, deployment) to build out some kind of continuous pipeline for handling Docker-based apps. I attempted to describe this in more detail here.
It will be interesting to watch this grow and change.
I'll talk about my recent Chef/Puppet integration work in Jenkins. Sven from Perforce will talk about how to leverage Perforce features from Jenkins, and then James Nord will talk about workflow. It will be a worthy two hours.
If the lineup of talks is not enough to sway you, you should also know that I will bring some Jenkins give-aways!
I'm not sure how many people to expect, but there's a cap at 80 people, so if you are thinking about coming, be sure to RSVP. Looking forward to seeing many of you there!
Finally, if you are in London, the usual suspects (CloudBees, PuppetLabs, XebiaLabs, MidVision, SOASTA, et al) are doing a free event titled "How To Accelerate Innovation with Continuous Delivery" that you might also be interested in.
The team is proud to announce the release of SonarQube 4.4, which includes many exciting new features:
- Rules page
- Component viewer
- New Quality Gate widget
- Improved multi-language support
- Built-in web service API documentation
With this version of SonarQube, rules come out of the shadow of profiles to stand on their own. Now you can search rules by language, tag, SQALE characteristic, severity, status (e.g. beta), and repository. Oh yes, and you can also search them by profile, activation, and profile inheritance.
Once you’ve found your rules, this is now where you activate or deactivate them in a profile – individually through controls on the rule detail, or in bulk through controls in the search results list (look for the cogs). In fact, the profiles page no longer has its own list of rules. Instead, it offers a summary by severity, and a click-through to a rule search.
Another shift in rule handling comes for what used to be called “cloneable rules”. We’ve realized that strictly speaking, these are really “templates” rather than rules, and now treat them as such.
Templates can no longer be directly activated in a profile. Instead, you create rules from them and activate those.

Component viewer
The component viewer also experienced major changes in this version. The tabs across the top now offer filtering, which controls what parts of the code you see (e.g. only show the code that has issues), and decoration, which controls what you see layered on top of the code (show/hide the issues, the duplications, etc.).
A workspace concept debuts in this version. As you navigate from file to file through either code coverage or duplications, it helps you track where you are and where you’ve been.
A new Quality Gate widget makes it clearer just what’s wrong if your project isn’t making the grade. Now you can see exactly which measures are out of line:
Multi-language analysis was introduced in 4.2 and it just keeps getting better. Now we’ve added the distribution of LOC by language in the size widget for multi-language projects.
We’ve also added a language criterion to the Issues search:
To find this last feature, look closely at 4.4’s footer.
We now offer on-board API documentation.
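If you prefer to explore from the command line, the same documentation can be reached over HTTP. As a sketch (the host and port assume a default local install, and the exact endpoint paths are an assumption based on the documented web services):

```shell
# Ask a local SonarQube server to describe its own web services;
# the response lists each service and its available actions
curl http://localhost:9000/api/webservices/list
```

The on-board documentation in the UI presents the same information in a browsable form.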
This is a guest post from Tom Fennelly
Over the last few weeks we've been trying to "refresh" the Jenkins UI, modernizing the look and feel a bit. This has been a real community effort, with collaboration from lots of people, both in terms of implementation and in terms of providing honest/critical feedback. Lots of people deserve credit but, in particular, a big thanks to Kevin Burke and Daniel Beck.
Current / Old Look & Feel
New Look & Feel
Among other things, you'll see:
- A new responsive layout based on <div> elements (as opposed to <table> elements). Try resizing the screen or viewing on a smaller device. More to come on this though, we hope.
- Updated default font from Verdana to Helvetica.
- Nicer form elements and nicer buttons.
- Smoother side panels e.g. Build Executors, Build Queues and Build History panes.
- Smoother project views with more modern tabs.
You might already be seeing these changes if you're using the latest and greatest code from Jenkins. If not, you should see them in the next LTS release.
We've been trying to make these changes without breaking existing features and plugins and, so far, we think we've been successful. But if you spot anything we might have negatively affected, please log a JIRA and we'll try to address it.
One thing we've "sort of" played with too is cleaning up the Job Config page: breaking it into sections, making it easier to navigate, etc. This is a big change and something we've been shying away from because of the effect it will have on plugins and form submission. That said, I think we'll need to bite the bullet and tackle this sooner or later because it's a big usability issue.
Starting with Java Ecosystem version 2.2 (compatible with SonarQube version 4.2+), we no longer drive the execution of unit tests during Maven analysis. Dropping this feature seemed like such a natural step to us that we were a little surprised when people asked us why we’d taken it.
Contrary to popular belief, we didn’t drop test execution simply to mess with people. :-) Actually, we’ve been on this path for a while now. We had previously dropped test execution during PHP and .NET analyses, so this Java-only, Maven-only execution was the last holdout. But that’s trivial as a reason. Actually, it’s something we never should have done in the first place.
In the early days of SonarQube, there was a focus on Maven for analysis, and an attempt to add all the bells and whistles. From a functional point of view, the execution of tests is something that never belonged to the analysis step; we just did it because we could. But really, it’s the development team’s responsibility to provide test execution reports. Because of the potential for conflicts among testing tools, the dev team are the only ones who truly know how to correctly execute a project’s test suite. And in the words of SonarSource co-founder and CEO, Olivier Gaudin, “it was pretentious of us to think that we’d be able to master this in all cases.”
And master it, we did not. So there we were, left supporting a misguided, gratuitous feature that we weren’t sure we had full test coverage on. There are so many different, complex surefire configuration cases to cover that we just couldn’t be sure we’d implemented tests for all of them.
Plus, this automated test execution during Java/Maven analysis had an ugly technical underbelly. It was the last thing standing in the way of removing some crufty, thorn-in-the-side old code that we really needed to get rid of in order to be able to move forward efficiently. It had to go.
We realize that switching from test execution during analysis to test execution before analysis is a change, but it shouldn’t be an onerous one. You simply go from
mvn clean install
to
mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true
Your analysis will show the same results as before, and we’re left with a cleaner code base that’s easier to evolve.
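Putting it end to end, a typical Maven workflow might look like this sketch (the `mvn sonar:sonar` analysis step is an assumption based on a standard Maven-driven SonarQube setup; your own analysis invocation may differ):

```shell
# 1. Build and run the tests yourself, with JaCoCo recording coverage.
#    -Dmaven.test.failure.ignore=true lets the build continue past failing
#    tests so the analysis can still pick up and report the results.
mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true

# 2. Run the SonarQube analysis, which reuses the surefire and JaCoCo
#    reports produced by the previous step instead of re-running the tests.
mvn sonar:sonar
```

The key point is simply that test execution now happens in step 1, under your control, rather than inside the analysis itself.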
My favorite part is, to quote, "Jenkins has an almost laughably dominant position in the CI server segment", and "With 70% of the CI market on lockdown and showing an increasing rate of plugin development, Jenkins is undoubtably the most popular way to go with CI servers."
If you want to read more about it and the nine other technologies that won, they have produced a beautifully formatted PDF for you to read.
Some time ago, we built Jenkins bobblehead figures. They were such a huge hit that everywhere I go, I get asked about them. The only problem was that they couldn't be individually ordered, and we didn't have enough cycles to individually sell and ship them for those who wanted them.
So I decided to have a 3D model of Mr. Jenkins built, which would allow anyone to print him on a 3D printer. I commissioned akiki, a 3D model designer, to turn our beloved butler into a fully-digital color-printable figure. He was even kind enough to discount the price with the understanding that this is for an open-source project.
The result was, IMHO, excellent, and when I finally came back to my house yesterday from a two-week trip, I found it delivered: with the red bow tie, a napkin, a blue suit, and his signature beard, it is instantly recognizable as Mr. Jenkins. He's mounted on top of a red base and is quite stable. I think the Japanese sensibility of the designer is really showing! Note that the material has a rough surface and is not very strong, but that's what you trade to get full color.
I've put it up on Shapeways so that you can order it yourself. The figure is about 2.5in/6cm tall. The price includes a bit of markup toward recovering the cost of the design. My goal is to sell 25 of them, which will roughly break even. Any excess, if it ever happens, will be donated back to the project.
Likewise, once I hit that goal, I will make the original data publicly available under CC-BY-SA, so that other people can modify the data or even print it on their own 3D printers.
This year marks the 3rd annual Jenkins User Conference in Israel. While the timing of the event turned out to be less than ideal for reasons beyond our control, that didn't stop 400 Jenkins users from showing up at the "explosive" event at a seaside hotel near Tel Aviv.
Shlomi Ben-Haim kicked off the conference by reporting that JUC Israel just keeps getting bigger: we sold out two weeks early, and the team had to turn away people who really wanted to come. The degree of Jenkins adoption in this part of the world is amazing, and we might have to find a bigger venue next year to accommodate everyone who wants to come.
It turns out most of the talks were in Hebrew, so it was difficult for me to really understand what was going on, but the talks ranged from highly technical ones, like how to provision Jenkins from configuration management (the server as well as jobs), all the way to more culture-focused ones, like how to deploy CD practice in an organization. Companies large and small were well represented, and I met a number of folks who actively contribute to the community.
There were a lot of hallway conversations, and those of us at the booth had a busy time.
Thanks to everyone who came, to JFrog for being on the ground for the event (and congratulations on the new round of funding), and to CloudBees for hosting the event. Please let us know if there are things we can do better, and see you again next year!
One of the challenges of running Jenkins User Conferences is to balance the interests of attendees and the interests of sponsors. Sponsors would like to know more about attendees, but attendees are often wary of getting contacted. Our past few JUCs have made it opt-in to have contact information passed to sponsors, but the ratio of people who opt in is too low. So we started thinking about adjusting this.
So our current plan is to reduce the amount of data we collect and pass on, but to make this automatic for every attendee. Specifically, we'd limit the data only to name, company, e-mail, and city/state/country you are from. But no phone number, no street address, etc. We discussed this in the last project meeting, and people generally seem to think this is reasonable. That said, this is a sensitive issue, so we wanted more people to be aware.
By the way, the call for papers for JUC Bay Area is about to close in a few days. If you are interested in giving a talk (and that's often the best way to get feedback and credit for your work), please make sure to submit it this week.
Software projects often publish comparisons with other projects, with which they compete. These comparisons typically have a few characteristics in common:
- They aim at highlighting reasons why one project is superior – that is, they are marketing material.
- While they may be accurate when initially published, competitor information is rarely updated.
- Pure factual information is mixed with opinion, sometimes in a way that doesn’t make clear which is which.
- Competitors don’t get much say in what is said about their projects.
- Users can’t be sure how much to trust such comparisons.
Of course, we’re used to it. We no longer expect the pure, unvarnished truth from software companies – no more than from drug companies, insurance companies, car salesmen or government agencies. We’re cynical.
But one might at least hope that open source projects might do better. It’s in all our interests, and in our users’ interests, to have accurate, up-to-date, unbiased feature comparisons.
So, what would such a comparison look like?
- It should have accurate, up-to-date information about each project.
- That information should be purely factual, to the extent possible. Where necessary, opinions can be expressed only if clearly identified as opinion by their content and placement.
- Developers from each project should be responsible for updating their own features.
- Developers from each project should be accountable for any misstatements that slip in.
I think this can work because most of us in the open source world are committed to… openness. We generally value accuracy and we try to separate fact from opinion. Of course, it’s always easy to confuse one’s own strongly held beliefs with fact, but in most groups where I participate, I see such situations dealt with quite easily and with civility. Open source folks are, in fact, generally quite civil.
So, to carry this out, I’m announcing the .NET Test Framework Feature Comparison project – ideas for better names and an acronym are welcome. I’ll provide at least a temporary home for it and set up an initial format for discussion. We’ll start with MbUnit and NUnit, but I’d like to add other frameworks to the mix as soon as volunteers are available. If you are part of a .NET test framework project and want to participate, please drop me a line.
- February 22, 2012 - Webinar: Solve Performance Bottlenecks and Function Problems In Your
- February 23, 2012 - Source Test Workshop for Developers, Testers, IT Ops - Learn how the Open Source Test Tools Make Test Development and Operation Easy
- March 21, 2012 - Source Test Workshop for CIOs, CTOs, Business Managers - Learn how to bring Open Source Test tools and methodology into your organization
- March 22, 2012 - soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops
- March 28, 2012 - Open Source Performance Test Workshop for CIOs, CTOs, Business Managers - Load and performance testing without hassle and cost
- March 29, 2012 - Open Source Performance Test Workshop for Developers, Testers, IT Managers - The PushToTest Calibration Test Methodology explained
- April 17, 2012 - Selenium, soapUI, Sahi, TestMaker Performance Testing In Your
- April 18, 2012 - Open Source Performance Test Workshop for Developers, Testers, IT
- May 2, 2012 - Source Test Workshop for CIOs, CTOs, Business Managers
- May 3, 2012 - soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops
The Selenium Tutorial for Beginners has the following chapters:
- Selenium Tutorial 1: Write Your First Functional Selenium Test
- Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
- Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
- Selenium Tutorial 4: Install and Configure Selenium RC, Grid
- Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
- Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
- Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
- Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
- Selenium Tutorial 9: Debugging Selenium Tests
- Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
- Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
- Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
- Selenium Tutorial 13: Alternative To Selenium
I wrote a Selenium tutorial for beginners to make it easy to get started and take advantage of the advanced topics. Download TestMaker Community to get the Selenium tutorial for beginners and immediately build and run your first Selenium tests. It is entirely open source and free!
Distributing the work of performance testing through an Agile epic, stories, and sprints reduces the overall testing effort and informs the organization's business managers about the service's performance. The biggest problem I see is keeping the testing transparent so that anyone - tester, developer, IT Ops, business manager, architect - can follow a requirement down to the actual test results.
With the right tools, methodology, and coaching an organization gets the following:
- Process identification and re-engineering for Test Driven Development
- Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
- Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
- Construction of the organization's end-to-end tests with a team of PushToTest Global Professional Services, using this system and training of the existing organization's testers, Subject Matter Experts, and Developers to build and operate tests
- On-going technical support
The key to high quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization the coaches accomplish the following:
- Test architects and test developers work with the existing team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test
- Technical coaching on test automation to ensure that team members follow defined practices
Agile, Test Management, and Roles in SOA
The Agile software development process normally focuses first on functional testing: smoke tests, regression tests, and integration tests. When Agile is applied to SOA service development, the deliverables support the overall vision and business model for the new software. At a minimum we should expect:
- Product Owner defines User Stories
- Test Developer defines Test Cases
- Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
- Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated with the test management service
- Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
- As tests execute, the test management system creates "Test Execution Records" showing the test results
- To what extent will large organizations dump legacy test tools for open source test tools?
- How big would the market for private cloud software platforms be?
- Does mankind have the tools to make a reliable success of the complicated world we built?
- How big of a market will SOA testing and development be?
- What are the best ways to migrate from HP to Selenium?
The Scalability Argument for Service Enabling Your Applications. I make the case for building, deploying, and testing SOA services effectively. I point out that the weakness of this approach comes at the tool and platform level; for example, 37% of an application's code can exist simply to deploy your service.
How PushToTest Uses Agile Software Development Methodology To Build TestMaker. A conversation I had with Todd Bradfute, our lead sales engineer, on surfacing the results of using Agile methodology to build software applications.
"Selenium eclipsed HP’s QTP on job posting aggregation site Indeed.com to become the number one requisite job experience/skill for on-line posted automated QA jobs (2700+ vs ~2500 as of this writing)," John Dunham, CEO at Sauce Labs, noted.
Run Private Clouds For Cost Savings and Control. Instead of running 400 Amazon EC2 machine instances, Plinga uses Eucalyptus to run its own cloud. Plinga needed the control, reliability, and cost-savings of running its own private cloud, Marten Mickos, CEO at Eucalyptus, reports in his blog.
How To Evaluate Highly Scalable SOA Component Architecture. I show how to evaluate highly scalable SOA component architecture. This is ideal for CIOs, CTOs, Development and Test Executives, and IT managers.
Planning A TestMaker Installation. TestMaker features test orchestration capabilities to run Selenium, Sahi, soapUI, and unit tests written in Java, Ruby, Python, PHP, and other languages in a Grid and Cloud environment. I write about the issues you may encounter installing the TestMaker platform.
Repurposing ThoughtWorks Twist Scripts As Load and Performance Tests. I really like ThoughtWorks Twist for building functional tests in an Agile process. This blog and screencast shows how to rapidly find performance bottlenecks in your Web application using Thoughtworks Twist with PushToTest TestMaker Enterprise test automation framework.
4 Steps To Getting Started With The Open Source Test Engagement Model. I describe the problems you need to solve as a manager to get started with Open Source Testing in your organization.
Correlation Technology Finds The Root Cause Of Performance Bottlenecks. Use aspect-oriented programming (AOP) technology to surface memory leaks, thread deadlocks, and slow database queries in your Java Enterprise applications.
10 Agile Ways To Build and Test Rich Internet Applications (RIA). Shows how competing RIA technologies put the emphasis on test and deploy.
Oracle Forms Application Testing. Java Applet technology powers Oracle Forms and many Web applications. This blog shows how to install and use open source tools to test Oracle Forms applications.
Saving Your Organization From The Eventual Testing Meltdown of Using Record/Playback Solely. The Selenium project is caught between the world of proprietary test tool vendors and the software developer community. This blog talks about the tipping-point.
Choosing Java Frameworks for Performance. A round-up of opinions on which technologies are best for building applications: lightweight and responsive, RIA, with high developer productivity.
Selenium 2: Using The API To Create Tests. A DZone Refcard we sponsored to explain how to build tests of Web applications using the new Selenium 2 APIs. For the Selenium 1 I wrote another Refcard, click here.
Test Management Tools. A discussion I had with the Zephyr test management team on Agile testing.
Migrating From HP Mercury QTP To PushToTest TestMaker 6. HP QTP just can't deal with the thousands of new Web objects coming from Ajax-based applications. This blog and screencast shows how to migrate.
10 Tutorials To Learn TestMaker 6. TestMaker 6 is the easier way to surface performance bottlenecks and functional issues in Web, Rich Internet Application (RIA, using Ajax, Flex, Flash), Service Oriented Architecture (SOA), and Business Process Management (BPM) applications.
5 Easy Ways To Build Data-Driven Selenium, soapUI, Sahi Tests. This is an article on using the TestMaker Data Production Library (DPL) system as a simple and easy way to data-enable tests. A DPL does not require programming or scripting.
Open Source Testing (OST) Is The Solution To Modern Complexity. Thanks to management oversight, negligence, and greed, British Petroleum (BP) killed 11 people, injured 17 people, and dumped 4,900,000 barrels of oil into the Gulf of Mexico in 2010. David Brooks of the New York Times became an unlikely apologist for the disaster, citing the complexity of the oil drilling system.
Choosing automated software testing tools: Open source vs. proprietary. Colleen Fry's article from 2010 discusses how software testers decide which type of automated testing tool, or which combination of open source and proprietary tools, best meets their needs. We came a long way in 2011 toward achieving these goals.
All of my blogs are found here.
Your organization may have adopted Agile Software Development Methodology and forgotten about load and performance testing! In my experience this is pretty common. Between Scrum meetings, burn-down sessions, sprints, test-first, and user stories, many forms of testing - including load and performance testing, stress testing, and integration testing - can get lost. And it is normally not only your fault. Consider the following:
- The legacy proprietary test tools - HP LoadRunner, HP QTP, IBM Rational Tester, Microsoft VSTS - are hugely expensive. Organizations can't afford to equip developers and testers with their own licensed copies. These tool licenses are contrary to Agile testing, where developers and testers work side-by-side building and testing
- Many testers still cannot write test code. Agile developers write unit tests in high-level languages (Java, C#, PHP, Ruby). Testers need a code-less way to repurpose these tests into functional tests, load and performance tests, and production service monitors.
- Business managers need a code-less way to define the software release requirements criteria. Agile developers see Test Management tools (like HP Quality Center QC) as a needless extra burden on their software development effort. Agile developers are hugely attracted to Continuous Integration (CI) tools like Hudson, Jenkins, Cruise Control, and Bamboo. Business managers need an integrated CI and test platform to define requirements and see how close their software is to 'shipping'
Registration is free! Click here to learn more and register now: