Jesse and I will walk through the source code of the workflow plugin, highlight key abstractions and extension points, and discuss how they fit together.
If you are interested in developing or retrofitting plugins to work with workflows, I think you'll find this session interesting.
(This is a guest post from Michael Neale)
Recently, at the Docker conference (DockerCon), the Docker Hub was announced.
The hub (which includes their image building and storage service) also provides some "official" images (sometimes they call them repositories - they are really just sets of images).
So after talking with all sorts of people, we decided to create an official Jenkins image, which is hosted on the Docker Hub simply as "jenkins".
So when you run "docker pull jenkins", this is the image you will get. It is based on the current LTS (and will be kept up to date with the LTS), but does not include the weekly releases (yet). A fairly basic Jenkins image (it includes Jenkins itself, plus enough to run some basic builds), built on the Jenkins LTS and the latest Ubuntu LTS, seemed quite convenient, and it is easy to maintain using the official Ubuntu/Debian packaging of Jenkins.
Docker is a great way to try out and use server-based systems: it brings along all the dependencies needed, and the images are genuinely portable (i.e. anywhere Docker runs, you can run Docker images). There are official images for many popular server platforms (Redis, MySQL, all the Linux distros, and so on), so it seemed crazy not to include Jenkins in this list.
"docker run -p 8080:8080 jenkins" is all you need to get going with LTS Jenkins now.
You can also use "docker run jenkins:1.554" to get the latest of that lineage of LTS releases, or pick a specific one: "docker run jenkins:1.554.3" if you like. Leaving off a version assumes the latest. Check the tags page to see what is available.
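By default, everything Jenkins writes lives inside the container and disappears with it. The usual Docker approach is to mount a host directory over the image's home directory so your data survives container restarts. A minimal sketch, assuming the image keeps its state under /var/jenkins_home (check the image documentation for the exact path):

docker run -p 8080:8080 -v /your/host/path:/var/jenkins_home jenkins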
You can read more and see how you can use it here.
There have been some questions and discussions about how to make use of Jenkins with the Docker Hub for creating new and interesting Docker-image-based workflows for deployment.
In fact, Jenkins featured in one of the first slides of the first keynote of DockerCon:
To make this dream a reality, some additional plugins had to be created - but this opens up the possibility of combining the Docker Hub (which builds and stores images) with Jenkins (workflow, testing, deployment) to build out a continuous pipeline for handling Docker-based apps. I attempted to describe this in more detail here.
It will be interesting to watch this grow and change.
I'll talk about my recent chef/puppet integration work in Jenkins. Sven from Perforce will talk about how to leverage Perforce features from Jenkins, and then James Nord will talk about workflow. It will be a worthwhile two hours.
If the line-up of talks is not enough to sway you, you should also know that I will be bringing some Jenkins give-aways!
I'm not sure how many people to expect, but there's a cap at 80 people, so if you are thinking about coming, be sure to RSVP. Looking forward to seeing many of you there!
Finally, if you are in London, the usual suspects (CloudBees, PuppetLabs, XebiaLabs, MidVision, SOASTA, et al) are doing a free event titled "How To Accelerate Innovation with Continuous Delivery" that you might also be interested in.
The team is proud to announce the release of SonarQube 4.4, which includes many exciting new features:
- Rules page
- Component viewer
- New Quality Gate widget
- Improved multi-language support
- Built-in web service API documentation
With this version of SonarQube, rules come out of the shadow of profiles to stand on their own. Now you can search rules by language, tag, SQALE characteristic, severity, status (e.g. beta), and repository. Oh yes, and you can also search them by profile, activation, and profile inheritance.
Once you’ve found your rules, the Rules page is now where you activate or deactivate them in a profile – individually, through controls on the rule detail, or in bulk, through controls in the search results list (look for the cogs). In fact, the profiles page no longer has its own list of rules. Instead, it offers a summary by severity, and a click-through to a rule search.
Another shift in rule handling comes for what used to be called “cloneable rules”. We’ve realized that strictly speaking, these are really “templates” rather than rules, and now treat them as such.
Templates can no longer be directly activated in a profile. Instead, you create rules from them and activate those.

Component viewer
The component viewer also experienced major changes in this version. The tabs across the top now offer filtering, which controls what parts of the code you see (e.g. only show me the code that has issues), and decoration, which controls what you see layered on top of the code (show/hide the issues, the duplications, etc.).
A workspace concept debuts in this version. As you navigate from file to file through either code coverage or duplications, it helps you track where you are and where you’ve been.
A new Quality Gate widget makes it clearer just what’s wrong if your project isn’t making the grade. Now you can see exactly which measures are out of line:
Multi-language analysis was introduced in 4.2 and it just keeps getting better. Now we’ve added the distribution of LOC by language in the size widget for multi-language projects.
We’ve also added a language criterion to the Issues search:
To find this last feature, look closely at 4.4's footer.
We now offer on-board API documentation.
This is a guest post from Tom Fennelly
Over the last few weeks we've been trying to "refresh" the Jenkins UI, modernizing the look and feel a bit. This has been a real community effort, with collaboration from lots of people, both in terms of implementation and in terms of providing honest/critical feedback. Lots of people deserve credit but, in particular, a big thanks to Kevin Burke and Daniel Beck.
You're probably familiar with how the Jenkins UI currently looks, but for the sake of comparison I think it's worth showing a screenshot of the current/old UI alongside a screenshot of the new UI.
Current / Old Look & Feel
New Look & Feel
Among other things, you'll see:
- A new responsive layout based on <div> elements (as opposed to <table> elements). Try resizing the screen or viewing on a smaller device. More to come on this though, we hope.
- Updated default font from Verdana to Helvetica.
- Nicer form elements and nicer buttons.
- Smoother side panels e.g. Build Executors, Build Queues and Build History panes.
- Smoother project views with more modern tabs.
You might already be seeing these changes if you're using the latest and greatest code from Jenkins. If not, you should see them in the next LTS release.
We've been trying to make these changes without breaking existing features and plugins, and so far we think we've been successful. But if you spot anything we might have had a negative effect on, please log a JIRA issue and we'll try to address it.
One thing we've "sort of" played with too is cleaning up the Job Config page - breaking it into sections, making it easier to navigate, etc. This is a big change, and something we've been shying away from because of the effect it will have on plugins and form submission. That said, I think we'll need to bite the bullet and tackle this sooner or later because it's a big usability issue.
Starting with Java Ecosystem version 2.2 (compatible with SonarQube version 4.2+), we no longer drive the execution of unit tests during Maven analysis. Dropping this feature seemed like such a natural step to us that we were a little surprised when people asked us why we’d taken it.
Contrary to popular belief we didn’t drop test execution simply to mess with people. :-) Actually, we’ve been on this path for a while now. We had previously dropped test execution during PHP and .NET analyses, so this Java-only, Maven-only execution was the last holdout. But that’s trivial as a reason. Actually, it’s something we never should have done in the first place.
In the early days of SonarQube, there was a focus on Maven for analysis, and an attempt to add all the bells and whistles. From a functional point of view, the execution of tests is something that never belonged to the analysis step; we just did it because we could. But really, it’s the development team’s responsibility to provide test execution reports. Because of the potential for conflicts among testing tools, the dev team are the only ones who truly know how to correctly execute a project’s test suite. And in the words of SonarSource co-founder and CEO, Olivier Gaudin, “it was pretentious of us to think that we’d be able to master this in all cases.”
And master it, we did not. So there we were, left supporting a misguided, gratuitous feature that we weren’t sure we had full test coverage on. There are so many different, complex Surefire configuration cases to cover that we just couldn’t be sure we’d implemented tests for all of them.
Plus, this automated test execution during Java/Maven analysis had an ugly technical underbelly. It was the last thing standing in the way of removing some crufty, thorn-in-the-side old code that we really needed to get rid of in order to be able to move forward efficiently. It had to go.
We realize that switching from test execution during analysis to test execution before analysis is a change, but it shouldn’t be an onerous one. You simply go from

mvn clean install

to

mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true
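The analysis itself then runs as a separate step after the build. As a sketch, assuming the standard SonarQube Maven plugin is configured for the project:

mvn sonar:sonar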
Your analysis will show the same results as before, and we’re left with a cleaner code base that’s easier to evolve.
My favorite part is, to quote, "Jenkins has an almost laughably dominant position in the CI server segment", and "With 70% of the CI market on lockdown and showing an increasing rate of plugin development, Jenkins is undoubtably the most popular way to go with CI servers."
If you want to read more about it and the other 9 technologies that won, they have produced a beautifully formatted PDF for you to read.
Some time ago, we built Jenkins bobblehead figures. They were such a huge hit that everywhere I go, I get asked about them. The only problem was that they could not be ordered individually, and we didn't have enough cycles to individually sell and ship them for those who wanted them.
So I decided to have a 3D model of Mr. Jenkins built, which would allow anyone to print one via a 3D printer. I commissioned akiki, a 3D model designer, to turn our beloved butler into a fully-digital, color-printable figure. He was even kind enough to discount the price with the understanding that this is for an open-source project.
The result was, IMHO, excellent, and when I finally came back from a two-week trip yesterday, I found it delivered to my house:
With the red bow tie, a napkin, a blue suit, and his signature beard, it is instantly recognizable as Mr. Jenkins. He's mounted on top of a red base, and is quite stable. I think the Japanese sensibility of the designer really shows! Note that the material has a rough surface and is not very strong, but that's the trade-off for full color.
I've put it up on Shapeways so that you can order one yourself. The figure is about 2.5in/6cm tall. The price includes a bit of markup toward recovering the cost of the design. My goal is to sell 25 of them, which will roughly break even. Any excess, if it ever happens, will be donated back to the project.
Likewise, once I hit that goal, I will make the original data publicly available under CC-BY-SA, so that other people can modify the data or even print it on their own 3D printers.
This year marks the 3rd annual Jenkins User Conference in Israel. While the timing of the event turned out to be less than ideal for reasons beyond our control, that didn't stop 400 Jenkins users from showing up at the "explosive" event at a seaside hotel near Tel Aviv.
Shlomi Ben-Haim kicked off the conference by reporting that JUC Israel just keeps getting bigger: we sold out 2 weeks early, and the team had to turn down people who really wanted to come. The degree of Jenkins adoption in this part of the world is amazing, and we might have to find a bigger venue next year to accommodate everyone who wants to come.
It turns out most of the talks were in Hebrew, so it was difficult for me to really follow what was going on, but the talks ranged from highly technical ones, like how to provision Jenkins from configuration management (the server as well as jobs), all the way to more culture-focused ones, like how to deploy CD practices in an organization. Companies large and small were well represented, and I met a number of folks who actively contribute to the community.
There were a lot of hallway conversations, and those of us at the booth had a busy time.
Thanks to everyone who came, thanks to JFrog for being on the ground for the event (and congratulations on the new round of funding), and to CloudBees for hosting it. Please let us know if there are things we can do better, and see you again next year!
A few months ago, we started on an innocuous-seeming task: make the .NET Ecosystem compatible with the multi-language feature in SonarQube 4.2. What followed was a bit like one of those cartoons where you pull a string on the character’s sweater and the whole character starts to unravel. Oops.
Once we stopped pulling the string and started knitting again (to torture a metaphor), what came off the needles was a different sweater than what we’d started with. The changes we made along the way – fewer external tools, simpler configuration – were well-intentioned, and we still believe they were the right things to do. But many people were at pains to tell us that the old way had been just fine, thank you. It had gotten the job done on a day-to-day basis for hundreds of projects, and hundreds-of-thousands of lines of code, they said. It had been crafted by .NETers for .NETers, and as Java geeks, they said, we really didn’t understand the domain.
And they were right. But when we started, we didn’t understand how much we didn’t understand. Fortunately, we have a better handle on our ignorance now, and a plan for overcoming it and emerging with industry-leading C# and VB.NET analysis tools.
First, we’re planning to hire a C# developer. This person will be first and foremost our “really get .NET” person, and represents a real commitment to the future of SonarQube’s .NET plugins. She or he will be able to head off our most boneheaded notions at the pass, and guide us in the ways of righteousness. Or at least in the ways of .NETness.
Of course it’s not just a guru position. We’ll call on this person to help us progressively improve and evolve the C# and VB.NET plugins, and their associated helpers, such as the Analysis Bootstrapper. He (or she) will also help us fill the gaps back in. When we reworked the .NET ecosystem there were gains, but there were also losses. For instance, there are corner cases not covered today by the C# and VB.NET plugins which were covered by the old .NET Ecosystem.
We also plan to start moving these plugins into C#. We’ve realized that we just can’t do the job as well in Java as we need to. But the move to C# code will be a gradual one, and we’ll do our best to make it painless and transparent. Also on the list will be identifying the most valuable rules from FxCop and ReSharper and re-implementing them in our code.
At the same time, we’ll be advancing on these fronts for both C# and VB.NET:
- Push “cartography” information to SonarQube.
- Implement bug detection rules.
- Implement framework-specific rules, for things like SharePoint.
All of that with the ultimate goal of becoming the leader in analyzing .NET code. We’ve got a long way to go, but we know we’ll bring it home in the end.
One of the challenges of running Jenkins User Conferences is to balance the interest of attendees and the interest of sponsors. Sponsors would like to know more about attendees, but attendees are often wary of getting contacted. Our past few JUCs have made it opt-in to have contact information passed to sponsors, but the ratio of people who opt in has been too low. So we started thinking about adjusting this.
So our current plan is to reduce the amount of data we collect and pass on, but to make this automatic for every attendee. Specifically, we'd limit the data only to name, company, e-mail, and city/state/country you are from. But no phone number, no street address, etc. We discussed this in the last project meeting, and people generally seem to think this is reasonable. That said, this is a sensitive issue, so we wanted more people to be aware.
By the way, the call for papers for JUC Bay Area closes in a few days. If you are interested in giving a talk (and that's often the best way to get feedback and take credit for your work), please make sure to submit it this week.
The other day I was explaining how to implement a new workflow primitive to Vivek Pandey, and I captured it as a recording.
The recording goes over how to implement the Step extension point, which is the workflow equivalent of the BuildStep extension point. If you are interested in jumping into workflow plugin hacking, this might be useful (and don't forget to get in touch with us so that we can help you!)
AsyncApprovals - rules and exceptions [Contributors: James Counts]

In the end, all tests become synchronous. This means for a normal test we recommend […]. However, if you are looking to test exceptions, everything changes and you might want to use […].

Removed BCL requirement [Contributors: James Counts & Simon Cropp]

HttpClient is a nice way of doing web calls in .NET. Unfortunately, at this time the BCL package on NuGet does unfortunate things to your project if you do not wish to use HttpClient. This is a violation of a core philosophy of ApprovalTests:
"only pay for the dependencies you use"
HttpClient was added in ApprovalTests 3.6. Thanks to Simon for pointing out and troubleshooting this error. It has now been removed.
WPF Binding Asserts [Contributors: Jay Bazuzi]

This is a bonus from v3.6. WPF binding errors are very hard to detect and report; to even get the reports to happen, you have to fiddle with the registry and then read and parse logs. No more! Now you can use BindsWithoutError to ensure that your WPF bindings are working.
We’ve got an ambitious vision for the C/C++ plugin this year. To fulfill it, we started with some under-the-covers improvements to the parser and the internal data model. Those improvements were really just a means to an end, but they’ve had the effect of markedly improving our ability to parse and analyze C and C++ code.
Unfortunately, they came with a downside: a higher analysis configuration burden. For instance, in order to correctly expand macros in the code (and we can, now), we need to know what the macro means. Which means that the macro definition needs to be passed in to the analysis.
Just contemplating the configuration update required for a single large system made me queasy, and I wasn’t the only one. So we set the main plugin aside for a little while this spring and wrote a build wrapper, which will eavesdrop on the tool of your choice (e.g. Make or MSBuild) to gather all the extra configuration data for you.
The build wrapper supports the Clang, GCC and MSVC compilers, and is available in 32-bit and 64-bit versions for Windows and Linux, and in a 64-bit version for OS X. Using it couldn’t be simpler. You drop it somewhere on your machine (make sure it’s executable on ‘nix systems), and prepend your build command with it:
build-wrapper --out-dir [output directory] make
Of course, it needs to be a full build that the wrapper is eavesdropping on, so ideally this command would come after a make clean. And for MSBuild it would be something like:
build-wrapper --out-dir [output directory] msbuild /t:rebuild [other options]
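Putting the two steps together for a Make-based project (bw-output here is just an arbitrary name for the output directory):

make clean
build-wrapper --out-dir bw-output make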
The output directory is where the build wrapper writes its data files, creating the directory if it doesn’t exist. Currently, the build wrapper simply adds its files to the specified directory, but that behavior could change in the future (e.g. someday it might start by issuing rm [output directory]/*).
The build wrapper writes two files: build-wrapper.log and build-wrapper-dump.json. The .log file is just that – a log that Support may ask for if you ever contact them with questions. The .json file is the one that’s actually used during analysis. This screenshot of the build-wrapper-dump.json from my Linux build of CMake should give you an idea what these files look like:
I’m only posting a brief screenshot because the full file is 43,614 lines long (plus a blank line at the end). I’m not saying that all the information in the file is absolutely required for analysis, but it would have taken me a very long time to identify and specify the pieces that are.
Once the build is complete, and your .json file is written, it’s time to kick off a SonarQube analysis. But first you’ll need to tell SonarQube where to find all that extra configuration data the build wrapper just logged. In your sonar-project.properties add the following:
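What you add is a single property pointing the analysis at the wrapper's output directory. As a sketch (the exact key name here is an assumption; check the C/C++ plugin documentation for your version), it would look something like:

sonar.cfamily.build-wrapper-output=[output directory]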
I end up with a properties file that’s only six lines long (including whitespace), and SonarQube has everything it needs to analyze my project:
sonar.projectName=CMake Linux CLang build
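For illustration, a complete sonar-project.properties of roughly that size might look like the following sketch; every key and value other than sonar.projectName is an assumed example, not the author's actual file:

sonar.projectKey=cmake-linux-clang
sonar.projectName=CMake Linux CLang build
sonar.projectVersion=1.0
sonar.sources=.

sonar.cfamily.build-wrapper-output=bw-output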
If you haven’t used the build wrapper on your C/C++ projects yet, you should give it a try and let us know how it goes. Hopefully, it will help you drastically improve the quality of your analyses while dramatically decreasing the configuration effort.
Software projects often publish comparisons with other projects, with which they compete. These comparisons typically have a few characteristics in common:
- They aim at highlighting reasons why one project is superior – that is, they are marketing material.
- While they may be accurate when initially published, competitor information is rarely updated.
- Pure factual information is mixed with opinion, sometimes in a way that doesn’t make clear which is which.
- Competitors don’t get much say in what is said about their projects.
- Users can’t be sure how much to trust such comparisons.
Of course, we’re used to it. We no longer expect the pure, unvarnished truth from software companies – no more than from drug companies, insurance companies, car salesmen or government agencies. We’re cynical.
But one might at least hope that open source projects would do better. It’s in all our interests, and in our users’ interests, to have accurate, up-to-date, unbiased feature comparisons.
So, what would such a comparison look like?
- It should have accurate, up-to-date information about each project.
- That information should be purely factual, to the extent possible. Where necessary, opinions can be expressed only if clearly identified as opinion by their content and placement.
- Developers from each project should be responsible for updating their own features.
- Developers from each project should be accountable for any misstatements that slip in.
I think this can work because most of us in the open source world are committed to… openness. We generally value accuracy and we try to separate fact from opinion. Of course, it’s always easy to confuse one’s own strongly held beliefs with fact, but in most groups where I participate, I see such situations dealt with quite easily and with civility. Open source folks are, in fact, generally quite civil.
So, to carry this out, I’m announcing the .NET Test Framework Feature Comparison project – ideas for better names and an acronym are welcome. I’ll provide at least a temporary home for it and set up an initial format for discussion. We’ll start with MbUnit and NUnit, but I’d like to add other frameworks to the mix as soon as volunteers are available. If you are part of a .NET test framework project and want to participate, please drop me a line.
- Webinar: Solve Performance Bottlenecks and Function Problems In Your… (February 22, 2012)
- Open Source Test Workshop for Developers, Testers, IT Ops - Learn how the Open Source Test Tools Make Test Development and Operation Easy (February 23, 2012)
- Open Source Test Workshop for CIOs, CTOs, Business Managers - Learn how to bring Open Source Test tools and methodology into your organization (March 21, 2012)
- soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops (March 22, 2012)
- Open Source Performance Test Workshop for CIOs, CTOs, Business Managers - Load and performance testing without hassle and cost (March 28, 2012)
- Open Source Performance Test Workshop for Developers, Testers, IT Managers - The PushToTest Calibration Test Methodology explained (March 29, 2012)
- Selenium, soapUI, Sahi, TestMaker Performance Testing In Your… (April 17, 2012)
- Open Source Performance Test Workshop for Developers, Testers, IT… (April 18, 2012)
- Open Source Test Workshop for CIOs, CTOs, Business Managers (May 2, 2012)
- soapUI, Sahi, TestMaker Workshop for Testers, Developers, IT Ops (May 3, 2012)
The Selenium Tutorial for Beginners has the following chapters:
- Selenium Tutorial 1: Write Your First Functional Selenium Test
- Selenium Tutorial 2: Write Your First Functional Selenium Test of an Ajax application
- Selenium Tutorial 3: Choosing between Selenium 1 and Selenium 2
- Selenium Tutorial 4: Install and Configure Selenium RC, Grid
- Selenium Tutorial 5: Use Record/Playback Tools Instead of Writing Test Code
- Selenium Tutorial 6: Repurpose Selenium Tests To Be Load and Performance Tests
- Selenium Tutorial 7: Repurpose Selenium Tests To Be Production Service Monitors
- Selenium Tutorial 8: Analyze the Selenium Test Logged Results To Identify Functional Issues and Performance Bottlenecks
- Selenium Tutorial 9: Debugging Selenium Tests
- Selenium Tutorial 10: Testing Flex/Flash Applications Using Selenium
- Selenium Tutorial 11: Using Selenium In Agile Software Development Methodology
- Selenium Tutorial 12: Run Selenium tests from HP Quality Center, HP Test Director, Hudson, Jenkins, Bamboo
- Selenium Tutorial 13: Alternative To Selenium
I wrote a Selenium tutorial for beginners to make it easy to get started and take advantage of the advanced topics. Download TestMaker Community to get the Selenium tutorial for beginners and immediately build and run your first Selenium tests. It is entirely open source and free!
Distributing the work of performance testing across Agile epics, stories, and sprints reduces the overall testing effort and informs the organization's business managers about the service's performance. The biggest problem I see is keeping the testing transparent, so that anyone - tester, developer, IT Ops, business manager, architect - can follow a requirement down to the actual test results.
With the right tools, methodology, and coaching an organization gets the following:
- Process identification and re-engineering for Test Driven Development
- Installation and configuration of a best-in-class SOA Test Orchestration Platform to enable rapid test development of re-usable test assets for functional testing, load and performance testing and production monitoring
- Integration with the organization's systems, including test management (for example, Rally and HP QC) and service asset management (for example, HP Systinet)
- Construction of the organization's end-to-end tests by the PushToTest Global Professional Services team, using this system, and training of the organization's existing testers, Subject Matter Experts, and Developers to build and operate tests
- On-going technical support
The key to high quality and reliable SOA service delivery is to practice an always-on management style. That requires on-site coaching. In a typical organization the coaches accomplish the following:
- Test architects and test developers work with the existing team members. They bring expert knowledge of the test tools. Most important is their knowledge of how to go from concept to test
- Technical coaching on test automation to ensure that team members follow defined…
Agile, Test Management, and Roles in SOA
The Agile software development process normally focuses first on functional testing - smoke tests, regression tests, and integration tests. Applied to SOA service development, Agile deliverables support the overall vision and business model for the new software. At a minimum we should expect:
- Product Owner defines User Stories
- Test Developer defines Test Cases
- Product team translates Test Cases into soapUI, TestMaker Designer, and Java project implementations
- Test Developer wraps test cases into Test Scenarios and creates an easily accessible test record associated with the test management service
- Any team member follows a User Story down into associated tests. From there they can view past results or execute tests again.
- As tests execute, the test management system creates "Test Execution Records" showing the test results
- To what extent will large organizations dump legacy test tools for open source test tools?
- How big would the market for private cloud software platforms be?
- Does mankind have the tools to make a reliable success of the complicated world we built?
- How big of a market will SOA testing and development be?
- What are the best ways to migrate from HP to Selenium?
The Scalability Argument for Service Enabling Your Applications. I make the case for building, deploying, and testing SOA services effectively, and point out that the weakness of this approach comes at the tool and platform level. For example, it can take 37% of an application's code simply to deploy your service.
How PushToTest Uses Agile Software Development Methodology To Build TestMaker. A conversation I had with Todd Bradfute, our lead sales engineer, on surfacing the results of using Agile methodology to build software applications.
"Selenium eclipsed HP’s QTP on job posting aggregation site Indeed.com to become the number one requisite job experience / skill for on-line posted automated QA jobs (2700+ vs ~2500 as of this writing,)" John Dunham, CEO at Sauce Labs, noted.
Run Private Clouds For Cost Savings and Control. Instead of running 400 Amazon EC2 machine instances, Plinga uses Eucalyptus to run its own cloud. Plinga needed the control, reliability, and cost-savings of running its own private cloud, Marten Mickos, CEO at Eucalyptus, reports in his blog.
How To Evaluate Highly Scalable SOA Component Architecture. I show how to evaluate highly scalable SOA component architecture. This is ideal for CIOs, CTOs, Development and Test Executives, and IT managers.
Planning A TestMaker Installation. TestMaker features test orchestration capabilities to run Selenium, Sahi, soapUI, and unit tests written in Java, Ruby, Python, PHP, and other languages in a Grid and Cloud environment. I write about the issues you may encounter installing the TestMaker platform.
Repurposing ThoughtWorks Twist Scripts As Load and Performance Tests. I really like ThoughtWorks Twist for building functional tests in an Agile process. This blog and screencast show how to rapidly find performance bottlenecks in your Web application using ThoughtWorks Twist with the PushToTest TestMaker Enterprise test automation framework.
4 Steps To Getting Started With The Open Source Test Engagement Model. I describe the problems you need to solve as a manager to get started with Open Source Testing in your organization.
Correlation Technology Finds The Root Cause Of Performance Bottlenecks. Use aspect-oriented programming (AOP) technology to surface memory leaks, thread deadlocks, and slow database queries in your Java Enterprise applications.
10 Agile Ways To Build and Test Rich Internet Applications (RIA). Shows how competing RIA technologies put the emphasis on test and deploy.
Oracle Forms Application Testing. Java Applet technology powers Oracle Forms and many Web applications. This blog shows how to install and use open source tools to test Oracle Forms applications.
Saving Your Organization From The Eventual Testing Meltdown of Using Record/Playback Solely. The Selenium project is caught between the world of proprietary test tool vendors and the software developer community. This blog talks about the tipping point.
Choosing Java Frameworks for Performance. A round-up of opinions on which technologies are best for building applications: lightweight and responsive, RIA, with high developer productivity.
Selenium 2: Using The API To Create Tests. A DZone Refcard we sponsored to explain how to build tests of Web applications using the new Selenium 2 APIs. For Selenium 1, I wrote another Refcard; click here.
Test Management Tools. A discussion I had with the Zephyr test management team on Agile testing.
Migrating From HP Mercury QTP To PushToTest TestMaker 6. HP QTP just can't deal with the thousands of new Web objects coming from Ajax-based applications. This blog and screencast shows how to migrate.
10 Tutorials To Learn TestMaker 6. TestMaker 6 is the easier way to surface performance bottlenecks and functional issues in Web, Rich Internet Application (RIA, using Ajax, Flex, Flash), Service Oriented Architecture (SOA), and Business Process Management (BPM) applications.
5 Easy Ways To Build Data-Driven Selenium, soapUI, Sahi Tests. This is an article on using the TestMaker Data Production Library (DPL) system as a simple and easy way to data-enable tests. A DPL does not require programming or scripting.
Open Source Testing (OST) Is The Solution To Modern Complexity. Thanks to management oversight, negligence, and greed, British Petroleum (BP) killed 11 people, injured 17 people, and dumped 4,900,000 barrels of oil into the Gulf of Mexico in 2010. David Brooks of the New York Times became an unlikely apologist for the disaster, citing the complexity of the oil drilling system.
Choosing automated software testing tools: Open source vs. proprietary. Colleen Fry's article from 2010 discusses how software testers decide which type of automated testing tool - open source, proprietary, or a combination - best meets their needs. We came a long way in 2011 toward achieving these goals.
All of my blogs are found here.
Your organization may have adopted an Agile software development methodology and forgotten about load and performance testing! In my experience this is pretty common. Between Scrum meetings, burn-down sessions, sprints, test-first, and user stories, many forms of testing - including load and performance testing, stress testing, and integration testing - can get lost. And it is normally not only your fault. Consider the following:
- The legacy proprietary test tools - HP LoadRunner, HP QTP, IBM Rational Tester, Microsoft VSTS - are hugely expensive. Organizations can't afford to equip developers and testers with their own licensed copies. These tool licenses are contrary to Agile testing, where developers and testers work side-by-side, building and testing.
- Many testers still cannot write test code. Agile developers write unit tests in high-level languages (Java, C#, PHP, Ruby). Testers need a code-less way to repurpose these tests into functional tests, load and performance tests, and production service monitors.
- Business managers need a code-less way to define the software release requirements criteria. Agile developers see Test Management tools (like HP Quality Center (QC)) as a needless extra burden on their software development effort. Agile developers are hugely attracted to Continuous Integration (CI) tools like Hudson, Jenkins, Cruise Control, and Bamboo. Business managers need an integrated CI and test platform to define requirements and see how close to 'shipping' their software is.
Registration is free! Click here to learn more and register now: