
Feed aggregator

Bug Battle Nearing Finish Line, Additional Bonus for Testers

uTest - Thu, 07/31/2014 - 20:32

The Olympics. The World Cup. All grand battles of human strength and wit must come to an end at some point, and the 2014 Summer Bug Battle is no different.

We’re nearing the finish line for our first bug competition in nearly four years, with just days left to get in your most impactful Desktop, Web and Mobile bug submissions from the testing tools covered on our Tool Reviews site!

Testers have just six days left, until Wednesday, August 6th. Only the best battlers will take home the glory, the respect, and over $1,000 in cash prizes for bugs that are not only the most crucial and impactful, but also part of well-written bug reports.

As an added bonus on top of the cash prizes, we have sweetened the pot even more for those who get their entries in by end of day on Sunday, August 3rd — you’ll be eligible for a bonus drawing for one of five uTest t-shirts! But only if you enter by Sunday.

Yes, you’ll be eligible for one of the sweet uTest t-shirts you see below, which Community Management colleague Andrew graciously models for us (banana not included).

The long, nobly fought battle is nearly over, so be sure to ENTER NOW!



Categories: Companies

The Protocol Complexity Matrix and what it means for your load testing

HP LoadRunner and Performance Center Blog - Thu, 07/31/2014 - 19:09

There are a few general rules that apply to protocols and load testing. One is that making the job of scripting easier comes at the cost of memory.

This is because a protocol that uses more memory per virtual user is less scalable: past a certain point, the load generator simply runs out of memory for additional users.


Keep reading to find out how you can easily understand the relationship between memory and how many users your load generator can run.
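To make that relationship concrete, here is a minimal back-of-the-envelope sketch (the per-user figures are invented for illustration; real footprints vary widely by protocol and script): the number of virtual users one load generator can run is roughly bounded by its available memory divided by the memory each virtual user's protocol consumes.

```java
// Hypothetical numbers, not LoadRunner's actual footprints: a lightweight
// protocol might need ~1 MB per virtual user, while a heavyweight
// GUI-level protocol can need tens of MB.
public class LoadGeneratorCapacity {

    /** Rough upper bound on concurrent virtual users for one load generator. */
    static long maxVirtualUsers(long availableMemoryMb, long memoryPerUserMb) {
        return availableMemoryMb / memoryPerUserMb;
    }

    public static void main(String[] args) {
        long availableMb = 8 * 1024; // 8 GB usable on the load generator
        System.out.println(maxVirtualUsers(availableMb, 1));  // lightweight protocol: 8192 users
        System.out.println(maxVirtualUsers(availableMb, 50)); // heavyweight protocol: 163 users
    }
}
```

The complexity matrix captures exactly this trade-off: protocols that make scripting easier tend to sit at the high-memory, low-user end of that division.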

Categories: Companies

Testing on the Toilet: Don't Put Logic in Tests

Google Testing Blog - Thu, 07/31/2014 - 18:59
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Programming languages give us a lot of expressive power. Concepts like operators and conditionals are important tools that allow us to write programs that handle a wide range of inputs. But this flexibility comes at the cost of increased complexity, which makes our programs harder to understand.

In tests, unlike in production code, simplicity is more important than flexibility. Most unit tests verify that a single, known input produces a single, known output. Tests can avoid complexity by stating their inputs and outputs directly rather than computing them. Otherwise it's easy for tests to develop their own bugs.

Let's take a look at a simple example. Does this test look correct to you?

@Test public void shouldNavigateToPhotosPage() {
  String baseUrl = "http://plus.google.com/";
  Navigator nav = new Navigator(baseUrl);
  assertEquals(baseUrl + "/u/0/photos", nav.getCurrentUrl());
}

The author is trying to avoid duplication by storing a shared prefix in a variable. Performing a single string concatenation doesn't seem too bad, but what happens if we simplify the test by inlining the variable?

@Test public void shouldNavigateToPhotosPage() {
  Navigator nav = new Navigator("http://plus.google.com/");
  assertEquals("http://plus.google.com//u/0/photos", nav.getCurrentUrl()); // Oops!
}

After eliminating the unnecessary computation from the test, the bug is obvious—we're expecting two slashes in the URL! This test will either fail or (even worse) incorrectly pass if the production code has the same bug. We never would have written this if we stated our inputs and outputs directly instead of trying to compute them. And this is a very simple example—when a test adds more operators or includes loops and conditionals, it becomes increasingly difficult to be confident that it is correct.

Another way of saying this is that, whereas production code describes a general strategy for computing outputs given inputs, tests are concrete examples of input/output pairs (where output might include side effects like verifying interactions with other classes). It's usually easy to tell whether an input/output pair is correct or not, even if the logic required to compute it is very complex. For instance, it's hard to picture the exact DOM that would be created by a JavaScript function for a given server response. So the ideal test for such a function would just compare against a string containing the expected output HTML.
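As a sketch of that ideal (the renderer and markup here are invented for illustration, not taken from the TotT episode): the function under test may loop and branch internally, but its test can still just compare the result against one literal expected string.

```java
// Hypothetical production-side function: builds a minimal HTML list.
// Its internals involve a loop, but a test for it needs no logic at all.
public class UserListRenderer {

    /** Renders the given user names as an HTML unordered list. */
    static String render(String... names) {
        StringBuilder sb = new StringBuilder("<ul>");
        for (String name : names) {
            sb.append("<li>").append(name).append("</li>");
        }
        return sb.append("</ul>").toString();
    }
}
```

A test would then simply assert that `render("Alice", "Bob")` equals the literal string `"<ul><li>Alice</li><li>Bob</li></ul>"`, with no loops or concatenation of its own.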

When tests do need their own logic, such logic should often be moved out of the test bodies and into utilities and helper functions. Since such helpers can get quite complex, it's usually a good idea for any nontrivial test utility to have its own tests.
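For instance, the URL-joining logic from the earlier example could move into a small helper (a hypothetical sketch, not code from the episode), so that each test states its expected URL as a literal while the joining rules are verified once, in the helper's own tests.

```java
// Hypothetical test utility: joins a base URL and a path with exactly one
// slash at the boundary, so individual tests never concatenate strings
// themselves. A nontrivial helper like this deserves its own tests.
public class UrlTestHelper {

    /** Joins base and path, normalizing to exactly one '/' between them. */
    static String join(String base, String path) {
        String b = base.endsWith("/") ? base.substring(0, base.length() - 1) : base;
        String p = path.startsWith("/") ? path.substring(1) : path;
        return b + "/" + p;
    }
}
```

A test for the helper itself would assert, for example, that `join("http://example.com/", "/u/0/photos")` yields `"http://example.com/u/0/photos"` whether or not either argument carries a slash.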

Categories: Blogs

Throwback Thursday: The 14.4k Modem

uTest - Thu, 07/31/2014 - 18:44

Every Thursday, we jump into the Throwback Thursday fray with a focus on technology from the past, like the 14.4k modem. These days, we get a little cranky when we can’t stream a two-hour HD movie from Netflix. When this happens to me, my internal dialog sounds a bit like: “How dare you, Internet, for making me watch this in standard definition! What is this, 1991?!”

We are, in fact, throwing it back to that exact year, when the 14.4k modem was released.

A dial-up modem, for those of you who have never owned/used/seen/heard one, was the analog way to connect to the web. The word modem stands for modulator-demodulator. According to Wikipedia, it “is a device that modulates an analog carrier signal to encode digital information and demodulates the signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used with any means of transmitting analog signals, from light emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into modulated electrical signal for transmission over telephone lines and demodulated by another modem at the receiver side to recover the digital data.”

If you were using a modem at home, it typically connected through your phone line, thus making it impossible to surf the web and talk to anyone at the same time. Surfing the web with a dial-up modem took dedication and lots of alone time. It was especially great when I was waiting forever for a web page to load and someone else in my house would pick up the phone and break the connection.

Then I would have to dial in to AOL again (the dominant dial-up Internet provider at that time) and listen to that symphony of technological wonder, the dial-up handshake.

Once connected, I’d have to reload the page and just wait. Again.

At the zenith of Internet connectivity via modems, some households invested in a second phone line so that you could actually talk to your friend while looking at the same horrid Geocities web page for Dave Matthews Band.

Modem speeds also got faster – 14.4k quickly bowed to 28.8k a few years later. Then came the 33.6k modem and, finally, the 56k modem. Modems still exist today (think cable modems, DSL modems, etc.) but the act of analog dial-up Internet access isn’t used by most people anymore, with the exception of remote or very rural areas.

If you’re feeling nostalgic and want to throw back even more, be sure to check out our entire past library of Throwback Thursdays, from odes to Sega Saturn, to the glory days of AOL Instant Messenger.

Categories: Companies

Agility in an Agile Enterprise

The Kalistick Blog - Thu, 07/31/2014 - 18:06

On Thursday, July 24th, our Scrum Master and Solution Architect Chris Littlejohns presented a webinar on best practices for development testing in Agile software development environments. The webinar provided high-level insight into some core Agile practices, defined the practices that optimize Agile, and showed how Agile teams can make use of automated software testing. Applying these testing practices greatly enhances Agile development and directly attacks common difficulties, such as the effort required to build in quality, by making that work more efficient. Overall, teams that apply automated testing practices are more likely to deliver high-quality code within their resources and budget. Greater visibility of software quality via testing also enables teams and stakeholders to address impending issues early in the development lifecycle, before they impact delivery commitments.

If you would like to watch the on-demand webinar, you can do so here.


The post Agility in an Agile Enterprise appeared first on Software Testing Blog.

Categories: Companies

Multi-Stage CI with Jenkins in an Embedded World

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steve Harris, SVP Products, CloudBees about a presentation given by Robert Martin of BMW at JUC Berlin.

Embedded systems development is an incredibly complex world. Robert (Robby) Martin of BMW spoke at JUC Berlin on the topic of Multi-Stage CI in an Embedded World (slides, video). Robby spent a lot of his career at Nokia prior to coming to the BMW Car IT team. While many embedded systems development and delivery principles are common between phones and cars, the complexity and supply chain issues for modern automobiles are much larger. For example, a modern BMW depends on over 100 million lines of code, much of which originates with external suppliers, each of which has its own culture and QA processes. Robby used an example scenario throughout his presentation, in which a development team consisting of 3 developers and a QA person produces a software component, which is then integrated with other components locally and rolled up for delivery as part of a global integration, which in turn must be installed and run as part of the overall product.

The magnifying effect of an error at an early stage being propagated and discovered at a later stage becomes obvious. Its impact is most clearly felt in the end-to-end "hang time" needed to deliver a one-line change into a production product. Measuring the hang time automatically and working to speed it up continuously is one of his key recommendations. Fast feedback and turnaround in the event of errors, and minimizing the number of commits within a change-triggered CI stage, are critical. Robby also clarified the difference between a proper change-triggered approach to CI and nightly integration, and the importance of using the former.

Robby described the multi-stage CI approach they're using, which is divided into four stages:
  1. DEV-CI - Single developer, max 5 minutes
  2. TEAM-CI - Single SW component, max 30 minutes
  3. VERTICAL-CI - Multiple SW components, max 30 minutes (e.g., camera system, nav system)
  4. SYSTEM-CI - System level, max 30 minutes (e.g., the car)
The first stage is triggered by a developer commit, and each subsequent stage is automatically triggered by the appropriate overall promotion criteria being met within the previous CI stage. Note how the duration, while minimal for developers, is still held to 30 minutes even at the later stages. Thus, feedback loops to the responsible team or developer are kept very short, even up to the product release at the end. This approach also encourages people to write tests, because it's dead obvious to them that better testing gets their changes to production more quickly, both individually and as a team, and lowers their pain.
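That promotion flow can be sketched as follows (an illustrative model using the stage names and time budgets from the talk; the code itself is this summary's invention, not BMW's tooling): a change climbs the stages in order and stops at the first failure, which is exactly where feedback is sent.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of multi-stage CI promotion. Stage names and time
// budgets come from the presentation; the promotion logic is a toy model.
public class MultiStageCi {

    // Stage name -> maximum allowed duration in minutes, in promotion order.
    static final Map<String, Integer> BUDGET_MINUTES = new LinkedHashMap<>();
    static {
        BUDGET_MINUTES.put("DEV-CI", 5);       // single developer
        BUDGET_MINUTES.put("TEAM-CI", 30);     // single SW component
        BUDGET_MINUTES.put("VERTICAL-CI", 30); // multiple SW components
        BUDGET_MINUTES.put("SYSTEM-CI", 30);   // system level: the car
    }

    /**
     * Walks the stages in order with the given pass/fail results and returns
     * the last stage the change was promoted through (null if DEV-CI failed).
     * Promotion stops at the first failure, so feedback reaches the
     * responsible developer or team as early as possible.
     */
    static String lastStageReached(boolean... stagePassed) {
        String lastPassed = null;
        int i = 0;
        for (String stage : BUDGET_MINUTES.keySet()) {
            if (i >= stagePassed.length || !stagePassed[i]) {
                return lastPassed; // failed (or not yet run): stop promoting
            }
            lastPassed = stage;
            i++;
        }
        return lastPassed; // all four stages passed: ready for release
    }
}
```

The design point is the ordering: because each stage only runs once the previous stage's criteria are met, a failure never burns the larger time budgets further down the chain.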

One problem confronting embedded systems developers is limited access to real hardware (and it is also a problem for mobile development, particularly in the Android world). Robby recommended using a hardware "farm" consisting of real and emulated hardware test setups, managed by multiple Jenkins masters. He also noted how CloudBees' Jenkins Operations Center would help make management of this type of setup simpler. In their setup, the DEV-CI stage does not actually test with hardware at all, and depending on availability and specifics, even the TEAM-CI stage may be taken up into VERTICAL-CI without actual hardware-based testing.

Robby's recommendations are worth noting:

  • Set up your integration chain by product, not by organizational structure
  • Measure the end-to-end "hang time" automatically, and continuously improve it (also key for management to understand the value of CI/CD)
  • Block problems at the source, but always as early as possible in the delivery process
  • After a developer commits, everything should be completely automated, including reports, metrics, release notes, etc
  • Make sure the hardware prototype requirements for proper CI are committed to by management as part of the overall program
  • Treat external suppliers like internal suppliers, as hard as that might be to make happen
  • Follow Martin Fowler's 10 practices of CI, and remember that "Commit to mainline daily" means the product - the car at BMW
Finally, it was fun to see how excited Robby was about the workflow features being introduced in Jenkins. If you watch his Berlin presentation and Jesse's workflow presentation from Boston JUC, you can really see why Jenkins CI workflow will be a big step forward for continuous delivery in complex environments and organizations.

-- Steven G.

Steven Harris is senior vice president of products at CloudBees. Follow Steve on Twitter.
Categories: Companies

HPC market continues to grow

Kloctalk - Klocwork - Thu, 07/31/2014 - 15:30

High performance computing continues to experience rising popularity as more organizations recognize the benefits offered by these systems.

Further verifying this trend, a recent report from market research firm IDC found that 33,577 HPC systems were shipped worldwide in the first quarter of this year, THE Journal noted. That marks a 0.4 percent increase from the same period in 2013. Despite this growth in units shipped, though, overall factory revenue dropped 9.6 percent year-over-year, totaling $2.3 billion.

As the news source explained, this dip is attributable to the growing popularity of cheaper systems. Revenue from the sale of high-end supercomputers ordered during this time fell to $580 million in the first quarter of 2014, a 32.7 percent drop year-over-year.

However, despite the relatively modest increase in units and drop in revenue, the IDC report suggests that the HPC market is poised for significant short- and mid-term growth, according to the news source. The high-end supercomputer market is projected to see a compound annual growth rate of 7.2 percent through 2018.

Low-end HPC systems – those with a cost below $100,000 – saw the most significant gains during the first quarter, up 11.4 percent. Systems in the $100,000 to $249,000 range grew 0.6 percent, while the $250,000 to $499,000 systems dipped 2.6 percent, the news source reported.

Growing applications
According to Earl Joseph, program vice president for technical computing at IDC, the projected growth for the HPC market is largely due to the expanding applications for these technologies.

"HPC technical server revenues are expected to grow at a healthy rate because of the crucial role they play in economic competitiveness as well as scientific progress," said Joseph, THE Journal reported. "As the global race toward exascale computing fuels the high end of the market, more small and medium-sized businesses and research organizations are exploiting HPC servers for advanced simulations and high performance data analysis."

Additionally, HPC solutions are becoming increasingly integral for research work at universities and other institutes of higher learning. For example, the University of California, Santa Cruz, recently invested in a new HPC system to supplement its existing Hyades supercomputer, which is used to perform complex astrophysics simulations.

"State-of-the-art computational resources have been pivotal in making UCSC one of the nation's leading centers for research in numerical astrophysics and planetary science," said Shawfeng Dong, associate project scientist and HPC system administrator at UCSC. "Hyades dramatically increases our ability to address some of the most fundamental scientific questions of our time."

Similarly, Oxford University recently added a hybrid scale-out network-attached storage system to support its Advanced Research Computing Centre, Campus Technology reported. The purpose of this addition was to increase the school's HPC power, making these resources available to Oxford University personnel on a wider basis.

This trend is likely to continue, further accelerating the HPC market's growth.

Categories: Companies

Case Study: Google’s Team Approach to Coverage

NCover - Code Coverage for .NET Developers - Thu, 07/31/2014 - 13:22

We spend our days (and nights, and really any time we have) developing quality, beautiful .NET applications. We pore over our code, tests and coverage to make sure it is good. Some of us do that on our own, while others are part of a larger network of teams with managers, developers and quality assurance members all along the way.

We all want to know that our code is good. We write tests to prove that and use coverage tools to show that we have strong tests. Rolling out a coverage process can feel pretty daunting whether you are in the same building or across the globe.

The team at Google recently gave us all a peek behind their curtain about implementing code coverage team-wide and its effects across the organization. (You can read the whole post here). We spliced it down into the Cliff’s Notes version with some key takeaways.

Their team’s mission was to collect coverage-related data and then develop and implement code coverage practices company-wide. To make it easy, they designed an opt-in system where engineers could enable two different types of coverage measurements for their projects: daily and per-commit. With daily coverage, Google ran all tests for the project, whereas with per-commit coverage they ran only the tests affected by the commit. The two measurements are independent, and many projects opted into both.

The feedback from Google engineers was overwhelmingly positive. The most loved feature they noted was surfacing the coverage information during code review time. This early surfacing of coverage had a statistically significant impact: their initial analysis suggests that it increased coverage by 10% (averaged across all commits).

Their process is ever-changing and growing. We will keep you posted on their activity along the way.

The post Case Study: Google’s Team Approach to Coverage appeared first on NCover.

Categories: Companies

The Benefits of Using TestTrack and Surround SCM: A Customer’s View

The Seapine View - Thu, 07/31/2014 - 12:00

A few weeks ago, I shared how Seapine customer Segue Technologies adds story point metrics in TestTrack, which was just one of a series of blog posts Segue did about their use of Seapine’s solution. The series is packed with great information; check it out if you missed it the first time around.

Segue software engineer Irma Azarian recently added another interesting article to the mix, discussing the benefits Segue has gained from using TestTrack and Surround SCM together:

Surround SCM and TestTrack Pro are part of the same suite and meant to work together fairly seamlessly. When using these two tools together, if a ticket is assigned to a developer, it can be reviewed in TestTrack to retrieve all details. After code changes are completed, they can be checked in and attached to the associated ticket. This process helps keep track of all the application changes and all the work done for each ticket. All the team members and the quality control team can review the details of the tickets and code before testing.

Azarian says using TestTrack and Surround SCM “prevents issues with redundant work requests or issues falling through the cracks,” and makes it much easier for Segue to provide great IT services to their many federal, commercial, and non-profit clients. Read the entire article here.

Have a great story about how you’re using TestTrack, Surround SCM, or any of our other solutions? Let us know in the comments, or give us a shout on Twitter!


Categories: Companies

Ranorex 5.1.1 and Ranorex 5.0.4 released

Ranorex - Thu, 07/31/2014 - 10:40
We are proud to announce that Ranorex 5.1.1 and Ranorex 5.0.4 have been released and are now available for download.

Ranorex 5.1.1 General changes/Features
  • Added support for Firefox 31
  • Added support for ARM64 compiled iOS apps
  • Removed the requirement to link the MediaPlayerFramework for instrumenting iOS apps
  • Further improved mobile web elements to make attribute values similar to those of desktop web elements
  • Links shown in test case descriptions of a report are now opened in a separate browser window
Please check out the release notes for more details about the changes in this release.

Download latest Ranorex version here.
(You can find a direct download link for the latest Ranorex version on the Ranorex Studio start page.) 

Ranorex 5.0.4 General changes/Features 
  • Added support for Firefox 31
Ranorex 5.0.4 release notes...
Categories: Companies

Continuous Delivery: Deliver Software Faster and with Lower Risk

Continuous Delivery is a methodology that allows you to deliver software faster and with lower risk. It is an extension of continuous integration, a development practice that has permeated organizations utilizing agile development.

Recently DZone conducted a survey of 500+ IT professionals to find out what they are doing regarding continuous delivery adoption and CloudBees was one of the research sponsors. We have summarized the DZone findings in an infographic.

Find out:
  • Most eye-opening statistic: The percentage of people that think they are following continuous delivery practices versus the percentage of people that actually are, according to the definition of continuous delivery
  • Who most typically provides production support: development, operations or DevOps
  • Which team is responsible for actual code deployment
  • How pervasive version control is for tracking IT configuration
  • The length of time it takes organizations from code commit to production deployment
  • Barriers to adopting continuous delivery (hint: they aren't technical ones)

View the infographic and learn about the current state of continuous delivery.
For more information, get the CloudBees whitepaper: The Business Value of Continuous Delivery.

Categories: Companies

Geek Choice Awards 2014

RebelLabs has started the annual Geek Choice Awards, and Jenkins was one of the 10 winners. See the page where they talk about Jenkins.

My favorite part is, to quote, "Jenkins has an almost laughably dominant position in the CI server segment", and "With 70% of the CI market on lockdown and showing an increasing rate of plugin development, Jenkins is undoubtably the most popular way to go with CI servers."

If you want to read more about it and the other 9 technologies that won, they have produced a beautifully formatted PDF for you to read.

Categories: Open Source

Sign Up for the First-Ever Appium Roadshow on August 20th in New York City

Sauce Labs - Wed, 07/30/2014 - 18:00

We don’t know if you heard, but mobile is kind of a big deal.

Naturally, Appium – the only open source, cross-platform test automation tool for native, hybrid, and mobile web apps – emerged out of the need to Test All The (Mobile) Things.  Last May, battle-tested Appium 1.0 was released, and now this Appium show is hitting the road!

Details and ticket links below. Hope to see you in New York!


Sign Up for the First-Ever Appium Roadshow on August 20th

Appium Roadshow – NYC is a two-part, day-long event held on Wednesday, August 20 at Projective Space – LES in Manhattan’s Lower East Side.

Part 1 – Appium in the Wild

8:30 AM – 1:00 PM – Free

The morning session will showcase presentations from Gilt Groupe, Sharecare, Softcrylic, and Sauce Labs. Topics will cover real-world examples, lessons learned, and best practices in mobile app test automation using Appium. Featured speakers include:

  • Matthew Edwards – Mobile Automation Lead, Aquent
  • Daniel Gempesaw – Software Testing Architect, Sharecare
  • Matt Isaacs – Engineer, Gilt Groupe
  • Jonathan Lipps – Director of Ecosystem and Integrations, Sauce Labs
  • Sundar Sritharan – Delivery Manager, Softcrylic

This event is free. Breakfast and lunch included. Reserve your seat now – register here.

Part 2 – Appium Workshop

1:30 PM – 5:30 PM – $100

Matthew Edwards, a leading contributor to the Appium project, will provide a hands-on workshop to help you kick-start your Appium tests. He’ll talk you through how to set up the environment needed for native iOS and Android automation with Ruby. You’ll then download and configure the necessary tools to enable test writing. Then, Matthew will demonstrate how to spin up an Appium server and run a test.

This event is limited to just 40 participants. Reserve your seat now – register here.


Categories: Companies

Outnumbered, Again

Sonatype Blog - Wed, 07/30/2014 - 17:36
I remember it clearly. Sitting down for breakfast, I opened the Sydney Morning Herald to see the latest headlines in Australia for the day. As I shuffled through the paper, I finally landed upon the Technology section and then noticed pages and pages of “help wanted” ads.

To read more, visit our blog at
Categories: Companies

Connect With Your Favorite Testers With New Profile Features

uTest - Wed, 07/30/2014 - 15:10

Since the launch of the new uTest in early May, we haven’t stopped building new features and functionality that add value to your software testing lives. We know that you’re busy, and that keeping on top of the latest news and information in the testing world can be a challenge. Therefore, we’re happy to launch two new features today: Follow Me and Activity Feed.

The Follow Me feature is located on all uTester profiles, allowing you to easily get updates from your favorite uTesters at the click of a button and view the Activity Feed of their latest contributions to blog posts, tool reviews, and more.

Following your favorite uTester is easy — just look for the blue Follow Me button in the lower right corner of their banner image. With one click, you will now receive updates every time that uTester posts a new comment, pens a blog post or University course, or reviews a new tool. Don’t know the profile URL of the person you want to follow? Find it here.

The Activity Feed is your one stop to see the latest updates from the people you’re following. Your activity feed is sortable by blog comment, blog post, University course, University comment, and tool review, so you can control what types of updates you see.


The Activity Feed page is also where you can view and manage your follower list. New users are added at the top of the list, so you can identify your newest followers. We’ve also made it as easy as possible to unfollow or block users within the same window.


Not sure where to start with the new Follow Me feature? Here’s a small sample of uTesters to get you started!

Remember, you can search for any uTest profile on the search page. Additionally, review more or contribute your own list of follow-worthy testers on the forums!

Categories: Companies

Despite benefits, open source is not ideal for every situation

Kloctalk - Klocwork - Wed, 07/30/2014 - 15:00

Without a doubt, open source software has seen tremendous development in recent years. No longer relegated to the margins, open source now features prominently in many companies' operations, rapidly replacing proprietary solutions. As an increasing number of decision-makers have come to discover, open source offers numerous benefits, from reduced costs to greater flexibility.

Yet for all of the advantages offered by open source solutions, it is important for everyone, including open source proponents, to realize that this approach is not always ideal. However, as Smart Company contributor Andrew Sadauskas recently highlighted, sometimes open source supporters overlook this important detail.

Open source often, not always
Sadauskas illustrated this point by highlighting a review he wrote a few months ago for the open source operating system Kubuntu. His assessment was a mixture of both praise and criticism. However, the writer noted that even though the review was far from negative overall, for months thereafter he received argumentative comments from open source proponents, debating Sadauskas and his opinion that this particular open source offering was not for everyone.

This should not be seen as a particularly extreme view. Open source in general is an incredibly powerful approach to IT with countless potential applications, but it has not reached the point of complete saturation. This is even more the case when it comes to any given open source project or approach. Yet as Sadauskas explained, many open source advocates contested this notion, arguing that open source is always the answer.

This is a misguided and potentially damaging notion, the writer asserted.

"[T]here are many hidden costs in business that stem from using the wrong tech tool for the job, including lost productivity, the cost of IT staff for the initial setup and installation, maintenance costs, IT support costs and lost business opportunities," Sadauskas wrote.

"[T]he harsh truth for advocates is the open source option is not always the best option in the market, or the best choice for every business," he concluded.

Organizations must realize that open source can be just as risky as commercial software when it comes to initial setup and maintenance costs, or to potential losses of productivity when an issue occurs.

A careful approach
This does not mean that the open source movement as a whole has become overly ambitious or misguided. There is still good reason to believe that open source will eventually be the de facto solution for the vast majority of corporate IT needs and will also become commonplace among consumers. This trend is already well underway and most observers expect it to accelerate in the coming years.

This is true even despite the recent security issues that have gained prominence in the open source community. While Heartbleed, the OpenSSL vulnerability, is the most infamous of these, there have been a number of smaller but still significant flaws discovered in recent weeks. These issues have caused some to doubt the viability of open source's cybersecurity capabilities, and therefore its potential in numerous IT areas.

One of the most widely known concepts and advantages surrounding open source is the idea that with enough eyes on a given project, all bugs are shallow. OpenSSL was a unique case because, despite (or, arguably, due to) its widespread use, no one truly looked closely at the code itself to ensure its reliability. Everyone instead assumed that this must have been done by others, considering its widespread popularity.

To protect organizations from liability, it pays to adopt open source policies and tools that help identify where open source exists and the potential risks involved. Open source scanning is an effective way to discover both the known and unknown code within an organization and a comprehensive governance platform helps track, manage, and update packages so teams know exactly where issues may lie.

Without a doubt, though, Heartbleed further emphasizes the dual notions that open source solutions are not perfectly applicable in every situation and, furthermore, must be handled carefully and with best practices in every instance.

Categories: Companies

SSL Connectivity for All Central Repository Users Underway

Sonatype Blog - Wed, 07/30/2014 - 12:23
We’ve had quite a bit of public scrutiny recently over how we’ve chosen to provide SSL access to Central for the last two years. At Sonatype, we have a history of investments in the Maven Central community, all of which are focused on improving the quality of the contents, increasing reliability...

To read more, visit our blog at
Categories: Companies

KISS method for bug reporting

Testlio - Community of testers - Wed, 07/30/2014 - 10:26

It’s essential to keep bug reports as clean and easy to read as possible. I love to use the KISS method – Keep It Short, Simple, Stupid, Straightforward – name it as you like :)
The point is that reporting bugs should be simple for testers and easy to read for developers.

Keep in mind for bug report

Short – keep words and sentences short; use as many words as needed and as few as possible.
Simple – reporting bugs shouldn’t require difficult words or terms; use plain words and simple sentences instead.
Stupid – you are one tester among many; make it easy for everybody to understand what you mean.
Straightforward – get to the point!

Content in the bug report

At Testlio we help testers write proper bug reports through guidelines built into the report template. Here are some hints on how I write bug reports.

  • Bug title
  • The title has to be as specific as possible and should include the section where the problem occurred. It’s not best practice to use the actual result as the bug title. You could write the title after entering all the other information about the bug.
    [Profile] Cannot change profile picture

  • Environments
  • Include all background details about your testing – app version, OS version, browser version, internet connection.

  • Steps to reproduce
  • It’s important to get to the core of the problem quickly. Use the ‘>’ symbol to show navigation from one step to the next.

    If logging in is incidental to the issue, there’s no reason to write it out as a separate step – it makes the report too long. I include “log in” as a step only if the problem is specific to logging in or out.
    Example how NOT TO write:
    Steps to reproduce:
    1. Open app
    2. Log in with valid credentials
    3. Tap to Settings
    4. On the displayed options select Profile
    5. Tap on Edit on the right top corner
    6. Tap on profile image
    7. Select any image
    8. Save changes

    Example how to write:
    Steps to reproduce:
    1. Settings > Profile > Edit
    2. Change Profile picture > Save

  • Expected Result
  • One sentence describing what you expected the functionality to do.

  • Actual Result
  • One sentence describing what the functionality did instead.

  • Attach a file
  • Regarding attachments, logs and videos, I believe we are all familiar with that old saying – A picture is worth a thousand words. That is 100% true.
    A bug report should include a screenshot. If you are testing in a web browser, it’s best to take the screenshot with the URL displayed; in a mobile app, just take a screenshot of the whole view. This is important information for the developer.
    A crash bug has to include a screenshot of the crash report. Crashes are rarely reproducible, and the crash report contains important details for developers.

Remember: if an issue is reproducible, then it’s something that is fixable.
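The fields above lend themselves to a simple template. Here is a minimal sketch in Python that assembles a report in roughly this format – the field names and helper functions are hypothetical illustrations, not part of any Testlio tool:

```python
# Sketch of a bug report builder following the structure described above.
# All names here (format_steps, build_report) are hypothetical examples.

def format_steps(*navigation):
    """Join navigation points with ' > ', as suggested for steps to reproduce."""
    return " > ".join(navigation)

def build_report(section, title, environment, steps, expected, actual):
    """Assemble a plain-text bug report with the fields discussed above."""
    lines = [
        f"[{section}] {title}",          # section tag + specific title
        f"Environment: {environment}",   # app/OS/browser/connection details
        "Steps to reproduce:",
    ]
    # Number each condensed step, starting from 1.
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines += [
        f"Expected result: {expected}",
        f"Actual result: {actual}",
    ]
    return "\n".join(lines)

report = build_report(
    section="Profile",
    title="Cannot change profile picture",
    environment="App 2.1.0, iOS 7.1.2, Wi-Fi",
    steps=[
        format_steps("Settings", "Profile", "Edit"),
        format_steps("Change profile picture", "Save"),
    ],
    expected="Profile picture is updated.",
    actual="Profile picture does not change after saving.",
)
print(report)
```

The point of the sketch is the structure, not the code: each field is short, the steps are condensed navigation paths rather than exhaustive instructions, and the whole report fits on one screen.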

Written by Kristi – Account Manager at Testlio

Categories: Companies

StormRunner Load bringing you fast and simple performance testing on demand

HP LoadRunner and Performance Center Blog - Wed, 07/30/2014 - 05:23

When you think of the word “Cloud”, chances are that you don’t immediately think of weather. (This is an industry blog, after all.) Your mind most likely turns to the ways you can save money by moving to the cloud.


Now what do you think about when I say the word “Storm”? (Now I bet you are thinking about weather.) As of July 24, my hope is that you will think of performance testing when I say “storm”. Keep reading to find out why you should look to the cloud for the latest in performance testing.





Categories: Companies
