
Feed aggregator

Refurbishing a Mail Slot and Doorbell

Radyology - Ben Rady - Fri, 09/26/2014 - 15:01
When I moved into my house, the mailbox was in pretty sorry shape. It was corroded, and the mail flap was stuck open. On top of that, it had an integrated doorbell that didn't work. Lastly, the entire border of... Ben Rady
Categories: Blogs

Code Coverage and the Development Process

NCover - Code Coverage for .NET Developers - Fri, 09/26/2014 - 13:03

You may be thinking, geez NCover, you sure write a lot about code coverage. Guilty. Yes we do. We love it. We are happy to assist the .NET development community in building better applications and quality code. Since we are always talking about it, we get asked a lot of questions about it.

One of the most frequent is: why do I need code coverage in my development process?

Code coverage is part of the code quality umbrella. Code quality is one of the most important concerns of any software development organization. You want to be able to see whether you have a predictable code base, and to make changes without upsetting the apple cart. Higher-quality code should result in happier clients and customers.

Some groups find it more deeply ingrained in their methods. For example, code coverage is most valuable as a feedback mechanism for test-driven or agile development methodologies. Both of these methods rely on a development feedback loop that promotes the addition of features while maintaining a predictable quality level. Rapid development and technical prowess will only yield breakthrough results if customers can rely on the quality of your product.

Covering your code also reduces the cost of debugging deployed code. Once code has shipped, the cost of fixing a bug increases, on average, by 1,000%. While cost savings may not be on every developer’s mind, they are an important factor in continuing to build quality code and keeping customers happy and coming back.

Do you have additional questions about integrating code coverage into your development process? Set up a time to chat with one of our team members to see if it is a fit for you.

The post Code Coverage and the Development Process appeared first on NCover.

Categories: Companies

CVE-2014-6271 impact on Jenkins

I suspect many of you have been impacted by CVE-2014-6271 (aka the "shellshock" bash vulnerability). We had our share of updates to do for various *.jenkins-ci.org servers.

Java application servers in general (including the one that ships in Jenkins) do not fork off processes the way Apache does to serve requests, so the kind of CGI attacks you see on Apache do not apply. We are currently unaware of any vulnerabilities in Jenkins related to CVE-2014-6271, and have no plan to issue a patch for it.

That said, we did come up with one possible way attackers could exploit a vulnerable bash through Jenkins that you might want to be aware of.

When a build is parameterized, parameters are passed to the processes Jenkins launches as environment variables. So if you have a shell build step (which uses bash by default), and Eve has BUILD permission but not CONFIGURE permission, then Eve can exploit this vulnerability by carefully crafting parameter values and have bash run arbitrary processes on the slave that runs the build.
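
For illustration, here is a minimal sketch of the widely circulated shellshock check, written in Python so it can drive bash via subprocess; the environment variable stands in for a crafted build parameter, and the variable name and payload are arbitrary. Run it only against a bash you own.

    # Minimal sketch of the CVE-2014-6271 check. The environment variable below
    # stands in for a crafted build parameter: it carries a bash function definition
    # with trailing commands that a vulnerable bash will execute when it starts up.
    import subprocess

    def bash_is_vulnerable():
        crafted = {"PARAM": "() { :; }; echo VULNERABLE"}
        result = subprocess.run(
            ["/bin/bash", "-c", "echo parameter received"],
            env=crafted, capture_output=True, text=True,
        )
        # On a patched bash the trailing command never runs.
        return "VULNERABLE" in result.stdout

    if __name__ == "__main__":
        print("vulnerable" if bash_is_vulnerable() else "looks patched")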

In most such scenarios, Eve would have to be an authenticated user on Jenkins. Jenkins also keeps a record of who triggered which build with what parameters, so there's an audit trail. But if your Jenkins fits this description, hopefully this serves as one more reason to update your bash.

Finally, to get notified of future security advisories from Jenkins, see this Wiki page.

Categories: Open Source

Bash 2014 – This Is Not a Party

Sonatype Blog - Thu, 09/25/2014 - 22:58
I can honestly say that although referred to by the media as Shellshocked, I am neither shocked nor awed. I can’t say that I am a fan of the latest glorification of bugs like Heartbleed and Shellshock in a fashion similar to tropical storms, but if it gets more people to pay attention to the...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Don't Count on It

Hiccupps - James Thomas - Thu, 09/25/2014 - 21:47
Spoiler alert: there is no happy ending here.

Often a blog post or article will talk about some inspiration gained from outside of testing which was applied successfully to the author's testing and yielded useful results. Those are great posts to read  ... but this isn't one of them.

This one starts with inspiration from outside of testing followed by research, discussion, criticism and thought experiment but ultimately no application. The yield here is the documentation of all of that. Hopefully it's useful and still a good read.
What was the inspiration?
I've long had the notion - sadly dormant and largely sidelined - that I'd like to look into the use of statistics in testing. (One of my earliest posts here was on entropy.) As testers we talk a lot about such things as the scientific method and about data, about running experiments, about crafting experiments such that they generate usable and valuable data, about making decisions on where to test next based on that data, about reporting on those experiments and about the information we can give to stakeholders. Statistics has the potential to help us in all of those areas.

I quoted Stuart Hunter recently:
the purpose of statistics is to analyse the data in such a way as to lead to new conjecture.
Limiting ourselves to basic counts and averages - as I see us doing and do myself - is like a carpenter who only uses a tape measure and his bare hands. Statistics gives us not only new avenues to explore, but a lexicon for talking about the likelihood of some result and also about the degree of confidence we can have in the likelihood itself. It provides tools for helping us to calculate those values and, importantly, is very explicit about the assumptions that any particular set of tools requires.

Frequently, it will also provide methods to permit an assumption to be violated but at some other cost - for example, penalising the confidence we have in a result - or to calculate which parameters of an experiment need to be changed by what amount in order to get a level of confidence that would be acceptable - say, by collecting twice as much data. It can be used to organise, summarise, investigate and extrapolate.

And then I read this sentence:
Ask any question starting with 'how many...?' and capture-recapture stands a good chance of supplying an answer.
It's the last sentence in an article in Significance, the Royal Statistical Society's magazine, that's sufficiently light on the statistics that I could actually read it all the way to the end [Amoros]. The piece describes how, in 1783, Laplace used a reasonably simple statistical technique for estimating the population of France based on a couple of sets of incomplete data and some simplifying assumptions. Essentially the same technique is these days called Capture-Recapture (CR) and is used to estimate populations of all kinds, from the number of fish in a pool to the incidence of cancer in a city.

I wondered whether there was a possibility of estimating bug counts using the technique and, if there was, under what assumptions and conditions it might apply.
Why would you want to estimate bug numbers?
Good question. Among the many challenges in software testing are deciding where to apply test effort and when to stop applying that effort. Many factors help us to make those decisions, including our gut, the testing we've already done in this round, our experience, our expertise, information from users, information about the users, information from developers, information about the developers, historical information about the system under test and so on. While bare bug counts might not be that interesting, a relative bug "density" could be, in addition to other information.

Imagine that you could point to an analysis of the new features in your latest release, and see that, with a high degree of (statistical) confidence, a couple of them were more likely to contain more bugs than the others, proportionate to some factor of interest (such as their perceived size or complexity or potential to interact with other parts of the product). Mightn't that be an interesting input to your decision making? How about if the statistics told you that (with low confidence) there wasn't much to choose between the features, but that confidence in the estimates could be increased with a defined amount of investigative effort? Could that help your planning?
Err, OK. How might it work?
CR basically works with two samples of items from a population and uses a ratio calculated from those samples to estimate the size of the population. Let's say we'd like to know how many fish there are in a pond - that's the population we're trying to estimate. We will catch fish - the items - in the pond on two occasions - our two samples. On the first occasion we'll mark the fish we catch (perhaps with a tag) and release them back. On the second occasion we'll count how many of the marked fish we recaught:
Sample 1: 50 fish caught and marked
Sample 2: 40 fish caught, of which 10 are marked
Which gives an estimated number of fish in the pool of 40 * 50/10, i.e. 200. This accords reasonably well with naive intuition: we know there are 50 marked fish; we caught 20% of them in the second sample; the second sample was size 40 so we estimate that 40 is 20% of the total population, which would be 200.

In Laplace's work, pre-existing data was used instead of physical captures. The same thing is still done today in, for example, epidemiological studies. [Corrao] attempts to estimate the number of people with alcohol problems in the Voghera region of Italy using lists such as the members of the local Alcoholics Anonymous (AA) group and people discharged from the area's hospital with an alcohol-related condition. The lists form the samples but there is no capture and no marking. Instead we identify which individuals are the same in both data sets by matching on names, dates of birth and so on. Once we have that, we can use the same basic technique:
AA: 50 people
hospital: 40 people, of which 10 also attended AA
and now the calculation is that the estimated number of people in the region with alcohol problems is, as before, 40 * 50/10.

If it could work, the analogy for bugs might be something like this: two periods of testing on the same system (or feature or whatever) generate two sets of bugs reports. These are our samples; bugs which are considered duplicates across the two periods are our captured items and so we have:
Period 1: 50 bugs found
Period 2: 40 bugs found, of which 10 are duplicates of those from Period 1
giving 40 * 50/10 or 200 bugs as the estimate of the number of bugs to be found in the system under test. Note that this is not an estimate of undiscovered bugs, but the total including those already found.
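
As a minimal sketch (in Python, using the figures from the walkthrough above), the naive estimate is nothing more than this ratio calculation:

    # Naive capture-recapture estimate of a population size: n1 items in the first
    # sample, n2 in the second, m observed in both samples.
    def naive_cr_estimate(n1, n2, m):
        if m == 0:
            raise ValueError("no overlap between samples: the naive estimate is undefined")
        return n1 * n2 / m

    # Fish: 50 marked, 40 recaught of which 10 marked. Bugs: 50 found in period 1,
    # 40 in period 2, of which 10 are duplicates. Both give an estimated population of 200.
    print(naive_cr_estimate(50, 40, 10))  # 200.0
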
Seems straightforward enough, but ...
You're right to be doubtful. There are a bunch of assumptions imposed by the basic technique itself:
  • capture and recapture (or the data sets) are independent
  • all items have the same probability of being captured
  • we can identify instances of the same item in both samples
  • there is no change to the population during the investigation
It's not hard to find other questions or concerns either. With your tester head on, you're probably already wondering about that estimation formula:
  • if the two samples are identical does the estimate say that we have captured the entire population? (Using the same formula as above, with N bugs found in both samples, we'd have N*N/N = N bugs estimated to be found, i.e. we estimate we've found them all.) Under what realistic circumstances would we be prepared to believe that? With what degree of confidence? 
  • if the two samples don't overlap at all the formula results in a division by zero: N*M/0. In statistics this is generally dealt with by invoking a more sophisticated estimator (the formula), such as a modified Lincoln-Petersen model [WLF448] - see the sketch after this list.
  • is the method valid when samples are small or even zero? [Barnard], applying CR to software inspection, throws away data where 0 or 1 defects are found. [WLF448] says that the Lincoln-Petersen model can apply here too.
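
On the zero-overlap point, one widely used adjustment, often called the Chapman estimator, stays finite even when the samples share no items. A sketch, following the same conventions as the snippet above:

    # Chapman's adjustment to the naive estimator: defined even when m == 0,
    # and less biased when the samples are small.
    def chapman_estimate(n1, n2, m):
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    print(chapman_estimate(50, 40, 10))  # ~189.1, versus 200 from the naive formula
    print(chapman_estimate(50, 40, 0))   # 2090.0: finite, but highly uncertain
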
We can come up with issues of variability:
  • can we account for different sampling conditions? When catching fish, perhaps the nets are bigger for the second sample, or the water is rougher; in the case of bugs, perhaps the testers have different approaches, skill levels, domain knowledge and so on. 
  • can the method account for variation in opportunity to sample e.g. short time spent on sample 1 but large time spent on sample 2?
  • can attributes of the items be taken into account? For example some bugs are much more important than others. 
  • can different "kinds" of items be collected in a single sample? Is it right that "CR can only work with one kind of bug at a time. Stats on recapture of a particular fish says nothing about the population of snails." [Twitter]
There is discussion of these kinds of issues in the CR literature. The state of the art in CR is more advanced than the simplistic walk-through I've given here. More than two samples can be combined (e.g. in the [Corrao] paper I mentioned above there are five) and, as we've seen already, more sophisticated estimators used. [Tilling] talks about techniques for overcoming violation of the methodological assumptions, such as stratification of the samples (for example, looking for different "kinds" of bugs) and notes that appropriate experimental design can reduce bias and improve estimate accuracy. [Barnard] and [Miller] discuss models which attempt to account for variability in the detection probability (to permit different kinds of bugs to be found with different probabilities) and detection capability (to permit different testers to have different probabilities of finding bugs).

I didn't come across references which said that sampling methodology had to be identical in each capture, although there is the underlying assumption that the probability of any item being captured is equal. I found examples in which the sampling method was undoubtedly different - although this was not part of the experimental design - but the analysis proceeded, e.g. [Nichols] where external factors - a raccoon interfering with the vole traps used in the study - caused some data to be discarded. Further, epidemiological studies frequently use data collected by disparate methods for reasons other than CR.

My understanding is that the items captured need only be as homogeneous as the study context requires. For instance, in the [Corrao] study multiple different alcohol problems were identified as relevant to the hospital sample. This was an experimental design decision. Increasing or decreasing the set would likely change the results. It's up to the experimenter to be clear about what they want to study, the way they are going to approach sampling it, and the strength of claim they can make based on that. More specific sampling may mean a less general claim. Statistics, like testing, is a sapient activity where judgement and intelligence are critical.

In addition to variability in the methodology, there is the question of variability - or uncertainty, or error due to the probabilistic nature of the techniques - in the estimates produced by it. As one correspondent put it "[I] suspect too many unknown [variables] to accurately estimate unknown population" [Twitter]. There's a very good chance this is true in the general case. The level of statistical error around the numbers might be sufficiently high to make them of limited utility. There are established techniques in statistics for estimating this kind of number. Using one online calculator and the figures in my examples above suggests a sample error of up to 14%.
And specifically for testing?
Good point, let's consider a few questions directly related to the application of CR in testing, some of which came from a thread I started on [Twitter]:
  • "it is necessary to assume that finding bugs is anything like finding animals." [Twitter]
  • "[CR] assumes same methods find same bugs, and that all bugs can be found via same method?" [Twitter] 
  • "counting defects has very little practical use or pragmatic meaning." [Twitter]
I don't think it is true that we need to assume that finding bugs and animals are "anything like" each other: the epidemiological literature is good evidence of that. I also don't think it's true that CR assumes that all bugs can be found by the same method: again, the epidemiological application does not make this assumption.

We've already discussed why we might be prepared to consider estimating defect counts. I'd agree that, in general, counting bugs is of little practical use, but I don't think I would advocate using the technique for finding counts as an end in itself, only as additional evidence for decision making, treated with appropriate caution. I'm interested in the idea that we might be able to derive some numbers from existing data, or from data that's gathered in the process of performing testing anyway.

It's not hard to think of other potential concerns. In testing we'll often choose not to repeat tests and the second sampling could be seen as repetition. But, is it the case that different testers in the same area with the same mission are repeating tests? Is it even the case that the same tester with the same mission is necessarily repeating a test? But then again, can you afford two testers with the same mission in the same area for gathering CR data when there are other areas to be tested?

If testers were being used to sample twice in the same area we might worry that their experience from the first sampling would alter how they approached the second. Certainly, they will have prior knowledge - and perhaps assumptions - which could influence how they tested in the second sample. Protection against this could include using different testers for each sample, or deliberately assigning testers to do different things in each sample.

In order to make the most of CR we have to ensure that the sampling has the best chance of choosing randomly from the whole of the population of interest. If the two samples are set up to choose from only a subset of the population (e.g. a net with a 1m pole in a pool 3m deep can never catch fish in two-thirds of the pool) then the estimate will be only of the subset population. Cast this back to testing: in order to give ourselves the best chance of sampling widely we'd need different techniques, testers and so on. But this is likely to either increase cost or introduce data sparsity.

Can we agree on what a bug is anyway, and does it matter here? Is it important to try to distinguish fault from failure when counting? [Barnard] applies CR to Fagan inspections. The definition of this on Wikipedia describes a defect as a requirement not satisfied. If this is the same definition used in [Barnard] then it would appear to exclude the possibility of missing or incorrect requirements. In [LSE] there's no definition, but implicitly it seems broader in their attempts to find bugs in code review. Again, it's on the experimenter to be clear about what they're capturing and hence how they interpret the results and what they're claiming to estimate. Unless the experiment collects specific metadata accurately, the results won't be able to distinguish different classes of issue, which may make them weaker. And, of course, more data is likely to mean higher cost.

Bugs are subject to the relative rule; they are not a tangible thing, but only exist in relation to some person at some time, and possibly other factors. In this sense they are somewhat like self-reported levels of happiness found in psychological tests such as the Subjective Happiness Scale and less like a decision about whether the thing in the net is a fish or the patient in question has cancerous cells. The underlying variability of the construct will contribute to uncertainty in any analysis built on top of data about that kind of construct.

An interesting suggestion from [Klas] is that CR "can only be applied for controlling testing activities, not for planning them, because information collected during the current testing activity is required for the estimates." I'm not sure that the controlling/planning distinction would bother most context-driven testers in practice, but it's a valid point that until some current data is collected the method necessarily provides less value.
So, has anyone made it work?
[Petersson] suggests that (at the time of his writing) there weren't many industrial studies of capture-recapture. Worryingly, [Miller] reviews a study that concludes that four or five samples would be needed (in software inspections rather than testing) to show value from any of the estimators considered, although he does go on to point out shortcomings in the approach used. However, this might tie in with the thought above that there are too many unknown variables - getting more data can help to ameliorate that issue, but at the cost of gathering that extra data.

As noted earlier, [Barnard] claims to have successfully applied the approach to defect detection using software inspections, giving evidence of an improvement in the prediction of the need for reinspection over methods based purely on the feelings of the inspectors. The development methodology here (on software for the Space Shuttle) is pretty formal and the data used is historical, so it provides a good basis for evaluation. This kind of inspection appears to exclude the possibility of identifying missing or incorrect requirements. In principle I don't think this affects the result - the Fagan inspection has a very restrictive oracle and so can only be expected to identify certain kinds of defects. With broader oracles, the scope for defect detection could presumably increase.

Have you tried to make it work yourself?
Well, I've been wondering about what might make a reasonable experiment, ideally with some validation of the estimates produced. I have ready access to several historical data sources that list defects and are of a reasonable size. They were collected with no intention of being used in CR and so are compromised in various ways, but this is not so different from the situation that epidemiological studies encounter. The sources include:
  • Session-Based Test Management reports
  • customer support tickets
  • bug database
  • informal and ad hoc test and experiment reports
Some thoughts:

When CR is used in biology, it is not generally possible to capture the same item twice in the same sample. In the epidemiological case it is possible, for example if a patient has presented multiple times with different identities. In the bug case, it is perfectly possible to see the same issue over and over. The relative frequency of observation might be an interesting aspect of any analysis, but one that CR will not exploit, even if the data existed.

We could consider randomly sub-dividing this report data into sets to provide multiple artificial samples. If we did that, we might initially think that we could use meta data that flags some bug reports as duplicates of one another. However, it will often be the case that bugs are observed but not reported because they're a duplicate and so no data will be gathered.
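
To make that concrete, here is a rough sketch (Python; the report records and the duplicate-of field are entirely hypothetical) of splitting one pool of bug reports into two artificial samples and counting cross-sample duplicates, subject to exactly the caveat above about duplicates that never get filed:

    import random

    # Hypothetical bug reports: an id plus, optionally, the id of the report it duplicates.
    reports = [
        {"id": 1, "dup_of": None}, {"id": 2, "dup_of": None}, {"id": 3, "dup_of": 1},
        {"id": 4, "dup_of": None}, {"id": 5, "dup_of": 2},    {"id": 6, "dup_of": None},
    ]

    random.shuffle(reports)
    half = len(reports) // 2
    sample1, sample2 = reports[:half], reports[half:]
    ids1 = {r["id"] for r in sample1}
    ids2 = {r["id"] for r in sample2}

    # A "recapture" is a report in one sample marked as a duplicate of a report in the other.
    recaptured = sum(r["dup_of"] in ids1 for r in sample2) + sum(r["dup_of"] in ids2 for r in sample1)

    if recaptured:
        print("naive estimate:", len(sample1) * len(sample2) / recaptured)
    else:
        print("no cross-sample duplicates; the naive estimator is undefined")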

Further, if we use only the observed bugs - or perhaps, only the observed and recorded bugs - then we stand a good chance of making biased inferences [Tilling]. All sorts of factors determine whether observed issues will end up in whatever bug tracking tool(s) are being used.

Our session reports from SBTM are more likely to contain notes of existing issues encountered, so it might be more tractable to try to mine them. Other data collected in those reports would permit partitioning into "areas" of the product for consistency of comparison, at the expense of making data sparser. Typically, we won't run sessions with the same charter frequently, so objections about the probability of finding issues would be raised. However, we have sometimes run "all-hands" testing where we set up a single high-level mission and share a reporting page and test in areas guided by what others are doing.

Given the way my company operates, we're frequently working on daily builds. Effectively we're changing the environment in which our population lives and that churns the population, violating one of the assumptions of CR. To make any experiment more valid we'd probably need to try to use consistent builds.

To consider one scenario: I might like to use data from reports generated in the late stages of a release cycle to estimate relative numbers (or maybe density, as above) of issues found in parts of the software, and then compare that to the relative figures obtained from our customer support tickets. If they match I might feel justified in having some faith in the predictive power of CR (for my application, tested the way we test it and so on). If they don't match, I might start casting around for reasons why. But, of course, this is confirmation bias. Any reasons I can come up with for it failing to work could have analogues which inadvertently cause it to appear to work, too.

Right now, I don't have a good experiment in mind that would use the data I already have.

So what might a new experiment look like?
Practical constraints based on what's been discussed here might include:
  • try to vary the testers across the two samples - to avoid prior knowledge tainting a second sample.
  • even if the two sets of testers are distinct, try to stop them talking to one another - to preserve independence of the samples.
  • ensure that the reporting framework for a sample permits issues seen to be recorded independently of the other sample - so that testers do file dupes of existing issues rather than skipping them.
  • direct testers to use particular methods or be sure to understand the methods used - so as to give some idea of the space in which there was potential for bugs to be found in each sample, and where there was not.
  • be aware of the opportunity cost of collecting the data and the fact that multiple rounds might be needed to get statistical confidence.
We might consider random assignment of testers to samples - treating it like assignment to experimental groups in a study, say. But this may be problematic on smaller teams where the low number of people involved would probably give a high potential for error.

I'd like to think of a way to evaluate the estimates systematically. Perhaps more directed testing in the same areas, by different testers again, for some longer period? Comparison against an aggregate of all issues filed by anyone against the product for some period? Comparison to later customer reports of issues? Comparison to later rounds of testing on different builds? Informal, anecdotal, retrospective analysis of the effects of using the results of the CR experiment to inform subsequent testing?

All of these have problems, but it's traditional in this kind of situation to quote Box's observation that while all models are wrong, some are useful. It would not be unreasonable to use CR, even in contexts where its assumptions were violated, if it was felt that the results had utility.

Even given this, I don't have a new experiment in mind either.
Hah! Plenty of words here but ultimately: so what?
When we talk about risk-based testing, how often do we attempt to quantify the risk(s) we perceive? Knight made an interesting distinction between risk and uncertainty, where the former is quantifiable and the latter is not (and is related to Taleb's Black Swan events). Statistics can't tell you the outcome of the next coin toss but it can tell you, over the longer term, what proportion of coin tosses will be heads and tails. Likewise, statistical techniques are not going to guarantee that a particular user doesn't encounter a particular bug, but they could in principle give bigger-picture information on the likelihood of some (kinds of) bugs being found by some users.

You might already use bug-finding curves, or perhaps the decreasing frequency of observation of new issues, as data for your decision making. With appropriate caution, naturally [Kaner]. Statistics has the potential to help, but I don't really see testers, myself included, using it. I was delighted when one of my team presented his simulations of a system we're building that used a Poisson process to model incoming traffic to try to predict potential bottlenecks, but it's symptomatic (for me) that when I was thinking about how I might set up and evaluate an experiment on the utility of CR I did not think about using hypothesis testing in the evaluation until long after I started.
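
For flavour, here is a minimal sketch of that kind of simulation (Python; the arrival rate and the fixed service time are made-up numbers, and a single server is a deliberate simplification):

    import random

    # Toy simulation of a single server fed by Poisson arrivals: exponential
    # inter-arrival times at `rate` requests/sec, fixed service time per request.
    def simulate(rate=5.0, service_time=0.15, n_requests=10000, seed=1):
        random.seed(seed)
        clock, server_free_at, waits = 0.0, 0.0, []
        for _ in range(n_requests):
            clock += random.expovariate(rate)    # next arrival
            start = max(clock, server_free_at)   # queue if the server is busy
            waits.append(start - clock)
            server_free_at = start + service_time
        return sum(waits) / len(waits)

    print("mean wait (s):", simulate())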

Even if we were using statistics regularly, I would not be advocating it as a replacement for thinking. Take CR: imagine a scenario where the pond contains large visible fish and small invisible fish. Assume that statistical techniques exist which can account for sample biases, such as those listed above and others such as the bait used, the location and the time of day. If we sample with a net whose holes are bigger than the small fish, we will never catch any of them. If we don't use observation techniques (such as sonar) that can identify the fish invisible to our eyes, we won't even know they're there. The estimates produced will be statistically valid but they won't be the estimates we want. Statistics gives answers ... to the questions we ask. It's on us to ask the right questions.

I'm very interested in practical attempts to use statistics in software testing - with successful or unsuccessful outcomes. Perhaps you've used them in performance testing - comparing runs across versions of the software and using statistics to help see whether the variability observed is down to software changes or some other factor; or to perform checks to some "tolerance" where the particular instances of some action are less important than the aggregate behaviour of multiple instances; or perhaps modelling an application or generating data for it? If you have experiences or references to share, I'd love to hear about them.
Credits
Many thanks to Joshua Raine, Peter Hozak and Jenny Thomas who gave me great feedback and suggestions on earlier drafts of this.
Image: https://flic.kr/p/n6HZLB
References
Amoros: Jaume Amorós, Recapturing Laplace, Significance (2014)

Barnard: Julie Barnard, Khaled El Emam, and Dave Zubrow, Using Capture-Recapture Models for the Reinspection Decision Software Quality Professional (2003)

Corrao: Giovanni Corrao, Vincenzo Bagnardi, Giovanni Vittadini and Sergio Favilli, Capture-recapture methods to size alcohol related problems in a population, Journal of Epidemiology and Community Health (2000)

Kaner: Cem Kaner, Software Testing as a Social Science (2008)

Klas: Michael Kläs, Frank Elberzhager, Jürgen Münch, Klaus Hartjes and Olaf von Graevemeyer, Transparent Combination of Expert and Measurement Data for Defect Prediction – An Industrial Case Study, ICSE (2010)

LSE: The capture-recapture code inspection

Miller: James Miller, Estimating the number of remaining defects after inspection, Software Testing, Verification and Reliability (1998)

Nichols: James D. Nichols, Kenneth H. Pollock and James E. Hines, The Use of a Robust Capture-Recapture Design in Small Mammal Population Studies: A Field Example with Microtus pennsylvanicus, Acta Theriologica (1984)

Petersson: H. Petersson, T. Thelin, P. Runeson and C. Wohlin, "Capture-Recapture in Software Inspections after 10 Years Research - Theory, Evaluation and Application", Journal of Systems and Software (2004)

Pitt: Capture Recapture Web Page

Scott: Hanna Scott and Claes Wohlin, Capture-recapture in Software Unit Testing - A Case Study, ESEM (2008)

Tilling: Kate Tilling, Capture-recapture methods—useful or misleading?  International Journal of Epidemiology (2001)

Twitter: @jamesmarcusbach @raine_check @HelenaJ_M @JariLaakso @kinofrost @huibschoots, Twitter thread (2014)

WLF448: Fish & Wildlife Population Ecology (2008)
Categories: Blogs

uTesters and Applause Employees Gather for Rome uMeetup

uTest - Thu, 09/25/2014 - 20:19

Italian uTesters and Applause employees alike came together on September 10 as part of a local uMeetup in Rome, Italy.

Moritz Schoenberg, Senior Manager of Delivery Europe, and Yishai Cohen, VP, Business Development & Channel Sales for Applause, joined eight Italian testers over dinner and drinks at downtown Rome’s DoppioZero, a suggestion of Bronze-rated uTester Giuseppe Barbera.

One of the eight community members in attendance, Davide Savo, even made the five-hour drive all the way from Genova to Rome. Needless to say, our community shares quite the bond. It was also a homecoming of sorts for Applause’s own Moritz — while he has grown into the role of Senior Manager of Delivery Europe, he started off as a tester in our community, a former Most Valuable Tester to be exact!

Applause sponsored drinks and food for the evening, and discussion topics included everything related to test cycles at uTest, from test case management, to one-on-one discussions on pain points within the Community, to, most importantly, how uTest fits into these testers’ lives, as most of the testers are in QA in their day jobs.

uTesters at the meetup also shared their experiences regarding the uTest Community and Applause’s business model as a whole, and specifically called out Project Managers (PMs) Travis Price and Elinor Barak for their responsiveness and outstanding work. The Italian testers must be quite psychic — these two PMs recently took home a Tester of the Quarter recognition for Outstanding Project Managers.

But the evening wasn’t all business — Yishai and Moritz discussed IT trends with the uTesters including QA in Italy, and of course, the Apple event of the year that revealed the Apple Watch and iPhone 6.

A huge thanks to all of the Italian testers that joined us in Rome for a great and successful time — we hope our paths will cross again at a future uMeetup!

Categories: Companies

BlazeMeter Buys Loadosophia JMeter Reporting Tools

Software Testing Magazine - Thu, 09/25/2014 - 18:05
BlazeMeter, the leading self-service load and performance testing platform for mobile, web and APIs, today announced the acquisition of Loadosophia, which provides state-of-the-art analytics technology for JMeter users. Andrey Pokhilko, founder of Loadosophia and JMeter-Plugins.org, joins BlazeMeter’s executive team as Chief Scientist. The acquisition of Loadosophia and the addition of Pokhilko’s technological expertise and performance testing experience enhance BlazeMeter’s product innovation and next-generation performance testing solutions. BlazeMeter users will be able to utilize Loadosophia’s technology, and further innovations are on the product roadmap. Loadosophia’s customers will continue to be ...
Categories: Communities

Bad Deployments: The Performance Impact of Recursive Browser Redirect Loops

I just recently wrote a blog about BOTs causing unwanted traffic on our servers. Right after I wrote this blog I was notified about yet another “interesting” and unusual load behavior on our download page, which is used by customers to download the latest product versions and updates: If you see such a load […]

The post Bad Deployments: The Performance Impact of Recursive Browser Redirect Loops appeared first on Compuware APM Blog.

Categories: Companies

Talking with Tech Leads

thekua.com@work - Thu, 09/25/2014 - 15:04

I am proud to announce the release of my latest book, “Talking with Tech Leads”, which is now available on Leanpub.

I started this book project a couple of years ago, when I discovered a lack of resources for helping developers transition into a role that demands more than good development skills.

Talking with Tech Leads

Book Description: A book for Tech Leads, from Tech Leads. Discover how more than 35 Tech Leads find the delicate balance between the technical and non-technical worlds. Discover the challenges a Tech Lead faces and how to overcome them. You may be surprised by the lessons they have to share.

Buy it here on Leanpub.

Categories: Blogs

Visit Ranorex at Better Software Conference East 2014

Ranorex - Thu, 09/25/2014 - 10:00
Ranorex will be at the Better Software Conference East 2014 in Orlando, Florida, from the 12th to the 13th of November 2014.

Discover the Better Software Conference East and you’ll become a better, more informed software professional with an outward, customer-focused view of development, with 100+ learning and networking opportunities:
  • Keynotes featuring recognized thought-leaders
  • In-depth half- and full-day tutorials
  • Concurrent sessions covering major issues and solutions
  • Pre-conference training classes
  • The Expo, bringing you the latest in software development solutions
  • Networking receptions, breakfasts, networking breaks, and lunches included
  • A full-day to explore unique challenges at the Agile Leadership Summit
We look forward to seeing you at our booth.


Categories: Companies

Analyzing Objective-C: the World of OS X and iOS within your Grasp

Sonar - Thu, 09/25/2014 - 06:50

With version 3.0 of the C / C++ plugin in August 2014, support for the Objective-C language arrived.

Support for Objective-C in SonarQube was eagerly awaited by the community, and had been in our dreams and plans for more than a year. You might wonder – why did it take us so long? And why now, when Apple has announced Swift? Why as a part of the existing plugin? I’ll try to shed light on those questions.

A year ago, there were only two developers on SonarSource’s language team, Dinesh Bolkensteyn and me. We’re both heavy hitters, but with more than a dozen language plugins, we weren’t able to give most of them, including C / C++, as much time as we wanted. We also had technological troubles with the analysis of C / C++. As you may know, source code in C / C++ is hard to parse, because… well, it’s a long story that deserves a separate blog entry, so just take my word for it: it’s hard. And we didn’t want to provide a quick-win solution by locking ourselves and our users in to third-party tools, which wouldn’t play well in the long term, for the same reasons that third-party tools were a problem in other languages.

Today all that has changed. There are now seven developers on the language team (and room for more), with two dedicated to C / C++. We’ve spent the year not only on the growth of the team, but also on massive improvements to the entire C / C++ technology stack, while preserving its ease of use. At the same time, we’ve delivered eight new releases, with valuable new rules in each release. Since March, we’ve released about once a month, and plan to keep it up.

With solid new technical foundations in place, we were able to dream again about new horizons. One of them was Objective-C. It’s a strict superset of C in terms of syntax, so the work we had done improving the plugin also prepared us to cover Objective-C. Of course, with the announcement of Swift, actually covering Objective-C may not make sense to some, but there’s a lot of Objective-C code already out there, and as history has shown, old programming languages never die.

That’s why we decided to extend the existing plugin to cover Objective-C, and rebrand the plugin “C / C++ / Objective-C”, which is exactly what you see in the SonarQube Update Center. Still, to better target the needs of the different audiences, we decided to have two separate licences: one for C / C++ and one for Objective-C.

And of course this means that, out of the box, you get more than 100 Objective-C rules right from the first version, as well as a build wrapper to simplify analysis configuration. However, during implementation we also realized how unlike C Objective-C is, and for that reason we plan to add new rules specifically targeting Objective-C in upcoming releases.

So don’t wait any longer: put your software through quality analysis!

Categories: Open Source

You don't have to write it (all) first...

Rico Mariani's Performance Tidbits - Thu, 09/25/2014 - 05:06

It seems like I get pretty much the same questions all the time.  A common one is, "Rico can you tell me if it would be ok for me to use <technology> to solve this <problem>?  How much does <technology> cost anyway?"

The answer is (nearly) always the same: "How the hell should I know?"

This is frequently followed by lamentations that there isn't some giant book of costs in which you can look up <technology> and see how expensive it is in <units>.

The opposite of this situation is where a person decides that they can't possibly know what anything is going to cost until they build the system and measure it.  I suppose if you're being pedantic that's true but as a practical matter nothing could be further from the truth.  Both situations have the same resolution.

If you want to get a sense of what something is going to cost, build a little gizmo that does kinda sorta what you have in mind and measure it to get a feel for the costs.  Don't do all the work of getting your business logic right and the specifics perfect, throw something together on the cheap that is going to give you a feel for the essential costs you are going to have. 

Here's a very specific example that came up not too long ago: someone wanted an opinion on what it would cost to convert their data protocol from http to https. How do you get that answer? Well, getting a complete answer will require a big investment, but getting a sense of whether you can afford this and what to look for is remarkably easy. Write a little thingy that fetches about the right amount of data with http, then replace it with https -- this is like a 10 minute exercise. Then look at both from a CPU perspective, and maybe network I/O, and DLLs loaded/code used too. Try various request sizes and in particular some that are kinda like what you intend to do. This can very quickly give you a sense of whether this is something you can afford and will give you excellent insights into what your final costs will actually be. As your real code comes along, keep looking at it using your initial measurements as a touchstone. If your costs start to be wildly different from your initial forecast, then you missed something. If that happens you're going to know a lot sooner than the finish line that things are not going as expected.
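
For instance, a throwaway harness for that http-versus-https question might look something like this sketch (Python standard library only; the URLs are placeholders for whatever endpoint and payload size you actually care about):

    import time
    import urllib.request

    # Throwaway harness: fetch roughly the right amount of data over http and
    # https and compare wall-clock and CPU time. The URLs below are placeholders.
    def measure(url, repeats=20):
        wall, cpu, total_bytes = time.perf_counter(), time.process_time(), 0
        for _ in range(repeats):
            with urllib.request.urlopen(url) as resp:
                total_bytes += len(resp.read())
        return time.perf_counter() - wall, time.process_time() - cpu, total_bytes

    for url in ("http://example.com/payload", "https://example.com/payload"):
        wall, cpu, nbytes = measure(url)
        print(f"{url}: wall {wall:.2f}s, cpu {cpu:.2f}s, {nbytes} bytes")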

This technique works in many domains -- database connection and query costs, UI frameworks, data processing, serialization, pretty much anything -- it's kind of like test driven development but for performance factors rather than correctness.

Whatever you do, don't tolerate a "we won't know until the end" attitude or, just as bad, "ask some expert, they're bound to know..."

Experiments are your friend.

 

 

Categories: Blogs

More Agile Testing by Lisa Crispin and Janet Gregory available on October 10th, 2014

TestDriven.com - Thu, 09/25/2014 - 02:05
http://www.amazon.com/More-Agile-Testing-Addison-Wesley-Signature/dp/0321967054
Categories: Communities

Jenkins in JavaOne 2014

There'll be several talks that touch on Jenkins. The first is from me and Jesse, called Next Step in Automation: Elastic Build Environment [CON3387], on Monday at 12:30pm.

Then later Tuesday, there's Building a Continuous Delivery Pipeline with Gradle and Jenkins [CON11237] from Benjamin Muschko of Gradleware.

Thursday has several Jenkins talks. One is The Deploy Factory: Open Source Tools for Java Deployment [CON1880] from Bruno Souza (aka the Java Man from Brazil) and Edson Yanaga. In this same time slot, guys from eBay are doing Platform Upgrades as a Service [CON5685], which discusses how they rely on automation to make platform upgrades painless. Then Mastering Continuous Delivery and DevOps [CON1844] from Michael Huttermann.

In the exhibit area, the Jenkins project doesn't have its own booth (JavaOne is too expensive for that), but I'll be at the CloudBees booth, as is Jesse Glick. Find us at the booth for any Jenkins questions or an impromptu hacking session, which would really help us, as we get distracted from booth duties that way. Or just drop by to get stickers, pin badges, and other handouts to take back for your colleagues.

And finally, Script Bowl 2014: The Battle Rages On [CON2939] gets an honorable mention because our own Tyler Croy is representing JRuby against other scripting languages, including my favorite Groovy. Hmm, who should I root for...

Categories: Open Source

More Jenkins-related continuous delivery events in Chicago, Washington DC, and San Francisco

The usual suspects, such as CloudBees, XebiaLabs, SOASTA, and PuppetLabs, are running a Jenkins-themed continuous delivery event series called "cdSummit." The events are free, have a nice mix of user and vendor talks, and appeal to managers and team leads who are working on, and struggling with, continuous delivery and automation.

I've spoken at past events, and I enjoyed the high-level pitches from the various speakers. The last two events, in Paris and London, filled up completely, so I suspect others have liked them, too.

If you live near Chicago, Washington DC, or San Francisco, check out the dates and see if you can make it. You can RSVP here. If you do, be sure to pick up Jenkins stickers and pin badges!

Categories: Open Source

Growing Agile: A Coach’s Guide to Agile Testing

TestDriven.com - Thu, 09/25/2014 - 00:03
https://leanpub.com/AgileTesting/read
Categories: Communities

Software Testing Events Fall 2014 Preview

uTest - Wed, 09/24/2014 - 20:49

For those of us in the northern hemisphere, fall is officially here, and that means the folks at Applause and uTest will be diving head first into our fall event lineup. To that end, we wanted to share with you some of the awesome events at which we will be in attendance.

STARWEST
In just two weeks, the STARWEST conference kicks off in Anaheim, California. Hosted by SQE, an organization that has delivered training, support, research, and publications to software managers, test professionals, and quality engineers worldwide since 1986, this top-notch conference caters exclusively to the needs of quality assurance professionals.

While you’re there, you won’t want to miss keynotes from such notables as Paco Hope, a principal consultant for Cigital, and Julie Gardiner, the principal consultant and head of QA for Redmind.

STPCon Fall
In the first week of November, the STP Conference will be heading to the Mile High City – Denver, Colorado. STPCon is a fantastic event where test leadership, management and strategy converge. The hottest topics in the industry are covered, including agile testing, performance testing, test automation, mobile application testing, and test team leadership and management.

Be sure to check out some of the featured speakers including Lynn McKee and her Keynote on ‘Quality: Good, Bad and Reality,’ and also Mark Tomlinson in his track ‘Infuriatingly Fun Performance Puzzles.’

As if there weren’t already enough incentive to head to Denver, uTest has also arranged an exclusive discount for uTesters, so be sure to get more information on this discount code if you plan on registering soon for STPCon.

EuroSTAR
The EuroSTAR Conference is Europe’s premier software testing conference. This year, the event takes place in Dublin, Ireland from November 24-27. This event will feature keynotes from some heavy hitters including Professor Andy Stanford-Clark, the Chief Technologist for IBM’s consulting business in Energy and Utilities for the UK and Ireland – who will be giving a talk on The Internet of Things. Also featured is Isabel Evans, who, with over 30 years of IT experience, will be giving a talk on what to do when a Change Program goes wrong.

If you haven’t registered for any of these events yet, what are you waiting for? Don’t miss your opportunity to network (or commiserate) with your peers, share ideas and learn from the best in the business. If you do plan on being at one of these events, swing by our booth and tell them I sent you!

Also, be sure to check out our complete list of upcoming events for 2014 — and beyond — over at our Events calendar.

Categories: Companies

Recap: Sauce Labs at Selenium Conf 2014 [VIDEO]

Sauce Labs - Wed, 09/24/2014 - 19:35

We’re still glowing from this year’s Selenium Conf in Bangalore. Santi Ordonez, Ashley Wilson, and Isaac Murchie attended the event and represented Sauce Labs.

The Selenium Conference is a volunteer-run, non-profit event presented by members of the Selenium Community. The goal of the conference is to bring together Selenium developers & enthusiasts from around the world to share ideas, socialize, and work together on advancing the present and future success of the project.

Santi, Ashley, and Isaac shared with us some of their insight into the crowd this year, which we in turn wanted to share with you. Of the approximately 450 attendees, they estimated around 75% purchased their own tickets, so the personal interest level in Selenium was high. The attendees were extremely technical at this event, even more so than at the Boston conference. Only 5 people confessed to being IDE users, while everyone said they use WebDriver. Win!

Sauce at SelenConf

Santi achieved near star status at the Sauce Labs booth; it was crowded the entire time.


Meanwhile, Isaac gave a great talk titled “Selenium in the Palm of your Hand: Appium and Automated Mobile Testing.”

Image via Selenium Conf 2014

Check out Isaac’s talk here:

For more great resources courtesy of Selenium Conf, check out the links below, and make sure to follow Santi and Isaac on Twitter for the latest.

Selenium Conf Resources:

Spread the word: @SeleniumConf

Categories: Companies

On-Demand Webinar: Evolving Your Product Development Practices for the Internet of Things

The Seapine View - Wed, 09/24/2014 - 16:29

Thanks to everyone who participated in the “Evolving Your Product Development Practices for the Internet of Things” webinar. The webinar recording is now available if you weren’t able to attend or if you would like to watch it again. Additionally, the questions and answers from the webinar follow the video.

Questions & Answers

How do you tie requirements to test cases in Seapine TestTrack?
It’s done automatically, as part of the process of creating test cases. When working with a requirement, there’s a button to “Create Test Case from Requirement” that copies data from the requirement (configurable via field mapping rules) and also creates a link back to the requirement (also configurable, via link definitions).

What Internet of Things (IoT) products or vendors has Seapine worked with in the past?
The IoT is kind of nebulous, but we have a lot of customers doing complex product development that involves hardware, software, and services together. This includes medical device manufacturers like Heartware, automotive suppliers such as Gentex, and manufacturers including Bobcat Corporation. You can view a list of customers here.

What sort of configuration management and change control does Seapine offer?
You can read about our configuration management solutions here.

Is one license enough for an enterprise?
Probably not; the software is licensed on a per-user basis. In general, you would want one license for every user who will be accessing the system on a regular basis.

I understand we can import Word and Excel into TestTrack. Can we export requirements and test cases to Word and Excel for external validation?
Yes, we have configurable templates for exporting requirements to Word. For Excel exports, we support CSV formatted output of most data within the system.


Categories: Companies

Cyberattack strikes more than 1,000 US businesses

Kloctalk - Klocwork - Wed, 09/24/2014 - 14:40

In recent years, the threat posed by malware has grown significantly. Businesses in virtually every sector must now invest in high-quality cybersecurity solutions in order to protect their networks from increasingly sophisticated, determined hackers.

Unfortunately, many firms' current efforts in this capacity are not sufficient, as recent events highlighted. The Department of Homeland Security announced that a new malware program, known as Backoff, affected more than 1,000 American businesses, resulting in millions of incidents of stolen payment card data. The scope and damage wrought by Backoff put into perspective the need for companies to implement superior cybersecurity measures. Static code analysis can and should play a key role in such efforts. 

A widespread attack
As The New York Times noted, the DHS announcement revealed that Backoff caused much greater damage than was initially believed. While industry experts initially presumed that only a small number of organizations were affected, later investigations proved that Backoff had successfully attacked at least 1,000 companies, including such major firms as Target and UPS Stores.

Backoff first gained attention in late July, when the DHS, Secret Service and National Cybersecurity and Communications Integration Center recommended that companies conduct thorough analyses of their cash register systems, just in case Backoff had managed to gain access to these networks. As the news source pointed out, antivirus programs were unable to identify this malware. This lack of detection allowed hackers to steal millions of customers' credit card data with impunity until affected companies implemented countermeasures.

The nature of the malware
As The New York Times explained, hackers regularly use malware to scan corporate systems in search of remote access opportunities, effectively using these third-party networks as stepping stones to their actual targets. When successful, the cybercriminals then use computers to guess usernames and passwords until they eventually gain access to the corporate network. At this point, the criminals can steal payment card data, which in turn can be used for fraud or sold on the black market.

A big part of the problem is the nature of payment card technology, as Avivah Litan, a security analyst for Gartner Research, told the news source. 

"The weakness is the magnetic stripe. I can buy a mag stripe reader on eBay and easily read all the data from your credit card," said Litan, The New York Times reported. "It's an antiquated technology from the '60s."

Europay-MasterCard-Visa (EMV), the new chip-based smart card standard, makes cards far more difficult for hackers to counterfeit, and therefore could provide a lot more security for companies and consumers. However, while the credit card industry established a deadline of October 2015, the source noted that many industry observers expect the majority of retailers to miss this date, due to the cost of upgrading payment terminals.

Protective measures
The Backoff malware program is particularly dangerous because it cannot be detected via most traditional antivirus efforts. The DHS therefore recommended that retailers contact their service providers, antivirus vendors and cash register system vendors directly to determine whether their systems have been compromised.

Additionally, during its July advisory, the Secret Service and DHS urged companies to engage in both two-factor authentication and data encryption. The former strategy requires employees to use a one-time password in conjunction with their normal credentials, which can thwart malware programs' data-theft efforts. Encryption offers further protection by making customer data unusable to hackers who gain access to the corporate network.

However, to truly protect themselves and their customers from the threat posed by hackers, companies need to go even further in their cybersecurity efforts. Specifically, they should embrace solutions that can dig even deeper and provide more robust protection. For example, static code analysis tools can identify security holes within the code itself. This allows companies to recognize potential vulnerabilities before they are leveraged by hackers, delivering a more proactive approach to cybersecurity and data protection. 

Beyond decreasing the risk of a data breach, static code analysis tools also can help retailers and other companies to ensure they remain in compliance with all relevant industry standards, both now and well into the future. As data breach events become increasingly commonplace and damaging, it is likely that industry standards will toughen, adding greater importance to such compliance efforts. 

Categories: Companies
