
Feed aggregator

Best Practices for Creating and Using Home Page Widgets

The Seapine View - Tue, 09/30/2014 - 12:00

I wrote a previous blog post about how to create a widget. In this post, I’ll provide some best practices to help you make the most of TestTrack widgets. I’ll cover setting up security and sharing permissions, and provide recommendations on using colors to better call attention to key performance indicators (KPIs). Once you’ve set up a few widgets and users start applying them to the Home page, you’ll likely get feedback on what’s working and what isn’t. In the coming weeks, I’ll be providing a variety of sample widgets that you can pick and choose from based on the needs of your team. Home page widgets are still relatively new, so be sure to check back often to make sure you’re making the most of them.

Setting Up Security Permissions

There are three permissions that impact creating and using widgets.

Create and edit widgets

In Security Groups, you can set/unset the Administration > Configure Home Widgets option to control who can create and edit widgets. If you’re upgrading from an older version of TestTrack, this option is turned off by default. Make sure you set that option for at least yourself to ensure someone can create widgets.

Filter sharing

The first step in creating a widget is to create a new filter or select an existing one. When someone clicks the widget on their Home page, they’ll be taken to a list window with that filter applied. Make sure the filter is shared with everyone or matches the widget’s share permissions; otherwise, your users will see an error message when they try to view the details of a widget.
Error on widget drill-down

Widget sharing

Just like filters, widgets can be shared with one or more security groups. If you share an existing widget with a new group, be sure to review the associated filter to ensure it’s also shared with that group.

Configure Color Mappings

There are a few ways to use color with widgets. Here are some approaches we’ve seen used successfully, both internally and by customers.

Single color mapping

Use a single color to identify item types or to highlight critical pieces of information, regardless of the value being shown. If you want to show urgency with one color, use the scaling capability by selecting the Scale color to show transitions between mappings checkbox when setting up the widget. This maintains the single color but provides some context by scaling the color lighter or darker based on the KPI value. For example:

  • Red for blocked test runs, whether there are 0 or 100 of them
  • Purple for metrics associated with requirements or user stories
  • Dark blue for “my” items showing requirements to review, tests to run, or defects to fix
2-color mapping

Use 2 colors for “binary” metrics, where things are either “good” or “bad.” For KPIs where anything greater than 0 is bad, use 2 colors to immediately call attention to them.

  • Security holes/defects, where 0 is green and anything greater than 0 shows red
  • “My” requirements for review, where 0 is white and anything greater than 0 shows green
  • P1 defects in the current sprint, where 0 is green and anything greater than 0 shows red
3-color mapping

Use 3 colors to create a classic “stoplight” KPI, where things can go from “good” to “concerned” to “not good.”

Multi-color mapping

TestTrack supports up to 10 different color bands in a single widget, but using more than three colors is challenging and doesn’t work very well. Typically users will struggle to remember what each color means, and in practice I’ve not seen many situations where interpreting the data is complicated enough to need more than 3 colors. If you think you need more than 3 colors, consider trying the scaling option between three colors first. This will result in each of the 3 colors being lighter or darker depending on how close the KPI value is to the color band.


Categories: Companies

Ranorex 5.1.3 Released

Ranorex - Tue, 09/30/2014 - 10:59
We are proud to announce that Ranorex 5.1.3 has been released and is now available for download.

General changes/Features
  • Added support for Firefox 33
Please check out the release notes for more details about the changes in this release.

Download latest Ranorex version here.
(You can find a direct download link for the latest Ranorex version on the Ranorex Studio start page.) 

Categories: Companies

Do you talk the talk?

PractiTest - Tue, 09/30/2014 - 09:00

A couple of nights ago my husband, who is a civil engineer, asked, “What are you working on?”

“Oh, just reading this article about Agile Testing and Continuous Integration,” I replied offhand.

To which he replied, in a cautious tone: “OK… That sounds interesting, I guess…”

It is a known fact that every profession has its own jargon.

This exclusive vocabulary doesn’t only help us understand what each of us needs and does as part of our workflow; it also creates the identity of the profession by generating a sense of belonging for the individual (ask any basic sociology student and they will elaborate on this point).

 

The issue is that you are so immersed in your profession that you don’t even realize you are using a separate vocabulary until someone else starts asking you the meaning of words and acronyms. But after reading this post I’m sure you will start to notice that many of the words and phrases we use at work are not really obvious to the people we meet “after hours”.

Why is this interesting?

One of the major findings of the last “State of Testing Survey” conducted by PractiTest was that communication skills are a key factor and considered a major challenge that testers face (99% of respondents rated this an important to very important skill).


This is an issue when it comes to communicating with people outside of your testing team, and especially with management. The difference in professional vocabulary can become an obstacle and even an intrusion into your work!

Keep that in mind:

1. When you are “breaking in” a new team member/tester, it is important to add the professional jargon everyone else around him/her will use to the learning plan. After all, language is part of your profession’s culture, and your new recruit isn’t fluent in your native tongue.

2. When reporting project stats to other teams, management, and customers, you need to be aware that many phrases will need “translation” to pass on your message clearly.

3. When working with distributed teams, it helps to have some kind of common vocabulary, as you are working and communicating across countries, cultures and languages.

 

Here are some examples off the top of my head:

SCRUM – an agile development approach (NOT a method of restarting play in Rugby)

Agile – a testing methodology (NOT an anatomical, yoga-friendly ability)

STD – Software Test Design (NOT a sexually transmitted disease…)

Risk Management –  Handling your project’s risks (NOT worrying about your future healthcare or financial situation).

Sanity – No last-minute bugs were introduced before release (NOT a mental state)

Comment:
If you have any more examples of professional jargon you wish to share with us that will make us smile, or even better make us think, please feel free to share them in a comment.

Categories: Companies

Nexus OSS Meets NuGet

Sonatype Blog - Tue, 09/30/2014 - 01:00
The NuGet package manager has become the standard for developing software on the Microsoft platform, which includes .NET, and the NuGet Gallery has emerged as a large public open source package repository. Sonatype Nexus, on the other hand, is the standard repository or component manager software...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

ShellShock: Bug or Flaw?

The Kalistick Blog - Mon, 09/29/2014 - 23:14

As the repercussions from the ShellShock disclosure ripple through the security and business worlds, I wanted to contribute some thoughts on the issue from Coverity’s point of view. However, before drawing any conclusions, it’s instructive to first consider what type of vulnerability ShellShock actually is: a coding bug?  A design flaw?  Analysis on this is still ongoing but I see three main issues at play covering both design and coding issues:

  1. Bash uses environment variables to allow functions to be exported to child shells by executing the code in the environment variables. As Michał Zalewski puts it, this is a “hack” and a rather risky way to achieve the desired results. Since this was part of the design for this feature, it is definitely a design flaw; however, the risky coding pattern itself could also be flagged as a code defect.
  2. Bash did not properly detect the end of the function definition inside the environment variable. This allowed arbitrary code to follow the function definition, which would automatically be executed when the child shell is initialized. This is a coding bug. It was never intended behavior, and this is part of the reason why remote code execution is possible in the first place. (For context, the initial patch addressed only this issue.)
  3. There is a mismatch of assumptions between Bash and external systems calling Bash to do things. In other words, there is a fundamental design flaw here. The basic assumption Bash makes is that environment variables are mostly trusted and not typically influenced by an outside attacker. External systems calling Bash know that untrusted data gets passed in via environment variables; however, they assumed that there was no way data in an environment variable would be executed as code. Combine these two bad assumptions and you end up in the unfortunate scenario we are currently in – RCE affecting a large number of public web servers, Git repositories, and so forth.

So what can a SAST tool do to help in this scenario? The risky coding pattern in Issue #1 actually seems like the most fruitful approach since it is certainly something that a SAST tool could detect. Coverity can detect these types of issues using the TAINTED_STRING checker provided environment variables are distrusted.  However, this comes at the expense of much higher false positives on most code bases. This could easily happen with the specific Bash issue since the code execution itself is intended behavior.  It’s only when you take a more holistic view and consider how Bash interacts with third-party systems (Issue #3) that you start considering different trust models and realize the full extent of the problem. I would argue that this is the key failure here and is, unfortunately, not one frequently addressed in the Open Source Community nor by most enterprises.
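
To make issue #1 concrete, here is a minimal sketch (my own illustration, not Bash’s code) of the general pattern a checker like TAINTED_STRING flags once environment variables are treated as untrusted: data read via getenv() flowing, unsanitized, into a command execution sink.

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // Untrusted source: environment variables can be attacker-controlled
        // when the process is reachable from outside (CGI was the classic
        // ShellShock vector).  "SOME_VAR" is an illustrative name.
        const char *val = std::getenv("SOME_VAR");
        if (val == nullptr)
            return 1;

        char cmd[256];
        // Tainted data flows into a command string...
        std::snprintf(cmd, sizeof(cmd), "echo %s", val);

        // ...and reaches an execution sink.  A tainted-string checker flags
        // this path unless the value is sanitized between source and sink.
        return std::system(cmd);
    }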

Design flaws are inherently difficult to track and manage, especially when the threat landscape for an application changes.  This has certainly been the case for Bash, which was first released in 1989, well before the rise of browsers and the world wide web.  As new types of integration become standard, assumptions that once held no longer make any sense, and unless there is a concerted effort to track and manage these assumptions and threats, it’s easy for new risks to go unnoticed.

There are a lot of open questions left on the table here that I cannot give adequate justice to in this blog post. However, it’s worth highlighting a few for security-minded folks to contemplate: What can FOSS projects do to better avoid these kinds of design issues? How can application security organizations and professionals better help the FOSS community identify coding bugs and design flaws? How can threat modelling be better utilized by application maintainers over an application’s lifetime? How can we do a better job identifying and tracking design flaws to begin with?  Solutions to these types of questions are really needed to better address design flaws and also improve application security for enterprise and open source software.

The post ShellShock: Bug or Flaw? appeared first on Software Testing Blog.

Categories: Companies

Performance Quiz #14: Memory Locality etc. Bonus Round!

Rico Mariani's Performance Tidbits - Mon, 09/29/2014 - 21:54

[Note: I accidentally pasted an x64 result in place of an x86 result. As it happens, the point I was trying to make was that they were very similar, which they are, but they aren't identical... corrected.]

Thanks to some careful readers I discovered why the shuffling factors discussed in my previous blog entry were so strange.  The rand() function I was using only returns numbers between 0 and RAND_MAX, which is 32767 on my system.  That meant that shuffling became a non-factor as the number of elements in the array increased.
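
For context, the old shuffle was presumably along these lines (a reconstruction under that assumption, not the original code):

    void ShuffleOld(int m)
    {
        for (int nn = 0; nn < m; nn++)
        {
            // rand() only yields values in [0, RAND_MAX] = [0, 32767], so
            // once _n is much larger than 32767 only the front of the array
            // ever gets exchanged and the rest stays in its original order.
            int i = rand() % _n;
            int j = rand() % _n;

            if (i == j) continue;

            int tmp = _perm[i];
            _perm[i] = _perm[j];
            _perm[j] = tmp;
        }
    }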

I've since switched the code to use this instead:

#include <random> // at the top of the file

    void Shuffle(int m)
    {
        std::mt19937 gen(0);  // repeatable results desired
        std::uniform_int_distribution<> dist(0, _n - 1);

        for (int nn = 0; nn < m; nn++)
        {
            int i = dist(gen);
            int j = dist(gen);

            if (i == j) continue;

            int tmp = _perm[i];
            _perm[i] = _perm[j];
            _perm[j] = tmp;            
        }
    }

With this change shuffling becomes generally effective again, and the result is that non-locality dominates the effects, which is really a lot more like what we would expect.  I thought that the effect might top out at very large array sizes but it does not.  The normal prefetching of in-order memory continues to give us benefits even at very large array sizes.  Let's look at the architectural differences once again.

Pointer implementation with no changes
sizeof(int*)=4, sizeof(T)=12
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   1.99,   1.99,   1.99,   1.99,   1.99,   1.99
     2000,   1.99,   1.99,   1.99,   1.99,   1.99,   1.99
     4000,   1.99,   1.99,   2.20,   2.49,   2.92,   3.20
     8000,   1.85,   1.96,   2.38,   3.13,   3.88,   4.34
    16000,   1.88,   1.96,   2.56,   3.40,   4.59,   4.94
    32000,   1.88,   2.17,   3.52,   6.20,   8.16,  11.24
    64000,   1.88,   2.33,   4.45,   8.13,  11.87,  15.96
   128000,   2.06,   2.27,   5.15,   9.32,  14.81,  18.29
   256000,   1.90,   2.35,   5.90,  10.99,  16.26,  21.39
   512000,   1.92,   2.77,   6.42,  11.59,  17.55,  23.71
  1024000,   2.22,   3.42,   7.96,  14.58,  22.97,  30.00
  2048000,   2.54,   4.68,  17.47,  38.54,  62.76,  90.17
  4096000,   2.50,   5.70,  25.17,  53.21,  89.79, 121.06
  8192000,   2.50,   6.31,  30.36,  63.07, 106.04, 142.09
 16384000,   2.50,   6.68,  32.94,  68.79, 115.24, 156.78
 32768000,   2.55,   7.22,  34.64,  73.71, 123.98, 165.47
 65536000,   2.55,   7.96,  36.49,  73.39, 126.17, 168.88

OK, here we see just how profound the non-locality effects are.  The cost per item goes up to a whopping 168.88 at the bottom of the table for fully shuffled data.  And you can see that even a small amount (1%) of non-locality makes a crushing difference.  You can also clearly see that while the data fits in the cache, none of this matters a whit.  But even at 4k elements we're already starting to see the effects.

Now let's compare the same program run on x64.

Pointer implementation with no changes
sizeof(int*)=8, sizeof(T)=20
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   2.28,   1.99,   1.99,   1.99,   2.28,   1.99
     2000,   2.28,   2.42,   2.56,   2.84,   3.27,   3.27
     4000,   2.28,   5.12,   2.92,   3.84,   4.62,   5.05
     8000,   2.17,   2.28,   3.13,   4.34,   5.62,   5.90
    16000,   2.24,   2.49,   3.95,   7.02,   8.28,  10.97
    32000,   2.25,   2.54,   5.00,   8.98,  13.33,  16.75
    64000,   2.24,   2.91,   6.01,  10.22,  15.69,  19.93
   128000,   2.40,   2.93,   7.06,  11.51,  18.80,  22.25
   256000,   2.45,   2.96,   7.16,  13.97,  19.75,  24.89
   512000,   3.06,   3.49,   8.25,  19.20,  25.23,  28.36
  1024000,   3.59,   5.39,  16.21,  34.18,  60.90,  83.61
  2048000,   3.79,   7.18,  27.84,  58.19,  97.72, 130.46
  4096000,   3.79,   8.18,  34.17,  70.37, 118.09, 153.55
  8192000,   3.78,   8.72,  37.35,  76.35, 128.28, 166.00
 16384000,   3.76,   9.27,  39.17,  81.54, 137.48, 173.84
 32768000,   3.77,  10.06,  40.25,  83.91, 140.59, 178.86
 65536000,   3.74,  10.94,  43.60,  86.91, 142.89, 183.30

OK, you can see the increased size of the data structure makes the unshuffled data significantly worse, just like before.  And the size difference actually dilutes the non-locality somewhat.  It's merely 49 times worse on x64 rather than 66 times worse like on x86.  But it's still hella bad.  Note that a 66% data growth in this case is resulting in a 46% performance hit.  So actually x64 is doing worse overall not just because of raw data growth; the larger pointers are causing other inefficiencies.

Now I was going to skip the unaligned stuff, thinking it would all be the same, but I was wrong about that; these data end up being interesting too.  In the interest of saving space I'm just providing the first few rows of the table and the last few for these 4 experiments.

Pointer implementation with bogus byte in it to force unalignment
sizeof(int*)=4, sizeof(T)=13
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   2.28,   1.99,   2.28,   2.28,   1.99,   2.28
     2000,   1.99,   2.13,   2.13,   2.13,   2.13,   2.13
     4000,   2.06,   2.13,   2.70,   3.06,   3.63,   4.05
 16384000,   2.75,   7.39,  35.18,  74.71, 124.48, 167.20
 32768000,   2.74,   7.83,  37.35,  77.54, 126.54, 170.70
 65536000,   2.69,   8.31,  37.16,  77.20, 129.02, 175.87

OK naturally x86 is somewhat worse with unaligned pointers, but not that much worse. Mostly we can attribute this to the fact that the data structure size is now one byte bigger.  The 2.69 in the leading column is likely a fluke, so I'm going to go with 2.74 as the final value for unshuffled data.  Note that the execution time degraded by about 7.4% and the data grew by about 8.3% so really this is mostly about data growth and nothing else.  The shuffled data grew 4%, so the caching effects seem to dilute the size growth somewhat.

Pointer implementation with bogus byte in it to force unalignment
sizeof(int*)=8, sizeof(T)=21
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   2.28,   2.28,   2.28,   2.28,   2.28,   2.28
     2000,   2.42,   2.42,   2.84,   3.27,   3.70,   3.84
     4000,   2.42,   2.49,   3.34,   4.48,   5.48,   5.97
 16384000,   3.81,   9.58,  40.01,  83.92, 142.32, 181.21
 32768000,   3.96,  10.34,  41.85,  88.28, 144.39, 182.84
 65536000,   3.96,  11.37,  45.78,  92.34, 151.77, 192.66

Here's where things get interesting though.  On x64 the cost of unshuffled data grew to 3.96ns per item from 3.74.  That's a 5.8% growth.  And the shuffled data grew to 192 from 183, also about 5%.  That's pretty interesting because the data also grew 5%, exactly 5% as it turns out, from 20 to 21 bytes.

What happens if we grow it a little more to get perfect alignment?

Pointer implementation with extra padding to ensure alignment
sizeof(int*)=4, sizeof(T)=16
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   1.99,   1.99,   1.99,   1.99,   1.99,   1.99
     2000,   1.99,   1.99,   2.13,   2.13,   2.28,   2.13
     4000,   2.06,   1.99,   2.56,   3.34,   3.98,   4.05
 16384000,   3.04,   7.77,  35.52,  74.26, 125.54, 163.54
 32768000,   3.06,   8.44,  36.86,  77.08, 129.97, 168.43
 65536000,   3.08,   9.26,  38.16,  78.69, 129.42, 171.52

Well, on x86, growing it doesn't help at all.  The size hit brings us to 3.08 and 171.52 on the bottom rows.  The first number is worse, increasing from 2.74, so padding certainly didn't help there.  The second number is somewhat better, ~171 vs. ~175; perhaps alignment is helping us to avoid cache splits, but it's only a 2% win and we paid a lot of space to get it (16 bytes instead of 13, or 23% growth).  What about on x64?

Here the situation is interesting.   

Pointer implementation with extra padding to ensure alignment
sizeof(int*)=8, sizeof(T)=24
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   1.99,   1.99,   1.99,   2.28,   1.99,   1.99
     2000,   2.13,   2.28,   2.70,   2.99,   3.56,   3.70
     4000,   2.28,   2.28,   2.99,   3.91,   4.48,   4.98
 16384000,   4.30,   9.79,  38.55,  80.31, 133.95, 168.14
 32768000,   4.29,  10.81,  39.99,  83.10, 137.47, 168.60
 65536000,   4.29,  11.45,  44.03,  88.23, 143.56, 176.47

Look at that... things did get worse on unshuffled data due to the size growth; we're up to 4.29.  That's pretty much the result I expected, data size is king, especially in ordered data.  However, the last column is surprising.  With shuffled data we're now at 176.47.  Even though we grew the structure to 24 bytes, it's actually running quite a lot faster than the 192.66 we got in the unaligned worst case.  Again I blame the fact that the random access is subject to cache splits, and those are not present if the data is aligned.  In fact, even a small amount of randomness is enough to make the aligned data structures better.  At 10% shuffling we were already ahead despite the fact that the data is bigger.  So bottom line here: the unaligned pointer wasn't costing us much, but creating cache splits definitely does, and we see those in the shuffled data.  [Note: this is a bit of a stretch because I didn't actually gather stats on cache splits.  I know that the odd-size packed data must have them and they seem sufficient to explain the behavior, whilst the in-order data will be largely immune, but that isn't full confirmation, so I could be wrong.]

Finally, looking at using indices instead of pointers.

Standard index based implementation
sizeof(int*)=4, sizeof(T)=12
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   3.70,   3.41,   3.41,   3.41,   3.70,   3.41
     2000,   3.41,   3.56,   3.41,   3.41,   3.41,   3.41
     4000,   3.63,   3.41,   3.63,   3.98,   4.34,   4.62
     8000,   3.45,   3.56,   3.84,   4.62,   5.19,   5.69
    16000,   3.52,   3.56,   4.00,   4.85,   5.67,   6.33
    32000,   3.48,   3.64,   5.12,   7.24,   9.83,  12.07
    64000,   3.48,   3.76,   6.10,   9.52,  13.78,  17.54
   128000,   3.48,   3.87,   6.70,  10.74,  15.90,  20.23
   256000,   3.48,   3.95,   7.34,  11.96,  17.48,  22.48
   512000,   3.46,   4.01,   7.75,  13.03,  19.69,  25.45
  1024000,   3.70,   4.89,  10.27,  16.68,  25.41,  32.78
  2048000,   3.80,   6.03,  19.54,  39.37,  65.81,  89.51
  4096000,   3.80,   7.12,  27.58,  55.90,  94.37, 125.29
  8192000,   3.78,   7.72,  32.42,  65.97, 110.21, 146.96
 16384000,   3.79,   8.07,  35.15,  71.50, 119.32, 159.43
 32768000,   3.63,   8.19,  35.50,  72.99, 121.11, 161.69
 65536000,   3.78,   9.18,  37.63,  76.34, 123.47, 164.85

As before, the x86 code is slower... there is nothing but downside there, since the pointers were already 32 bits.

Standard index based implementation
sizeof(int*)=8, sizeof(T)=12
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   3.41,   3.41,   3.41,   3.70,   3.41,   3.70
     2000,   3.41,   3.56,   3.41,   3.41,   3.41,   3.41
     4000,   3.56,   3.41,   3.63,   4.05,   4.34,   4.55
     8000,   3.41,   3.52,   3.88,   4.59,   5.19,   5.65
    16000,   3.43,   3.61,   4.00,   4.84,   5.76,   6.31
    32000,   3.47,   3.64,   5.08,   7.24,   9.78,  12.20
    64000,   3.48,   3.75,   6.11,   9.52,  13.84,  17.54
   128000,   3.49,   3.86,   6.71,  10.72,  15.86,  20.23
   256000,   3.48,   3.96,   7.34,  11.96,  17.83,  22.74
   512000,   3.45,   4.00,   7.75,  12.80,  19.16,  25.49
  1024000,   3.70,   5.27,  16.56,  16.10,  23.88,  32.46
  2048000,   3.80,   6.10,  19.48,  39.09,  66.05,  89.76
  4096000,   3.80,   6.94,  26.73,  54.67,  91.38, 125.91
  8192000,   3.79,   7.72,  32.45,  66.14, 110.40, 146.90
 16384000,   3.62,   7.98,  35.10,  71.72, 120.31, 159.13
 32768000,   3.77,   8.43,  36.51,  75.20, 124.58, 165.14
 65536000,   3.77,   9.15,  37.52,  76.90, 126.96, 168.40

And we see that the x64 run times are very similar.  x64 provides no benefits, but we can use the data reduction provided by 32-bit indices to save space and net a significant savings in execution time with even modestly (1%) shuffled data, an entirely normal situation.  And the space savings don't suck either.  The raw speed when comparing to fully ordered data is about a wash.
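
To see where both the space win and the overhead come from, here is roughly what the index-based variant looks like (a sketch; the names are illustrative, not the benchmark's actual identifiers):

    #include <vector>

    struct T
    {
        int next;   // index of the next node in the array, -1 marks the end
        int prev;   // index of the previous node
        int data;   // payload
    };

    std::vector<T> _nodes;  // all nodes live in one contiguous array,
    int _head = -1;         // so T stays 12 bytes on x86 and x64 alike

    int WalkForward()
    {
        int sum = 0;
        // Each hop is a base-plus-scaled-index load instead of a plain
        // pointer dereference; that extra address arithmetic is what eats
        // part of the space savings.
        for (int i = _head; i != -1; i = _nodes[i].next)
            sum += _nodes[i].data;
        return sum;
    }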

Just for grins I hacked up an implementation that uses 16-bit integers; here's what it looks like on x64, keeping in mind that you could only use this if your array sizes were known to be less than 64k elements (which they often are, frankly), or if you could flexibly swap it in.

Standard  short index based implementation
sizeof(int*)=8, sizeof(T)=6
  shuffle,   0.00,   0.01,   0.10,   0.25,   0.50,   1.00
     1000,   3.98,   3.98,   3.98,   3.98,   3.98,   3.98
     2000,   3.84,   3.98,   3.84,   3.98,   3.84,   3.84
     4000,   3.91,   4.05,   3.91,   3.91,   3.91,   3.91
     8000,   3.95,   3.98,   4.12,   4.37,   4.69,   4.94
    16000,   4.00,   3.98,   4.34,   4.84,   5.49,   6.04
    32000,   3.95,   4.06,   4.46,   5.11,   6.11,   6.28
    64000,   3.93,   4.09,   5.55,   7.05,   9.77,  11.45

What's interesting here is that even though it's smaller, the short word fetching on x64 seems inferior, and so at 64k elements the unshuffled cost is higher.  But if the data varies then the space savings kick in and you get a nice win.  It's sort of like moving yourself one notch down in the table at that point.

Categories: Blogs

Don’t Fear the Code: How Basic Coding Can Boost Your Testing Career

uTest - Mon, 09/29/2014 - 21:28

This piece was originally posted by our good friends over at SmartBear Software. Be sure to also check out Part II of this blog, entitled “Six Ways Testers Can Get In Touch With Their Inner Programmer.”

It’s vital to acknowledge from the outset that I am a reluctant programmer. I know how to program. I can piece together programs in a variety of languages, but it’s not something I consider myself accomplished at doing.

As a software tester, this is a common refrain that I have personally heard many times over the years. It’s so common that there is a stereotype that “people who can program, program. People who can’t program, test the code of programmers.” I disagree with that statement, but having compiled enough personal anecdotes over twenty years, I see why many people would have that view.

I see a traditional dividing line between a “programmer’s mindset” and a “tester’s mindset.” The easiest way that I can describe the difference is, to borrow from Ronald Gross’ book Peak Learning, a “stringer’s” vs. a “grouper’s” approach to tasks and challenges. If you are one who likes to work with small components, get them to work together, and “string” them into larger systems that interact, then you have a “programmer’s mindset.” If you look at things from different levels and “group” the items, see where there might be bad connections, and see if those bad connections can be exploited, then you exhibit a “tester’s mindset.” This is a gross oversimplification, but this idea helped me put into words why programming was a challenge for me. It was a “stringer” activity, and I was a “grouper.”

I’ve used that as an excuse for years. I said to myself, “Well, it’s OK…I’m a tester. A grouper. I think differently. I don’t particularly like to code, so that’s OK, I’ll just be awesome elsewhere.” I’ve since come to realize that I was wrong. I’d been looking at this whole coding thing the wrong way, and not being completely honest with myself. The truth is, I was afraid. I was not quick with writing new code, I couldn’t solve the real problems that I had, and I was impatient, not willing to put in the real time necessary to get good at it. More to the point, it just wasn’t all that much fun to do. I’ve since changed my mind on many of those points.

So, what in the world do I think I am doing writing an article to other software testers, telling them they shouldn’t fear code? Because we shouldn’t. It isn’t magic. It is systems thinking—science and logic—mixed with some rules that dictate where things go and when. In fact, I’m willing to bet that, if I were to sit down with a “non-coding” software tester, I’d be able to show them that they write code all the time.

Every Tester Programs

If you have ever taken multiple commands and created a macro, or a shell script, and grouped commands into a single file and made it executable, you are programming. Yes, I realize that it doesn’t fit the common image we have when we talk about programming. We are not writing applications. We are not using some spiffy compiler and elaborate language. Still, if you are grouping commands together in one place to execute them, adding a variable here and there so that you can extend a shell script and make it a little more dynamic, or parsing the output of test logs to make a report, you are indeed programming.

Even with software testing tools that are advertised as “no programming required”… if you are modifying files, changing values, switching the order, pointing different places… yes, you are indeed programming. All that’s different is the order of magnitude and the level of sophistication.

Programming Need Not be a Barrier

One of the bigger issues that I have come across as I’ve talked to software testers who lament their “inability to program” is the fact that they are mixing up their intentions. When a software tester says, “I am not a programmer,” what they are most likely saying is “I am someone who has not invested the time and energy into learning a variety of programming languages and techniques with the goal of making software for other people to use.”

That’s a fair statement, and yes, when put in that light, there are a lot of us software testers who are not “production-grade shipping software level master programmers.”

That may seem a bit heavy, but work with me here. If I ride a snowboard, am I only allowed to call myself a snowboarder if I enter slopestyle events or halfpipe competitions? No, of course not. Then why should I feel like I have no right to call myself a “programmer” just because I haven’t shipped an application to market, or written some elaborate framework?

The more difficult issue is that, for many of us, programming is a smaller part of what we do. I much prefer exploration and open engagement with my own eyes to writing automated scripts, but the fact is, I can use my eyes a lot more frequently and effectively if I can identify the repetitive work that can be automated via programming. Often, it’s not that we can’t do the work, but that the work feels burdensome, onerous, or just plain irritating. When programming is boring and painful, we will not do it unless we absolutely have to. Therefore, I want to suggest that we try to find ways to make it less onerous, and make it, well, fun.

Michael Larsen is a software tester based out of San Francisco, California. Michael started his pursuit of software testing full-time at Cisco Systems in 1992. After a decade at Cisco, he’s worked with a broad array of technologies and in industries including virtual machine software, capacitance touch devices, video game development, and distributed database and web applications.

Michael is a member of the Board of Directors for the Association for Software Testing, the producer of and a regular commentator for the SoftwareTestPro.com podcast “This Week in Software Testing,” and a founding member of the “Americas” Chapter of “Weekend Testing.” Michael also blogs at TESTHEAD and can be reached on Twitter at @mkltesthead.

Categories: Companies

Using Selenium at Joomla

Testing TV - Mon, 09/29/2014 - 18:57
Many organizations are using Selenium IDE and Selenium RC for testing software. This presentation explains how Joomla uses Selenium WebDriver to set up a software testing suite. In an era of highly interactive and responsive software processes, where many organizations are using some form of methodology, test automation is frequently becoming a requirement for software projects. Joomla! has its […]
Categories: Blogs

Code Spotter Beta: Now Available For Everyone!

The Kalistick Blog - Mon, 09/29/2014 - 18:51

Starting today, we are opening up our beta for Code Spotter to anyone interested in trying out this one-of-a-kind cloud-based platform for finding defects in Java code.

Use of the Code Spotter service remains entirely free for the duration of this ongoing beta with absolutely no strings attached or restrictions imposed. So go ahead and upload whatever Java code you’d like, however much you’d like, and as often as you’d like.

In addition, we would like to sincerely thank all of our early participants for providing us their valuable feedback and suggestions since we first launched our closed beta for Code Spotter a couple months ago. As a team, we continue to be intensely focused on improving the overall ease of use, workflow design and utility of Code Spotter; to this end, I’d like to share a few key feature requests and enhancements to the platform which are now live:

• Email notifications to alert you once analyses complete and defects are available for review
• Automatic detection of Subversion repositories for better workflow management
• Dashboard page auto-refreshes plus a new progress bar indicates the status for jobs underway
• Defect event details are now shown within the Eclipse editor
• Major usability enhancements – streamlined navigation, collapsible issues list, persistent filters

You can get started now with a free Code Spotter account here.

We look forward to any comments you may have for us, and hope you enjoy using Code Spotter!

Best regards,
Dennis Chu, Product Manager

The post Code Spotter Beta: Now Available For Everyone! appeared first on Software Testing Blog.

Categories: Companies

QTP to Selenium Migration

Software Testing Magazine - Mon, 09/29/2014 - 18:25
Hewlett-Packard’s Unified Functional Testing (UFT), formerly and better known as QuickTest Professional (QTP), has been one of the leading software testing tools on the market. Is it really worth migrating from this tool to Selenium? If you think no, then think again. We recently migrated a client from QTP to Selenium, and the result was an 80% savings in execution time using one single machine. This case study shares the challenges faced initially and how to manage a framework with high re-usability and execution. Video producer: http://seleniumconf.org/
Categories: Communities

Meet the uTesters: Moises Ramos

uTest - Mon, 09/29/2014 - 16:53

Moises Ramos is a Gold-rated tester, Test Team Lead and 2013 uTester of the Year who lives in Barcelona, Spain, with his wife and two daughters. Moises began his career as a developer, changed to systems administration, and currently has been leading a Service Desk, System Administration, and Application Management team for more than 10 years. He has been a uTester since February 2013.

Be sure to also follow Moises‘ profile on uTest as well so you can stay up to date with his activity in the community!

uTest: What defines a ‘valuable’ bug to you?

Moises: Sometimes in a uTest cycle, customers specify in the overview which area of the application or functionality they are interested in. If so, bugs related to this area are the most valuable.

If there are no details in the overview, I try to put myself in the customer’s shoes. If I were the QA manager for this application, what issues should I try to uncover? Misaligned text in Terms and Conditions? Or a problem affecting the checkout process? It’s also important to understand the customer’s business and the real purpose of the application or site you are testing.

Of course, the value of a bug not only depends on the bug itself, but also on how detailed and accurate it has been reported.

uTest: Android or iOS?

Moises: This question is like, “Who do you love the most? Mom or Dad?” Both, of course! I can’t imagine living without Android or iOS.

Having said that, I prefer iOS cycles, because issues are more easily reproduced. I like to review other testers’ bugs and try to reproduce on my device (I learn a lot doing this!), and this is easier with iOS. On Android, it depends on the device, OS version, settings…and there are far more bugs that I can’t reproduce.

uTest: What’s your favorite part of being in the uTest Community?

Moises: Without a doubt, the collaborative environment and the willingness to help every one of its members. I’ve been really impressed with this from the first day I joined the community.

Because it is an open community that anybody in the world can join, where no one knows each other and some of its members are not full-time testers, it can be difficult to keep this team spirit and level of professionalism in every project. It works thanks to the Community Management team, which is always stressing the importance of the uTester Code of Conduct, controlling who can rise in the community with its ratings system, and ensuring that every tester always has all the needed information and training.

uTest: What is the one tool you use as a tester that you couldn’t live without?

Moises: I know that everybody says this, but Jing is the tool I use the most. The Android SDK is also a must-have tool — I use it to collect device logs. And I definitely couldn’t live without Google Translate. As a non-native English speaker, I need to use it often.

uTest: What keeps you busy outside testing?

Moises: I have two daughters (11 and five years old), so they do their best to fill up all my free time (and non-free, too). I love to be with them and with my wife, and play, watch TV or go for walks together.

Beyond this, I like almost every sport (running, cycling, swimming, going to the gym, tennis, paddle, soccer, trekking…). The only problem is that after my family, my job and uTest, there is not much time left for sports, so sadly I’m quickly losing competitiveness!

Categories: Companies

.NET Guides From Beginner to Pro

NCover - Code Coverage for .NET Developers - Mon, 09/29/2014 - 13:07

The .NET community is one filled with seasoned pros and up-and-coming developers.  We salute two .NET developers who have a proven track record of giving back to our great community.  Keep up the awesome work!

Christos Matskas

Christos Matskas is a senior software engineer, a keen technologist, and is passionate about cross-platform software development. Based in Glasgow, United Kingdom, he is also the founder of SoftwareLounge and co-founder at TowzieTyke. His website http://cmatskas.com/ offers a wealth of information for programmers both new and old, with a variety of helpful and informative how-tos across platforms. Follow him on Twitter @ChristosMatskas.

Anil Kumar Pandey

Anil Kumar Pandey loves to participate in online communities, in discussion groups and on blogs. His main technical skills include C#, VB.NET, ASP.NET, ASP.NET MVC, SQL Server, WCF, WPF, Silverlight, SSRS, HTML, JavaScript, jQuery and Windows Mobile. He blogs on various Microsoft technologies and you can find his posts on .Net Helper. He is also the first Platinum Level member of www.dotnetspider.com and has been awarded Most Valuable Member of www.dotnetspider.com for the last 6 years. Anil is also a mentor of many students and junior programmers. Keep up with his tips on his blog or follow him on Twitter at @sankkrit.

The post .NET Guides From Beginner to Pro appeared first on NCover.

Categories: Companies

NightwatchJS - JavaScript web automation with Selenium-Webdriver

Yet another bloody blog - Mark Crowther - Mon, 09/29/2014 - 09:49
As the website says "Nightwatch.js is an easy to use Node.js based End-to-End (E2E) testing solution for browser based apps and websites... Simple but powerful syntax which enables you to write tests very quickly, using only Javascript and CSS selectors."

If you're happy writing a bit of JavaScript, then Nightwatch is an interesting option to look at. It uses Selenium WebDriver at its core and so fits the tech stack commonly used for web testing.

This post is a no-fluff Quickstart to get nightwatch.js set up on a WINDOWS system - because, as usual, the *nix crew have posted for that side already. Here nothing is 'explained'. For more details, and how to set up on other systems, see the Nightwatch website and GitHub repo.

https://github.com/beatfactor/nightwatch
http://nightwatchjs.org/ 

This is describing set-up on a **Windows 7 Professional** 64-bit system with Firefox installed. 

For the epicness, in a new tab, open this http://www.infinitelooper.com/?v=nu6ht1CwZO0 and leave it open until you are complete!

Mark

***

1) Base folder  
  • On your system create a folder called dev on the root, e.g. C:\dev

2) Install Node.js
  • Under C:\dev create a new folder called nodejs.  
  • Go to http://nodejs.org/ and install nodejs in the new folder, ensure you include the npm (Node Package Manager) tool in your installation.  

3) Install nightwatch.js  

  • Under C:\dev create a new folder called nightwatch.  
  • Go to Start > Run and type 'cmd' to get a console window.  
  • Type `npm install nightwatch` and note the location and structure of the install.  

4) Get Selenium Server  

  • Download the latest Selenium Server standalone .jar from the Selenium website.  
  • Save it under C:\Dev\nightwatch\node_modules\nightwatch\lib as sel-serv.jar (the name used to start it in step 6).  

5) File to call the runner  

  • On your system navigate to C:\Dev\nightwatch\node_modules\nightwatch  
  • Create a new file called nightwatch.js
  • Add the following line and save the file; `require('nightwatch/bin/runner.js');`

Basic set-up is now complete!  

6) Start Selenium
  • Open a console window (or reuse the one from step 3) and navigate to: C:\Dev\nightwatch\node_modules\nightwatch\lib
  • Now type `java -jar sel-serv.jar` to start Selenium Server.  
  • Open Firefox and navigate to `http://localhost:4444/` to check the server is up (ignore the 403 error).  

7) Run some tests!
  • Open a new console window.  
  • Navigate to C:\Dev\nightwatch\node_modules\nightwatch\
  • Run all example tests by typing `node nightwatch.js`  
  • Run a group of tests by typing `node nightwatch.js -g google`  
  • Run a single test by typing `node nightwatch.js -t examples/tests/nightwatch.js`  


Yay!  Your first nightwatch.js tests!
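
8) Write your own test (optional)

When you're ready to go beyond the bundled examples, a test file is just a module that exports test functions. A minimal one, modelled on the standard example from the Nightwatch docs (adjust the URL and CSS selectors for your own pages), looks like this:

    module.exports = {
        'Demo test Google' : function (browser) {
            browser
                .url('http://www.google.com')
                .waitForElementVisible('body', 1000)
                .setValue('input[type=text]', 'nightwatch')
                .click('button[name=btnG]')
                .pause(1000)
                .assert.containsText('#main', 'Night Watch')
                .end();
        }
    };

Save it under examples/tests and run it with `node nightwatch.js -t examples/tests/yourtest.js`, as in step 7.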

Mark.

p.s. Look out for the post on creating a VBScript report viewer for Nightwatch.



NightwatchJS

Categories: Blogs

Last Chance to Participate: Take the Customer Experience Survey Now

Ranorex - Mon, 09/29/2014 - 09:30
The Ranorex Customer Experience Survey 2014 is closing on the 6th of October, 2014.

Please take this opportunity to let us know how you use Ranorex within your test initiatives and projects by participating in this survey.

You can rest assured that the information submitted will be kept confidential.

The survey should only take about 4 minutes to complete – and your participation may save you time in the future when working with Ranorex!

Give us your feedback...

Categories: Companies

Performance Quiz #14: Memory Locality, x64 vs. x86, Alignment, and Density

Rico Mariani's Performance Tidbits - Mon, 09/29/2014 - 05:26

[Note: The mystery of the shuffling is solved; the rand() method I was using returns only numbers between 0 and 32k, so shuffling was ineffective at large array sizes.  I will post an update.  See the new entry for the updated results.  Thank you Ryan!]

 It's been a very long time since I did a performance quiz and so it's only right that this one covers a lot of ground.  Before I take even one step forward I want you to know that I will be basing my conclusions on:

  1. A lot of personal experience
  2. A micro-benchmark that I made to illustrate it

Nobody should be confused; it would be possible to get other results, especially because this is a micro-benchmark.  However, these results do line up pretty nicely with my own experience, so I'm happy to report them.  Clearly the weight of the "other code" in your application would significantly change these results, and yet they illustrate some important points, as well as point out a mystery...  But I'm getting ahead of myself discussing the answers.... First, the questions:

 

Q1: Is x64 code really slower than x86 code if you compile basically the same program and don't change anything just like you said all those years ago? (wow, what a loaded question)

Q2: Does unaligned pointer access really make a lot of difference?

Q3: Is it important to have your data mostly sequential or not so much?

Q4: If x64 really is slower, how much of it relates to bigger pointers?

 

OK kiddies... my answers to these questions are below.... but if you want to make your own guesses then stop reading now... and maybe write some code to try out a few theories.

 

 

 

 

 

Keep scrolling...

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Are you ready?

 

 

 

 

OK.

To answer these questions I wrote a benchmarking program (see link at the bottom of this posting) that creates a data structure and walks it.  The primary thing that it does is allocate an array with ever increasing size and then build a doubly-linked list in it.  Then it walks that list forwards then backwards.  The time it takes to walk the list is what is measured, not the construction.  And the times reported are divided by the number of items, so in each case you see the cost per item.  Each item is of course visited twice, so if you like the numbers are scaled by a factor of two.  And the number reported is in nanoseconds.
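
In outline, the measured part amounts to something like this (a sketch of the idea only; the names and the timer plumbing are mine, not the benchmark's actual code):

    #include <chrono>

    struct T { T *next; T *prev; int data; };   // baseline node layout

    static volatile int g_sink;   // keeps the sums live past the optimizer

    double WalkOnce(T *head, T *tail, int count)
    {
        auto begin = std::chrono::high_resolution_clock::now();

        int sum = 0;
        for (T *p = head; p != nullptr; p = p->next)   // forward pass
            sum += p->data;
        for (T *p = tail; p != nullptr; p = p->prev)   // backward pass
            sum += p->data;
        g_sink = sum;

        auto end = std::chrono::high_resolution_clock::now();
        return std::chrono::duration<double, std::nano>(end - begin).count()
               / (2.0 * count);                        // two visits per item
    }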

To make things more interesting, I also shuffle the items in the list so that they are not in their original order.  This adds some randomness to the memory access order.  To shuffle the data I simply exchange two slots a certain percentage of times, starting from 0% and then growing quickly to 100%.  100% shuffling means the number of exchanges is equal to the number of items in the list; that's pretty thoroughly mixed.

And as another datapoint I run the same exact code (always compiled for maximum speed) on x64 and then on x86.  It's the exact same machine, my own home desktop unit, which is a hexacore high-end workstation.

And then some additional test cases.  I do all that 4 different ways.  First with regular next and prev pointers and an int as the payload.  Then I add a bogus byte just to make the alignment totally horrible (and by the way, it would be interesting to try this on another architecture where alignment hurts more than it does on Intel, but I don't happen to have such a machine handy).  To try to make things better I add a little padding so that things still line up pretty well, and we see how this looks.  And finally I avoid all the pointer resizing by using fixed-size array indices instead of pointers so that the structure stays the same size when recompiled.
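
Reconstructing from the sizeof values reported below, the four node layouts are roughly these (my sketch; the names and the pack pragmas are assumptions chosen to reproduce those sizes):

    // 1) Baseline: 12 bytes on x86, 20 on x64 (4-byte packing keeps x64 at 20).
    #pragma pack(push, 4)
    struct TPointer { TPointer *next; TPointer *prev; int data; };
    #pragma pack(pop)

    // 2) Bogus byte: 13 bytes on x86, 21 on x64; the pointers are misaligned.
    #pragma pack(push, 1)
    struct TUnaligned { char bogus; TUnaligned *next; TUnaligned *prev; int data; };
    #pragma pack(pop)

    // 3) Extra padding: 16 bytes on x86, 24 on x64; pointers aligned again.
    struct TPadded { TPadded *next; TPadded *prev; int data; int padding; };

    // 4) Index-based: next/prev are array indices, 12 bytes on both targets.
    struct TIndex { int next; int prev; int data; };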

And without further ado, here are the results.  I've put some notes inline. 

 


Pointer implementation with no changes
sizeof(int*)=4, sizeof(T)=12
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    1.99,    1.99,    1.99,    1.99,    1.99,    1.99
     2000,    1.99,    1.85,    1.99,    1.99,    1.99,    1.99
     4000,    1.99,    2.28,    2.77,    2.92,    3.06,    3.34
     8000,    1.96,    2.03,    2.49,    3.27,    4.05,    4.59
    16000,    1.97,    2.04,    2.67,    3.57,    4.57,    5.16
    32000,    1.97,    2.18,    3.74,    5.93,    8.76,   10.64
    64000,    1.99,    2.24,    3.99,    5.99,    6.78,    7.35
   128000,    2.01,    2.13,    3.64,    4.44,    4.72,    4.80
   256000,    1.98,    2.27,    3.14,    3.35,    3.30,    3.31
   512000,    2.06,    2.21,    2.93,    2.74,    2.90,    2.99
  1024000,    2.27,    3.02,    2.92,    2.97,    2.95,    3.02
  2048000,    2.45,    2.91,    3.00,    3.10,    3.09,    3.10
  4096000,    2.56,    2.84,    2.83,    2.83,    2.84,    2.85
  8192000,    2.54,    2.68,    2.69,    2.69,    2.69,    2.68
 16384000,    2.55,    2.62,    2.63,    2.61,    2.62,    2.62
 32768000,    2.54,    2.58,    2.58,    2.58,    2.59,    2.60
 65536000,    2.55,    2.56,    2.58,    2.57,    2.56,    2.56
  Average,    2.20,    2.38,    2.86,    3.27,    3.62,    3.86
  Overall,    3.03

 

This is the baseline measurement.  You can see the structure is a nice round 12 bytes and it will align well on x86.  Looking at the first column, with no shuffling, as expected things get worse and worse as the array gets bigger until finally the cache isn't helping much and you have about the worst you're going to get, which is about 2.55ns on average per item.

The results for shuffling are not exactly what I expected.  At small sizes, it makes no difference.  I expected this because basically the entire table is staying hot in the cache and so locality isn't mattering.  Then as the table grows you see that shuffling has a big impact at about 32000 elements.  That's 384k of data.  Likely because we've blown past a 256k limit.

Now the bizarre thing is this: after this the cost of shuffling actually goes down, to the point that later on it hardly matters at all.  Now I can understand that at some point shuffled or not shuffled really should make no difference because the array is so huge that runtime is largely gated by memory bandwidth regardless of order.  However... there are points in the middle where the cost of non-locality is actually much worse than it will be at the endgame.

What I expected to see was that shuffling caused us to reach maximum badness sooner and stay there.  What actually happens is that at middle sizes non-locality seems to cause things to go very very bad...  And I do not know why :)

But other than that one anomaly things are going pretty much as expected.

Now let's look at the exact same thing, only it's now on x64

 

Pointer implementation with no changes

sizeof(int*)=8, sizeof(T)=20
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    2.28,    2.28,    2.28,    1.99,    2.28,    1.99
     2000,    2.28,    2.28,    2.56,    2.99,    3.13,    3.27
     4000,    2.28,    2.35,    3.06,    3.91,    4.84,    5.26
     8000,    2.28,    2.38,    3.27,    4.48,    5.90,    6.15
    16000,    2.36,    2.63,    4.12,    6.28,    8.53,   10.20
    32000,    2.36,    2.68,    5.30,    9.24,   13.40,   16.76
    64000,    2.25,    2.90,    5.50,    8.28,   10.36,   10.62
   128000,    2.42,    2.92,    4.86,    6.31,    6.49,    6.34
   256000,    2.42,    2.74,    4.25,    4.52,    4.43,    4.61
   512000,    2.75,    3.86,    4.31,    4.42,    4.56,    4.48
  1024000,    3.56,    4.82,    5.42,    5.42,    5.28,    5.21
  2048000,    3.72,    4.36,    4.64,    4.64,    4.66,    4.67
  4096000,    3.79,    4.23,    4.20,    4.23,    4.20,    4.23
  8192000,    3.77,    3.99,    3.98,    4.00,    3.99,    3.99
 16384000,    3.75,    3.88,    3.87,    3.87,    3.89,    3.89
 32768000,    3.78,    3.86,    3.83,    3.80,    3.81,    3.83
 65536000,    3.74,    3.80,    3.79,    3.81,    3.83,    3.82
  Average,    2.93,    3.29,    4.07,    4.83,    5.50,    5.84
  Overall,    4.41
  X64/X86,    1.46

 

Well, would you look at that... the increased data size has caused us to run quite a bit slower.  The overall ratio shows that execution takes 1.46 times as long.  This result is only slightly larger than what is typical in my experience when analyzing data processing in pointer-rich structures.

Note that it doesn't just get bad at the end; it's bad all along.  There are a few weird data points, but this isn't an absolutely controlled experiment.  For instance, the 1.99 result for 1000 items isn't really indicating that it was better with more shuffling.  The execution times are so small that timer granularity is a factor, and I saw it switching between 1.99 and 2.28.  Things get a lot more stable as n increases.

Now let's look at what happens when the data is unaligned.

 

Pointer implementation with bogus byte in it to force unalignment

sizeof(int*)=4, sizeof(T)=13
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    1.99,    1.99,    1.99,    1.99,    2.28,    1.99
     2000,    2.13,    2.13,    2.13,    2.13,    2.13,    2.13
     4000,    2.13,    2.13,    2.49,    3.06,    3.70,    3.91
     8000,    2.10,    2.17,    2.88,    3.88,    4.76,    5.33
    16000,    2.10,    2.20,    3.08,    4.21,    5.40,    6.17
    32000,    2.17,    2.39,    4.21,    6.92,   10.10,   12.83
    64000,    2.16,    2.46,    4.50,    6.74,    8.18,    8.62
   128000,    2.14,    2.45,    4.13,    5.19,    5.40,    5.41
   256000,    2.14,    2.41,    3.61,    3.78,    3.77,    3.77
   512000,    2.18,    2.51,    2.97,    3.12,    3.16,    3.11
  1024000,    2.45,    3.12,    3.44,    3.43,    3.46,    3.54
  2048000,    2.76,    3.30,    3.36,    3.35,    3.37,    3.36
  4096000,    2.75,    3.08,    3.05,    3.04,    3.07,    3.05
  8192000,    2.75,    2.90,    2.88,    2.90,    2.90,    2.90
 16384000,    2.75,    2.82,    2.82,    2.82,    2.82,    2.82
 32768000,    2.74,    2.78,    2.77,    2.79,    2.77,    2.78
 65536000,    2.74,    2.76,    2.75,    2.75,    2.76,    2.76
  Average,    2.36,    2.56,    3.12,    3.65,    4.12,    4.38
  Overall,    3.37

 

This data does show that things got somewhat slower.  But the data size also grew by about 8%.  In fact, if you look at the first column and compare the bottom row there, you'll find that amortized execution at the limit grew by 7.4%, basically the same as the data growth.  On the other hand, changes due to shuffling were greater, so that the overall index grew by 8.3%.  But I think we could support the conclusion that most of the growth had to do with the fact that we read more memory, and only a small amount of it had to do with any extra instruction cost.

Is the picture different on x64?

 

Pointer implementation with bogus byte in it to force unalignment

sizeof(int*)=8, sizeof(T)=21
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    2.28,    2.28,    2.28,    2.28,    2.28,    2.28
     2000,    2.42,    2.42,    2.84,    3.27,    3.70,    3.84
     4000,    2.42,    2.49,    3.34,    4.48,    5.55,    6.12
     8000,    2.56,    2.52,    3.70,    5.23,    6.40,    7.15
    16000,    2.61,    2.81,    4.85,    7.36,    9.96,   12.02
    32000,    2.53,    2.86,    5.80,   10.18,   15.25,   18.65
    64000,    2.53,    2.94,    5.88,    9.14,   11.33,   11.64
   128000,    2.53,    2.94,    5.41,    7.11,    7.09,    7.09
   256000,    2.57,    3.09,    5.14,    4.96,    5.07,    4.98
   512000,    3.21,    3.58,    5.29,    5.05,    5.14,    5.03
  1024000,    3.74,    5.03,    5.94,    5.79,    5.75,    5.94
  2048000,    4.01,    4.84,    4.96,    4.93,    4.92,    4.96
  4096000,    4.00,    4.47,    4.49,    4.46,    4.46,    4.46
  8192000,    3.99,    4.21,    4.21,    4.21,    4.06,    4.21
 16384000,    3.97,    4.08,    4.08,    4.07,    4.08,    4.08
 32768000,    3.96,    4.02,    4.02,    4.03,    4.03,    4.03
 65536000,    3.96,    3.99,    4.00,    3.99,    4.00,    3.99
  Average,    3.13,    3.45,    4.48,    5.33,    6.06,    6.50
  Overall,    4.83
  X64/X86,    1.43

 

The overall ratio was 1.43 vs. the previous ratio of 1.46.  That means the extra byte did not disproportionately affect the x64 build either.  And in this case the pointers are really crazily unaligned.  The same shuffling weirdness happens as before.

Unaligned pointers don't seem to be costing  us much.

What about if we do another control, increasing the size and realigning the pointers?

 

Pointer implementation with extra padding to ensure alignment

sizeof(int*)=4, sizeof(T)=16
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    1.99,    1.99,    1.99,    1.71,    1.99,    1.99
     2000,    1.99,    1.99,    2.13,    2.13,    2.13,    2.13
     4000,    2.28,    1.99,    2.49,    3.34,    3.70,    4.05
     8000,    1.99,    2.06,    2.74,    3.66,    4.59,    5.08
    16000,    2.04,    2.26,    3.16,    4.18,    5.32,    6.06
    32000,    2.04,    2.35,    4.44,    7.43,   10.92,   14.20
    64000,    2.04,    2.38,    4.60,    7.03,    8.74,    9.11
   128000,    2.03,    2.37,    4.24,    5.42,    5.58,    5.59
   256000,    2.05,    2.36,    3.66,    3.84,    3.83,    4.07
   512000,    2.22,    2.59,    3.15,    3.37,    3.10,    3.39
  1024000,    2.76,    3.81,    4.10,    4.09,    4.26,    4.18
  2048000,    3.03,    3.66,    3.83,    3.82,    3.78,    3.78
  4096000,    3.04,    3.42,    3.40,    3.43,    3.41,    3.42
  8192000,    3.06,    3.23,    3.24,    3.23,    3.24,    3.24
 16384000,    3.05,    3.15,    3.14,    3.14,    3.13,    3.14
 32768000,    3.05,    3.10,    3.10,    3.09,    3.10,    3.09
 65536000,    3.07,    3.08,    3.07,    3.08,    3.07,    3.08
  Average,    2.45,    2.69,    3.32,    3.88,    4.35,    4.68
  Overall,    3.56

 

Well, in this result we converge at about 3.07, and our original code was at 2.55.  Certainly re-aligning the pointers did not help the situation.  We're actually just 20% worse than the original number and 12% worse than the unaligned version.

And let's look at x64...

 

Pointer implementation with extra padding to ensure alignment

sizeof(int*)=8, sizeof(T)=24
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    1.99,    1.99,    1.99,    1.99,    1.99,    1.99
     2000,    2.13,    2.28,    2.70,    2.99,    3.70,    3.70
     4000,    2.20,    2.28,    2.99,    3.84,    4.55,    4.84
     8000,    2.42,    2.38,    3.34,    4.37,    4.98,    5.37
    16000,    2.45,    2.68,    4.55,    7.04,    9.71,   11.88
    32000,    2.46,    2.80,    5.43,    9.25,   13.48,   17.16
    64000,    2.42,    2.80,    5.46,    8.46,   10.37,   10.70
   128000,    2.40,    2.80,    5.00,    6.43,    6.55,    6.56
   256000,    2.51,    3.18,    4.92,    5.34,    5.00,    4.89
   512000,    3.90,    4.70,    5.97,    6.50,    5.63,    5.59
  1024000,    4.15,    5.24,    6.34,    6.28,    6.24,    6.33
  2048000,    4.32,    5.13,    5.28,    5.33,    5.34,    5.27
  4096000,    4.32,    4.78,    4.77,    4.81,    4.78,    4.79
  8192000,    4.29,    4.55,    4.55,    4.56,    4.55,    4.54
 16384000,    4.28,    4.42,    4.42,    4.43,    4.42,    4.42
 32768000,    4.30,    4.36,    4.37,    4.37,    4.38,    4.37
 65536000,    4.23,    4.38,    4.35,    4.34,    4.34,    4.33
  Average,    3.22,    3.57,    4.50,    5.31,    5.88,    6.28
  Overall,    4.79
  X64/X86,    1.35

 

Now with the extra padding we have 8-byte-aligned pointers; that should be good, right?  Well, no, it's worse.  The top end is now about 4.3 nanoseconds per item compared with about 4 nanoseconds before, or about 7.5% worse, having used more space.  We didn't pay the full 14% of data growth, so there are some alignment savings, but not nearly enough to pay for the space.  This is pretty typical.

 

And last but not least, this final implementation uses indices for storage instead of pointers.  How does that fare?

 

Standard index based implementation

 

sizeof(int*)=4, sizeof(T)=12
  shuffle,      0%,      1%,     10%,     25%,     50%,    100%
     1000,    3.41,    3.70,    3.41,    3.41,    3.41,    4.27
     2000,    3.41,    3.56,    3.41,    3.41,    3.41,    3.98
     4000,    3.41,    3.48,    3.63,    3.98,    4.41,    4.62
     8000,    3.41,    3.59,    3.88,    4.62,    5.23,    5.76
    16000,    3.43,    3.48,    4.02,    4.80,    5.76,    6.31
    32000,    3.50,    3.64,    5.10,    7.20,    9.80,   11.99
    64000,    3.48,    3.74,    5.41,    7.26,    8.52,    8.88
   128000,    3.49,    3.72,    5.10,    5.98,    6.17,    6.18
   256000,    3.48,    3.70,    4.66,    4.82,    4.83,    4.82
   512000,    3.52,    3.72,    4.13,    4.24,    4.14,    4.30
  1024000,    3.57,    4.25,    4.60,    4.59,    4.46,    4.43
  2048000,    3.79,    4.23,    4.37,    4.35,    4.36,    4.34
  4096000,    3.77,    4.05,    4.06,    4.06,    4.06,    4.07
  8192000,    3.77,    3.91,    3.93,    3.92,    3.91,    3.93
 16384000,    3.78,    3.84,    3.83,    3.83,    3.84,    3.84
 32768000,    3.78,    3.80,    3.80,    3.80,    3.80,    3.79
 65536000,    3.77,    3.78,    3.78,    3.78,    3.80,    3.78
  Average,    3.57,    3.78,    4.18,    4.59,    4.94,    5.25
  Overall,    4.39


Well, clearly the overhead of computing base-plus-offset addresses is a dead loss on x86, because there are no space savings for those indices: they are the same size as a pointer, so manipulating them is pure overhead.
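To make the trade-off concrete, here is a minimal hypothetical sketch of the two storage strategies (not the benchmark's actual code; names and layout are mine). Note how the index version must add each index to the pool's base address on every hop:

    #include <cstdint>
    #include <vector>

    struct PtrNode {                 // links double in size on x64
        PtrNode* next;
        PtrNode* prev;
        int32_t  payload;
    };

    struct IdxNode {                 // 12 bytes regardless of pointer width
        int32_t next;                // index into the pool; -1 terminates
        int32_t prev;
        int32_t payload;
    };

    // Pointer traversal dereferences directly...
    int64_t sum_ptr(const PtrNode* head) {
        int64_t total = 0;
        for (const PtrNode* p = head; p != nullptr; p = p->next)
            total += p->payload;
        return total;
    }

    // ...while index traversal pays for base-plus-offset math on every hop.
    int64_t sum_idx(const std::vector<IdxNode>& pool, int32_t head) {
        int64_t total = 0;
        for (int32_t i = head; i >= 0; i = pool[i].next)
            total += pool[i].payload;
        return total;
    }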

However... let's look at this test case on x64...


Standard index-based implementation

sizeof(int*)=8  sizeof(T)=12  (times in ns/item)

count \ shuffle      0%      1%     10%     25%     50%    100%
1000               3.41    3.41    3.41    3.98    3.41    3.41
2000               3.41    3.41    3.70    3.41    3.41    3.41
4000               3.41    3.48    3.63    3.98    4.34    4.76
8000               3.45    3.45    3.84    4.48    5.33    5.69
16000              3.48    3.57    3.98    4.78    5.71    6.28
32000              3.48    3.64    5.11    7.16    9.69   11.99
64000              3.48    3.73    5.37    7.20    8.47    8.84
128000             3.48    3.72    5.10    5.96    6.25    6.14
256000             3.49    3.69    4.66    4.83    4.82    4.88
512000             3.52    3.72    4.22    4.22    4.22    4.24
1024000            3.59    4.01    4.31    4.53    4.45    4.40
2048000            3.80    4.27    4.33    4.25    4.35    4.38
4096000            3.80    3.97    4.06    4.06    4.07    4.06
8192000            3.79    3.92    3.92    3.93    3.93    3.91
16384000           3.77    3.84    3.83    3.82    3.85    3.85
32768000           3.76    3.81    3.81    3.80    3.80    3.81
65536000           3.76    3.78    3.78    3.79    3.78    3.78

Average            3.58    3.73    4.18    4.60    4.93    5.17
Overall            4.37
X64/X86            1.00


And now we reach our final conclusion... At 3.76 the top end comes in at a dead heat with the x86 implementation. The raw x64 benefit in this case is basically zip. And actually this benchmark tops out at about the same cost per slot as the original pointer version, but it uses quite a bit less space (a 40% space savings). Sadly, the index manipulation eats up a lot of that savings, so in the biggest cases we only come out about 6% ahead.


Now of course it's possible to create a benchmark that makes these numbers pretty much whatever you want them to be, simply by adjusting the mix of pointer math vs. reading vs. "actual work."

And of course I'm entirely discounting all the other benefits you get from running on x64; this is just a memory-cost example, so take it all with a grain of salt. If there's a lesson here, it's that you shouldn't assume things will automatically be faster with more bits and bigger registers, or even more registers.

The source code I used to create this output is available here.


*The "Average" statistic is the average of the column above it.

*The "Overall" statistic is the average of all the reported nanosecond numbers.

*The x64/x86 ratio is simply the ratio of the two "Overall" numbers.

Categories: Blogs

Wow I love git-tf!

Rico Mariani's Performance Tidbits - Sat, 09/27/2014 - 00:10

I switched to git about 3 years ago because the portability was so great. Moving work between computers couldn't be easier. But when I did that, I lost all my TFS history from the TFS Express instance I had been using up until then: 4 years of useful history.

Last week I saw this git-tf thing, so I restored my old TFS databases on a box, put Team Foundation Server Express on that puppy, and did an in-place upgrade. Bam, up it comes, as fast as you can say "I wish I had a faster internet connection."

5 minutes later I had a git clone of my old stuff.

Then I just added a remote, did a fetch, and rebased my whole master branch on top of my old history.
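For anyone wanting to repeat the trick, the sequence was roughly the following. This is a sketch: the server URL, project path, and remote name are placeholders, and git-tf's --deep option fetches the full changeset history rather than a single snapshot.

    # clone the upgraded TFS project into a fresh git repo, with full history
    git tf clone http://localhost:8080/tfs/DefaultCollection $/MyOldProject --deep

    # from the existing git repo: graft the old history underneath master
    git remote add tfs-history ../MyOldProject
    git fetch tfs-history
    git rebase tfs-history/master master   # replays master onto the old tip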

And just like that it's all back!

OK, my commit IDs are all different now, but that's fine; it's just me anyway.

I pushed it all up to my Visual Studio account using the VS Git integration, and now all of it is backed up.

I feel like someone just returned a lost pet to my house :)

Categories: Blogs

Reacting to Shellshock

Kloctalk - Klocwork - Fri, 09/26/2014 - 22:22

The code security industry is reeling from news that a flaw in the widely used GNU Bash shell, dubbed Shellshock, could enable attackers to hack into vulnerable systems around the world. There have already been reports of exploits seen live, and industry experts are trying both to combat the problem and to quantify its impact. It already has four entries in the US National Vulnerability Database, covering similar flaws found after the original one, CVE-2014-6271.

Interpreting Bash

While Bash (the Bourne-again shell) has been adopted and installed on many computers for over twenty years, it's not surprising that the problem wasn't discovered sooner: it's the result of a type of security flaw called command injection, buried in a fairly obscure feature of Bash command processing. Here's an example of the exploit in action:
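On a vulnerable system, the test looks something like this (reconstructed from the description below):

    $ env COLOR='() { :;}; echo vulnerable' bash -c "echo I hate colors"
    vulnerable
    I hate colors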

Here, the env command sets an environment variable COLOR that contains an empty function definition, () { :;};, followed by the command echo vulnerable, and then launches a second Bash shell to print the string "I hate colors" using echo. If Bash were working properly, "vulnerable" would never be printed, since it's buried inside the COLOR environment variable, which is never used. Because Bash is flawed, it treats the string after the function definition, echo vulnerable, as a real command and executes it.

Not a big deal in this example but if an attacker were to run a malicious command instead, bad things could happen. Since Bash can also be used to invoke other programs, an attacker can remotely impact unprotected systems anywhere.

Preventing the problem

This flaw falls under the Common Weakness Enumeration (CWE) type CWE-78, or “Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’).” This category covers flaws that are the result of improper or incorrect neutralization of elements that could modify an intended operating system command, as we saw above.

While the industry is scrambling to understand and fix the problem in the wild, it's good to know that these types of flaws are easily detectable by static code analysis. In the case of Klocwork, these flaws are detected by three checkers as the developer is writing the code (a small sketch of the vulnerable pattern follows the list):

NNTS.TAINTED – finds code that uses string manipulation functions with character arrays that may not be null terminated, resulting in potential buffer overflows and security problems

SV.CODE_INJECTION.SHELL_EXEC – finds code that accepts command lines that are influenced by external input, resulting in the execution of potentially malicious commands

SV.TAINTED.INJECTION – finds code that doesn’t validate input from the user or outside environment, potentially resulting in the execution of arbitrary commands, unexpected values, or altered control flow
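As a rough illustration (mine, not Klocwork's), the classic shape of CWE-78 that a checker like SV.CODE_INJECTION.SHELL_EXEC is designed to flag looks like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of CWE-78: externally influenced input flows into a shell
       command with no neutralization of special characters. */
    int main(int argc, char** argv)
    {
        char cmd[256];
        if (argc < 2) return 1;

        /* argv[1] is attacker-controlled: passing "x; rm -rf ~" appends a
           second command that the shell will happily execute. */
        snprintf(cmd, sizeof(cmd), "ls %s", argv[1]);
        system(cmd);   /* tainted data reaches the shell here */

        return 0;
    }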

In our next article, we’ll take a look at an actual example of this and see how Klocwork reports the flaw. Can you guess which of the above checkers finds problems related to environment commands?

Learn more:

• Read this white paper to understand injection flaws and how to prevent them (PDF)

• See all the CWE weaknesses that Klocwork detects

Categories: Companies

Top Tweets from CITCON 2014

uTest - Fri, 09/26/2014 - 22:08

This past weekend, Croatia played host to CITCON (Continuous Integration and Testing Conference), the Continuous Delivery conference that pre-dates the term “Continuous Delivery.” CITCON brings people together from every corner of the software development industry to discuss Continuous Delivery and the practices that go along with it.

Here are the top 5 tweets from attendees using #CITCON:

#citcon Zagreb starts tonight! We toured the venue yesterday & it is perfect! Entrance on pic. Go up to Floor 2. pic.twitter.com/u7KO7M0QEy

— citcon (@citcon) September 19, 2014

Citconers from some 10 countries from all over the world are presenting themselves on #citcon Zagreb pic.twitter.com/ySPjOMPC6T

— Davor Banovic (@banovotz) September 19, 2014

#citcon topics: gems like "turning over the inverted test pyramid", "prioritizing and managing large amounts of tests", "how to be awesome"!

— PaulJulius (@PaulJulius) September 19, 2014

Current state of the #citcon schedule. pic.twitter.com/oBv8ovfWGq

— Jeffrey Fredrick (@Jtf) September 19, 2014

@pauljulius at #citcon : "oh, you're using feature branching? Too bad you're not doing continuous integration"

— Douglas Squirrel (@douglassquirrel) September 20, 2014

And while CITCON may have just wrapped up, there’s plenty more on the horizon in the testing circuit. Be sure to check out uTest’s Events Calendar for the latest conferences in 2014 and beyond.

Categories: Companies

One Week in With the iPhone 6: An Average Joe’s Review

uTest - Fri, 09/26/2014 - 20:23

I’m not a tester in my day job, and I don’t claim to be — I leave that to our great community of 150,000+ testers (my middle name is indeed Joseph, though, so I can make the Average Joe claim without feeling ashamed of lying to you).

That being said, I enjoy technology as much as our testers do, many of whom have already snapped up the iPhone 6 for testing on customers’ apps hungry for validation of their iPhone 6 optimizations. I, too, was eager to get my hands on the iPhone 6, albeit for different motives.

I set my alarm for 2:45 AM ET a couple of weeks ago, got a cup of hot coffee brewing, and flexed my fingers over the keyboard in anticipation of a mad rush of folks pre-ordering. I pre-ordered the 64 GB Space Gray model of the standard iPhone 6, and it arrived on my doorstep last Friday. Here are my thoughts one week into the much-ballyhooed launch.

The Design

OK, so #Bendgate, in my opinion, has been blown way out of proportion. Apple even alluded to the fact that there have only been about 9 real support calls about it, which leads me to believe that the same social media posts about #Bendgate or #Bendghazi are being recycled over and over again. Is there a problem with some of the iPhone 6 Plus models? Sure. But not at the levels one may think.

In short, the iPhone 6 is big and beautiful — a natural extension of the iPhone line and a worthy successor to the 5 (my previous phone). Although it felt awkward at first, it made the 5 look like a child’s toy and had me wondering what I was previously doing with such a small phone. It does look prone to breaking, however, so I will be investing in a case very soon. That being said, this is industrial design at its finest — the aluminum back with curved glass, forsaking the sharpness of the previous design, makes this feel like a “premium” product in my hands.

The Battery Life

This is an area I was definitely disappointed in — I’m not a power user by any means, but I still found that the battery barely eked out more time than my two-year-old iPhone 5 after an evening of moderate use of music and video, along with email. I wouldn’t have minded a bit more thickness in the phone (thinness isn’t a selling point for me, anyways) for the sake of a far better battery.

iOS 8

I’m going to be the first to admit that I haven’t explored the nuances of iOS 8 in depth yet, but many have, and because of that, it’s obviously a good time for app developers to get on the testing bandwagon. 30.95% of all iOS traffic is now coming from iOS 8 just five days after its official release. That being said, some of the small nuances I have noticed have been useful, if not enjoyable, given that this release has gone out with far less fanfare than the flat design overhaul of 7 in September 2013.

In particular, the suggested words that pop up when sending messages or typing lines of text within an app proved to be less annoying than I thought they would be. When typing a message to my brother about the band “Cage the Elephant,” before I had finished my thought, the keyboard was smart enough to suggest ‘Elephant,’ making for a quicker message.

The double-tap of the home button for easier “reachability” to items at the top of the screen proved far less useful for me on the standard 6 — I anticipate those with the “phablet” version of the 6 will find this quite useful since that phone is the size of a small child.

Beyond my limited personal experience with 8, the latest version of iOS is already undoubtedly a developer’s playground (one that will also require rigorous testing for new apps). I cannot wait for some of these apps and new features as the months move on, especially Apple Pay, which could revolutionize payments with Google Wallet still failing to completely break through to the mainstream. I also want to explore the interoperability between apps a lot more in iOS 8, something that is supposed to surpass even Android in that department.

Was this thing tested?

As I’ve already mentioned, the bent phone fiasco was blown way out of proportion. However, the software side of things hasn’t been exactly smooth for a company that has made its mark with a smooth user experience. The recent iOS 8.0.1 botched release was pretty embarrassing for Apple, necessitating 8.0.2 a mere day later. I also notice, still after a week, a nagging issue where my Wi-Fi is cutting out and I manually have to reconnect to the network. Additionally, in SMS text messages from iOS 7.0 users, I am unable to receive attachments sent to me (if only there was some little company that I knew of that could have helped out with some in-the-wild testing).

In short though, I have no doubt in my mind that Apple will overcome these hiccups in future software releases. They have a track record of hiccups, anyways — “Antennagate,” Apple Maps bringing folks to a city destination located in the depths of the Atlantic Ocean, etc. But never has one of these things been such a black mark that it has rendered these phones failures, and the 6 is no different — it is a beautiful representative of the iPhone line, and the best smartphone out there.

I haven’t scratched the surface of what is possible with the 6 one week in, so needless to say I am not yet bored at this early juncture. Only time will tell, however, if features like NFC technology for Apple Pay — the phone feature for which I am most excited — will pan out like I envision.

What are your thoughts on the iPhone 6 or 6 Plus? Let us know in the Comments below.

Categories: Companies

HPC delivers predictive analytics benefits

Kloctalk - Klocwork - Fri, 09/26/2014 - 15:05

Predictive analytics can prove incredibly valuable for businesses in virtually every sector. These solutions enable superior decision-making and long-term strategizing, delivering a major competitive edge. For these efforts, big data is critical. Companies must discover insight from tremendous quantities of unstructured and semi-structured information.

To this end, high performance computing is critical. As TechTarget contributor Bill Claybrook recently highlighted, HPC tools, when combined with raw data, can yield sophisticated, useful predictions for organizations.

HPC and big data analytics
As Claybrook pointed out, HPC systems are specifically designed to work with large amounts of information, as well as to provide accurate models and simulations.

This makes HPC ideal for predictive analytics efforts, the writer explained. After all, big data analytics is essentially the application of advanced analytics techniques to unstructured data sets, and HPC is a more powerful version of that very same process.

Not that these technologies are identical. Writing for Midsize Insider, Jason Hannula noted there are a number of key, fundamental differences between HPC and the type of analytics performed in traditional big data environments. Ultimately, though, the technologies are similar enough to reasonably apply HPC to big data, and therefore glean predictive analytics as a result.

Adapting to HPC
To enjoy the benefits offered by HPC-based predictive analytics, though, firms must take a number of important steps.

For starters, Hannula pointed out that companies must acquire and implement enhanced memory storage. Such resources are needed in order to support HPC's greater pattern-processing capabilities, especially in an environment defined by large data volumes moving at high velocities.

The writer also noted that companies must invest in the appropriate human resources to fully utilize high performance data analytics. There are specific skill sets necessary to successfully implement and manage these computing solutions, and these skills are not frequently found in a company's existing IT department. In most cases, decision-makers will need to seek out and hire IT professionals with robust experience utilizing such technologies if they want to maximize the value of their predictive analytics efforts.

On a related note, Hannula argued that personnel from the business side of operations must also be heavily involved in the HPC solutions. Leaving these responsibilities entirely in the hands of the IT department will inevitably undermine potential value.

"[T]he transition to predictive analytics requires the continuous involvement of business experts to validate data relationships and early-phase results," Hannula wrote. "HPDA may seem to be an IT-centric functionality, but the implementation is to serve a business need for predictive results, and the business area must be an active partner."

Finally, organizations interested in pursuing HPC for predictive analytics must ensure they have the right support tools in place. For example, firms need to invest in debuggers specific to HPC to minimize program complexity and ease the way for future technology migrations. Without such dedicated solutions, the HPC system becomes far more difficult to manage, which consequently limits the value of the predictive analytics produced.

Categories: Companies

Knowledge Sharing

SpiraTest is the most powerful and affordable test management solution on the market today