Feed aggregator

TestTrack Web Challenge Part 3: Pro Tips

The Seapine View - Mon, 12/29/2014 - 13:00

Well, here it is, folks. The one you’ve been waiting for. Part 3 of my TestTrack Web challenge. I know for a fact that, had there been a physical line, people would have lined up to read this like they lined up for the iPhone 6!

To be honest, the tips I am about to share aren’t really anything new. They are, however, transformative if you decide to take the TestTrack Web challenge and only use the web client for your daily tasks.

When I discovered some of these tips it was like that ‘yes moment’ when everything changes and you can see clearly!


So, enough bloat. Here’s the breakdown of my favorite tips that will improve your TestTrack Web experience just as they’ve improved mine:

ProTip #1: Use bookmarks


I’m the dog getting smacked by the cat of coolness, AKA ‘Bookmarks’

OK, I know this seems like a no-brainer, but before I thought of it, I was completely at the mercy of using the left navigation for everything. Don’t get me wrong, this gets you to testing, issue tracking, requirements, folders, and reports. However, it can sometimes take a few extra clicks to get to where you’re going.

For example, if I’m viewing the Folders list and a co-worker asks me about a specific requirement, I have to: 1) Click the Requirement link, 2) Click the Requirements list tab, 3) Search for the requirement or filter the list to what I want to view, 4) View/edit the requirement to see the data. That’s around 3-4 clicks just to view an item for a five-minute conversation. Then, I have to travel back to the Folders list to view what I was looking at previously. Talk about a time killer. In the native client, I simply view the Requirements list and I’m right there. If I have a toolbar set up, this process takes 1-2 clicks max!

As I’ve always said, “The TestTrack Web client is not lacking in features. It has different features.” Here is where bookmarks come in. Using your web browser, you can create bookmarks for pretty much every window location in the TestTrack web client. Using bookmarks, I quickly move between filtered item lists to see exactly what I want to. Here’s my bookmark dropdown:


ProTip #2: Write test cases on the Text tab

OK, we’ve got to have a talk. TestTrack has two different ways to write test cases in the native client. The first way uses the default ‘grid view’. This is the respectable way: it gives you access to everything in a clear ordered table. All you see are your steps and that’s it.


But if you haven’t noticed, I like to live life on the wild side. That’s why I use the ‘text view’.


This view allows me to quickly write out all of my steps and expected results. No more buttons for adding steps and comments. Nope, I’m freestyling it using my quick set of step codes:


These codes are always available on the Text View tab in test cases. Simply expand the ‘Show markup codes’ control and start living life on the edge. Plus, this makes copying and pasting an entire group of steps super easy!

ProTip #3: Use item reports

I know I’ve mentioned reports in the past when dealing with writing test cases, but that doesn’t mean they can’t be used for other purposes. Just today, I was executing a test run and noticed that the Description field had a lot of information that was difficult to view in the normal view:


I couldn’t scroll horizontally without scrolling down and away from the data at the top of the field, which made viewing rather hard. While I could view the entire WYSIWYG field in an expanded editor, I felt this approach would become time consuming when switching between multiple test runs. The ‘better way’ of viewing multiple items at once, especially with large amounts of data in a specific field, is to use the Reports option for a selected item:


The generated report allows quick scrolling between items and helps me decipher which test run I want to run first:


ProTip #4: Have an open mind

I’ve been using the web client as my primary way of working in TestTrack for the past several months now, and I cannot recommend it enough. TestTrack Web is different from the native client. So, just because you could do a task in the native client one specific way does not mean that will be the most effective way to perform that task using the web client. Because TestTrack Web is different, you will need to perform basic daily tasks differently. At first, it can be frustrating making the changes needed to be efficient in the web client. However, if you take the time to do this, I can honestly say you will not regret it!


Categories: Companies

What Do You Mean by Agile Tester?

Testing TV - Fri, 12/26/2014 - 18:44
What makes an Agile tester ‘Agile’? Is there really that much difference between agile and non-agile software testing? This session takes a look at the experiences of a tester embedded into an Agile team and examines the skills an Agile tester should have in their arsenal. Video producer:
Categories: Blogs

How Spotify Tests in Continuous Deployment

Software Testing Magazine - Fri, 12/26/2014 - 18:36
What kind of software testing do we need to have in a continuous deployment pipeline? What does it take to push a commit all the way to deployment? Fully automated. Several times a day. Model-based testing has played a key role at Spotify as part of the automated end-user acceptance testing. The talk presents how Spotify built the Continuous Deployment pipeline for its web player and changed the way manual testing was performed. The tools involved, which are mostly open source (Gerrit, Selenium, GraphWalker), will also be discussed. Video producer: ...
Categories: Communities

CopenhagenContext, Copenhagen, Denmark, February 26-27 2015

Software Testing Magazine - Fri, 12/26/2014 - 17:15
CopenhagenContext is a three-day conference taking place in Copenhagen and focused on software testing, more precisely on Context-Driven Testing, though the topics presented are not limited to this area of testing. The first and the last day are dedicated to full-day workshops. In the agenda of CopenhagenContext you can find topics like “Inspiring Context-Driven Testing”, “Pair-Wise Testing Explained”, “Review by Testing: Analyzing a Specification by Testing the Product”, “Take 5 on Team Dysfunction”, “Test Cases are Not Testing: Toward a Performance Culture”, “When to let go – An automation ...
Categories: Communities

What’s On Testers’ Holiday Wishlists?

uTest - Wed, 12/24/2014 - 15:00

The holidays are here, and you know what that means. Plenty of forced conversations with extended family and awkward exchanges with that one aunt who always has a little too much spiked egg nog. But the holidays don’t have to be quite so frightful (unlike the weather outside). In fact, they are a joyous and festive time for many people — especially our testing community.

On that note, our uTesters recently discussed some of the tech gifts on their wishlists this holiday season. Here’s our community, in their own words on what’s on these lists:

  • I always wanted to have a big TV at home, and I just spotted a bargain smart TV (they’re much more awesome than I even thought), along with a PS4! This will be a very unproductive Christmas break.
  • Thanks to uTest, I have been able to buy an iPhone 6 for my wife and a new iPad Air 2 for myself
  • I was looking for a Samsung curved UHD, because I’ve always wanted a big TV screen; Also some console like PS4 or Xbox One
  • I just bought an Android tablet on black Friday as a gift to myself
  • I never had a Mac, so I would love a MacBook or an iMac. Unfortunately, Santa was lazy and didn’t make enough money this year, so maybe the Easter Bunny can bring it.

Some testers were more practical:

  • My wife and I bought a new bed earlier this year that we targeted as our “big” present to ourselves
  • A tester also requested a buoyancy control device (BCD) so that he can “pretend to be Jacques Cousteau while exploring the depths of the sea”

Finally, one gift this holiday season was one that you just couldn’t put a price tag on — a uTester from abroad got an early wish granted by becoming a permanent resident of the United States.

Testers out there — what is on your holiday wishlist this season?

Categories: Companies

Mobile and the Gartner AADI Summit 2014

HP LoadRunner and Performance Center Blog - Wed, 12/24/2014 - 03:04

Las Vegas always has something new going on: a new concert, a new show, new entertainment or even a new fancy tower. I found this to be especially true when I was there recently for the Gartner Application Architecture, Development & Integration (AADI) Summit 2014.


Keep reading to find out the latest in mobile and how it applies to a new hotel in Vegas that will never open. 

Categories: Companies

Happy Holidays from HP StormRunner Load. 3 new presents.

HP LoadRunner and Performance Center Blog - Tue, 12/23/2014 - 23:06

I don’t know about you, but I’m quickly wrapping up 2014. What an AWESOME year it has been! The spirit of innovation and new technology is everywhere. Here’s one highlight that I want to share with you:


Since launching StormRunner Load in October, we’ve been busy working to make sure you have the ideal platform for massive cloud and agile testing. Here are a few highlights of the NEW features you will find in StormRunner Load.

Categories: Companies

uTest Debuts Projects Board, New Profile Design

uTest - Tue, 12/23/2014 - 20:51

As we outlined in our Town Hall Meetings last week, we’ve been busy making some big improvements on the uTest site and we’re happy to tell you about the recent enhancements we rolled out this week.

Paid Projects Board

uTesters may be familiar with using the uTest Forums for finding active Paid Projects that have unique requirements. We debuted the Projects Board to streamline the search and application process, and to easily call out which projects are most urgent.

  • Featured Listings will appear at the top of the main page. These projects have urgent deadlines or have very unique requirements.
  • Project Dates are an estimation of the start and end dates of the project. As you know, test cycle dates can slip for a variety of reasons, but the start/end dates are meant to give you an idea of when a project will run.
  • Tags are visible on each listing and can be used to further search for listings with that same tag.

Remember, only uTesters with Expanded Profiles are eligible for Paid Projects. Need to expand your profile? Learn how in uTest University.


Enhanced Profile Experience

The uTest Profile also got a UX/UI upgrade and now includes customizable features. You can connect your social profiles to your uTest profile so that fellow testers can connect with you on social media as well. We’ve also made it easier to recommend or to request a recommendation from a fellow uTester. If you haven’t visited your profile lately, swing by and see what’s new.

Don’t forget: Check out our Leaderboard or our Search page if you are looking for uTesters to help fill your network activity feed.


Revamped Home Page

Last, but not least, our new home page is designed to get you where you want to go. We also have a new Getting Started page for testers who are new to uTest or who have yet to join us. Be sure to check out our site and let us know what you think in the comments below!


Categories: Companies

AutoMapper 3.3 feature: Projection conversions

Jimmy Bogard - Tue, 12/23/2014 - 20:42

AutoMapper 3.3 introduced a few projection features to complement the mapping capabilities available in normal mapping. Just like you can use ConvertUsing to completely replace the conversion between two types, you can also supply a custom projection expression to replace the mapping expression for two types during projection:

Mapper.CreateMap<Source, Dest>()
    .ProjectUsing(src => new Dest { Value = 10 });

Occasionally, you don’t want to replace the entire mapping, but there’s just a constructor you need to have some custom logic in:

Mapper.CreateMap<Source, Dest>()
    .ConstructProjectionUsing(src => new Dest(src.Value + 10));

AutoMapper can automatically match up source members to destination constructor parameters, but if things don’t quite line up or you need some additional logic, you have a place to do so.

Additionally, AutoMapper can automatically convert types to strings (by just calling .ToString()) any time it sees a string on the destination type:

public class Order {
    public OrderTypeEnum OrderType { get; set; }
}

public class OrderDto {
    public string OrderType { get; set; }
}

var orders = dbContext.Orders.Project().To<OrderDto>().ToList();

If you need to do more custom logic for a string conversion, you can either supply a global conversion (ProjectUsing) or a member conversion (MapFrom).
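For instance, a member-level string conversion might look like this (a minimal sketch reusing the Order/OrderDto types above; whether a given expression can run inside the database depends on your LINQ provider):

Mapper.CreateMap<Order, OrderDto>()
    .ForMember(dest => dest.OrderType,
               opt => opt.MapFrom(src => src.OrderType.ToString().ToUpper()));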

Finally, if you have some custom projection expansions and you only want to expand certain members on the destination, you can supply a series of projection expressions and only those explicitly specified members will be included in the resulting expansion:

dbContext.Orders.Project().To<OrderDto>(
    null, // parameters
    dest => dest.Customer,
    dest => dest.LineItems);

// or string-based
dbContext.Orders.Project().To<OrderDto>(
    null, // parameters
    "Customer", "LineItems");

I would usually recommend against doing this, but I’ve seen cases such as OData where this sort of scenario is expected. In the next post, I’ll cover one of the more interesting features in 3.3 – parameterized projections. Enjoy!


Categories: Blogs

New uTest Platform Features: Bug Reporting and Communication Enhancements

uTest - Tue, 12/23/2014 - 18:12
“Quality is not an act — it’s a habit.”


New Features: +1

The nature of uTest testing is such that the tester who files a bug report first receives the credit. This can be frustrating for testers who invest the time to find a bug only to see that it has been previously reported, but on a different environment. While the tester who files the bug first will continue to receive the credit, we want to increase the collaborative nature of the testing experience and allow testers to +1 one another’s bug reports.

We are pleased to announce that today we released a +1 Feature to the uTest Platform for testers on paid projects. This feature allows testers to confirm the reports of others by selecting one or multiple environments on which they reproduced the issue. They can also enter an optional comment with clarifications.



In addition to fostering a more collaborative testing team, we also hope that testers will use this feature to increase the quality that uTest delivers to its customers by demonstrating which issues have the highest instance of reproducibility.

Feature Improvements

Copy & Paste Attachments: Testers can now copy and paste screenshots directly from their ‘clipboard’ into their bug reports. This will make it much easier to add screenshots when testing web and desktop applications. Note: The feature is only available in Chrome at this time.

Browser Tab Notifications: When testers open a test cycle in the Platform, they will now see the number of unread messages from that cycle in the tab of their browser.


Currency Conversions: Testers are now able to convert their current payout and their payment history to the currency of their choice via the Settings page. Two important things to note about this feature: First, testers will still be paid in US dollars. This conversion will just show how much was earned in the local currency. Second, conversions of earnings will be at today’s rate rather than historical, so converting past payments will be a bit skewed depending on the currency’s historical valuation.


Chat Notifications: Testers now have the option of enabling their computer to make a ‘beep’ sound when they receive new chat messages. This can be activated or disabled via the Settings page.


If you like what you see, feel free to leave your comments below, or share your ideas on these and other recent platform updates by visiting the uTest Forums. We’d love to hear your suggestions, and frequently share this valuable feedback with our development team for future platform iterations!

Categories: Companies

We Automated 12 Android Phones To Sing ‘Appy Holidays To You [VIDEO]

Sauce Labs - Tue, 12/23/2014 - 17:56

With Holiday Season and the close of the fiscal year approaching, we brought in all of our remote workers to spend some time together. We had the obligatory corporate holiday party, a late-night LAN party, and countless, invaluable meetings that laid the foundation for software development projects in the new year.

Our VP of Engineering also arranged our first hackathon, in which we submitted ideas for programming projects that were tangentially related to our business. To participate, we were broken into teams and powered through our ideas and execution over two days.

Some of our developers have been interacting with Android phones, but haven’t had a chance to play with them. Thus, I ended up on a team of three with the goal of getting multiple Android devices to sing in harmony.

We’re happy to share the result of our efforts with this cheerful holiday video.

See the original video here. Follow the conversation on Reddit here and here and on HackerNews here.

So, how does it work?

We went with a pretty quick-and-dirty approach, seeing as this was a two-day event. Having multiple devices sync musical notes in realtime was definitely out of the question, so we opted to see if we could coordinate the phones to begin playing their individual parts of a song at the same time.

Android devices are finicky and difficult to coordinate; many of their operations take varying amounts of time, even on identical devices. Simply pushing a song to each of them and telling them to play results in some phones playing within a second while others can be up to four seconds behind.

Here’s what we did:

@classam wrote a python script which takes any MIDI file and distributes its separate instrumental parts onto an arbitrary number of tracks. It’s pretty neat: specify just two tracks, and half the parts of the song will be played on one track, and half on the other. If you specify more tracks than available parts, it copies the more significant parts onto the leftover tracks so every phone will feel like it’s a valued member of the choir.

I wrote an Android app which runs on each phone and listens on a socket. When it gets the ‘GO’ command over the USB cable, it plays the music file it’s been given. It also displays a musical visualization of the sound it’s playing so you can match up the phones to the sounds you hear.

@etmillin wrote a javascript program which finds all the Android devices connected to a computer, gets a connection to the song App running on each one, and sends the ‘GO’ command to all of them at once.

When you glue our three pieces together, the following happens:

  • A MIDI song gets broken into parts, one for each phone
  • The parts of the song get uploaded, one to each phone
  • The song-playing app gets started on all the phones
  • The ‘GO’ command gets sent to all the phones at once

Now they all play together!
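For the curious, here is a rough sketch of that last piece in C# (the real coordinator was written in JavaScript; the port number and the ‘GO’ newline framing are assumptions for illustration):

using System.Diagnostics;
using System.Linq;
using System.Net.Sockets;
using System.Text;

class GoBroadcaster
{
    const int AppPort = 7777; // assumed port the song app listens on

    static string[] AttachedSerials()
    {
        // Parse `adb devices` output into a list of device serials.
        var psi = new ProcessStartInfo("adb", "devices")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var adb = Process.Start(psi))
        {
            var output = adb.StandardOutput.ReadToEnd();
            adb.WaitForExit();
            return output.Split('\n')
                         .Skip(1) // skip the "List of devices attached" header
                         .Where(line => line.Trim().EndsWith("device"))
                         .Select(line => line.Split('\t')[0])
                         .ToArray();
        }
    }

    static void Main()
    {
        var serials = AttachedSerials();
        var clients = new TcpClient[serials.Length];

        // Forward one local port per phone over USB and open every connection first...
        for (var i = 0; i < serials.Length; i++)
        {
            var local = 20000 + i;
            Process.Start("adb", string.Format(
                "-s {0} forward tcp:{1} tcp:{2}", serials[i], local, AppPort)).WaitForExit();
            clients[i] = new TcpClient("127.0.0.1", local);
        }

        // ...then fire 'GO' in a tight loop so the start times line up.
        var go = Encoding.ASCII.GetBytes("GO\n");
        foreach (var client in clients)
            client.GetStream().Write(go, 0, go.Length);

        foreach (var client in clients)
            client.Close();
    }
}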

Other hackathon projects included experimenting with different types of network messaging solutions, DevOps management tools, and ECMAScript 7, contributing to open source projects like Travis CI, and building hardware which displays the state of our continuous integration build.

Be warned: we didn’t polish the code for this blog post, but you can find it on GitHub:

-Jonah Stiennon, Ecosystems, Sauce Labs

Want to work at Sauce Labs? Submit your resume!

Categories: Companies

Webinar Q&A: Continuous Delivery and Pipeline Traceability with Jenkins and Chef

Thank you to everyone who joined us on our webinar; the recording is now available.

Because of technical difficulties in recording this webinar, here is another recording covering the missing information.

And the slides are here.

Below are the questions we received during the webinar Q&A:

Q: Has Chef-client output come out on console or have specific things been captured in this traceability case?
A: I just wanted to show the generated report JSON at the very end of the Chef run. By default, the output is usually redirected to /var/log/chef/client.log, but if you run Chef-client manually you’ll see the output on stdout, of course.

Q: On one slide, it was mentioned to use Environments to version recipes. As a best-practice question, does that slide suggest that environments should be of the form "appName-Version" and applied to nodes? It points to the cookbooks that are versioned, so that would be different from using environments like "test", "dev", etc.
A: For clarity, the recommendation is to use Environments to pin specific cookbook versions to a particular subset of nodes. You can still use environment names like “test”, “dev”, etc. For example, I have “mycookbook” version 1.2.3. I want to roll out “mycookbook” version 1.3.0. Set your CI job to first update “dev” with ‘cookbook “mycookbook”, “= 1.3.0”’, while “test” has ‘cookbook “mycookbook”, “= 1.2.3”’. When you are ready to promote this change from dev to test, the CI job that promotes this to test should set the definition for the “test” environment to ‘cookbook “mycookbook”, “= 1.3.0”’.
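For instance, the “test” pinning above can live in a small environment file that the CI job rewrites on promotion (a sketch in Chef’s JSON environment format, using the example names from this answer):

{
  "name": "test",
  "description": "Pinned until the CI job promotes mycookbook 1.3.0",
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "cookbook_versions": {
    "mycookbook": "= 1.2.3"
  }
}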

Q: What licensing restrictions are there on using open-source Chef in the enterprise? Is it just the enterprise features?
A: You can use OSS Chef for free, no restrictions. In fact, you can also demo the Enterprise paid features from OSS. You can use all the Enterprise features for under 25 nodes for free. For more than 25 nodes, you have to license the Enterprise features.

Q: Do we have a cookbook for the Weblogic server ?
A: You can see all the cookbooks shared in Supermarket at but you can also find many others not on Supermarket via GitHub. There is a weblogic cookbook currently on the Chef Supermarket site.

Q: Would a simple app code only change result in a change to a cookbook (tweak version of app to deploy?) and also trigger app unit / acceptance tests during the Test Kitchen execution?
A: You don't *have* to structure change that way, but I'd recommend it. I find it's easier for auditing to discover what happened, where and when. You could skip those tests, but I wouldn't recommend it unless you're in some emergency breakfix scenario.

Q: Can you recommend a Jenkins plugin for pipeline?
A: For the build pipeline plugin, I'd recommend Build Flow. You could start with something simpler like the Build Pipelines plugin, but that doesn't allow for concurrency. You might not need concurrency. But you might. :-) So I'd recommend starting there.

Q: How do you decide what version to automatically increment in the Jenkins Build Job?

A: Your cookbooks are tagged with versions in them: X.Y.Z (major.minor.patch). If you submit any new change without rolling X or Y, the pipeline just increments Z based on the last known good version. If you roll X or Y, Z resets to zero and the CI job manages versions from there. This makes Z in whatever source you submit effectively useless. The pipeline owns Z, you own X and Y.
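In other words, the promotion job can compute the next version mechanically. Here is a minimal sketch of that rule (illustrative only, not code from the webinar):

static string NextVersion(string submitted, string lastGood)
{
    // The pipeline owns Z: if X.Y is unchanged, increment Z from the last known
    // good version and ignore the submitted Z; if X or Y rolled, reset Z to zero.
    var s = submitted.Split('.');
    var g = lastGood.Split('.');
    if (s[0] != g[0] || s[1] != g[1])
        return s[0] + "." + s[1] + ".0";
    return g[0] + "." + g[1] + "." + (int.Parse(g[2]) + 1);
}

// NextVersion("1.2.9", "1.2.5") returns "1.2.6" (submitted Z is ignored)
// NextVersion("1.3.0", "1.2.5") returns "1.3.0" (minor rolled, Z resets)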
Q: Do you have example cookbooks showcasing this setup available for review somewhere, like a GitHub repo?
A: Sure, you can grab the scripts, cookbooks and configs used in this webinar from my GitHub repository:

Q: Can you provide some tools/utilities for testing Chef code? If I am not using Vagrant, can I use Test-Kitchen? It would be great if you could suggest some utilities which can be used to test Chef code without using Vagrant.

A: Is there a particular aversion to using vagrant? You could run tests by hand, I suppose. Recommended practice is to use vagrant via test-kitchen; it makes your life much easier. You could also try something like minitest and enable the chef-minitest report handler. If there’s a reason you can’t use vagrant, let’s talk about that and figure out a workable approach.
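For reference, a minimal .kitchen.yml wiring test-kitchen to vagrant looks roughly like this (a sketch; the cookbook and platform names are placeholders):

---
driver:
  name: vagrant
provisioner:
  name: chef_solo
platforms:
  - name: ubuntu-12.04
suites:
  - name: default
    run_list:
      - recipe[mycookbook::default]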
Q: Does Vagrant simply spin up a VM with Chef installed in it?
A: Test-kitchen effectively manages your vagrant files and sets them up to automatically bootstrap chef and grab your cookbook code out of your PWD. You can also use the kitchen config file to pass configs on to vagrant directly if you want additional functions. Check out the bento project for more details on what happens to your vagrant box --
Q: Can you put together a list of all technologies in this demo (Chef, Jenkins, Ruby, Vagrant, etc.)?
A: I think the best is to check out the source code from my GitHub repository. It has a README which contains the answers for your question. My GitHub repository can be found here:

Q: How does the Jenkins plugin obtain information about the status of the Chef deployment?
A: Chef generates a report at the end of a Chef-client run, and the Chef-handler-Jenkins gem - which is included in the cookbooks - selects only the file-related changes and sends a POST request to a specific Jenkins URL. The Chef Tracking Plugin handles that POST data. To be more precise: the Chef Tracking Plugin exposes an API endpoint, http://<JENKINS_URL>/chef/report, which is used by Chef to send the reports.

Q: Just saw that Chef mentioned it had deployed a war as a result of the Jenkins job. Is it possible instead to deploy other types of artifacts? E.g. install an .exe in Windows, or apply RPM update packages in SUSE/RHEL?
A: Sure, you can deploy any type of files. There is absolutely no restriction.

Q: Is there a way to trigger deployments from within Jenkins?
A: Sure, you can. You can configure Jenkins to start a build when a commit has been pushed to the repository, or job A can trigger job B if the build was successful. There are a lot of plugins as well which make this easier. I’d suggest this one:

Q: IS there any weblogic cookbook?
A: Answered above.

Q: Does Chef use something like a snapshot to revert any manual changes made to nodes or does it only apply settings outlined in a recipe?
A: Snapshots are not used or recommended. Everything in Chef is explicit: Chef only manages settings outlined in your recipe. Chef is not a magic pony and it does not somehow automagically understand everything that has changed on your system.

Q: We talk about serverspec (which tests the server), but is chefspec also needed to test the cookbook/recipe code? Or is serverspec a good enough test to move things into production?
A: There are a number of test frameworks to use and I’m a big fan of chefspec. The proposed workflow was an example of the types of functions we may want. I would recommend both testing your code with chefspec and testing your resulting infrastructure with serverspec at different points in your development lifecycle.

Q: What is the best method of running Chef: a master/master or a master/slave setup?
A: High Availability in Chef is accomplished as active/passive. More information on running Chef in HA mode can be found here --

Q: How can I perform hotfix configs using Chef?
A: I think the question is how to deploy a hotfix in this type of pipeline. In the proposed workflow, the most obvious way to deploy a hotfix is to merge it straight into master (bypassing the validation/code review cycles). Merge the code and let your pipeline promote it programmatically.

Q: Can we spawn VMs in the cloud instead of using Vagrant?
A: Since version 1.1, Vagrant is no longer tied to VirtualBox and also works with other virtualization software such as AWS EC2. Check for specific vagrant plugins here:, especially the vagrant-aws plugin. Vagrant is only a wrapper around VirtualBox, KVM, AWS EC2, DigitalOcean and so on.

Q: How do you see the future of Chef given the rise of Docker? Do you see that the adoption of Chef will increase or decrease as the adoption of Docker increases?
A: Containers also require configuration. It will be interesting to see where Docker goes in the future. While creating a container is simple with Docker, running complex infrastructure topologies in production is difficult. Complexity never entirely goes away, as engineers we may just move it to different parts of the stack. Configuration management still has a strong role even in a containerized world. I encourage you to look at the work we’ve done with Chef Container for more ways to use Chef with container --
Categories: Companies

Automated Performance Analysis for Web API Tests

A modern web application is typically not restricted to being used via a web frontend. It also provides functionality that is used elsewhere, such as in mobile apps. Think about an e-commerce site today: a shop does not get its business exclusively from sales on its web shop, but also through mobile apps and through rich-client applications […]

The post Automated Performance Analysis for Web API Tests appeared first on Dynatrace APM Blog.

Categories: Companies


Nexus 2.11.1 – Why It’s Time to Upgrade

Sonatype Blog - Tue, 12/23/2014 - 11:00
TL; DR: The release of Nexus 2.11.1 includes a fix for the security vulnerability CVE-2014-9389. Whenever a new Nexus release becomes available there are a myriad of reasons to upgrade. The team always seems to manage to bring in some really useful new features or bug fixes that you have been...

To read more, visit our blog at
Categories: Companies

Low Level Considerations for VS of the Future (an old memo)

Rico Mariani's Performance Tidbits - Tue, 12/23/2014 - 10:44

I wrote this a long time ago.  It's very interesting to me how applicable this is today.  And how it is not very specific to Visual Studio at all...

Low Level Considerations for VS of the Future
Rico Mariani, Sept 12/2007


I’ve been giving much thought to what enabling steps we have to take to make our VS12+ experience something great. I will not be trying to talk about specific features at all; I’m not sure we are anywhere near the point where we could talk about such things, but I do consider the prototype we’ve seen as sort of representative of what the underlying needs would be for whatever experience we ultimately create.

So without further ado, here are some areas that will need consideration with information as to what I think we need to do, broadly, and why.

Memory Usage Reduction

Although we can expect tremendous gains in overall processor cycles available to us within the next decade, we aren’t expecting similar gains in memory availability – neither L2/L0 capacity, nor total bandwidth. Project sizes, file sizes, and code complexity generally are going up, but our ability to store these structures is not increasing.
To get past this, and enable the needed parallelism, we must dramatically reduce our in-memory footprint for all key data structures. Everything must be “virtualized” so that just what we need is resident. In an IDE world with huge solutions in many windows, vast amounts of the content must be unrealized. Closing is unnecessary because things that are not displayed have very little cost.

Investment -> Ongoing memory cost reduction


In addition to having less overall memory usage, our system must also have very dense data structures. Our current system trends in the “forest of pointers” direction, with in-memory representations being not only larger than on-disk representations but also comparatively rich in pointers and light on values.

More value-oriented data structures with clear access semantics (see below) will give us the experience we need. The trend to expand data into “pre-computed” forms will be less favorable than highly normalized forms because extra computation will be comparatively cheaper than expanding the data and consuming memory. Memory is the new disk.

Investment -> Drive density into key data structure, use value rich types


Key data structures, like documents, need the notion of transactions to support unit-of-work operations. This is important for many cases where rollback is critical. There need not be anything “magic” about this – it doesn’t have to be transacted memory, it’s merely transaction support in the API. To help resolve parallelism concerns and disconnected-data concerns, the notion that reads and writes might fail is important, and at that point atomicity of operations is crucial. Transacted documents may imply locking or versioning, but they don’t have to. See the next topic.

Investment -> Key data-structures gain a transacted API
Investment -> Key data-structures use transactions to communicate “scope of change” after edits.
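As a sketch, “transaction support in the API” might look something like this (hypothetical shape and names, not from the original memo):

// Hypothetical shape for a transacted document API (illustrative only).
struct Span { public int Start, Length; }

interface IDocumentTransaction : System.IDisposable
{
    string Read(Span span);               // reads see a consistent snapshot
    void Replace(Span span, string text); // may fail; caller can retry the unit of work
    void Commit();                        // all edits in the transaction apply atomically;
}                                         // Dispose without Commit rolls them back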


Having an isolation model means you can understand what happens when someone changes the data structure out from under you: what changes will you see and what won’t you? How much data can you read and expect to be totally self-consistent?

Investments -> Key data-structures must have an isolation model

A UX that is not strongly STA based

The IDE of the future must focus presentation in a single concentrated place but, in a game-like fashion, produce “display lists” of primitives, delivered to the rendering system in an “inert” fashion (i.e. as data not as API calls) so that presentation is entirely divorced from computing the presentation and the rendering system is in fact always ready to render.

This is entirely harmonious with the fact that we want all these display lists to be virtualized. You need not “close” windows – merely moving them out of frame is enough to reclaim the resources associated with the display.  The 1000s of windows you might have cost you nothing when you are not looking at them.

Investment -> A standard method for pipelining ready-to-render displays
Investment -> an STA free programming model with non-STA OS services (e.g. file open dialog)

Twin Asynchronous Models Pervasively

It is easiest to illustrate these two needs by example:

  1. Search for “foo” 
    • This can be done in a totally asynchronous fashion with different “threads” proceeding in parallel against different editor buffers or other searchable contexts.  All the “foos” light up as they are discovered.
    • Foos that are not visible need not be processed, so this sort of parallelism is “lazy”
  2. Rename “foo” to “bar”
    • We can still do this asynchronously but now there is an expectation that we will eventually do it all and of course we must do all the foos not just the visible ones
    • Generally this requires progress, and a transaction
    • It can fail, or be cancelled, these are normal things

Both of these are cases where we can use the “coarse” parallelism model. Of course we would also like to do fine-grained parallelism in places, and in fact searching could be amenable to this as well. Fine-grained has the opportunity to keep locality good because you stream over the data once rather than access disparate data streams all at once, but it is of course more complicated. Enabling both of these requires the previous investments:

  • Limited memory usage in the face of many threads running
  • Ability to access the data in slices that are durable and limited in span
  • Clear boundaries for write operations that may happen while the work is going on
    • This also allows for notifications
    • This allows for a clear place to re-try in the event of deadlocks/failures
  • Isolation so that what you see when you change or update is consistent to the degree that was promised (it’s part of the contract)
  • No STA UI so that you can present the results in parallel without any cosmic entanglements outside of the normal locking the data-structures require – no  “stealth” reentrancy

Investment -> “Lazy” asynchronous display operations
Investment -> Asynchronous change operations
Investment -> Failure tolerant retry like in good client server applications


With the above investments in place it becomes possible to use multi-threading APIs to create high speed experiences that allow for real-time zooming, panning, searching, background building, rich intellisense, and other algorithms that can exploit the coarse parallelism abundant in large solutions. This can be done with a variety of threading models – I like data-pipelined models myself but they aren’t necessary. To consumers of the data, access to key data structures increasingly looks like access to a database.

Even with little to no change in actual processor dispatch mechanisms we get these benefits:

  • The underlying data storage is enabled to take advantage of CPU resources available
  • Other factors which would make parallelism moot are mitigated

Under these circumstances a “game quality” UX is possible – that’s the vision for VS12.  To get there, we need to start thinking about these considerations now so that our designs begin to reflect our future needs.

Categories: Blogs

Selenium Hangout 6 Recap

Selenium - Tue, 12/23/2014 - 03:28

01:35 – 9:45 W3C Update
Notes from most recent W3C Meeting
– changes to the get_attribute method call
– screenshots (changing to viewport only, eventually will support whole page)
The WebDriver W3C working group has a GitHub repo now
– WebDriver will move from a “REST-ish” to a more “RESTful” interface

11:23 – 16:00 Selenium 3 Status Update

16:05 – 17:10 Marionette (FirefoxDriver rewrite) testing help
Marionette Roadmap

17:20 – 19:27 ChemistryKit rewrite
Announcement blog post

17:28 – 20:24 Visual Testing Part 1
Getting Started with Visual Testing
Applitools (visual testing cloud solution built on top of WebDriver)

20:25 – 23:47 Selenium Guidebook in Java!
The Selenium Guidebook

23:52 – 29:51 Visual Testing Part 2
Web Consistency Testing
Why MogoTest won’t be open sourcing its code after shutting down
Michael Tamm’s GTAC talk on Fighting Layout Bugs
Getting Started with Visual Testing

Categories: Open Source

When Programmers (and Testers) Do Their Jobs

DevelopSense Blog - Tue, 12/23/2014 - 00:23
For a long time, I’ve admired Robert (“Uncle Bob”) Martin’s persistent advocacy of craftsmanship in programming and software development. Recently on Twitter, he said . @LlewellynFalco When programmers do their jobs, testers find nothing. — Uncle Bob Martin (@unclebobmartin) December 8, 2014 One of the most important tasks in the testing role is to identify […]
Categories: Blogs

Webinar Q&A: Analyze This! Jenkins Cluster Operations and Analytics

Thank you to everyone who joined us on our webinar; the recording is now available.

And the slides are here.

Below are the questions we received during the webinar Q&A:

Q: Is the access control able to serve as a middle point between users and a backing AD/LDAP setup? Defining custom groups that just matter to Jenkins, for instance. Or does it just centralize the config?
A: Yes, CloudBees Role Based Access Control allows you to use a group provided by AD/LDAP or to define your groups in Jenkins.

Q: For these ES analytics, what DB strategy do you actually use? I mean NoSQL or a conventional RDBMS?
A: We use Elasticsearch, which is a document-oriented database and search engine.

Q: How well do the Operations Center servers scale? Can they run on multiple instances with a load balancer?
A: Jenkins Operations Center can be clustered with a load balancer. The load on JOC is limited because it is mostly an orchestrator. JOC can orchestrate dozens of masters and hundreds of slaves.

Q: How do you sync jobs, configs, etc. among Jenkins masters?
A: Jobs and configurations are not synced between masters per se. If you are referring to the HA feature in Jenkins Enterprise, this is done via a shared filesystem between the hot and cold master.

Q: Can the update center help to deploy any resources to the instance's file system that are not part of the Jenkins configuration or plugins? Or is the update limited to the bounds of Jenkins?
A: Custom update centers not only serve plugin and Jenkins-core files but also serve tool installers. Popular tool installers include Git, JDK, JVM, and Maven. In that sense, update centers also handle deploying resources to slaves.

Q: Do the analytics support a sort of charge-back or throttling model to prevent greedy jobs from hogging too much of the resource pool?
A: Analytics is only a reporting engine. It does not affect the slave scheduling behavior.

Q: Are the metrics you generate limited by the amount of history you retain in your Jenkins instance?
A: Builds are reported in real-time, but you can re-index historical builds using a cluster operation. Builds are retained for 3 years by default in the analytics database, even if they are deleted on the remote Jenkins instance.

Q: Is there an API that will allow us to serve up the Jenkins performance charts on an internal website to our clients?
A: We provide the elasticsearch api which you can access using a Jenkins API key.

Q: Are there alerts in form of notifications on analytics sent to admins?
A: You can configure email alerts to be sent when internal metrics reach a threshold.

Q: We periodically see heap or permgen issues in our builds, but the JVM is the one called up by the Maven process to compile the code, not the master instance itself. Would the analytics view allow us to see the JVM memory for the JVM running the compiles?
A: No, Analytics does not include the JVM memory at this time.

Q: If you only have 2 VMs/servers, would it be best just to have 2 masters, or would it be best to create slaves on the existing hardware as the masters to segregate?
A: It's usually best to run builds on slaves before you begin adding more masters.

Q: Can you export the analytics/metrics to an external graphite/grafana server?
A: The performance metrics can be reported to graphite using DropWizard metrics graphite plugin.

Q: Would this be able to interact with something like the Jenkins Mesos plugin similar to the system eBay has set up? I'd like to use Docker containers for my slaves.
Categories: Companies

Testbirds and Perfecto Mobile Partner on Mobile Testing

Software Testing Magazine - Mon, 12/22/2014 - 18:05
Testbirds, a crowdtesting company in Europe, and Perfecto Mobile, a mobile application quality vendor, have announced a partnership that combines their extensive testing expertise to further increase mobile application quality. Both companies specialize in mobile testing solutions for enterprises to test and optimize their applications. Perfecto Mobile offers the Continuous Quality Lab, their on-demand, cloud-based offering that enables the testing (automated and manual functional testing, monitoring and performance) of mobile apps under any real-world end-user condition throughout all stages of the software development lifecycle. Testbirds provides a crowd of professional testers ...
Categories: Communities
