
Feed aggregator

A Tech Lead Paradox: Delivering vs Learning - 53 min 57 sec ago

Agile Manifesto signatory Jim Highsmith talks about riding paradoxes in his approach to Adaptive Leadership.

A leader will find themselves choosing between two solutions or two situations that compete against each other. A leader successfully “rides the paradox” when they adopt an “AND” mindset, instead of an “OR” mindset. Instead of choosing one solution over another, they find a way to satisfy both situations, even though they contradict one another.

A common Tech Lead paradox is the case of Delivering versus Learning.

The case for delivering

In the commercial world of software development, there will always be pressure to deliver software that satisfies user needs. Without paying customers, companies cannot pay their employees. The more software meets user needs, the more a company earns, and the more the company can invest in itself.

Business people will always be asking for more software changes, as there is no way of knowing if certain features really do meet user needs. Business people do not understand (and cannot be expected to fully understand) what technical infrastructure is needed to deliver features faster or more effectively. As such, they will always apply pressure to deliver software faster.

From a purely money-making point of view, it is easy to interpret delivering software as the way of generating more earnings.

The case for learning

Software is inherently complex. Technology constantly changes. The problem domain shifts as competitors release new offerings and customer needs change in response and evolve through constant usage. People, who have certain skills, leave a company and new people, who have different skills, join. Finding the right balance of skills to match the current set of problems is a constant challenge.

From a technologist’s point of view, learning about different technologies can help solve problems better. Learning about completely different technologies opens up new opportunities that may lead to new product offerings. But learning takes time.

The conflict

For developers to do their job most effectively, they need time to learn new technologies and to improve their own skills. At the same time, if they spend too much time learning, they cannot deliver enough to help a company reach its goals, and the company may not earn enough money to compensate its employees, developers included.

Encouraging learning at the cost of delivering also potentially leads to technology for technology’s sake, where developers use technology simply because they can. What they deliver may not solve user needs, and the whole company suffers as a result.

What does a Tech Lead do?

A Tech Lead needs to keep a constant balance between finding time to learn, and delivering the right thing effectively. It will often be easier for a Tech Lead to succumb to the pressure of delivering over learning. Below is advice for how you can keep a better balance between the two.

Champion for some time to learn

Google made famous their 20% time for developers. Although not consistently implemented across the entire organisation, the idea has been adopted by several other companies to give developers some creative freedom. 20% is not the only way. Hack days, like Atlassian’s ShipIt days (renamed from FedEx days) also set aside some explicit, focused time to allow developers to learn and play.

Champion learning that addresses user needs

Internally run Hack Days encourage developers to unleash their own ideas on user needs, where they get to apply their own creativity, and often learn something in the process. They often get to play with technologies and tools they do not use during their normal week, but the outcome is often focused on a “user need” basis, with more business investment (i.e. time) going towards a solution that makes business sense – and not just technology for the sake of technology.

Capture lessons learned

In large development teams, the same lesson could be learned by different people at different times. This often means duplicated effort that could have been spent learning different or new things. A Tech Lead can encourage team members to share what they have learned with other team members to spread the lessons.

Some possibilities I have experienced include:

  • Running regular learning “show-and-tell” sessions – Team members run a series of lightning talks or code walkthroughs around problems recently encountered and how they went about solving them.
  • Updating a FAQ page on a wiki – Allows team members to share “how to do common tasks” that are applicable in their own environment.
  • Sharing bookmark lists – Teams create a list of links to interesting reads based on problems they have encountered.

Encourage co-teaching and co-learning

A Tech Lead can demonstrate their support for a learning environment by encouraging everyone to be a student and a teacher at the same time. Most team members will have different interests and strengths, and a Tech Lead can encourage members to share what they know. Encouraging team members to run brown bag sessions on topics that enthuse them fosters an atmosphere of sharing.

Weekly reading list

I know of a few Tech Leads who send a weekly email with interesting reading links to a wide variety of technology-related topics. Although they do not expect everyone to read every link, each one is hopeful that one of those links will be read by someone on their team.

If you liked this article, you will be interested in “Talking with Tech Leads,” a book that shares real life experiences from over 35 Tech Leads around the world. Now available on Leanpub.

Categories: Blogs

Working with Documents: A Key Challenge for Medical Device Development

The Seapine View - 3 hours 13 min ago

After analyzing the results of this year’s State of Medical Device Development Survey, we identified three key challenge areas within the industry: managing risk, working with documents, and overcoming barriers to improvement.

In a previous blog post, we looked at what the survey revealed about managing risk, and how TestTrack can help with that important facet of medical device development. In this post, we’re going to look at the challenges of working with documents.

When asked to identify their most time-consuming tasks, respondents put “documenting work” and “reviewing documentation” at the top of the list.

[Chart: Top time-consuming tasks]

Document-Centric Processes Waste Time

If you had a choice between spending your time on refining the product and getting it to market faster, or managing compliance documentation, which would you choose? You’d choose to work on the product, right?

The problem is, getting new devices to market depends on proving that you’ve complied with all applicable regulations. In order to do that, you have to have your documentation in order.

Many companies attempt to meet this challenge by sharing the reports, traceability matrices, and other necessary documentation on a network or in a document control system. As we previously pointed out, this is bad for risk management. It’s also a major productivity killer.

Development teams lose valuable time as they struggle to manually manage these documents. That’s why many companies are moving away from document-centric systems. The industry made a huge leap forward in this area from 2011 to 2013.

[Chart: Artifact-centric vs. document-centric systems]

Although that progress seemed to stagnate this year, we’re confident the industry will continue to move toward artifact-centric systems as the productivity benefits become more widely recognized.

Improving Traceability

Possibly the biggest timesaver delivered by an artifact-centric solution like TestTrack is in leveraging traceability.

Teams using a document-centric approach spend an inordinate amount of time digging through documents to ensure accurate traceability from design through code and testing. Hours, days, or even weeks can be lost to maintaining the trace matrix.

[Chart: Time required to update the trace matrix]

Nearly half of survey respondents reported losing a day or more each time they need to update the traceability matrix.

When maintaining traceability takes that much time, you’re forced to choose between creating the trace matrix early and adding massive time to the schedule, or creating it late in the process and wasting less time, but also losing valuable traceability data. Only 19% said they create the traceability matrix at the very beginning of the development process; 13% said they wait until right before submission.

[Chart: When the trace matrix is created]

TestTrack Is Artifact-Centric

TestTrack allows you to focus on working with individual project assets or artifacts:

  • Requirements
  • User stories
  • Release planning
  • Sprints
  • Assignments
  • Work items
  • Test cases
  • Defect resolutions
  • Releases
  • Specifications
  • Risk controls and analyses

These artifacts can be sent out for review to only the people responsible for each piece, with TestTrack centralizing their changes. User A will see user B’s changes in real time and can adjust their updates and feedback accordingly, eliminating the need to merge changes.

An artifact-centric approach can also easily support various development methodologies—spiral, iterative, parallel, Agile, Waterfall, and other hybrid alternatives.

To regain the lost productivity, medical device development teams need to get out from under the burden of document-centric systems. Migrating to an artifact-centric approach with TestTrack allows for much better data, risk, gap, and impact analyses. With TestTrack, users can focus on tasks instead of constantly reviewing and updating documents.

Learn more about TestTrack and our other solutions for the medical device industry.


Categories: Companies

Ask Me Another

Hiccupps - James Thomas - Sat, 11/22/2014 - 07:59
I just wrote a LinkedIn recommendation for one of my team who's leaving Cambridge in the new year. It included this phrase:

“unafraid of the difficult (to ask and often answer!) questions”

And he's not the only one. Questions are a tester's stock-in-trade, but what kinds of factors can make them difficult to ask? Here are some starters:
  • the questions are hard to frame because the subject matter is hard to understand
  • the questions have known answers, but none are attractive 
  • the questions don't have any known answers
  • the questions are unlikely to have any answers
  • the questions put the credibility of the questionee at risk
  • the questions put the credibility of the questioner at risk
  • the questions put the credibility of shared beliefs, plans or assumptions at risk
  • the questions challenge someone further up the company hierarchy
  • the questions are in a sensitive area - socially, personally, morally or otherwise
  • the questions are outside the questioner's perceived area of concern or responsibility
  • the questioner fears the answer
  • the questioner fears that the question would reveal some information they would prefer hidden
  • the questioner isn't sure who to ask the question of
  • the questioner can see that others who could are not asking the question
  • the questioner has found that questions of this type are not answered
  • the questioner lacks credibility in the area of the question
  • the questioner lacks confidence in their ability to question this area
  • the questionee is expected not to want to answer the question
  • the questionee is expected not to know the answer
  • the questionee never answers questions
  • the questionee responds negatively to questions (and the questioner)
  • the questionee is likely to interpret the question as implied criticism or lack of knowledge
Some of these - or their analogues - are also reasons for a question being difficult to answer, but here are a few more in that direction*:
  • the answer will not satisfy the questioner, or someone they care about
  • the answer is known but cannot be given
  • the answer is known to be incorrect or deliberately misleading
  • the answer is unknown
  • the answer is unknown but some answer is required
  • the answer is clearly insufficient
  • the answer would expose something that the questionee would prefer hidden
  • the answer to a related question could expose something the questionee would prefer hidden
  • the questioner is difficult to satisfy
  • the questionee doesn't understand the question
  • the questionee doesn't understand the relevance of the question
  • the questionee doesn't recognise that there is a question to answer
Much as I could often do without them - they're hard! - I welcome and credit difficult questions. 

Because they'll make me think, suggest that I might reconsider, force me to understand what my point of view on something actually is. Because they expose contradictions and vagueness, throw light onto dark corners, open up new possibilities by suggesting that there may be answers other than those already thought of, or those that have been arrived at by not thinking.

Because they can start a dialog in an important place, one which is the crux of a problem or a symptom or a ramification of it.

Because the difficult questions are often the improving questions: maybe the thing being asked about is changed for the better as a result of the question, or our view of the thing becomes more nuanced or increased in resolution, or broader, or our knowledge about our knowledge of the thing becomes clearer.

And even though the answers are often difficult, I do my best to give them in as full, honest and timely a fashion as I can because I think that an environment where those questions can be asked safely and will be answered respectfully is one that is conducive to good work.

* And we haven't taken into account the questions that aren't asked because they are hard to know or the answers that are hard purely because of the effort that's required to discover them or how differences in context can change how questions are asked or answered, how the same questions can be asked in different ways, willful blindness, plausible deniability, behavioural models such as the Satir Interaction Model and so on.

Thanks to Josh Raine for his comments on an earlier draft of this post.
Categories: Blogs

Why Are Testers Uninterested in Upgrading Their Skill Sets?

uTest - Fri, 11/21/2014 - 22:47

“The only type of testing that I can do is manual testing.”
“Test automation is very important, but I am too busy now to learn something new.”
“Test automation is useful, but I will learn it when I will need it.”
“I am interested in test automation, but I don’t know any programming and it will take a long time to learn it.”
“I want to learn test automation, but my employer does not have any training programs.”

Have you ever heard any of these stories? I have, and not only once, but many times, about test automation, load testing, and web service testing.

Most of the testers I know say in one way or another that they would like to learn more about their profession but, “not now, maybe later, when the conditions will be better, when they will need the new skills in their job, when their employer will pay for their training, when someone will train them for free, when they will be less busy, etc.” The list goes on.

People sometimes say the same things about fitness: I will do it tomorrow, I will do it when I will have more time, when I will need it, etc. I have certainly done this many times as well.

But why exactly are testers not interested in learning new skills? Actually, to take things a bit further, why are testers the least interested in upgrading their skills out of all people that work in IT? Can it be because testing is seen as an easy job that anyone can do? Or because there is still no formal education track for testing? Or because some testers could not do other IT jobs well and needed a way out? Because of complacency? Maybe because of affluence and a high standard of living? Or possibly because of the illusion that things that they did yesterday will be there for them forever?

Who knows. I certainly don’t. But it is something that I see a lot. And recently, I asked other people what they think about it: Are testers the IT people least interested in learning new things?

One of the people I asked is a development director for a large development company with thousands of developers and hundreds of testers. He hires lots of testers all the time. The other three I talked with are IT recruiters who know the IT market very well. They all agreed with my observation. And none of them had better answers than me.

What do you think?

Alex Siminiuc is a uTest Community member and Gold-rated tester and Test Team Lead on paid projects at uTest. He has also been testing software applications since 2005…and enjoys it a lot. He lives in Vancouver, BC, and blogs occasionally at

Categories: Companies

How to make ANY code in ANY system unit-test-friendly

Rico Mariani's Performance Tidbits - Fri, 11/21/2014 - 00:37

There are lots of pieces of code that are embedded in places that make it very hard to test.  Sometimes these bits are essential to the correct operation of your program and could have complex state machines, timeout conditions, error modes, and who knows what else.  However, unfortunately, they are used in some subtle context such as a complex UI, an asynchronous callback, or other complex system.  This makes it very hard to test them because you might have to induce the appropriate failures in system objects to do so.  As a consequence these systems are often not very well tested, and if you bring up the lack of testing you are not likely to get a positive response.

It doesn’t have to be this way.

I offer below a simple recipe to allow any code, however complex, however awkwardly inserted into a larger system, to be tested for algorithmic correctness with unit tests. 

Step 1:

Take all the code that you want to test and pull it out from the system in which it is being used so that it is in separate source files.  You can build these into a .lib (C/C++) or a .dll (C#/VB/etc.); it doesn’t matter which.  Do this in the simplest way possible and just replace the occurrences of the code in the original context with simple function calls to essentially the same code.  This is just an “extract function” refactor, which is always possible.
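As a minimal sketch of this step (in Python, with entirely hypothetical names), imagine retry-decision logic that lived inline in a UI callback being pulled out into a standalone function that can move to its own source file:

```python
# Hypothetical example: retry logic that was embedded in a UI callback,
# extracted ("extract function" refactor) into a standalone function.

def should_retry(attempt, max_attempts, last_error):
    """Pure decision logic pulled out of the original callback."""
    if attempt >= max_attempts:
        return False
    # Retry only on transient errors (hypothetical classification).
    return last_error in ("timeout", "connection_reset")

# The original call site now simply calls should_retry(...) with the
# same values it previously used inline.
```

The call site's behavior is unchanged; the only difference is that the logic now lives somewhere a test can reach it.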

Step 2:

In the new library code, remove all uses of ambient authority and replace them with a capability that does exactly the same thing.  More specifically, every place you see a call to the operating system replace it with a call to a method on an abstract class that takes the necessary parameters.  If the calls always happen in some fixed patterns you can simplify the interface so that instead of being fully general like the OS it just does the patterns you need with the arguments you need. Simplifying is actually better and will make the next steps easier.

If you don’t want to add virtual function calls you can do the exact same thing with a generic or a template class using the capability as a template parameter.

If it makes sense to do so you can use more than one abstract class or template to group related things together.

Use the existing code to create one implementation of the abstract class that just does the same calls as before.

This step is also a mechanical process and the code should be working just as well as it ever did when you’re done.  And since most systems use only very few OS features in any testable chunk, the abstract class should stay relatively small.
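Continuing the hypothetical sketch (Python, illustrative names only, not from any real system), this step might replace a direct OS clock call with a capability handed in from outside:

```python
from abc import ABC, abstractmethod
import time

# Hypothetical capability interface: the only "OS feature" this
# imaginary timeout code needs is a monotonic clock.
class Clock(ABC):
    @abstractmethod
    def now(self):
        """Return the current time in seconds."""

# The single real implementation just makes the same OS call the
# original code made inline.
class SystemClock(Clock):
    def now(self):
        return time.monotonic()

# Library code now receives its capability as a parameter instead of
# reaching for the ambient OS clock.
class TimeoutTracker:
    def __init__(self, clock, limit_seconds):
        self.clock = clock
        self.deadline = clock.now() + limit_seconds

    def expired(self):
        return self.clock.now() >= self.deadline
```

In production the caller passes `SystemClock()`; nothing else about the code's behavior changes.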

Step 3:

Take the implementation of the abstract class and pull it out of the new library and back into the original code base.  Now the new library has no dependencies left.  Everything it needs from the outside world is provided to it on a silver platter and it now knows nothing of its context.  Again everything should still work.

Step 4:

Create a unit test that drives the new library by providing a mock version of the abstract class.  You can now fake any OS condition, timeouts, synchronization, file system, network, anything.  Even a system that uses complicated semaphores and/or internal state can be driven to all the hard-to-reach error conditions with relative ease.  You should be able to reach every basic block of the code under test with unit tests.
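A self-contained sketch of this step (Python, hypothetical names again): a mock capability forces an error condition that would be awkward to reproduce against the real OS:

```python
from abc import ABC, abstractmethod

# Hypothetical capability: the only file operation this imaginary
# library code needs from the outside world.
class FileSystem(ABC):
    @abstractmethod
    def read(self, path):
        """Return the contents of the file at path."""

# The library code under test: it has an error branch that is
# hard to reach when talking to a real disk.
def load_config(fs, path):
    try:
        return fs.read(path)
    except FileNotFoundError:
        return "<defaults>"

# A mock that forces the hard-to-reach condition on demand.
class FailingFileSystem(FileSystem):
    def read(self, path):
        raise FileNotFoundError(path)

# And one that behaves, for the happy path.
class InMemoryFileSystem(FileSystem):
    def __init__(self, files):
        self.files = files

    def read(self, path):
        try:
            return self.files[path]
        except KeyError:
            raise FileNotFoundError(path)
```

A unit test can now reach both branches without touching a disk: `load_config(FailingFileSystem(), "app.cfg")` exercises the error path, while an `InMemoryFileSystem({"app.cfg": "x=1"})` exercises the happy path.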

In future, you can actually repeat these steps on the same “authority free” library, merging in as many components as is reasonable so you don’t get a proliferation of testable libraries.

Step 5:

Use your code in the complex environment with confidence!  Enjoy all the extra free time you will have now that you’re more productive and don’t have bizarre bugs to chase in production.


Categories: Blogs

Google Test Automation Conference: Video From Days 1 & 2

uTest - Fri, 11/21/2014 - 00:24

The Google Test Automation Conference (GTAC) is an annual test automation conference hosted by Google, bringing together engineers to discuss advances in test automation and the test engineering computer science field.

GTAC 2014 was recently held just a few weeks ago at Google’s Kirkland office (Washington State, US), and we’re happy to present video of talks and topics from both days of the conference.

If 15-plus hours of video below just isn’t enough, be sure to also check out all of our Automation courses available at uTest University today.

Categories: Companies

HP Discover 2014 Barcelona -- MUST ATTEND SESSIONS

HP LoadRunner and Performance Center Blog - Thu, 11/20/2014 - 22:01


Are you ready for HP Discover 2014 in Barcelona? I know I am! Check out this blog to learn more about the 'Must Attend Sessions' you need to sign up for now.



Categories: Companies

The Unexpected Truth About UI Test Automation Pilot Projects: A Survey Report

Telerik TestStudio - Thu, 11/20/2014 - 16:40
We wanted to gain a better understanding of what it takes to be successful in the UI test automation field, so we can better guide our customers on a path to success with their automation projects. That’s why we decided to do this survey. Our goal was to explore the first steps teams in the field of automated functional testing take, as well as where they are today with their automation efforts and what helped them get there.
Categories: Companies

Holiday shoppers Are Less Patient than Last Year!

Like last year, Dynatrace asked 2,000 holiday shoppers in the United States which channels they will use to do their holiday shopping and what they expect regarding the experience. Last year the need for speed was one of the key findings, and this year speed matters even more. In fact, 46% of the holiday […]

The post Holiday shoppers Are Less Patient than Last Year! appeared first on Dynatrace APM Blog.

Categories: Companies

Unwrap TestTrack 2015 Today and See the New Interactive Task Boards

The Seapine View - Thu, 11/20/2014 - 12:30

Seapine has an early holiday gift for you, and you can get a sneak peek now. It’s TestTrack 2015, and it includes a shiny new feature—interactive task boards!

TestTrack 2015’s interactive task boards bring cutting-edge project planning capabilities to TestTrack—whether you’re using Waterfall, Agile, or any other product development methodology. Task boards are alternate views of folder contents that can help your team communicate and measure progress during a sprint, release, or other milestone.

With task boards you can:

  • Organize and visualize work with cards, columns, and swimlanes
  • Plan and collaborate as a team during stand-ups, retrospectives, issue triage, and other team meetings
  • Provide flexibility for your entire organization with support for multiple boards, configured to match each team’s process
  • Give your team real-time visibility into work at the project, sprint, and user level

You also won’t want to miss the What’s New webinar on December 10. Paula Rome, Seapine product manager, will demonstrate the task boards and other new TestTrack 2015 features, and answer your questions during the 30-minute webinar.

The best part? You don’t have to wait to unwrap TestTrack 2015! After registering for the sneak peek and the webinar, you’ll have immediate access to the TestTrack sandbox so you can try out the new task boards.

Register for the TestTrack 2015 Sneak Peek today!


Categories: Companies

iOS 8.1 App Testing

Ranorex - Thu, 11/20/2014 - 11:00
Ranorex 5.2 comes with full support for Apple’s brand new mobile operating system.

Save time by automating your iOS 8.1 apps.

Download Ranorex 5.2

Upgrade for free with your valid subscription (You'll find a direct download link to the latest version of Ranorex on the Ranorex Studio start page.)
Categories: Companies

A Personal History of Microcomputing (Part 2)

Rico Mariani's Performance Tidbits - Thu, 11/20/2014 - 09:59

I could spend a long time writing about programming the PET and its various entry points, and I’m likely going to spend disproportionate time on the CBM family of computers because that’s what I know, but I think it’s important to look at other aspects of microcomputers as well and so my sojourn into 6502 assembly language will have to be cut short.  And anyway there’s room for programming examples elsewhere.

To make a decent microcomputer you need to solve certain supplemental problems… so this is the Peripherals edition of this mini-history.


Now here I’m really sad that I can’t talk about Apple II storage systems.  But I can give you a taste of what was possible/normal in 1979.  Tapes.  Tapes my son, lots of tapes.  Short tapes, long tapes, paper tapes, magnetic tapes, and don’t forget masking tape – more on that later.

Many computers (like the KIM) could be connected to a standard cassette player of some kind, the simplest situation just gave you some kind of connector that would provide input and output RCA jacks and you bring your own cassette player.

Paper tape was also used in some cases; in those, the paper tape insertion would effectively provide the equivalent of keystrokes on some TTY that was connected via, say, RS232 (and I say that loosely because usually it was just a couple of pins that behaved sorta like RS232 if you crossed your eyes enough).  Likewise paper tape creation could be nothing more than a recording of printed output which was scientifically created so as to also be valid input!  If that sounds familiar it’s because the same trick was used to provide full screen editing on PET computers – program listings were in the same format as the input and so you could just cursor up there and edit them some and press enter again.

OK, but let’s be more specific.  The PET’s tape drive could give you about 75 bytes/sec; it was really double that, but programs were stored twice(!), for safety(!!), which meant that you could fit a program as big as all the available memory in a 32k PET in about 10 minutes of tape.  Naturally that meant that additional tape would just create fast-forward nightmares, so smaller tapes (and plenty of them) became somewhat popular.  I must have had a few dozen for my favorite programs.  Also backups were good because it got cold in Toronto and magnetic tape was not always as robust as you might like.  Plus you could rewind one with a pencil and it wouldn’t take so long, always a plus.

But the real magic of the PET’s tape was that the motor was computer controlled.  So if you got a big tape with lots of programs on it, it often came with an “index” program at the front.  That program would let you choose from a menu of options.  When you had selected it would instruct you to hit the fast forward button (which would do nothing) and strike a key on the pet.  Hitting the key would then engage the fast forward for just the right amount of time to get you to where the desired program was stored on the tape and the motor would stop!  Amazing!  What a time saver!

The timelines for other manufacturers are astonishingly similar; it seems everyone decided to get into the game in 1977 and things developed very much in parallel in all the ecosystems.  Apple and Radio Shack were on highly harmonious schedules.

But what about disk drives, surely they were a happening thing?  And indeed they were.  On the Commodore side there were smart peripherals like the 2040 and 4040 dual floppy drives.  Now they pretty much had to be that way because there was so little memory to work with that if you had to sacrifice even a few kilobytes to a DOS then you’d be hurting.  But what smarts!  Here’s what you do when you insert a new floppy:

open 1,8,15: print #1, "I0"

or you could get one free command in there by doing

open 1,8,15,"I0"

And then use print for new commands.  To load a program by name simply do this:

load "gimme",8

and then you can run it same as always. 

But how do you see what’s on your disk?  Well that’s easy, the drive can return the directory in the form of a program, which you can then list

load "$0",8

And there you have all your contents.  Of course this just wiped your memory so I hope you saved what you had…

Well, ok, it was a total breakthrough from tape but it was hardly easy to use, and the directory thing was not really very acceptable.  But fortunately it was possible to extend the BASIC interpreter… sort of.  By happenstance, or maybe because it was slightly faster, the PET used a tiny bit of self-modifying code to read the next byte of input and interpret it.  You could hack that code and make it do something other than just read the next byte.  And so were born language extensions like the DOS helper.  Now you had the power to do this:


To initialize drive zero, and,


To print the directory without actually loading it!  Amazing!


Could be used instead of the usual load syntax.

From a specs perspective these 300 RPM babies apparently could do about 40 KB/s transfer internally but that slowed down when you considered the normal track-to-track seeking and the transfer over IEEE488 or else the funky serial IEEE488 of the 1541.   I think if you got 8KB/s on parallel you’d be pretty happy.  Each disk stored 170k!

Tapes soon gave way to floppies… and don’t forget to cover the notch with masking tape if you don’t want to accidentally destroy something important.  It was so easy to get the parameters backwards in the backup/duplicate command


It meant “duplicate drive 1 from drive 0,” but it was best remembered as “Destroy 1 using 0.”

Suffice to say there has been a lot of innovation since that time.


It certainly wasn’t the case that you could get cheap high-quality output from a microcomputer in 1977 but you could get something.  In the CBM world the 2022 and 2023 were usable from even the oldest pet computers and gave you good solid dot matrix quality output.  By which I mean very loud and suitable for making output in triplicate. 

Letter quality printers were much more expensive and typically not in anything like an interface that was “native” to the PET.  I think other ecosystems had it better.  But it didn’t matter; the PET user port plus some software and an adapter cable could be made Centronics compatible, or with a different cable you could fake RS232 on it.  That was enough to open the door to many other printer types.  Some were better than others.  We had this one teletype I’ll never forget that had the temerity to mark its print speeds S/M/F for slow, medium, and fast – with fast being 300 baud.  Generously, it was more like very slow, slow, and medium – or if you ask me, excruciatingly slow, very slow, and slow.  But this was pretty typical.

If you wanted high quality output you could get a daisywheel printer, or better yet, get an interface that let you connect a daisywheel typewriter.  That’ll save you some bucks… but ribbons are not cheap. 

They still get you on the ink.

With these kinds of devices you could reasonably produce “letter-quality” output. But what a far cry from today’s normal the journey was. Consider the serial protocol: 7 or 8 bits? Parity or none? Odd or even? Baud rate? You could spend a half hour guessing before you saw anything at all. But no worries – the same software could talk to a TRS-80 Votrax synthesizer and let you speak like you’re in WarGames.
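The guessing game above is easy to quantify. A tiny sketch enumerating the combinations you might have had to try (the value sets here are illustrative, not an exhaustive period-accurate list):

```python
from itertools import product

# Every serial-parameter combination you might cycle through before
# the printer produced anything legible.
data_bits = (7, 8)
parities = ("none", "even", "odd")
baud_rates = (110, 300, 600, 1200)

combos = list(product(data_bits, parities, baud_rates))
print(f"{len(combos)} combinations to try")
```

Two dozen combinations, each tested by eyeballing garbage output – a half hour sounds about right.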

Now I call these things printers but you should understand they are not anything like what you see today.  The 2023 for instance could not even advance the page without moving the head all the way from side to side.  Dot matrix printers came out with new features like “bi-directional” meaning they could print going left to right and then right to left so they weren’t wasting time on the return trip.  Or “logic seeking” meaning that the printer head didn’t travel the whole length of the printed line but instead could advance from where it was to where it needed to be on the next line forwards or backwards.   A laser printer it ain’t.

Double-density dot matrix for “near-letter-quality” gave you a pretty polished look.  132 character wide beds were great for nice wide program listings but options were definitely more limited if you were not willing to roll your own interface box.

Still, with a good printer you could do your high school homework in a word processor, and print it in brown ink on beige paper with all your mistakes corrected on screen before you ever wrote a single character.

So much for my Brother Electric.  Thanks anyway mom.


Categories: Blogs

Sauce Connect Gets a Speed Boost & WebSocket Support

Sauce Labs - Thu, 11/20/2014 - 03:59

Sauce Connect was designed with security as priority one. But given this technology’s critical position in your testing process we know that performance and utility are important, too. For that reason we have made two major improvements to Sauce Connect tunnels.

  1. Faster startup times. Enhancements to the underlying technology enable Sauce Connect tunnels to start up as much as three times faster.
  2. WebSocket support. The new tunnels support use of the WebSocket protocol in tested applications.
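For the curious, WebSocket is simple at the handshake level, which is part of what makes tunneling it tractable. A sketch of how a server computes the `Sec-WebSocket-Accept` header per RFC 6455 (the example key is the one from the RFC itself):

```python
import base64
import hashlib

# RFC 6455: the server appends this fixed GUID to the client's
# Sec-WebSocket-Key, SHA-1 hashes the result, and base64-encodes it.
WS_MAGIC = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    digest = hashlib.sha1((client_key + WS_MAGIC).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```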


We are in the process of gradually migrating all Sauce Connect tunnels to the new architecture so you can expect to begin experiencing the benefits over the coming weeks.

Categories: Companies

Coming Soon – A Reimagined Sauce Labs UI

Sauce Labs - Thu, 11/20/2014 - 03:08

We are passionate about building products and services that help our users maximize the value they get out of their continuous integration and continuous delivery workflows. And while our core products serve this mission well, especially if you have integrated your CI server and are passing us test statuses, we realized we can do even more. We are excited to announce that we have begun work to completely overhaul the Sauce Labs UI and create a new experience specially designed for CI/CD workflows. The new UI will begin rolling out in phases next month.

1. Redesigned Dashboard

The first update to roll out will be a completely redesigned dashboard, which will take the place of your account page. The new dashboard is designed to aggregate your tests into builds, akin to what you would see on your CI/CD dashboard. The status of each build will be available at a glance as well as a summary of test statuses across the entire build. You can even watch a build progress from the dashboard as test statuses will be updated in real-time. For the new dashboard to work best you’ll need to send us both test statuses and build numbers. If you’re not sending this information now, there’s no time like the present to get set up. And if you’re not yet running your tests through CI/CD, the dashboard will still work beautifully for you, organizing your individual tests clearly in chronological order.

2. New Build Page

Builds will be a brand new concept within the Sauce Labs UI so they will ship with their own brand new view. The build page will show you complete details of the build itself, including run times and status, as well as a complete rundown of all tests associated with that build. The build page will serve as your jumping-off point for diving into test failures.

3. Redesigned Test Page

While the test page will remain functionally similar to the page you see today, we’ll be rolling out a refreshed UI to bring the page in line with the rest of the new experience. Expect a modernized look and feel, enhanced readability, and clean delineation of information.

4. New Archives Page

We’re replacing the current test listing with a new archives page. The archives page will be the home of all your account activity, including builds, automated tests, and manual tests. This new page will ship with powerful and precise filtering, giving you the tools you need to quickly pinpoint exactly what you’re looking for.

The new UI will be available in beta before its full release. If you’re interested in being an early adopter, let us know.


We always love talking with our customers, so if you have questions about the upcoming UI changes, would like to share your experience with the existing UI, or have ideas you’d like to see brought to life, get in touch with us or reach out to me on Twitter.

Categories: Companies

DevOps and the Resurrection of QA

IBM UrbanCode - Release And Deploy - Thu, 11/20/2014 - 01:25

A couple years ago I wrote that QA would be a natural mediator for DevOps discussions, as it traditionally sits between Dev and Ops, understands Dev’s speed, and has a concern for release quality that Ops respects. Two things had me convinced that this pattern wouldn’t happen. The first is the general lack of respect that the QA org is given in many shops. The second was that QA teams were vanishing quickly – either having budgets cut brutally or being absorbed into development. That blog post was left behind when we moved blogs.

I’m starting to see some interesting signs of life. Release Management is often reporting through QA now and gaining respect and prominence. The best RM teams are playing the role of DevOps facilitator really well. The other trend is the rise of DevOps aware Quality Engineering orgs. One DevOps team I know reports through QE and cares for build automation, deploy/release automation, and helps dev teams setup their automated test harnesses.

The shift that seems to be working out is one Elisabeth Hendrickson (dir, QE @ Pivotal Labs) talked about at the recent DevOps Enterprise Summit. Modern QE isn’t about rows of people following test scripts. It’s about the care and feeding of feedback loops. Because feedback loops are naturally cross-silo, the affinity with DevOps is pretty clear.

Finally, we are starting to see this play out in the tools space. One of my favorite products is our MobileFirst Quality Assurance because while it has clever ways for testers to file bugs from within the context of the app, it also instruments the app to drive data-heavy feedback from users. We are seeing feedback from the field being included in the domain of a QA tool. Awesome. How many QA/QE teams are carefully tracking behavior in production beyond reproducing incident reports? The successful ones will include nurturing those feedback loops, not just the “tell the dev what they broke this week” loop.

Categories: Companies

42,000 Nexus Repository Managers, and Growing!

Sonatype Blog - Thu, 11/20/2014 - 00:10
Over the past 15 months, active Nexus instances have grown from 21,000 to 42,000.  Wowza.   That is news worth sharing, because you made it happen! This means our global Nexus customer base added 47 new instances every single day over that same period.  47 a day!  And the volume of active instances...

To read more, visit our blog.
Categories: Companies

Testing Tool Showdown: liteCam HD vs. Mobizen

uTest - Wed, 11/19/2014 - 23:36

Clear, to-the-point bug reports that are backed up with solid evidence are a must for testers when it comes to communicating with developers and getting to the root cause of issues quickly.

And that evidence comes in the form of attachments, which add to a bug report by offering proof of the bug’s existence, enabling the customer or developer to reproduce and quickly rectify the issue at hand.

But with all of the options out there, we wanted to single out a couple that could get testers started, so we looked at two popular screen recording tools from our uTest Tool Reviews: liteCam and Mobizen.


liteCam has a four-star average review from our uTesters, and while a couple of testers appreciated that it “packs all the features they need in a single UI that greatly improves their video recording workflow,” performance issues with frequent crashes marred the experience for one tester. What liteCam also has going for it is a Free (videos are watermarked) and a Paid edition of the product.


Mobizen is also a popular screen recording tool amongst our tester base, with an identical four-star average review. Testers have called out its high frame rate, ease of use and installation, and great support on tablets. Additionally, another key standout of this particular tool is that it is 100% free.

Which of these screen recording tools gives you the most bang for your buck when it comes to bug report documentation? Be sure to leave your feedback in the Tool Reviews section of uTest or in the comments below.

If you end up choosing one of these options, also be sure to check out our recent uTest University courses on how to set up liteCam HD or Mobizen for screen recording.

Categories: Companies

Continuous Delivery in a .NET World

Adam Goucher - Quality through Innovation - Wed, 11/19/2014 - 17:05

Here is the other talk I did at Øredev this year. The original pitch was to show a single-character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place. We ended up not hitting that goal, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it, given what I know about priorities etc.

Below is the video, then the deck, and the original ‘script’ I wrote for the talk. Which, in my usual manner, I deviated from on stage at pretty much every turn. But stories were delivered, mistakes were confessed to, and lots of hallway conversations were generated, so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

Continuous Delivery in a .NET World from Adam Goucher

I’ll admit to having been off the speaking circuit for awhile, and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stacks such as Rails, Node, etc. Without any data to back up this claim, I would hazard a guess that there are more .NET apps out there, especially behind the corporate firewall, than those other stacks. Possibly combined. This means that there is a whole lot of people being ignored by the literature. Or at least the ones not being promoted by a tool vendor… This gap needs to be addressed; companies live and die based on these internal applications, and there is no reason why they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery to production… but still not quite there yet. Frustrating… but great fodder for a talk about actually doing this in an existing application [‘legacy’] context.

Not surprisingly, the high-level bullets are pretty much the same as with other stacks, but there are of course variations on the themes in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update while the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part of everything. Why? Simple… it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, as people fall back on their training in those situations, and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of)
  • externalized configuration
  • non assumption of installation location
  • stop branching!!
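As an illustration of the first two bullets, here’s a minimal sketch of a feature flag read from externalized configuration – an environment variable in this case. The names are made up for the example; the point is that trunk stays shippable whether the flag is on or off:

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from the environment, e.g. FEATURE_NEW_CHECKOUT=1.
    Falls back to a default so the code works with no configuration at all."""
    raw = os.environ.get(f"FEATURE_{name.upper()}")
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# With the flag unset, the old code path runs and trunk is still deployable.
if flag_enabled("new_checkout"):
    print("new checkout flow")
else:
    print("old checkout flow")
```

The discipline part is the “removal of” clause above: flags that never get deleted become their own legacy problem.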

Figure out your database
This, I think, is actually the hardest part of a modern application. And is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means…

  • your tooling needs to support that
  • your build chains needs to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.
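To make the ‘migration’ approach concrete, here is a toy sequential runner against an in-memory SQLite database. Real tooling adds locking, checksums, and down-migrations; this just shows the core idea of a recorded schema version plus additive, backwards-compatible steps:

```python
import sqlite3

# Each migration is additive, so code built against the old schema keeps working.
MIGRATIONS = [
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE customer ADD COLUMN email TEXT",
]

def migrate(conn: sqlite3.Connection) -> int:
    """Apply any migrations beyond the recorded schema version; return the version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, statement in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()
    return max(current, len(MIGRATIONS))

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # applies both migrations
print(migrate(conn))  # idempotent: a second run applies nothing
```

Because the runner is idempotent, the deploy pipeline can always invoke it, with or without pending database changes – which is exactly the property the bullets above demand.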

Convention over Configuration FTW
I’m quite convinced of two things: this is why RoR and friends ‘won’, and why most talks deal with them rather than .NET. To really win at Continuous Delivery [or at least without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! Every single one of our .NET builds at 360 is slightly different because of that, which means that we have to spend time wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And something that happens by default in the RoR community with their prevalence of gems. If something has utility and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicate code…

Not all flows are created equal
This is a recent realization, though having said that, is a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs for this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is all about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET, or was it an afterthought? Is it a flash in the pan, or is it going to be around? Do its project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work – and this is why it’s often mentioned in the same breath as DevOps (we’ll ignore that whole problem of ‘if you have devops you aren’t doing devops’…) – the management of your infrastructure and environments needs to be fully automated as well. This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up to Windows, so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

But even still, there are going to have to be things that you need to drop down to the shell and do. We made a leap forward towards our goal when we let Octopus start to control IIS. But they don’t expose enough hooks for the particular needs of our application so we have to use the IIS cmdlets to do what we need afterwards. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items, you /will/ fail.

Categories: Blogs

What about Microsoft Component Extensions for C++?

Sonar - Wed, 11/19/2014 - 08:32

After my previous blog entry about the support of Objective-C, you could get the impression that we’re fully focused on Unix-like platforms and have completely forgotten about Windows. But that would be a wrong impression – with version 3.2 of the C / C++ / Objective-C plugin released in November, 2014, support for the Microsoft Component Extensions for Runtime Platforms arrived in answer to customer needs. The C-Family development team closely follows discussions in the mailing list for customer support, so don’t hesitate to speak about your needs and problems.

So what does “support of Microsoft Component Extensions for Runtime Platforms” mean? It means that the plugin is now able to analyze two more C++ dialects: C++/CLI and C++/CX. C++/CLI extends the ISO C++ standard, allowing programming for a managed execution environment on the .NET platform (Common Language Runtime). C++/CX borrows syntax from C++/CLI, but targets the Windows Runtime (WinRT) and native code instead, allowing programming of Windows Store apps and components that compile to native code. It’s also worth noting that there are not many static analyzers capable of analyzing those dialects.

So now the full list of supported C++ dialects looks quite impressive – you can see it in the configuration page:

And this doesn’t even count the C and Objective-C languages!

You may also notice from the screenshot above that there is now a clear separation between the ISO standards, the usual Microsoft extensions for C/C++ (which historically come from the Microsoft Visual Studio compiler), and the GNU extensions (which historically come from the GCC compiler). The primary reason for the separation is that some of these extensions conflict with each other; as an example, the Microsoft-specific “__uptr” modifier is used as an identifier in the GNU C Library. To ease configuration, the plugin option names closely resemble the configuration options of GCC, Clang and many other compilers.

But wait, you actually don’t need to specify the configuration manually, because you can use the build-wrapper for Microsoft Visual Studio projects just like you can with non-Visual Studio projects. Just download “build-wrapper” and use it as a prefix to the build command for your Microsoft Visual Studio project. As an example:

build-wrapper --out-dir [output directory] msbuild /t:rebuild

and just add a single property to the analysis configuration: [output directory]

The build wrapper will eavesdrop on the build to gather configuration data, and during analysis the plugin will use the collected configuration without the headaches of manual intervention. Moreover, this works perfectly for projects that have mixed subcomponents written with different dialects.

So all this means that from now you can easily add projects written using C++/CLI and C++/CX into your portfolio of projects regularly analysed by SonarQube.

Of course, it’s important that the growth of supported dialects is balanced with other improvements, and that’s certainly the case in this version: we made several improvements, added a few rules and fixed 28 bugs. And we’re planning to go even further in the next version. As usual there will be new rules and improvements, but we’ll also be adding a major new feature which will make analysis vastly more powerful, so stay tuned.

In the meantime, the improvements in version 3.2 are compatible with all SonarQube versions from 3.7.4 forward, and they’re worth adopting today.

Categories: Open Source

Single Sign-On Support for Enterprise Now In Beta

Sauce Labs - Wed, 11/19/2014 - 02:27

At Sauce Labs, we are hard at work identifying new ways to make adoption and usage of our products as simple and frictionless as possible. For larger organizations onboarding hundreds of users, managing access and security can quickly become challenging. To simplify the onboarding process and provide greater account security, we have rolled out integrations for four popular Single Sign-On (SSO) providers, including Ping Identity, OneLogin, Okta, and Microsoft Active Directory Federation Service (ADFS). At a high level, an SSO Identity Provider (IdP) provides a single gateway through which users can access an array of applications without logging into each application separately. A user logs into the IdP with one set of credentials and gains access to all connected applications through that same login.

This new integration reduces the likelihood that users will spend time on password recovery or account access issues and gives account owners greater control over account security. Account owners can optionally require users to log in via a corporate IdP, completely eliminating risks associated with standard account credentials.

How It Works

Our SSO support is based on the SAML 2.0 Browser POST profile. Below is a high-level representation of how authentication between the IdP and Sauce Labs is performed.

  1. User signs into the IdP via a web browser and attempts to access Sauce Labs service.
  2. IdP generates a SAML response in XML.
  3. IdP returns encoded SAML response to the browser.
  4. Browser forwards the SAML response to the Assertion Consumer Service (ACS) URL.
  5. Sauce Labs verifies the SAML response.
  6. Upon successful verification, user is granted access to Sauce Labs.


Enabling SSO For Your Account

The new SSO integrations are currently available through an open beta. If you are an Enterprise account owner and you would like to be placed in the beta, simply email us and let us know which provider you are interested in integrating with. We will contact you to set up a kickoff and get you squared away. If you are not currently an Enterprise customer and are interested in learning about this and other Enterprise features, contact our sales team.

If you have existing Sauce Labs users, we have developed a quick and painless transition process ensuring your users are able to keep their activity history and data. Once your account is enabled for SSO, your users can access Sauce Labs via the IdP. They will be presented with the option to create a new account or log into their existing account. They need only provide their existing Sauce Labs credentials and sign in. That’s it – the transition process will be completed instantly and that user will be able to access Sauce Labs from the IdP in the future.

SSO Provider Partnerships

In conjunction with the release of our SSO integrations, we are pleased to announce partnerships with some of our amazing service providers. Once your account is enabled for SSO, you will be able to easily connect to your IdP through Ping Identity’s Application Catalog, Okta’s Application Network (OAN), or OneLogin’s Connector.

Further Reading

If you do not currently use a Single Sign-On service provider and are interested in learning more about our integrated services, follow the links below.

Ping Identity

We love talking with our users, so feel free to reach out to us with any comments, feedback, or requests.


Categories: Companies
