
Feed aggregator

How to make ANY code in ANY system unit-test-friendly

Rico Mariani's Performance Tidbits - Fri, 11/21/2014 - 00:37

There are lots of pieces of code that are embedded in places that make it very hard to test.  Sometimes these bits are essential to the correct operation of your program and could have complex state machines, timeout conditions, error modes, and who knows what else.  However, unfortunately, they are used in some subtle context such as a complex UI, an asynchronous callback, or other complex system.  This makes it very hard to test them because you might have to induce the appropriate failures in system objects to do so.  As a consequence these systems are often not very well tested, and if you bring up the lack of testing you are not likely to get a positive response.

It doesn’t have to be this way.

I offer below a simple recipe to allow any code, however complex, however awkwardly inserted into a larger system, to be tested for algorithmic correctness with unit tests. 

Step 1:

Take all the code that you want to test and pull it out from the system in which it is being used so that it is in separate source files. You can build these into a .lib (C/C++) or a .dll (C#/VB/etc.); it doesn’t matter which. Do this in the simplest way possible and just replace the occurrences of the code in the original context with simple function calls to essentially the same code. This is just an “extract function” refactoring, which is always possible.
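As a concrete sketch (hypothetical names throughout, and Win32 calls assumed purely for illustration), a retry-and-save loop that used to live inline in a UI handler might become:

// save_retry.h – the new library file (hypothetical example)
#pragma once
#include <cstddef>
bool SaveWithRetry(const char* path, const void* data, size_t len);

// save_retry.cpp – same logic as before, still calling the OS directly
#include "save_retry.h"
#include <windows.h>

bool SaveWithRetry(const char* path, const void* data, size_t len)
{
    for (int attempt = 0; attempt < 3; ++attempt)
    {
        HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h != INVALID_HANDLE_VALUE)
        {
            DWORD written = 0;
            BOOL ok = WriteFile(h, data, (DWORD)len, &written, nullptr);
            CloseHandle(h);
            if (ok && written == len) return true;
        }
        Sleep(100);  // brief backoff before the next attempt
    }
    return false;
}

The UI handler that used to contain the loop now simply calls SaveWithRetry(...).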

Step 2:

In the new library code, remove all uses of ambient authority and replace them with a capability that does exactly the same thing. More specifically, every place you see a call to the operating system, replace it with a call to a method on an abstract class that takes the necessary parameters. If the calls always happen in some fixed pattern, you can simplify the interface so that instead of being fully general like the OS, it just covers the patterns and arguments you actually need. Simplifying is actually better and will make the next steps easier.
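Continuing the hypothetical sketch from Step 1, the direct OS calls might collapse into a small capability interface shaped around the two operations the code actually uses:

// file_ops.h – the capability the library needs from its environment (hypothetical)
#pragma once
#include <cstddef>

class IFileOps
{
public:
    virtual ~IFileOps() = default;
    // Write the whole buffer to the given path; return false on any failure.
    virtual bool WriteAll(const char* path, const void* data, size_t len) = 0;
    // Pause the caller for roughly the given number of milliseconds.
    virtual void SleepMs(unsigned ms) = 0;
};

// save_retry.cpp – no OS headers left; the capability is passed in
#include "file_ops.h"

bool SaveWithRetry(IFileOps& ops, const char* path, const void* data, size_t len)
{
    for (int attempt = 0; attempt < 3; ++attempt)
    {
        if (ops.WriteAll(path, data, len)) return true;
        ops.SleepMs(100);  // brief backoff before the next attempt
    }
    return false;
}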

If you don’t want to add virtual function calls you can do the exact same thing with a generic or a template class using the capability as a template parameter.
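In the same hypothetical sketch, the template form looks like this, with the capability resolved at compile time instead of through virtual dispatch:

// TFileOps must provide WriteAll() and SleepMs() with the same shapes
// as the abstract class above; no virtual calls are involved.
template <typename TFileOps>
bool SaveWithRetry(TFileOps& ops, const char* path, const void* data, size_t len)
{
    for (int attempt = 0; attempt < 3; ++attempt)
    {
        if (ops.WriteAll(path, data, len)) return true;
        ops.SleepMs(100);
    }
    return false;
}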

If it makes sense to do so you can use more than one abstract class or template to group related things together.

Use the existing code to create one implementation of the abstract class that just does the same calls as before.
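In the running sketch, that one production implementation is just a thin forwarding wrapper over the calls the code made before (Win32 assumed, as earlier):

// win32_file_ops.h – production implementation of the capability (hypothetical)
#pragma once
#include <windows.h>
#include "file_ops.h"

class Win32FileOps : public IFileOps
{
public:
    bool WriteAll(const char* path, const void* data, size_t len) override
    {
        HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return false;
        DWORD written = 0;
        BOOL ok = WriteFile(h, data, (DWORD)len, &written, nullptr);
        CloseHandle(h);
        return ok && written == len;
    }
    void SleepMs(unsigned ms) override { Sleep(ms); }
};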

This step is also a mechanical process and the code should be working just as well as it ever did when you’re done. And since most systems use only a few OS features in any testable chunk, the abstraction should stay relatively small.

Step 3:

Take the implementation of the abstract class and pull it out of the new library and back into the original code base.  Now the new library has no dependencies left.  Everything it needs from the outside world is provided to it on a silver platter and it now knows nothing of its context.  Again everything should still work.

Step 4:

Create a unit test that drives the new library by providing a mock version of the abstract class.  You can now fake any OS condition, timeouts, synchronization, file system, network, anything.  Even a system that uses complicated semaphores and/or internal state can be driven to all the hard-to-reach error conditions with relative ease.  You should be able to reach every basic block of the code under test with unit tests.
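Continuing the hypothetical sketch, a mock capability can force every failure path without touching the real file system or waiting on real timeouts (plain asserts are used here rather than any particular test framework):

#include <cassert>
#include "file_ops.h"
#include "save_retry.h"

// Mock that fails the first N writes and records every call.
class FailingFileOps : public IFileOps
{
public:
    explicit FailingFileOps(int failures) : failuresLeft_(failures) {}
    bool WriteAll(const char*, const void*, size_t) override
    {
        ++writeCalls_;
        if (failuresLeft_ > 0) { --failuresLeft_; return false; }
        return true;
    }
    void SleepMs(unsigned) override { ++sleepCalls_; }  // no real waiting in tests
    int writeCalls_ = 0;
    int sleepCalls_ = 0;
private:
    int failuresLeft_;
};

int main()
{
    FailingFileOps flaky(2);                           // first two writes fail
    assert(SaveWithRetry(flaky, "x.dat", "hi", 2));    // third attempt succeeds
    assert(flaky.writeCalls_ == 3 && flaky.sleepCalls_ == 2);

    FailingFileOps broken(99);                         // never succeeds
    assert(!SaveWithRetry(broken, "x.dat", "hi", 2));  // gives up after three tries
    assert(broken.writeCalls_ == 3);
    return 0;
}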

In the future, you can repeat these steps using the same “authority free” library, merging in as many components as is reasonable so you don’t get a proliferation of testable libraries.

Step 5:

Use your code in the complex environment with confidence!  Enjoy all the extra free time you will have now that you’re more productive and don’t have bizarre bugs to chase in production.

 

Categories: Blogs

Google Test Automation Conference: Video From Days 1 & 2

uTest - Fri, 11/21/2014 - 00:24

The Google Test Automation Conference (GTAC) is an annual test automation conference hosted by Google, bringing together engineers to discuss advances in test automation and the test engineering computer science field.

GTAC 2014 was recently held just a few weeks ago at Google’s Kirkland office (Washington State, US), and we’re happy to present video of talks and topics from both days of the conference.

If 15-plus hours of video below just isn’t enough, be sure to also check out all of our Automation courses available at uTest University today.

Categories: Companies

HP Discover 2014 Barcelona -- MUST ATTEND SESSIONS

HP LoadRunner and Performance Center Blog - Thu, 11/20/2014 - 22:01


Are you ready for HP Discover 2014 in Barcelona? I know I am! Check out this blog to learn more about the 'Must Attend Sessions' you need to sign up for now.

 

 

Categories: Companies

The Unexpected Truth About UI Test Automation Pilot Projects: A Survey Report

Telerik TestStudio - Thu, 11/20/2014 - 16:40
We wanted to gain a better understanding of what it takes to be successful in the UI test automation field, so we can better guide our customers on a path to success with their automation projects. That’s why we decided to do this survey. Our goal was to explore the first steps teams in the field of automated functional testing take, as well as where they are today with their automation efforts and what helped them get there.
Categories: Companies

Holiday shoppers Are Less Patient than Last Year!

Like last year , Dynatrace asked 2000 holiday shoppers in the United States which channels they will use to do their holiday shopping and what they expect regarding the experience. Last year the need for speed was one of the key findings and this year speed matters even more. In fact, 46% of the holiday […]

The post Holiday shoppers Are Less Patient than Last Year! appeared first on Dynatrace APM Blog.

Categories: Companies

Unwrap TestTrack 2015 Today and See the New Interactive Task Boards

The Seapine View - Thu, 11/20/2014 - 12:30

Seapine has an early holiday gift for you, and you can get a sneak peek now. It’s TestTrack 2015, and it includes a shiny new feature—interactive task boards!

TestTrack 2015’s interactive task boards bring cutting-edge project planning capabilities to TestTrack—whether you’re using Waterfall, Agile, or any other product development methodology. Task boards are alternate views of folder contents that can help your team communicate and measure progress during a sprint, release, or other milestone.

With task boards you can:

  • Organize and visualize work with cards, columns, and swimlanes
  • Plan and collaborate as a team during stand-ups, retrospectives, issue triage, and other team meetings
  • Provide flexibility for your entire organization with support for multiple boards, configured to match each team’s process
  • Give your team real-time visibility into work at the project, sprint, and user level

You also won’t want to miss the What’s New webinar on December 10. Paula Rome, Seapine product manager, will demonstrate the task boards and other new TestTrack 2015 features, and answer your questions during the 30-minute webinar.

The best part? You don’t have to wait to unwrap TestTrack 2015! After registering for the sneak peek and the webinar, you’ll have immediate access to the TestTrack sandbox so you can try out the new task boards.

Register for the TestTrack 2015 Sneak Peek today!


Categories: Companies

iOS 8.1 App Testing

Ranorex - Thu, 11/20/2014 - 11:00
Ranorex 5.2 comes with full support for Apple’s brand new mobile operating system.

Save time by automating your iOS 8.1 apps.

Download Ranorex 5.2

Upgrade for free with your valid subscription (You'll find a direct download link to the latest version of Ranorex on the Ranorex Studio start page.)
Categories: Companies

A Personal History of Microcomputing (Part 2)

Rico Mariani's Performance Tidbits - Thu, 11/20/2014 - 09:59

I could spend a long time writing about programming the PET and its various entry points, and I’m likely going to spend disproportionate time on the CBM family of computers because that’s what I know, but I think it’s important to look at other aspects of microcomputers as well and so my sojourn into 6502 assembly language will have to be cut short.  And anyway there’s room for programming examples elsewhere.

To make a decent microcomputer you need to solve certain supplemental problems… so this is the Peripherals edition of this mini-history.

Storage

Now here I’m really sad that I can’t talk about Apple II storage systems.  But I can give you a taste of what was possible/normal in 1979.  Tapes.  Tapes my son, lots of tapes.  Short tapes, long tapes, paper tapes, magnetic tapes, and don’t forget masking tape – more on that later.

Many computers (like the KIM) could be connected to a standard cassette player of some kind; the simplest setups just gave you a connector that provided input and output RCA jacks, and you brought your own cassette player.

Paper tape was also used in some cases; there, the paper tape insertion would effectively provide the equivalent of keystrokes on some TTY that was connected via, say, RS232 (and I say that loosely because usually it was just a couple of pins that behaved sorta like RS232 if you crossed your eyes enough). Likewise, paper tape creation could be nothing more than a recording of printed output which was scientifically created so as to also be valid input! If that sounds familiar, it’s because the same trick was used to provide full screen editing on PET computers – program listings were in the same format as the input, so you could just cursor up there, edit them some, and press enter again.

OK, but let’s be more specific. The PET’s tape drive could give you about 75 bytes/sec – it was really double that, but programs were stored twice(!), for safety(!!) – which meant that you could fit a program as big as all the available memory in a 32k PET in about 10 minutes of tape. Naturally that meant that additional tape would just create fast-forward nightmares, so smaller tapes (and plenty of them) became somewhat popular. I must have had a few dozen for my favorite programs. Also, backups were good because it got cold in Toronto and magnetic tape was not always as robust as you might like. Plus you could rewind one with a pencil and it wouldn’t take so long, always a plus.

But the real magic of the PET’s tape was that the motor was computer controlled. So if you got a big tape with lots of programs on it, it often came with an “index” program at the front. That program would let you choose from a menu of options. When you had selected one, it would instruct you to hit the fast forward button (which would do nothing) and strike a key on the PET. Hitting the key would then engage the fast forward for just the right amount of time to get you to where the desired program was stored on the tape, and then the motor would stop! Amazing! What a time saver!

The timelines for other manufacturers are astonishingly similar; it seems everyone decided to get into the game in 1977 and things developed very much in parallel in all the ecosystems. Apple and Radio Shack were on highly harmonious schedules.

But what about disk drives, surely they were a happening thing? And indeed they were. On the Commodore side there were smart peripherals like the 2040 and 4040 dual floppy drives. They pretty much had to be smart because there was so little memory to work with that if you had to sacrifice even a few kilobytes to a DOS you’d be hurting. But what smarts! Here’s what you do when you insert a new floppy:

open 1,8,15:  Print #1, “I0”

or you could get one free command in there by doing

open 1,8,15,”I0”

And then use print for new commands.  To load a program by name simply do this:

load “gimme”,8

and then you can run it same as always. 

But how do you see what’s on your disk?  Well that’s easy, the drive can return the directory in the form of a program, which you can then list

load “$0”,8
list

And there you have all your contents.  Of course this just wiped your memory so I hope you saved what you had…

Well, ok, it was a total breakthrough from tape but it was hardly easy to use, and the directory thing was not really very acceptable. But fortunately it was possible to extend the BASIC interpreter… sort of. By happenstance, or maybe because it was slightly faster, the PET used a tiny bit of self-modifying code to read the next byte of input and interpret it. You could hack that code and make it do something other than just read the next byte. And so were born language extensions like the DOS helper. Now you had the power to do this:

>I0

To initialize drive zero, and,

>$0

To print the directory without actually loading it!  Amazing!

/gimme

Could be used instead of the usual load syntax.

From a specs perspective these 300 RPM babies apparently could do about 40 KB/s transfer internally but that slowed down when you considered the normal track-to-track seeking and the transfer over IEEE488 or else the funky serial IEEE488 of the 1541.   I think if you got 8KB/s on parallel you’d be pretty happy.  Each disk stored 170k!

Tapes soon gave way to floppies… and don’t forget to cover the notch with masking tape if you don’t want to accidentally destroy something important. It was so easy to get the parameters backwards in the backup/duplicate command

>D1=0

Meant duplicate drive 1 from drive 0, but it was best remembered as Destroy 1 using 0.

Suffice to say there has been a lot of innovation since that time.

Printing

It certainly wasn’t the case that you could get cheap high-quality output from a microcomputer in 1977, but you could get something. In the CBM world the 2022 and 2023 were usable from even the oldest PET computers and gave you good solid dot matrix output. By which I mean very loud and suitable for making output in triplicate.

Letter quality printers were much more expensive and typically didn’t come with anything like an interface that was “native” to the PET. I think other ecosystems had it better. But it didn’t matter: with some software and an adapter cable, the PET user port could be made Centronics compatible, or with a different cable you could fake RS232 on it. That was enough to open the door to many other printer types. Some were better than others. We had this one teletype I’ll never forget that had the temerity to mark its print speeds S/M/F for slow, medium, and fast – with fast being 300 baud. Generously, it was more like very slow, slow, and medium – or if you ask me, excruciatingly slow, very slow, and slow. But this was pretty typical.

If you wanted high quality output you could get a daisywheel printer, or better yet, get an interface that let you connect a daisywheel typewriter.  That’ll save you some bucks… but ribbons are not cheap. 

They still get you on the ink.

With these kinds of devices you could reasonably produce “letter-quality” output. But compared with what’s normal today, what a journey it was. Consider the serial protocol: 7 or 8 bits? Parity or no? Odd or even? Baud rate? You could spend a half hour guessing before you saw anything at all. But no worries, the same software could talk to a TRS-80 Votrax synthesizer and speak like you’re in WarGames.

Now I call these things printers but you should understand they are not anything like what you see today.  The 2023 for instance could not even advance the page without moving the head all the way from side to side.  Dot matrix printers came out with new features like “bi-directional” meaning they could print going left to right and then right to left so they weren’t wasting time on the return trip.  Or “logic seeking” meaning that the printer head didn’t travel the whole length of the printed line but instead could advance from where it was to where it needed to be on the next line forwards or backwards.   A laser printer it ain’t.

Double-density dot matrix for “near-letter-quality” gave you a pretty polished look.  132 character wide beds were great for nice wide program listings but options were definitely more limited if you were not willing to roll your own interface box.

Still, with a good printer you could do your high school homework in a word processor, and print it in brown ink on beige paper with all your mistakes corrected on screen before you ever wrote a single character.

So much for my Brother Electric.  Thanks anyway mom.

 

Categories: Blogs

Sauce Connect Gets a Speed Boost & WebSocket Support

Sauce Labs - Thu, 11/20/2014 - 03:59

Sauce Connect was designed with security as priority one. But given this technology’s critical position in your testing process we know that performance and utility are important, too. For that reason we have made two major improvements to Sauce Connect tunnels.

  1. Faster startup times. Enhancements to the underlying technology enable Sauce Connect tunnels to start up as much as three times faster.
  2. WebSocket support. The new tunnels support use of the WebSocket protocol in tested applications.

 

We are in the process of gradually migrating all Sauce Connect tunnels to the new architecture so you can expect to begin experiencing the benefits over the coming weeks.

Categories: Companies

Coming Soon – A Reimagined Sauce Labs UI

Sauce Labs - Thu, 11/20/2014 - 03:08

We are passionate about building products and services that help our users maximize the value they get out of their continuous integration and continuous delivery workflows. And while our core products serve this mission well, especially if you have integrated your CI server and are passing us test statuses, we realized we can do even more. We are excited to announce that we have begun work to completely overhaul the Sauce Labs UI and create a new experience specially designed for CI/CD workflows. The new UI will begin rolling out in phases next month.

1. Redesigned Dashboard

The first update to roll out will be a completely redesigned dashboard which will take the place of your account page located at saucelabs.com/account. The new dashboard is designed to aggregate your tests into builds, akin to what you would see on your CI/CD dashboard. The status of each build will be available at a glance as well as a summary of test statuses across the entire build. You can even watch a build progress from the dashboard as test statuses will be updated in real-time. For the new dashboard to work best you’ll need to send us both test statuses and build numbers. If you’re not sending this information now, there’s no time like the present to get set up. And if you’re not yet running your tests through CI/CD, the dashboard will still work beautifully for you, organizing your individual tests clearly in chronological order.

2. New Build Page

Builds will be a brand new concept within the Sauce Labs UI so they will ship with their own brand new view. The build page will show you complete details of the build itself, including run times and status, as well as a complete rundown of all tests associated with that build. The build page will serve as your jumping-off point for diving into test failures.

3. Redesigned Test Page

While the test page will remain functionally similar to the page you see today, we’ll be rolling out a refreshed UI to bring the page in line with the rest of the new experience. Expect a modernized look and feel, enhanced readability, and clean delineation of information.

4. New Archives Page

We’re replacing the current test listing found at saucelabs.com/tests with a new archives page. The archives page will be the home of all your account activity including builds, automated tests, and manual tests. This new page will ship with powerful and precise filtering, giving you the tools you need to quickly pinpoint exactly what you’re looking for.

The new UI will be available in beta before its full release. If you’re interested in being an early adopter, let us know at beta@saucelabs.com.

 

We always love talking with our customers, so if you have questions about the upcoming UI changes, would like to share your experience with the existing UI, or have ideas you’d like to see brought to life get in touch with us at product@saucelabs.com.

Categories: Companies

DevOps and the Resurrection of QA

IBM UrbanCode - Release And Deploy - Thu, 11/20/2014 - 01:25

A couple of years ago I wrote that QA would be a natural mediator for DevOps discussions as it traditionally sits between Dev and Ops, understands Dev’s speed, and has a concern for release quality that Ops respects. Two things had me convinced that this pattern wouldn’t happen. The first is the general lack of respect that the QA org is given in many shops. The second was that QA teams were vanishing quickly – either having budgets cut brutally or being absorbed into development. That blog post was left behind when we moved blogs.

I’m starting to see some interesting signs of life. Release Management is often reporting through QA now and gaining respect and prominence. The best RM teams are playing the role of DevOps facilitator really well. The other trend is the rise of DevOps aware Quality Engineering orgs. One DevOps team I know reports through QE and cares for build automation, deploy/release automation, and helps dev teams setup their automated test harnesses.

The shift that seems to be working out is one Elisabeth Hendrickson (dir, QE @ Pivotal Labs) talked about at the recent DevOps Enterprise Summit. Modern QE isn’t about rows of people following test scripts. It’s about the care and feeding of feedback loops. Because feedback loops are naturally cross-silo, the affinity with DevOps is pretty clear.

Finally, we are starting to see this play out in the tools space. One of my favorite products is our MobileFirst Quality Assurance because while it has clever ways for testers to file bugs from within the context of the app, it also instruments the app to drive data-heavy feedback from users. We are seeing feedback from the field being included in the domain of a QA tool. Awesome. How many QA/QE teams are carefully tracking behavior in production beyond reproducing incident reports? The successful ones will include nurturing those feedback loops, not just the “tell the dev what they broke this week” loop.

Categories: Companies

42,000 Nexus Repository Managers, and Growing!

Sonatype Blog - Thu, 11/20/2014 - 00:10
Over the past 15 months, active Nexus instances have grown from 21,000 to 42,000.  Wowza.   That is news worth sharing, because you made it happen! This means our global Nexus customer base added 47 new instances every single day over that same period.  47 a day!  And the volume of active instances...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Testing Tool Showdown: liteCam HD vs. Mobizen

uTest - Wed, 11/19/2014 - 23:36

Clear, to-the-point bug reports that are backed up with solid evidence are a must for testers when it comes to communicating with developers and getting to the root cause of issues quickly.

And that evidence comes in the form of attachments, which add to a bug report by offering proof of the bug’s existence, enabling the customer or developer to reproduce and quickly rectify the issue at hand.

But with all of the options out there, we wanted to single out a couple that could get testers started, so we turned to two popular screen recording tools from our uTest Tool Reviews: liteCam and Mobizen.

liteCam

liteCam has a four-star average review from our uTesters, and while a couple of testers appreciated that “it packs all the features they need in a single UI that greatly improves their video recording workflow,” performance issues with frequent crashes marred the experience for one tester. What liteCam also has going for it is a choice of a Free edition (videos are watermarked) and a Paid edition of the product.

Mobizen

Mobizen is also a popular screen recording tool amongst our tester base, with an identical four-star average review. Testers have called out its high frame rate, ease of use and installation, and great support on tablets. Another key standout of this particular tool is that it is 100% free.

Which of these screen recording tools gives you the most bang for your buck when it comes to bug report documentation? Be sure to leave your feedback in the Tool Reviews section of uTest or in the comments below.

If you end up choosing one of these options, also be sure to check out our recent uTest University courses on how to set up liteCam HD or Mobizen for screen recording.

Categories: Companies

Continuous Delivery in a .NET World

Adam Goucher - Quality through Innovation - Wed, 11/19/2014 - 17:05

Here is the other talk I did at Øredev this year. The original pitch was to show a single-character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… But that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place. We ended up not hitting that goal, though, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it given what I know about priorities etc.

Below is the video, then the deck, and then the original ‘script’ I wrote for the talk – which, in my usual manner, I deviated from on stage at pretty much every turn. But stories were delivered, mistakes confessed to, and lots of hallway conversations generated, so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

Continuous Delivery in a .NET World from Adam Goucher

Introduction
I’ll admit to having been off the speaking circuit and such for a while, and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stacks such as Rails, Node, etc. Without any data to back up this claim at all, I would hazard a guess that there are more .NET apps out there, especially behind the corporate firewall, than those other stacks. Possibly combined. This means that there are a whole lot of people being ignored by the literature. Or at least the ones not being promoted by a tool vendor… This gap needs to be addressed; companies live and die based on these internal applications and there is no reason why they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery into production… but still not quite there yet. Frustrating … but great fodder for a talk about actually doing this in an existing application ['legacy'] context.

Not surprisingly, the high-level bullets are pretty much the same as with other stacks, but there are of course variations on the themes at play in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update when the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part of everything. Why? Simple… it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, because people fall back to their training in those situations, and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of)
  • externalized configuration
  • non assumption of installation location
  • stop branching!!

Figure out your database
This, I think, is actually the hardest part of a modern application. And is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means…

  • your tooling needs to support that
  • your build chains needs to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.

Convention over Configuration FTW
I’m quite convinced of two things: this is why RoR and friends ‘won’, and why most talks deal with them rather than .NET. To really win at Continuous Delivery [or at least without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! Every single one of our .NET builds is slightly different because of that at 360, which means that we have to spend time wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And it is something the RoR community gets by default with its prevalence of gems. If something has utility and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicate code…

Not all flows are created equal
This is a recent realization, though having said that, is a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs for this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is all about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET, or was it an afterthought? Is it a flash in the pan or is it going to be around? Do its project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work – and this is why it’s often mentioned in the same breath as DevOps (we’ll ignore that whole problem of ‘if you have devops you aren’t doing devops’…) – the management of your infrastructure and environments needs to be fully automated as well. This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up to Windows, so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

Powershell
But even still, there are going to be things for which you need to drop down to the shell. We made a leap forward towards our goal when we let Octopus start to control IIS. But it doesn’t expose enough hooks for the particular needs of our application, so we have to use the IIS cmdlets to do what we need afterwards. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items, you /will/ fail.

Categories: Blogs

What about Microsoft Component Extensions for C++?

Sonar - Wed, 11/19/2014 - 08:32

After my previous blog entry about the support of Objective-C, you could get the impression that we’re fully focused on Unix-like platforms and have completely forgotten about Windows. But that would be a wrong impression – with version 3.2 of the C / C++ / Objective-C plugin released in November, 2014, support for the Microsoft Component Extensions for Runtime Platforms arrived in answer to customer needs. The C-Family development team closely follows discussions in the mailing list for customer support, so don’t hesitate to speak about your needs and problems.

So what does “support of Microsoft Component Extensions for Runtime Platform” mean? It means that the plugin is now able to analyze two more C++ dialects: C++/CLI and C++/CX. C++/CLI extends the ISO C++ standard, allowing programming for a managed execution environment on the .NET platform (Common Language Runtime). C++/CX borrows syntax from C++/CLI, but targets the Windows Runtime (WinRT) and native code instead, allowing programming of Windows Store apps and components that compile to native code. It is also worth noting that not many static analyzers are capable of analyzing those dialects.
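For readers who haven’t seen these dialects, here is a small illustrative fragment (not taken from the plugin or its documentation) showing the kind of syntax involved; C++/CX uses a very similar ref-class and handle syntax but targets WinRT instead of the CLR:

// C++/CLI: compiled with /clr, producing managed code for the .NET runtime.
public ref class Greeter
{
public:
    // System::String^ is a handle to a garbage-collected .NET string.
    System::String^ Greet(System::String^ name)
    {
        return System::String::Concat("Hello, ", name);
    }
};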

So now the full list of supported C++ dialects looks quite impressive – you can see it in the configuration page:

And this doesn’t even count the C and Objective-C languages!

You may also notice from the screenshot above that there is now a clear separation between the ISO standards, the usual Microsoft extensions for C/C++ (which historically come from the Microsoft Visual Studio compiler), and GNU extensions (which historically come from the GCC compiler). The primary reason for the separation is that some of these extensions conflict with each other; as an example, the Microsoft-specific “__uptr” modifier is used as an identifier in the GNU C Library. To ease configuration, the plugin option names closely resemble the configuration options of GCC, Clang and many other compilers.

But wait, you actually don’t need to specify the configuration manually, because you can use the build-wrapper for Microsoft Visual Studio projects just like you can with non-Visual Studio projects. Just download “build-wrapper” and use it as a prefix to the build command for your Microsoft Visual Studio project. As an example:

build-wrapper --out-dir [output directory] msbuild /t:rebuild

and just add a single property to configuration of analysis:

sonar.cfamily.build-wrapper-output=[output directory]

The build wrapper will eavesdrop on the build to gather configuration data, and during analysis the plugin will use the collected configuration without the headaches of manual intervention. Moreover, this works perfectly for projects that have mixed subcomponents written with different dialects.

So all this means that from now on you can easily add projects written using C++/CLI and C++/CX into your portfolio of projects regularly analysed by SonarQube.

Of course, it’s important that the growth in supported dialects is balanced with other improvements, and that’s certainly the case in this version: we made several improvements, added a few rules and fixed 28 bugs. And we’re planning to go even further in the next version. Of course, as usual there will be new rules and improvements, but we’ll also be adding a major new feature which will make analysis vastly more powerful, so stay tuned.

In the meantime, the improvements in version 3.2 are compatible with all SonarQube versions from 3.7.4 forward, and they’re worth adopting today.

Categories: Open Source

Single Sign-On Support for Enterprise Now In Beta

Sauce Labs - Wed, 11/19/2014 - 02:27

At Sauce Labs, we are hard at work identifying new ways to make adoption and usage of our products as simple and frictionless as possible. For larger organizations onboarding hundreds of users, managing access and security can quickly become challenging. To simplify the onboarding process and provide greater account security, we have rolled out integrations for four popular Single Sign-On (SSO) providers, including Ping Identity, OneLogin, Okta, and Microsoft Active Directory Federation Service (ADFS). At a high level, an SSO Identity Provider (IdP) provides a single gateway through which users can access an array of applications without logging into each application separately. A user logs into the IdP with one set of credentials and gains access to all connected applications through that same login.

This new integration reduces the likelihood that users will spend time on password recovery or account access issues and gives account owners greater control over account security. Account owners can optionally require users to log in via a corporate IdP, completely eliminating risks associated with standard account credentials.

How It Works

Our SSO support is based on the SAML 2.0 Browser POST profile. Below is a high-level representation of how authentication between the IdP and Sauce Labs is performed.

  1. User signs into the IdP via a web browser and attempts to access Sauce Labs service.
  2. IdP generates a SAML response in XML.
  3. IdP returns encoded SAML response to the browser.
  4. Browser forwards the SAML response to the Assertion Consumer Service (ACS) URL.
  5. Sauce Labs verifies the SAML response.
  6. Upon successful verification, user is granted access to Sauce Labs.

 

Enabling SSO For Your Account

The new SSO integrations are currently available through an open beta. If you are an Enterprise account owner and you would like to be placed in the beta, simply email us at beta@saucelabs.com and let us know with which provider you are interested in integrating. We will contact you to set up a kickoff and get you squared away. If you are not currently an Enterprise customer and are interested in learning about this and other Enterprise features, contact our sales team.

If you have existing Sauce Labs users, we have developed a quick and painless transition process ensuring your users are able to keep their activity history and data. Once your account is enabled for SSO, your users can access Sauce Labs via the IdP. They will be presented with the option to create a new account or log into their existing account. They need only provide their existing Sauce Labs credentials and sign in. That’s it – the transition process will be completed instantly and that user will be able to access Sauce Labs from the IdP in the future.

SSO Provider Partnerships

In conjunction with the release of our SSO integrations, we are pleased to announce partnerships with some of our amazing service providers. Once your account is enabled for SSO, you will be able to easily connect to your IdP through Ping Identity’s Application Catalog, Okta’s Application Network (OAN), or OneLogin’s Connector.

Further Reading

If you do not currently use a Single Sign-On service provider and are interested in learning more about our integrated services, follow the links below.

Ping Identity
OneLogin
Okta

We love talking with our users so feel free to reach out to us at product@saucelabs.com with any comments, feedback, or requests.

 

Categories: Companies

Results from HP and Vivit Worldwide Experts: Part 1 of 3, Focus on Mobile

HP LoadRunner and Performance Center Blog - Tue, 11/18/2014 - 23:53

Deliver Amazing Apps with Confidence Now! Focus on Mobile Webinar

 

This thought leadership webinar focuses on Mobile: Trends, Themes, and Future, all from experts and your peers in the Vivit Worldwide Community.

 

We have accelerated past ‘normal’ in the business world: a customer-focused and real-time feedback culture where expectations are higher and impacts are faster. We must ‘Deliver Amazing Apps with Confidence Now’. As Developers, Testers, and Operations team members, you need the latest capabilities and proven practices to most effectively deliver these results.

 

Join us to grab ahold of HP’s latest capabilities and practices to deliver Mobile:
  • How to develop mobile applications faster
  • How to test with complex composite applications
  • How to mitigate the risk of the mobile network and other distributed systems and services
  • How to test earlier and throughout the development and testing lifecycle
  • How to reduce time and effort to build and maintain development and test environments
  • How to use ‘Lifecycle Virtualization’ to eliminate dependencies on hardware, software, and services

Categories: Companies

QASymphony Test Management Tool Integrated with Rally

Software Testing Magazine - Tue, 11/18/2014 - 20:11
QASymphony has announced a partnership with Rally Software. QASymphony’s qTest test case management tool now fully integrates with Rally’s Agile Platform for Application Lifecycle Management. This enhanced qTest integration with Rally ALM allows customers to leverage the power of exploratory, manual, and automated testing in one easy-to-use platform. Additionally, customers are granted seamless traceability out of one tool and into the other. qTest is a scalable QA software testing management platform optimized for enterprise Agile development teams. Available as a SaaS application or installed on-premise, qTest allows ...
Categories: Communities

Latest Testing in the Pub Podcast Takes on Testing Weekend Europe

uTest - Tue, 11/18/2014 - 20:00

Steve Janaway and team are back for more pub pints over software testing discussion in the latest Testing in the Pub podcast.

In Episode 13, UK-based software testers Amy Phillips and Neil Studd talk Weekend Testing Europe. Weekend Testing Europe is the European chapter of Weekend Testing and was just relaunched in 2014 by Amy and Neil.

Weekend Testing is a program that aims to facilitate peer-to-peer learning through monthly Skype testing sessions. If you’ll also recall, uTest contributor Michael Larsen is a founding member of the Americas chapter of the program.

Be sure to check out the podcast to learn more about the monthly sessions. If you’re from Europe and interested in participating thereafter, you can send an email and/or ping Testing Weekend Europe on Skype ID europetesters. You can also follow them on Twitter @europetesters.

The latest edition of the podcast is available right here for download and streaming, and is also available on YouTube and iTunes. Be sure to check out the entire back catalog of the series as well, and Stephen’s recent interview with uTest.

Categories: Companies

Advanced script enhancements in LoadRunner’s new TruClient – Native Mobile protocol

HP LoadRunner and Performance Center Blog - Tue, 11/18/2014 - 20:00

In my previous blog post I introduced LoadRunner’s new TruClient – Native Mobile protocol. In this post I’ll explain advanced script enhancements. We’ll cover object identification parameterization, adding special device steps, and overcoming record and replay problems with ‘Analog Mode’. This post will be followed by the final post in this series on the TruClient – Native Mobile protocol, which will focus on debugging using the extended log, running a script on multiple devices, and transaction timings.

 

(This post was written by Yehuda Sabag from the TruClient R&D Team)

Categories: Companies
