
Feed aggregator

Webinar recap: Static analysis’ role in automotive functional safety

Kloctalk - Klocwork - 1 hour 24 min ago

Last week, we held a joint webinar with QNX Software Systems discussing how static analysis plays a key role in automotive functional safety and ISO 26262 (you can watch the recording here). We had developers, testers, architects, and students attend from all over the world and they all had one interest in common: better delivery of safe automotive software.

We always try to understand our attendees and here’s an interesting result from one of the polls we ran (based on table 9 of ISO 26262-6, which lists methods of design verification for software units):

Which of the following tools/techniques does your company employ in its development?
(multiple answers allowed)

Static code analysis – 47%
Walk-through – 45%
Formal verification – 35%
Control flow analysis – 31%
Semantic code analysis – 27%
Data flow analysis – 22%

While static code analysis is clearly the most popular choice among those concerned with automotive functional safety, the other end of the spectrum, manual walk-through, is popular as well. It seems that relying on your own two eyes is still considered a reliable approach!

We asked two more questions specific to ISO 26262 and received these responses:

Is your organization currently working on a product that will be certified to the ISO 26262 standard?

No – 48%
Prefer not to say – 31%
Yes – 21%

Which ASIL level is your company most concerned with?
(multiple answers allowed)

ASIL C – 48%
ASIL B – 41%
ASIL A – 33%
ASIL D – 33%

While a large number of our attendees weren’t currently working on an ISO 26262 project (or preferred not to say), there’s quite a spread of interest across all the safety levels. This isn’t surprising given that our customers work on a wide range of automotive systems for different types of end products.

Regardless of safety level, Klocwork’s ISO 26262-certified checkers reduce the time and effort required for tool qualification – fast-forward to 24:10 in the webinar to see how.

For more on how Klocwork helps reduce the effort required to achieve ISO 26262 certification, read the following resources:

Software on Wheels: Addressing the Challenges of Embedded Automotive Software (PDF)
Fact sheet: Klocwork automotive overview (PDF)

Categories: Companies

Focus on Automated Testing, Discount for uTesters at UCAAT

uTest - 2 hours 9 min ago

Automation is a sector of software testing that has experienced explosive growth and enterprise investment in recent years. The knowledge necessary to learn about and specialize in automated testing is found at industry events like the upcoming 2nd annual User Conference on Advanced Automated Testing (UCAAT) in Munich, Germany from September 16-18, 2014.

The European conference, jointly organized by the “Methods for Testing and Specification” (TC MTS) ETSI Technical Committee, QualityMinds, and German Testing Day, will focus exclusively on use cases and best practices for software and embedded testing automation.

The 2014 program will cover topics like agile test automation, model-based testing, test languages and methodologies, as well as web services and the use of test automation in various industries like automotive, medical technology, and security, to name a few. Noted participants in the opening session include Dr. Andrej Pietschker (Giesecke & Devrient), Professor Ina Schieferdecker (Free University of Berlin), Markus Becher (BMW), Dr. Heiko Englert (Siemens), and Dr. Alexander Pretschner (Technical University of Munich).

UCAAT 2013, which took place in Paris, attracted 200 participants and included 21 technical presentations held by renowned speakers such as Professor Lionel Briand (University of Luxembourg) and Matthias Rasking (Accenture).

As a special offer to our testing community, you can receive a 5% discount on new registrations for UCAAT. Email us for the special discount code for this and other shows.

Also, be sure to check out the Events calendar for upcoming online and in-person events!

Categories: Companies

Understanding Application Performance on the Network – Part VI: The Nagle Algorithm

In Part V, we discussed processing delays caused by “slow” client and server nodes. In Part VI, we’ll discuss the Nagle algorithm, a behavior that can have a devastating impact on performance and, in many ways, appear to be a processing delay.

Common TCP ACK Timing

Beyond being important for (reasonably) accurate packet flow diagrams, […]

The post Understanding Application Performance on the Network – Part VI: The Nagle Algorithm appeared first on Compuware APM Blog.

Categories: Companies

Networking is important–or what we are really not good at

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

Many of us software developers work with computers to avoid contact with people. To be fair, we have all had our fair share of clients who would not understand why we couldn’t draw red lines with green ink. I understand why we would rather stay away from people who don’t understand what we do.

However… (there’s always a however) as I recently started my own business, I’ve really started to understand the value of building your network and staying in contact with people. Being an MVP has always led me to meet great people all around Montreal, but the real value I saw was when a very good contact of mine introduced me to one of my first clients. He knew they needed someone with my skills and introduced me directly, skipping all the queues.

You can’t really ask for more. My first client was a big company. You can’t get in there without being a big company that won a bid, being someone renowned, or having the right contacts.

You might never be the big company, and you might never be renowned, but you can definitely work on your contacts and expand the number of people you know.

So what can you do to expand your contacts and grow your network?

Go to user groups

This kills two birds with one stone. First, you learn something new. It might be boring if you already know everything, but let me give you a nice trick.

Arrive early and chat with people. If you are new, ask them if they are new too; ask them about their favourite presentation (if any), where they work, whether they like it, etc. Boom. First contact is done. You can stop sweating.

If this person has been there more than once, they probably know other people you can be introduced to.

Always have business cards

I’m a business owner now, so I need to have cards. You might think of yourself as a low-importance developer, but if you meet people and impress them with your skills… they will want to know where you hang out.

If your business won’t put $50 toward you, make your own! VistaPrint makes “networking cards” where you can just input your name, email, position, social networks, whatever, and you can get 500 for less than $50.

Everyone in the business should have business cards, especially those who make the company money.

Don’t expect anything

I know… giving out your card sounds like you want to sell something to people, or that you want them to call you back.

When I give my card, it’s in the hope that when they get home later that night and see my card, they will think “Oh yeah, it’s that guy I had a great conversation with!”. I don’t want them to think I’m there to sell them something.

My go-to phrase when I give it to them is “If you have any question or need a second advice, call me or email me! I’m always available for people like you!”

And I am.

Follow-up after giving out your card

When you give your card and receive one in exchange (you should!), send them a personal email. Mention something you liked from the conversation you had and ask whether you can add them on LinkedIn (always good). This may seem obvious to a salesman, but we developers often forget that an email the day after has a very good impact.

People will remember you for writing to them personally with specific details from the conversation.

Yes, that means no “copy/paste” emails. You’ve got to make it personal.

If the other person doesn’t have a business card, take the time to note their email and full name (bring a pad!).

Rinse and repeat

If you keep on doing this, you should start to build a very strong network of developers in your city. If you have a good profile, recruiters should also start to notice you. Especially if you added all those people on LinkedIn.

It’s all about incremental growth. You won’t be a superstar tomorrow (and neither am I), but by working at it, you might end up finding your next job through weird contacts that you only met once but who were impressed by who you are.


So here’s the Too Long; Didn’t Read version: go out, get business cards, and give them to everyone you meet. Your intention is to help them, not to sell them anything. Repeat often.

But in the long run, it’s all about getting out there. If you want a more detailed read of what real networking is about, you should definitely read Work the Pond by Darcy Rezac. It’s a very good read.

Categories: Blogs

Massive Community Update 2014-07-04

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So here I go again! We have Phil Haack explaining how he handles tasks in his life with GitHub, James Chambers’ series on MVC and Bootstrap, the Visual Studio 2013 Update 3 release candidate, a new MVC + Web API release and more!

In particular, don’t miss this awesome series by Tomas Jansson about CQRS. He did an awesome job and I think you need to read it!

So beyond this, I’m hoping you guys have a great day!

Must Read

GitHub Saved My Marriage - You've Been Haacked (

James Chamber’s Series

Day 21: Cleaning Up Filtering, the Layout & the Menu | They Call Me Mister James (

Day 22: Sprucing up Identity for Logged In Users | They Call Me Mister James (

Day 23: Choosing Your Own Look-And-Feel | They Call Me Mister James (

Day 24: Storing User Profile Information | They Call Me Mister James (

Day 25: Personalizing Notifications, Bootstrap Tables | They Call Me Mister James (

Day 26: Bootstrap Tabs for Managing Accounts | They Call Me Mister James (

Day 27: Rendering Data in a Bootstrap Table | They Call Me Mister James (


Nodemon vs Grunt-Contrib-Watch: What’s The Difference? (


Update 3 Release Candidate for Visual Studio 2013 (

Test-Driven Development with Entity Framework 6 -- Visual Studio Magazine (


Announcing the Release of ASP.NET MVC 5.2, Web API 2.2 and Web Pages 3.2 (

Using Discovery and Katana Middleware to write an OpenID Connect Web Client | on (

Project Navigation and File Nesting in ASP.NET MVC Projects - Rick Strahl's Web Log (

ASP.NET Session State using SQL Server In-Memory (

CQRS Series (code on GitHub)

CQRS the simple way with eventstore and elasticsearch: Implementing the first features (

CQRS the simple way with eventstore and elasticsearch: Implementing the rest of the features (

CQRS the simple way with eventstore and elasticsearch: Time for reflection (

CQRS the simple way with eventstore and elasticsearch: Build the API with simple.web (

CQRS the simple way with eventstore and elasticsearch: Integrating Elasticsearch (

CQRS the simple way with eventstore and elasticsearch: Let us throw neo4j into the mix (

Ending discussion to my blog series about CQRS and event sourcing (


Michael Feathers - Microservices Until Macro Complexity (

Windows Azure

Azure Cloud Services and Elasticsearch / NoSQL cluster (PAAS) | I'm Pedro Alonso (


Monitoring (

Search Engines (ElasticSearch, Solr, etc.)

Fast Search and Analytics on Hadoop with Elasticsearch | Hortonworks (

This Week In Elasticsearch | Blog | Elasticsearch (

Solr vs. ElasticSearch: Part 1 – Overview | Sematext Blog on (

Categories: Blogs

Community Update 2014-06-25

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So not everything is brand new, since I did my last community update only 8 days ago. What I suggest most highly is the combination of EventStore and ElasticSearch in a great article by Tomas Jansson.

It’s definitely a must read and I highly recommend it. Of course, don’t miss the series by James Chambers on Bootstrap and MVC.

Enjoy all the reading!

Must Read

Be more effective with your data - ElasticSearch | Raygun Blog (

Your Editor should Encourage You - You've Been Haacked (

Exploring cross-browser math equations using MathML or LaTeX with MathJax - Scott Hanselman (

CQRSShop - Tomas Jansson ( – Link to a tag that contains 3 blog post that are must read.

James Chambers Series

Day 18: Customizing and Rendering Bootstrap Badges | They Call Me Mister James (

Day 19: Long-Running Notifications Using Badges and Entity Framework Code First | They Call Me Mister James (

Day 20: An ActionFilter to Inject Notifications | They Call Me Mister James (

Web Development

Testing Browserify Modules In A (Headless) Browser (


Fredrik Normén - Using Razor together with ASP.NET Web API (

A dynamic RequireSsl Attribute for ASP.NET MVC - Rick Strahl's Web Log (

Versioning RESTful Services | Howard Dierking (

ASP.NET vNext Routing Overview (


Exceptions exist for a reason – use them! | John V. Petersen (

Nuget Dependencies and latest Versions - Rick Strahl's Web Log (

Trying Redis Caching as a Service on Windows Azure - Scott Hanselman (

Categories: Blogs

Massive Community Update 2014-06-17

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So as usual, here’s what’s new since a week ago.

Ever had problems downloading SQL Server Express? Too many links, download managers, version selection, etc.? Fear not, Hanselman to the rescue. I’m also sharing the IE Developer Channel, which you should definitely take a look at.

We also continue to follow the series by James Chambers.

Enjoy your reading!

Must Read

Download SQL Server Express - Scott Hanselman (

Announcing Internet Explorer Developer Channel (

Thinktecture.IdentityManager as a replacement for the ASP.NET WebSite Administration tool - Scott Hanselman (


Why Use Node.js? A Comprehensive Introduction and Examples | Toptal (

Building With Gulp | Smashing Magazine (

James Chambers Series

Day 12: | They Call Me Mister James (

Day 13: Standard Styling and Horizontal Forms | They Call Me Mister James (

Day 14: Bootstrap Alerts and MVC Framework TempData | They Call Me Mister James (

Day 15: Some Bootstrap Basics | They Call Me Mister James (

Day 16: Conceptual Organization of the Bootstrap Library | They Call Me Mister James (


Owin middleware (

Imran Baloch's Blog - K, KVM, KPM, KLR, KRE in ASP.NET vNext (

Jonathan Channon Blog - Nancy, ASP.Net vNext, VS2014 & Azure (

Back To the Future: Windows Batch Scripting & ASP.NET vNext | A developer's blog (

Dependency Injection in ASP.NET vNext (


Here Come the .NET Containers | Wintellect (

Architecture and Methodology

BoundedContext (

UnitTest (

Individuals, Not Groups | 8th Light (

Open Source

Download Emojis With Octokit.NET - You've Been Haacked (


Elasticsearch migrations with C# and NEST | Thomas Ardal (

Categories: Blogs

Massive Community Update 2014-06-12

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So I’ve been doing a bit of an experiment. I’ve noticed that these community updates are normally rather small, so I’ve waited a whole week before posting something new to see if we get better content.

I like the “Massive Community Update” for the number of links it provides and for the chance to put James Chambers’ whole series in perspective.

If you’re still thinking about it… read it. It’s worth it.

Visual Studio “14” CTP

TWC9: Visual Studio "14" CTP Episode (

NDC Oslo 2014

0-layered architecture on Vimeo (

Monitoring your app with Logstash and Elasticsearch on Vimeo (

James Chambers Series

Day 7: Semi-Automatic Bootstrap – Display Templates | They Call Me Mister James (

Day 8: Semi-Automatic Bootstrap – Editor Templates | They Call Me Mister James (

Day 9: Templates for Complex Types | They Call Me Mister James (

Day 10: HtmlHelper Extension Methods | They Call Me Mister James (

Day 11: Realistic Test Data for Our View | They Call Me Mister James (

Web Development

NDC 2014: SOLID CSS/JavaScript & Bower talks | Anthony van der Hoorn (

Browserify: My New Choice For Modules In A Browser / Backbone App (


Final Thoughts on Nuget and Some Initial Impressions on the new KVM | The Shade Tree Developer on (

C# - A C# 6.0 Language Preview (


Host AngularJS (Html5Mode) in ASP.NET vNext (

ASP.NET: Building Web Application Using ASP.NET and Visual Studio (

jaywayco » Is ASP.Net vNext The New Node.js (

Learn How to Build a Modern Web Application with Client Side JavaScript and ASP.NET (

Fire and Forget on ASP.NET (

ASP.NET vNext Moving Parts: OWIN (

POCO controllers in ASP.NET vNext - StrathWeb (

Jon Galloway - A 30 Minute Look At ASP.NET vNext (


FIXED: Blue Screen of Death (BSOD) 7E in HIDCLASS.SYS while installing Windows 7 - Scott Hanselman (

Guide to Freeing up Disk Space under Windows 8.1 - Scott Hanselman (

GitHub for Windows 2.0 - You've Been Haacked (

Simplified Setup and Use of Docker on Microsoft Azure | MS OpenTech (

Categories: Blogs

Community Update 2014-06-04 ASP.NET vNext, @CanadianJames MVC Bootstrap series and what we learned from C++

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So the big news is that Visual Studio 14 actually reached CTP. Of course, this is not the final name and is very temporary.

If you want to install it, I suggest booting up a VM locally or on Windows Azure.


Visual Studio “14”

Visual Studio "14" CTP Downloads (

Announcing web features in Visual Studio “14” CTP (

Visual Studio "14" CTP (

ASP.NET vNext in Visual Studio “14” CTP (

Morten Anderson - ASP.NET vNext is now in Visual Studio (

James Chambers MVC/Bootstrap Series

Day 4: Making a Page Worth a Visit | They Call Me Mister James (

Web Development

To Node.js Or Not To Node.js | Haney Codes .NET (


aburakab/ASP-MVC-Tooltip-Validation · GitHub ( – Translate MVC errors to Bootstrap notification

Download Microsoft Anti-Cross Site Scripting Library V4.3 from Official Microsoft Download Center (

ASP.NET Web API parameter binding part 1 - Understanding binding from URI (

Cutting Edge - External Authentication with ASP.NET Identity (

Forcing WebApi controllers to output JSON (


What – if anything – have we learned from C++? (

Search Engines

Elasticsearch 1.2.1 Released | Blog | Elasticsearch (

Marvel 1.2 Released | Blog | Elasticsearch (

Dealing with human language (

Categories: Blogs

Late Community Update 2014-06-02 REST API, Visual Studio Update 3, data indexing, Project Orleans and more

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So I was at the MVP Open Days and I’ve missed a few days. It seems that my fellow MVP James Chambers has started a great initiative exploring Bootstrap and MVC, with lots of tips and tricks. Do not miss out!

Otherwise, this is your classic “I’ve missed a few days so here are 20,000 interesting links that you must read” kind of day.


Must Read

AppVeyor - A good continuous integration system is a joy to behold - Scott Hanselman (

This URL shortener situation is officially out of control - Scott Hanselman (

James Chambers Bootstrap and MVC series

Day 0: Bootstrapping Mvc for the Next 30 Days | They Call Me Mister James (

Day 1: The MVC 5 Starter Project | They Call Me Mister James (

Day 2: Examining the Solution Structure | They Call Me Mister James (

Day 3: Adding a Controller and View | They Call Me Mister James (

Web Development

How much RESTful is your API | Bruno Câmara (

Data-binding Revolutions with Object.observe() - HTML5 Rocks (


ASP.NET Moving Parts: IBuilder (

Supporting only JSON in ASP.NET Web API - the right way - StrathWeb (

Shamir Charania: Hacky In Memory User Store for ASP.NET Identity 2.0 (


Missing EF Feature Workarounds: Filters | Jimmy Bogard's Blog (

Visual Studio/Team Foundation Server 2013 Update 3 CTP1 (VS 2013.3.1 if you wish) (

TWC9: Visual Studio 2013 Update 3 CTP 1, Code Map, Code Lens for Git and more... (

.NET 4.5 is an in-place replacement for .NET 4.0 - Rick Strahl's Web Log (

ASP.NET - Topshelf and Katana: A Unified Web and Service Architecture (

Windows Azure

Episode 142: Microsoft Research project Orleans simplify development of scalable cloud services (



Search Engines

The Absolute Basics of Indexing Data | Java Code Geeks (

Categories: Blogs

Incompatibility between Nancy and Superscribe

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So I had the big idea of building an integration between Nancy and Superscribe and trying to show how to do it.

Sadly, this is not going to happen.

Nancy doesn’t treat routing as a first-class citizen the way Superscribe does, and it doesn’t allow interaction with routing middleware. Nancy has its own packaged routing and will not let Superscribe provide it the URL.

Nancy does work alongside Superscribe, but you have to hard-code the URL inside the NancyModule. So if you update your Superscribe graph URL, Nancy will not respond on the new URL unless you change the hard-coded string.

I haven’t found a solution yet but if you do, please let me know!

Categories: Blogs

Skills Matrix & Development Plan - Template Walkthrough

Yet another bloody blog - Mark Crowther - Thu, 07/24/2014 - 00:33
One thing we'll get asked at some point is to assess the skills and competencies of the test team. To do that, we need to understand what those skills and competencies actually are and how we're going to assess them. We also need to decide what we're going to do with the information we gather.

Skills and competencies come in many shapes and forms. They range from hard learning gained through team members' study to raw experience gained over many years and projects delivered. As such, we need to agree how to group them, then break them down into our Skills Matrix.


In the Skills Matrix on the site, we have the following examples:

  • Technical - Tools and Technology
  • Testing
  • Application

Clearly you could break these down in many ways, but these are a good start. Under each category we have entered specific examples such as;

  • Tools: ALM, UFT, Jira, Toad, PuTTY

Technology is more general and could include scripting languages, protocols or maybe servers and operating systems. As with all templates, it provides a guide but it's up to you to interpret and apply it to your unique testing or management problem.

In order for the team to be ranked (or rank themselves), we need to understand what those ranks are and what 'value' we're assigning. On the About tab, you'll see this has been defined as:

  • Level 1, No knowledge: No practical, working knowledge; should be able to use if provided clear guidance
  • Level 2, Awareness: Can work with existing solutions and practices; understands what to do but perhaps not fully why
  • Level 3, Proficiency: Can maintain and provide minor improvements; notable skill in some areas
  • Level 4, Competency: Full understanding of existing solutions and practices required for day-to-day work
  • Level 5, Expertise: Able to critically assess and improve on current use and build future capability
Clear definitions are essential, but in no way perfect. Use these as a guide but encourage the team not to labour too much over them.

A word of warning...
When rolling out the Skills Matrix and asking the team to rank themselves, the first question will be 'Why?'. It isn't unreasonable to expect that you'll spook the team into wondering what it might mean to rank low on the items you want to assess. After all, you wouldn't be asking them to complete it if it wasn't relevant.

Be sure to reassure them that this is to help identify the skill base of the team, to make the assignment of testing tasks more effective, and to identify ways in which the team members can be trained, and so increase the team's capability.

Professional Development Planning
You would do well to introduce a strong process of review and assessment of the team, before you roll out the Skills Matrix.

To help with this, grab a copy of the PDP Scratch Pad template and have a read through of the Developing the Team paper to learn more about implementing an appraisal process, both are on the main site.


Liked this post? Say thanks by following the blog or subscribing to the YouTube channel!

Categories: Blogs

Seeking the Light – A question from a recent TDD training attendee

James Grenning’s Blog - Wed, 07/23/2014 - 21:51

Here is a good question, and my reply, from a recent attendee of my Test-Driven Development for Embedded C training.

Hi James,

As I work more with TDD, one of the concepts I am still struggling to grasp is how to test “leaf” components that touch real hardware. For example, I am trying to write a UART driver. How do I test that using TDD? It seems like to develop/write the tests, I will need to write a fake UART driver that doesn’t touch any hardware. Let’s say I do that. Now I have a really nice TDD test suite for UART drivers. However, I still need to write a real UART driver…and I can’t even run the TDD tests I created for it on the hardware. What value am I getting from taking the TDD approach here?

I feel like for low-level, hardware touching stuff you can’t really apply TDD. I understand if I didn’t have the hardware I could write a Mock, but in my case I have the hardware so why not just write the real driver?

I am really confused about this…and so are my co-workers. Can you offer any words of wisdom to help us see the light?


Seeking the Light

Hi Seeking the Light

I am happy to help. Thanks for the good question.

Unit tests and integration tests are different. We focused on unit testing in the class. You test-drove the flash driver on Tuesday afternoon, which showed you how to test-drive a device driver from the spec. You mocked out IORead and IOWrite, not the flash driver. You test-drove the flash driver so that when you go to the hardware, you have code that is doing what you think it is supposed to do.

The unit tests you write with mock IO are not meant to run with the real IO device, but with the fake versions of IORead and IOWrite. You could run the test suite on the real hardware, but the unit tests would still use mock IO.

I think the flash driver exercise illustrated the value. Pretty much everyone who does the flash driver exercise cannot get the ready loop right without several attempts. Most end up with an infinite loop, or a loop that does not run at all. With the TDD approach, we discover logic mistakes like that during off-target TDD. We want to find logic mistakes during test-driving because they are easy to identify and fix with the fast feedback TDD provides. Finding the problem on-target, with a lot of other code that can also be wrong, is more difficult and time consuming. If your driver's ready check resulted in an infinite loop, that can be hard to find; maybe your watchdog timer keeps resetting the board as you hunt for the problem. Bottom line: it is cheaper to find those mistakes with TDD.

TDD can’t find every problem. What if you were wrong about which bit was the ready bit? An integration test could find it. An integration test would use the real UART driver with the real IORead and IOWrite functions. These tests make sure that the driver works with the real hardware. They are different from the unit tests and are worth writing. You could put a loopback connector on your UART connector, and your integration test could send and receive test data over the loopback. If your driver was looking at the wrong bit for the ready check, you would still have an infinite loop, but that happens only if you misread the spec. You’d have to find that mistake via review or integration test.

An integration test may be partially automated. You don’t need to run these as often, so partial automation should be OK. You would only rerun them when you touch the driver or are preparing a release. (A loopback is probably better in this case, as it can run unattended.) So the test might output a string to a terminal and wait for a string to be entered. Depending on the other signals your driver supports, you may want to break out and control those signals in a physical test harness.

An integration test for the flash driver would exercise the flash device through the driver. You might read and write blocks of values to the real flash device. You might do the flash identification sequence. You might protect a block and try to write to it; your integration test would make sure modification is prevented and the right error message is generated. These tests use the real versions of IORead and IOWrite and run only on the hardware. When integration problems are found, solve them, then go back to the unit tests and make them reflect reality. You will know which tests need to change, because once the integration problems are fixed, the associated unit tests will fail.

Some other words in your question make me want to talk about a fake UART driver. You will want a fake UART driver when you are test-driving code that uses the UART driver. For example, a message processor that waits for a string will be much easier to test if you fake the get_string() function. You can build that fake with mocking or craft it by hand, depending upon your needs.

All that said, in general the tests above the hardware abstraction layer (the layer your UART driver is part of) are the most valuable tests. They should encompass your product’s intelligence and uniqueness. Hardware comes and then it goes, as do the drivers, as the components change. Your business logic has, or should have, a long useful life. The business logic for a successful product should last longer than any hardware platform’s life; consequently, those tests have a longer useful life too. If I were creating a driver from scratch, I would use TDD because it is the fastest way for me to work, and it results in code that can be safely changed as I discover where my mistakes are.

I hope this helps.


Categories: Blogs

Conventional HTML in ASP.NET MVC: Data-bound elements

Jimmy Bogard - Wed, 07/23/2014 - 19:03

Other posts in this series:

We’re now at the point where our form elements replace the existing templates in MVC and extend to the HTML5 form elements, but there’s still something missing: I skipped over the dreaded DropDownList, with its wonky SelectListItem objects.

Drop down lists can be quite a challenge. Typically in my applications I have drop down lists based on a few known sets of data:

  • Static list of items
  • Dynamic list of items
  • Dynamic contextual list of items

The first one is an easy target, solved with the previous post and enums. If a list doesn’t change, just create an enum to represent those items and we’re done.

The second two are more of a challenge. Typically what I see is attaching those items to the ViewModel or ViewBag, along with the actual model. It’s awkward, and it combines two separate concerns: “what have I chosen” is a different concern from “what are my choices”. Let’s tackle those last two cases separately.

Dynamic lists

Dynamic lists of items typically come from a persistent store. An administrator goes to some configuration screen to configure the list of items, and the user picks from this list.

Common here is that we’re building a drop down list based on set of known entities. The definition of the set doesn’t change, but its contents might.

On our ViewModel, we’d handle this in our form post with an entity:

public class RegisterViewModel
{
    public string Email { get; set; }

    public string Password { get; set; }

    public string ConfirmPassword { get; set; }

    public AccountType AccountType { get; set; }
}

We have our normal registration data, but the user also gets to choose their account type. The values of the account type, however, come from the database (and we use model binding to automatically bind up in the POST the AccountType you chose).

Going from a convention point of view, if we have a model property that’s an entity type, let’s just load up all the entities of that type and display them. If you have an ISession/DbContext, this is easy, but wait, our view shouldn’t be hitting the database, right?


Luckily for us, our conventions let us easily handle this scenario. We’ll take the same approach as our enum drop down builder, but instead of using type metadata for our list, we’ll use our database.


// Our modifier
public class EntityDropDownModifier : IElementModifier
{
    public bool Matches(ElementRequest token)
    {
        return typeof (Entity).IsAssignableFrom(token.Accessor.PropertyType);
    }

    public void Modify(ElementRequest request)
    {
        request.CurrentTag.RemoveAttr("type");
        request.CurrentTag.TagName("select");
        request.CurrentTag.Append(new HtmlTag("option"));

        var context = request.Get<DbContext>();
        var entities = context.Set(request.Accessor.PropertyType).Cast<Entity>();
        var value = request.Value<Entity>();

        foreach (var entity in entities)
        {
            var optionTag = new HtmlTag("option")
                .Attr("value", entity.Id)
                .Text(entity.DisplayValue);

            if (value != null && value.Id == entity.Id)
                optionTag.Attr("selected", "selected");

            request.CurrentTag.Append(optionTag);
        }
    }
}

Instead of going to our type system, we query the DbContext to load all entities of that property type. We built a base entity class for the common behavior:

public abstract class Entity
{
    public Guid Id { get; set; }
    public abstract string DisplayValue { get; }
}

This goes into how we build our select element, with the display value shown to the user and the ID as the value. With this in place, our drop down in our view is simply:

<div class="form-group">
    @Html.Label(m => m.AccountType)
    <div class="col-md-10">
        @Html.Input(m => m.AccountType)
    </div>
</div>
And any entity-backed drop-down in our system requires zero extra effort. Of course, if we needed to cache that list we would do so but that is beyond the scope of this discussion.
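If caching did become necessary, one minimal sketch (a hypothetical helper of my own, assuming System.Runtime.Caching and an arbitrary five-minute expiry) could wrap the entity lookup like this:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

// Hypothetical helper: caches the entity list per entity type so repeated
// drop-down renders don't re-query the database.
public static class EntityListCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static IReadOnlyList<TEntity> GetOrLoad<TEntity>(
        Func<IReadOnlyList<TEntity>> loader)
    {
        var key = typeof(TEntity).FullName;
        var cached = Cache.Get(key) as IReadOnlyList<TEntity>;
        if (cached != null)
            return cached;

        // A cache miss hits the database once, then the entry expires
        // after five minutes.
        var items = loader();
        Cache.Set(key, items, DateTimeOffset.Now.AddMinutes(5));
        return items;
    }
}
```

The modifier above would call GetOrLoad with a lambda around its DbContext query; invalidation on entity edits is left out of this sketch.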

So we’ve got dynamic lists done, what about dynamic lists with context?

Dynamic contextual list of items

In this case, we actually can’t really depend on a convention. The list of items is dynamic, and contextual. Things like “display a drop down of active users”. It’s dynamic since the list of users will change and contextual since I only want the list of active users.

It then comes down to the nature of our context. Is the context static, or dynamic? If it’s static, then perhaps we can build some primitive beyond just an entity type. If it’s dynamic, based on user input, that becomes more difficult. Rather than trying to focus on a specific solution, let’s take a look at the problem: we have a list of items we need to show, and have a specific query needed to show those items. We have an input to the query, our constraints, and an output, the list of items. Finally, we need to build those items.

It turns out this isn’t really a good choice for a convention – because a convention doesn’t exist! It varies too much. Instead, we can build on the primitives of what is common, “build a name/ID based on our model expression”.

What we wound up with is something like this:

public static HtmlTag QueryDropDown<T, TItem, TQuery>(this HtmlHelper<T> htmlHelper,
    Expression<Func<T, TItem>> expression,
    TQuery query,
    Func<TItem, string> displaySelector,
    Func<TItem, object> valueSelector)
    where TQuery : IRequest<IEnumerable<TItem>>
{
    var expressionText = ExpressionHelper.GetExpressionText(expression);
    ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);
    var selectedItem = (TItem)metadata.Model;

    var mediator = DependencyResolver.Current.GetService<IMediator>();
    var items = mediator.Send(query);

    var select = new SelectTag(t =>
    {
        t.Option("", string.Empty);

        foreach (var item in items)
        {
            var htmlTag = t.Option(displaySelector(item), valueSelector(item));

            if (item.Equals(selectedItem))
                htmlTag.Attr("selected", "selected");
        }

        t.Attr("name", expressionText);
    });

    return select;
}
We represent the list of items we want as a query, then execute the query through a mediator. From the results, we specify what should be the display/value selectors. Finally, we build our select tag as normal, using an HtmlTag instance directly. The query/mediator piece is the same as I described back in my controllers on a diet series, we’re just reusing the concept here. Our usage would look something like:

<div class="col-md-10">
    @Html.QueryDropDown(m => m.User,
        new ActiveUsersQuery(),
        t => t.FullName,
        t => t.Id)
</div>

If the query required contextual parameters – not a problem, we simply add them to the definition of our request object, the ActiveUsersQuery class.
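To make that concrete, here is a hedged sketch of what ActiveUsersQuery might look like; the IRequest marker interface stands in for the mediator abstraction from the controllers-on-a-diet series, and the item type and property names are my assumptions:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the mediator library's request marker interface.
public interface IRequest<out TResponse> { }

// Hypothetical projection returned by the query.
public class UserItem
{
    public Guid Id { get; set; }
    public string FullName { get; set; }
}

// The query declares the result it produces; contextual constraints
// (here, an assumed ActiveSince cutoff) are just properties on it.
public class ActiveUsersQuery : IRequest<IEnumerable<UserItem>>
{
    public DateTime ActiveSince { get; set; }
}
```

A handler elsewhere would translate the query's properties into the WHERE clause, keeping the view free of query logic.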

So that’s how we’ve tackled dynamic lists of items. Depending on the situation, it requires conventions, or not, but either way the introduction of the HtmlTag library allowed us to programmatically build up our HTML without resorting to strings.

We’ve tackled the basics of building input/output/label elements, but we can go further. In the next post, we’ll look at building higher-level components from these building blocks that can incorporate things like validation messages.


Categories: Blogs

Appium Bootcamp – Chapter 2: The Console

Sauce Labs - Wed, 07/23/2014 - 17:30

This is the second post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. To read the first post, click here.

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the second of eight posts; a new post will be released each week.

Configuring Appium

In order to get Appium up and running there are a few additional things we’ll need to take care of.

If you haven’t already done so, install Ruby and setup the necessary Appium client libraries (a.k.a. “gems”). You can read a write-up on how to do that here.

Installing Necessary Libraries

Assuming you’ve already installed Ruby and need some extra help installing the gems, here’s what you need to do.

  1. Install the gems from the command-line with gem install appium_console
  2. Once it completes, run gem list | grep appium

You should see the following listed (your version numbers may vary):

appium_console (1.0.1)
appium_lib (4.0.0)

Now you have all of the necessary gems installed on your system to follow along.

An Appium Gems Primer

appium_lib is the gem for the Appium Ruby client bindings. It is what we’ll use to write and run our tests against Appium. It was installed as a dependency to appium_console.

appium_console is where we’ll focus most of our attention in the remainder of this and the next post. It is an interactive prompt that enables us to send commands to Appium in real-time and receive a response. This is also known as a read-eval-print loop (REPL).

Now that we have our libraries setup, we’ll want to grab a copy of our app to test against.

Sample Apps

Don’t have a test app? Don’t sweat it. There are pre-compiled test apps available to kick the tires with. You can grab the iOS app here and the Android app here. If you’re using the iOS app, you’ll want to make sure to unzip the file before using it with Appium.

If you want the latest and greatest version of the app, you can compile it from source. You can find instructions on how to do that for iOS here and Android here.

Just make sure to put your test app in a known location, because you’ll need to reference the path to it next.

App Configuration

When it comes to configuring your app to run on Appium, there are a lot of similarities to Selenium — namely the use of Capabilities (“caps” for short).

You can specify the necessary configurations of your app through caps by storing them in a file called appium.txt.

Here’s what appium.txt looks like for the iOS test app to run in an iPhone simulator:

platformName = "ios"
app = "/path/to/"
deviceName = "iPhone Simulator"

And here’s what appium.txt looks like for Android:

platformName = "android"
app = "/path/to/api.apk"
deviceName = "Android"
avd = "training"

For Android, note the additional avd capability. The "training" value is the Android Virtual Device that we configured in the previous post. This is necessary for Appium to auto-launch the emulator and connect to it. This type of configuration is not necessary for iOS.

For a full list of available caps, read this.

Go ahead and create an appium.txt with the caps for your app (making sure to place it in the same directory as the Gemfile we created earlier).
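For example, a hypothetical shell session that writes the Android caps shown above into appium.txt (adjust the app path to wherever you saved your test app):

```shell
# Create appium.txt in the current directory (the one with the Gemfile).
cat > appium.txt <<'EOF'
platformName = "android"
app = "/path/to/api.apk"
deviceName = "Android"
avd = "training"
EOF

# Confirm the caps were written.
cat appium.txt
```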

Launching The Console

Now that we have a test app on our system and have configured it to run in Appium, let’s fire up the Appium Console.

First we’ll need to start the Appium server, so let’s head over to the Appium GUI and launch it. It doesn’t matter which radio button is selected (Android or Apple); just click the Launch button in the top right-hand corner of the window. After clicking it, you should see some debug information in the center console. Assuming there are no errors or exceptions, it should be up and ready to receive a session.

After that, go back to your terminal window and run arc (from the same directory as appium.txt). This is the execution command for the Appium Ruby Console. It will take the caps from appium.txt and launch the app by connecting it to the Appium server. When it’s done you will have an emulator window of your app that you can interact with, as well as an interactive command prompt for Appium.


Now that we have our test app up and running, it’s time to interrogate our app and learn how to interact with it.

Click HERE to go to Chapter 1.

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies

Announcing the 2014 Summer Bug Battle, uTest’s First Since 2010

uTest - Wed, 07/23/2014 - 16:18

uTest is happy and excited to announce that a proud tradition and competition that started in our community in 2008 is back after a four-year hiatus…the Bug Battle!

Bug battles are arguably even more popular now than they were the last time we held this esteemed competition. Companies from Microsoft to Facebook are offering up bounties to testers who find the most crucial bugs bogging down their apps, putting their companies’ credibility on the line.

The Bug Battle launches right now, Wednesday, July 23. Testers will have two weeks, until Wednesday, August 6th, to submit the most impactful Desktop, Web and Mobile bugs from testing tools contained on our Tool Reviews site. Only the best battlers will take home all the due glory, respect, and the cash prizes! And speaking of those cash prizes, we’ll be awarding well over $1000, along with uTest swag for bugs that are not only the most crucial and impactful, but that are part of well-written bug reports.

Want to be updated on all of the action? Be sure to follow along on your favorite social media channels so you don’t miss any of the milestones:

We’ll also be keeping you covered on the competition here at the uTest Blog every step of the way, along with the announcement of the winners on Wednesday, August 20th…after the community gets their say in voting!

The competition is only for members of the uTest Community, which…ahem…is totally free, so if you’re not a member, sign up today. Beyond the competition, you’ll also have access to some of the top testing talent in the industry in our Forums, and a wealth of free training content at uTest University.

Be sure to check out all of the full submission details, rules, prizes and deadlines over at the official 2014 Summer Bug Battle site.

Let the games begin!

Categories: Companies

Flexibility increases appeal of open source for public sector

Kloctalk - Klocwork - Wed, 07/23/2014 - 15:05

Open source software is currently experiencing a surge in both popularity and applicability. While the technology has been around for quite a while by this point, never before has open source software been embraced to this degree. Perhaps the most notable example of this trend is the growing role played by these solutions in the public sector. Increasingly, governments around the world are leveraging open source for a wide range of purposes.

There are a number of factors driving this trend. Among the most significant, as Government Computing recently highlighted, is the growing realization that proprietary software providers require inflexible contracts. By turning to open source options, government agencies can enjoy the same or a better level of service without the need to abide by significant restrictions.

Real and imagined savings
The source noted that most of the debate swirling around open source versus proprietary solutions concerns cost. Many open source software advocates compare these solutions to generic medicine, while likening proprietary offerings to name-brand medications. The former will prove just as effective as the latter, but at a small fraction of the price.

Of course, as Government Computing acknowledged, this is far from a perfect comparison. There are other factors which can add to the cost of open source adoption, such as training, integration, governance, security, cloud adoption and more.

"Ignoring these in a simple view that open source is always cheaper will probably create a range of new costs," the source explained.

However, this does not mean the cost benefits of open source adoption are imagined. On the contrary, many agree that open source has the potential to deliver significant savings. Yet it is important to keep in mind that these rewards are possible only if open source is approached in a cautious, knowing way. This is key for open source solution providers, as well.

"The challenge for open source providers is to be open about total cost of ownership – the idea that open source is 'free' in a corporate environment is usually neither helpful nor true. Honesty about the cost economics will also help to promote the real potential of open source in a corporate environment," Government Computing explained.

Flexibility benefits
The bigger advantage provided by open source software, Government Computing asserted, is the greater flexibility it provides for users.

"The challenge for proprietary suppliers is to be aware that they are on 'thin ice,'" the source explained. "Inflexible and aggressive contracts, or significant unexpected price increases will increase the appeal of open source tools, especially in the public sector."

Already, this process is well underway. The source noted that the public, as well as private, sector now regularly uses open source software. Such offerings give users a much greater degree of control over how their software is implemented and utilized, which is a powerful incentive to any IT team.

This view coincides with the perspective offered by industry expert David Wheeler. In a recent conversation, Wheeler emphasized that the U.S. government has significantly increased its embrace of open source software solutions in recent years. Agencies that until recently had virtually no open source involvement now use a range of offerings, including Red Hat Enterprise Linux, PostgreSQL and others. Departments now leveraging open source include NASA, the Consumer Financial Protection Bureau and the White House.

As open source becomes increasingly popular, it’s critical for teams to understand the risks, costs, and level of effort needed to incorporate code safely and effectively. Developing the right open source policy is the first step toward bringing a consistent, repeatable process to open source management.

Learn more:
• Build your own policy using our Open Source Policy Builder
• Understand the four strategies needed to reduce your open source risk

Categories: Companies

How to Spruce up your Evolved PHP Application

Do you have a PHP application running and have to deal with inconveniences like lack of scalability, complexity of debugging, and low performance? That’s bad enough! But trust me: you are not alone! I’ve been developing Spelix, a system for cave management, for more than 20 years. It originated from a single user DOS application, […]

The post How to Spruce up your Evolved PHP Application appeared first on Compuware APM Blog.

Categories: Companies
