Feed aggregator

Using HPC to fight the flu

Kloctalk - Klocwork - Thu, 07/24/2014 - 19:00

Influenza is one of the most serious recurring health risks faced by United States citizens, and the world at large, every year. In a particularly bad season, the flu can wreak a tremendous amount of havoc. According to the CDC, between the 1976-1977 and 2006-2007 seasons, annual deaths from the flu ranged from as few as 3,000 to as many as 49,000.

To better understand influenza and how it behaves and travels, scientists from a number of different teams are now utilizing high performance computing tools to study the virus, HPCWire reported.

Flu fighters
The news source reported that the scientists hail from the Texas Advanced Computing Center at the University of Texas at Austin, the San Diego Supercomputer Center at the University of California, San Diego, the University of Chicago Research Computing Center and the Department of Defense High Performance Computing Center. The team's research centers on how the influenza virus replicates.

The source noted that the most common treatment for influenza A currently is Amantadine. However, this treatment's effectiveness is diminishing due to viral mutations. Consequently, the researchers and other scientists around the globe are now working on alternative methods of defeating influenza.

HPC plays a key role in this capacity. The team is using four HPC systems to better simulate the complex process of proton transfer through the M2 protein channel within the influenza virus. The combined power of these HPC systems delivers an unprecedented degree of detail for multiscale simulations, according to the news source. For the first time, researchers were able to computationally describe the connection between influenza mutations on the M2 protein and increasing drug resistance.

"Computer simulation, when done very well, with all the right physics, reveals a huge amount of information that you can't get otherwise," explained Gregory Voth, the Haig P. Papazian Distinguished Service Professor in Chemistry at the University of Chicago and one of the lead researchers. 

According to Peter Preusch of the National Institutes of Health's National Institute for General Medical Sciences, these HPC simulations have the potential to usher in significant progress in the fight against influenza.

"This work helps expand the methods for molecular simulation available to researchers and may eventually lead to new and better drugs to treat influenza infections," said Preusch, the University of Chicago reported.

Scientific HPC
This collaboration is one of many ongoing efforts to leverage HPC capabilities for scientific computing projects. Thanks to its advanced capabilities, HPC is a critical tool for many scientific research initiatives. In particular, HPC's ability to run complex simulations that are stable and accurate is essential for research in numerous areas.

For example, the University of California, Santa Cruz, recently adopted a new HPC solution to improve the school's existing Hyades supercomputer, used primarily to perform astrophysics-related calculations. By running simulations, researchers can use these HPC capabilities to learn about astrophysical phenomena which cannot be studied experimentally.

Stability and accuracy can only be achieved through software that’s been validated and tested – a difficult task for complex HPC systems. That’s why debugging and memory analysis tools that support HPC’s unique environments (multiple processes, multiple GPUs, co-processors, etc.) are key to creating successful simulations.

Categories: Companies

Adding Custom Methods to Data Models with Angular $resource

Sauce Labs - Thu, 07/24/2014 - 18:00

Sauce Labs software developer Alan Christopher Thomas and his team have been hard at work updating our stack. He shared with us some insight into their revised dev process, so we thought we’d show off what he’s done. Read his follow-up post below.

Thanks for your great feedback on this post. Previously we examined three different approaches to modeling data in AngularJS. We’ve since incorporated some of your feedback, so we wanted to share that information here. You can also see the updates we made in our original post.

One of our commenters mentioned a cleaner approach to adding custom methods to $resource models, when the API response allows it, using angular.extend().

In this implementation, we’re imagining an API response that looks like this:

[
  {
    "breakpointed": null,
    "browser": "android",
    "browser_short_version": "4.3",
    ...
  },
  {
    ...
  }
  ...
]

Each of the response objects in the list is a “Job” that contains a whole lot of metadata about an individual job that’s been run in the Sauce cloud.

We want to be able to iterate over the jobs to build a list for our users, showing the outcome of each: “Pass,” “Fail,” etc.

Our template looks something like this:

<table>
    <tr ng-repeat="job in jobs">
        <td>
            {{ job.getResult() }}
        </td>
        <td>
            {{ job.name }}
        </td>
    </tr>
</table>

Note the job.getResult() call. In order to get this convenience, however, we need to be able to attach a getResult() method to each Job returned in the response.

So, here’s what the model looks like, using Angular $resource:

angular.module('job.models', [])
    .factory('Job', ['$resource', function($resource) {
        var Job = $resource('/api/jobs/:jobId', {
            full: 'true',
            jobId: '@id'
        });

        angular.extend(Job.prototype, {
            getResult: function() {
                if (this.status === 'complete') {
                    if (this.passed === null) return "Finished";
                    else if (this.passed === true) return "Pass";
                    else if (this.passed === false) return "Fail";
                }
                else return "Running";
            }
        });

        return Job;
    }]);

Note that since each resulting object returned by $resource is a Job object itself, we can simply extend Job.prototype to include the behavior we want for every individual job instance.
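For instance, here's a hypothetical usage sketch (ours, not from the original post), assuming a controller that already has the Job factory injected:

// Any object $resource resolves is a Job instance, so getResult() is available.
Job.get({ jobId: 'abc123' }).$promise.then(function(job) {
    console.log(job.getResult()); // e.g. "Pass", "Fail" or "Running"
});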

Then, our controller looks like this (revised from the original post to make use of the not-so-obvious promise):

angular.module('job.controllers', [])
    .controller('jobsController', ['$scope', 'Job', function($scope, Job) {
        $scope.loadJobs = function() {
            $scope.isLoading = true;
            Job.query().$promise.then(function(jobs) {
                $scope.jobs = jobs;
                $scope.isLoading = false; // clear the loading flag once results arrive
            });
        };

        $scope.loadJobs();
    }]);

The simplicity of this example makes $resource a much more attractive option for our team’s data-modeling needs, especially since, for simple applications, adding custom behavior like this is straightforward.

Alan Christopher Thomas, Software Developer, Sauce Labs

Categories: Companies

Webinar recap: Static analysis’ role in automotive functional safety

Kloctalk - Klocwork - Thu, 07/24/2014 - 16:16

Last week, we held a joint webinar with QNX Software Systems discussing how static analysis plays a key role in automotive functional safety and ISO 26262 (you can watch the recording here). We had developers, testers, architects, and students attend from all over the world and they all had one interest in common: better delivery of safe automotive software.

We always try to understand our attendees and here’s an interesting result from one of the polls we ran (based on table 9 of ISO 26262-6, which lists methods of design verification for software units):

Which of the following tools/techniques does your company employ in its development?
(multiple answers allowed)

Static code analysis – 47%
Walk-through – 45%
Formal verification – 35%
Control flow analysis – 31%
Semantic code analysis – 27%
Data flow analysis – 22%

While static code analysis is clearly the most popular choice among those concerned with automotive functional safety, the other end of the spectrum, manual walk-through, is popular as well. It seems that relying on your own two eyes is still considered a reliable approach!

We asked two more questions specific to ISO 26262 and received these responses:

Is your organization currently working on a product that will be certified to the ISO 26262 standard?

No – 48%
Prefer not to say – 31%
Yes – 21%

Which ASIL level is your company most concerned with?
(multiple answers allowed)

ASIL C – 48%
ASIL B – 41%
ASIL A – 33%
ASIL D – 33%

While a large number of our attendees weren’t currently working on an ISO 26262 project (or preferred not to say), there’s quite a spread of interest across all the safety levels. This isn’t surprising given that our customers work on a wide range of automotive systems for different types of end products.

Regardless of safety level, Klocwork’s ISO 26262-certified checkers reduce the time and effort required for tool qualification – fast-forward to 24:10 in the webinar to see how.

For more on how Klocwork helps reduce the effort required to achieve ISO 26262 certification, read the following resources:

Software on Wheels: Addressing the Challenges of Embedded Automotive Software (PDF)
Fact sheet: Klocwork automotive overview (PDF)

Categories: Companies

Focus on Automated Testing, Discount for uTesters at UCAAT

uTest - Thu, 07/24/2014 - 15:30

Automation is a sector of software testing that has experienced explosive growth and enterprise investment in recent years. The knowledge needed to specialize in automated testing can be found at industry events like the upcoming 2nd annual User Conference on Advanced Automated Testing (UCAAT) in Munich, Germany, September 16-18, 2014.

The European conference, jointly organized by the ETSI Technical Committee “Methods for Testing and Specification” (TC MTS), QualityMinds, and German Testing Day, will focus exclusively on use cases and best practices for software and embedded testing automation.

The 2014 program will cover topics like agile test automation, model-based testing, test languages and methodologies, as well as web services and the use of test automation in industries like automotive, medical technology, and security, to name a few. Noted participants in the opening session include Dr. Andrej Pietschker (Giesecke & Devrient), Professor Ina Schieferdecker (Free University of Berlin), Markus Becher (BMW), Dr. Heiko Englert (Siemens), and Dr. Alexander Pretschner (Technical University of Munich).

UCAAT 2013, which took place in Paris, attracted 200 participants and included 21 technical presentations held by renowned speakers such as Professor Lionel Briand (University of Luxembourg) and Matthias Rasking (Accenture).

As a special offer to our testing community, you can receive a 5% discount for new registrations to UCAAT. Email testers@utest.com for the special discount code for this and other shows.

Also, be sure to check out the Events calendar for upcoming online and in-person events!

Categories: Companies

Understanding Application Performance on the Network – Part VI: The Nagle Algorithm

In Part V, we discussed processing delays caused by “slow” client and server nodes. In Part VI, we’ll discuss the Nagle algorithm, a behavior that can have a devastating impact on performance and, in many ways, appear to be a processing delay. Common TCP ACK Timing: Beyond being important for (reasonably) accurate packet flow diagrams, […]

The post Understanding Application Performance on the Network – Part VI: The Nagle Algorithm appeared first on Compuware APM Blog.

Categories: Companies

Code Coverage Metrics That Matter

NCover - Code Coverage for .NET Developers - Thu, 07/24/2014 - 13:07

The first rule of code coverage is that not all code coverage metrics are created equal. In this webinar we discuss three key code coverage metrics that matter: branch coverage, sequence point coverage and the change risk anti-patterns score. In addition, we cover how all three can work together to provide you with a more comprehensive understanding of your code.

This webinar covers how each of the metrics is calculated so that you can use each of them on a more informed basis. In addition, we discuss how they are useful in managing both code coverage and risk, and how they provide measurable feedback on the overall riskiness of your code base.

Code Coverage Metrics That Matter

Welcome to the NCover webinar on Code Coverage Metrics That Matter. Today we are going to discuss several key code coverage metrics that you can use in the development of your .NET applications to improve overall code quality and the reliability and viability of your .NET applications. After we’ve discussed the metrics, we are going to show you how you can immediately find and start using those metrics within the NCover user interface. One thing that is important to keep in mind when you think about code coverage is the fact that not all code coverage metrics are created equal. At NCover, we believe it is important not only to find the metrics that are most useful in maintaining your code, but also to understand how those metrics are calculated.

Okay, let’s start with how we measure success as it relates to code coverage within the NCover interface. For us, the most important metric in measuring the success of your testing is branch coverage. Branch coverage represents the percentage of individual code segments, or branches, that were covered during the testing of an application. When we refer to a “branch” we are referring to a segment of code that has exactly one entry point and one exit point. For example, if you are looking at a very simple if / else statement, that would have two distinct branches; the first being if the condition was met and the second being if it was not. We feel that branch coverage is a good measurement of the success of your testing strategy because it lets you know, of the potential branches or paths that your software may take, how many have been exercised.

So if branch coverage is how we measure success, then we want to look at how we achieve success, how we increase our total overall branch coverage. The metric we use for this is sequence point coverage. Sequence point coverage is the percentage of sequence points that were covered during the testing of the application. As we will show you in a little bit, one of the most common ways to visualize sequence point coverage is to drill down through the NCover GUI all the way to the source code, where we can see, through source code highlighting, which sequence points have been covered in a particular application.

If you have had any experience with code coverage metrics before, those are probably two metrics you are relatively used to seeing and understand pretty well. However, that’s only half of the equation. At NCover, we look not only at how well you have tested your code but also at the risk of maintaining that code over time. The metric we use to assess that risk is the change risk anti-patterns score, which scores the amount of uncovered code against the complexity of that code. Since the change risk anti-patterns score reflects risk, in general you want to keep this score as low as possible. Achieving this requires a balance between two variables: on the one hand you are trying to increase your total code coverage, and on the other you are trying to decrease the total complexity of your code base. It’s a fairly well accepted fact that the more complex your code base is, the larger the probability that you will have unintended consequences when you make changes to that code base, which means higher support costs, higher development costs and a higher total cost of software over a period of time.

Identifying the right metrics and using them effectively within your organization can have several important benefits: aligning teams across shared, common goals; creating a sense of transparency, so that as you manage the balance between increased testing and reduced risk you know exactly where to focus your efforts; and, finally, improving your overall code quality, which, as we mentioned before, is really about using finite resources to deliver the best applications possible. Unfortunately, using oversimplified metrics, or using them improperly with a failure to really understand where they are coming from, can cause several negative consequences, including hiding critical issues within your code base, perhaps associated with a lack of testing or increased complexity, which can lead to a sense of false confidence and, ultimately, waste time, energy and valuable resources.

Alright, let’s take a look within NCover and see where you can find these metrics and how you can start using them in your organization. Whether you are using NCover Code Central within the build environment or to aggregate coverage across a team, or you are using NCover Desktop or Bolt within the development environment as part of your development process, the approach is the same, but we will briefly walk you through both scenarios.

Here, we are looking at the dashboard, which is an aggregation of the coverage metrics, with trend charts, across all of our open projects. As you can see, we prominently display branch coverage, sequence point coverage and a variety of the complexity metrics, including the change risk anti-patterns score. When we drill into a particular project, we can see each of the code coverage metrics across all of our executions, over time or across multiple machines. All of the metrics are represented with either green or red bars, and these are based on user-defined thresholds that you can set across all of the metrics. For branch coverage, you can quickly see your total branch coverage as well as the total number of branch points and those that have been covered. You can see the same for sequence point coverage: your total percent, as well as the total number of covered and total available sequence points. In order to better manage the risk of your code, we provide you with a maximum change risk anti-patterns score (taken across all of the methods within that particular set of code), as well as the number of methods within that set of code whose change risk anti-patterns score is in excess of the user-defined acceptable level.

Although we provide you with a robust set of code coverage metrics, we also make it very easy for you and your team to select the metrics you want to focus on. By selecting “settings,” you can quickly identify which metrics you want displayed and which metrics you want hidden. It’s worth noting that even though you may choose to hide a particular metric, the underlying data is still available should you decide you want to look at it later. As you continue to drill down in the NCover interface to the method level, these metrics become even more useful, as you can look at trends and complexity across all of your methods and decide where to focus additional development or testing efforts.

By drilling down to the source code level, you can quickly identify those areas of code that have and have not been tested. By dragging your mouse either over the actual source code, or over the icons representing the individual sequence points, you can quickly identify how your code flows and which branches still require testing. For developers working within Visual Studio, we extend the power of NCover’s solution directly into the Visual Studio interface through Bolt, our integrated test runner and code coverage solution. Within this interface, you’ll find the same metrics that you find within the NCover Code Central and Desktop user interfaces, again allowing you to quickly identify those segments of code that either represent high risk or require additional testing. By drilling down to the source code level, you can again look, through source code highlighting, at exactly which sequence points and branch points have been tested. Just a quick note: if you are using NCover Bolt in conjunction with NCover Desktop, all of your code coverage data can seamlessly integrate with your project, allowing multiple team members to aggregate coverage across a total code set. Regardless of the type of .NET application you are developing, or the size of your team, at NCover, we make code coverage simple. We offer free, 21-day trials of all of our code coverage solutions. All you need to do to get started is visit us at www.ncover.com.
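To make the branch coverage idea concrete, here is a small illustrative example of our own (not from the webinar). A single test calling Classify(72) executes every statement, yet the false branch of the if is never taken, so statement-oriented metrics look complete while branch coverage reveals the untested path:

// One test with score = 72 executes every statement below,
// but exercises only one of the two branches of the if.
public static string Classify(int score)
{
    string label = "needs work";
    if (score >= 50)
        label = "passing"; // the implicit false branch (skipping this line) is never tested
    return label;
}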


The post Code Coverage Metrics That Matter appeared first on NCover.

Categories: Companies

Networking is important–or what we are really not good at

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

Many of us software developers work with computers to avoid contact with people. To be fair, we have all had our fair share of clients who would not understand why we couldn’t draw red lines with green ink. I understand why we would rather stay away from people who don’t understand what we do.

However… (there’s always a “however”) as I recently started my own business, I’ve really started to understand the meaning of building your network and staying in contact with people. While being an MVP has always led me to meet great people all around Montreal, I really saw the value when a very good contact of mine introduced me to one of my first clients. He knew they needed someone with my skills and introduced me directly, skipping all the queues.

You can’t really ask for more. My first client was a big company. You can’t get in there without being a big company that won a bid, being someone renowned, or having the right contacts.

You can’t be the big company, and you might never be renowned, but you can definitely work on your contacts and expand the number of people you know.

So what can you do to expand your contacts and grow your network?

Go to user groups

This is killing two birds with one stone. First, you learn something new. It might be boring if you already know everything, but let me give you a nice trick.

Arrive early and chat with people. If you are new, ask them if they are new too, ask them about their favourite presentation (if any), where they work, whether they like it, etc. Boom. First contact is done. You can stop sweating.

If this person has been here more than once, s/he probably knows other people you can be introduced to.

Always have business cards

I’m a business owner now. I need to have cards. You might think of yourself as a low-importance developer, but if you meet people and impress them with your skills… they will want to know where you hang out.

If your business doesn’t have $50 to put on you, make your own! VistaPrint makes those “networking cards” where you can just input your name, email, position, social networks, whatever, and you can get 500 for less than $50.

Everyone in the business should have business cards. Especially those that makes the company money.

Don’t expect anything

I know… giving out your card sounds like you want to sell something to people or that you want them to call you back.

When I give my card, it’s in the hope that when they get home later that night and see my card, they will think “Oh yeah, it’s that guy I had a great conversation with!”. I don’t want them to think I’m there to sell them something.

My go-to phrase when I give it to them is “If you have any question or need a second advice, call me or email me! I’m always available for people like you!”

And I am.

Follow-up after giving out your card

When you give your card and receive another in exchange (you should!), send them a personal email. Tell them about something you liked from the conversation you had, and ask if you can add them on LinkedIn (always good). This may seem simple to a salesman, but we developers often forget that an email the day after has a very good impact.

People will remember you for writing to them personally with specific details from the conversation.

Yes. That means no “copy/paste” emails. You’ve got to make it personal.

If the other person doesn’t have a business card, take the time to note their email and full name (bring a pad!).

Rinse and repeat

If you keep on doing this, you should start to build a very strong network of developers in your city. If you have a good profile, recruiters should also start to notice you. Especially if you added all those people on LinkedIn.

It’s all about incremental growth. You won’t be a superstar tomorrow (and neither will I), but by working at it, you might end up finding your next job through weird contacts that you met only once but who were impressed by who you are.

Conclusion

So here’s the Too Long; Didn’t Read version. Go out. Get business cards. Give them to everyone you meet. Your intention is to help them, not to sell them anything. Repeat often.

But in the long run, it’s all about getting out there. If you want a more detailed read of what real networking is about, you should definitely read Work the Pond by Darcy Rezac. It’s a very good read.

Categories: Blogs

Massive Community Update 2014-07-04

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So here I go again! We have Phil Haack explaining how he handles tasks in his life with GitHub, James Chambers’ series on MVC and Bootstrap, Visual Studio 2013 Update 3, a new MVC+WebAPI release and more!

In particular, don’t miss this awesome series by Tomas Jansson about CQRS. He did an awesome job and I think you guys need to read it!

So beyond this, I’m hoping you guys have a great day!

Must Read

GitHub Saved My Marriage - You've Been Haacked (haacked.com)

James Chamber’s Series

Day 21: Cleaning Up Filtering, the Layout & the Menu | They Call Me Mister James (jameschambers.com)

Day 22: Sprucing up Identity for Logged In Users | They Call Me Mister James (jameschambers.com)

Day 23: Choosing Your Own Look-And-Feel | They Call Me Mister James (jameschambers.com)

Day 24: Storing User Profile Information | They Call Me Mister James (jameschambers.com)

Day 25: Personalizing Notifications, Bootstrap Tables | They Call Me Mister James (jameschambers.com)

Day 26: Bootstrap Tabs for Managing Accounts | They Call Me Mister James (jameschambers.com)

Day 27: Rendering Data in a Bootstrap Table | They Call Me Mister James (jameschambers.com)

NodeJS

Nodemon vs Grunt-Contrib-Watch: What’s The Difference? (derickbailey.com)

.NET

Update 3 Release Candidate for Visual Studio 2013 (blogs.msdn.com)

Test-Driven Development with Entity Framework 6 -- Visual Studio Magazine (visualstudiomagazine.com)

ASP.NET

Announcing the Release of ASP.NET MVC 5.2, Web API 2.2 and Web Pages 3.2 (blogs.msdn.com)

Using Discovery and Katana Middleware to write an OpenID Connect Web Client | leastprivilege.com on WordPress.com (leastprivilege.com)

Project Navigation and File Nesting in ASP.NET MVC Projects - Rick Strahl's Web Log (weblog.west-wind.com)

ASP.NET Session State using SQL Server In-Memory (blogs.msdn.com)

CQRS Series (code on GitHub)

CQRS the simple way with eventstore and elasticsearch: Implementing the first features (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Implementing the rest of the features (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Time for reflection (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Build the API with simple.web (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Integrating Elasticsearch (blog.tomasjansson.com)

CQRS the simple way with eventstore and elasticsearch: Let us throw neo4j into the mix (blog.tomasjansson.com)

Ending discussion to my blog series about CQRS and event sourcing (blog.tomasjansson.com)

Architecture

Michael Feathers - Microservices Until Macro Complexity (michaelfeathers.silvrback.com)

Windows Azure

Azure Cloud Services and Elasticsearch / NoSQL cluster (PAAS) | I'm Pedro Alonso (www.pedroalonso.net)

NuGet

Monitoring nuget.org (blog.nuget.org)

Search Engines (ElasticSearch, Solr, etc.)

Fast Search and Analytics on Hadoop with Elasticsearch | Hortonworks (hortonworks.com)

Elasticsearch.org This Week In Elasticsearch | Blog | Elasticsearch (www.elasticsearch.org)

Solr vs. ElasticSearch: Part 1 – Overview | Sematext Blog on WordPress.com (blog.sematext.com)

Categories: Blogs

Community Update 2014-06-25

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So not everything is brand new, since I did my last community update only 8 days ago. What I’d point to first is the combination of EventStore and ElasticSearch in a great article by Tomas Jansson.

It’s definitely a must read and I highly recommend it. Of course, don’t miss the series by James Chambers on Bootstrap and MVC.

Enjoy all the reading!

Must Read

Be more effective with your data - ElasticSearch | Raygun Blog (raygun.io)

Your Editor should Encourage You - You've Been Haacked (haacked.com)

Exploring cross-browser math equations using MathML or LaTeX with MathJax - Scott Hanselman (www.hanselman.com)

CQRSShop - Tomas Jansson (blog.tomasjansson.com) – Link to a tag that contains 3 blog post that are must read.

James Chambers Series

Day 18: Customizing and Rendering Bootstrap Badges | They Call Me Mister James (jameschambers.com)

Day 19: Long-Running Notifications Using Badges and Entity Framework Code First | They Call Me Mister James (jameschambers.com)

Day 20: An ActionFilter to Inject Notifications | They Call Me Mister James (jameschambers.com)

Web Development

Testing Browserify Modules In A (Headless) Browser (derickbailey.com)

ASP.NET

Fredrik Normén - Using Razor together with ASP.NET Web API (weblogs.asp.net)

A dynamic RequireSsl Attribute for ASP.NET MVC - Rick Strahl's Web Log (weblog.west-wind.com)

Versioning RESTful Services | Howard Dierking (codebetter.com)

ASP.NET vNext Routing Overview (blogs.msdn.com)

.NET

Exceptions exist for a reason – use them! | John V. Petersen (codebetter.com)

Nuget Dependencies and latest Versions - Rick Strahl's Web Log (weblog.west-wind.com)

Trying Redis Caching as a Service on Windows Azure - Scott Hanselman (www.hanselman.com)

Categories: Blogs

Massive Community Update 2014-06-17

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So as usual, here’s what’s new since a week ago.

Ever had problems downloading SQL Server Express? Too many links, download managers, version selection, etc.? Fear not. Hanselman to the rescue. I’m also sharing with you the IE Developer Channel, which you should definitely take a look at.

We also continue to follow the series by James Chambers.

Enjoy your reading!

Must Read

Download SQL Server Express - Scott Hanselman (www.hanselman.com)

Announcing Internet Explorer Developer Channel (blogs.msdn.com)

Thinktecture.IdentityManager as a replacement for the ASP.NET WebSite Administration tool - Scott Hanselman (www.hanselman.com)

NodeJS

Why Use Node.js? A Comprehensive Introduction and Examples | Toptal (www.toptal.com)

Building With Gulp | Smashing Magazine (www.smashingmagazine.com)

James Chambers Series

Day 12: | They Call Me Mister James (jameschambers.com)

Day 13: Standard Styling and Horizontal Forms | They Call Me Mister James (jameschambers.com)

Day 14: Bootstrap Alerts and MVC Framework TempData | They Call Me Mister James (jameschambers.com)

Day 15: Some Bootstrap Basics | They Call Me Mister James (jameschambers.com)

Day 16: Conceptual Organization of the Bootstrap Library | They Call Me Mister James (jameschambers.com)

ASP.NET vNext

Owin middleware (blog.tomasjansson.com)

Imran Baloch's Blog - K, KVM, KPM, KLR, KRE in ASP.NET vNext (weblogs.asp.net)

Jonathan Channon Blog - Nancy, ASP.Net vNext, VS2014 & Azure (blog.jonathanchannon.com)

Back To the Future: Windows Batch Scripting & ASP.NET vNext | A developer's blog (blog.tpcware.com)

Dependency Injection in ASP.NET vNext (blogs.msdn.com)

.NET

Here Come the .NET Containers | Wintellect (wintellect.com)

Architecture and Methodology

BoundedContext (martinfowler.com)

UnitTest (martinfowler.com)

Individuals, Not Groups | 8th Light (blog.8thlight.com)

Open Source

Download Emojis With Octokit.NET - You've Been Haacked (haacked.com)

ElasticSearch

Elasticsearch migrations with C# and NEST | Thomas Ardal (thomasardal.com)

Categories: Blogs

Massive Community Update 2014-06-12

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So I’ve been doing a bit of an experiment. I’ve noticed that these community updates are normally rather small, so I’ve waited a whole week before posting something new to see if we get better content.

I like the “Massive Community Update” for the number of links it provides and for the occasion to put James Chambers’ whole series in perspective.

If you’re still thinking about it… read it. It’s worth it.

Visual Studio “14” CTP

TWC9: Visual Studio "14" CTP Episode (channel9.msdn.com)

NDC Oslo 2014

0-layered architecture on Vimeo (vimeo.com)

Monitoring your app with Logstash and Elasticsearch on Vimeo (vimeo.com)

James Chambers Series

Day 7: Semi-Automatic Bootstrap – Display Templates | They Call Me Mister James (jameschambers.com)

Day 8: Semi-Automatic Bootstrap – Editor Templates | They Call Me Mister James (jameschambers.com)

Day 9: Templates for Complex Types | They Call Me Mister James (jameschambers.com)

Day 10: HtmlHelper Extension Methods | They Call Me Mister James (jameschambers.com)

Day 11: Realistic Test Data for Our View | They Call Me Mister James (jameschambers.com)

Web Development

NDC 2014: SOLID CSS/JavaScript & Bower talks | Anthony van der Hoorn (blog.anthonyvanderhoorn.com)

Browserify: My New Choice For Modules In A Browser / Backbone App (derickbailey.com)

.NET

Final Thoughts on Nuget and Some Initial Impressions on the new KVM | The Shade Tree Developer on WordPress.com (jeremydmiller.com)

C# - A C# 6.0 Language Preview (msdn.microsoft.com)

ASP.NET

Host AngularJS (Html5Mode) in ASP.NET vNext (geekswithblogs.net)

ASP.NET: Building Web Application Using ASP.NET and Visual Studio (channel9.msdn.com)

jaywayco » Is ASP.Net vNext The New Node.js (blog.jaywayco.co.uk)

Learn How to Build a Modern Web Application with Client Side JavaScript and ASP.NET (channel9.msdn.com)

Fire and Forget on ASP.NET (blog.stephencleary.com)

ASP.NET vNext Moving Parts: OWIN (whereslou.com)

POCO controllers in ASP.NET vNext - StrathWeb (www.strathweb.com)

Jon Galloway - A 30 Minute Look At ASP.NET vNext (weblogs.asp.net)

Miscellaneous

FIXED: Blue Screen of Death (BSOD) 7E in HIDCLASS.SYS while installing Windows 7 - Scott Hanselman (www.hanselman.com)

Guide to Freeing up Disk Space under Windows 8.1 - Scott Hanselman (www.hanselman.com)

GitHub for Windows 2.0 - You've Been Haacked (haacked.com)

Simplified Setup and Use of Docker on Microsoft Azure | MS OpenTech (msopentech.com)

Categories: Blogs

Community Update 2014-06-04 ASP.NET vNext, @CanadianJames MVC Bootstrap series and what we learned from C++

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So the big news is that Visual Studio “14” has actually reached CTP. Of course, this is not the final name, and it is very temporary.

If you want to install it, I suggest booting up a VM locally or on Windows Azure.

Enjoy!

Visual Studio “14”

Visual Studio "14" CTP Downloads (www.visualstudio.com)

Announcing web features in Visual Studio “14” CTP (blogs.msdn.com)

Visual Studio "14" CTP (blogs.msdn.com)

ASP.NET vNext in Visual Studio “14” CTP (blogs.msdn.com)

Morten Anderson - ASP.NET vNext is now in Visual Studio (www.mortenanderson.net)

James Chambers MVC/Bootstrap Series

Day 4: Making a Page Worth a Visit | They Call Me Mister James (jameschambers.com)

Web Development

To Node.js Or Not To Node.js | Haney Codes .NET (www.haneycodes.net)

ASP.NET

aburakab/ASP-MVC-Tooltip-Validation · GitHub (github.com) – Translate MVC errors to Bootstrap notification

Download Microsoft Anti-Cross Site Scripting Library V4.3 from Official Microsoft Download Center (www.microsoft.com)

ASP.NET Web API parameter binding part 1 - Understanding binding from URI (www.strathweb.com)

Cutting Edge - External Authentication with ASP.NET Identity (msdn.microsoft.com)

Forcing WebApi controllers to output JSON (blog.bjerner.dk)

Videos

What – if anything – have we learned from C++? (channel9.msdn.com)

Search Engine

Elasticsearch.org Elasticsearch 1.2.1 Released | Blog | Elasticsearch (www.elasticsearch.org)

Elasticsearch.org Marvel 1.2 Released | Blog | Elasticsearch (www.elasticsearch.org)

Dealing with human language (www.elasticsearch.org)

Categories: Blogs

Late Community Update 2014-06-02 REST API, Visual Studio Update 3, data indexing, Project Orleans and more

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So I was at the MVP Open Days and I’ve missed a few days. It seems that my fellow MVP James Chambers has started a great initiative about exploring Bootstrap and MVC with lots of tips and tricks. Do not miss out!

Otherwise, this is your classic “I’ve missed a few days so here are 20,000 interesting links that you must read” kind of day.

Enjoy!

Must Read

AppVeyor - A good continuous integration system is a joy to behold - Scott Hanselman (www.hanselman.com)

This URL shortener situation is officially out of control - Scott Hanselman (www.hanselman.com)

James Chambers Bootstrap and MVC series

Day 0: Boothstrapping Mvc for the Next 30 Days | They Call Me Mister James (jameschambers.com)

Day 1: The MVC 5 Starter Project | They Call Me Mister James (jameschambers.com)

Day 2: Examining the Solution Structure | They Call Me Mister James (jameschambers.com)

Day 3: Adding a Controller and View | They Call Me Mister James (jameschambers.com)

Web Development

How much RESTful is your API | Bruno Câmara (www.bfcamara.com)

Data-binding Revolutions with Object.observe() - HTML5 Rocks (www.html5rocks.com)

ASP.NET

ASP.NET Moving Parts: IBuilder (whereslou.com)

Supporting only JSON in ASP.NET Web API - the right way - StrathWeb (www.strathweb.com)

Shamir Charania: Hacky In Memory User Store for ASP.NET Identity 2.0 (www.shamirc.com)

.NET

Missing EF Feature Workarounds: Filters | Jimmy Bogard's Blog (lostechies.com)

Visual Studio/Team Foundation Server 2013 Update 3 CTP1 (VS 2013.3.1 if you wish) (blogs.msdn.com)

TWC9: Visual Studio 2013 Update 3 CTP 1, Code Map, Code Lens for Git and more... (channel9.msdn.com)

.NET 4.5 is an in-place replacement for .NET 4.0 - Rick Strahl's Web Log (weblog.west-wind.com)

ASP.NET - Topshelf and Katana: A Unified Web and Service Architecture (msdn.microsoft.com)

Windows Azure

Episode 142: Microsoft Research project Orleans simplify development of scalable cloud services (channel9.msdn.com)

Tool

JSON to CSV (konklone.io)

Search Engines

The Absolute Basics of Indexing Data | Java Code Geeks (www.javacodegeeks.com)

Categories: Blogs

Incompatibility between Nancy and Superscribe

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 05:16

So I had the big idea of building an integration of Nancy and Superscribe and trying to show how to do it.

Sadly, this is not going to happen.

Nancy doesn’t treat routing as a first-class citizen like Superscribe does, and it doesn’t allow interaction with routing middleware. Nancy has its own packaged routing and will not allow Superscribe to provide it with the URL.

Nancy does work with Superscribe, but you have to hard-code the URL inside the NancyModule. So if you update your Superscribe graph URL, Nancy will not respond on the new URL unless you change the hardcoded string.

I haven’t found a solution yet but if you do, please let me know!

Categories: Blogs

Skills Matrix & Development Plan - Template Walkthrough

Yet another bloody blog - Mark Crowther - Thu, 07/24/2014 - 00:33
One thing we'll get asked at some point is to assess the skills and competencies of the test team. To do that, we need to understand what those skills and competencies actually are and how we're going to assess them. We also need to decide what we're going to do with the information we gather.

Skills and competencies come in many shapes and forms. They draw on everything from the hard learning of a team member's study through to raw experience gained over many years and projects delivered. As such, we need to agree how to group them, then break them down into our Skills Matrix.



YouTube Channel: [WATCH, RATE, SUBSCRIBE] http://www.youtube.com/user/Cyreath


In the Skills Matrix on the site, we have the following examples:

  • Technical - Tools and Technology
  • Testing
  • Application

Clearly you could break these down in many ways, but these are a good start. Under each category we have entered specific examples, such as:


  • Tools: ALM, UFT, Jira, Toad, PuTTY


Technology is more general and could include scripting languages, protocols or maybe servers and operating systems. As with all templates, it provides a guide but it's up to you to interpret and apply it to your unique testing or management problem.

Enumeration
In order for the team to be ranked (or rank themselves), we need to understand what those ranks are and what 'value' we're assigning. On the About tab, you'll see this has been defined as:

  Level 1 - No knowledge: No practical, working knowledge; should be able to use if provided clear guidance
  Level 2 - Awareness: Can work with existing solutions and practices; understands what to do but perhaps not fully why
  Level 3 - Proficiency: Can maintain and provide minor improvements; notable skill in some areas
  Level 4 - Competency: Full understanding of existing solutions and practices required for day-to-day work
  Level 5 - Expertise: Able to critically assess and improve on current use and build future capability
Clear definitions are essential, but in no way perfect. Use these as a guide but encourage the team not to labour too much over them.

A word of warning...
When rolling out the Skills Matrix and asking the team to rank themselves, the first question will be “Why?”. It isn't unreasonable to expect that you'll spook the team into wondering what it might mean to rank low on the items you want to assess. After all, you wouldn't be getting them to complete it if it wasn't relevant.

Be sure to reassure them that this is to help identify the skill base of the team, to make assignment of testing tasks more effective, and to identify ways in which the team members can be trained, and so increase the team's capability.

Professional Development Planning
You would do well to introduce a strong process of review and assessment of the team, before you roll out the Skills Matrix.

To help with this, grab a copy of the PDP Scratch Pad template and have a read through of the Developing the Team paper to learn more about implementing an appraisal process, both are on the main site.

Mark.

Liked this post? Say thanks by following the blog or subscribing to the YouTube Channel!



Categories: Blogs

Seeking the Light – A question from a recent TDD training attendee

James Grenning’s Blog - Wed, 07/23/2014 - 21:51

Here is a good question, and my reply, from a recent attendee of my Test-Driven Development for Embedded C training.

Hi James,

As I work more with TDD, one of the concepts I am still struggling to grasp is how to test “leaf” components that touch real hardware. For example, I am trying to write a UART driver. How do I test that using TDD? It seems like to develop/write the tests, I will need to write a fake UART driver that doesn’t touch any hardware. Let’s say I do that. Now I have a really nice TDD test suite for UART drivers. However, I still need to write a real UART driver…and I can’t even run the TDD tests I created for it on the hardware. What value am I getting from taking the TDD approach here?

I feel like for low-level, hardware touching stuff you can’t really apply TDD. I understand if I didn’t have the hardware I could write a Mock, but in my case I have the hardware so why not just write the real driver?

I am really confused about this…and so are my co-workers. Can you offer any words of wisdom to help us see the light?

Thanks!

Seeking the Light

Hi Seeking the Light

I am happy to help. Thanks for the good question.

Unit tests and integration tests are different. We focussed on unit testing in the class. You test-drove the flash driver Tuesday afternoon. That showed you how to test-drive a device driver from the spec. You mocked out IORead and IOWrite, not the flash driver. You test-drove the flash driver so that when you go to the hardware you have code that is doing what you think it is supposed to do.

The unit tests you write with mock IO are not meant to run with the real IO device, but with the fake versions of IORead and IOWrite. You could run the test suite on the real hardware, but the unit tests would still use mock IO.

I think the flash driver exercise illustrated the value. Pretty much everyone that does the flash driver exercise cannot get the ready loop right without several attempts. Most end up with an infinite loop, or a loop that does not run at all. With the TDD approach, we discover logic mistakes like that during off-target TDD. We want to find logic mistakes during test-driving because they are easy to identify and fix with the fast feedback TDD provides. Finding the problem on-target with a lot of other code (that can be wrong) is more difficult and time consuming. If your driver's ready check resulted in an infinite loop, that can be hard to find. Maybe your watchdog timer will keep resetting the board as you hunt for the problem. Bottom line, it is cheaper to find those mistakes with TDD.
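As a concrete illustration, here is a minimal sketch of that ready-loop idea in C, with hypothetical names (IORead, the register addresses and the test harness are ours, not from the course material). The production loop polls a status register through IORead; the unit test links against a scripted fake so the loop's logic can be exercised off-target:

#include <assert.h>
#include <stdint.h>

#define UART_STATUS_REG 0x4000u  /* hypothetical register address */
#define UART_READY_BIT  0x01u

uint8_t IORead(uint32_t addr);   /* the real version touches hardware */

/* Production code under test: poll until ready, with a bounded loop. */
int uart_wait_ready(int max_polls)
{
    while (max_polls-- > 0)
    {
        if (IORead(UART_STATUS_REG) & UART_READY_BIT)
            return 1;            /* ready */
    }
    return 0;                    /* timed out */
}

/* Hand-crafted fake: returns scripted status values, no hardware needed. */
static uint8_t fake_status[4];
static int fake_index;

uint8_t IORead(uint32_t addr)
{
    (void)addr;
    return fake_status[fake_index++];
}

int main(void)
{
    /* Not ready twice, then ready: the loop must poll exactly three times. */
    fake_status[0] = 0x00;
    fake_status[1] = 0x00;
    fake_status[2] = UART_READY_BIT;
    fake_index = 0;
    assert(uart_wait_ready(10) == 1);
    assert(fake_index == 3);
    return 0;
}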

TDD can’t find every problem. What if you were wrong about which bit was the ready bit? An integration test could find it. An integration test would use the real UART driver with the real IORead and IOWrite functions. These tests make sure that the driver works with the real hardware. They are different from the unit tests and are worth writing. You could put a loopback connector on your UART connector. Your integration test could send and receive test data over the loopback. If your driver was looking at the wrong bit for the ready check, you would still have an infinite loop, but that happens only if you misread the spec. You’d have to find that mistake via review or integration test.

An integration test may be partially automated. You don’t need to run these so often, so partial automation should be OK. You would only rerun them when you touch the driver or are doing a release. (Loopback is probably better in this case, as it can run unattended.) So the test might output a string to a terminal and wait for a string to be entered. Depending on the other signals your driver supports, you may want to break out and control those signals in a physical test harness.

An integration test for the flash driver would exercise the flash device through the driver. You might read and write blocks of values to the real flash device. You might do the flash identification sequence. You might protect a block and try to write to it. Your integration test would make sure modification is prevented and generates the right error message. These tests use the real versions of IORead and IOWrite and run on the hardware only. When integration problems are found, solve them and then go back to the unit tests and make them reflect reality. You will know which tests need to be changed, because once the integration problems are fixed, the associated unit test will fail.

Some other words in your question makes me want to talk about a fake UART driver. You will want a fake UART driver when you are test-driving code that uses the UART driver. For example a message processor that waits for a string will be much easier to test if you fake the get_string() function. You can build that fake with mocking or hand crafted, depending upon your needs.
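For example, a hand-crafted fake for that get_string() function might look like this (a sketch with hypothetical names, not from the course material). The test scripts what the message processor “receives”, so no UART is involved:

#include <string.h>

static const char *fake_input = "";

/* The test calls this to script the next "received" string. */
void fake_get_string_will_return(const char *s)
{
    fake_input = s;
}

/* Link this fake in place of the real UART-backed get_string(). */
char *get_string(char *buf, int size)
{
    strncpy(buf, fake_input, size - 1);
    buf[size - 1] = '\0';
    return buf;
}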

All that said, in general the tests above the hardware abstraction layer (the layer your UART driver is part of) are the most valuable tests. They should encompass your product’s intelligence and uniqueness. Hardware comes and then it goes, as do the drivers as the components change. Your business logic has, or should have, a long useful life. The business logic for a successful product should last longer than any hardware platform’s life. Consequently, those tests have a longer useful life too. If I were creating a driver from scratch, I would use TDD because it is the fastest way for me to work, and it results in code that can be safely changed as I discover where my mistakes are.

I hope this helps.

James

Categories: Blogs

Conventional HTML in ASP.NET MVC: Data-bound elements

Jimmy Bogard - Wed, 07/23/2014 - 19:03

Other posts in this series:

We’re now at the point where our form elements replace the existing templates in MVC and extend to the HTML5 form elements, but there’s still something missing. I skipped over the dreaded DropDownList, with its wonky SelectListItem objects.

Drop down lists can be quite a challenge. Typically in my applications I have drop down lists based on a few known sets of data:

  • Static list of items
  • Dynamic list of items
  • Dynamic contextual list of items

The first one is an easy target, solved with the previous post and enums. If a list doesn’t change, just create an enum to represent those items and we’re done.

The latter two are more of a challenge. Typically what I see is attaching those items to the ViewModel or ViewBag, along with the actual model. It’s awkward, and it combines two separate concerns: “What have I chosen” is a different concern than “What are my choices”. Let’s tackle those last two choices separately.

Dynamic lists

Dynamic lists of items typically come from a persistent store. An administrator goes to some configuration screen to configure the list of items, and the user picks from this list.

Common here is that we’re building a drop down list based on set of known entities. The definition of the set doesn’t change, but its contents might.

On our ViewModel, we’d handle this in our form post with an entity:

public class RegisterViewModel
{
    [Required]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    public string ConfirmPassword { get; set; }

    public AccountType AccountType { get; set; }
}

We have our normal registration data, but the user also gets to choose their account type. The values of the account type, however, come from the database (and we use model binding to automatically bind up in the POST the AccountType you chose).

Going from a convention point of view, if we have a model property that’s an entity type, let’s just load up all the entities of that type and display them. If you have an ISession/DbContext, this is easy, but wait, our view shouldn’t be hitting the database, right?

Wrong.

Luckily for us, our conventions let us easily handle this scenario. We’ll take the same approach as our enum drop down builder, but instead of using type metadata for our list, we’ll use our database.

Editors.Modifier<EntityDropDownModifier>();

// Our modifier
public class EntityDropDownModifier : IElementModifier
{
    public bool Matches(ElementRequest token)
    {
        return typeof (Entity).IsAssignableFrom(token.Accessor.PropertyType);
    }

    public void Modify(ElementRequest request)
    {
        request.CurrentTag.RemoveAttr("type");
        request.CurrentTag.TagName("select");
        request.CurrentTag.Append(new HtmlTag("option"));

        var context = request.Get<DbContext>();
        var entities = context.Set(request.Accessor.PropertyType)
            .Cast<Entity>()
            .ToList();
        var value = request.Value<Entity>();

        foreach (var entity in entities)
        {
            var optionTag = new HtmlTag("option")
                .Value(entity.Id.ToString())
                .Text(entity.DisplayValue);

            if (value != null && value.Id == entity.Id)
                optionTag.Attr("selected");

            request.CurrentTag.Append(optionTag);
        }
    }
}

Instead of going to our type system, we query the DbContext to load all entities of that property type. We built a base entity class for the common behavior:

public abstract class Entity
{
    public Guid Id { get; set; }
    public abstract string DisplayValue { get; }
}
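As an illustration, a concrete entity only has to supply its display text (this AccountType class is our hypothetical example, not from the original post):

public class AccountType : Entity
{
    public string Name { get; set; }

    // Shown to the user in the drop down; the inherited Id becomes the option value.
    public override string DisplayValue
    {
        get { return Name; }
    }
}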

This goes into how we build our select element, with the display value shown to the user and the ID as the value. With this in place, our drop down in our view is simply:

<div class="form-group">
    @Html.Label(m => m.AccountType)
    <div class="col-md-10">
        @Html.Input(m => m.AccountType)
    </div>
</div>

And any entity-backed drop-down in our system requires zero extra effort. Of course, if we needed to cache that list we would do so but that is beyond the scope of this discussion.

So we’ve got dynamic lists done, what about dynamic lists with context?

Dynamic contextual list of items

In this case, we actually can’t really depend on a convention. The list of items is dynamic, and contextual. Things like “display a drop down of active users”. It’s dynamic since the list of users will change and contextual since I only want the list of active users.

It then comes down to the nature of our context. Is the context static, or dynamic? If it’s static, then perhaps we can build some primitive beyond just an entity type. If it’s dynamic, based on user input, that becomes more difficult. Rather than trying to focus on a specific solution, let’s take a look at the problem: we have a list of items we need to show, and have a specific query needed to show those items. We have an input to the query, our constraints, and an output, the list of items. Finally, we need to build those items.

It turns out this isn’t really a good choice for a convention – because a convention doesn’t exist! It varies too much. Instead, we can build on the primitives of what is common, “build a name/ID based on our model expression”.

What we wound up with is something like this:

public static HtmlTag QueryDropDown<T, TItem, TQuery>(this HtmlHelper<T> htmlHelper,
    Expression<Func<T, TItem>> expression,
    TQuery query,
    Func<TItem, string> displaySelector,
    Func<TItem, object> valueSelector)
    where TQuery : IRequest<IEnumerable<TItem>>
{
    var expressionText = ExpressionHelper.GetExpressionText(expression);
    ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);
    var selectedItem = (TItem)metadata.Model;

    var mediator = DependencyResolver.Current.GetService<IMediator>();
    var items = mediator.Send(query);
    var select = new SelectTag(t =>
    {
        t.Option("", string.Empty);
        foreach (var item in items)
        {
            var htmlTag = t.Option(displaySelector(item), valueSelector(item));
            if (item.Equals(selectedItem))
                htmlTag.Attr("selected");
        }

        t.Id(expressionText);
        t.Attr("name", expressionText);
    });

    return select;
}

We represent the list of items we want as a query, then execute the query through a mediator. From the results, we specify what should be the display/value selectors. Finally, we build our select tag as normal, using an HtmlTag instance directly. The query/mediator piece is the same as I described back in my controllers on a diet series, we’re just reusing the concept here. Our usage would look something like:

<div class="col-md-10">
    @Html.QueryDropDown(m => m.User,
        new ActiveUsersQuery(),
        t => t.FullName,
        t => t.Id)
</div>

If the query required contextual parameters – not a problem, we simply add them to the definition of our request object, the ActiveUsersQuery class.
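For reference, here is a minimal sketch of what that query and its handler might look like, assuming a MediatR-style IRequest/IRequestHandler pair and an EF DbContext (the names and the DepartmentId property are ours, not from the original post):

public class ActiveUsersQuery : IRequest<IEnumerable<User>>
{
    // Contextual constraints become plain properties on the request object.
    public Guid? DepartmentId { get; set; }
}

public class ActiveUsersQueryHandler : IRequestHandler<ActiveUsersQuery, IEnumerable<User>>
{
    private readonly DbContext _context;

    public ActiveUsersQueryHandler(DbContext context)
    {
        _context = context;
    }

    public IEnumerable<User> Handle(ActiveUsersQuery message)
    {
        IQueryable<User> users = _context.Set<User>().Where(u => u.IsActive);

        if (message.DepartmentId != null)
            users = users.Where(u => u.DepartmentId == message.DepartmentId);

        return users.ToList();
    }
}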

So that’s how we’ve tackled dynamic lists of items. Depending on the situation, it requires conventions, or not, but either way the introduction of the HtmlTag library allowed us to programmatically build up our HTML without resorting to strings.

We’ve tackled the basics of building input/output/label elements, but we can go further. In the next post, we’ll look at building higher-level components from these building blocks that can incorporate things like validation messages.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Appium Bootcamp – Chapter 2: The Console

Sauce Labs - Wed, 07/23/2014 - 17:30

This is the second post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. To read the first post, click here.

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the second of eight posts; a new post will be released each week.

Configuring Appium

In order to get Appium up and running there are a few additional things we’ll need to take care of.

If you haven’t already done so, install Ruby and set up the necessary Appium client libraries (a.k.a. “gems”). You can read a write-up on how to do that here.

Installing Necessary Libraries

Assuming you’ve already installed Ruby and need some extra help installing the gems, here’s what you need to do.

  1. Install the gems from the command-line with gem install appium_console
  2. Once it completes, run gem list | grep appium

You should see the following listed (your version numbers may vary):

appium_console (1.0.1)
appium_lib (4.0.0)

Now you have all of the necessary gems installed on your system to follow along.

An Appium Gems Primer

appium_lib is the gem for the Appium Ruby client bindings. It is what we’ll use to write and run our tests against Appium. It was installed as a dependency of appium_console.

appium_console is where we’ll focus most of our attention in the remainder of this and the next post. It is an interactive prompt that enables us to send commands to Appium in real-time and receive a response. This is also known as a read-eval-print loop (REPL).

Now that we have our libraries setup, we’ll want to grab a copy of our app to test against.

Sample Apps

Don’t have a test app? Don’t sweat it. There are pre-compiled test apps available to kick the tires with. You can grab the iOS app here and the Android app here. If you’re using the iOS app, you’ll want to make sure to unzip the file before using it with Appium.

If you want the latest and greatest version of the app, you can compile it from source. You can find instructions on how to do that for iOS here and Android here.

Just make sure to put your test app in a known location, because you’ll need to reference the path to it next.

App Configuration

When it comes to configuring your app to run on Appium, there are a lot of similarities to Selenium, namely the use of Capabilities (“caps” for short).

You can specify the necessary configurations of your app through caps by storing them in a file called appium.txt.

Here’s what appium.txt looks like for the iOS test app to run in an iPhone simulator:

[caps]
platformName = "ios"
app = "/path/to/UICatalog.app.zip"
deviceName = "iPhone Simulator"

And here’s what appium.txt looks like for Android:

[caps]
platformName = "android"
app = "/path/to/api.apk"
deviceName = "Android"
avd = "training"

For Android, note the use of the avd capability. The "training" value refers to the Android Virtual Device that we configured in the previous post. It is necessary for Appium to auto-launch the emulator and connect to it. This type of configuration is not needed for iOS.

For a full list of available caps, read this.

Go ahead and create an appium.txt with the caps for your app (making sure to place it in the same directory as the Gemfile we created earlier).
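If you don’t have that Gemfile handy, a minimal one would look something like this (the version constraint is our assumption, matching the gem versions listed above):

source 'https://rubygems.org'

gem 'appium_console', '~> 1.0'  # pulls in appium_lib as a dependency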

Launching The Console

Now that we have a test app on our system and configured it to run in Appium, let’s fire up the Appium Console.

First we’ll need to start the Appium server, so let’s head over to the Appium GUI and launch it. It doesn’t matter which radio button is selected (e.g., Android or Apple). Just click the Launch button in the top right-hand corner of the window. After clicking it, you should see some debug information in the center console. Assuming there are no errors or exceptions, the server should be up and ready to receive a session.

After that, go back to your terminal window and run arc (from the same directory as appium.txt). This is the execution command for the Appium Ruby Console. It will take the caps from appium.txt and launch the app by connecting to the Appium server. When it’s done, you will have an emulator window of your app that you can interact with, as well as an interactive command prompt for Appium.

Outro

Now that we have our test app up and running, it’s time to interrogate our app and learn how to interact with it.

Click HERE to go to Chapter 1.

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies
