
Feed aggregator

Conventional HTML in ASP.NET MVC: Data-bound elements

Jimmy Bogard - 2 hours 12 min ago

Other posts in this series:

We’re now at the point where our form elements replace the existing templates in MVC and extend to the HTML5 form elements, but there’s still something missing. I skipped over the dreaded DropDownList, with its wonky SelectListItem objects.

Drop down lists can be quite a challenge. Typically in my applications I have drop down lists based on a few known sets of data:

  • Static list of items
  • Dynamic list of items
  • Dynamic contextual list of items

The first one is an easy target, solved with the previous post and enums. If a list doesn’t change, just create an enum to represent those items and we’re done.
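
For example, a static list could be nothing more than an enum (the type and members here are purely illustrative):

public enum CreditCardType
{
    Visa,
    MasterCard,
    Amex,
    Discover
}

The enum drop down convention from the previous post then renders it with no further work.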

The second two are more of a challenge. Typically what I see is attaching those items to the ViewModel or ViewBag, along with the actual model. It’s awkward, and combines two separate concerns. “What have I chosen” is a different concern than “What are my choices”. Let’s tackle those last two choices separately.

Dynamic lists

Dynamic lists of items typically come from a persistent store. An administrator goes to some configuration screen to configure the list of items, and the user picks from this list.

Common here is that we’re building a drop down list based on a set of known entities. The definition of the set doesn’t change, but its contents might.

On our ViewModel, we’d handle this in our form post with an entity:

public class RegisterViewModel
{
    [Required]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    public string ConfirmPassword { get; set; }

    public AccountType AccountType { get; set; }
}

We have our normal registration data, but the user also gets to choose their account type. The values of the account type, however, come from the database (and we use model binding to automatically bind the AccountType you chose back up in the POST).
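
That entity model binder isn’t shown here, but a minimal sketch of one might look like this (assuming Guid-keyed entities and a DbContext registered with the dependency resolver):

public class EntityModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
        ModelBindingContext bindingContext)
    {
        // Grab the posted value (the entity's ID from the select element).
        var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (value == null || string.IsNullOrEmpty(value.AttemptedValue))
            return null;

        // Load the matching entity so the view model gets a full object back.
        var id = Guid.Parse(value.AttemptedValue);
        var context = DependencyResolver.Current.GetService<DbContext>();

        return context.Set(bindingContext.ModelType).Find(id);
    }
}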

Going from a convention point of view, if we have a model property that’s an entity type, let’s just load up all the entities of that type and display them. If you have an ISession/DbContext, this is easy, but wait, our view shouldn’t be hitting the database, right?

Wrong.

Luckily for us, our conventions let us easily handle this scenario. We’ll take the same approach as our enum drop down builder, but instead of using type metadata for our list, we’ll use our database.

Editors.Modifier<EntityDropDownModifier>();

// Our modifier
public class EntityDropDownModifier : IElementModifier
{
    public bool Matches(ElementRequest token)
    {
        return typeof (Entity).IsAssignableFrom(token.Accessor.PropertyType);
    }

    public void Modify(ElementRequest request)
    {
        request.CurrentTag.RemoveAttr("type");
        request.CurrentTag.TagName("select");
        request.CurrentTag.Append(new HtmlTag("option"));

        var context = request.Get<DbContext>();
        var entities = context.Set(request.Accessor.PropertyType)
            .Cast<Entity>()
            .ToList();
        var value = request.Value<Entity>();

        foreach (var entity in entities)
        {
            var optionTag = new HtmlTag("option")
                .Value(entity.Id.ToString())
                .Text(entity.DisplayValue);

            if (value != null && value.Id == entity.Id)
                optionTag.Attr("selected");

            request.CurrentTag.Append(optionTag);
        }
    }
}

Instead of going to our type system, we query the DbContext to load all entities of that property type. We built a base entity class for the common behavior:

public abstract class Entity
{
    public Guid Id { get; set; }
    public abstract string DisplayValue { get; }
}

This goes into how we build our select element, with the display value shown to the user and the ID as the value. With this in place, our drop down in our view is simply:

<div class="form-group">
    @Html.Label(m => m.AccountType)
    <div class="col-md-10">
        @Html.Input(m => m.AccountType)
    </div>
</div>

And any entity-backed drop-down in our system requires zero extra effort. Of course, if we needed to cache that list we would do so, but that is beyond the scope of this discussion.
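
If caching did become necessary, a minimal sketch inside the modifier might look like this (assuming System.Runtime.Caching; the cache key and expiration are illustrative):

var cache = MemoryCache.Default;
var cacheKey = "entities:" + request.Accessor.PropertyType.FullName;
var entities = cache.Get(cacheKey) as List<Entity>;

if (entities == null)
{
    // Cache miss: hit the database once, then reuse the list for a while.
    entities = context.Set(request.Accessor.PropertyType).Cast<Entity>().ToList();
    cache.Add(cacheKey, entities, DateTimeOffset.Now.AddMinutes(5));
}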

So we’ve got dynamic lists done, what about dynamic lists with context?

Dynamic contextual list of items

In this case, we actually can’t really depend on a convention. The list of items is dynamic, and contextual. Things like “display a drop down of active users”. It’s dynamic since the list of users will change and contextual since I only want the list of active users.

It then comes down to the nature of our context. Is the context static, or dynamic? If it’s static, then perhaps we can build some primitive beyond just an entity type. If it’s dynamic, based on user input, that becomes more difficult. Rather than trying to focus on a specific solution, let’s take a look at the problem: we have a list of items we need to show, and have a specific query needed to show those items. We have an input to the query, our constraints, and an output, the list of items. Finally, we need to build those items.

It turns out this isn’t really a good choice for a convention – because a convention doesn’t exist! It varies too much. Instead, we can build on the primitives of what is common, “build a name/ID based on our model expression”.

What we wound up with is something like this:

public static HtmlTag QueryDropDown<T, TItem, TQuery>(this HtmlHelper<T> htmlHelper,
    Expression<Func<T, TItem>> expression,
    TQuery query,
    Func<TItem, string> displaySelector,
    Func<TItem, object> valueSelector)
    where TQuery : IRequest<IEnumerable<TItem>>
{
    var expressionText = ExpressionHelper.GetExpressionText(expression);
    ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);
    var selectedItem = (TItem)metadata.Model;

    var mediator = DependencyResolver.Current.GetService<IMediator>();
    var items = mediator.Send(query);
    var select = new SelectTag(t =>
    {
        t.Option("", string.Empty);
        foreach (var item in items)
        {
            var htmlTag = t.Option(displaySelector(item), valueSelector(item));
            if (item.Equals(selectedItem))
                htmlTag.Attr("selected");
        }

        t.Id(expressionText);
        t.Attr("name", expressionText);
    });

    return select;
}

We represent the list of items we want as a query, then execute the query through a mediator. From the results, we specify what should be the display/value selectors. Finally, we build our select tag as normal, using an HtmlTag instance directly. The query/mediator piece is the same as I described back in my controllers on a diet series, we’re just reusing the concept here. Our usage would look something like:

<div class="col-md-10">
    @Html.QueryDropDown(m => m.User,
        new ActiveUsersQuery(),
        t => t.FullName,
        t => t.Id)
</div>

If the query required contextual parameters – not a problem, we simply add them to the definition of our request object, the ActiveUsersQuery class.
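
For example, a hypothetical version of that query constrained to one office (the property is illustrative):

public class ActiveUsersQuery : IRequest<IEnumerable<User>>
{
    // Contextual constraint: the handler filters active users to this office.
    public Guid OfficeId { get; set; }
}

The query handler changes accordingly, and the view simply passes the context in when constructing the query object.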

So that’s how we’ve tackled dynamic lists of items. Depending on the situation it may or may not require conventions, but either way the introduction of the HtmlTags library allowed us to programmatically build up our HTML without resorting to strings.

We’ve tackled the basics of building input/output/label elements, but we can go further. In the next post, we’ll look at building higher-level components from these building blocks that can incorporate things like validation messages.


Categories: Blogs

Appium Bootcamp – Chapter 2: The Console

Sauce Labs - 3 hours 45 min ago

This is the second post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. To read the first post, click here.

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the second of eight posts; a new post will be released each week.

Configuring Appium

In order to get Appium up and running there are a few additional things we’ll need to take care of.

If you haven’t already done so, install Ruby and setup the necessary Appium client libraries (a.k.a. “gems”). You can read a write-up on how to do that here.

Installing Necessary Libraries

Assuming you’ve already installed Ruby and need some extra help installing the gems, here’s what you need to do.

1. Install the gems from the command-line with `gem install appium_console`
2. Once it completes, run `gem list | grep appium`

You should see the following listed (your version numbers may vary):

```sh
appium_console (1.0.1)
appium_lib (4.0.0)
```

Now you have all of the necessary gems installed on your system to follow along.

An Appium Gems Primer

`appium_lib` is the gem for the Appium Ruby client bindings. It is what we’ll use to write and run our tests against Appium. It was installed as a dependency to `appium_console`.

`appium_console` is where we’ll focus most of our attention in the remainder of this and the next post. It is an interactive prompt that enables us to send commands to Appium in real-time and receive a response. This is also known as a read-eval-print loop (REPL).

Now that we have our libraries setup, we'll want to grab a copy of our app to test against.

Sample Apps

Don't have a test app? Don't sweat it. There are pre-compiled test apps available to kick the tires with. You can grab the iOS app here and the Android app here. If you're using the iOS app, you'll want to make sure to unzip the file before using it with Appium.

If you want the latest and greatest version of the app, you can compile it from source. You can find instructions on how to do that for iOS here and Android here.

Just make sure to put your test app in a known location, because you'll need to reference the path to it next.

App Configuration

When it comes to configuring your app to run on Appium there are a lot of similarities to Selenium -- namely the use of Capabilities (“caps” for short).

You can specify the necessary configurations of your app through caps by storing them in a file called `appium.txt`.

Here's what `appium.txt` looks like for the iOS test app to run in an iPhone simulator:

```
[caps]
platformName = "ios"
app = "/path/to/UICatalog.app.zip"
deviceName = "iPhone Simulator"
```

And here’s what `appium.txt` looks like for Android:

```
[caps]
platformName = "android"
app = "/path/to/api.apk"
deviceName = "Android"
avd = "training"
```

For Android, note the use of `avd`. The `"training"` value is for the Android Virtual Device that we configured in the previous post. This is necessary for Appium to auto-launch the emulator and connect to it. This type of configuration is not necessary for iOS.

For a full list of available caps, read this.

Go ahead and create an appium.txt with the caps for your app (making sure to place it in the same directory as the Gemfile we created earlier).

Launching The Console

Now that we have a test app on our system and configured it to run in Appium, let’s fire up the Appium Console.

First we’ll need to start the Appium server. So let’s head over to the Appium GUI and launch it. It doesn’t matter which radio button is selected (Android or Apple). Just click the `Launch` button in the top right-hand corner of the window. After clicking it, you should see some debug information in the center console. Assuming there are no errors or exceptions, it should be up and ready to receive a session.

After that, go back to your terminal window and run `arc` (from the same directory as `appium.txt`). This is the execution command for the Appium Ruby Console. It will take the caps from `appium.txt` and launch the app by connecting it to the Appium server. When it’s done you will have an emulator window of your app that you can interact with as well as an interactive command-prompt for Appium.

Outro

Now that we have our test app up and running, it’s time to interrogate our app and learn how to interact with it.

Click HERE to go to Chapter 1.

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies

Announcing the 2014 Summer Bug Battle, uTest’s First Since 2010

uTest - 4 hours 57 min ago

uTest is happy and excited to announce that a proud tradition and competition that started in our community in 2008 is back after a four-year hiatus…the Bug Battle!

Bug Battles are arguably even more popular now than they were the last time we held this esteemed competition. Companies from Microsoft to Facebook are offering up bounties to testers who find the most crucial of bugs bogging down their apps, putting their companies’ credibility on the line.

The Bug Battle launches right now, Wednesday, July 23. Testers will have two weeks, until Wednesday, August 6th, to submit the most impactful Desktop, Web and Mobile bugs from testing tools contained on our Tool Reviews site. Only the best battlers will take home all the due glory, respect, and the cash prizes! And speaking of those cash prizes, we’ll be awarding well over $1000, along with uTest swag for bugs that are not only the most crucial and impactful, but that are part of well-written bug reports.

Want to be updated on all of the action? Be sure to follow along on your favorite social media channels so you don’t miss any of the milestones:

We’ll also be keeping you covered on the competition here at the uTest Blog every step of the way, along with the announcement of the winners on Wednesday, August 20th…after the community gets their say in voting!

The competition is only for members of the uTest Community, which…ahem…is totally free, so if you’re not a member, sign up today. Beyond the competition, you’ll also have access to some of the top testing talent in the industry in our Forums, and a wealth of free training content at uTest University.

Be sure to check out all of the full submission details, rules, prizes and deadlines over at the official 2014 Summer Bug Battle site.

Let the games begin!

Categories: Companies

Flexibility increases appeal of open source for public sector

Kloctalk - Klocwork - 6 hours 10 min ago

Open source software is currently experiencing a surge in both popularity and applicability. While the technology has been around for quite a while by this point, never before has open source software been embraced to this degree. Perhaps the most notable example of this trend is the growing role played by these solutions in the public sector. Increasingly, governments around the world are leveraging open source for a wide range of purposes.

There are a number of factors driving this trend. Among the most significant, as Government Computing recently highlighted, is the growing realization that proprietary software providers require inflexible contracts. By turning to open source options, government agencies can enjoy the same or a better level of service without the need to abide by significant restrictions.

Real and imagined savings
The source noted that most of the debate swirling around open source versus proprietary solutions concerns cost. Many open source software advocates compare these solutions to generic medicine, while likening proprietary offerings to name-brand medications. The former will prove just as effective as the latter, but at a small fraction of the price.

Of course, as Government Computing acknowledged, this is far from a perfect comparison. There are other factors which can add to the cost of open source adoption, such as training, integration, governance, security, cloud adoption and more.

"Ignoring these in a simple view that open source is always cheaper will probably create a range of new costs," the source explained.

However, this does not mean the cost benefits of open source adoption are imagined. On the contrary, many agree that open source has the potential to deliver significant savings. Yet it is important to keep in mind that these rewards are possible only if open source is approached in a cautious, knowing way. This is key for open source solution providers, as well.

"The challenge for open source providers is to be open about total cost of ownership – the idea that open source is 'free' in a corporate environment is usually neither helpful nor true. Honesty about the cost economics will also help to promote the real potential of open source in a corporate environment," Government Computing explained.

Flexibility benefits
The bigger advantage provided by open source software, Government Computing asserted, is the greater flexibility it provides for users.

"The challenge for proprietary suppliers is to be aware that they are on 'thin ice,'" the source explained. "Inflexible and aggressive contracts, or significant unexpected price increases will increase the appeal of open source tools, especially in the public sector."

Already, this process is well underway. The source noted that the public, as well as private, sector now regularly uses open source software. Such offerings give users a much greater degree of control over how their software is implemented and utilized, which is a powerful incentive to any IT team.

This view coincides with the perspective recently offered by industry expert David Wheeler. In a conversation with Opensource.com, Wheeler emphasized that the U.S. government has significantly increased its embrace of open source software solutions in recent years. Agencies that until recently had virtually no open source involvement now use a range of offerings, including Red Hat Enterprise Linux, PostgreSQL and others. Departments now leveraging open source include NASA, the Consumer Financial Protection Bureau and the White House.

As open source becomes increasingly popular, it’s critical for teams to understand the risks, costs, and level of effort needed to incorporate code safely and effectively. Developing the right open source policy is the first step towards bringing a consistent, repeatable process to open source management.

Learn more:
• Build your own policy using our Open Source Policy Builder
• Understand the four strategies needed to reduce your open source risk

Categories: Companies

How to Spruce up your Evolved PHP Application

Do you have a PHP application running and have to deal with inconveniences like lack of scalability, complexity of debugging, and low performance? That’s bad enough! But trust me: you are not alone! I’ve been developing Spelix, a system for cave management, for more than 20 years. It originated from a single user DOS application, […]

The post How to Spruce up your Evolved PHP Application appeared first on Compuware APM Blog.

Categories: Companies

How do you take notes? Want to share it?

The Social Tester - 7 hours 22 min ago
As some of you may know, I’ve been doing some on and off research over the last five years on note taking. I’ve been interviewing people, gathering notes, observing people and studying note taking. I’m bringing all of this together in an upcoming book. The book will be released next year via Lean Pub... Read More
Categories: Blogs

Software Testing News: “QA – the Yawn Function of IT?”

Earlier this month, Software Testing News, a website that “delivers the latest news in the industry, from the most up-to-date reports in web security to the latest testing tool that can help you perform better” carried an opinion piece from Original Software CEO Colin Armitage. “Given the mission critical role they play, it’s odd that […]
Categories: Companies

StormRunner Load bringing you fast and simple performance testing on demand

HP LoadRunner and Performance Center Blog - Wed, 07/23/2014 - 04:42

When you think of the word “Cloud”, chances are that you don’t immediately think of weather. (This is an industry blog, after all.) Your mind most likely turns to the ways you can save money by moving to the cloud.

Now what do you think about when I say the word “Storm”? (Now I bet you are thinking about weather.) As of July 24, my hope is that you will think performance testing when I say “storm”. Keep reading to find out why you should look to the cloud for the latest in performance testing.
Categories: Companies

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Wed, 07/23/2014 - 02:53
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:
  developers.google.com/gtac

For those that have already signed up to attend or speak, we will contact you directly in mid August.

Categories: Blogs

Test Status Report - Template and Walkthrough

Yet another bloody blog - Mark Crowther - Wed, 07/23/2014 - 02:27
As mentioned in the End of Day Reporting - Worked Example article, EoD reporting is something we'll get asked to do as software testers on virtually all our projects. In fact, it should be literally all our projects; even when we're not asked, we should provide it.

In order to do so, we need a means to capture the data that will be included in a report. That data is just data though and we need to convert it into easily digestible information. Be mindful that raw numbers may be accurate and are valuable, but they are often devoid of meaning. When reporting, don't be shy about including context and narrative - your report may be unreadable without it!

The previous article on End of Day reporting covered some points around presentation, so let's look at capturing and interpreting the raw data here. When reading through this article, have a copy of the template open in front of you. We'll walk-through each of the key sections here, but it's best to watch the YouTube video and refer to the template for a more in depth look at the Workbook.

Get a copy of the Test Status Tracker Template



YouTube Channel: http://www.youtube.com/user/Cyreath [WATCH, RATE, SUBSCRIBE]
Oh, watch out for formulas, there are plenty of them on the Worksheets!

--------------

Project Details
Enter the following details about the application. This data is used throughout the Workbook.

Project Name: [Enter the name of the project or application under test]
Start Date: [Enter the date as nn-mmm, e.g. 21-Jul]
Total No of Cases: [Enter the total Test Cases in scope]

Daily Tracker
There is nothing you need to enter on this tab. It is generated from the Data Table and Issue Tracker tabs. However, you will need to manually edit chart symbol colours. Refer to the status table and review variance against plan on the chart. You'll then need to select the individual data point on the green Actual line, then fill the correct colour into the point.

Data Table
There are two main sections, Planned and Actual, where you need to enter data. Before completing the table, enter the names of the testing staff in the left hand column. When planning out the number of items that will be worked on each day, enter the total in the 'Total No. Of Test Cases' box above first. Then map out what can be delivered on a day by day basis, for each member of staff.

At the end of each day, enter the Actual numbers achieved by each team member.
With the data in place, select the calculation cells at the bottom of the 'Actual' table and, using the fill handle, populate the cells under the day you have just entered data for.

This is done daily to avoid showing zero figures on the chart, making progress easier to see.

Issues Tracker
Where an issue or risk emerges during delivery, track it on this tab if your project has no other central place to record the data.

Downtime Tracker
Each time the team are unable to progress with delivery due to some issue, this is 'downtime' and should be recorded on the Downtime Tracker tab.

The tracker forms a historic record of time lost and allows reconciliation of lost time against plan, to support reporting and re-scheduling activities.

--------------

As stated above, watch the video for a clearer run-down of the tracker. This is one worked example to show some tricks and tips of how you can make your own. If you make any interesting changes be sure to let us know!

Mark


Liked this post?
Say thanks by Following the blog or subscribing to the YouTube Channel!



Categories: Blogs

Trusting Third-Party Code That Can’t Be Trusted

Sonatype Blog - Tue, 07/22/2014 - 23:05
Paul Roberts (@paulfroberts) at InfoWorld recently shared his perspective on “5 big security mistakes coders make”. First on his list was trusting third-party code that can’t be trusted. Paul shares: “If you program for a living, you rarely -- if ever -- build an app from scratch. It's much more...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

4 reasons why testers' time is more valuable than the number of bugs

Testlio - Community of testers - Tue, 07/22/2014 - 23:00

One of the fundamental beliefs at Testlio has been to value the time testers spend on testing, instead of measuring them by the number of bugs they submit.
We have gathered some thoughts on why paying for time spent on testing works for Testlio clients and testers.

Motivation for testers. Testers often say that they are undervalued. But all professional testers and developers know how important it is to have quality testing before launching new applications. Testers who are paid per hour instead of per bug feel that they are valued and that their work is important. They are more motivated to focus on the current application and to make sure the whole app is tested.

Testing the whole system. Developers and clients want the whole system to be tested. Getting information on the common use-cases is as important as going deep into functional, performance and security testing.

Serious bugs are more important to clients and developers. Of course spelling mistakes matter as well, but a security issue makes the whole app vulnerable. Testers who know they have time will drill deeper to find hard-core issues. Experienced testers are much more satisfied finding killer bugs in addition to the odd pixel misalignment that is ugly to look at. Proper bug reporting always needs some time to reproduce the issues and confirm the details.

Team work. Communication and sharing information among testers is important for working fluently. A team of testers is interested in finding bugs and adding additional information as a joint team. When we measure their individual time rather than their bug count, they don't need to sprint to be the first one to report.

What do you think: why is time spent on testing more important than the number of reported bugs?

Categories: Companies

Automating CD Pipelines with Jenkins - Part 1: Vagrant, Fabric and Selenium

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a session given by Hoi Tsang, DealerTrack, at JUC Boston.

There’s a gold standard for the software development lifecycle that it seems almost every shop aspires to, yet seemingly few have achieved: a complete continuous delivery pipeline with Jenkins that automatically pulls from an SCM repository on each commit, then compiles the code, packages the app and runs all unit/acceptance/static analysis tests in parallel.

Integration testing on the app then runs in mini-stacks provided by Vagrant and if the build passes all testing, Jenkins stores the binary in a repository as a release candidate until a candidate passes QA. Jenkins then plucks the release from the repository to deploy it to production servers, which are created on-demand by a provisioning and configuration management tool like Chef.

The nitty gritty details of the actual steps may vary from shop to shop, but based on my interactions with potential CloudBees customers and the talks at the 2014 Boston JUC, this pipeline seems to be what many high-level execs aspire to see their organization achieving in the next few years.

Jenkins + Vagrant, Fabric and Selenium
Hoi Tsang of DealerTrack gave a wonderful overview of how DealerTrack accomplished such a pipeline in his talk: “Distributed Scrum Development w/ Jenkins, Vagrant, Fabric and Selenium.”

As Tsang explained, integration can be a problem, and it’s an unfortunately expensive problem to fix. He explained that it was best to think of the problem of integration as a multiplication problem, where

practice × precision × discipline = perfection

When it comes to Scrum, which Tsang likened to “driving really fast on a curvy road,” almost all of the attendees at Tsang’s JUC talk practiced it, and almost all confirmed that they do test-driven development.

In Tsang’s case, DealerTrack was also a test-driven development shop and had the goals of writing more meaningful use cases and defining meaningful test data.

To accomplish this, DealerTrack set up Jenkins and installed a few plugins: the Build Pipeline plugin, Cobertura and Violations, to name a few. They also created build and deployment jobs - the builds were triggered by code commits and schedules, and the builds trigger tests whose pass/fail rules have been defined by each DealerTrack team. Their particular rules were:
  • All unit tests passed
  • Code coverage > 90%
  • Code standard > 90%
DealerTrack had their Jenkins master control a Selenium hub, which consisted of a grid of dedicated VMs/boxes registered to the Selenium hub. Test cases would get distributed among the grid, and the results would be reported back to the associated Jenkins jobs.

The builds would also be subject to an automated integration build, which relied on Vagrant to define mini-stacks for the integration tests to run in by checking out source code into a shared folder with a Virtual Machine, launching the VM, preparing + running the test, then cleaning up the test space. Despite this approach to integration testing taking longer, Tsang argued that it provided a more realistic testing environment.

If the build passed, then its artifact would be uploaded to an internally-hosted repository and reports on the code standards + code coverage were published. This would also trigger a documentation generation job.

According to Tsang, DealerTrack also managed to set up an automated deployment flow, where Jenkins would pick up a build from the internal repository, tunnel into the development server, then drop off the artifact and deploy the build. They managed to accomplish this using Python Fabric, a CLI for streamlining the use of SSH for application deployment or system administration tasks.

Tsang explained that DealerTrack had a central Jenkins master to maintain the build pipeline, but split the work between each team’s assigned slave and assigned testing server. Dedicated slaves worked on the more important jobs, which allowed branch merging to be accomplished 30% faster.
Stay tuned for Part 2!


Tracy Kennedy
Solutions Architect
CloudBees
As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (Meet the Bees blog post coming soon!) For now, follow her on Twitter.
Categories: Companies

UPDATE: $10,000 Tesla Hacking Challenge Accepted…and Defeated

uTest - Tue, 07/22/2014 - 19:30

Continuing in the Security State of Mind here at the uTest Blog today, some of you may remember that we reported last week that the 2014 SyScan conference was offering a $10,000 bounty for any tester who was able to remotely access a Tesla Model S’ automobile operating system.

That open challenge didn’t last too long, apparently.

According to The Register, students from Zhejiang University late last week were able to take control of the automobile remotely while it was driving, gaining access to its doors and sunroof by opening them, switching on the headlights, and, for some giggles, sounding the horn, too.

If you’ll remember, Tesla didn’t play any part in this open challenge to hackers at the Chinese conference, but it did issue a statement supporting “the idea of providing an environment in which responsible security researchers can help identify potential vulnerabilities,” hoping “security researchers will act responsibly and in good faith.” Opening the doors while the car is driving doesn’t sound too responsible to me, but that just underscores the fact that this is something definitely worth looking into on the part of Tesla.

I know a little company that could help.

 

Categories: Companies

LoadRunner licenses now available on Pronq.com

HP LoadRunner and Performance Center Blog - Tue, 07/22/2014 - 18:26

Acquiring testing tools shouldn’t be painful. You should be able to get started quickly, and scale up your usage on-demand, at any time. That’s why we’re now offering Virtual User Day licenses for LoadRunner on Pronq.com.

Categories: Companies

Working with HP LoadRunner and HP Network Virtualization

HP LoadRunner and Performance Center Blog - Tue, 07/22/2014 - 18:24

HP Network Virtualization (NV) helps you test the point-to-point performance of network-deployed products under real-world conditions. You can simulate network effects such as latency, packet loss and bandwidth, allowing your test to run in an environment that closely resembles the actual deployment of your application.

This becomes even more important in the mobile world, where performance testers must take into consideration the different communication conditions which are affected by the network operators, infrastructures, etc.

 

Continue reading to learn about how NV integrates with HP LoadRunner to provide the complete testing experience for all network situations.

 

(This post was written by Dan Belfer, Ilona Zaurov and Yoav Weiss, from the LoadRunner R&D Team)

Categories: Companies

Conventional HTML in ASP.NET MVC: Replacing form helpers

Jimmy Bogard - Tue, 07/22/2014 - 17:51

Other posts in this series:

Last time, we ended at the point where we had a baseline behavior for text inputs, labels and outputs. We don’t want to stop there, however. My ultimate goal is to eliminate (as much as possible) using any specific form helper from ASP.NET MVC. Everything we need to determine what/how to render input elements is available to us in metadata; we just need to use it.

Our first order of business is to catalog the expected elements we wish to support:

  • Button (no)
  • Checkbox (yes)
  • Color (yes)
  • Date (yes)
  • DateTime (yes)
  • DateTime Local (yes)
  • Email (yes)
  • File (no)
  • Hidden (yes)
  • Image (no)
  • Month (no)
  • Number (yes)
  • Password (yes)
  • Radio (yes)
  • Range (no)
  • Reset (no)
  • Search (no)
  • Telephone (yes)
  • Text (yes)
  • Time (yes)
  • Url (yes)

And then there are the two input types that don’t use the <input> element: <select> and <textarea>. This is where convention-based programming and the object model of HtmlTags really shine. Instead of needing to completely replace a template as we do in MVC, we only need to extend the individual tags and leave everything else alone. We know that we want to have a baseline style on all of our inputs. We also want to configure this once, which our HTML conventions allow.

So how do we want to key into our conventions? I like to follow a progression:

  • Member type
  • Member name
  • Member attributes

We can infer a lot from the type of a member. Boolean? That’s a checkbox. Nullable bool? That’s not a checkbox, but a select, and so on. Let’s look at each type of input and see what we can infer to build our conventions.
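
As a taste of that inference, a nullable bool might become a three-state select. Here’s a hypothetical modifier, following the same shape as the enum modifier we’ll see later in this post:

Editors.Modifier<NullableBoolModifier>();

// Our modifier (illustrative only)
public class NullableBoolModifier : IElementModifier
{
    public bool Matches(ElementRequest token)
    {
        return token.Accessor.PropertyType == typeof(bool?);
    }

    public void Modify(ElementRequest request)
    {
        // Swap the input for a select with empty/Yes/No options.
        request.CurrentTag.RemoveAttr("type");
        request.CurrentTag.TagName("select");
        request.CurrentTag.Append(new HtmlTag("option"));
        request.CurrentTag.Append(new HtmlTag("option").Value("true").Text("Yes"));
        request.CurrentTag.Append(new HtmlTag("option").Value("false").Text("No"));
    }
}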

Labels

Labels can be a bit annoying; you might need localization and so on. What I’d like to do is provide some default, sensible behavior. If we look at a login view model:

public class LoginViewModel
{
    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }

    [Display(Name = "Remember me?")]
    public bool RememberMe { get; set; }
}

We have a ton of display attributes, all basically doing nothing. These labels key into a couple of things:

  • Label text
  • Validation errors

We’ll get to validation in a future post, but first let’s look at the labels. What can we provide as sensible defaults?

  • Property name
  • Split PascalCase into separate words
  • Booleans get a question mark
  • Fallback to the display attribute if it exists

A sensible label convention would get rid of nearly all of our “Label” attributes. The default conventions get us the first two, we just need to modify for the latter two:

Labels.ModifyForAttribute<DisplayAttribute>((t, a) => t.Text(a.Name));
Labels.IfPropertyIs<bool>()
    .ModifyWith(er => er.CurrentTag.Text(er.OriginalTag.Text() + "?"));

With this convention, our Display attributes go away. If we have a mismatch between the view model property and the label, we can use the Display attribute to specify it explicitly. I only find myself using this when a model is flattened. Otherwise, I try and keep the label I show the users consistent with how I model the data behind the scenes.

Checkbox

This one’s easy. Checkboxes represent true/false, so that maps to a boolean:

Editors.IfPropertyIs<bool>().Attr("type", "checkbox");

// Before
@Html.CheckBoxFor(m => m.RememberMe)
@Html.LabelFor(m => m.RememberMe)

// After
@Html.Input(m => m.RememberMe)
@Html.Label(m => m.RememberMe)

Not very exciting: we just tell Fubu that for bools, the “type” attribute becomes “checkbox”. The existing MVC template does a few other things, but I don’t like any of them (like rendering an extra hidden input).

Color

With some model binding magic, we can handle this by merely looking at the type:

Editors.IfPropertyIs<Color>().Attr("type", "color");
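
That “model binding magic” isn’t shown here, but a sketch of a binder that parses the posted hex value into a System.Drawing.Color might look like this (the Color type in play is an assumption):

public class ColorModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
        ModelBindingContext bindingContext)
    {
        var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (value == null || string.IsNullOrEmpty(value.AttemptedValue))
            return null;

        // <input type="color"> posts values like "#ff0000".
        return ColorTranslator.FromHtml(value.AttemptedValue);
    }
}

// Registered once at startup:
// ModelBinders.Binders.Add(typeof(Color), new ColorModelBinder());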


Date/Time/DateTime/Local DateTime

This one is a little bit more difficult, since the BCL doesn’t have a concept of a Date. However, NodaTime does, so we can use that type and key off of it instead:

Editors.IfPropertyIs<LocalDate>().Attr("type", "date");
Editors.IfPropertyIs<LocalTime>().Attr("type", "time");
Editors.IfPropertyIs<LocalDateTime>().Attr("type", "datetime-local");
Editors.IfPropertyIs<OffsetDateTime>().Attr("type", "datetime");


Email

Email could go a number of different ways. There’s not really an Email type in .NET, so we can’t key off the property type. The MVC version uses an attribute to opt in to an Email template, but I think that’s redundant. In my experience, every property with “Email” in the name is an email address. Why not key off this?

Editors.If(er => er.Accessor.Name.Contains("Email"))
    .Attr("type", "email");

This one could go both ways, but if I want to also/instead go off the DataType attribute, it’s just as easy. I don’t like being too explicit or too confusing, so you’ll have to base this on what you actually find in your systems.
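
If we did key off the attribute, it would follow the same shape as the attribute checks used for Password and Telephone below:

Editors.If(er =>
{
    var attr = er.Accessor.GetAttribute<DataTypeAttribute>();
    return attr != null && attr.DataType == DataType.EmailAddress;
}).Attr("type", "email");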

Hidden

Hiddens can be a bit funny. If I’m being sensible with Guid identifiers, I know right off the bat that any Guid type should be hidden. It’s not always the case, so I’d like to support the attribute if needed.

Editors.IfPropertyIs<Guid>().Attr("type", "hidden");
Editors.IfPropertyIs<Guid?>().Attr("type", "hidden");
Editors.IfPropertyHasAttribute<HiddenInputAttribute>().Attr("type", "hidden");


Number

Number inputs are a bit complicated. I actually tend to avoid them, as I find they’re not really that usable. However, I do want to provide some hints to the user as well as some rudimentary client-side validation with the “pattern” attribute.

Editors.IfPropertyIs<decimal?>().ModifyWith(m =>
    m.CurrentTag
    .Data("pattern", "9{1,9}.99")
    .Data("placeholder", "0.00"));

I’d do similar for other numeric types (integer/floating point).
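
For example, an integer version might look like this (the pattern and placeholder are illustrative):

Editors.IfPropertyIs<int?>().ModifyWith(m =>
    m.CurrentTag
    .Data("pattern", "9{1,9}")
    .Data("placeholder", "0"));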

Password

We’ll key off the property name if we can, and otherwise check for an attribute.

Editors.If(er => er.Accessor.Name.Contains("Password"))
    .Attr("type", "password");
Editors.If(er =>
{
    var attr = er.Accessor.GetAttribute<DataTypeAttribute>();
    return attr != null && attr.DataType == DataType.Password;
}).Attr("type", "password");

We had to get a little fancy with our attribute check, but nothing too crazy.

Radio

Radio buttons represent a selection of a group of items. In my code, this is represented with an enum. Since radio buttons are a bit more complicated than just an input tag, we’ll need to build out the list of elements manually. We can either build up our select element from scratch, or modify the defaults. I’m going to go the modification route, but because it’s a little more complicated, I’ll use a dedicated class instead:

Editors.Modifier<EnumDropDownModifier>();

// Our modifier
public class EnumDropDownModifier : IElementModifier
{
    public bool Matches(ElementRequest token)
    {
        return token.Accessor.PropertyType.IsEnum;
    }

    public void Modify(ElementRequest request)
    {
        var enumType = request.Accessor.PropertyType;

        request.CurrentTag.RemoveAttr("type");
        request.CurrentTag.TagName("select");
        request.CurrentTag.Append(new HtmlTag("option"));
        foreach (var value in Enum.GetValues(enumType))
        {
            var optionTag = new HtmlTag("option")
                .Value(value.ToString())
                .Text(Enum.GetName(enumType, value));
            request.CurrentTag.Append(
                optionTag);
        }
    }
}

Element modifiers and builders follow the chain of responsibility pattern, where any modifier/builder that matches a request will be called. We only want enums, so our Matches method looks at the accessor property type. Again, this is where our conventions show their power over MVC templates. In MVC templates, you can’t modify the matching algorithm, but in our example, we just need to supply the matching logic.

Next, we use the Modify method to examine the incoming element request and make changes to it. We replace the tag name with “select”, remove the “type” attribute, but leave the other attributes alone. We append a child option element, then loop through all of the enum values to build out name/value options from our enum’s metadata.

Why use this over EnumDropDownListFor? Pretty easy – it gets all of our other conventions, like the Bootstrap CSS classes. In a system with dozens or more enumerations shown, that’s not something I want to repeat all over the place.

Telephone

We’ll treat the telephone just like our password element – check for a property name, and fall back to an attribute.

Editors.If(er => er.Accessor.Name.Contains("Phone"))
    .Attr("type", "tel");
Editors.If(er =>
{
    var attr = er.Accessor.GetAttribute<DataTypeAttribute>();
    return attr != null && attr.DataType == DataType.PhoneNumber;
}).Attr("type", "tel");

If we want to enforce a specific pattern, we’d use the appropriate data-pattern attribute.
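
For instance, assuming the same ModifyWith hook shown in the Number section, a North American pattern hint might look like this (the pattern itself is illustrative):

Editors.If(er => er.Accessor.Name.Contains("Phone"))
    .ModifyWith(m => m.CurrentTag
        .Data("pattern", "[0-9]{3}-[0-9]{3}-[0-9]{4}"));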

Text

This is the default, so nothing to do here!

Url

Just like our password, we’ll look at the property name, then an attribute:

Editors.If(er => er.Accessor.Name.Contains("Url"))
    .Attr("type", "url");
Editors.If(er =>
{
    var attr = er.Accessor.GetAttribute<DataTypeAttribute>();
    return attr != null && attr.DataType == DataType.Url;
}).Attr("type", "url");

If we get tired of typing that attribute matching logic out, we can create an extension method:

public static class ElementCategoryExpressionExtensions
{
    public static ElementActionExpression HasAttributeValue<TAttribute>(
        this ElementCategoryExpression expression, Func<TAttribute, bool> matches)
        where TAttribute : Attribute
    {
        return expression.If(er =>
        {
            var attr = er.Accessor.GetAttribute<TAttribute>();
            return attr != null && matches(attr);
        });
    }
}

And our condition becomes a bit easier to read:

Editors.If(er => er.Accessor.Name.Contains("Url"))
    .Attr("type", "url");
Editors
    .HasAttributeValue<DataTypeAttribute>(attr => attr.DataType == DataType.Url)
    .Attr("type", "url");


Wrapping up

We knocked out most of the HTML5 input element types, leaving out ones that didn’t make too much sense. We can still create conventions for those missing elements, likely using property names and/or attributes to determine the right convention to use. Quite a bit more powerful than the MVC templates!

Next up, we’ll look at more complex element building example where we might need to hit the database to get a list of values for a drop down.


Categories: Blogs

Transition to Agile Testing – Part 4: 7 Practical Tips

Software Testing Magazine - Tue, 07/22/2014 - 16:28
Software testing during the transition to Agile is not easy. This fourth and final part proposes 7 practical tips for a smooth adoption of Agile software testing practices. From better communication to test automation, these tips should help to solve some of the issues that are naturally associated with this transition. Remember that it is up to you, as a team, to implement your kind of Agile and to decide whether you love it or hate it. Author: Elizabeth Bagwell, Stainless Software In this, the fourth and final section in this series on the ...
Categories: Communities

Software Test Engineer, Schweitzer Engineering Laboratories, Pullman, WA

Software Testing Magazine - Tue, 07/22/2014 - 15:39
Schweitzer Engineering Laboratories (SEL) seeks a professional, innovative and meticulous individual for our Software Test Engineer position in our Windows software group. If you are looking for an opportunity to work for a great company, with excellent benefits and an outstanding reputation for quality, then we invite you to join our technical team! Software Test Engineer Responsibilities: * Perform the role of tester on software projects to ensure that all testing deliverables meet quality, cost, schedule and performance objectives. * Write Test Case Specifications, create testing estimates, write and execute functional tests, document ...
Categories: Communities

Keeping the Hackers Out – Securing Client/Server Communication

The Seapine View - Tue, 07/22/2014 - 15:30

Keeping your data secure to protect your company’s intellectual property is a top priority. Encrypting client/server communication is one way to help ensure your data is safe from eavesdropping by hackers. TestTrack and Seapine License Server 2014.1 improve existing encryption methods and introduce a new, stronger option: RSA key exchange.

Securing TestTrack communication

Communication between TestTrack clients and the TestTrack Server should always be encrypted. At a bare minimum, make sure the Encrypt communication between clients and the server option is enabled in the Security server options in the TestTrack Server Admin Utility.

TestTrack Server security options - basic encryption

If your network is potentially insecure and you need stronger encryption, you can use RSA key exchange. RSA is a public key encryption algorithm that uses separate keys for encryption and decryption.*

You may want to use RSA if:

  • Your organization stores sensitive information in TestTrack.
  • Your network is potentially insecure.
  • Users log in to client applications from outside your network.
  • Users are authenticated to TestTrack using LDAP, single sign-on, or external authentication.

Using RSA does require native client users to do a little bit of work, but we’ll get to that in a minute.

To use RSA key exchange, in the TestTrack Server Admin Utility, make sure Encrypt communication between clients and the server is selected and then select Use RSA key exchange.

TestTrack Server security - RSA encryption

 

Next, click Download Public Key File to download a file that contains the TestTrack Server address, port number, and public key. Select a location, enter a file name, and click Save. Make sure you save the file in a secure location.

Here’s the important part. The public key must be added to any TestTrack clients that connect to the server. Distribute the key file to all users who use the native TestTrack Client, add-ins, and native TestTrack Server Admin Utility. Users must import the key file to their server connection settings. For example, in TestTrack, click Setup on the login dialog box, select the server, and click Edit. Click Import in the Edit TestTrack Server dialog box, select the key file, and click Open. Click OK to save the changes.

TestTrack Server connection settings

 

If users use TestTrack web clients, only the TestTrack administrator needs to import the key file using the TestTrack Registry Utility on the TestTrack Server. In the registry utility, click CGI Options. In the Default TestTrack Server area, click Import, select the key file, and click Open. Click OK to save the changes.

TestTrack Registry Utility CGI settings

 

If you ever suspect the private key on the TestTrack Server is compromised, you can easily regenerate the keys, download the new key file, and import it to clients.

Securing Seapine License Server communication

The same encryption and key exchange principles apply to the Seapine License Server. Always enable encryption to make sure communication is secure between the license server, admin utilities, API, and other Seapine product servers. If you need stronger encryption, you can use RSA key exchange.

To enable encryption and RSA for the license server, in the license server admin utility, click Server Options and select the Server category.

More information

For more information about encryption and key exchange, see the following help topics.

TestTrack:

Seapine License Server:

—–

*If you’re curious about the technical details, here’s how RSA works: the client application generates a random, 256-bit secret key and encrypts it with the server’s public key. The server hashes the secret key and signs the hash with its private key. The private key is only stored on the server hard drive and never leaves the server. To compromise the secret key or impersonate the server, a hacker must know the server’s private key or substitute their own public key in client applications.
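
To make that flow concrete, here is a simplified, self-contained C# sketch of the exchange using .NET’s RSA APIs (an illustration of the concept, not TestTrack’s actual implementation):

using System;
using System.Security.Cryptography;

class RsaExchangeSketch
{
    static void Main()
    {
        // "Server" key pair; the public half is what the downloaded key file
        // would distribute to clients.
        using (var serverRsa = RSA.Create(2048))
        using (var clientRsa = RSA.Create())
        {
            // Client side: import only the server's public key.
            clientRsa.ImportParameters(serverRsa.ExportParameters(false));

            // Client side: generate a random 256-bit secret key and encrypt
            // it with the server's public key.
            var secretKey = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(secretKey);
            var encryptedSecret = clientRsa.Encrypt(secretKey, RSAEncryptionPadding.OaepSHA256);

            // Server side: decrypt the secret and sign its hash with the
            // private key, which never leaves the server.
            var recovered = serverRsa.Decrypt(encryptedSecret, RSAEncryptionPadding.OaepSHA256);
            var signature = serverRsa.SignData(recovered, HashAlgorithmName.SHA256,
                RSASignaturePadding.Pkcs1);

            // Client side: verify the signature using the public key. Without
            // the private key, an impostor can't produce a valid signature.
            var genuine = clientRsa.VerifyData(secretKey, signature,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            Console.WriteLine(genuine ? "Server verified" : "Impostor!");
        }
    }
}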


Categories: Companies
