Feed aggregator

Lessons Learned When Designing Products for Smartwatches & Wearables

Jonathan Kohl - Sat, 07/12/2014 - 18:46

Lately, I have been doing a bit of work designing products for smartwatches and wearables. It’s a challenge, but it is also a lot of fun. I’m just getting started, but I’ll try to describe what I have learned so far.

Designing for these devices has required a shift in my thinking. Here’s why: we have a long and rich history in User Experience (UX) and User Interface (UI) design for programs written for computers with screens. When we made the leap from a command line interface to graphical interface, this movement exploded. For years we have benefitted from the UX community. Whenever I am faced with a design problem when I’m working on a program or web site, I have tons of material to reach for to help design a great user interface.

That isn’t the case with wearables because they are fundamentally different when it comes to user interaction. For example, a smartwatch may have a small, low-fidelity screen, while an exercise bracelet may have no screen at all. It might just have a vibration motor inside and a blinking light on the outside to provide live feedback to the end user.

So where do you start when you are designing software experiences that integrate with wearables? The first thing I did was look at APIs for popular wearables, and at their guidance on how to interact with end users. I did what I always do: I tried to find similarities with computers or mobile devices and designed the experiences the way I would for them.

Trouble was, when we tested these software experiences on real devices, in the real world, they were sometimes really annoying. There were unintended consequences with the devices vibrating, blinking, and interrupting real world activities.
“AHHHH! Turn it off! TURN IT OFF!!”

Ok, back to the drawing board. What did we miss?

One insight I learned from this experience sounds simple, but it required a big adjustment in my design approach. I had been working on software systems that tried to make a virtual experience on a computer relatable to a real-world experience. With wearable devices that we literally embed into our physical lives, that model reverses. It can mess with your mind a bit, but it is actually very obvious once it clicks in your brain.

Simply put, when I don’t have a UI on a device, the world becomes my UI.

Let me expand on my emerging wearable design approach to help explain why.

Understand the Core Value Proposition of Your Product

If you’ve been developing software for computers and mobile devices already, this may sound simple, but it can actually be a difficult concept to nail down.

One approach I take is to pare down the current feature set. If we cut this feature, does the app still work? Does it prevent the end user from solving problems or being entertained? If we can cut it, it might be a supporting feature, not a core feature. Remember, wearables have less power and screen real estate, so we’ll have to reduce. Once only a group of core features remains, it is time to summarize. Can we describe what these features do together to create value and a great experience for users?

Another approach I use is to abstract our application away from computing technology altogether. I map out common user goals and workflows and try to repeat them away from the PC with paper and pen. With an enterprise productivity application that involved a lot of sharing and collaboration, I was able to do this with different coloured paper (to represent different classes of information), folders (to represent private or shared files), and post-its and different coloured pens for labelling and personalization.

In a video game context, I did this by reducing the game and its mechanics down to paper, a pen, a rule book and dice. I then started adding technology back until I had enough for the wearable design.

Now, how do you describe what makes you different? Have you researched other players in this market? Who are your competitors, or who has an offering that is quite similar? What sets you apart in a sea of apps and devices? This is vital to understand and express clearly.

How do I know if I am done, or close enough? As a team, we should be able to express what our product is and what it does in a sentence or two. Then, that should be relatable to people outside of our team, preferably people we know who aren’t technologists. If they understand the core offering, and express interest with a level of excitement, then we are on our way.

If you are starting out fresh, this can be a little simpler, since it is often easier to create something new than to change what is established. However, even with a fresh new product, it is easy to bloat it up with unneeded features, so have the courage to be ruthless about keeping things simple, at least at first.

Research and Understand the Device

With wearables and mobile devices in general, the technology is very different from what we are used to with PCs. I call them “sensor-based devices” since the sensors are a core differentiator from PCs and enable them to be so powerful and engaging to users. The technical capabilities of these devices are incredibly important to understand because they help frame our world of possibilities when we decide what features to implement on wearables and smartwatches. Some people prefer to do blue-sky feature generation without these restrictions in place, but I prefer to work with what is actually appropriate and possible with the technology. Also, if you understand the technology and what it was designed for, you can exploit its strengths rather than try to get it to do something it can’t do, or does very poorly.

This is what I do when I am researching a new device:

  • Read any media reviews I can find. PR firms will send out prototypes or early designs, so even if the device hasn’t been released yet, there are often impressions and other information out there already.
  • Read or at least skim the API documentation. Development teams work very hard to create app development or integration ecosystems for their devices. If you aren’t technical, get a friendly neighbourhood developer on your team to study it and summarize the device capabilities and how it is composed. You need to understand what sensors it has, how they are used, and any wireless integration that it uses to communicate to other devices and systems.
  • If they provide them, thoroughly read the device’s design/UX/HCI guidelines. If they don’t, read guidelines from vendors offering something similar. For example, Pebble smartwatches have a simple but useful Pebble UX Guide for UI development. It also refers to the Android and Apple design guidelines and talks about their design philosophy. Pebble currently emphasizes a minimalist design, and recommends creating apps for monitoring, notifications and remote control. That is incredibly helpful for narrowing your focus.
  • Search the web – look for dev forums, etc., for information about what people are doing. You can pick up on chatter about popular features or affordances, common problems, and other ideas that are useful to digest. Dev forums are also full of announcements and advice from the technical teams delivering the devices, which is useful to review.

Determine Key Features by Creating an Impact Story

Now we can put together our core value proposition and the device’s capabilities. However, it’s important to understand our target market of users, where they will use these devices, and why. I’ve called these types of stories different things over the years: technical fables, usage narratives, expanded scenarios and others, but nothing felt quite right. Then I took the course User Experience Done Right by Jasvir Shukla and Meghan Armstrong and was delighted to find out that they use this approach as well. They had a better name, impact stories, so that is what I have adopted as well.

What I do is create an impact story that describes situations where this sort of technology might help. However, I frame them around people going about their regular, everyday lives. Remember that stories have a beginning, middle and end, and they have a scene, protagonists and antagonists, and things don’t always go well. I add in pressures and bad weather conditions that make the user uncomfortable, making sure they are things that actually occur in life, trying to create as realistic a situation as I can. Ideally, I have already created some personas on the project and I can use them as the main characters.

Most people aren’t technology-driven – they have goals and tasks and ideas that they want to explore in their everyday lives, and technology needs to enable them. I try to leave the technology we are developing out of the picture for the first story. Instead, I describe something related to what our technology might solve, and I explore the positives, negatives, pressures, harmonies and conflicts that inevitably arise. From this story, we can then look at gaps that our technology might fill. Remember that core value proposition we figured out above? Now we use it to determine how our technology platforms can address any needs or gaps in the story.

Next, we filter those ideas through the technical capabilities of the device(s) we are targeting for development. This is how we can start to generate useful features.

Once we get an idea of some core features, I then write three more short stories: a happy-ending story (what we aspire to), a bad-ending story (the technology fails the user, and we want to make sure we avoid that) and a story that ends unresolved (to help us brainstorm about good and bad user experience outcomes).

Impact stories and personas are great tools for creating and maintaining alignment with both business and technical stakeholders on teams. Stories have great hooks, they are memorable, and they are relatable. For experienced people, they bring to mind good and bad project outcomes from the past, which helps spur on the motivation for a great user experience. No one wants their solution to be as crappy as the mobile app that let you down last night at the restaurant and cost you a parking ticket.

Use the Real World as Your User Interface

UX experts will tell you that concrete imagery and wording works better than abstract concepts. That means if you have a virtual folder, create an icon that looks like a folder to represent what it is by using a cue from the physical world. What do we do if we have no user interface on a device to put any imagery on it at all? Or maybe it is just very small and limited, what then? It turns out the physical world around us is full of concrete imagery, so with a bit of awareness of a user’s context, we can use the real world as our UI, and enhance those experiences with a wearable device.

Alternate Reality Games (ARGs) are a great source of inspiration and ideas for this sort of approach. For a game development project I was working on, I also looked at geocaching mechanics. Looking at older cellular or location-based technology and how it solved problems with less powerful devices is an enormous source of information when you are looking at new devices that share some similarities.

I talked to a couple of friends who used to build location-based games for cell phones in the pre-smartphone era, and they told me that one trick with this approach is to pick things that are universal (roads, trees, bodies of water, etc.) and add a virtual significance to them in your app experience. If I am using an exercise wearable, my exercise path and items in my path that I pass by might trigger events or add significance to the data I am creating. If you run past significant points of interest on a path, notifications that cheer you on can be incredibly rewarding and engaging.
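
To make that trick concrete, here is a minimal sketch of giving universal real-world objects virtual significance; every type, threshold and formula below is an invented assumption rather than part of any particular wearable API.

using System;
using System.Collections.Generic;

// Hypothetical sketch: real-world landmarks with virtual significance.
public class PointOfInterest
{
    public string Name;          // e.g. "footbridge", "old oak tree"
    public double Lat, Lon;      // landmark position
    public string CheerMessage;  // what the wearable flashes or buzzes for
}

public class RoutePoiTrigger
{
    const double TriggerRadiusMeters = 50;
    readonly List<PointOfInterest> _pois;
    readonly HashSet<PointOfInterest> _visited = new HashSet<PointOfInterest>();

    public RoutePoiTrigger(List<PointOfInterest> pois)
    {
        _pois = pois;
    }

    // Call with each GPS fix from the wearable or paired phone; yields a
    // cheer message the first time the wearer passes each landmark.
    public IEnumerable<string> OnLocationUpdate(double lat, double lon)
    {
        foreach (var poi in _pois)
        {
            if (_visited.Contains(poi)) continue;  // cheer only once per run
            if (DistanceMeters(lat, lon, poi.Lat, poi.Lon) <= TriggerRadiusMeters)
            {
                _visited.Add(poi);
                yield return poi.CheerMessage;
            }
        }
    }

    // Haversine great-circle distance between two lat/lon points, in meters.
    static double DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double earthRadius = 6371000;
        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return earthRadius * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    static double ToRadians(double degrees)
    {
        return degrees * Math.PI / 180;
    }
}
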

Enhance situational activities

One thing that bugs me about my smartphone is that it rarely has situational awareness. I have to stop what I am doing, inform it of the context I am in, and go through all these steps to get what I want at that moment. I want it to just know. Yesterday I was on my way to a meeting in a part of town I am a bit unfamiliar with. I had the destination on my smartphone map, without turn-by-turn directions turned on. I had to take a detour because of construction, so I needed to start a trip and get turn-by-turn directions from the detoured area I was in. I pulled over to the side of the road, pulled out my smartphone, and spent far too long trying to get it to plan out a trip. I had to re-enter the destination address, set my current location, and mess around with it before I could activate the trip. A better experience would be a maps app that helps and makes suggestions once it senses you have stopped, allowing you to quickly get an adjusted trip going. While you have an active trip, these devices are quite good at adjusting on the fly, but it would be even better if they knew what I was doing and suggested things that make sense for me right now, in that particular situation.

It is easy to be irritating, to over-suggest, and to bug people to death about inconsequential things, but imagine you are walking past your favorite local restaurant, and a social app tells you your friends are there. Or on the day you usually stop in after work, your smartwatch or wearable alerts you to today’s special. If I leave my doctor’s office and walk to the front counter, a summary of my calendar might be a useful thing to have displayed for me. There are many ways that devices can use sensors and location services to help enhance an existing situation, and I see a massive amount of opportunity for this. Most of the experience takes place in real life, away from a machine, but the machine pops up briefly to help enhance the real-life experience.

Rely on the Brain and the Imagination of Your User

If we create or extend a narrative that gives real-world activities virtual meaning as well, that can be a powerful engagement tool. One mobile app I like is a jogging app that creates a zombie game overlay on your exercise routine. Zombies, Run! is a fantastic example of framing one activity in the context of another. This app can make exercise more interesting, and it gets your brain involved to help you focus on what might otherwise become a mundane activity.

With a wearable, we can do this too! You extend the narrative to the activity itself, such as a jog, and delay telling users what happened until they are done and have logged in to their account on a PC, smartphone or tablet. You have to reinforce the imagery and narrative a bit more in the supporting apps on devices with a screen.

ARGs really got me thinking about persisting a narrative. It is one thing to apply virtual significance to real-world objects, but what happens if we have no user interface at all? What are we left with? The most powerful tool we have access to is our human brains, so why not use those too? Sometimes, as software designers, I think we forget how powerful this can be, and we almost talk down to our users. We dumb everything down and over-praise them rather than respecting that they might have different interpretations or alternative ways of creating value for themselves with our software. Just because we didn’t think of it doesn’t mean it has no merit. It does require a shift towards encouraging overall experiences rather than a set of steps that have to be followed, which can be challenging at first.

Wearable Integration – Data Conversion

If you are working with a wearable that doesn’t have a screen or UI, and is essentially a measuring device, one option for tying in your app experience is to convert the data from one context into another. This can be done by tying into the APIs for popular wearables. You don’t have an app on the device, but your app ties into the data that the device gathers and uses it for something else. For example, convert effort from an exercise wearable into something else in your app. One example of this is Virgin Pulse, an employee engagement application that has a wearable that tracks exercise. Exercise with the wearable can be converted into various rewards within their system. The opportunities for this sort of conversion of data measured for one purpose into another experience altogether are endless.
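
As a minimal sketch of such a conversion (the names and rates below are invented for illustration and are not Virgin Pulse’s actual rules), a single conversion layer can map activity read from a wearable’s API onto in-app reward points:

// Hypothetical conversion layer: wearable activity data -> in-app reward points.
// The rates are invented for illustration; real rules must be published to
// users, as the cautions below explain.
public static class ActivityConverter
{
    const int PointsPerThousandSteps = 10;
    const int PointsPerActiveMinute = 2;

    public static int ToRewardPoints(int steps, int activeMinutes)
    {
        return (steps / 1000) * PointsPerThousandSteps
             + activeMinutes * PointsPerActiveMinute;
    }
}

Keeping the rules in one visible place like this also makes them easy to publish and to version, which matters for the fairness cautions below.
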

One app I designed extended data-generation activities into a narrative in an app. We extended our app concepts to the physical activity and tapped into the creative minds and vivid imaginations of the humans using the devices with a few well-placed cues. This was initially the most difficult app for me to design, but it turned out to be the overwhelming favourite from a “fun to use” perspective. The delay between generating the data out in the real world, and then coming home and using your PC or tablet to discover what the data measured by the wearable had created in our app, was powerful. Anticipation is a powerful thing.

However, be careful when you do this. Here are a couple of things to be aware of:

  • Make sure the conversion rules are completely transparent and communicated to the users. Users need to feel like the system is fair, and if they feel taken advantage of, they will stop using your app. Furthermore, you could run afoul of consumer protection groups and laws in different jurisdictions if you don’t publish the rules, or if you change them without user consent.
  • Study currency conversion for ideas on how to do this well. Many games use the US dollar as a baseline for virtual in-game currencies, mirroring real-world markets. These are sophisticated systems with a long history, so you don’t have to re-invent the wheel; you can build on knowledge and systems that are already there.
Add Variability Design Mechanics to Combat Boredom

It can be really boring to use a device that just does the same things over and over. Eventually, it can just fade into the background and users don’t notice it anymore, which causes you to lose them. If they are filtering out your app, they won’t engage with it. Now, this is a tricky area to address because the last thing you want to do is harass people or irritate them. I get angry if an app nags me too much to use it, like some needy ex or try-hard salesman. However, a bit of design work here can help add some interest without being bothersome, and in many cases, add to the positive experience.

Here are some simple ideas on adding variation:

  • Easter Eggs: add in navigation time-savers that will be discovered by more savvy users and shared with friends to surprise and delight.
  • Variable Results: don’t do the same thing every time. Add in different screen designs for slightly different events. One trick is to use time and seasons as cues to update a screen with themes that fit daytime, night time, and the seasons (see the sketch after this list). Another is to use the current context of use to change the application behaviour or look. There are lots of things you can do here.
  • Game Mechanics: levelling and progression can help people feel a sense of progress and accomplishment, and if there are rewards or features that get unlocked at different levels, it can be a powerful motivator. It also adds dimensions to something repetitive that changes the user’s perspective and keeps it from getting stale.
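
Here is a minimal sketch of the time-and-season cue from the Variable Results idea above; the hour boundaries and the northern-hemisphere season mapping are assumptions for illustration.

using System;

// Hypothetical theme picker: vary a watch face by time of day and season
// instead of showing the same screen every time.
public enum Theme
{
    SpringDay, SpringNight, SummerDay, SummerNight,
    AutumnDay, AutumnNight, WinterDay, WinterNight
}

public static class ThemePicker
{
    public static Theme ForNow(DateTime now)
    {
        bool day = now.Hour >= 7 && now.Hour < 19;  // crude daylight check
        int month = now.Month;                       // northern-hemisphere seasons
        if (month >= 3 && month <= 5) return day ? Theme.SpringDay : Theme.SpringNight;
        if (month >= 6 && month <= 8) return day ? Theme.SummerDay : Theme.SummerNight;
        if (month >= 9 && month <= 11) return day ? Theme.AutumnDay : Theme.AutumnNight;
        return day ? Theme.WinterDay : Theme.WinterNight;
    }
}
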
Provide for User Control and Error Correction

As we learned when designing notifications for a smartwatch, it can be incredibly irritating if it goes off all the time, buzzing on your wrist. Since wearables are integrated with our clothing, or worn directly next to our bodies, it is incredibly important to provide options and control for users. If your app is irritating, people will stop using it. However, one person’s irritating is another person’s delight, so be sure to allow for notifications and vibrations and similar affordances in your product to be turned on and off.
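
A minimal sketch of what that user control might look like in code (the quiet-hours option is an extra assumption beyond the simple on/off toggles described above):

using System;

// Hypothetical user-controllable notification settings for a wearable app.
public class NotificationSettings
{
    public bool AlertsEnabled = true;     // on-screen or blinking-light alerts
    public bool VibrationEnabled = true;  // buzzing on the wrist
    public TimeSpan QuietHoursStart = new TimeSpan(22, 0, 0); // assumption: quiet hours
    public TimeSpan QuietHoursEnd = new TimeSpan(7, 0, 0);

    // Check every setting before letting the device interrupt the user.
    public bool MayVibrate(DateTime now)
    {
        return VibrationEnabled && AlertsEnabled && !InQuietHours(now.TimeOfDay);
    }

    bool InQuietHours(TimeSpan t)
    {
        // Quiet hours that span midnight wrap around the clock.
        return QuietHoursStart <= QuietHoursEnd
            ? t >= QuietHoursStart && t < QuietHoursEnd
            : t >= QuietHoursStart || t < QuietHoursEnd;
    }
}
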

Conclusion

This is one of the most fun areas for me right now in my work, and I hope you find my initial brain dump of ideas on the topic helpful. Sensor-based devices are gaining in popularity, and all indications show that some combination of them will become much more popular in the future.

Categories: Blogs

Q&A: Context-Driven Testing Champions Talk Trends, Preview Let’s Test Oz

uTest - Fri, 07/11/2014 - 18:12

Henrik Andersson and David Greenlees are two well-known contributors to the context-driven testing community and together co-founded the Let’s Test conferences, which celebrate the context-driven school of thought. Let’s Test Oz is slated for September 15-17 just outside Sydney, Australia, and uTest has secured an exclusive 10% discount off new registrations. Be sure to email testers@utest.com for this special discount code if you plan on attending.

In this interview, we talk with Henrik and David on trends in the context-driven community, and get a sense of what testers can expect at Let’s Test Oz.


uTest: Like James Bach, you’re both members of the ‘context-driven’ testing community. What drove each of you to context-driven testing?

HA: Actually, James did. I had close to no awareness of the context-driven testing (CDT) community before I hosted James’ RST class in Sweden in spring of 2007. During my discussions with James, I found that we shared lots of fundamental views on testing, and he insisted that I should meet more people in the CDT community.

James told me about the CAST conference that took place in the States, and that just before this, there would be a small peer conference called WHET 4 that his brother Jon hosted. A few days later, I got an invitation from Jon Bach to attend. At this workshop, where we spent a weekend discussing Boundary Testing, I met testers like Cem Kaner, Ross Collard, Scott Barber, Rob Sabourin, Michael Bolton, Doug Hoffman, Keith Stobie, Tim Coulter, Dawn Haynes, Paul Holland, Karen Johnson, Sam Kalman, David Gilbert, Mike Kelly, and, of course, Jon and James Bach. From then on I was hooked!

DG: Difficult question to answer without writing a novel! I wrote about my testing journey some time back, however, that doesn’t really touch on my drivers toward the CDT community. If I was to pinpoint one thing, it would be the book Lessons Learned in Software Testing (Bach, Kaner, Pettichord). This was my first introduction to the community and to what I believe is a better way to test…in fact…the only way to test.

What keeps me here is the fantastic people I come across each and every day. We challenge each other, we’re passionate, and we’re not afraid to put our opinions out there for the world to hear and critique. This all adds to the betterment of our craft, which is our ultimate goal. I’m a firm believer that there is no ‘one-size-fits-all’ approach to testing, and when you add that to my natural tendency to explore rather than confirm, I find that the CDT community is a great fit for me.

uTest: And speaking of James Bach, he’s one of the keynote speakers at Let’s Test Oz in the Fall. Can you tell us a little bit about the idea behind the show, and why you felt it was time for context-driven conferences in Europe and Australia?

HA: Let’s Test is all about building, growing and strengthening the CDT community. We have successfully arranged Let’s Test three years in a row in Europe, but the attendees are coming from all over the world. The idea behind Let’s Test is to create a meeting place for testers to learn, share experiences, grow, meet other testers, do some real testing, and, of course, to have a whole lot of fun.

When David Greenlees and Ann-Marie Charrett told me about what they were looking to achieve, I immediately felt that it was in line with Let’s Test, and believe Let’s Test can be a great vehicle to grow the CDT community in Australia.

Last year, we did a one-day tasting of Let’s Test in Sydney, and this year, we did one in the Netherlands. In November, we will be hosting one in Johannesburg, South Africa. The purpose of the small tastings of Let’s Test is for testers to get a glimpse of the Let’s Test experience at a really low cost. If you can’t come to the real Let’s Test, this is a great alternative to check out what it is all about.

DG: From the Australian point of view, it’s fair to say that the CDT community is very small. We refer to the area as ‘Downunder’ — this is our way of saying Australia and New Zealand. I felt it was time to change that, and one way to help the CDT community thrive is to hold a CDT conference.

For quite a few years now, I’ve felt that Downunder needed a different style of software testing conference, one where conferring is the ultimate goal, and so I emailed Henrik, and he was extremely positive and encouraging…so here we are.

uTest: What’s changed or surprised you the most about the context-driven testing community in the past couple of years?

DG: That’s difficult for me to answer as I’ve only been engaged with, and a member of, the CDT community for 3-4 years now. What I have found is that the CDT community constantly changes — it’s the nature of being driven by your context. The testing we do changes all the time as the technology we use, and development approaches we undertake, change all the time. Not only that, as a community, we are big on education and the study of our craft. Along with that comes new discoveries every week! All you need to do is follow the blogs of CDT community members and you’ll be blown away by what’s being learned and shared constantly.

uTest: ‘Best practices’ can be a bit of a dirty phrase if you subscribe to the context-driven testing school of thought – one size fits all is rarely in the cards. But is there ever a situation where best practices are warranted as a tester?

HA: “Best Practices” are a sales gimmick that too many people have been repeating for too long, with the result that people who are looking for a shortcut have started to believe in it.

I’m not interested in whether there might be a hypothetical situation where it is valid. I just don’t like the concept of it, since it limits my ability to think, be creative, experiment and learn new stuff — in short, to do good and valuable work.

uTest: What separates a great tester from the rest of the pack?

DG: Dare I say it: It depends on the context.

One thing I will call out is passion for the craft and self-education. If a tester has passion for what they are doing, the self-education tends to come naturally, and the rest falls into place. Sure, there will always be times when a tester questions whether testing is right for them — that’s a part of a tester’s evolution — but passionate testers will always fall back to it, or change something to re-motivate themselves.

I don’t believe in calling out particular skills that a tester should have, because every context is different, and may require a completely different set of skills.

uTest: Who are some of the folks in testing you’d consider an ‘influencer,’ whether a peer or someone who’s a social media giant?

HA: There are so many who have influenced my work. I mentioned a few in my previous answers. Today, I get lots of inspiration from my fellow testers at House of Test. There is also a bunch of ‘newish’ testers with lots of energy. To mention a few:

  • Ben Kelly
  • Louise Perold
  • Chris Blain
  • Tim Coulter
  • Ilari Aegerter
  • Huib Schoots
  • Iain McCowatt
  • Maria Kedemo
  • Erik Brickarp

DG: I’d like to stay local for this answer, and personal. I don’t believe that I can speak for anyone else when calling out the influencers of our craft — it’s a very personal thing. There is a small group of testers who I’m in regular contact with and who influence me, for the better, almost every day (in no particular order):

These testers all have very different strengths that I call upon often. They are my ultimate ‘test team’ of influencers Downunder. Guess what? There is a bonus. If you come to Let’s Test Oz, you’ll get to meet almost every single one of them!

uTest: Can you give our testers a preview of what to expect at Let’s Test Oz in September?

DG: The best preview anyone could possibly hope for is on our Archive website page. From here, you can access notes for tutorials and sessions, our YouTube channel and other videos from attendees, blog post reviews, pictures, and much more.

What I will say is to come prepared to confer and learn! It’s a retreat-style conference where accommodation and all meals are included, and the evenings are just as valuable as the sessions during the day. We put the confer back into conference!

Categories: Companies

The Atlassian Story with guest Tim Pettersen on Nexus Live

Sonatype Blog - Fri, 07/11/2014 - 17:56
Developers around the world are using BitBucket, Stash, Confluence, Jira and HipChat to help manage their projects. In the July 31 installment of Nexus Live, we’ll talk with Tim Pettersen, Developer Advocate at Atlassian.  We’ll find out what’s in store for future releases and how...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Conventional HTML in ASP.NET MVC: Adopting Fubu conventions

Jimmy Bogard - Fri, 07/11/2014 - 17:42

Other posts in this series:

Now that we’ve established a base for programmatically building out HTML, we can start layering on top more intelligent model-centric conventions for both displaying and editing data. Eventually, I want to get to the point of not just supporting simple conventions like “DateTime = date picker” and “bool = checkbox” but much more powerful ones. Things like “a property named ‘Comments’ should be a textarea” or “a property of an entity type should make its dropdown options the list of items from the database”. These are all things that are universal to the metadata found on those properties.

On top of that, I still want to enable customization, but at the HTML tag level. Ultimately, these builders build one or more tags, but nothing bigger than that. I don’t want to use templating for this – complex logic inside a template is difficult to do, which is why you have concepts like Ember views or Angular directives. I want to build an equivalent to those, but for MVC.

I built a rather lousy, but functional, version years ago, but missed the concept of an object model to build HTML tags. That lousy version was ported to MVC Contrib, and should not be used by anybody under any circumstances. Instead, I’ll pull in what FubuMVC already built, which, it turns out, is built on top of the HtmlTags library.

I’m not going to use any of the other pieces of FubuMVC – just the part that conventionally builds HTML tags. First things first, we’ll need to get the Fubu MVC conventions and related packages up and running in our app.

Integrating Fubu Conventions

First, we’ll need to install the correct packages. For this, we’ll just need a couple:

  • FubuMVC.Core.UI
  • FubuMVC.StructureMap3

These two pull down a number of other packages. To make things easier on ourselves, we’ll also install the StructureMap package for integrating with MVC 5:

  • StructureMap.MVC5

Once that’s done, we have StructureMap plugged in to MVC, and the components ready for plugging FubuMVC into ASP.NET MVC. We’ll need to make sure that the correct assemblies are loaded into StructureMap for scanning:

Scan(
    scan => {
        // Add each Fubu assembly to the scan so its types are registered
        // with StructureMap's default conventions (IFoo -> Foo)
        scan.AssemblyContainingType<IFubuRequest>();
        scan.AssemblyContainingType<ITypeResolver>();
        scan.AssemblyContainingType<ITagGeneratorFactory>();
        scan.AssemblyContainingType<IFieldAccessService>();
        scan.AssemblyContainingType<StructureMapFubuRegistry>();

        scan.TheCallingAssembly();
        scan.WithDefaultConventions();
        scan.LookForRegistries();
    });

We just make sure we add the default conventions (IFoo –> Foo) for the Fubu assemblies we referenced as part of the NuGet packages. Next, we need to configure the pieces that are normally done through FubuMVC configuration, but because we’re not pulling in all of Fubu, we need to do this through container configuration:

public class FubuRegistry : Registry
{
    public FubuRegistry()
    {
        // Build the conventions library and import Fubu's defaults
        var htmlConventionLibrary = new HtmlConventionLibrary();
        htmlConventionLibrary.Import(new DefaultHtmlConventions().Library);
        For<HtmlConventionLibrary>().Use(htmlConventionLibrary);

        // Value sources are analogous to MVC's ValueProviders
        For<IValueSource>().AddInstances(c =>
        {
            c.Type<RequestPropertyValueSource>();
        });
        // Activators fill in extra details around a tag request
        For<ITagRequestActivator>().AddInstances(c =>
        {
            c.Type<ElementRequestActivator>();
            c.Type<ServiceLocatorTagRequestActivator>();
        });
        // Bridge ASP.NET's intrinsic objects into the container
        For<HttpRequestBase>().Use(c => c.GetInstance<HttpRequestWrapper>());
        For<HttpContextBase>().Use(c => c.GetInstance<HttpContextWrapper>());

        For<HttpRequest>().Use(() => HttpContext.Current.Request);
        For<HttpContext>().Use(() => HttpContext.Current);

        For<ITypeResolverStrategy>().Use<TypeResolver.DefaultStrategy>();
        // MVC-style element naming (foo[0].FirstName)
        For<IElementNamingConvention>().Use<DotNotationElementNamingConvention>();
        // Open generics must be registered explicitly
        For(typeof(ITagGenerator<>)).Use(typeof(TagGenerator<>));
        For(typeof(IElementGenerator<>)).Use(typeof(ElementGenerator<>));
    }
}

There’s a bit here. First, we create an HtmlConventionLibrary, import the default conventions, and register this instance with the container. We’re going to modify this in the future, but for now we’ll use the defaults. This class tells FubuMVC how to generate HtmlTag instances based on element requests (more on that soon). Next, we register a value source, which is analogous to a ValueProvider in MVC. The ITagRequestActivator is for filling in extra details around a tag request (again, normally filled in with FubuMVC configuration).

Since FubuMVC still has pieces that bridge into ASP.NET, we need to register the HttpContext/Request classes based on HttpContext.Current. In the future ASP.NET version, this registration would go away in favor of Web API’s RequestContext.

The ITypeResolverStrategy determines how to resolve a type based on an instance. I included this because, well, something required it, so I registered it. Much of this configuration was a bit of trial and error until pieces worked. That’s not a knock on Fubu; this is what you deal with when bridging two similar frameworks together. Still much cleaner than bridging validation frameworks together *shudder*.

The IElementNamingConvention we’re using tells Fubu to use the MVC-style notation for HTML element names (foo[0].FirstName). Finally, we register the open generic tag/element generators. Even though the naming convention is IFoo->Foo, StructureMap doesn’t automatically register open generics.

This is the worst, ugliest part of integrating Fubu into ASP.NET MVC. If you can get past this piece, you’re 100 yards from the marathon finish line.

Now that we have Fubu MVC configured for our application, we need to actually use it!

Supplanting the helpers

Because the EditorFor and DisplayFor helpers are impossible to completely replace, we need to come up with our own methods. FubuMVC exposes similar functionality in its InputFor/DisplayFor/LabelFor methods. We need to build HtmlHelper extensions that call into the FubuMVC element generators instead:

public static class FubuAspNetTagExtensions
{
    // Similar methods for Display/Label
    public static HtmlTag Input<T>(this HtmlHelper<T> helper, 
        Expression<Func<T, object>> expression)
        where T : class
    {
        // Resolve an element generator for the view's model type...
        var generator = GetGenerator<T>();

        // ...and let it build the input tag for the requested member
        return generator.InputFor(expression, model: helper.ViewData.Model);
    }

    private static IElementGenerator<T> GetGenerator<T>() where T : class
    {
        // Service location via MVC's DependencyResolver (backed by StructureMap)
        var generator = DependencyResolver.Current.GetService<IElementGenerator<T>>();
        return generator;
    }
}

We build an extension method for HtmlHelper that accepts an expression for the model member you’re building an input for. Next, we use the dependency resolver (service location, because MVC) to request an instance of an IElementGenerator based on the model type. Finally, we call the InputFor method of IElementGenerator to generate an HtmlTag based on our expression and model. Notice there’s no ModelState involved (yet). We’ll get to validation in the future.

Finally, we need to use these Label and Input methods in our forms. Here’s one example from the Register.cshtml view from the default MVC template:

<div class="form-group">
    @Html.Label(m => m.Email).AddClass("col-md-2 control-label")
    <div class="col-md-10">
        @Html.Input(m => m.Email).AddClass("form-control")
    </div>
</div>
<div class="form-group">
    @Html.LabelFor(m => m.Password, new { @class = "col-md-2 control-label" })
    <div class="col-md-10">
        @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
    </div>
</div>

I left the second form group alone to contrast with our version. So far not much is different. We do get a more elegant way of modifying the HTML: instead of weird anonymous classes, we get targeted, explicit methods. But more than that, we now have a hook to add our own conventions. What kinds of conventions? That’s what we’ll go into in the next few posts.

Ultimately, our goal is not to build magical, self-assembling views. That’s not possible or desirable. What we’re trying to achieve is standardization and intelligence around building model-driven input, display, and label elements. If you’re familiar with Angular directives or Ember views, that’s effectively what our conventions are doing – encapsulating intelligent, metadata-driven HTML elements.

Next up: applying our own conventions.


Categories: Blogs

[Webinar] "Fast IT": Concepts and Examples from Assembla and Attivio

Assembla - Fri, 07/11/2014 - 17:41

Join us on July 23, 2014 from 11:00 AM - 11:45 AM EDT for a webinar “Fast IT”: Concepts and Examples from Assembla and Attivio.


When we at Assembla heard about the 2-2-2 project structure used by Attivio, we knew we had a fun story and a big idea to share. The fun story is the way that Attivio can spin up major Business Intelligence apps with 2-day, 2-person prototyping sessions. The big idea is “Fast IT”: a way of managing fast and Agile projects while working smoothly with your slower, more reliable core systems: “Core IT”.

In this Webinar, Sid Probstein, CTO of Attivio, and Andy Singleton, founder of Assembla, will share their discoveries about ways that “Core” and “Fast” can work smoothly together.  We will show tools that help you wrap and index your Core IT so that you can easily use it in Fast IT projects.  And, we’ll show how to professionally launch and manage an expanding portfolio of Fast IT projects for analytics, Web, mobile and marketing applications and SaaS integration. 

This Webinar is designed to help IT professionals or project managers who are handling analytics, Web, mobile, cloud and marketing applications.


Presented By:

Assembla and Attivio

Categories: Companies

Lessons of Youth: A License to Use

Sonatype Blog - Fri, 07/11/2014 - 16:03
I can still recall (it actually pains me to count the years, so I refuse to) with perfect clarity the sound of my 1200 baud modem handshaking with my neighborhood’s local BBS. It’s a sound that so consistently produces a smile for me, I liken it to the crisp smell of air just before rain begins to...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

In a post-Heartbleed world, firms need to scrutinize their open source security

Kloctalk - Klocwork - Fri, 07/11/2014 - 15:30

For quite a while, open source security solutions enjoyed a virtually unbroken string of successes, with little in the way of negative news surrounding these offerings. Then came Heartbleed. Undoubtedly the most significant setback to open source security ever discovered, the Heartbleed vulnerability exposed a tremendous percentage of the total Internet to possible cyberthreats.

In light of this revelation, many industry experts have offered their thoughts on the future of open source. Speaking at Computing's recent Enterprise Security and Risk Management Summit in London, a number of panelists asserted that Heartbleed should indeed cause businesses to scrutinize their open source security efforts, but that it is too late to abandon such solutions altogether.

Open source on the loose
The discussion began when Computing Magazine editor Stuart Sumner asked the panelists whether Heartbleed should cause business decision-makers to be more doubtful toward open source software. In response, Marc Lueck, director of global threat management at publishing house Pearson, argued that there is really no longer any choice in the matter.

"We don't have the opportunity to change our minds now, we're using open source, that decision is made," he said, the news source reported. "We now need to figure out how to fix it, how to solve it, how to protect ourselves from decisions that have already been made."

Lueck is far from the only expert to share this view. Writing for ZDNet, Steven J. Vaughan-Nichols recently argued that the "future belongs to open source," even despite the Heartbleed revelation.

"Outside of Apple and Microsoft, everyone, and I mean pretty much everyone, has already decided that open source is how they'll develop and secure their software. Google, Facebook, Yahoo, Wikipedia, Twitter, Amazon, you know all of Alexa's top ten websites in the world, rely on open-source software every day of the year," Vaughan-Nichols wrote.

Scrutiny needed
The significance of Heartbleed, therefore, is not that companies need to reconsider their commitment to open source, but rather that firms should make more of an effort to ensure that these solutions are fully protected and applicable to a given situation, as Ashley Jelleyman, head of information assurance at BT and a participant in the Computing panel, explained.

"I think the real issue is not whether it's open source or closed source, it's actually about what you do with it and how you actually evaluate it to make sure it's fit for purpose," said Jelleyman, Computing Magazine reported. "It's have we checked this through, are we watching what it's doing?"

This thought goes back to one of the most popular notions concerning open source security: The idea that with enough eyes, all bugs are shallow. With proprietary solutions, a few oversights could potentially lead to a serious software security flaw, but open source enables and, theoretically, requires more people to examine any given piece of code. This dramatically reduces the likelihood that a major vulnerability will persist for long.

Yet such an oversight is precisely what happened to OpenSSL, leading to the Heartbleed flaw. Essentially, every organization that leveraged OpenSSL assumed that the software had been thoroughly vetted. It was so popular, it seemed inevitable that someone at some point would have noticed if there was any real vulnerability.

To protect themselves from future open source security risks, organizations need to take a closer look at their open source practices, rather than relying heavily on assumptions. By adopting a more critical posture to understand where open source is being used and the associated risks, firms can embrace open source and all of the advantages it entails without compromising their cybersecurity capabilities.

Categories: Companies

New Online Course: Beautiful Builds and Continuous Delivery Patterns

ISerializable - Roy Osherove's Blog - Fri, 07/11/2014 - 10:31

 

I somehow forgot to blog about this, but it’s never too late. My new online course Beautiful Builds and Continuous Delivery patterns is now available and is $25 until the end of this month (July).

Here’s the course description:

Ah, Continuous Delivery. Everybody and their sister are talking about it, but in real life, nothing is ever as simple as listening to a conference talk about it.

  • Can you really deploy 20 times a day if your QA department is breathing down your neck because they are using the staging server between 9 and 5?
  • Are the teams waiting for each other to finish their work, creating bottlenecks?
  • Is security threatening to have you fired for even suggesting you deploy to production?

In this course, Roy Osherove, author of the books “Beautiful Builds” (still in progress, actually) and “The Art of Unit Testing”, discusses common problems and solutions (patterns!) in build automation and continuous delivery.

We start from the basics, defining the differences between automated builds and CI, separation of concerns in build management, and then move on to more advanced things such as making builds faster using artifacts, solving versioning issues with snapshots, cross-team dependencies, and much more.

  • All videos are both streamable AND downloadable to watch offline, no DRM.

More info at beautifulbuilds.com.

 

Categories: Blogs

Should you write unit tests or integration tests?

ISerializable - Roy Osherove's Blog - Fri, 07/11/2014 - 10:22

I got this question in the mail. I thought it was quite valid for many other people:

Question:

 

Trying to promote unit tests in a new workplace: the “search” action from the UI goes through an IoC container, which calls a WCF service, where the search itself is done using Entity Framework auto-generated code. My colleague claims that due to the multiple dependencies, it doesn’t make sense to fake everything when running the unit test, and that it makes more sense to do integration tests in this case.

Is my hunch correct? Should I put efforts into implementing unit tests in this complex scenario?

 

My Answer:

You can go either way.

It really depends on time tradeoffs:
  1. How long will it take to write the tests? For integration tests you will have to wait until the whole system is complete to get the tests even failing, unless you start from a webpage, in which case they will fail for as long as the entire system of layers is not built.
  2. How long does it take to run and get feedback? Integration tests usually take longer to run and are more complicated to set up. But when they pass, you get a good sense of confidence that all the parts work nicely with each other. With unit tests you will get faster feedback, but you will still need those integration tests for full system confidence. On the other hand, with integration tests and no unit tests, the developers will have to wait through long cycles before they know if they broke something.

There is no one answer. I usually do a mix of both. For web systems I might even start with acceptance tests that fail, and then slowly fill the system in with unit tests for the parts of the features with more focused functionality. I guess you can change the question to: what type of tests should we write FIRST? Only you know that; it changes for every project and system.
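
To illustrate the trade-off from the question, here is a minimal unit-test sketch in which every type is invented for the example: the search logic takes the WCF service behind an interface (registered in the IoC container), so a hand-rolled fake replaces WCF and Entity Framework entirely, while a separate integration test would exercise the real stack end to end.

using System.Collections.Generic;

// All types here are invented for the example.
public interface ISearchService
{
    IList<string> Search(string query);
}

public class SearchController
{
    private readonly ISearchService _service;  // normally injected by the IoC container

    public SearchController(ISearchService service)
    {
        _service = service;
    }

    public IList<string> Search(string query)
    {
        // Guard logic worth unit testing without the real WCF/EF stack
        if (string.IsNullOrWhiteSpace(query)) return new List<string>();
        return _service.Search(query.Trim());
    }
}

// Hand-rolled fake: no WCF endpoint, no database, instant feedback.
public class FakeSearchService : ISearchService
{
    public string LastQuery;

    public IList<string> Search(string query)
    {
        LastQuery = query;
        return new List<string> { "result1" };
    }
}

public class SearchControllerTests
{
    // Mark with [Test]/[Fact] in your test framework of choice.
    public void Search_TrimsQuery_BeforeCallingService()
    {
        var fake = new FakeSearchService();
        var controller = new SearchController(fake);

        var results = controller.Search("  tester  ");

        // Plain asserts keep the sketch framework-neutral.
        System.Diagnostics.Debug.Assert(fake.LastQuery == "tester");
        System.Diagnostics.Debug.Assert(results.Count == 1);
    }
}

An integration test for the same feature would build the real WCF client and database, which is exactly the slower, higher-confidence end of the trade-off described above.
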

 

Categories: Blogs

Software Testing T~log! (help needed)

Yet another bloody blog - Mark Crowther - Fri, 07/11/2014 - 02:41
Hey All,

Well, I finally got my equipment and software sorted out to be able to Vlog. That is, Video Log, creating a kind of video based diary or End of Day Stand-up.

However, it's not that life-diary kind of Vlog; it's a Testing Vlog or, as I'm going to call it, a T~log! Yay, as in Testing, kinda like a video blog :) #t~log! is my official new hashtag.

I've been wanting to do these for a while but couldn't nail the format. In the end it struck me there was more than enough stuff happening in a testing day to chat to the testing community about for 10 to 15 minutes.

In the latest video I mention a new paper on the main site, An Approach to Project Sizing and Complexity – grab it here: http://www.cyreath.co.uk/papers.html. There's also the London Tester Gathering on the 20th of July; let people know you're coming by visiting the Meetup site: http://www.meetup.com/agiletesting/. I also mention the CIA style guide, testing feeds and plans for the next videos.

Once I work out the tech, I hope to get others on the t~log! and pull in more stuff than just my own testing day. It would be great to have a literal EoD Stand-up t~log! of what happened today in testing. Bear with me while I polish the format...


but... I need your help
I will: 

  • Try and t~log! daily (Mon to Thursday minimum)
  • Call out meetings, events, sites, blogs, resources, etc. that are of interest to the community
  • Get others to t~log! with me
  • Keep the t~log! informative and provide links to resources etc

I need you to:



Thanks in advance,

Mark

Subscribe, Watch, Rate! http://www.youtube.com/subscription_center?add_user=cyreath

#t~log!




Categories: Blogs

Throwback Thursday: 80’s Tech at its Best

uTest - Thu, 07/10/2014 - 22:46

The ’80s brought with them an incredible range of technology that, for better or worse, shaped the age we live in now. For this TBT, we’ll take a quick look at some of the more surreal/novel items that came from the land of neon and synth.


The Private Eye, brought to us by Reflections Technology, allowed the wearer to view a 1-inch LED screen with image quality comparable to a 12-inch display. Released in 1989, the Private Eye head-mounted display was used by hobbyists and researchers alike, going on to become the subject of an augmented reality experiment in 1993. To think that this type of wearable technology has only been tapped into fully within the past 3 years is pretty mind-blowing.


The Stereo Sound Vest provided the wearer with a $65 portable speaker solution, offering a ‘safer’ listening option without the use of headphones. With zip-off sleeves, it’s a wonder this wasn’t all the rage.


This all-in-one player included an AM-FM stereo, microcassette player, recorder-player, calculator, and a digital alarm clock that fit in your hand. This was the Swiss Army knife of media at the time…and boy was it a looker!

What’s your favorite piece of 80s tech nostalgia that you yearn for? Be sure to let us know in the comments.

Categories: Companies

uTest Non-profit Partner Brings 150 Software Testing Jobs to the Bronx

uTest - Thu, 07/10/2014 - 19:43

IT job training non-profit Per Scholas plans to bring 150 new software testing jobs to the Bronx, New York, this Fall when it opens a large software testing center there.

According to a DNAinfo.com news story:

Per Scholas, which is based in The Bronx, and the IT consulting company Doran Jones plan to open the roughly $1 million, three-story, 90,000-square-foot software testing center at 804 E. 138th St., near Willow Avenue.

All of the entry-level jobs will be sourced from Per Scholas graduates, and the boom of 150 new jobs is widely expected to open a lot of doors not usually available in the urban Bronx neighborhood. Keith Klain, co-CEO of Doran Jones, hopes to see the center eventually grow to 500 employees.

As a proud partner of Per Scholas, uTest was there for the groundbreaking of the testing center earlier in 2014, and looks forward to many more lives that we can collectively influence.

Per Scholas is a non-profit with the mission of breaking the cycle of poverty by providing technology education, access, training and job placement services for people in underserved communities.

 

Categories: Companies

Stop Comparing Software Delivery With Manufacturing!

James Betteley's Release Management Blog - Thu, 07/10/2014 - 17:54

A couple of weeks ago I was at an Experience Devops event in London and I was talking about how software delivery, which is quite often compared to a manufacturing process, is actually more comparable to a professional sports team. I didn’t really get time to expand on this topic, so I thought I’d write something up about it here. It all started when I ran a cheap-and-nasty version of Deming’s Red Bead Experiment, using some coloured balls and an improvised scoop…

The Red Bead Experiment

I was first introduced to Deming’s Red Bead Experiment by a guy called Ben Mitchell (you can find his blog here). It’s good fun and helps to highlight how workers are basically constrained by the systems they work in. I’ll try to explain how the experiment works:

  • You have a box full of coloured beads
  • Some of the beads are red
  • You have a paddle with special indentations, which the beads collect in (or you could just use a scoop, like I did).
  • You devise a system whereby your “players” must try to collect exactly, let’s say, 10 red beads in each scoop.
  • You record the results

Now, given the number of red beads available, it’s unlikely the players will be able to collect exactly 10 red beads in each scoop. In my specially tailored system I told the players to keep their eyes closed while they scooped up the balls. I also had about half as many red beads as any other colour (I was actually using balls rather than beads, but that doesn’t matter!). The results from the first round showed that the players were unable to hit their targets. So here’s what I did:

  • Explain the rules again, very clearly. Write them down if necessary. Be as patronising as possible at this point!
  • Encourage the players individually
  • Encourage them as a team
  • Offer incentives if they can get the right number of red beads (free lunch, etc)
  • Record the results

Again, the results will be pretty much the same. So…

  • Threaten the individuals with sanctions if they perform badly
  • Pick out the “weakest performing” individual
  • Ask them to leave the game
  • Tell the others that the same will happen to them if they don’t start hitting the numbers.

In the end, we’ll hopefully realise that incentivising and threatening the players has absolutely zero impact on the results, and that the numbers we’re getting are entirely a result of the flawed system I devised. Quite often, it’s the relationship between workers and management that gets the attention in this experiment (the encouragement, the threats, the singling out of individuals), but I prefer to focus on the effect of the constraining system. Basically, the results are all down to the system, not the individual.
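
If you want to see the point without scooping any beads, here is a minimal simulation sketch; the exact bead counts and scoop size are assumptions for illustration, with roughly half as many red beads as any other colour, as in my version of the game.

using System;

// Hypothetical red bead simulation: the red-bead count per scoop is governed
// entirely by the mix in the box and the size of the scoop, never by the
// player. All quantities below are assumptions for illustration.
class RedBeadSimulation
{
    static void Main()
    {
        var rng = new Random();
        const int totalBeads = 550;  // 50 red plus 100 each of five other colours
        const int redBeads = 50;
        const int scoopSize = 50;    // beads collected per blind scoop
        const int target = 10;       // the number the "players" must hit

        for (int round = 1; round <= 5; round++)
        {
            // Draw scoopSize beads without replacement, eyes closed
            int redsLeft = redBeads, beadsLeft = totalBeads, redsScooped = 0;
            for (int i = 0; i < scoopSize; i++)
            {
                if (rng.Next(beadsLeft) < redsLeft) { redsScooped++; redsLeft--; }
                beadsLeft--;
            }
            Console.WriteLine("Round {0}: {1} red beads (target {2})", round, redsScooped, target);
        }
        // No amount of encouragement, incentives or threats changes these numbers.
    }
}
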

Thanks Kanban!

I think one of the reasons why the software industry is quite obsessed with traditional manufacturing systems is the Toyota effect. I’m a huge fan of the Toyota Production System (TPS), Just-in-Time production (JIT), Lean manufacturing and Kanban – they’re all great ideas and their success in the manufacturing world is well documented. Another thing they all have in common is that various versions of these principles have been adopted in the software development world. I also happen to think that their application in the software development world has been a really good thing. However, the side effect of all this cross-over is that people have subconsciously started to equate software delivery processes with manufacturing processes. Just look at some of the terminology we use every day:

  • Software engineering 
  • Software factories
  • Kanban
  • Lean
  • Quality Control (a term taken directly from assembly lines)

It’s easy to see how, with all these manufacturing terms around us, the lines can become blurred in people’s minds. Now, the problem I have with this is that software delivery is NOT the same as manufacturing, and applying a manufacturing mindset can be counter-productive when it comes to the ideal culture for software development. The crucial difference is the people and their skillsets. Professionals involved in software delivery are what are termed “knowledge workers”. This means that their knowledge is their key resource; it’s what sets them apart from the rest. You could say it’s their key skill. Manufacturing processes are designed around people with a very different skillset, often ones that involve doing largely repetitive tasks or following a particular routine. These systems tend not to encourage innovation or “thinking outside of the box” – this sort of thing is usually assigned to management, or other people who tend not to be on the production line itself. Software delivery professionals, whether they be UX people, developers, QA, infrastructure engineers or whatever, are all directly involved in the so-called “production line”, but crucially, they are also expected to think outside of the box and innovate as part of their jobs. This is where the disconnect lies, in my opinion. The manufacturing/production-line model does NOT work for people who are employed to think differently and to innovate.

If Not Manufacturing Then…

Ok, so if software delivery isn’t like manufacturing, then what is it like? There must be some analogous model we can endlessly compare against and draw parallels with, right? Well, maybe…

 

home sweet home

I’m from a very rural area of west Wales and when anyone local asks me what I do, I can’t start diving into the complexities of Agile or devops, because frankly it’s so very foreign to your average dairy farmer in Ceredigion. Instead, I try to compare it with something I know they’ll be familiar with, and if there’s one thing that all people in west Wales are familiar with, it’s rugby.

It’s not as daft as it sounds, and I’ve started to believe there’s actually a very strong connection between professional team sports and Agile software development. Here’s why:

Software delivery is a team effort, but it also relies on subject matter experts who need to be given the freedom to put their skills and knowledge to good use; they need to be able to improvise and innovate. Exactly the same can be said of professional rugby or soccer (yes, I’m going to call it soccer) teams. Rugby and soccer are both team sports, but both contain very specific roles within the team, and for teams to be successful, they need to give their players the freedom and space to use their skills (or “showing off” as some people like to call it).

2008 World Player of the Year Shane Williams

Now, within a rugby team you might have some exceptionally talented players – perhaps a winger like former World Player of the Year Shane Williams. But if you operate a system which restricts the amount of involvement he gets in a game, he’ll be rendered useless, and the team may very well fail. Even with my dislike of soccer, I know enough about the game to see how restrictive formations and systems can be. The long-ball game, for instance, wouldn’t benefit a Lionel Messi-style player who thrives on a possession and passing game.

The same can be said of software delivery. If we try to impose a system that restricts our individuals’ creativity and innovation, then we’re really not going to get the best out of those individuals or the team.

 

So Where Does Agile Fit Into All of This?

Agile is definitely the antidote to traditional software development models like Waterfall, but it’s not immune from the side-effects we witness in the red bead experiment. It seems that the more prescriptive a system is, the greater the risk of that system being restrictive. Agile itself isn’t prescriptive, but Kanban, XP, Scrum and the rest are, to varying degrees (Scrum more so than Kanban, for instance). The problem arises when teams adopt a system without understanding why its rules are in place.

prescriptive = restrictive

For starters, if we don’t understand why some of the rules of Scrum (for instance) exist, then we have no business trying to impose them on the team. We must examine each rule on merit, understand why it exists, and adapt it as necessary to enable our team and individuals to thrive. This is why a top-down approach to adopting agile is quite often doomed to fail.

So What Should We Do?

My advice is to make sure everyone understands the “why” behind all of the rules that exist within your chosen system. Experiment with adapting those rules slightly, and see what impact that change has on your team and on your results. Hmmm, that sounds familiar…

The Deming Cycle: Plan, Do, Check, Act


Categories: Blogs

Understanding Application Performance on the Network – Part V: Processing Delays

In Part IV, we wrapped up our discussions on bandwidth, congestion and packet loss. In Part V, we examine the four types of processing delays visible on the network, using the request/reply paradigm we outlined in Part I. Server Processing (Between Flows) From the network’s perspective, we allocate the time period between the end of […]

The post Understanding Application Performance on the Network – Part V: Processing Delays appeared first on Compuware APM Blog.

Categories: Companies

CloudBees Announces Public Sector Partnership with DLT Solutions


Continuous Delivery is becoming a major initiative across all vertical industries in commercial and private markets. The ability for IT teams to deliver quality software on an hourly, daily or weekly basis is the new standard.

The public sector has the same need to accelerate application delivery for important government initiatives. To make access to the CloudBees Continuous Delivery Platform easier for the public sector, CloudBees and DLT Solutions have formed a partnership to provide Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees to federal, state and local government entities.

With Jenkins Enterprise by CloudBees now offered by DLT Solutions, public sector agencies have access to our 23 proprietary plugins (along with 900+ OSS plugins) and will receive professional support for their Jenkins continuous integration/continuous delivery implementation.

Some of our most popular plugins can be utilized to:
  • Eliminate downtime with the High Availability plugin, which automatically spins up a secondary master when the primary master fails
  • Push security features and rights onto downstream groups, teams and users with Role-based Access Control
  • Auto-scale slave machines when you have builds starved for resources by “renting” unused VMware vCenter virtual machines with the VMware vCenter Auto-Scaling plugin
Try a free evaluation of Jenkins Enterprise by CloudBees or read more about the plugins provided with it.

For departments using larger installations of Jenkins, CloudBees and DLT Solutions propose Jenkins Operations Center by CloudBees to:
  • Access any Jenkins master in the enterprise. Easily manage and navigate between masters (optionally with SSO)
  • Add masters to scale Jenkins horizontally, instead of adding executors to a single master. Ensure no single point of failure
  • Push security configurations to downstream masters, ensuring compliance
  • Use the Update Center plugin to automatically ensure approved plugin versions are used across all masters
Try a free evaluation of Jenkins Operations Center by CloudBees, or watch a video about Jenkins Operations Center by CloudBees.

The CloudBees offerings, combined with DLT Solutions’ 20+ years of public sector “know-how”, make it easier to support and optimize Jenkins in the civilian, federal and SLED branches of government.

For more information about the newly established CloudBees and DLT Solutions partnership read the news release.

We are proud to partner with our friends at DLT Solutions to bring continuous delivery to governmental organizations.

Zackary Mahon
Business Development Manager
CloudBees

Categories: Companies

Integrating TestTrack with Git and Other Source Control Providers

The Seapine View - Thu, 07/10/2014 - 12:00

TestTrack 2014.1 introduces source control integration with Git, GitHub, and other external providers. This integration allows users to attach source files to TestTrack items when pushing changes to the source control server, which can help team members keep track of source file changes and quickly find information in their source control tool while working in TestTrack.

The TestTrack administrator is responsible for setting up the integration components. First, install and configure the new source control provider CGI (ttextpro.exe) on the TestTrack web server. This CGI accepts attachment data from the source control provider and sends it to the TestTrack Server. See the TestTrack installation help for information about installing and configuring the CGI.

Next, in the TestTrack Client, add the source control provider to generate the required integration key. When adding providers, you can also enter commit and file URLs to specify the format for links included with attachment information on the Source Files tab in items.

[Screenshot: the Source Control Providers dialog box]

Finally, use the new source control provider API to create hook scripts that pass attachment data from your source control provider to the TestTrack source control provider CGI, and install the scripts in the central and local Git repositories. These scripts must include the provider key from the Source Control Providers dialog box in TestTrack to work correctly, and you should understand how to use JSON to pass data from your source control provider before writing them. Sample commit-msg and post-receive scripts are available: the commit-msg script verifies that items exist in the project when changes are committed to a local Git repository, and the post-receive script attaches files to items when changes are pushed to the Git server. The samples make a handy reference when creating scripts for your own integration, and you can also contact Seapine Services for help creating scripts.
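
As a rough illustration only (this is not Seapine’s sample script, and a real hook would also need to post the JSON payload to the ttextpro.exe CGI along with your provider key), a minimal commit-msg hook written in Node.js might simply check that the message contains an item tag:

#!/usr/bin/env node
// Hypothetical sketch, not Seapine's sample script: abort the commit
// if the message doesn't reference a TestTrack item tag such as [IS-34].
var fs = require('fs');

var message = fs.readFileSync(process.argv[2], 'utf8'); // Git passes the path to the message file
if (!/\[[A-Z]+-\d+\]/.test(message)) {
  console.error('Commit message must reference a TestTrack item, e.g. [IS-34]');
  process.exit(1); // a non-zero exit status aborts the commit
}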

After the integration is configured, users can attach source files to TestTrack items when they push changes to the source control server. To attach Git files to items, enter the tag for the item in the commit message. For example, enter [IS-34] to attach the commit to issue 34.

[Screenshot: an example Git commit message containing an item tag]

To view the attached files in TestTrack, click the Source Files tab when working with an item. Click a file path or commit message to view additional file information in the associated source control viewer.

[Screenshot: the Source Files tab]

For more information about integrating TestTrack with Git or other source control providers, see the TestTrack help.


Categories: Companies

5 things you didn’t know a testing framework could do

BugBuster - Thu, 07/10/2014 - 11:43

We at BugBuster believe that testing frameworks and cloud platforms can take test engineering to a higher ground.

Case in point: we packed BugBuster with several cool features you wouldn’t believe a regular testing framework could offer: smart exploration, file upload testing, web-to-email testing, language testing, and getting rid of those annoying sleeps and waits normally required to deal with asynchronous user interfaces.

 

Smart exploration

BugBuster is a WebKit-based browser that runs on several dedicated servers in our cloud, and features advanced logic enabling automated smart exploration. BugBuster will load a page, analyze its DOM, and produce a list of actions it can perform. Then, it will try combinations of these actions in a deterministic way, uncovering bugs and issues that can easily be reproduced for debugging.

In doing so, BugBuster is able to test edge cases that a human tester would not necessarily have found. And, thanks to our conductor API, you have full control over the automated testing process; you can let BugBuster crawl around your website and react with a precisely defined behaviour when it reaches a certain location or executes a certain action.
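
The conductor API itself isn’t shown in this post, so purely as an illustration (every identifier below is an assumption, not BugBuster’s documented interface), reacting to the explorer might look something like this:

// Hypothetical sketch: 'conductor', onPageVisited and assertExists are
// illustrative names, not BugBuster's real API.
var conductor = require('conductor');

conductor.onPageVisited('/checkout', function (page) {
  // When the explorer reaches the checkout page, run a targeted check
  // before letting it carry on crawling.
  page.assertExists('#basket-summary');
});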

How BugBuster’s automated exploration works

Timing insensitive testing: getting rid of sleeps and waits

BugBuster’s technology behaves differently from other testing frameworks in that it is timing-insensitive: after each action, it will wait for the page to stabilize before executing the next action in the queue. BugBuster detects AJAX calls, JavaScript animations and page loads, allowing you to avoid cumbersome sleep() and wait() function calls.

This code snippet shows how easy it is to guide BugBuster through a test scenario that involves page loads:
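
The snippet itself was lost in aggregation; the sketch below is a plausible reconstruction, and all of the API names in it are assumptions. The point is the absence of sleep() and wait() calls between steps:

// Illustrative sketch; API names are assumptions. Each step is assumed
// to run only once the previous page load or AJAX activity has settled.
test('log in and land on the dashboard', function (browser) {
  browser.open('http://example.com/login');  // full page load
  browser.fill('#email', 'user@example.com');
  browser.fill('#password', 'secret');
  browser.click('#submit');                  // triggers another page load
  browser.assertText('h1', 'Dashboard');     // runs once the new page has stabilized
});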

File upload testing

File uploads are a major headache when writing automated tests for a web application. BugBuster helps lighten this burden by providing advanced file upload capabilities. Several other tools and frameworks also support this feature, but they are limited and depend heavily on the browser being used for the automation. BugBuster improves on this feature in several ways:

  • Providing a catalog of pre-built files ready to use in your tests.
  • Providing a file generator that can produce different file formats and sizes with your own content, on the fly.
  • Supporting multiple file uploads.
  • Allowing you to check the accepted file types in the file chooser.

The following code illustrates how easy it is to generate a file to test a file upload:
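
Again, the original snippet was lost in aggregation; the reconstruction below is illustrative and its API names are assumptions.

// Illustrative sketch; the file-generator API shown is an assumption.
test('upload a generated PDF', function (browser, files) {
  // Generate a 2 MB PDF on the fly with custom content.
  var pdf = files.generate({ type: 'pdf', size: '2MB', content: 'hello world' });
  browser.open('http://example.com/upload');
  browser.upload('input[type=file]', pdf);   // attach the generated file
  browser.assertText('.status', 'Upload complete');
});
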
You can find more information about automated file upload testing in this post.

Web-to-email testing

BugBuster provides more control over the test process while considerably simplifying both the writing of test scenarios and the underlying infrastructure. It generates an email address on the fly, linked to the running test session; BugBuster’s incoming email servers then receive the message and dispatch it to the right running session. The result? You can neatly write test cases that have access to the whole email document and envelope: subject, body, headers, addressees, sender, and so on, without needing to set up any email server.

How BugBuster closes the web-to-email testing loop

You can read more about web-to-email testing in this post.

Language testing

It is not unusual to see multi-language websites with text displaying in the wrong language. At BugBuster we have a solution for that: thanks to our language detection module, you can now automate the detection of languages in your test scenarios.

Using it is quite straightforward: just require the language module and use its detect() function to do all the heavy lifting for you. You can then use the result of the detection to know which languages are present on a particular page.

Say you have a multilingual CMS or web application, available in English, French, German, Chinese and Spanish, and you want to ensure that all the pages are presented to the user in the same language, in this case English. The following code will allow BugBuster to crawl your application by repeating the same language test on each page it discovers:
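
The snippet was lost in aggregation; in the illustrative reconstruction below, only the language module and its detect() function come from the post, while the crawling hook and assertion helper are assumptions.

// Illustrative sketch; only require('language') and detect() come from
// the post, the rest is assumed.
var language = require('language');

onEveryPage(function (page) {
  // Detect the language of the visible text on each discovered page
  // and fail if anything other than English shows up.
  var detected = language.detect(page.text());
  assert.equal(detected, 'en', 'Wrong language on ' + page.url);
});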

You can read more about language testing in this post.

The post 5 things you didn’t know a testing framework could do appeared first on BugBuster.

Categories: Companies

.NET in SonarQube: bright future

Sonar - Thu, 07/10/2014 - 11:12

A few months ago, we started on an innocuous-seeming task: make the .NET Ecosystem compatible with the multi-language feature in SonarQube 4.2. What followed was a bit like one of those cartoons where you pull a string on the character’s sweater and the whole cartoon character starts to unravel. Oops.

Once we stopped pulling the string and started knitting again (to torture a metaphor), what came off the needles was a different sweater than the one we’d started with. The changes we made along the way – fewer external tools, simpler configuration – were well-intentioned, and we still believe they were the right things to do. But many people were at pains to tell us that the old way had been just fine, thank you. It had gotten the job done on a day-to-day basis for hundreds of projects, and hundreds of thousands of lines of code, they said. It had been crafted by .NETers for .NETers, and as Java geeks, they said, we really didn’t understand the domain.

And they were right. But when we started, we didn’t understand how much we didn’t understand. Fortunately, we have a better handle on our ignorance now, and a plan for overcoming it and emerging with industry leading C# and VB.NET analysis tools.

First, we’re planning to hire a C# developer. This person will be first and foremost our “really get .NET” person, and represents a real commitment to the future of SonarQube’s .NET plugins. She or he will be able to head off our most boneheaded notions at the pass, and guide us in the ways of righteousness. Or at least in the ways of .NETness.

Of course it’s not just a guru position. We’ll call on this person to help us progressively improve and evolve the C# and VB.NET plugins, and their associated helpers, such as the Analysis Bootstrapper. He (or she) will also help us fill the gaps back in. When we reworked the .NET ecosystem there were gains, but there were also losses. For instance, there are corner cases not covered today by the C# and VB.NET plugins which were covered by the old .NET Ecosystem.

We also plan to start moving these plugins into C#. We’ve realized that we just can’t do the job as well in Java as we need to. But the move to C# code will be a gradual one, and we’ll do our best to make it painless and transparent. Also on the list will be identifying the most valuable rules from FxCop and ReSharper and re-implementing them in our code.

At the same time, we’ll be advancing on these fronts for both C# and VB.NET:

  • Push “cartography” information to SonarQube.
  • Implement bug detection rules.
  • Implement framework-specific rules, for things like SharePoint.

All of that with the ultimate goal of becoming the leader in analyzing .NET code. We’ve got a long way to go, but we know we’ll bring it home in the end.

Categories: Open Source

More Ruby goodness for testing

Did I mention how much I love Ruby?

require 'active_support/core_ext/array/grouping' # in_groups is an ActiveSupport (Rails) extension

# Split the alphabet into 5 equally sized groups, without padding short groups with nil.
items = ("A".."Z").to_a.in_groups(5, false)

5.times do |i|
  puts items[i].to_s # each group is already a flat array, so no flatten needed
  puts "----"
end

Source code is at http://apidock.com/rails/Array/in_groups

Categories: Blogs

Planned changes in Jenkins User Conference contact information collection



One of the challenges of running Jenkins User Conferences is to balance the interests of attendees and the interests of sponsors. Sponsors would like to know more about attendees, but attendees are often wary of being contacted. Our past few JUCs have been run by making it opt-in to have contact information passed to sponsors, but the ratio of people who opt in is too low. So we started thinking about adjusting this.

So our current plan is to reduce the amount of data we collect and pass on, but to make this automatic for every attendee. Specifically, we'd limit the data only to name, company, e-mail, and city/state/country you are from. But no phone number, no street address, etc. We discussed this in the last project meeting, and people generally seem to think this is reasonable. That said, this is a sensitive issue, so we wanted more people to be aware.

By the way, the call for papers for JUC Bay Area closes in a few days. If you are interested in giving a talk (and that’s often the best way to get feedback on and credit for your work), please make sure to submit it this week.

Categories: Open Source
