
Jimmy Bogard
Strong opinions, weakly held

Mobile authentication with Xamarin.Auth and refresh tokens

Thu, 11/13/2014 - 15:46

An internal app I’ve been working with for a while needed to use OAuth2 (specifically, OpenID Connect) to perform authentication against our Google Apps for Your Domain (GAFYD) accounts. Standard OAuth 1.0/2.0 flows are made easy with the Xamarin.Auth component. Since OpenID Connect is built on top of OAuth 2.0, the Xamarin.Auth component could suffice.

A basic flow for using OAuth with Google APIs would look like this:

[Diagram: basic OAuth flow against the Google APIs]

But for our purposes, we have a mobile application that connects to our APIs, but we simply want to piggyback on top of Google for authentication. So our flow looks more like:

[Diagram: mobile app authenticating through Google, then calling our own API]

This all works nicely straight out of the box. One problem, however – the token returned by the Google servers is only valid for a short period of time, 30 minutes or so. You *could* ignore this in the API we built and simply not validate that part of the JWT, but we don’t want to do that. Because our API calls now go over the interwebs, potentially over insecure networks like coffee shop wi-fi, we want solid verification of the JWT (a sketch of what that validation might look like follows this list):

  • The token’s hash matches
  • The issuer is valid (Google)
  • The allowed audience is correct – we only accept client IDs from our app
  • The token’s signature is verified against Google’s public OAuth certificates
  • The token has not expired
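
Here’s a minimal sketch of what that server-side validation could look like, assuming the System.IdentityModel.Tokens.Jwt package – the issuer, audience and certificate loading are illustrative placeholders, not our actual values:

using System.Collections.Generic;
using System.IdentityModel.Tokens;
using System.Linq;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;

public class GoogleJwtValidator
{
    // googleCerts: Google's public OAuth certificates, fetched and cached elsewhere
    public ClaimsPrincipal Validate(string jwt, IEnumerable<X509Certificate2> googleCerts)
    {
        var handler = new JwtSecurityTokenHandler();

        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "accounts.google.com",                        // issuer must be Google
            ValidAudience = "my-client-id.apps.googleusercontent.com",  // only our app's client ID
            IssuerSigningTokens = googleCerts.Select(c => new X509SecurityToken(c)),
            ValidateLifetime = true                                     // reject expired tokens
        };

        // Throws if the signature, issuer, audience or lifetime checks fail
        SecurityToken validatedToken;
        return handler.ValidateToken(jwt, parameters, out validatedToken);
    }
}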

This becomes a bit of a problem – the token expires very soon, and it’s annoying to log in every time you use the app. The Xamarin.Auth component supports storing the token on the device, so that you can authenticate easily across app restarts. However, out-of-the-box, Xamarin.Auth doesn’t support the concept of refresh tokens:

[Diagram: Xamarin.Auth token storage without refresh token support]

Since the refresh token is stored on the device, we just need to ask Google for a new access token once the current one has expired. To get Xamarin.Auth to request a refresh token, we need to do a couple of things: first, override the GetInitialUrlAsync method to request a refresh token as part of getting an auth token:

public override Task<Uri> GetInitialUrlAsync ()
{
	string uriString = string.Format (
		"{0}?client_id={1}&redirect_uri={2}&response_type={3}&scope={4}&state={5}&hd=foo.com&access_type=offline&approval_prompt=force",
		this.AuthorizeUrl.AbsoluteUri,
		Uri.EscapeDataString (this.ClientId),
		Uri.EscapeDataString (this.RedirectUrl.AbsoluteUri),
		this.AccessTokenUrl == null ? "token" : "code",
		Uri.EscapeDataString (this.Scope),
		Uri.EscapeDataString (this.RequestState)
	);

	var url = new Uri (uriString);
	return Task.FromResult (url);
}

The format of the URL is from Google’s documentation, plus looking at the behavior of the existing Xamarin.Auth component. Next, we create a method to request our refresh token if we need one:

public virtual Task<int> RequestRefreshTokenAsync(string refreshToken)
{
    var queryValues = new Dictionary<string, string>
    {
        {"refresh_token", refreshToken},
        {"client_id", this.ClientId},
        {"grant_type", "refresh_token"}
    };

    if (!string.IsNullOrEmpty(this.ClientSecret))
    {
        queryValues["client_secret"] = this.ClientSecret;
    }

    return this.RequestAccessTokenAsync(queryValues).ContinueWith(result =>
    {
        var accountProperties = result.Result;

        this.OnRetrievedAccountProperties(accountProperties);

        return int.Parse(accountProperties["expires_in"]);
    });
}

I have a pull request open to include this method out-of-the-box, but until then, we’ll just need to code it ourselves. Finally, we just need to request a refreshed token as needed before making an API call:

var account = AccountStore.Create().FindAccountsForService("MyService").FirstOrDefault();

if (account != null) {
    var token = account.Properties["refresh_token"];
    var expiresIn = await authenticator.RequestRefreshTokenAsync(token);
    UserPreferences["tokenExpiration"] = DateTime.Now.AddSeconds(expiresIn);
}

In practice, we’d likely wrap this behavior around every call to our backend API, checking the expiration date of the token and refreshing as needed. In our app, we used a simple decorator pattern around an API gateway interface, so that refreshing the token was as seamless as possible for the end user.
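
As a rough illustration of that decorator, here’s a sketch – IApiGateway, UserPreferences and GoogleAuthenticator are hypothetical stand-ins for our app’s actual gateway interface, settings store and customized OAuth2Authenticator:

public interface IApiGateway
{
    Task<T> GetAsync<T>(string resource);
}

// Wraps the real gateway so every call checks token expiration first
public class TokenRefreshingApiGateway : IApiGateway
{
    private readonly IApiGateway _inner;
    private readonly GoogleAuthenticator _authenticator;

    public TokenRefreshingApiGateway(IApiGateway inner, GoogleAuthenticator authenticator)
    {
        _inner = inner;
        _authenticator = authenticator;
    }

    public async Task<T> GetAsync<T>(string resource)
    {
        if (DateTime.Now >= (DateTime)UserPreferences["tokenExpiration"])
        {
            var account = AccountStore.Create().FindAccountsForService("MyService").FirstOrDefault();

            if (account != null)
            {
                var refreshToken = account.Properties["refresh_token"];
                var expiresIn = await _authenticator.RequestRefreshTokenAsync(refreshToken);

                UserPreferences["tokenExpiration"] = DateTime.Now.AddSeconds(expiresIn);
            }
        }

        return await _inner.GetAsync<T>(resource);
    }
}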

In your apps, the details of the URL will likely differ, but the basic flow is the same. With persisted refresh tokens, users of our mobile application log in only once, and the token refreshes as needed. Very easy with Xamarin and the Xamarin.Auth component!

Some minor complaints with the component, however. First, it’s not a Xamarin.Forms component, so all the code around managing accounts and displaying the UI had to be in our platform-specific projects. Second, there’s no support for Windows Phone, even though there are issues and pull requests to fill in the behavior. Otherwise, it’s a great component that makes it simple to add robust authentication through your own OAuth provider or piggybacking on a 3rd party provider.


Dealing with the linker in Xamarin apps

Tue, 11/11/2014 - 18:51

The last few months I’ve been working quite a bit with Xamarin and in particular Xamarin.Forms. I’ve got a series of posts upcoming on my exploits with that and migrating to ReactiveUI, but first things first, I actually need to deploy my app.

I’ve got my app working in debug/release mode in the simulator, and debug mode on a device. However, when I ran the app in release mode on a device, it just crashed without warning. Not a very good experience. I’ve deployed this app several times on the device, but something was causing this crash.

Typically a crash on a deployment is one of two things:

  • Something off in DEBUG conditional code/config
  • Something off in DEBUG/RELEASE project config

I checked the DEBUG conditional code config, which does things like point to the test/production API endpoints. That looked OK, so what else was different?

[Screenshot: iOS project build options – linker settings for the Debug configuration]

That was my debug version of the app, where no assemblies were linked. In Release mode, only SDK assemblies were linked. For many cases this works, as the linker can figure out exactly what methods/fields etc. are being referenced.

Normally, this is OK, until you get a series of telling exceptions, usually a MissingMethodException. In my case, I switched my Debug settings to the same as Release, and got:

System.MissingMethodException: Default constructor not found for type System.ComponentModel.ReferenceConverter
  at System.Activator.CreateInstance (System.Type type, Boolean nonPublic) [0x00094] in /Developer/MonoTouch/Source/mono/mcs/class/corlib/System/Activator.cs:326
  at System.Activator.CreateInstance (System.Type type) [0x00000] in /Developer/MonoTouch/Source/mono/mcs/class/corlib/System/Activator.cs:222
  at System.ComponentModel.TypeDescriptor.GetConverter (System.Type type) [0x0009f] in ///Library/Frameworks/Xamarin.iOS.framework/Versions/8.4.0.16/src/mono/mcs/class/System/System.ComponentModel/TypeDescriptor.cs:437
  at ReactiveUI.ComponentModelTypeConverter.<typeConverterCache>m__0 (System.Tuple`2 types, System.Object _) [0x0002d] in /Users/paul/code/reactiveui/ReactiveUI/ReactiveUI/Platform/ComponentModelTypeConverter.cs:24
  at Splat.MemoizingMRUCache`2[System.Tuple`2[System.Type,System.Type],System.ComponentModel.TypeConverter].Get (System.Tuple`2 key, System.Object context) [0x00000] in <filename unknown>:0
  at Splat.MemoizingMRUCache`2[System.Tuple`2[System.Type,System.Type],System.ComponentModel.TypeConverter].Get (System.Tuple`2 key) [0x00000] in <filename unknown>:0
  at ReactiveUI.ComponentModelTypeConverter.GetAffinityForObjects (System.Type fromType, System.Type toType) [0x00000] in /Users/paul/code/reactiveui/ReactiveUI/ReactiveUI/Platform/ComponentModelTypeConverter.cs:30
  at ReactiveUI.PropertyBinderImplementation+<typeConverterCache>c__AnonStorey5.<>m__0 (System.Tuple`2 acc, IBindingTypeConverter x) [0x00000] in /Users/paul/code/reactiveui/ReactiveUI/ReactiveUI/PropertyBinding.cs:1006
  at System.Linq.Enumerable.Aggregate[IBindingTypeConverter,Tuple`2] (IEnumerable`1 source, System.Tuple`2 seed, System.Func`3 func) [0x00000] in <filename unknown>:0

First lesson: keep linker settings the same between build configurations. When you encounter this sort of issue, the problem is usually the same – reflection/dynamic loading of assemblies means the linker can’t see that you’re going to access some type or member until runtime. The fix is relatively simple – force a reference to the types/members in question.

In my iOS project, I have a file called “LinkerPleaseInclude.cs”, and in it, I include references to the types/members the linker would otherwise strip:

public class LinkerPleaseInclude
{
    public void Include()
    {
        var x = new System.ComponentModel.ReferenceConverter (typeof(void));
    }
}

Completely silly, but this reference allowed my app to run with the linker in play. More info on the linker can be found in the Xamarin documentation.
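
One related trick: for types in your own assemblies (as opposed to BCL types like ReferenceConverter above), Xamarin.iOS also provides a [Preserve] attribute that tells the linker to keep a type or member even when nothing statically references it. A minimal sketch, assuming the Unified API’s Foundation namespace (MonoTouch.Foundation on the classic API):

using Foundation;

// The linker keeps this type and all of its members,
// even if they are only ever reached via reflection
[Preserve(AllMembers = true)]
public class ReflectedOnlyViewModel
{
    public string Name { get; set; }
}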


Azure Mobile Services with AutoMapper

Wed, 10/29/2014 - 15:23

At the recent Xamarin Evolve conference, Paul Batum gave a great talk on Azure Mobile Services in cross-platform business apps, part of which included a piece on how AutoMapper fits in with their overall system.

There were sooooo many of those C# shirts at the conference, I felt a little left out without one.


Comparing processing times of NServiceBus saga patterns

Tue, 10/28/2014 - 19:16

A few weeks ago I gave a talk at NSBCon NYC on scaling NServiceBus, and one of the pieces I highlighted was the various saga/processing patterns and how they can affect performance. It was difficult to give real numbers as part of the discussion, mostly because how long something takes is highly dependent on the work being done and on environmental constraints.

I compared three styles of performing a set of distributed work:

  • Observer – a saga that observes the messages flowing through each step
  • Controller – a saga that controls the process, with every message funneled back through it
  • Routing slip – a stateless, upfront route attached to the message itself

and highlighted the performance differences between them. Andreas from the Particular team mentioned that some work had been done to improve saga performance, so I wanted to revisit my assumptions to see if the performance numbers still hold.

I wanted to look at a lot of messages – say, 10K, and measure two things:

  • How long it took for an individual item to complete
  • How long it took for the entire set of work to complete

Based on this, I built a prototype that consisted of a process of 4 distinct steps, and each variation of process control to track/control/observe progress. You can find the entire set of code on my GitHub.

Here’s what I found:

Process        Total Time   Average    Median
Observer       6:28         0.1 sec    <0.1 sec
Controller     6:25         3:25       3:37
Routing Slip   2:57         2.6 sec    <0.1 sec

Both the observer and controller styles took roughly the same total amount of time. This is mostly because they have to process the same total number of messages. The observer took slightly longer in my tests, because the observer is more likely to get exceptions for trying to start the same saga twice. But once an item began in the observer, it finished very quickly.

On the controller side, because all messages get funneled to the same queue, adding more messages meant that each individual item of work would have to wait for all previous items to complete.

Finally, the routing slip took less than half the time, with higher total average but comparable median to the observer. On the routing slip side, what I found was that the process sped up over time as the individual steps “caught up” with the rate of incoming messages to start the process.

This was all on a single laptop, so no network hops needed to be made. In practice, we found that each additional network hop from a new message or a DB call for the saga entity added latency to the overall process. By eliminating network hops and optimizing the total flow, we’ve seen in production total processing times decrease by an order of magnitude based on the deployment topology.

This may not matter for small numbers of messages, but for many of my systems, we’ll have hundreds of thousands to millions of messages dropped in our lap, all at once, every day. In that situation, more efficient processing patterns can relieve the pressure of completing all that work.


NServiceBus 5.0 behaviors in action: routing slips

Thu, 10/02/2014 - 14:57

I’ve written in the past about how routing slips can provide a nice alternative to NServiceBus sagas, using a stateless, upfront approach. In NServiceBus 4.x, it was quite clunky to actually implement them – I had to plug in to two interfaces that didn’t really apply to routing slips, only because those were the points in the pipeline where I could get the correct behavior.

In NServiceBus 5, these behaviors are much easier to build, because of the new behavior pipeline features. Behaviors in NServiceBus are similar to HttpHandlers or koa.js callbacks, in that they form a series of nested wrappers around inner behaviors in a sort of Russian doll model. It’s an extremely popular model, and most modern web frameworks include some form of it (Web API filters, node middleware, FubuMVC behaviors, etc.).

Behaviors in NServiceBus are applied to two distinct contexts: incoming messages and outgoing messages. Each context is represented by a context object, which gives you access to information about the current message pipeline without having to resort to things like dependency injection to get at it.

In converting the route supervisor in my routing slips implementation, I greatly simplified the whole thing, and got rid of quite a bit of cruft.

Creating the behavior

To first create my behavior, I need to create an implementation of an IBehavior interface with the context I’m interested in:

public class RouteSupervisor
    : IBehavior<IncomingContext>
{
    public void Invoke(IncomingContext context, Action next)
    {
        next();
    }
}

Next, I need to fill in the behavior of my invocation. I need to detect if the current request has a routing slip, and if so, perform the operation of routing to the next step. I’ve already built a component to manage this logic, so I just need to add it as a dependency:

private readonly IRouter _router;

public RouteSupervisor(IRouter router)
{
    _router = router;
}

Then in my Invoke call:

public void Invoke(IncomingContext context, Action next)
{
    string routingSlipJson;

    if (context.IncomingLogicalMessage.Headers.TryGetValue(Router.RoutingSlipHeaderKey, out routingSlipJson))
    {
        var routingSlip = JsonConvert.DeserializeObject<RoutingSlip>(routingSlipJson);

        context.Set(routingSlip);

        next();

        _router.SendToNextStep(routingSlip);
    }
    else
    {
        next();
    }
}

I first pull the routing slip out of the headers. But this time, I can just use the context to do so – NServiceBus manages everything related to handling the current message in that object.

If I don’t find the header for the routing slip, I can just call the next behavior. Otherwise, I deserialize the routing slip from JSON, and set this value in the context. I do this so that a handler can access the routing slip and attach additional contextual values.

Next, I call the next action (next()), and finally, I send the current message to the next step.

With my behavior created, I now need to register my step.

Registering the new behavior

Since I now have a pipeline of behaviors, I need to tell NServiceBus when to invoke mine. I do so by first creating a class that describes how to register this step:

public class Registration : RegisterStep
{
    public Registration()
        : base(
            "RoutingSlipBehavior", typeof (RouteSupervisor),
            "Unpacks routing slip and forwards message to next destination")
    {
        InsertBefore(WellKnownStep.LoadHandlers);
    }
}

I tell NServiceBus to insert this step before a well-known step – loading the handlers. I (actually Andreas) picked this point in the pipeline because it lets me modify the services injected into my handlers. The last piece is configuring and turning on my behavior:

public static BusConfiguration RoutingSlips(this BusConfiguration configure)
{
    configure.RegisterComponents(cfg =>
    {
        cfg.ConfigureComponent<Router>(DependencyLifecycle.SingleInstance);
        cfg.ConfigureComponent(b => 
            b.Build<PipelineExecutor>()
                .CurrentContext
                .Get<RoutingSlip>(),
           DependencyLifecycle.InstancePerCall);
    });
    configure.Pipeline.Register<RouteSupervisor.Registration>();

    return configure;
}

I register the Router component, and next the current routing slip. The routing slip instance is pulled from the current context’s routing slip – what I inserted into the context in the previous step.

Finally, I register the route supervisor into the pipeline. With the current routing slip registered as a component, handlers can access the routing slip and add attachments for subsequent steps:

public RoutingSlip RoutingSlip { get; set; }

public void Handle(SequentialProcess message)
{
    // Do other work

    RoutingSlip.Attachments["Foo"] = "Bar";
}
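
To turn all of this on in an endpoint, the RoutingSlips extension method just needs to be called during endpoint configuration – a minimal sketch, assuming NServiceBus 5’s IConfigureThisEndpoint hook:

public class EndpointConfig : IConfigureThisEndpoint
{
    public void Customize(BusConfiguration configuration)
    {
        // Registers the Router, the current RoutingSlip and the RouteSupervisor behavior
        configuration.RoutingSlips();
    }
}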

With the new pipeline behaviors in place, I was able to remove quite a few hacks to get routing slips to work. Building and registering this new behavior was simple and straightforward, a testament to the design benefits of a behavior pipeline.


The value proposition of Hypermedia

Tue, 09/23/2014 - 16:13

REST is a well-defined architectural style, and despite the term’s frequent misapplication to general Web APIs, it can be a very powerful tool. One of the constraints of a REST architecture is HATEOAS, which describes the use of hypermedia as a means of navigating resources and manipulating state.

It’s not a particularly difficult concept to understand, but it’s quite a bit more difficult to choose and implement a hypermedia strategy. The obvious example of hypermedia is HTML, but even it has its limitations.

But first, when is REST, and in particular, hypermedia important?

For the vast majority of Web APIs, hypermedia is not only inappropriate, but complete overkill. Hypermedia, as part of a self-descriptive message, includes descriptions of:

  • Who I am
  • What you can do with me
  • How you can manipulate my state
  • What resources are related to me
  • How those resources are related to me
  • How to get to resources related to me

In a typical web application, the client (HTML + JavaScript + CSS) is developed and deployed at the same time as the server (HTTP endpoints). Because of this acceptable coupling, the client can “know” all the ways to navigate relationships, manipulate state and so on. There’s no downside to this coupling, since the entire app is built and deployed together, and the same application that serves the HTTP endpoints also serves up the client:

[Diagram: client and server built and deployed together from the same application]

For clients whose logic and behavior are served by the same endpoint as the original server, there’s little to no value in hypermedia. In fact, it adds a lot of work, both in the server API, where your messages now need to be self-descriptive, and in the client, where you need to build behavior around interpreting self-descriptive messages.

Disjointed client/server deployments

Where hypermedia really shines is in cases where clients and servers are developed and deployed separately. If client releases aren’t in line with server releases, we need to decouple our communication. One option is to simply build a well-defined protocol and never break it.

That works well in cases where you can define your API very well, and commit to not breaking future clients. This is the approach the Azure Web API takes. It also works well when your API is not meant to be immediately consumed by human interaction – machines are rather lousy at understanding and following links, relations and so on. Search crawlers can follow links well, but when it comes to manipulating state through forms, they don’t work so well (or work too well, and we build CAPTCHAs).

No, hypermedia shines in cases where the API is built for immediate human interaction, and clients are built and served completely decoupled from the server. A couple of cases could be:

[Diagram: a native mobile app, deployed through an app store, consuming the server API]

Deployment to an app store can take days to weeks, and even then you’re not guaranteed to have all your clients at the same app version:

[Diagram: multiple client versions in the field talking to the same server API]

Or perhaps it’s the actual API server that’s deployed to your customers, and you consume their APIs at different versions:

[Diagram: API servers deployed at customer sites, each at a different version]

These are the cases where hypermedia shines. But to take advantage of it, you need to build generic components in the client app to interpret self-describing messages. Consider Collection+JSON:

{ "collection" :
  {
    "version" : "1.0",
    "href" : "http://example.org/friends/",
    
    "links" : [
      {"rel" : "feed", "href" : "http://example.org/friends/rss"},
      {"rel" : "queries", "href" : "http://example.org/friends/?queries"},
      {"rel" : "template", "href" : "http://example.org/friends/?template"}
    ],
    
    "items" : [
      {
        "href" : "http://example.org/friends/jdoe",
        "data" : [
          {"name" : "full-name", "value" : "J. Doe", "prompt" : "Full Name"},
          {"name" : "email", "value" : "jdoe@example.org", "prompt" : "Email"}
        ],
        "links" : [
          {"rel" : "blog", "href" : "http://examples.org/blogs/jdoe", "prompt" : "Blog"},
          {"rel" : "avatar", "href" : "http://examples.org/images/jdoe", "prompt" : "Avatar", "render" : "image"}
        ]
      }
    ]
  } 
}

Interpreting this, I can build a list of links for this item, and build the text output and labels. Want to change the label shown to the end user? Just change the “prompt” value, and your text label is changed. Want to support internationalization? Easy, just handle this on the server side. Want to provide additional links? Just add new links in the “links” array, and your client can automatically build them out.
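
As a rough sketch of what a hypermedia-aware client component might do with that document, here’s one way to pull the labeled links out of an item using Json.NET – the class and method names are illustrative, not from our actual app:

using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json.Linq;

public class ItemLink
{
    public string Rel { get; set; }
    public string Href { get; set; }
    public string Prompt { get; set; }
}

public static class CollectionJsonReader
{
    // Builds the labeled links for the first item in a Collection+JSON document
    public static IEnumerable<ItemLink> ReadItemLinks(string json)
    {
        var document = JObject.Parse(json);
        var item = document["collection"]["items"].First();

        return item["links"].Select(link => new ItemLink
        {
            Rel = (string)link["rel"],
            Href = (string)link["href"],
            Prompt = (string)link["prompt"]   // the label shown to the end user
        });
    }
}

The point is that the labels and relations come from the message itself, so a server-side change to a prompt or a new link shows up in the client without redeploying it.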

In one recent application, we built a client API that automatically followed first-level item collection links and displayed the results as a “master-detail” view. A newer version of the API that added a new child collection didn’t require any change to the client – the new table automatically showed up because we made the generic client controls hypermedia-aware.

This did require an investment in our clients, but it was a small price to pay to allow clients to react to the server API, instead of having their implementation coupled to an understanding of the API that could be out-of-date, or just wrong.

The rich hypermedia formats are quite numerous now – Collection+JSON above is just one of them; HAL, Siren and JSON-LD are a few others.

The real challenge is building clients that can interpret these formats. In my experience, we don’t really need a generic solution for interaction, but rather individual components (links, forms, etc.). The client still needs some “understanding” of the server, but this can take the form of metadata rather than a hard-coded understanding of raw JSON.

Ultimately, hypermedia matters in far fewer places than are today incorrectly labeled with a “RESTful API”, but it’s not entirely vaporware or astronaut architecture either. It’s somewhere in the middle, and like many nascent architectures (SOA, microservices, Reactive), it will take a few iterations to nail down the appropriate scenarios, patterns and practices.
