Resolving Build Errors When Targeting Multiple Framework Versions

Here’s a tip that I hope can help other folks working on a solution that targets multiple versions of the .Net Framework.

As a developer, I tend to have a short memory and flush it often. When I start using framework features, I let myself easily move on, mentally, from the time when said features didn’t exist. 2010 is sooo last year. Or four years ago, but who’s keeping count.

This morning, I started getting the following error while working on Glimpse, where the primary project is authored in .Net 4.5:

System.Enum does not contain a definition for ‘TryParse’.


A quick check on MSDN shows that System.Enum does indeed contain a definition for TryParse, but only in 4.0 and higher.


If you peek back at the Error List screen cap, you’ll see the hint to what was going on in the “Project” column. Namely, one of the projects in the Glimpse solution used for backwards compatibility targets an older version of the .Net Framework.

So, this is actually pretty easy to resolve, and I have two obvious choices:

  1. I can test to see if the NET35 compilation symbol is defined and write two copies of the code, one with, one without the use of TryParse, or,
  2. Just use the cross-framework supported approach from way back in the day (2010), where we would wrap up the Enum.Parse call in a try-catch block.

For brevity of code, I chose #2.

    try
    {
        order = (ScriptOrder)Enum.Parse(typeof(ScriptOrder), orderName);
    }
    catch (ArgumentException)
    {
        return new StatusCodeResourceResult(404, string.Format("Could not resolve ScriptOrder for value provided '{0}'.", orderName));
    }
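For completeness, here’s what option #1 might look like. This is just a sketch: ScriptOrder is reproduced as a bare stand-in enum and TryGetScriptOrder is an invented helper name; only the NET35 compilation-symbol check mirrors the approach described above.

```csharp
using System;

// Stand-in for the real Glimpse enum; the values are invented for illustration.
public enum ScriptOrder { First, Last }

public static class ScriptOrderParser
{
    // Option #1: compile a different code path per target framework.
    public static bool TryGetScriptOrder(string orderName, out ScriptOrder order)
    {
#if NET35
        // .Net 3.5 has no Enum.TryParse, so fall back to Parse wrapped in a catch.
        try
        {
            order = (ScriptOrder)Enum.Parse(typeof(ScriptOrder), orderName);
            return true;
        }
        catch (ArgumentException)
        {
            order = default(ScriptOrder);
            return false;
        }
#else
        // .Net 4.0 and higher can use the cleaner TryParse.
        return Enum.TryParse(orderName, out order);
#endif
    }
}
```

The duplication is contained in one method, but it's still two copies of the logic to maintain, which is why the try-catch-only version won for brevity.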

Should Glimpse drop support for .Net 3.5 down the road, this would be an easy pull request to update and make use of the new(ish) TryParse method.

Filed under the category of “things to keep fresh in your mind when working on open source”.

For more on multi-targeted solutions, you can check out this read on MSDN.

Changing the Namespace With Entity Framework 6.0 Code First Databases

Sometimes a refactoring of your project includes changing the namespaces used throughout. If you’re using Entity Framework 6.0, this type of change can impact EF’s ability to recognize your existing migrations. This article helps you mitigate any conflicts and allows your migrations to stand as-is.

I cover a bit of background, but you can jump to the end to see two or three possible fixes.

Also, sending a thanks out to my good friend David Paquette for the review on this post.

My Beef With Existing Fixes

If you change namespaces and run into problems, you might see an error similar to the following:

An exception of type ‘System.Data.SqlClient.SqlException’ occurred in EntityFramework.dll but was not handled in user code

Additional information: There is already an object named ‘AspNetRoles’ in the database.

I’m not sure I would call this misleading, but it certainly doesn’t explain the problem in the clearest of terms. Here’s what I would prefer to see, particularly in the Additional Information section of the error message:

Additional information: The namespace for the Entity Framework migration does not have any corresponding migrations in the target database. It is possible that your connection string is not configured correctly, that you are attempting to update a database that was created without migrations enabled, or that your namespace for your configuration class has been modified. Query the migrations table on database *database_name* to review current migration state.

Sure, it’s verbose, but it lets you in on what might be happening.

Specifically, migrations are tracked by ContextKey in the __MigrationHistory table and the key includes namespace information. When your namespace changes, you also need to update the records in the DB that correspond to migrations that have already been executed.
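If you want to see what’s recorded, you can peek at the history table yourself; the table and column names below are the EF 6 defaults:

```sql
SELECT [MigrationId], [ContextKey], [ProductVersion]
  FROM [dbo].[__MigrationHistory]
ORDER BY [MigrationId]
```

Each row’s ContextKey will carry the old namespace until you update it.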

My beef with existing fixes? Most of the time when you see similar errors come up the answer seems to ignore the fact that people might be using this stuff in production. Namely, the fix tends to be “drop your database and let the Framework rebuild it”, which, sure, I mean, it solves the problem. It’s just not good for business.

Behind the Scenes

For each change to your database tracked with a migration, a hash representing your model is computed and stored in order to detect the next set of changes that occur. As you execute the migration, the hash is added, along with the MigrationId and ContextKey, to the migrations table.

When you attempt to access your data through the DbContext and you’re using, for example, an initializer such as MigrateDatabaseToLatestVersion, the Framework will attempt to play catch-up and make sure the database reflects the current model in your application. To do this, it queries the database to see where the database thinks it’s at, and it uses reflection over your configuration and context classes along with the information they contain. You can see the queries that run if you capture the chatter with SQL Profiler:


And if you drill into the details you’ll see something like the following as the Framework tries to figure out where you’re at:


I’ve dashed out my namespace as this was work for a client, but you can see the root of the problem here. The Configuration class is in the Root_Namespace.Migrations namespace; if you move the class to a new namespace, this query is modified to reflect it, but previous migrations stored in the database are not.

Your configuration is automatically created for you when you enable migrations; it’s a class that exists in a namespace based on the default namespace of your project. Root_Namespace.Migrations.Configuration is also the value that is written to the migrations table as the ContextKey.

That is our vector to correct the problem.

Building Out The Fix

The first and easiest approach is one that works locally, and could meet your needs if you have access to all affected databases. All you have to do is execute a modified version of the SQL script below:

UPDATE [dbo].[__MigrationHistory] 
   SET [ContextKey] = 'New_Namespace.Migrations.Configuration'
 WHERE [ContextKey] = 'Old_Namespace.Migrations.Configuration'

You should be golden at that point; however, this won’t work if you’re doing continuous integration with a team of developers, or if you have a continuous deployment strategy in place. For that, the solution lies in adding the following code to the constructor of your migrations Configuration class:

public Configuration()
{
    AutomaticMigrationsEnabled = false;
    this.ContextKey = "Old_Namespace.Migrations.Configuration";
}

This strategy is more durable and will help prevent any of your teammates (or production servers) from running into the same issue. Of course, this keeps your old namespace hanging around in your migrations table, but it does the trick.

A potentially more elegant solution would be to create a database initializer that takes the old and new namespace into account and corrects the migrations table if necessary. This could end up being considerably more work, so you’d have to evaluate if it makes sense for your project and timeline. You can reference an example implementation here:

MigrateDatabaseToLatestVersion Source on CodePlex
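As a rough illustration, such an initializer might look like the sketch below. To be clear, NamespaceFixupInitializer is an invented name and this is untested scaffolding against the EF 6 API, not the implementation linked above; it simply runs the same UPDATE before handing off to the stock MigrateDatabaseToLatestVersion behavior.

```csharp
using System.Data.Entity;
using System.Data.Entity.Migrations;

// Hypothetical sketch: rewrite old ContextKey values, then migrate as usual.
public class NamespaceFixupInitializer<TContext, TConfiguration> : IDatabaseInitializer<TContext>
    where TContext : DbContext
    where TConfiguration : DbMigrationsConfiguration<TContext>, new()
{
    private readonly string _oldContextKey;
    private readonly string _newContextKey;

    public NamespaceFixupInitializer(string oldContextKey, string newContextKey)
    {
        _oldContextKey = oldContextKey;
        _newContextKey = newContextKey;
    }

    public void InitializeDatabase(TContext context)
    {
        // Guard against brand-new databases that have no history table yet.
        context.Database.ExecuteSqlCommand(
            "IF OBJECT_ID(N'dbo.__MigrationHistory', N'U') IS NOT NULL " +
            "UPDATE [dbo].[__MigrationHistory] SET [ContextKey] = @p0 WHERE [ContextKey] = @p1",
            _newContextKey, _oldContextKey);

        // Fall through to the standard initializer to apply pending migrations.
        new MigrateDatabaseToLatestVersion<TContext, TConfiguration>().InitializeDatabase(context);
    }
}
```

You’d wire it up at startup with something like Database.SetInitializer(new NamespaceFixupInitializer&lt;MyContext, Configuration&gt;("Old_Namespace.Migrations.Configuration", "New_Namespace.Migrations.Configuration")).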

Cheers, and happy coding!

Scheduled Jobs in Windows Azure Web Sites

Last year I published a pretty good primer on Windows Azure Web Sites (available on Amazon as Windows Azure Web Sites), but the Azure team keeps coming out with new features. This post will walk you through the creation of scheduled jobs on Windows Azure Web Sites.

Tasks That Aren’t Part of Your Web Site’s UI

On demand reports are a great feature but don’t give your users as-at reporting. Artifacts from operations on your site can use up disk space. Sometimes, you’d prefer to have a digest of information sent out, rather than a notification on every interesting event.

If you need to run some kind of process on a regular interval, a solution might be this handy feature of Azure Web Sites: scheduled jobs. These types of requirements can sometimes be met with BI tools, but scheduled jobs cover any kind of activity, whether or not it’s associated with data, such as:

  • a nightly report
  • a cleanup script
  • sending an email
  • pushing an SMS message hourly to your phone for new account signups

For the purpose of this article, I’ve created a console app that runs some reports. Later, you’ll see similar output in the logs from the cloud-run copy of this application.


Yes, my reports start at 0. Don’t judge, my brothers and sisters.
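The console app itself is nothing special; something along these lines would do (an invented stand-in, not the actual reporting code):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        // Reports are numbered from 0. Don't judge.
        for (var reportId = 0; reportId < 3; reportId++)
        {
            Console.WriteLine("Running report {0}...", reportId);
        }
        Console.WriteLine("All reports completed.");
    }
}
```

Anything the job writes to standard output ends up in the run logs, which is why plain Console.WriteLine calls are all you need for basic diagnostics.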

I’m using an EXE here, but you can use any of the following:

  • Windows CMD (.cmd, .bat or .exe)
  • Node.js (.js)
  • PowerShell (.ps1)
  • PHP Scripts (.php)
  • Python (.py), or
  • Bash (.sh)

Your script or executable needs to be in a ZIP file which can also include any other files you need for processing such as configuration, images to embed in email messages, etc.

Configuring the Job

Click on the Web Jobs tab in the web site’s dashboard, then click “Add” from the command bar at the bottom of the site or from the dashboard page that loads up.


If you haven’t already signed up for the Schedule preview program, you won’t yet be able to create scheduled jobs, but it’s trivial to set up and the link is provided on the Web Job screen.


Follow the link and complete the sign up; it’s a straightforward button click.


With that in place you can continue with creating the job. I wrap up all my needed files into a zip, and then I pick my options on the first page of the job setup:


Finally, I create my schedule and configure it to run every day for a year:


Your job is added to the list and then runs on schedule, or you can run it on demand from the command bar. After executing, logs are added to your account:


Clicking the link brings you to the history of job runs, where any console output is available for viewing. Error logs, should any arise, are also saved out here.


You can see the output by drilling into the log, which is unsurprisingly similar to what we saw from our console output at the start of this article.


Understanding Job Storage Requirements

The job ZIP that you create can be up to 200MB and will be stored in your web site’s corresponding file system. Logs are also saved out, albeit in a slightly different path.

Job scripts are saved at: D:\home\site\wwwroot\App_Data\jobs\triggered\JOB_NAME

Job logs are saved out at: D:\home\data\jobs\triggered\JOB_NAME

This is actually really great info to know, because with your job script saved in your application’s App_Data directory, you have the ability to manipulate the configuration files (if any) for your script.

Keep in mind that the storage needs for your jobs are factored into your web site’s storage restrictions, so jobs that generate output need to be monitored to make sure you’re not exceeding your quota.

Getting at the Raw Files

There is a great – and growing – administrative back door called Kudu to your Windows Azure Web Site that you may not be aware of. It helps with all kinds of things like SCM checkin hooks, deployment tasks, or viewing logs.

It’s reachable at basically the same URL that you use to access the host, but with scm plugged into the host name. In the Kudu menu there is a debug console that gives you the ability to dig through your files.


Wrapping Up & Next Steps

Windows Azure Web Sites now easily allows you to create and manage jobs that can be executed on demand or on a schedule. You ZIP up your files, feed them into the site and then configure the execution times for each of your scripts through the dashboard for your site.

Now go solve some scheduled job need, and happy coding!

My First Time: A Non-Android Developer’s Tale of Development with Xamarin

Even though I largely sit on the Microsoft technology stack, it would be unwise to leave development on other stacks unexplored. The old adage – jack of all trades, master of none – used to plague me as a younger developer as I tried to get my hands into everything and found it hard to become a master of anything. So, though I’ve kept abreast of what my development brethren on iOS and Android have been up to (and taken much notice of their market share compared to my platform of choice), I have only dabbled to insignificant measure with either.

I would like to give a shout to my buddies Mike and Brad who have entertained me at length with conversations and code comparisons on both iOS and Android, respectively, as I work on Windows Phone.

But there’s a cross-over class now – highly functional, feature-rich and, better still, it’s “native” to the development experience I know and love in Visual Studio.

My previous comparison was quite jagged; the Visual Studio Express SKU for Windows Phone is free and installs with a double click. “Hello World” is literally seconds away, post-installation when you’re cutting a Windows Phone app. But, when I last tried Android development with Eclipse, there were several downloads, patches, a video card update (yes, seriously, for my L502X) and numerous animal sacrifices required to get the development environment and emulator running.  And I really like my cat, so that didn’t go so well.

Enter into the mix Xamarin’s solution to building apps, with a twist that .Net developers are going to love.

I’m Going to Need a Few Things

From the get-go, the Xamarin install experience is smart and well-informed. People still make bad installers in 2014, but I can’t accuse Xamarin of that. Like any good citizen, this one knows what it needs to know to get your PC up-and-running. A quick inventory to avoid downloading the parts you already have, then it’s off to cyberspace to fetch the bits. Grab a coffee.

After pulling about 1.5GB down (thank goodness for fast interwebs) the installer runs without much prompting and preps your box with the goods.

Compared to my last experience? So far, this is aces, baby. Each of the installed target platforms even pops up web pages corresponding to the latest version in the Xamarin Developer Center. No errors, only confirmations. Seamless install.

I open up Visual Studio and from my File –> New Project experience I get this:


Creating the project gives me a prompt for my Xamarin credentials, which then activates my subscription.


Visual Studio is well equipped to give me the lay of the land through the Solution Explorer. You can see the project layout, look at files that make up the solution and even drill into classes to get at the method level-of-detail. I see some interesting bits and drill in.


I do the most natural thing in the world to any dev familiar with Visual Studio and hit F5. I want to see what this baby does. I get the comically honest message:


You are about to launch the MonoForAndroid_API_10 emulator. Google Android emulators are slow. Do you wish to proceed?

Yes. Yes, I do. But!! First I need to make sure that I’m using the correct emulator. In my case I had selected an Ice Cream Sandwich project template, so I needed to update my emulator selection to the MonoForAndroid_API_15 option. On my little 2 core i7 with 8GB RAM, the first-start for the virtual device and deployment took about 8 minutes, so, that previous message about taking a little time to get things going is pretty true. That said, the first run also needs to fire up the emulator, push the SDK out, then install the app and sync the assemblies. Seconds later, I have a working app. Hello World!


Bells and Whistles. Because Awesome.

I return to the IDE, press the Stop control for the debugger and dig into the code. I set a breakpoint on an interesting line of code and re-run the app.  Are you kidding me? Sweet! I’m debugging an Android application in Visual Studio.


That interesting line of code allowed me to assume something given the project structure I had previously seen, so I drilled into the folder called “Resources” where you wouldn’t be too surprised to find a “Layout” folder, followed by a “Main.axml”. Double-clicking this file gave me a well-equipped toolbox and a rich designer with draw and source modes and a convenient device selection for preview purposes.


Wrapping Up

“Guess what, Mom, I’m an Android developer!” That right there, that is not on the top of the list of phone calls I am going to make in 2014. There’s obviously lots more to familiarize myself with, but this establishes a coherent base: I have a great development experience from a trusted company (Xamarin and Microsoft announced partnership details) that is winning awards for the work they do, in the best integrated development environment PERIOD working with a language I love.

In the months ahead I’m going to be talking a lot more about Reactive Applications, and one of my goals is to make sure that I’m providing examples for cross-platform experiences. I’m working closely with my good friend Simon Timms to explore concepts related to RA on the Microsoft stack in the back end, but these applications are designed for scale and the reality is that most of your potential client base may exist on a different platform.

Sure, it’s easy to be nervous when you do it for the first time, but then you realize you were likely making a bigger deal out of it than necessary. When you’re well-equipped, there’s really no reason to feel any kind of anxiety over experimentation. Oh, and for the record, I’m still talking about Android development.

Next Steps

I’ll be writing soon on my other adventures, particularly with building out cloud-based solutions. These will really, really scale well to serve as the platform for client apps on all kinds of platforms, Android included. If you want to get in on the mix of things, be sure to prep yourself with the following:

  1. Hit the Xamarin web site and sign up for your trial. #WorthIt.
  2. Get familiar with your target: Android design specs are readily available.
  3. Check out the excellent starter community on Xamarin’s site. Docs, recipes, tutorials – all in the context you choose, xplat or platform-specific.

MVA Jump Start – Windows Azure Web Sites Deep Dive

If you tuned into the MVA Jump Start for Windows Azure Web Sites, you’ll know that we covered a lot of ground in a short period of time. I promised to share all the resources I mentioned and all the code that I shared throughout the day, so here it is!

If You Haven’t Seen the Session…

You can watch it on Microsoft Virtual Academy on demand, then follow along with the resources below.

Know Thy Tools

Continuous Deployment

Go-Live Checklist

Lightning Round

Also, you can get all the code I was demo’ing here:

MisterJames on GitHub

Next Steps…More MVA!

If you haven’t already done so you can register for Microsoft Virtual Academy here. As well, here are some courses I recommend (along with my session, of course):

Cheers, and happy coding!

Windows Azure Web Sites – My MVA Recording Experience in Redmond

Today I had the pleasure and privilege of recording a session – 7 modules altogether – covering Windows Azure Web Sites with Tejaswi Redkar. The material was for a course provided by Microsoft Virtual Academy and recorded live in front of a virtual audience of nearly 1,000 folks from 82 countries around the world. MVA has over 1.5 million subscribers worldwide, so this was a big audience!

I had excellent support from Sangeeta who lined up our session, a great time with Tej who co-presented with me, and tons of help from Danny and Garry who produced the whole event and gave us wonderful feedback throughout the day. It was a great time, I learned a ton about the process, and I can’t wait for an opportunity to do another session.

Virtual Peeps are Awesome

I couldn’t believe the participation from the virtual audience! There were tweets going out almost all day and lots of activity in the chat room. This is a great way to learn, IMO, with other peers asking questions while the session is going on and some great experts on hand to field questions from the viewers while we shelled out the information on WAWS.

Help from the audience came in all kinds of forms. When I asked folks to fill out a survey with me to demonstrate SignalR running in the cloud, we had over 200 responses!


Even when I wasn’t asking for help, the viewers were sharp enough (and paying close enough attention!) to catch URLs that were being used and jump into the demos themselves. #Awesomesauce


And when things went wrong – we only had one demo that didn’t work! – people were still giving me a hand after the session. I had a Node.js + Mongo DB setup that gave me a 500 when I deployed as one of the last demos of the day, but before I got back to the hotel, someone had found the cure and posted the details – refreshing the page!


The Studio Staff Were a Treat to Work With

If you’ve seen me present live, you know I’m an animated speaker. It was all I could do to stay in my seat for 6 hours of content! Barry and Danny were the producers/recording engineers who executed the live production and, in a tight space with limited budget, did an awesome job of making Tej and me look like we knew what we were doing. They were a great feedback loop, giving advice, helping us make changes as we went through the day and keeping us on schedule.

As presenters we got to use a couple of ginormous multi-touch screens and had to switch back and forth between the presentation and the slide deck, but Barry and Danny made it look good and I think we avoided blinking screens of fail for the whole day.

The Microsoft Campus is Pretty Darn Cool

Best part of the experience was walking around knowing that everyone there was smarter than me. Very humbling to get to mix with the folks that develop the tools upon which I make my bread and butter, so I soaked in as much as I could. Lunch at The Commons (actually, I think it was called The Mixer), a lap around the Visitor Center, a trip to the company store and of course getting into the recording studio in downtown Redmond.


The Commons (or Mixer?) is a collection of shops, services, food courts and lounge areas for people to relax. There was a live jazz band playing at one end of the building which was pretty cool, and a great selection of eats to choose from.

I was really impressed by how seriously they seemed to take environmental responsibility and even nutrition. The cafeteria where I had breakfast had “less” and “more” indicators on all the food items to help people make healthy choices, and everything you ate with – cups, lids, plates, even the utensils – was compostable, right down to the bags they went into. Cool stuff.

And you get a sense about how global and how far reaching the company is. I come from a town of about 40,000 people; the MS Campus has over 40k employees working on the grounds, not counting contractors and visitors. One of the boardrooms that I was in was a designated global security/event response center, where I presume smart people dealing with serious issues might sometimes convene.  People from all kinds of cultures and backgrounds are making cool things happen and including folks like me. Pretty darn cool.

If you’re really into cars, I suppose visiting the factory where yours was built would be pretty epic. That’s pretty much what I experienced here on this trip, and I can’t wait to come back. Plus, it’s so warm here (compared to the –40 weather I came from!).

Once again, thanks to all who participated in the day’s events, helped with the demos online and made the day a success. I hope you all get a chance to bring some awesome back to your team, wherever you work.

If you want to track down the session on the MVA website, check it out here.

Cheers, and happy coding!

Windows Azure Web Sites on MVA

The Windows Azure team has accelerated their deployment cadence, and to be honest, if you happened to look away for nine months, it’s like a whole ‘nother baby to deal with. For those not building solutions on WAWS, you’d be surprised at how much has changed, improved and increased in flexibility.


I’m privileged to get the chance to break it all down for you with Tejaswi Redkar, Director of Worldwide Communities at Microsoft. Join us live on Microsoft Virtual Academy on Thursday, December 12th (register here).

We’re bringing something to the table for everyone; newcomers to WAWS will get a peek at the Azure portal interface and gain a better understanding of the inner workings. Beyond the basics, we dive deep into everything from initial and continuous deployment right through to scaling your site up to support traffic growth. We also look at development off the .Net stack, including Node.js, and show you how to integrate with other aspects of Azure from your site. Here’s the day at a glance:

  • Web Sites Overview
  • Know Thy Tools
  • Continuous Deployment
  • Scaling and Configuration
  • Go Live Checklist
  • Case Studies – Scale and Deployment
  • Lightning Round

Hope to catch you at the event…it’s free and easy to attend online!


Working With IAuthenticationFilter in the MVC 5 Framework

When implementing a custom authentication filter it’s important to know where in the pipeline your filter is invoked; if your purpose is to prevent unauthorized access to a controller action, be sure to implement your credentials verification early enough in the process. This post walks you through the creation of a basic authentication filter and shows the correct method in which to do the check. Good news: it’s easy!

We can learn a lot about the new IAuthenticationFilter interface by implementing one and seeing where it fits in the MVC pipeline.  Before we get started, let’s first remember that authentication and authorization are separate concerns in your application, so this filter is a welcome little addition.

Authentication is where a user provides credentials to access a resource, whereas authorization allows access to particular resources based on properties of the user’s identity. In previous versions of the MVC Framework we had the AuthorizeAttribute, which could be used to cause a redirect if you were unauthenticated, but it’s also true that it blurred the lines a little around auth & auth. 

Creating a Basic Do-Nothing Filter

Now that Authentication can more easily stand alone, the process to implement it is pretty simple:

  • Add a class to your project called NewAuthenticationFilter
  • Inherit from ActionFilterAttribute and IAuthenticationFilter
  • Implement the interface (click on the interface name and press Ctrl+. and hit enter)
  • Throw a couple of debug statements in so that we can set breakpoints, remembering to add the diagnostics namespace to your using statements

Just a few quick side notes on the above steps:

  • I usually stick my filters into a Filters folder in my project (or into a separate DLL project to keep them reusable) to try to keep things clean
  • We inherit from ActionFilterAttribute so that we can use it as an attribute on our actions; just using IAuthenticationFilter isn’t enough

Your final class should look like the following:

public class NewAuthenticationFilter : ActionFilterAttribute, IAuthenticationFilter
{
    public void OnAuthentication(AuthenticationContext filterContext)
    {
        Debug.WriteLine("OnAuthentication");
    }

    public void OnAuthenticationChallenge(AuthenticationChallengeContext filterContext)
    {
        Debug.WriteLine("OnAuthenticationChallenge");
    }
}

With that in place, go to the Index action on the HomeController and add the attribute to the method. This will let the events fire as the request is processed. Also, set a breakpoint on the return statement. Your action should look like this:

[NewAuthenticationFilter]
public ActionResult Index()
{
    return View(); // set your breakpoint here
}

Great! Let’s hit F5 and see what happens:

  • The OnAuthentication method is invoked, press F5 to continue
  • The action is executed, press F5 to continue
  • The OnAuthenticationChallenge is called last

Perfect, some of how we can use this is now becoming apparent. First of all, it’s important to note that the action can be fully executed before the OnAuthenticationChallenge is executed. This means that if you’re intending on preventing any DB writes or calls to something mutable or executable during the action, you’ll need to implement logic in OnAuthentication method to prevent this (we’ll get to that in a minute).

Second, both methods allow you to evaluate request information and set properties of the response, such as the Result (an ActionResult), but remember that the Action itself has the ability to set some of these properties and might overwrite what you set up in OnAuthentication (which, again, is called before the index action method in our example).

Finally, remember that this is just one part of the pipeline. The Result of the action hasn’t yet been executed when OnAuthenticationChallenge is complete, so something like a View hasn’t been rendered, or a FileResult that would load data from disk hasn’t yet been called.

But Wait! What is This All For?

I’m so glad you asked. Here’s the thing…Debug statements aren’t that interesting and don’t really show you how this can come into context. The user passes through the above code and our action is executed, as is our result, so the user gets to see whatever they requested come back in their browser. Let’s update our code and take another look at how our application might use this.

An authentication filter doesn’t really do much for us if it’s not, oh, say, filtering for authentication, so let’s start by checking to see if our user is actually presenting some kind of credentials.

public void OnAuthentication(AuthenticationContext filterContext)
{
    if (!filterContext.Principal.Identity.IsAuthenticated)
    {
        filterContext.Result = new HttpUnauthorizedResult();
    }
}

This is the most basic form of a check. “The identity of the principal is not authenticated, so set the result to unauthorized, an HTTP 401 status code.” Now we’re cooking with peanut oil and our 401 will not let the user access the action until they satisfy whatever arguments you’d like them to fulfill.  To that end, you don’t only need to use the principal identity, you can use whatever you like, such as a tie-in to a third-party SSO provider in a mixed identity environment.

Next, we have to deal with the fact that someone who is unauthenticated tried to access a resource we were protecting. The way the Authorize attribute worked was just by letting that 401 we set above flow through the framework. In an MVC application the default mechanism for authentication is Forms, for which there is a default account controller and corresponding views added to our project. Thus, by simply letting the user “fall through” here, they will end up at the login page.  For the following code in your challenge method:

public void OnAuthenticationChallenge(AuthenticationChallengeContext filterContext)
{
    // redirect the user to some form of log in
}

…the user will carry that 401 status through and will ultimately be bounced back to the site’s login page.


So, in its simplest form as we’ve done above, authentication filters give us a way to remove Authorize attributes (separation of concerns) and mimic the behavior we were previously leveraging to force users to log in. But there’s more, right?

Practical Uses of Our New Custom Authentication Bestie

We get pretty good support with the Authorize attribute and our ability to create custom filters already, but if we agree that separation of concerns is important, we get a few benefits with the new Authentication filters and the timing with which their methods are fired.

Isolated Custom Authentication

One of the first possibilities is that you can now set up an action or controller with a custom authentication mechanism, rather than using whatever the default is for the site. If you have an external provider (perhaps something legacy) that you can proxy credentials to, this might be one way you could approach it.

Customizing UI For Login

There may be some cases where you want to do something like augment credentials, or perhaps configure part of the application, or even let users log in with a single-use code. OnAuthenticationChallenge lets you configure the result, so you can route the user to whatever view you like in your application.
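As a sketch of that idea, the challenge method below routes unauthenticated users to a hypothetical single-use-code login action. The Account/CodeLogin names are invented for illustration; the 401 check mirrors the OnAuthentication logic from earlier in the post.

```csharp
using System.Web.Mvc;
using System.Web.Mvc.Filters;
using System.Web.Routing;

public class CodeLoginAuthenticationFilter : ActionFilterAttribute, IAuthenticationFilter
{
    public void OnAuthentication(AuthenticationContext filterContext)
    {
        // Same basic check as before: no credentials, no entry.
        if (!filterContext.Principal.Identity.IsAuthenticated)
        {
            filterContext.Result = new HttpUnauthorizedResult();
        }
    }

    public void OnAuthenticationChallenge(AuthenticationChallengeContext filterContext)
    {
        // Only issue a challenge if something upstream flagged a 401.
        if (filterContext.Result is HttpUnauthorizedResult)
        {
            // Send the user to our custom single-use-code login page.
            filterContext.Result = new RedirectToRouteResult(
                new RouteValueDictionary(new { controller = "Account", action = "CodeLogin" }));
        }
    }
}
```

Because the challenge only fires when a 401 was set, authenticated users never see the custom login view.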

Things to Remember

I actually don’t have a lot of caveats to suggest here, but perhaps the most important bit to remember is that if you let IAuthenticationFilter.OnAuthentication fall through without setting a 401, you’re letting the user into the action. Your OnAuthenticationChallenge will still fire, but remember from the breakpoints set above (when we were just spitting out debug lines) that the action will fire and your code will be executed. Logging the user in at this point happens after the invocation…a worst case scenario would be on a POST where an update is happening.

Finally, remember that Authentication wraps Authorization and action execution, and the OnAuthentication method precedes your first opportunity to interact with the request pipeline when you use ActionFilterAttribute inheritance on its own.

Next Steps

It’s pretty easy to try this out! Why not give the following a try?

  • File –> New –> Project and give the exercise above a try
  • Think of other uses or scenarios where one could leverage this
  • Break out your custom authentication filters built on the old objects in older versions of the MVC Framework

Happy coding!

Thanks to good friend David Paquette who reviewed and suggested updates on this post.

More Love for JavaScript from IntelliSense – Featured on MSDN

Microsoft has improved file change and script tracking and added tooling in Visual Studio 2013. You can read more about it in my article on MSDN, where I was honored to be selected for a guest blog post.


Check out the full article and see how VS 2013 gives you more options and tools for keeping your IntelliSense in sync.

SignalR Resources

I’ve had the pleasure over the last few weeks of carrying on the discussion around real time web with folks in Calgary, Kitchener/Waterloo and most recently Saskatoon. It’s clear, from the questions that are coming up, that there is a real desire to help improve the user experience in mobile and business apps – it’s not just about web.

I’ve put together some of the resources that I presented as well as the code that I wrote during the sessions. Here they are:

A big thanks to the user groups and to Prairie Dev Con for having me out, and thanks to everyone who joined in the conversation!