Resolving Build Errors When Targeting Multiple Framework Versions

Here’s a tip that I hope can help some other folks when working on a solution that targets multiple versions of the .Net Framework.

As a developer, I tend to have a short memory and flush it often. When I start using framework features, I let myself easily move on, mentally, from the time when said features didn’t exist. 2010 is sooo last year. Or four years ago, but who’s keeping count.

This morning, I started getting the following error while working on Glimpse, where the primary project is authored in .Net 4.5:

System.Enum does not contain a definition for ‘TryParse’.

image

A quick check on MSDN shows that System.Enum does indeed contain a definition for TryParse, but only in .Net 4.0 and higher.

 image

If you peek back at the Error List screen cap, you’ll see the hint to what was going on in the “Project” column. Namely, one of the projects in the Glimpse solution used for backwards compatibility targets an older version of the .Net Framework.

So, this is actually pretty easy to resolve, and I have two obvious choices:

  1. I can test to see if the NET35 compilation symbol is defined and write two copies of the code, one with, one without the use of TryParse, or,
  2. Just use the cross-framework supported approach from way back in the day (2010), where we would wrap up the Enum.Parse call in a try-catch block.

For brevity of code, I chose #2.

    try
    {
        order = (ScriptOrder)Enum.Parse(typeof(ScriptOrder), orderName);
    }
    catch (ArgumentException)
    {
        return new StatusCodeResourceResult(404, string.Format("Could not resolve ScriptOrder for value provided '{0}'.", orderName));
    }
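For comparison, the first option would look something like the sketch below. This assumes a NET35 compilation symbol is defined for the 3.5 build of the project (the symbol name is whatever your project configuration defines, so treat it as an assumption):

```csharp
ScriptOrder order;
#if NET35
// .Net 3.5 has no Enum.TryParse, so guard Enum.Parse with a try-catch
try
{
    order = (ScriptOrder)Enum.Parse(typeof(ScriptOrder), orderName);
}
catch (ArgumentException)
{
    return new StatusCodeResourceResult(404, string.Format("Could not resolve ScriptOrder for value provided '{0}'.", orderName));
}
#else
// .Net 4.0+ can use TryParse and avoid the exception entirely
if (!Enum.TryParse(orderName, out order))
{
    return new StatusCodeResourceResult(404, string.Format("Could not resolve ScriptOrder for value provided '{0}'.", orderName));
}
#endif
```

You can see why I went with #2: one code path instead of two near-identical ones to keep in sync.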

Should Glimpse drop support for .Net 3.5 down the road, this would be an easy pull request to update and make use of the new(ish) TryParse method.

Filed under the category of “things to keep fresh in your mind when working on open source”.

For more on multi-targeted solutions, you can check out this read on MSDN.

Changing the Namespace With Entity Framework 6.0 Code First Databases

Sometimes a refactoring of your project includes changing the namespaces used throughout your project. If you’re using Entity Framework 6.0, this type of change can have an impact on EF’s ability to detect the current status of your database. This article helps you to mitigate any conflicts and allows your migrations to stand as-is.

I cover a bit of background, but you can jump to the end to see three possible fixes.

Also, sending a thanks out to my good friend David Paquette for the review on this post.

My Beef With Existing Fixes

If you change namespaces and run into problems, you might see an error similar to the following:

An exception of type ‘System.Data.SqlClient.SqlException’ occurred in EntityFramework.dll but was not handled in user code. Additional information: There is already an object named ‘AspNetRoles’ in the database.

I’m not sure I would call this misleading, but it certainly doesn’t explain the problem in the clearest of terms. Here’s what I would prefer to see, particularly in the Additional Information section of the error message:

Additional information: The namespace for the Entity Framework migration does not have any corresponding migrations in the target database. It is possible that your connection string is not configured correctly, that you are attempting to update a database that was created without migrations enabled, or that your namespace for your configuration class has been modified. Query the migrations table on database database_name to review current migration state.

Sure, it’s verbose, but it lets you in on what might be happening.

Specifically, migrations are tracked by ContextKey in the __MigrationHistory table and the key includes namespace information. When your namespace changes, you also need to update the records in the DB that correspond to migrations that have already been executed.
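You can inspect this for yourself. A query along these lines (table and column names as per EF6’s defaults) shows the ContextKey recorded for each applied migration:

```sql
SELECT [MigrationId], [ContextKey], [ProductVersion]
FROM [dbo].[__MigrationHistory]
ORDER BY [MigrationId]
```

If the ContextKey values carry your old namespace while your code now uses the new one, you’ve found the mismatch.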

My beef with existing fixes? Most of the time when you see similar errors come up the answer seems to ignore the fact that people might be using this stuff in production. Namely, the fix tends to be “drop your database and let the Framework rebuild it”, which, sure, I mean, it solves the problem. It’s just not good for business.

Behind the Scenes

For each change to your database tracked with a migration, a hash representing your model is computed and stored in order to detect the next set of changes that occur. As you execute the migration, the hash is added, along with the MigrationId and ContextKey, to the migrations table.

When you attempt to access your data through the DbContext and you’re using, for example, an initializer such as MigrateDatabaseToLatestVersion, the Framework will attempt to play catch-up and make sure the database reflects the current model in your application. To do this, it queries the database to see where the database thinks it’s at, and it combines what it finds there with information gathered via reflection over your configuration and context classes. You can see the queries that run if you capture the chatter with SQL Profiler:

image

And if you drill into the details you’ll see something like the following as the Framework tries to figure out where you’re at:

image

I’ve dashed out my namespace as this was work for a client, but you can see the root of the problem here. The Configuration class is in the Root_Namespace.Migrations namespace; if you move the class to a new namespace, this query is modified to reflect it, but previous migrations stored in the database are not.

Your configuration is automatically created for you when you enable migrations; it’s a class that exists in a namespace which is based on the default namespace of your project. Root_Namespace.Migrations.Configuration is also the value that is written to the migrations table as the ContextKey.

That is our vector to correct the problem.

Building Out The Fix

The first and easiest approach is one that works locally, and could meet your needs if you have access to all affected databases. All you have to do is execute a modified version of the SQL script below:

    UPDATE [dbo].[__MigrationHistory]
    SET [ContextKey] = 'New_Namespace.Migrations.Configuration'
    WHERE [ContextKey] = 'Old_Namespace.Migrations.Configuration'

You should be golden at that point. However, this won’t work if you’re doing continuous integration with a team of developers, or if you have a continuous deployment strategy in place. For that, the solution lies in adding the following code to the constructor of your database Configuration class:

    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
        this.ContextKey = "Old_Namespace.Migrations.Configuration";
    }

This strategy is more durable and will help prevent any of your teammates (or production servers) from running into the same issue. Of course, this keeps your old namespace hanging around in your migrations table, but it does the trick.

A potentially more elegant solution would be to create a database initializer that takes the old and new namespace into account and corrects the migrations table if necessary. This could end up being considerably more work, so you’d have to evaluate if it makes sense for your project and timeline. You can reference an example implementation here:

MigrateDatabaseToLatestVersion Source on CodePlex
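As a rough sketch – the class, context and namespace names here are placeholders of my own, not a drop-in implementation – such an initializer might correct the history table first, then hand off to the stock migrator:

```csharp
// Sketch only: rewrites ContextKey values recorded under the old
// namespace before running the normal catch-up migration.
public class NamespaceFixingInitializer : IDatabaseInitializer<MyContext>
{
    public void InitializeDatabase(MyContext context)
    {
        if (context.Database.Exists())
        {
            // Correct history rows written under the old namespace
            context.Database.ExecuteSqlCommand(
                "UPDATE [dbo].[__MigrationHistory] " +
                "SET [ContextKey] = 'New_Namespace.Migrations.Configuration' " +
                "WHERE [ContextKey] = 'Old_Namespace.Migrations.Configuration'");
        }

        // Then fall back to the usual behaviour
        var migrator = new MigrateDatabaseToLatestVersion<MyContext, MyConfiguration>();
        migrator.InitializeDatabase(context);
    }
}
```

You’d register it with Database.SetInitializer the same way you would the stock initializer, and it becomes a no-op once every environment has been touched.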

Cheers, and happy coding!

Scheduled Jobs in Windows Azure Web Sites

Last year I published a pretty good primer on Windows Azure Web Sites (available on Amazon as Windows Azure Web Sites), but the Azure team keeps coming out with new features. This post will walk you through the creation of scheduled jobs on Windows Azure Web Sites.

Tasks That Aren’t Part of Your Web Site’s UI

On-demand reports are a great feature, but they don’t give your users as-at reporting. Artifacts from operations on your site can use up disk space. Sometimes, you’d prefer to have a digest of information sent out, rather than a notification on every interesting event.

If you need to run some kind of process on a regular interval, a solution might be this handy feature of Azure Web Sites: scheduled jobs. These types of requirements can sometimes be met with BI tools, but they might cover any kind of activity, whether or not it’s associated with data, such as:

  • a nightly report
  • a cleanup script
  • sending an email
  • pushing an SMS message hourly to your phone for new account signups

For the purpose of this article, I’ve created a console app that runs some reports. Later, you’ll see similar output in the logs from the cloud-run copy of this application.

image

Yes, my reports start at 0. Don’t judge, my brothers and sisters.
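The console app itself is nothing fancy. A minimal sketch (the report names are made up for illustration) would be along these lines:

```csharp
using System;

class ReportRunner
{
    static void Main()
    {
        // Anything written to the console ends up in the Web Job's run logs
        Console.WriteLine("Starting report run at {0}", DateTime.UtcNow);

        Console.WriteLine("Running report 0...");
        // ...query your data, build the report, email it out, etc...
        Console.WriteLine("Report 0 complete.");
    }
}
```

Console output is all you need for logging here, which keeps the job itself dead simple.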

I’m using an EXE here, but you can use any of the following:

  • Windows CMD (.cmd, .bat or .exe)
  • Node.js (.js)
  • PowerShell (.ps1)
  • PHP Scripts (.php)
  • Python (.py), or
  • Bash (.sh)

Your script or executable needs to be in a ZIP file which can also include any other files you need for processing such as configuration, images to embed in email messages, etc.

Configuring the Job

Click on the Web Jobs tab in the web site’s dashboard, then click “Add” from the command bar at the bottom of the site or from the dashboard page that loads up.

image

If you haven’t already signed up for the Schedule preview program, you won’t yet be able to create scheduled jobs, but it’s trivial to set up and the link is provided on the Web Job screen.

image

Follow the link and complete the sign up; it’s a straightforward button click.

image

With that in place you can continue with creating the job. I wrap up all my needed files into a zip, and then I pick my options on the first page of the job setup:

image

Finally, I create my schedule and configure it to run every day for a year:

image

Your job is added to the list and then runs on schedule, or you can run it on demand from the command bar. After executing, logs are added to your account:

image

Clicking the link brings you to the history of job runs where any console output is available for viewing. Error logs, should any arise, are also saved out here.

image

You can see the output by drilling into the log, which is unsurprisingly similar to what we saw from our console  output at the start of this article.

image

Understanding Job Storage Requirements

The job ZIP that you create can be up to 200MB and will be stored in your web site’s corresponding file system. Logs are also saved out, albeit in a slightly different path.

Job scripts are saved at: D:\home\site\wwwroot\App_Data\jobs\triggered\JOB_NAME

Job logs are saved out at: D:\home\data\jobs\triggered\JOB_NAME

This is actually really great info to know, because with your job script saved in your application’s App_Data directory, you have the ability to manipulate the configuration files (if any) for your script.

Keep in mind that the storage needs for your jobs are factored into your web site’s storage restrictions, so jobs that generate output need to be monitored to make sure you’re not exceeding your quota.

Getting at the Raw Files

There is a great – and growing – administrative back door to your Windows Azure Web Site called Kudu that you may not be aware of. It helps with all kinds of things like SCM check-in hooks, deployment tasks, or viewing logs. You can reach it at this location:

http://your_site_name.scm.azurewebsites.net

It’s basically the URL that you use to access the host on azurewebsites.net, but with scm plugged into the subdomain. From the Kudu menu there is a debug console that gives you the ability to dig through your files.

image

Wrapping Up & Next Steps

Windows Azure Web Sites now easily allows you to create and manage jobs that can be executed on demand or on a schedule. You ZIP up your files, feed them into the site and then configure the execution times for each of your scripts through the dashboard for your site.

Now go solve some scheduled job need, and happy coding!

My First Time: A Non-Android Developer’s Tale of Development with Xamarin

Even though I largely sit on the Microsoft technology stack, it would be unwise to leave development on other stacks unexplored. The old adage – jack of all trades, master of none – used to plague me as a younger developer as I tried to get my hands into everything and found it hard to become a master of anything. So, though I’ve kept abreast of what my development brethren on iOS and Android have been up to (and taken much notice of their market share compared to my platform of choice), I have only dabbled to an insignificant measure with either.

I would like to give a shout to my buddies Mike and Brad who have entertained me at length with conversations and code comparisons on both iOS and Android, respectively, as I work on Windows Phone.

But there’s a cross-over class now – highly functional, feature-rich and, better still, it’s “native” to the development experience I know and love in Visual Studio.

My previous comparison was quite jagged; the Visual Studio Express SKU for Windows Phone is free and installs with a double click. “Hello World” is literally seconds away, post-installation when you’re cutting a Windows Phone app. But, when I last tried Android development with Eclipse, there were several downloads, patches, a video card update (yes, seriously, for my L502X) and numerous animal sacrifices required to get the development environment and emulator running.  And I really like my cat, so that didn’t go so well.

Enter into the mix Xamarin’s solution to building apps, with a twist that .Net developers are going to love.

I’m Going to Need a Few Things

image

From the get-go, the Xamarin install experience is smart and well-informed. People still make bad installers in 2014, but I can’t accuse Xamarin of that. Like any good citizen, this one knows what it needs to know to get your PC up-and-running. A quick inventory to avoid downloading the parts you already have, then it’s off to cyberspace to fetch the bits. Grab a coffee.

After pulling about 1.5GB down (thank goodness for fast interwebs) the installer runs without much prompting and preps your box with the goods.

Compared to my last experience? So far, this is aces, baby. Each of the installed target platforms even pops up web pages corresponding to the latest version in the Xamarin Developer Center. No errors, only confirmations. Seamless install.

I open up Visual Studio and from my File –> New Project experience I get this:

image

Creating the project gives me a prompt for my Xamarin credentials, which then activates my subscription.

image

Visual Studio is well equipped to give me the lay of the land through the Solution Explorer. You can see the project layout, look at files that make up the solution and even drill into classes to get at the method level-of-detail. I see some interesting bits and drill in.

image

I do the most natural thing in the world to any dev familiar with Visual Studio and hit F5. I want to see what this baby does. I get the comically honest message:

image

You are about to launch the MonoForAndroid_API_10 emulator. Google Android emulators are slow. Do you wish to proceed?

Yes. Yes, I do. But!! First I need to make sure that I’m using the correct emulator. In my case I had selected an Ice Cream Sandwich project template, so I needed to update my emulator selection to the MonoForAndroid_API_15 option. On my little 2 core i7 with 8GB RAM, the first-start for the virtual device and deployment took about 8 minutes, so, that previous message about taking a little time to get things going is pretty true. That said, the first run also needs to fire up the emulator, push the SDK out, then install the app and sync the assemblies. Seconds later, I have a working app. Hello World!

image

Bells and Whistles. Because Awesome.

I return to the IDE, press the Stop control for the debugger and dig into the code. I set a breakpoint on an interesting line of code and re-run the app.  Are you kidding me? Sweet! I’m debugging an Android application in Visual Studio.

image

That interesting line of code allowed me to assume something given the project structure I had previously seen, so I drilled into the folder called “Resources” where you wouldn’t be too surprised to find a “Layout” folder, followed by a “Main.axml”. Double-clicking this file gave me a well-equipped toolbox and a rich designer with draw and source modes and a convenient device selection for preview purposes.

image

Wrapping Up

“Guess what, Mom, I’m an Android developer!” That right there, that is not on the top of the list of phone calls I am going to make in 2014. There’s obviously lots more to familiarize myself with, but this establishes a coherent base: I have a great development experience from a trusted company (Xamarin and Microsoft announced partnership details) that is winning awards for the work they do, in the best integrated development environment PERIOD working with a language I love.

In the months ahead I’m going to be talking a lot more about Reactive Applications, and one of my goals is to make sure that I’m providing examples for cross-platform experiences. I’m working closely with my good friend Simon Timms to explore concepts related to RA on the Microsoft stack in the back end, but these applications are designed for scale and the reality is that most of your potential client base may exist on a different platform.

Sure, it’s easy to be nervous when you do it for the first time, but then you realize you were likely making a bigger deal out of it than necessary. When you’re well-equipped, there’s really no reason to feel any kind of anxiety over experimentation. Oh, and for the record, I’m still talking about Android development.

Next Steps

I’ll be writing soon on my other adventures, particularly with building out cloud-based solutions. These will really, really scale well to serve as the platform for client apps on all kinds of platforms, Android included. If you want to get in on the mix of things, be sure to prep yourself with the following:

  1. Hit the Xamarin web site and sign up for your trial. #WorthIt.
  2. Get familiar with your target: Android design specs are readily available.
  3. Check out the excellent starter community on Xamarin’s site. From docs, to recipes, to tutorials, all in the context you choose – xplat or platform-specific.

MVA Jump Start – Windows Azure Web Sites Deep Dive

If you tuned into the MVA Jump Start for Windows Azure Web Sites, you’ll know that we covered a lot of ground in a short period of time. I promised to share all the resources I mentioned and all the code that I shared throughout the day, so here it is!

If You Haven’t Seen the Session…

You can watch it on Microsoft Virtual Academy on demand, then follow along with the resources below.

Know Thy Tools

Continuous Deployment

Go-Live Checklist

Lightning Round

Also, you can get all the code I was demo’ing here:

MisterJames on GitHub

Next Steps…More MVA!

If you haven’t already done so you can register for Microsoft Virtual Academy here. As well, here are some courses I recommend (along with my session, of course):

Cheers, and happy coding!

Windows Azure Web Sites - My MVA Recording Experience in Redmond

Today I had the pleasure and privilege of recording a session – 7 modules altogether – covering Windows Azure Web Sites with Tejaswi Redkar. The material was for a course provided by Microsoft Virtual Academy and recorded live in front of a virtual audience of nearly 1,000 folks from 82 countries around the world. MVA has over 1.5 million subscribers worldwide, so this was a big audience!

I had excellent support from Sangeeta who lined up our session, a great time with Tej who co-presented with me, and tons of help from Danny and Garry, who produced the whole event and gave us wonderful feedback throughout the day. It was a great time, I learned a ton about the process, and I can’t wait for another opportunity to do another session.

Virtual Peeps are Awesome

I couldn’t believe the participation from the virtual audience! There were tweets going out almost all day and lots of activity in the chat room. This is a great way to learn, IMO, with other peers asking questions while the session is going on and some great experts on hand to field questions from the viewers while we shelled out the information on WAWS.

Help from the audience came in all kinds of forms. When I asked folks to fill out a survey with me to demonstrate SignalR running in the cloud, we had over 200 responses!

survey

Even when I wasn’t asking for help, the viewers were sharp enough (and paying close enough attention!) to catch URLs that were being used and jump into the demos themselves. #Awesomesauce.

chat

And when things went wrong – we only had one demo that didn’t work! – people were still giving me a hand after the session. I had a Node.js + Mongo DB setup that gave me a 500 when I deployed as one of the last demos of the day, but before I got back to the hotel, someone had found the cure and posted the details – refreshing the page!

todo

The Studio Staff Were a Treat to Work With

If you’ve seen me present live, you know I’m an animated speaker. It was all I could do to stay in my seat for 6 hours of content! Barry and Danny were the producers/recording engineers who executed the live production and, in a tight space with limited budget, did an awesome job of making Tej and me look like we knew what we were doing. They were a great feedback loop, giving advice, helping us make changes as we went through the day and keeping us on schedule.

As presenters we got to use a couple of ginormous multi-touch screens and had to switch back and forth between the presentation and the slide deck, but Barry and Danny made it look good and I think we avoided blinking screens of fail for the whole day.

The Microsoft Campus is Pretty Darn Cool

Best part of the experience was walking around knowing that everyone there was smarter than me. Very humbling to get to mix with the folks that develop the tools upon which I make my bread and butter, so I soaked in as much as I could. Lunch at The Commons (actually, I think it was called The Mixer), a lap around the Visitor Center, a trip to the company store and of course getting into the recording studio in downtown Redmond.

image

The Commons (or Mixer?) is a collection of shops, services, food courts and lounge areas for people to relax. There was a live jazz band playing at one end of the building which was pretty cool, and a great selection of eats to choose from.

I was really impressed by how seriously they seemed to take environmental responsibility and even nutrition. The cafeteria where I had breakfast had “less” and “more” indicators on all the food items to help people make healthy choices, and everything that you used to eat – cups, lids, plates, even the utensils – was compostable, as were the bags they went into. Cool stuff.

And you get a sense about how global and how far reaching the company is. I come from a town of about 40,000 people; the MS Campus has over 40k employees working on the grounds, not counting contractors and visitors. One of the boardrooms that I was in was a designated global security/event response center, where I presume smart people dealing with serious issues might sometimes convene.  People from all kinds of cultures and backgrounds are making cool things happen and including folks like me. Pretty darn cool.


If you’re really into cars, I suppose visiting the factory where yours was built would be pretty epic. That’s pretty much what I experienced here on this trip, and I can’t wait to come back. Plus, it’s so warm here (compared to the –40 weather I came from!).

Once again, thanks to all who participated in the day’s events, helped with the demos online and made the day a success. I hope you all get a chance to bring some awesome back to your team, wherever you work.

If you want to track down the session on the MVA website, check it out here.

Cheers, and happy coding!