Workaround: NuGet Packages Failing to Download in Visual Studio 2015 RTM

I haven’t figured out a common theme yet, but certain packages are failing to restore when you attempt to install them from the NuGet primary feed via the project.json file in Visual Studio 2015. Thanks to Brock Allen for confirming I wasn’t going insane.

A couple of things I’ve discovered:

  • This seems to be more common for prerelease packages
  • It seems to work if the package has a previous stable release (i.e., a non-prerelease version)

As a workaround, you can add the packages manually via the dialog in Visual Studio; just make sure you check that pre-release flag:

[image: NuGet package manager dialog with the pre-release option enabled]

If that doesn’t work for you (sometimes the package above doesn’t even appear in my feed), you can add an alternate package source with another NuGet feed, as I’ve done here with Autofac’s nightly build feed:

[image: adding Autofac’s nightly build feed as an alternate package source]

The other thing is that once you get a package installed in your system cache, NuGet will resolve it from there, which I imagine makes this harder to triage for anyone trying to figure out what’s going on.

I’m seeing various confirmations of this on Twitter:

[images: tweets confirming the issue]

With NuGet 3 being released (and being part of VS 2015), I think some package authors are unsure if it’s their problem or what the case may be. Depending on how you come at it, it’s possible that you can still get the package, but I would say it seems unpredictable right now.

Response from PostSharp.net is not a Valid NuGet v2 Service Response

After installing PostSharp on my machine for a project (I did the MSI install), I started getting errors during the package restore that ended up blocking my builds. They looked a lot like this:

Error: FindPackagesById: EntityFramework.Core Response from https://www.postsharp.net/nuget/packages/FindPackagesById()?id='EntityFramework.Core' is not a valid NuGet v2 service response.

[image: package restore error output]

Now, an important note here: I’m on a machine that’s seen various updates and changes to VS 2015, and this was a version of PostSharp that wasn’t originally built for the RTM version of Visual Studio. So…this may be entirely circumstantial, but it’s what I ran into.

And it wasn’t just on that one package (others would give the same result) and it wasn’t just on one project. I tried to isolate this, but couldn’t find the source. Why was PostSharp getting in the way of my package restore? Even using DNU from the command line, after I explicitly uninstalled it? I started setting compiler variables to block PostSharp on those projects, but that got frustrating quickly, so I resorted to uninstalling everything I could find of it.

After the uninstall I was still stumped: the same errors kept appearing. With the help of my friend Donald Belcham, I was able to find traces of PostSharp still on my machine, located in the system-wide NuGet package source feed configuration:

[image: the PostSharp entry in the NuGet package sources configuration]

Unchecking that box above does the trick.

Might be an edge case if you run into this, but if you do, and this helps, consider buying Don a scotch!

Happy coding!

Day 3 – Extracting a Service to Interact with Azure Table Storage

In this series we are looking at the basic mechanics of interacting with cloud-based Table Storage from an MVC 5 Application, using the Visual Studio 2013 IDE and Microsoft Azure infrastructure.

Our controllers are not supposed to be about anything more than getting models together so that our views have something to present. When we start mixing concerns, our application starts to become very difficult to test, controllers start getting quite complex and the difficulty in maintaining our application can skyrocket.

Let’s avoid that.

If you want to follow along, please pop into Azure and make sure you’ve got an account ready to go. The trial is free, so get at it!

Defining Our Service

Let’s look at the operations we’re going to need, from what we’ve already implemented, and knowing what we’re planning from our outline:

  • Insert a record
  • Get a filtered set of records
  • Update a record
  • Delete a record

Cool beans. At first blush it seems like we’ve got a pretty simple set of concerns, but notice that I didn’t include things like “connecting to Azure”, “creating table references” or “reading configuration information”, as those are all separate concerns that our controller doesn’t actually care about.  Remember, we’re trying to isolate our concerns.

Hrm…so, manipulating records, adding, deleting, filtering, separating concerns from our business logic…this is starting to sound familiar, right? 

Use a repository to separate the logic that retrieves the data and maps it to the entity model from the business logic that acts on the model. The business logic should be agnostic to the type of data that comprises the data source layer. Source: MSDN.

So, we’re going to want to build something using the repository pattern. We’ll use that repository here in our application’s controllers, but in a larger project you might go even further to have an application services layer where you map between the domain models and the view models that we have in our views. All in, our interface might look like the following:

public interface ITableStorageRepository<T> where T : TableEntity
{
    void Insert(T entity);
    void Update(T entity);
    void Delete(T entity);
    IEnumerable<T> GetByPartition(string partitionKey);
}

The public interface simply gives us a way to do our CRUD operations and to treat a filtered result set as a collection, minimizing duplicate constructs and operations related to table queries. You can think of the CloudTableClient and TableQuery classes from the Azure SDK as part of a Data Mapper layer that enables us to build this abstraction.

Note: For the purpose of illustration, I’m going to continue to use TableEntity here, which doesn’t completely abstract the Azure Table Storage concern away from my controller. I understand that; in a real-world scenario, I would typically have a view model that is used in the MVC application and an intermediary service would handle mapping as required.

Implementing the Service

Leveraging this is going to be awesome, but we need to move some of the heavy-lifting out of our controller first.  Let’s start by creating a Repositories folder and adding a class called KittehRepository, which will of course implement ITableStorageRepository<KittehEntity>.

Don’t peek! As an exercise for the reader, use the interface noted above to craft your KittehRepository class. You should be able to find all the bits you need by exploring the objects already in use in the controller.

When you’re ready, here’s my version of the solution:

public class KittehRepository : ITableStorageRepository<KittehEntity>
{
    private readonly CloudTableClient _client;

    public KittehRepository()
    {
        var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        _client = storageAccount.CreateCloudTableClient();
    }

    public void Insert(KittehEntity entity)
    {
        var kittehTable = _client.GetTableReference("PicturesOfKittehs");
        var insert = TableOperation.Insert(entity);
        kittehTable.Execute(insert);
    }

    public void Update(KittehEntity entity)
    {
        var kittehTable = _client.GetTableReference("PicturesOfKittehs");
        var update = TableOperation.Replace(entity);
        kittehTable.Execute(update);
    }

    public void Delete(KittehEntity entity)
    {
        var kittehTable = _client.GetTableReference("PicturesOfKittehs");
        var delete = TableOperation.Delete(entity);
        kittehTable.Execute(delete);
    }

    public IEnumerable<KittehEntity> GetByPartition(string partitionKey)
    {
        var kittehTable = _client.GetTableReference("PicturesOfKittehs");
        var kittehQuery = new TableQuery<KittehEntity>()
            .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey));

        return kittehTable.ExecuteQuery(kittehQuery).ToList();
    }
}

One thing to note is that I’ve pushed most of the initialization up to the constructor, and I’ve not implemented any kind of seeding code. The table seeding that I illustrated in Day 1 is a concern that should be implemented outside of a repository, likely as part of a process that starts up the application in first-run scenarios, or something that would be run as part of a deployment to a test/QA environment.
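If you're curious what that first-run seeding might look like outside the repository, here's a rough sketch. It relies on CloudTable.CreateIfNotExists returning true only when the table was just created; the helper name and the KittehEntity constructor arguments here are assumptions for illustration:

```csharp
// Hypothetical startup helper; assumes the same "StorageConnectionString"
// setting used by the repository, and that KittehEntity exposes a
// (partitionKey, rowKey) constructor.
public static class KittehTableSeeder
{
    public static void EnsureSeeded()
    {
        var storageAccount = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        var client = storageAccount.CreateCloudTableClient();
        var kittehTable = client.GetTableReference("PicturesOfKittehs");

        // CreateIfNotExists returns true only on first creation,
        // so we can use it to detect a first run and seed initial data.
        if (kittehTable.CreateIfNotExists())
        {
            var entity = new KittehEntity("FunnyKittehs", "kitteh-001");
            kittehTable.Execute(TableOperation.Insert(entity));
        }
    }
}
```

You would call EnsureSeeded from your application startup or deployment script, keeping the repository itself free of that concern.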

Cleaning up Our Controller

I love this. The controller can now do what we need it to do. Here’s the complete class, with an added constructor that accepts a reference to the repository (we’ll wire that up shortly):

public class HomeController : Controller
{
    private readonly ITableStorageRepository<KittehEntity> _kittehRepository;

    public HomeController(ITableStorageRepository<KittehEntity> kittehRepository)
    {
        _kittehRepository = kittehRepository;
    }

    public ActionResult Index()
    {
        var kittehs = _kittehRepository.GetByPartition("FunnyKittehs");
        return View(kittehs);
    }

    [HttpPost]
    public ActionResult Index(KittehEntity entity)
    {
        _kittehRepository.Insert(entity);
        return RedirectToAction("Index");
    }

    public ActionResult About()
    {
        ViewBag.Message = "Your application description page.";

        return View();
    }

    public ActionResult Contact()
    {
        ViewBag.Message = "Your contact page.";

        return View();
    }
}

Notice how we’ve reduced the amount of code in this class significantly, to the point that anyone should be able to read it – with little or no exposure to Azure Table Storage – and still have a sense of what’s going on. We’ve taken our controller from over 50 lines of code (non-cruft/whitespace) to about 5.

Just to see how much clearer we’ve made things, do a “remove and sort” on your usings. You’ll notice that everything to do with Azure has all but disappeared; our repository has served its purpose!

Okay, so the repository is in place, and our controller is dramatically simplified. Now we need to do a bit of wiring to let the MVC Framework know that we’d like an instance of the class when the controller fires up. Here’s how.

Adding Dependency Injection

First, open the Package Manager Console (View –> Other Windows –> Package Manager Console) and type the following:

install-package Ninject.MVC5

The Ninject packages required for interoperation with the MVC Framework are installed, and you get a new class in App_Start called NinjectWebCommon. This class carries an assembly-level attribute that allows it to properly wire dependency injection up in your application at startup; you’ll see this at the top of the file.
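For the curious, the attributes at the top of NinjectWebCommon look something like the following (the exact namespace of your app will differ; this is the shape the Ninject.MVC5 package template generates, so treat it as illustrative):

```csharp
// WebActivatorEx runs these methods around the ASP.NET application lifecycle,
// which is how Ninject bootstraps itself without touching Global.asax.
[assembly: WebActivatorEx.PreApplicationStartMethod(
    typeof(MyApp.App_Start.NinjectWebCommon), "Start")]
[assembly: WebActivatorEx.ApplicationShutdownMethod(
    typeof(MyApp.App_Start.NinjectWebCommon), "Stop")]
```

This is why no explicit call to Ninject appears anywhere in your startup code: the activation framework finds these attributes and invokes Start before the application begins handling requests.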

What happens now is quite interesting: when the MVC Framework tries to create an instance of a controller (i.e., when someone makes a request to your application), it looks for a constructor with no parameters. This no longer exists on our controller because we require the ITableStorageRepository<KittehEntity> parameter.  Ninject will now step in for us and say, “Oh, you want something that looks like that? Here’s one I made for you!”

To get that kind of injection love, you need to go into the NinjectWebCommon class and update the RegisterServices method to include this line of code:

kernel.Bind<ITableStorageRepository<KittehEntity>>().To<KittehRepository>();

This simply says, “When someone asks for that interface, give them this concrete class.”

So at this point, the wiring is done, and you can run your app! It will have the exact same functionality and user experience, but it will be much more technically sound.

Notes and Improvements

Just a few things to note:

  • I’ve kept things simple and omitted ViewModels
  • You’d likely want to have a layer between your controller and repository class in most real-world scenarios
  • The repository class should have its dependencies injected as well, namely, the configuration information it needs to connect to Azure. A proper configuration helper class would do the trick and, once registered with Ninject, could also be accepted as a parameter on the constructor
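As a sketch of that last point, the repository could take a small configuration abstraction instead of reading settings itself. The interface and class names here are hypothetical:

```csharp
// A hypothetical configuration abstraction for the repository
public interface IStorageConfig
{
    string ConnectionString { get; }
}

// Reads from the Azure/web configuration, as the repository does today
public class CloudStorageConfig : IStorageConfig
{
    public string ConnectionString
    {
        get { return CloudConfigurationManager.GetSetting("StorageConnectionString"); }
    }
}

public class KittehRepository : ITableStorageRepository<KittehEntity>
{
    private readonly CloudTableClient _client;

    // The config is now injected, so tests can supply a fake implementation
    public KittehRepository(IStorageConfig config)
    {
        var storageAccount = CloudStorageAccount.Parse(config.ConnectionString);
        _client = storageAccount.CreateCloudTableClient();
    }

    // ...remaining members unchanged
}
```

With a corresponding kernel.Bind&lt;IStorageConfig&gt;().To&lt;CloudStorageConfig&gt;() registration, Ninject would satisfy both constructors for you.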

Summary

With the repository in place we can now lighten the load on the controller and more easily implement features with our concerns more clearly separated. In the next couple of posts, we’re going to start allowing the user to manipulate the entities in the table.

Happy coding!

Three Types of Relationships You Need to Survive as a Remote Worker

Getting through your work day on your own is easy enough.  Over the long run, though,  you’re going to need to have some solid relationships in your life to help get you through the rough patches and pick you up from the falls, but more importantly, to be there when it’s time to celebrate the wins.

While these are going to seem obvious at first, I want to make the point that they’re by no means effortless. In fact, some of the closest connections in your life can be the most affected by your choice to have your home serve also as your place of work.

Your Co-Workers

In the movie Office Space, the main character Peter Gibbons laments that his motivation to work, or rather, to “work just hard enough not to get fired,” stems from unhealthy relationships with management. This cannot be your reality as a remote worker, and you need to make efforts to establish (and maintain) trust with your boss and teammates.

I have worked in blended environments: a head office with regular staff, plus many remote workers and many office workers who could elect to work from home. As a permanent remote with very little office time (averaging about a day a month in the office), it was harder to get to know folks, but I knew it was really on me to own that.

Before you start at a company as a remote worker, or before you start working from home, talk to the management and other staff that already work remotely to see what it is like. If the atmosphere supports it, flying solo can be a great experience, but you still need the support of ground control. If you work as the only remote worker, or if management doesn’t trust or understand how productivity can work remotely, it may not be the right time for you to engage in flight.

Fellow Remote Workers

These ones are pretty important, especially in a company where there is a block of folks that work full-time at the office and a group that works remotely. Why are these folks good to know?

Because they get ya. They’re on the Skypes. They’re on the Slack.

They are trying to do the same things that you are doing and likely face the same challenges as you do. They look for solutions and have found tools that help them avoid the pitfalls. You’ve likely had to work through something that they haven’t hit yet; you have it figured out, and it’s great to be able to share that with them.

Don’t be afraid to share your failures or ask for help! Being a good remote worker means mentoring and being mentored by other remote workers so that collectively we can all be really good at it.

People With Absolutely Nothing to do with Work

Ahhh…five o’clock, am I right? It’s that time when you disconnect from work and start to enjoy the more meaningful things in life. Of course, that means that you have to be successful in leaving work in the first place, which can be tricky, but isn’t impossible to do, especially if you’ve put in a good day.

There’s a measure of counter-intuitiveness here that you’ll find. Keeping strong relationships in your own household is actually more about defining and maintaining boundaries during the workday than you might imagine. For example, if your office door is open as you’re working and your spouse, kids or roommates feel free to come and chat, you’re going to be less productive than you’d like to be. Over time, that loss of focus and reduced ability to create tangible outputs are going to start to turn into stress. Allowing the non-work relationships to bleed into your day can be a form of long-term toxin that will erode your success as a remote worker.

Likewise, allowing your work day to bleed into your evenings and weekends will rob you of the best part of your day. Maybe you’re married with kids, engaged, or perhaps single, but the fact holds true regardless of your situation: marginalizing your family and friends to try to improve things at work will yield exactly zero positive results for your personal life. If you don’t agree with that statement, it’s simply because Dr. Seuss got the idea of the Grinch’s heart being two sizes too small from your life story.

There will be exceptions, where some rule-bending and time-bleeding will occur, but your job as a remote is to maintain that balance and be honest with yourself about how you’re doing in that regard.

An Old Adage

I do have to admit that I’m quite fortunate in this regard, as I’ve been able to align with an employer who is very much accustomed to and accepting of remote work. Working remotely, while becoming more common, isn’t yet universally accepted, and I know from personal experience that if the culture isn’t there in the company, it can’t work in the long term.

It has long and often been said that it takes a village to raise a child, and the reality of that statement is that at some point we have to grow up and be part of the village. As a remote worker, you need to remember that community isn’t going to just happen for you. Sure, you may not be raising kids, but the outcomes of your work efforts can only be best realized if you’re able to establish some great relationships along the way.

Rethinking our Practices with the MVC Framework

We get set in our ways, don’t we? It’s funny how, as we get sharper and more confident with our frameworks and the tooling we employ to work with them, we also get a little more attached to our way of doing things. And then along comes a major version change, rife with breaking changes and new bits to twiddle, and we’re left saying, “But, that’s not how we’ve always done it!”

Case in point: service injection into views. In ASP.NET’s MVC Framework 6 we get this new concept which, if we’re going to accept it, requires that we relax our thinking about how we’ve always done things.

My friends Dave Paquette and Simon Timms and I have been rifling through a few of these types of changes, and Simon did a great job of illustrating how we used to get data into our views, and how we might do it in tomorrow’s MVC. For a walkthrough of service injection I highly recommend his article on it.

How does it work? The new inject feature gives us the ability to asynchronously invoke methods on classes that are dynamically created and given to our view. It’s IoC for your UI.

Personally, I’d been wrestling to find a good use case here because we already had ways to do this, and an obvious one (illustrated by Simon) had been missing from my thought stream, likely because it’s been clouded for a few years by ViewBag. In all reality, the idea of using ViewBag, a dynamic object that is double-blind, easily forgotten about and easily polluted, to push bits of data to the view has always kind of bugged me, no less than using filters did, but we didn’t have an elegant, framework-driven mechanism to make it happen more gracefully. We do now.

Also, let’s not confuse things here: more often than not, your ViewModel is going to be the correct place to put your data, and it’s where I’ve put my data for most things – like drop down lists – but this type of feature is exciting because it opens the door to explore new options in building views and experiences for our users.

But, doesn’t it break the design of MVC?

[image: Source: http://www.nv.doe.gov/library/photos/ Sometimes things blow up when you try them out, but you still gotta try.]

Perhaps. Maybe, if you want to say, “The only definition valid for any framework is the original definition.” But we have more tools today to do our job, and in particular, for this case, dependency injection, which has become a first-class citizen in ASP.NET. So, let’s rewind a bit and ask: why is it a bad practice to give a component the pieces it needs to do its work?

Let’s think of the type of problem that we’re trying to solve here, as Simon did in his article: a view needs to populate a dropdown list. It doesn’t need to access the database, and it shouldn’t be able to. It doesn’t need to know a connection string, or whether data is coming from a cache, a web service or otherwise; it just needs the data. Giving it an interface by which to look that up, well, to me that seems like a good idea.

If instead you favor the approach of using the controller to populate the ViewBag or use filters (or other techniques) you inherently introduce coupling to a specific view in the controller by forcing it to look up data to populate a dropdown box. You are still injecting data into the view. In my mind, the controller should know as little as possible about the view. Why should I have to change my controller if I need to change my view?

I want to make a clear distinction here, though, as I do believe the controller answers very specific concerns, namely, those that deal with a particular entity. But the PersonController shouldn’t have to know the list of Canadian Provinces, should it?

Don’t need to know where I’m going, just need to know where I’ve been

The assumption that the controller provides everything the view needs is guided by past precedent. It was true in MVC 5 and earlier because it was what we had to work with. My point is that in MVC 6 we now have a construct that allows:

  • Separation of concerns/single responsibility
  • Testability
  • Type safety
  • Injectable dependencies

In my mind, the controller is just a component. So is the view. The controller’s concerns are related to the entity in question. The view is required to render correct UI such that a form can be filled out in a way that satisfies the requirements of the view model. Again, why use a person controller to retrieve details about countries and states?
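As a concrete sketch, the service injection Simon describes looks roughly like this in a Razor view. The directive syntax was still shifting between beta builds, so treat this as illustrative, and note that IProvinceService is a made-up interface registered with the IoC container:

```cshtml
@inject IProvinceService Provinces

<select name="Province">
    @foreach (var province in Provinces.GetAll())
    {
        <option>@province</option>
    }
</select>
```

The view asks for the abstraction it needs; neither the controller nor the ViewBag is involved in getting the province list to the dropdown.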

I don’t see controllers as any more important than any other component. They have things they need, and they should have those things injected. My controllers don’t talk to the database, they talk to command objects and query objects via interface and those are injected from an IoC container.

I think now, with views as first-class components, that we can look at views in the same way.

But what about ViewBag?

With ViewBag (and filters) we have a problem that we’re not really talking about in the best interest of not upsetting anyone. The fact that my controller has to do the lifting for the combo boxes is awkward and doesn’t really help us out too much with maintaining SRP. But we didn’t previously have a good way to address this.

We also tend to overlook the fact that Views are effectively code. Why can’t our principles apply to them as well? Of course I shouldn’t access the database from the view, but why can’t I know about an interface that does (and have it injected)?

This is a great use case of this new feature, and one that demonstrates that “not changing for the sake of not changing” isn’t a good mantra. If my view code/class/script is responsible for rendering the view, I see no problem injecting into it the things it needs to do so.

After all, isn’t that what you’re doing with ViewBag? Just injecting things into the view through a dynamic? Except with ViewBag the compiler can’t see the type problems and everyone has to cast. Now we’ve got run-time errors.
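To see that failure mode in isolation, here’s a small self-contained sketch using a plain dynamic object in place of ViewBag; the property name is made up for illustration:

```csharp
using System;
using System.Dynamic;

class Program
{
    static void Main()
    {
        dynamic viewBag = new ExpandoObject();

        // The controller stores the value as a string by mistake;
        // the compiler has no way to flag it.
        viewBag.ItemCount = "42";

        try
        {
            // The view's cast only fails when the page is actually rendered
            int count = (int)viewBag.ItemCount;
            Console.WriteLine(count);
        }
        catch (Exception ex)
        {
            // A RuntimeBinderException: cannot convert 'string' to 'int'
            Console.WriteLine("Run-time error: " + ex.GetType().Name);
        }
    }
}
```

A strongly typed injected service would have turned this into a compile-time error instead.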

There is the argument that says that even if we’re abstracting away the data access, we’re introducing the ability for the view to call the database. Again, I don’t think the view is any less important a component in the scheme of things, and there is a level of appropriateness with which we must use the feature. Will it be abused? Likely. You don’t want to be injecting database change-capable components into the view, but that is more a case of bad choices in implementation. You can completely destroy the maintainability of a project and wreak havoc on your users with service injection, but that doesn’t mean you should avoid it. I’ve seen people write 1,000 lines of code in a method, but that doesn’t mean I don’t use methods any more.

When changes come to frameworks, I think it’s okay to rethink our best practices. Taking Simon’s approach we have:

  • Interface-based injection
  • Abstraction from the underlying data access strategy (db, cache, text file, whatever)
  • Testable components
  • Maintaining SRP in our controller and view
  • No casting from dynamic to proper types

I’m okay with this approach and will be using it in MVC 6 projects.

I highly encourage you to do your own reading on this and explore the feature in greater detail. Here are a few links for your consideration.

Image credit: http://www.nv.doe.gov/library/photos/

Project K, dnvm, dnx, & dnu and Entity Framework 7 (for bonus points)

Things, they are a-changing!

If you’ve played with different versions of ASP.NET 5 and MVC 6 along the way and have recently updated to the RC build of Visual Studio 2015, you’ll likely have noticed a few changes.  Just a couple.

I found that I’ve been mostly able to survive the transition with a few questions around runtimes and where things have moved, but not all the bits are obvious. Here’s a list of some things that you’ll likely want to know as you navigate the M-V-Seas. (See what I did there?)

A big thanks goes out to fellow Canadian Simon Timms and beach bum Dave Paquette for stumbling through these bits with and for me. #experts

Renaming and Reorganization

There used to be two separate commands for language services and for running/booting up a site or application. These commands (k and klr) have both been merged with the runtime environment and are all now part of dnx, a.k.a. the .NET Execution Environment.

The version manager, which was previously the kvm script (ps1 or sh, depending on environment), is now a command line utility called dnvm, the .NET Version Manager.

Finally, a new utility has been added that replaces the previous dependencies manager with a few enhancements (such as command hoisting from your project.json). The new name is dnu (the .NET Development Utility), and you use it to restore or manage packages, create NuGet packages, or publish your project.

So, in summary:

k, klr, kre  => dnx
kvm => dnvm
kpm => dnu

So You Want to Run a Migration?

We used to use the Package Manager Console in Visual Studio to do our migrations work; however, that is not currently supported in VS 2015. I imagine that this will continue to improve, but there is still a delta from the way we used to do things. Today, we’re going to do things a little differently: in a properly prepared console, type the following in your project directory (not your solution directory):

dnx . ef [options] [command]

This command tells the .Net execution environment to use the current directory and to run the ef command. From there you could type migration or whatever else you’re looking for. Leaving the options and command out, for instance, gives you the magic unicorn of awesome.
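For example, adding and then applying a migration looked roughly like this at the time of the beta builds. The verbs moved around between betas (later builds renamed some of them), so run dnx . ef with no arguments to see the list your version supports; the migration name here is just a placeholder:

```shell
# Scaffold a new migration from the current model
dnx . ef migration add InitialSchema

# Apply pending migrations to the database
dnx . ef migration apply
```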

Wait, That Didn’t Work?

First of all, you’ll need to make sure that the tooling for the dn* utilities is on your PATH. There need to be environment variables set up to point you to the correct runtimes, or rather, the runtimes you’re currently targeting. You can see all the runtime versions you have installed by typing:

dnvm list

Typically, you’ll see two different runtimes (clr and coreclr) for each architecture (x64, x86), and you’ll see each of those for each version you have installed.

The “correct” version for your purposes may be a moving target, so make sure you have a runtime and version that works with the version of EF you have. If you’re not sure (or you thought you were sure but things aren’t working) take to Jabbr and ask for a hand (they are great there).

Next, your solution and/or project will have to have the correct references to EF. Edit your project.json to have the following dependencies and commands (you can do this in VS or Notepad or whatever tool you like, just save the file when you’re done):

"dependencies": {
  "EntityFramework.SqlServer": "7.0.0-beta4",
  "EntityFramework.Commands": "7.0.0-beta4"
},

"commands": {
  "ef": "EntityFramework.Commands"
},

Almost there. Now we need to restore those packages locally so that you can use the EF tooling. To do that, we’re going to use the following command from the solution directory:

dnu restore

Voila! You should be good to go! Navigate to your project directory and hack away at your migrations.

Keeping Up to Date

So, go grab Visual Studio 2015! If you run into trouble, there is a wealth of information out there (albeit much of it is quickly becoming outdated or conflicting). As I already mentioned, Jabbr is a great place to ask questions, as are Twitter and Stack Overflow. Brice Lambson periodically posts updates on his blog. I have found that the documentation for ASP.NET 5 has also been kept fairly up-to-date, which you can read here.

Happy coding!