Now, if I could I'd pinch myself to make sure I'm not a robot, but I know very well that if I'm smart enough to think of that, they must have also programmed a sense of touch and pain into me as well. So testing to see if a website user is a robot is going to be even more daunting, because we can't even pinch them.
Thankfully, the reCaptcha service offered by Google is a free add-on to your site that will help keep bad data out of your site, prevent malicious users from gaining access to your resources, and spare you the unwanted side effects of bots piling up junk data through your forms.
Read on to see how to get this all wired up in a Razor Pages application in ASP.NET Core. Heck, if you are in an MVC app or are building a Web API (or Azure Function) this would all still serve useful!
Here’s how it works: actually, we don’t know. Google holds their cards pretty close. The thing is, the more anyone knows about the service, the easier it is for the bad peeps to figure out a way to bypass it. So, as far as the actual “non-robot” side of things goes, we’re going to leave that part to the perfectly capable engineers at Google.
However, we can certainly use the service without having to know its technical innards. The important part is that some client-side code will help us generate a token through the reCaptcha service using a client key that is only valid for the domains we specify. Then, on our server, we can verify that the token was approved by reCaptcha using a different, private key that we and only we know and can use to validate the token.
And those are the key principles: a client-side token and a back-end verification of said token.
First, pop over to reCaptcha, sign in and go to the Admin Console. From there you can create or manage sites. I chose the “invisible” implementation because it’s fairly non-invasive but it is still able to provide a great level of protection to my site.
There are actually pretty good docs available once you generate your keys, but they are implementation agnostic, so we'll have a look at how to get things going in a Razor Pages application in the ASP.NET space.
We don't want to put our secrets into source control lest the directory be public or otherwise exposed. I like to create a placeholder in my appsettings.json file for the data like so, leaving the values empty:

```json
"Captcha": {
  "ClientKey": "",
  "ServerKey": ""
}
```
Now, whenever someone on my team (or me, on a different computer) pulls down the source code it’s easy to copy and paste the configuration section into my user secrets, which are never added to the repo.
Now, right-click on your project in Visual Studio and choose “Manage User Secrets”, then copy and paste the above into the root of the JSON document. Fill in the keys with the secrets from your Google configuration.
It's also a good idea at this time to update your staging and production environments, or any build automation steps or key stores where you would need these settings. Remember that the end result is a key-value pair, so the JSON nesting should be removed before you set a key somewhere, and the key should be the composite of all property names in the path, joined with colons. What I mean by that is that our settings above will be Captcha:ClientKey and Captcha:ServerKey when you add them to your other environments.
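If you prefer the command line, the user secrets tooling will do the same job; assuming your project already has a UserSecretsId (Visual Studio adds one when you use "Manage User Secrets"), something like the following sets the pair, with placeholder values for obvious reasons:

```
dotnet user-secrets set "Captcha:ClientKey" "<your-client-key>"
dotnet user-secrets set "Captcha:ServerKey" "<your-server-key>"
```

One related note for hosted environments: when settings come in as environment variables, a double underscore stands in for the colon, as in Captcha__ClientKey.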
The second side of the configuration is the ability to work with the data in a POCO. We create a class for this so that we can take advantage of the options pattern. The class looks like this:
```csharp
public class CaptchaSettings
{
    public string ClientKey { get; set; }
    public string ServerKey { get; set; }
}
```

Two simple properties in a class. Easy-peasy.
Here is the CaptchaVerificationService class I've created to handle the verifications. I have chosen to return a false result by default in the event of a service verification failure or other communication exception, but you can choose a default that best suits your needs.

```csharp
// requires Microsoft.Extensions.Options, System.Net.Http and Newtonsoft.Json.Linq
public class CaptchaVerificationService
{
    private const string VerificationUrl = "https://www.google.com/recaptcha/api/siteverify";

    private static readonly HttpClient _httpClient = new HttpClient();
    private readonly CaptchaSettings _settings;

    public CaptchaVerificationService(IOptions<CaptchaSettings> options)
    {
        _settings = options.Value;
    }

    // exposed so our page models can hand the public key to the view
    public string ClientKey => _settings.ClientKey;

    public async Task<bool> IsCaptchaValidAsync(string token)
    {
        try
        {
            var payload = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["secret"] = _settings.ServerKey,
                ["response"] = token
            });

            var response = await _httpClient.PostAsync(VerificationUrl, payload);
            var json = await response.Content.ReadAsStringAsync();

            // the siteverify endpoint replies with JSON carrying a boolean "success"
            return JObject.Parse(json).Value<bool>("success");
        }
        catch
        {
            // fail closed on any communication or parsing exception
            return false;
        }
    }
}
```
Next up, head over to your Startup class and pop into your ConfigureServices method to add these two lines:

```csharp
services.AddOptions<CaptchaSettings>().BindConfiguration("Captcha");
services.AddTransient<CaptchaVerificationService>(); // any lifetime works; transient keeps it simple
```
The first line pulls the configuration from your key store, configuration file or user secrets, depending on your environment. The second just makes our little verification class available for dependency injection.
Our view will have to be updated to integrate some code from the sample on the reCaptcha site. We'll include the script from Google, and add a callback that submits our form. This code is in my cshtml view containing the form where I want the captcha to appear.

```html
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
<script>
    // invoked by reCaptcha once a token has been issued;
    // "signup-form" stands in for whatever id your form element carries
    function onSubmit(token) {
        document.getElementById("signup-form").submit();
    }
</script>
```
Next, we update the submit button in the view to include a few properties that help the script understand what kind of captcha we're generating and to specify the callback.

```html
<button class="g-recaptcha" data-sitekey="@Model.CaptchaClientKey" data-callback='onSubmit'>Keep me posted!</button>
```
You'll notice the client key in there…this is why we took advantage of the IOptions bits and exposed it through the service. It's part of the page model for simplicity, and we just load it up on the get request in the next step.
That was a captcha pun, in case you missed it.
Anyway, the last step is to add a bit of code to our page's .cs file. Let's start with the class-level field for the service reference and the property we expose through our page model.

```csharp
private readonly CaptchaVerificationService verificationService;

public string CaptchaClientKey { get; set; }
```
We're also going to need to capture the token from our form on the way back from the view to the server. The field is named in a non-CLR way, so we use the Name property on our binding to tie it to the way Google names the token in the client script.

```csharp
// the property name itself is up to you; the binding name is what matters
[BindProperty(Name = "g-recaptcha-response")]
public string CaptchaResponse { get; set; }
```
Our constructor also needs some massaging. My page is called Index, so appropriately my class is called IndexModel.

```csharp
public IndexModel(CaptchaVerificationService verificationService)
{
    this.verificationService = verificationService;
}

public void OnGet()
{
    // expose the client key to the view on the get request
    CaptchaClientKey = verificationService.ClientKey;
}
```
And, finally, our OnPost method needs to include the service check before proceeding with any data processing.

```csharp
public async Task OnPost()
{
    var isHuman = await verificationService.IsCaptchaValidAsync(CaptchaResponse);
    if (!isHuman)
    {
        // bail out (or add a model error) before touching any data
        return;
    }

    // ...carry on with the usual data processing
}
```
Stitching together all of the parts takes a bit of work on the first pass, but once the config and service are in place, it literally only takes a couple of minutes to wire up your views (pages, controllers, etc.). reCaptcha is a pretty slick addition to your site that can help prevent script kiddies, bots and purveyors of evil from fluffing with your data. Because no one likes getting their data fluffed.
Happy coding!
If you're mapping a property to a decimal column, you may run into problems with truncation happening silently to you behind the scenes. With a default precision of 18 digits and 2 decimal places you may lose some data, and the framework/runtime won't complain about it when you do. You can fix that through data annotations or with an explicit wire-up in your database context. Let's look at how that's done.
Let’s say you create an entity type that looks something like the following:
```csharp
public class Part
{
    // Id and Name are stand-ins for whatever your entity carries
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Size { get; set; }
}
```
If you're dealing with dollars and cents, an underlying field that records to the penny may be enough. In other scenarios you may be looking for more precision than 1/100. When you attempt to create a migration for the entity based on the class above, you'll get a warning that looks like this:
No type was specified for the decimal column ‘Size’ on entity type ‘Part’. This will cause values to be silently truncated if they do not fit in the default precision and scale. Explicitly specify the SQL server column type that can accommodate all the values using ‘HasColumnType()’.
I find that most projects I'm on are already using data annotations. They are handy for model validation in our APIs (or UIs in ASP.NET) and are rather unobtrusive. They are also great at revealing rules for the underlying data structure that we have in our project without having to dig into SQL Server. Said another way, an annotation shows a developer what the data is about and any constraints that exist in the database. They are concise and work great when the rules are straightforward.
Specifying the precision with an annotation is pretty straightforward, as is seen with Part.Size below.

```csharp
public class Part
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Column(TypeName = "decimal(18,4)")]
    public decimal Size { get; set; }
}
```
This will store 18 digits in the database with 4 of those after the decimal.
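To make the loss concrete, here's a quick sketch of what a round trip can look like (the context and values are made up for illustration):

```csharp
var part = new Part { Name = "Widget", Size = 0.12346m };
db.Parts.Add(part);
db.SaveChanges();

// reloaded under the default decimal(18,2) mapping, Size comes back as 0.12
// and nothing warns you at runtime; mapped as decimal(18,4) it survives
// as 0.1235 (SQL Server rounds to the declared scale)
```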
Intellisense will prompt you to add the System.ComponentModel.DataAnnotations.Schema namespace to your class if you don't already have it.
Your database context class can use the protected override void OnModelCreating method with a ModelBuilder parameter to further configure your database instance (see Microsoft Docs). In our case, we'll use the ModelBuilder to specify the type to use in SQL Server as well as the precision we wish to use.

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Part>()
        .Property(p => p.Size)
        .HasColumnType("decimal(18,4)");
}
```
The ModelBuilder is a good option, and sometimes the only option when trying to do more complicated configuration. I will sometimes add a logging statement (for the development environment) in my OnModelCreating in a project that also uses data annotations, because it at least stands a chance of being seen by a developer in the event that unexpected behaviour is being observed. Another option would be to leave a comment on any entity that is configured via OnModelCreating so that you're not leaving future versions of yourself scratching your head.
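That breadcrumb can be as simple as a note on the entity itself; something along these lines:

```csharp
public class Part
{
    public int Id { get; set; }
    public string Name { get; set; }

    // NOTE: the SQL column type and precision for Size are configured in
    // OnModelCreating, not with an annotation; look in the DbContext.
    public decimal Size { get; set; }
}
```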
Happy coding!
Recently, an attempt to publish my project was stopped short by this message: "Publish has encountered an error. Build failed. Check the Output window for more details."
Upon further examination, I noticed that there was a problem with an msbuild task, namely PrepublishScript.
The error looked something like the following:
1 | Target "PrepublishScript" in project "C:\james\code\project.csproj" (target "PrepareForPublish" depends on it): |
This made sense; after all, I had just deleted my bower.json and moved package management over to npm.
To clean this up, I simply had to unload the project and edit the csproj file directly. Tracking down the PrepublishScript target, I was able to locate the line that was causing me grief:
```xml
<Target Name="PrepublishScript" BeforeTargets="PrepareForPublish">
  <!-- the offending line; your exact bower command may differ -->
  <Exec Command="bower install" />
</Target>
```
Removing the exec line for bower did the trick, and I was able to resume publishing.
As a final step, I had to reload my project and reset my startup project configuration, which Visual Studio loses when you unload a project in the mix.
Over time the templates for projects evolve, and they may or may not leave artifacts behind, such as a build step for a now-antiquated package manager. You may see other errors such as this in your travels, especially on projects created "in transition" as ASP.NET Core came about. Don't be afraid to create a branch and play with that csproj file if needed when you run into these kinds of things; you can always roll back your changes or abandon the branch if things go south.
Happy coding!
Knowing quantitatively where your users congregate (and wish to interact) is the best way to understand how you can start to piece together your bot, but you don't have to have all the answers right away.
Your users will be approaching your business from different mediums all the time. Some users might use Facebook, others Skype, others still SMS. A channel in the Microsoft Bot Framework is a funnel through which your users will communicate with your bot code.
The problem, should you try to write a bot on your own, is that nearly every single source of user interaction out there will define its own standard for bot interaction. These are likely going to be JSON of some kind, with an endpoint that you can post data to and a corresponding webhook that you'll need to write to catch responses and other input. They will require some form of identification. They will almost certainly have a custom payload that is unique to their platform, and distinct, often unique ways to render information to the end user.
The framework does a pretty good job of aggregating that data into a common set of objects that are easy to code against. This makes it a little easier to write shared services and renderers that will help you to craft a unique experience.
The supported channels include Facebook Messenger, Skype, Slack, Telegram, Kik, SMS (via Twilio), email, web chat and the Direct Line protocol, among others.
The Direct Line protocol also allows you to create a custom client application and integrate with the Bot Framework using a fairly straightforward control in a number of different scenarios, with or without front-end frameworks like React.
Note that each channel will give you certain capabilities that may not be offered in other channels, including access to specific user data or the way that you can render output and controls to interact with your bot. You need to take these into consideration so that you can manage your output and flow appropriately on each channel.
In conversations with other bot writers over the last little while, the question keeps coming up of how they chose the channels they use in their bots. There is no magic recipe for which channels to use for your organization, but in my case the primary channels will be SMS and Facebook, and later our web site.
For you, the choice will likely come down to the way your users already behave. For one colleague, this meant using Skype as an internal knowledge bot. For another, it was a combination of SMS and web.
Here's why we chose to move ahead with our selection of SMS and Facebook: that's where our customers were already talking to us, and the metrics backed it up. These reasons likely won't be right for you, but it may be worth starting to record some metrics to help you make a choice. Knowing that 81.4% of conversations on Facebook were about store hours, location and product info meant that we could make a safe assumption about how to best serve our customers.
You may find that your users have different objectives based on their channel, or perhaps the channels are so different that they can't possibly deliver a consistent experience. This was the case with BakeBot. SMS users (plain text on phones) and Facebook users (rich, visual experiences with multiple options for display in browser or in a dedicated messaging app) have very different expectations.
We decided to start those conversations off very differently, so we have multiple RootDialog classes that allow us to model those experiences appropriately for the channel in question. This allows me to approach the handoff to the dialog in an explicit way; the dialog and helper names in the sketch below are illustrative, but the shape of the check is the point. While not displayed below, I also have the ability to check and act on channel-specific data (like Facebook's "Get Started" command) which we'll look at in a different post.

```csharp
if (message.ChannelId == Channels.Facebook)
{
    // rich, visual experience for Messenger users
    await Conversation.SendAsync(message, () => new FacebookRootDialog());
}
else if (message.ChannelId == Channels.Sms)
{
    // plain-text experience for texters
    await Conversation.SendAsync(message, () => new SmsRootDialog());
}
else if (message.ChannelId == Channels.Emulator || message.ChannelId == Channels.Webchat)
{
    // channels I'm still okay with get the default experience
    await Conversation.SendAsync(message, () => new RootDialog());
}
else
{
    // anything else gets a generic reply, and a human is notified to follow up
    await ReplyWithGenericMessageAsync(message);
    await NotifyHumanAsync(message);
}
```
You’ll notice that I have a fallback to channels I’m still okay with (namely the emulator and the web chat channels) and then finally I reply with a generic message and continue on with a notification to a human.
Okay, next up we’re going to hop into some bot code, so make sure you have your pre-requisites in place and get ready to start bot-making.
Happy coding!
This article is part of an ongoing series on the Microsoft Bot Framework.
Before we dive into building a bot, let's start by getting on the same page of understanding and filling in some background details. If nothing else, this will give you visibility into why I've approached things the way I have.
This article is part of an ongoing series on the Microsoft Bot Framework.
Over the years I've made several stabs at building some type of command-and-answer service. Early on I wrote text games similar to "Mad Libs". I've written auto-responders for email clients and console applications that allow users to choose from menus. I have built context menus in WinForms applications, dropdowns on web pages and multi-view apps in mobile toolkits. I've written scripts that accept various commands, options and parameters that in turn pass on some type of payload to a service across the wire. I've built Web APIs that accept images and endpoints that process files to produce custom HTTP response messages as required for integration.
If you’ve shared in any of these adventures, I have good news, friends: just like me, you’ve been training to write bots.
“This is going to be awesome.” - me, giggling in the Cognitive Services AI Immersion Workshop at //Build 2017
The further I dive into the Bot Framework and the other complementary services to support it, the more I feel at home. The types of things I have to do to build a bot feel natural, like things that I've been doing for years. The SDKs are well thought out and improving. Lessons learned in other areas of SDK and API design have been applied and are evolving and being tailored to meet the specific needs of interacting with people using conversation. And the cloud enables some very interesting capabilities without the effort that would have been required just a few short years ago.
So, yeah…this is going to be awesome.
A bot is not the arrival of sentient beings. A bot itself is not an implementation of a neural network or the birth of a T-1000. You’ll hear repeatedly that as a bot developer, your job is not to pass the Turing test or even allow users free-form interaction.
From a technology perspective, a bot is nothing more than an API endpoint that accepts data and returns a response. The messages sent to, received from and proxied through the Microsoft Bot Framework are JSON, which makes them human-readable and easy to log and debug.
From the view of a delivered product, a bot puts some tools in place to allow your customers or staff to access business intelligence or issue commands relevant to your business’ needs. It will interpret what the user is saying, look data up in your data stores, make decisions through logic you create, perform tasks on behalf of your users or the system and ultimately help a user accomplish a task.
You will be responsible for hosting the bot in some way. Because it’s just a service that you likely already understand, you can stuff it on an existing web server or ship it out to the cloud. And because bots can leverage other cloud and programming features you’ll find that you’re already well-equipped with an arsenal that will help you build some pretty powerful experiences.
I am not going to go too deep into the tech here. There are some great resources out there that do so, but the important bits for a developer are what I refer to as the “lifecycles” of the elements of which the bots are composed. There are also many ways to build and host bots, so I’ll assume you’re going to go with the Visual Studio experience, with configuration and deployment destined for Azure. As a ‘dotnet’ developer I am also going to focus my efforts in the c# domain, though there are kits available for other languages including node and Java.
A bot is typically composed of a user interface (the channels through which your users arrive), a web service endpoint that receives the messages, and the dialogs and supporting services that shape the conversation.
The entry point for any bot is the user interface (there is an exception to this which we'll cover in detail, but let's start here). The UI is something that you can write yourself (such as your web page) but it is just as likely, or more so, that your user will come in through UI that you didn't write. These are called channels, and they exist in the form of Facebook Messenger, SMS, Skype and so forth. And herein lies the biggest value proposition of the Bot Framework: the abstraction over those channels.
User interaction is configured for each channel in the Azure portal. You will need to register your application with each of the respective third parties (such as creating an application in Facebook) and then set up application IDs, webhooks or other aspects to establish channel communication.
In the context of the Microsoft Bot Framework, the bot itself is a Web API project that accepts HTTP POST requests (note that the V3 API is written on .NET full framework, whereas the upcoming V4 is written for .NET Core).
The incoming message from a channel - the user input - comes across the wire as JSON and goes through the model binding process as would any payload in your MVC or Web API project. There are pre-defined types provided by the framework to help make this fairly painless.
In your API controller you are responsible for inspecting the type of activity that was sent from the channel and pushing the message to the appropriate code. This may mean that you've received text from a user, that someone has liked a response your bot sent, or a signal that a user has joined a conversation.
In the event of a non-system message, you’re likely going to send the activity on to the Bot Framework bits. The framework takes care of figuring out state, propagating channel-specific data to your code and helping you restore conversational state. Usually, this will mean something like passing your message through the framework to a dialog class. The purpose of each dialog is to segregate related bits of code that would help to form a user experience. The typical pattern is to create a “root” dialog from which all conversations can be managed. From the root, you guide users by using suggestions or buttons so that they can move in a direction that helps them find success.
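Put together, the entry point tends to look like the stock Web API controller below; a minimal sketch in the V3 style, with RootDialog standing in for whatever your root dialog is called:

```csharp
[BotAuthentication]
public class MessagesController : ApiController
{
    public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        if (activity.Type == ActivityTypes.Message)
        {
            // hand the message to the framework, which restores conversational
            // state and routes the activity into our root dialog
            await Conversation.SendAsync(activity, () => new RootDialog());
        }
        else
        {
            // system activities land here: members joining, likes and the rest
        }

        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
```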
So, this is where it gets interesting (and more than a little bit complicated). How do you keep a user moving toward their goal? What do you do when they say something you didn't anticipate? When should a human step in?
These are just a few of many great questions you're going to have to think through, as I'm currently doing, and hopefully I can add some insight to your thoughts along the way. (And, hey! Please add your insights along the way as well!)
While your job is to create an experience (dialog) or set of experiences (dialogs) that will help the user get at what they need to, some users are hell bent on failure. Well, at least that’s what it looks like when you’re reviewing the logs.
when r u open and is there chocolate pie. it’s for Wedsnday. it’s my aunts birthday and i haven’t seen her in a long time and I think she likes it. or we could have a cake if you have some - Actual Facebook user message
In an upcoming post I’ll share a lot more on this, but just know that you can’t solve for “user” quite yet. It really comes down to making sure you guide the user, and we’ll look at that in greater detail as we go.
As a primer, you're likely going to need to install a few things, or at least update some bits on your machine; the Bot Framework SDK, the Bot Framework Emulator and a current Visual Studio will cover most of it.
Okay…I'll try to keep this content coming as fast as I work through it. Feel free to give any feedback you have along the way as we learn together.
Happy coding!
This article is part of an ongoing series on the Microsoft Bot Framework.
In this blog series we'll walk through the concepts, design and code required to build a bot using the Microsoft Bot Framework. We'll leverage several Azure services to add value to the conversation with our customers by using data we've already got in our organization.
This article is part of an ongoing series on the Microsoft Bot Framework.
The ability for developers to create engaging, believable - and more importantly, useful - bots has been evolving rapidly over the last few years. Today, we have access to some incredible services and frameworks that allow us to rapidly build bots we can leverage to fill business needs.
The surface area that bots can reach has grown, too, with every major messaging platform providing integration capabilities for bot frameworks to interact with users. From Facebook to Skype to SMS and beyond, the number of users that can access your bot using paradigms they already understand is extensive and growing both locally and globally.
We’re reaching a new level of artificial pseudo-sentience where the “conversational UI” will emerge as a common pattern for customers to interact with companies, large and small alike. At Chez Angela, our bakery is using all kinds of new features of Azure to make our company successful and help our customers get at the information they need. We’re excited about what BakeBot can help us accomplish, from ordering to scheduling to answering common questions about allergies, ingredients and product availability.
| Posts In This Bot Series |
|---|
| Day 1: What a Bot Is (And What It Is Not) |
| Day 2: Choosing a Channel |
| Next up: QnA Maker & Your First Bot |
Is there a topic you’d like to see that’s not in the mix? Please leave a comment below!
Happy coding!
My role in the bakery is largely on the technology side, and I'm building everything to do with our "bakery platform" on Azure and related technologies. My wife, of course, is the digital to analog converter. :)
This blog series will cover all the tech that I use and build on. In this post, I'm going to lay out the feature set for everything that I'll be building along the way, and which technologies I intend to (and actually do) use to implement them.
This article is part of an ongoing series called Building a Cloud-Based Bakery.
Over the next several months, new blog entries based on the code that helps to build out the following features will be linked below. I hope you enjoy these entries as we build our cloud-based bakery, and that they help you in your coding efforts.
| Features and Services | Technologies Used | Related Posts |
|---|---|---|
| Order Processing | Azure Functions, Square Payments, Azure Service Bus | |
| SMS/Facebook Messenger Bot | LUIS, QnA Maker, Web API, Bot Framework | |
| Automated Answers | QnA Maker, Bot Framework | |
| Web Site Features | ASP.NET Core, Tag Helpers, Azure Web Apps | |
| Suggesting Products | LUIS, Document DB, Bot Framework | |
| Answering Questions About Product Ingredients | LUIS, EF, Azure SQL, Bot Framework | |
| Communicating with Customers | ASP.NET Core, SendGrid, Twilio, Azure Functions | |
| Subscription Management | .NET Core, SendGrid, EF, Square Payments | |
| Product Reminders | Bot Framework, EF, Azure Functions, Service Bus | |
| Protecting Data | SSL Certs, Azure Web Apps, Middleware | |
| Identity and Profile | ASP.NET Core, Bot Framework | |
| Protecting Application Secrets | User Secrets, Visual Studio, dotnet, Azure Web Apps | |
The code for much of the bakery will be freely available on GitHub, though some aspects of the code will be kept private to guard the logic that helps to set us apart. When I have the codebase ready, I’ll update this post to get the code out there!
Happy Coding!
Ya know what is really embarrassing? Missing a meeting with a group of people you really like working with. Don't be a James. Get your Gmail notifications on your desktop and be on time for those important, team-bonding sessions.
I recently joined the team at Inventive and I’m so geeked to be working with some pretty incredible folks. There is this great combination and balance of humor, humility and “get sh!t done” that is really pushing this old dog to keep up. I love it.
Aaaaaaand then I go and miss two standups in my first week. Good grief, Charlie Brown. I’m thankful that “patience” and “grace” are also in this culture.
Inventive uses the Google apps suite of tools, which is new-ish to me for a complete workflow. I've only really used it once before, and the calendar app and the way they handle meeting notifications have changed. It used to be that Chrome would "pop over" what you were working on with the in-browser alerts. It was more than mildly annoying, but at the same time it kept me on top of my meetings.
But, no excuses. New tooling means new learning, so I had to dig around the new Calendar experience to find the setting I was looking for. I found an older post out there which pointed me in the right direction, but the new bits look like, well, different bits, so here I share.
The first step is simply to drill into your calendar settings, so pop over to that app and open that up.
Scroll down to the Event Settings under the General header, and select “Desktop Notifications” from the “Notifications” option.
Now, close and restart your browser, then head back to your calendar. Chrome will prompt you to allow desktop notifications. If you say yes (why wouldn’t you at this point?) then you should see the following.
There. No more missed standups. The only catch seems to be that when you’re using multiple profiles you’re going to have to open Chrome up with the profile for which you’d like notifications enabled at least once after startup, but this is something you’re likely doing to get your mail and whatnot anyway. Hope this helps!
Happy coding!
If Visual Studio has greeted you with "Duplicate 'Compile' items were included. The .NET SDK includes 'Compile' items from your project directory by default.", thankfully, fixing this is pretty straightforward.
I’m not sure what series of events led you to getting this error message, but for my project it happened when I was renaming a file in Visual Studio 2017. In my case, I had a file that contained a couple of classes that I intended to break out into separate files. None of the classes assumed the name of the file. My file was unsaved and I attempted to rename the file in the IDE to the name of one of the classes.
The full error message is as follows:
Duplicate ‘Compile’ items were included. The .NET SDK includes ‘Compile’ items from your project directory by default. You can either remove these items from your project file, or set the ‘EnableDefaultCompileItems’ property to ‘false’ if you want to explicitly include them in your project file.
After the IDE makes the change, I start getting the error message, along with all kinds of 'ambiguous reference' errors throughout my code.
The error message is off-putting because it is worded in such a way that it sounds like you've done something wrong. In my case, I hadn't touched my project file, so what was causing it to contain invalid sections? The rename process in Visual Studio must include some staging steps where it tries its best to keep track of the in-progress changes, but it apparently doesn't degrade gracefully.
Of course, if you’ve been meddling with the project file on your own, then you can’t pin it on Visual Studio ;)
Reading the error message we can understand that the project file contains instructions to include a file explicitly in a ‘Compile’ section that is not required, because the file in question is already contained as part of the project path.
In the Solution Explorer, right-click on your project and select "Edit <your project>.csproj" from the context menu.
The full text of the error message will tell you the file name that is problematic, and it will be in a section that looks something like the following:
```xml
<ItemGroup>
  <!-- the file name here will be whichever file was renamed -->
  <Compile Include="MyRenamedClass.cs" />
</ItemGroup>
```
It is safe to remove this ‘Compile’ node, along with the wrapping ‘ItemGroup’, provided the file is indeed in your path. Before nuking the node, create a commit in your source control software so that you can roll back if need be.
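For reference, a healthy SDK-style project file doesn't need any explicit Compile entries at all; a minimal sketch (the target framework shown is just an example) looks like this:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <!-- no Compile items needed: the SDK globs **/*.cs from the project directory -->
</Project>
```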
Happy Coding!
Okay, community, you've responded to the call. Now it's time to take this to the next level.
AllReady is a production-ready application. We have real-world tenure and work with national level organizations to help organize volunteers. Through our software, we can empower communities to best use their limited and valuable resources to prepare for disasters.
Heroes are found amongst those who respond to disasters and those who help us prepare for them. However, preparedness efforts often go unreported. AllReady brings visibility and participation to the preparedness campaigns of the humanitarian organizations who work every day to reduce and ideally remove the impact of disasters big and small. - Tony Surma, Co-Founder and Vice President of HTBox
Be it in the recesses of their own space, crammed in a room with other developers or even through a virtual code-a-thon, the development community has embraced AllReady and the Humanitarian Toolbox cause.
Originally conceived as a “technical demo with purpose”, AllReady has grown into an application with a healthy set of features.
AllReady is designed to deliver preparedness services by replacing pen and paper with web and mobile apps. Our volunteers will be using AllReady to organize the installation of thousands of free smoke alarms. Traditionally, our focus has been on the heroic act of disaster response but in understanding that the mission of the American Red Cross is to alleviate suffering, isn't it equally noble to empower volunteers everywhere to try and prevent that suffering in the first place? - Jim McGowan, American Red Cross
We’ve been through a couple of iterations, seen great feedback from different groups and skill levels and still managed to make significant progress entirely by the grace of our incredible volunteers.
We've done our best to implement architecture worthy of production, automated build, test and deployment processes, and rolled with all the punches the changing world of .NET Core has thrown at us. And, when need be, we even get contributions from the ASP.NET team itself.
So far, we've encouraged developers of all skill levels to join the cause. We've mentored folks along the way, introduced them to git and GitHub and, of course, ASP.NET Core. We've seen stars rise and get their Microsoft MVP based on their contributions and other community involvement (congrats to you, Steve!).
But where we stand now, the game is changing a little.
We’ve put a lot of thought into taking the next steps of this project. It’s clear that the time has come for us to put the call out to senior developers who can make big impacts on short cycles. However, we don’t want to exclude anyone and we will continue to support anyone who wishes to contribute, maintaining “up for grabs” issues and giving feedback through pull requests.
Up next on this project is some heavy lifting. We need folks to take our I/O bound code and move it into Azure functions. There is some refactoring that needs to be done to prevent “lava-flow architecture” from prevailing. We need to up our client-side game and move to a more modern and appealing JavaScript framework and client-side modules written in TypeScript.
As such, we’re putting the call out, starting today, for senior developers who consider themselves as capably independent devs who can equally write c# and TypeScript and take cues from designers on how to get the UX just right.
We're looking for some designers to take our project and imagine what a better user experience would look like. We need your help to draw a few references and do some conceptual work on a few screens for us to use as both inspiration and guideline.
We're hoping to find connections to other charitable organizations as well.
You can be on any platform; the project runs on Windows, Mac and Linux. You don’t need Visual Studio 2017, but it may make your life a little easier. Minimally you’ll want a capable text-based IDE (like Visual Studio Code).
If you're doing personal inventory, here's what we're after: capable, independent devs who can equally write c# and TypeScript, are at home in ASP.NET Core, and can take cues from designers on getting the UX just right.
If you don’t have these, but have a strong desire to learn, we’d also love your help, so please consider joining. There’s lots of work at all levels, priorities and complexities!
If any of this piques your interest, we implore you to reach out!
We have semi-regular Community Standups which you can find on our YouTube channel.
You can fork, clone and contribute via our GitHub repo.
You can find us on Twitter and Facebook (and the main HTBox Facebook page as well).
You can join our Virtual Code-a-Thon which runs from November 10-26, 2017.
Once connected on one of those avenues, we can also get you into our Slack channel to ask questions with project experts and leadership alike.
Please tweet out this call to help, share it however you do social and let those who may be interested in joining the cause know about what is going on.
If you are a member or leader of a user group, please share with those in your community.
If you’re interested in having a code-a-thon featuring the AllReady project, please reach out to us and we will help you co-ordinate and even keynote the event if you like.
Thank you for taking a moment to help code for the greater good.
Happy Coding!
Historically, a large number of organizations have worked with data analysts that develop the schema for an application given a set of requirements, using a set of powerful designer tools. These tools are capable of generating change scripts, comparing databases for changes and drawing out the relationships between entities. This is more true for larger applications, certainly more true in the enterprise space, and is a practice you can still see in operation today, particularly in working with legacy code.
However, for smaller shops and projects and in the agile space, this is not the common approach. Many folks choose to evolve data models on the fly, use tools like data projects or migration tools like Roundhouse. And, in today’s advancements in EF, there are many projects that elect to use code first migrations which have evolved far beyond the one-way, strongly opinionated roots that first came to Entity Framework back in the unicorn days.
In my travels and in mentoring developers over the last few years I’ve found a surprising number of folks who haven’t seen the tooling that is freely available to them through SQL Server Management Studio, also known as SSMS.
In an effort to better understand a fix or feature that I’m asked to help with, I often will sketch out the problem and the implementation. When this includes modifications to the schema I like to employ the use of a diagram in my effort. While pen-and-paper would do just fine for most cases, it’s often handy to know data types, see any relationships and move things around as often as needed to clarify the picture of the problem I’m trying to solve. And this is where tooling comes into play.
SSMS is a medium-weight utility that clocks in just under a gig (you can download it here). It’s by no means small, but it is powerful and has some great cloud-connectivity capabilities as well. It’s designed for the fully-featured SQL Server SKUs, but it works just fine for LocalDB as well.
To get started with diagrams in LocalDB, simply invoke the context menu with a right-click on your database’s “Database Diagrams” node in SSMS’s Object Explorer.
Click on the option for “Install Diagram Support”.
Note: I run on Windows 10 and I am not on a domain. There is no Active Directory on my machine. I have found that when I use code-first databases and EF migrations I will get an error like Microsoft SQL Server Error: 15404, saying:
LocalDb could not obtain information about group/user (username)).
This error also surfaces as ‘SQL Server error 0x534’, ‘0x54b’, ‘0x2147’.
You can get past this error by elevating your privileges on the database in question using the following command. To execute, just right-click on your database to get that context menu back and select "New Query", then paste this in (and fix your DB name):

```sql
alter authorization on database::[your-db-name-no-quotes] to sa
```
This is okay for LocalDb instances running on your machine, as I'm assuming that you're going to be running this locally. Be sure to consider best practices, application-level security and your company policies before changing database permissions for databases your organization makes available at large.
With security set and ready to go you’re pretty much off to the races.
This part is super easy. Now that diagram support has been added just invoke that context menu again, but instead, this time choose “New Database Diagram”. A blank canvas will appear and you’ll be prompted to add tables.
Select the tables you want to learn more about and add them to your diagram. SSMS draws out any relationships that exist.
By default, tables will be displayed with only the column names. A much more useful rendering is to switch to the “Standard” view which will reveal extended properties of the columns.
SSMS has a whole host of other features baked into diagrams that I tend to avoid. You can create new tables, modify columns, change properties and create relationships.
Don’t.
I can't see many scenarios where this would be a good idea. These types of things should be done through a managed and repeatable process that can be easily propagated to other developer workstations or out into production. Using the designer to do this will break my heart, and also a kitten will die.
There is one case where this might be useful. If you are learning and want to better understand the SQL that is generated through the lens of something meaningful to you (like entities in your project) you might be able to use this as a learning tool. Go through with your changes in the designer (they are not saved by default), then select Database Diagram -> Generate Change Scripts from the application menu. SSMS will prompt you to save out change scripts.
Note: You may need to turn off the setting called "Prevent saving changes that require table re-creation" in order to get some types of edits to work; otherwise you'll see a message saying that "Saving changes is not permitted."
You can find that setting in Tools -> Options -> Designers.
To its credit, SSMS does a healthy job of trying to preserve your data. Here, I've simply added a column to the CampaignGoal table, but you can see what SSMS is trying to do behind the scenes with its change script, which opens with this warning:

```sql
/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
```
Personally, I find dotnet ef migrations add new-notes-field a lot easier to do.
Database Diagrams are local to your computer and are not part of your code base. If you’re using EF and code-first, chances are you’ve never seen a “picture” of your model. Grab yourself a copy of SQL Server Management Studio and get sketching.
That feature ain’t going to finish itself ;)
Happy coding!
For others, in-person training is the most effective way to dive into new content. Having someone in the same room who knows how to navigate a new release of software, framework and tooling and all the related changes is a powerful asset while you learn.
That said, we are pleased to announce our first Monsters workshop in Calgary, Alberta. Please join us in Calgary as we mash on changes, approaches, caveats and wins for all things in ASP.NET Core.
Already interested? You can sign up today and join us in February from the 22nd to the 24th.
My good friends Dave, Simon and I have been mashing on ASP.NET Core since its inception. This workshop is the culmination of what we have learned along the way and applied in our projects, samples and through our videos on Microsoft’s Channel 9. We’re taking you deep into three fully-packed days that walk you through various stages of application development. Our number one priority is to equip you with the skills you need to start on a Core MVC project and transition your existing skills to the new tooling.
We expect you to be familiar with web technologies and to be comfortable in Visual Studio. Beyond that, here is some of what you can expect:
Be sure to check out our training site to view the full curriculum.
Calgary and area is home to some of the most beautiful sights in Canada, with a mountain range full of winter sports about an hour away, skiing at Calgary's Olympic Park as well as NHL and WHL hockey on the edge of downtown. There are great restaurants, museums, art exhibits and theatre, along with a great night life including brew pubs, world-famous Canadian poutine and an assortment of comedy clubs.
If you’re joining us from outside the area, we highly recommend adding on a few days to your trip so that you can explore the area. If you are from outside of Canada, you will need to get a valid International Driver’s Permit from your country before you leave if you wish to rent a car when you’re here (handy for exploring!).
Can't join us in Calgary? No problem. Just hit the registration page and sign up for our email list to be notified of other upcoming training cities.
Happy New Year, and happy coding!
My good friend Simon Timms (not Tibbs) reached out to me on the first day of this series and said, "I'd love to write a post on Functions using F#". I said that sounded like a fantastic idea, and now here we are. (Well, here I am, with his post. He's on a beach…)
This article is part of an ongoing series on Azure Functions.
Thanks to James for letting me guest author on his blog. I love functions and I love F# so this was a fun post to write.
-Simon
Psst…thanks back at you Simon! Great post and honoured you offered to write it as an addition to the series!
Azure functions can be written directly in a wide variety of languages, including C#, F#, JavaScript, PowerShell, Python, PHP, Bash and Batch.
You can also build an executable which can run on Windows and upload it to the function to be executed. This approach allows for maximum compatibility with most any workload you can throw at it. Of all the natively supported languages I think F# fits most nicely into the functional mold.
Of course you can write your functions in C# and everything will be fine but wouldn’t you rather write them in a functional programming language?
F# perhaps isn't a pure functional programming language, but it is close enough to have some great advantages over C#. Declarations are immutable by default, the syntax is terse and stylish, null exceptions are all but unheard of, and it has fantastic support for filters, which are useful when dealing with any sort of complex data. It is a natively supported language on Azure Functions, so let's see how that works.
You can start with a new Azure function but instead of selecting C# let’s take F#. For our example we’ll base it off of the HTTP Triggered function.
In our scenario we’d like to pull back the headlines from the BBC World Service in JSON format. This, as it turns out, is pretty easy to do in F# thanks to type providers.
A type provider is a compile-time shim which can be used to generate types from a variety of different data sources. The FSharp.Data project includes providers for such things as CSV, XML, Json and databases among other data sources. You can read more about it at their website. We can start our project by adding a new file to the solution, a project.json. This will allow downloading and including libraries from nuget.
Our project needs the type provider and also a handy Json serializer. We'll include the latest FSharp.Data and Newtonsoft.Json (the version numbers below are simply what was current at the time; use the latest):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "FSharp.Data": "2.3.2",
        "Newtonsoft.Json": "9.0.1"
      }
    }
  }
}
```
Once you save that file you should see a restore started by the functions runtime. This will download and stage your libraries.
Now onto the actual F#. Start by opening the run.fsx file and clearing it out. We'll need to reference the framework bits and open the libraries we've just downloaded from nuget.

```fsharp
#r "System.Net.Http"

open System
open System.Net
open System.Net.Http
open FSharp.Data
open Newtonsoft.Json
```
With all our libraries properly referenced we can pull the definition of the BBC's RSS feed into a type. Remember, this is done at compile time, so you don't have access to runtime variables like settings in the environment.
```fsharp
// the URL must be a literal because the schema is inferred at compile time;
// the BBC feed here is representative - any RSS feed would do
type RSS = XmlProvider<"http://feeds.bbci.co.uk/news/rss.xml">
```
As simply as that we now have a type called RSS which is generated from the BBC's RSS feed. This could have been any feed or format; the XmlProvider just downloads the file and infers the schema.
Now for the meat of the problem: download the feed, get the titles and convert them into Json.
```fsharp
let Run(req: HttpRequestMessage, log: TraceWriter) =
    // a sketch of the handler; member names come from the inferred RSS schema
    let feed = RSS.Load("http://feeds.bbci.co.uk/news/rss.xml")
    log.Info(feed.Channel.Title)
    let titles = feed.Channel.Items |> Seq.map (fun item -> item.Title) |> Seq.toArray
    let json = JsonConvert.SerializeObject(titles)
    req.CreateResponse(HttpStatusCode.OK, json)
```
This code downloads the RSS feed, logs out the title then maps out the item titles and returns it as a Json serialized array. If you’re new to F# you might have noticed that we don’t explicitly return anything. F# will implicitly return the last line of your function.
At the time of writing, the output of this function looks like this:

```
"[\"Hillary Clinton: I wanted to curl up with book after election loss\",\"Liz Truss urged to 'get a grip' on minimum-term inmates\",\"Whiplash plans to 'cut car insurance premiums by £40'\",\"Plans to curb House of Lords powers 'dropped'\",\"Leonard Cohen: Singer died in sleep after fall\",\"Price of Football 2016: Premier League cuts cost of tickets\",\"Dementia game 'shows lifelong navigational decline'\",\"Animals still poached in 'horrifying numbers' - Prince William\",\"Online calculator predicts IVF baby chances\",\"Hillsborough: Sir Norman Bettison defends book on disaster\",\"RSPB hails 'remarkable' recovery of threatened cirl bunting\",\"Most common surnames in Britain and Ireland revealed\",\"Meet the girl, 4, who called 999 and saved her mum's life\",\"Donald Trump's name removed from NYC buildings\",\"Price of Football 2016: Away tickets can cost more in Championship than Premier League\",\"Mosul battle: Inside an Islamic State mortar factory\",\"Airbus crew train in Stornoway crosswinds\",\"The video game that's actually dementia research\",\"China traffic dance video goes viral\",\"Part of an Eiffel Tower staircase up for auction\",\"BBC Breakfast\",\"What will President Trump do about North Korea?\",\"Phil Mercer: Australia's child poverty 'national shame'\",\"Is Nigel Farage heading for the Lords?\",\"Pidgin - West African lingua franca\",\"Trump presidency: Your questions answered\",\"How does the UK's Supreme Court work?\",\"Newspaper headlines: Steak in prison and 'Three Lions party'\",\"The Blood Forest\",\"Supermoon\",\"Picking up the pieces\",\"Week in pictures: 5 - 11 November 2016\",\"Your pictures\",\"Andy Murray beats Kei Nishikori at ATP World Tour Finals in London\",\"Wayne Rooney apologises to England chiefs over 'inappropriate' images\",\"Women's Champions League: Brondby 1-1 Manchester City Women (agg 1-2)\",\"Brackley Town 4-3 Gillingham (aet)\",\"Whites v blacks\",\"'He's a devil'\",\"Price of Football\",\"Sick and stranded\",\"Battle of the barnets\",\"Fake news quiz\",\"Housing squeeze\",\"Virtual Ariel\"]"
```
Of course the F# used here isn't very idiomatic. Let's clear it up a bit:

```fsharp
let Run(req: HttpRequestMessage, log: TraceWriter) =
    let feed = RSS.Load("http://feeds.bbci.co.uk/news/rss.xml")
    feed.Channel.Title |> log.Info
    feed.Channel.Items
    |> Seq.map (fun item -> item.Title)
    |> JsonConvert.SerializeObject
    |> fun json -> req.CreateResponse(HttpStatusCode.OK, json)
```

With that we have an Azure Function written in a functional programming language; 20 lines all told, including blank lines.
When we use GitHub to store our code, we enable an easy way to deploy our functions continuously to Azure, but it doesn’t come without caveats. This post is about getting you familiar with the benefits, side-effects and consideration points you’ll need to make as you move towards continuous deployment in Azure Functions.
This article is part of an ongoing series on Azure Functions.
Here is a high-level summary of how your code gets from GitHub (or any other source control service you configure) out to your Azure Function App: you push commits to the branch you've configured, a webhook notifies your Function App, and the Kudu engine pulls the changes, runs a build and deploys the result into your app.
Let’s have a look at a few of the details up close.
There’s a pretty easy trail to follow in the Azure Portal to get GitHub hooked up, with the one requirement being that you have a repository ready to go. Create the repo and push it to GitHub (or create it on GitHub) then navigate through the Functions Apps UI to “Function app settings” and then under “Deploy” click on “Configure continuous deployment” to sign in to GitHub and pick your repo.
But if you've already created a few functions in your app, you will want to get those down to your computer to work on them in Visual Studio and put them under source control. Here's how to do it:

1. Create a repository locally and push it to GitHub (or start from an empty GitHub repo).
2. Download the contents of your app's wwwroot directory from the Kudu console.
3. Copy the function folders into your repository, then commit and push.
4. Configure continuous deployment in the portal as described above.
In step 2 I mention downloading the wwwroot directory from the Kudu console. To do that, head back to "Function app settings" and then under "Deploy" click on "Go to Kudu". Drill into the `site` directory and then click on the download icon to get your functions.
You could also grab these files by using an FTP client and the credentials you have set up in the Azure Portal.
This is pretty handy. It means that you can have a very rapid-to-deploy path that gives you the ability to build things quickly and get them out to Azure completely hands off.
Working locally has other benefits. It’s much easier to work with multiple files without clicking around in a browser and hopping throughout your functions.
Visual Studio and VS Code are powerful tools in our belts, and things like IntelliSense, a tabbed editing interface and the ability to work offline are all big wins.
Using source control as a trigger for deployment will also help encourage your team to avoid using the portal to edit their scripts (there's a related caveat to this below). This is great because it means that no one will make unsolicited or accidental changes to the scripts in your portal…the UI is locked down to prevent out-of-band updates. And because you can pick a branch to deploy from, you can actually use branches for different environments.
Ah yes, ‘tis true that this ease of deployment comes at a cost. I would like to point out that in this scenario - deploying from a branch - we are using the Kudu build and deployment pipeline for each deployment target. For those of you who practice CI/CD in such a way that the assets from your build server are promoted through each of your environments, this is not the correct path for you.
Because each merge to a branch is going to trigger a build for the environment that is watching it, you’re actually getting different builds going to each environment. This isn’t entirely a loss; after all, we’re supposed to be entirely environment agnostic, right? If we don’t care about the operating system or the machine that is running it, and the exact same bits are used to build it each time, do we have a problem?
Well…some folks (including the very well respected crew over at ThoughtWorks) consider a branch per environment to be an anti-pattern. I couldn’t agree more when we’re talking about traditional software, architectures and environments (for oh, so many reasons), but in the world of PaaS, is it something that we should be rethinking? (I will talk about this in greater detail in another post).
One final and important caveat is that your source control repository may contain far more than just the Azure Function code. Deploying All The Things may be unnecessary at best, and at worst could cause hard-to-triage problems out in the cloud. You can likely get around this with an orphaned branch, but that feels awkward to me.
Finally, because you don’t have the ability to make iterative code changes in the Functions editor, your only way to make changes is to edit the code locally and push it up to GitHub, thus triggering a deployment. I actually consider this a perk as well, but it’s something you should be aware of.
If GitHub to Azure direct isn’t your thing, remember that the build and deployment bits are built on Kudu, which has an API that you can consume as part of your deployment pipeline. You can also use publishing profiles and msdeploy or wawsdeploy to get your functions out there with just a little bit of script.
Using a script as part of another build server process also gives you the ability to extract the Function assets and deploy them separately from the rest of your project code.
In short, there’s no reason to back away from Azure Functions if the idea of deploying from source control or per-environment branches are outside of your comfort zone.
While the jury is no longer out on why you’d want to have a build and deployment pipeline in place, there certainly can be early wins in a project for testing and prototyping by deploying a project directly from source control. For non-critical workloads and early in your prototyping cycles, deployment from GitHub or any other source control service may give you an easy way to get part of your app in front of clients and consumers. Give it a try and discuss it with your team to see if this feature has a place in your project.
Happy coding!
This article is part of an ongoing series on Azure Functions.
Workloads can come in many shapes, and your goal as a developer in the cloud is to make sure that everything you touch happens quickly. API surface area that is required to scale has certain responsibilities, such as keeping overhead low and not doing IO-bound operations synchronously. Azure Function Apps can help you achieve these goals by giving you building blocks that let you fan out your workload and keep your services nice and responsive.
TL;DR: Queue the work to be performed individually, then acknowledge the receipt of work. Let the actual processing happen in the background.
Let’s suppose you want to accept a list of photo albums to download. There could be dozens of albums and each album could have dozens or even hundreds of photos. You don’t have the details of each album, so when you get the list, you’ll need to iterate over all the albums and fetch the details, at which point you’ll have the list of photos you can download.
I’m using this example because it is close to my career: I actually built something like this many moons ago, prior to cloud being a “real” thing that we could tap into. The solution back then was to have many services running on many servers and using a single dispatcher to queue up downloads and distribute the download tasks. It cost many dollars to service the lease on those machines. Today, building something like this on Azure Functions is way easier, and wouldn’t cost anything (infrastructure-wise) until users were paying to use it.
To follow along with this one, you’ll want to hit the Azure portal and create a Queue-triggered function with a C# template, creating the appropriate queue and selecting a storage account. Add a file to the function called Album.csx
with the following code inside it:
1 | public class Album |
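Based on how the class gets used later in the post - we log a Title and queue a Url to download - a minimal sketch of Album would be something like this (the exact members are an assumption):

```csharp
public class Album
{
    // display name for the album, used in our log messages
    public string Title { get; set; }

    // where to go to fetch the album's details and photos
    public string Url { get; set; }
}
```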
Our function will be receiving a message that a number of albums will need to be downloaded. The Functions UI has a handy “test” feature that you can use to send messages while you mash on your code. I’m using the following test data to simulate the information that would be coming into my queue:
1 | [ |
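Something along these lines works - the albums themselves are made up for illustration:

```json
[
  { "Title": "Vacation 2016", "Url": "https://example.com/albums/vacation-2016" },
  { "Title": "Family Reunion", "Url": "https://example.com/albums/family-reunion" }
]
```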
Here’s my code to process the message:
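A sketch, at least - this assumes the incoming message is the JSON array above and that the output binding is named outputQueue (both names are assumptions):

```csharp
#r "Newtonsoft.Json"
#load "Album.csx"

using System.Collections.Generic;
using Newtonsoft.Json;

public static void Run(string myQueueItem, ICollector<Album> outputQueue, TraceWriter log)
{
    // the incoming message is a JSON array of albums to download
    var albums = JsonConvert.DeserializeObject<List<Album>>(myQueueItem);

    foreach (var album in albums)
    {
        log.Info($"Queuing album: {album.Title}");

        // hand each album off to the queue; the downloading happens elsewhere
        outputQueue.Add(album);
    }
}
```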
Note that I’m just logging things out and returning. We need to make that actually do some interesting work. More importantly, though, I’m not going to be doing any of the work here - not the downloading bits, anyways. I simply want to put the album information on the queue to be processed external to this call. I can do this quickly and get out.
You’ll notice the ICollector<Album>
in the signature. That is augmented from the default ICollector<string>
so that we can take advantage of the type binding that Functions offers.
The next thing to do is to start sending that output somewhere. It’s going to the queue, but it’s not being processed. To get processing to happen, we’re going back to the integration tab and pressing the magic button:
Clicking “go” takes you to the create page for adding a new function to your Functions App. The queue trigger template is selected and the parameters are already filled out. When you create the new function you’re taken to the code, where the initial pass of the script looks like so:
1 | using System; |
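The stock template is little more than a log statement - something close to this:

```csharp
using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
}
```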
Let’s update that, because we have a type already broken out, and we want to have more strongly-typed arguments. Add an Album.csx
to your function again here (it’s a different file set from our first one) and put the same members in the class as before. Don’t worry - we’re going to look at fixing this copy-and-paste nonsense later in the series.
Your new function should look like so:

```csharp
using System;

public static void Run(Album album, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {album.Title}");
    // do the downloading bits here...
}
```
There, now, if you want to see the fun stuff happen, open another browser window and put your two functions side-by-each on the screen. Return to the intake function and run it again. Your single-message list of albums is diced up and sent to the queue one-by-each, then processed over in the other function. Cool beans.
When you have the data at hand, don’t squirrel it away from the rest of your processing pipeline. Instead, forward that information along.
We should always work towards having a measure of idempotence in our messages. What I mean by that is simply that the message should stand on its own. If you have the title and the URL of an album to download, don’t make the next handler in the chain use the URL to look up the title. It can mean that you start to build out fatter messages, but the payoff is that you don’t have to go and look things up. Messages can be replayed without pulling in dependencies and you’ll reach a much higher level of scalability.
Note that it’s a good practice in cases like this to include something along the lines of a correlation ID, to help understand which queued work items belong together - see the sample message below. It’s also a good way to figure out when you’ve completed a distributed set of work. If folks are interested in this, I will dig further into how to achieve this in Azure.
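For illustration, a fattened-up, self-contained message might look like this (the field names are made up):

```json
{
  "CorrelationId": "5e431b60-9d23-4b6e-b4f3-2f0a6d3c9a11",
  "Title": "Vacation 2016",
  "Url": "https://example.com/albums/vacation-2016"
}
```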
Happy coding!
In this post we’ll look at using a storage account trigger to automatically have an image processed as part of an Azure Function App. And this isn’t just for image processing: any type of object can trigger a block of work, and it will follow these same mechanics.
This article is part of an ongoing series on Azure Functions.
The type of Function I’m creating here is based on the BlobTrigger-CSharp
template. The interface allows you to create the function and select your storage account settings, including the path to the source of the images. You can trigger Azure Functions from an event in one of your existing storage accounts, or you can use the Azure Functions interface to create a new storage account as I’m doing here when I create my function.
The portal will create a binding for your script that will allow you to process files created at the path specified. The connection string for your storage account will automatically be created and added to your Function App. There is no magic here. To see how this is wired up, inspect the function.json
file in your function through the View Files
tool pane.
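As a rough idea of what you’ll find in there, the trigger binding looks something like the following - the container name, parameter name and connection setting are assumptions that will reflect your own wizard choices:

```json
{
  "bindings": [
    {
      "name": "inputImage",
      "type": "blobTrigger",
      "direction": "in",
      "path": "originals/{imageName}",
      "connection": "myStorageConnection"
    }
  ]
}
```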
One important note is that the parameter {imageName}
in the path is what you will need to name the parameter in your method signature. We’ll come back to that, but first, we need to add another parameter binding, this time for output, so that we can save out the resized image back to Azure Blob Storage.
Click on the Integrate menu and add a new Output binding to the list. The type you want to select here is Azure Blob Storage output. Basically, we’re taking one blob and saving it out as another blob.
I’ve selected the same connection string that I created in the first step. We’re limited in that we must use a storage account from the same region as our Function App. Note the blob parameter name because we’re going to be seeing it as the name of the output stream in our function. Also note the path, where I’m reusing the {imageName}
parameter. This was the name of the file coming into our function, and we’ll save it out as the same filename but to a different path.
Next, use the View Files
button to reveal your app’s assets and then add a new file called project.json
. This is the current mechanism that Function Apps use to restore packages from NuGet (which we’ll cover in a future post).
1 | { |
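The file follows the standard project.json shape; assuming the ImageResizer package used below (version shown for illustration), it would look something like this:

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "ImageResizer": "4.0.5"
      }
    }
  }
}
```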
Save the JSON above into your project.json
, then navigate to run.csx
, the script file for your endpoint.
This is what the code should look like in your run.csx
file when you’re done with it. I’m using the awesome ImageResizer
library to execute a resize operation with one stream as the source and the other as the output. Here’s where all those parameter names come back into play:
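Roughly, anyways - here’s a sketch that assumes the binding names used throughout this post, plus a fixed maximum size for the resize:

```csharp
using ImageResizer;

public static void Run(Stream inputImage, string imageName, Stream resizedImage, TraceWriter log)
{
    log.Info($"Resizing {imageName}");

    // one stream as the source, the other as the output
    ImageBuilder.Current.Build(
        inputImage,
        resizedImage,
        new ResizeSettings("maxwidth=400&maxheight=400"));
}
```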
The input blob arrives as a stream called inputImage, the name of the image itself comes in as imageName (extracted from the path), and the output stream is called resizedImage. We also get a bonus parameter of type TraceWriter, which is provided to us by the runtime to facilitate logging.
Redgate makes this great tool called Azure Explorer that you should grab. It makes working with Azure Storage much easier. Sign in to the tool, add your storage account to the configuration and you should be off to the races.
If you created the storage account through the Azure Functions App wizard (as I did above), remember that there is no magic here! This is a slice of an App Service - just the parts needed to execute code built on the Web Jobs SDK. This means that you can use the Azure portal to navigate to the Application Settings as you would for any other app by clicking on Function app settings and drilling into the Application Settings from that page. Note that this is all just Azure, so you could also filter by your storage account name from the portal and drill in to find your connection string.
This technique is by no means restricted to resizing images. There is a whole host of other event types that you can leverage as a trigger for your Function App, including timers, HTTP requests and webhooks, queue messages, Service Bus messages, and Event Hub events.
Give one of those a shot!
Happy coding!
This is one way you can organize your scripts, types and objects in Azure Function Apps, and we’ll have a deeper look at another approach later in this series.
This article is part of an ongoing series on Azure Functions.
The automatic bindings in Azure Functions are pretty nifty and cut out a lot of the communication and serialization cruft you might otherwise have to deal with. You’ll see function signatures like the following:
1 | public static void Run(Stream imageBlob, string name, TraceWriter log) { ... } |
Above, Stream
is not a value type, it’s a reference type…a complex object that is hydrated by the runtime for you. These parameters are bound for you when the function is executed and values from the input - be it a new file created on a file commit in blob storage, an HTTP request or some other trigger - will be mapped into the types you provide in your method signature. You can think of this as model binding as we know it in ASP.NET MVC (if you’re from that background).
There is a step in the creation of your Function where you create the mapping for these bindings, and different types of Functions seem to have different capabilities for binding. For example, the input bindings for Queues seem to be more powerful (they can bind to POCOs), whereas HTTP-triggered Functions seem to only allow binding to the HttpRequestMessage, meaning you’ll have to deserialize the payload yourself.
The first thing you’ll want to do is to not put your types in your scripts, unless it’s truly a single-purpose Function. In the code editor you can reveal your project assets by clicking the View Files button.
In the bottom of that tool pane, click the add button and create a new file. Here, my person.csx
script has a class definition in it.
1 | public class Person { |
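The members themselves aren’t important to the mechanics here; assume something simple like this:

```csharp
public class Person {
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```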
When you define types in other files you will need to pull them into your Function; they are not inherently available. You can pull in a type from another script much as you would pull in a reference to an assembly:
1 | #load "person.csx" |
This allows you to use the type either as a binding parameter (if supported for your Function trigger type) or as an instance you create in code. Here’s an example of using the Person class in an HTTP-triggered Function:
1 | #load "person.csx" |
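A minimal sketch of such a function - the response is just for illustration:

```csharp
#load "person.csx"

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // bind the JSON body to our shared Person type
    var person = await req.Content.ReadAsAsync<Person>();

    log.Info($"Request received from {person.FirstName} {person.LastName}");

    return req.CreateResponse(HttpStatusCode.OK, $"Hi, {person.FirstName}!");
}
```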
Now, should you pass in a payload on an HTTP request with a JSON body of something like the following, your Function will be able to read that data out with req.Content.ReadAsAsync<Person>()
:
1 | { |
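For example, assuming the Person members sketched above:

```json
{
  "FirstName": "Ada",
  "LastName": "Lovelace"
}
```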
Even though there is a default set of namespaces available to you in your Function scripts, I think it is advisable to explicitly declare your using statements. This will prevent problems with namespace conflicts, make it more apparent to others where your types are coming from (including future you, who tends to disapprove of younger-you’s shortcuts), and make it easier if you want to move your types out of the cloud and into reusable libraries.
Here are some things to try out:
Happy coding!
Azure Functions are built on the Web Jobs SDK, which is a proven base that has matured over the last few years. It differs in that you can opt to use a “dynamic” pricing model rather than the “App Service” model. This is important, as you can now be billed per “gigabyte second”, a new, ridiculously small unit of measure that clocks down to the millisecond.
C# support is provided through .csx files, which helps eliminate some of the cruft of projects but introduces other limitations. Things like dependency injection aren’t supported yet, and there is a little more legwork in getting third-party binaries up and available in your Function App.
Some libraries are preloaded to make things easy and others are hot in Azure so you can reference them without having to pull in libraries manually.
Here are the namespaces that are included in all your scripts by default, available as though you’ve already put them in using statements:

```
Microsoft.Azure.WebJobs
Microsoft.Azure.WebJobs.Host
System
System.Collections.Generic
System.IO
System.Linq
System.Net.Http
System.Threading.Tasks
```
These .NET Framework assemblies are also available, but you’ll have to add a using statement for any types you wish to use in your functions:

```
mscorlib
Microsoft.Azure.WebJobs.Extensions
System.Core
System.Net.Http.Formatting
System.Web.Http
System.Xml
```
There are other assemblies that are “hot” in the environment and can easily be brought into your scripts. If you want to take a dependency on types in these libraries, you need to reference them in your script:

```
Microsoft.AspNet.WebHooks.Common
Microsoft.AspNet.WebHooks.Receivers
Microsoft.Azure.NotificationHubs
Microsoft.ServiceBus
Microsoft.WindowsAzure.Storage
Newtonsoft.Json
```
To create the reference to a library in your scripts, say for Newtonsoft.Json, use the following statement at the top of your script:

```csharp
#r "Newtonsoft.Json"
```
Then you can add an appropriate using statement and use its types.
ASP.NET Core is not yet supported in Azure Functions, but support is on the way. This is a priority for the team; the holdup is that there are still dependencies on too many libraries that are not yet ported to Core, as evidenced by the automatically “known” libraries that are included in Functions.
You can find more of the basics covered on the Azure Functions documentation website, but if you’re comfortable with the above, feel free to browse the articles in this series for some real-world ways to leverage Azure Functions.
Would you like to see more? Suggest an Azure Function topic in the comments below or ping me on the Twitters.
We got to mash with Chris Anderson, who works on the Functions and Web Jobs team at Microsoft.
Happy Coding!
If you’ve worked inside of the MVC Framework you’ve either explicitly noticed or been implicitly subjected to some of the conventions at work. These include things like controller classes being discovered by their Controller suffix, views being resolved from folders named after their controllers, and action parameters being bound by name.
These conventions work to remove some of the effort we need to get our application running. Some of them are locked in - we can’t change the names of methods that are invoked in the startup class, for instance, as there is an explicit search for Configure
and ConfigureServices
- but others can be amended, removed, and replaced on our whim.
There are four categories of conventions that we’re going to briefly discuss here:
| Convention | Interface | Description |
|---|---|---|
| Application (widest net) | IApplicationModelConvention | Provides access to application-wide conventions, allowing you to iterate over each of the levels below. |
| Controller | IControllerModelConvention | Conventions that are specific to a controller, but also allow you to evaluate lower levels. |
| Action | IActionModelConvention | Changes to action-level conventions can be made here, as well as on any parameters of the actions. |
| Parameter (smallest scope) | IParameterModelConvention | Specific to parameters only. |
As your application loads, it will use any conventions that you have added, starting at the outermost field of view - application conventions - then working its way in through controller and action conventions to parameter conventions. In this way, the most specific conventions are applied last, meaning there is a caveat: if you add a parameter convention by using IControllerModelConvention, it could be overwritten by any IParameterModelConvention, regardless of the order in which you add the conventions to the project. This is different from middleware, in a sense, because the order of conventions only applies within the same level, and there is a priority on level that you can’t adjust.
I wanted to build a convention that celebrated how great rabbits were at math, specifically, multiplication. You know those rabbits! What I did first was to create an interface:
1 | public interface IAmRabbit { } |
…so that I could use it in an attribute:
1 | public class RabbitControllerAttribute : Attribute, IAmRabbit { } |
…which allowed me to apply the attribute to my controller:

```csharp
[RabbitController]
public class HomeController : Controller
{
    // ...
}
```
…so that I could leverage the attribute in my convention:
1 | public class RabbitConvention : IControllerModelConvention |
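A sketch of what that convention can look like, matching the description below - this leans on ActionModel’s copy constructor and the modern Microsoft.AspNetCore namespaces, so treat it as a starting point rather than the exact original:

```csharp
using System.Linq;
using Microsoft.AspNetCore.Mvc.ApplicationModels;

public class RabbitConvention : IControllerModelConvention
{
    public void Apply(ControllerModel controller)
    {
        // only controllers decorated with our IAmRabbit marker get the treatment
        if (!controller.Attributes.OfType<IAmRabbit>().Any())
        {
            return;
        }

        // clone each action under a "Bunny"-prefixed name
        var clones = controller.Actions
            .Select(action => new ActionModel(action) { ActionName = "Bunny" + action.ActionName })
            .ToList();

        foreach (var clone in clones)
        {
            clone.Controller = controller; // point the clone back at its controller
            controller.Actions.Add(clone);
        }
    }
}
```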
Now, the great math capabilities of bunnies are available! I loop through all the actions on my controller and create a cloned version of the action with the prefix Bunny
. So there will be an Index
action and a BunnyIndex
and so forth at runtime. Now, you may think that this isn’t too relevant at first glance, so I’ll leave it as an exercise to the reader to think about how Web API actions might be handled by convention when you have action names that are verbs.
Wiring up the convention is easy…just add it to the conventions collection when you’re adding MVC in the ConfigureServices
method in Startup
:
1 | public void ConfigureServices(IServiceCollection services) |
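Something like the following - note that options.Conventions natively holds application-level conventions; recent versions of MVC ship Add overloads that accept the narrower convention types directly, while older versions may need a small adapter:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        // register our controller convention alongside the defaults
        options.Conventions.Add(new RabbitConvention());
    });
}
```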
Here are some great resources that will help you explore other uses of these interfaces.
Filip Wojcieszyn’s Posts and Community Contributions
Steve Smith’s article on feature folders
As you can see, you are not locked into the default behaviours of ASP.NET Core MVC, and you have many surface areas acting as customization points for you to exploit.
Let’s have a look at what it takes to allow users to authenticate in our application using GitHub as the login source, and you can check out the Monsters video take of this on Channel 9.
OAuth has been known as a complicated spec to adhere to, and this is further perpetuated by the fact that while much of the mechanics are the same among authentication providers, the implementation of how one retrieves information about the logged in user is different from source-to-source.
The security repo for ASP.NET gives us some pretty good options for the big, wider-market plays like Facebook and Twitter, but there aren’t - nor can or should there be - packages for every provider. GitHub is appealing as a source when we target other developers, and while it lacks a package of its own, we can leverage the raw OAuth provider and implement the user profile loading details on our own.
In short, the steps are as follows:

- Add the Microsoft.AspNet.Authentication.OAuth package
- Register an OAuth application on GitHub to get a client ID and secret
- Store those secrets outside of source control
- Wire up the OAuth middleware in your startup class
- Fetch the user’s profile details when the authentication ticket is created
packageOkay, now let’s dive into the nitty gritty of it.
First step is a gimme. Just head into your project.json
and add the package to the list of dependencies in your application.
1 | "Microsoft.AspNet.Authentication.OAuth": "1.0.0-rc1-final", |
You can see here that I am on RC1, so assume there may still be some changes to the naming and, obviously, the version of the package you’ll want to use.
Pull down the user account menu from your avatar in the top-right corner of GitHub, then select Settings. Next, go to the OAuth Applications section and create a new application. This is pretty straightforward, but it’s worth pointing out a few things.
First, you’ll need to note your client ID and secret, or minimally, you’ll want to leave the browser window open.
Second, you’ll see that I have an authorization callback set up in the app as follows:
https://localhost:44363/signin-github
This is important for two reasons:

- the host and port have to match what you’re running locally during development, and
- the signin-github bit will need to be configured in our middleware
bit will need to be configured in our middlewareIf you want better control over how that is configured in your application, you can incorporate the appropriate settings into your configuration files, but you’ll also need to update your GitHub app. This process is still relevant - you’ll likely want something to test with locallying without having to deploy to test your application.
For production applications you’ll be fine to set environment variables or configure application settings in Azure (which are loaded as env vars), but locally you’ll want access to the config as well. You can setup user secrets via the command line, or you can just right-click on your project in Visual Studio 2015 and select “Manage User Secrets”. From there, you set it up like so:
1 | { |
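The shape of that document might be along these lines - the key names are assumptions, so just keep them consistent with how you read configuration in your startup class:

```json
{
  "GitHub": {
    "ClientId": "your-client-id",
    "ClientSecret": "your-client-secret"
  }
}
```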
Next, we’ll add the middleware to the Configure method in our startup class and add a property to the class to expose our desired settings. That property also wires up some code to fire during the OnCreatingTicket event, which we’ll implement further down.
The middleware call is like so:

```csharp
app.UseOAuthAuthentication(GitHubOptions);
```
And we create the property as such:
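A sketch of that property against the RC1-era OAuth middleware looks like the following; the endpoint URLs are GitHub’s published OAuth endpoints, while the scheme name and configuration keys are assumptions:

```csharp
public OAuthOptions GitHubOptions => new OAuthOptions
{
    AuthenticationScheme = "GitHub",
    ClientId = Configuration["GitHub:ClientId"],
    ClientSecret = Configuration["GitHub:ClientSecret"],
    CallbackPath = new PathString("/signin-github"),
    AuthorizationEndpoint = "https://github.com/login/oauth/authorize",
    TokenEndpoint = "https://github.com/login/oauth/access_token",
    UserInformationEndpoint = "https://api.github.com/user",
    Events = new OAuthEvents
    {
        // go fetch additional details about the authenticating party
        OnCreatingTicket = context => CreateGitHubAuthTicket(context)
    }
};
```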
Remember that callback path that we set up on GitHub? You’ll see it again in our settings above. You’ll also note that we’re retrieving our client ID and secret from our configuration, and that we’re setting up a handler for when the auth ticket is created so that we can go fetch additional details about the authenticating party.
We’ll have to call back out to GitHub to get the user’s details; they don’t come back with the base calls for authentication. This is the part that is different for each provider, and thus the part you’ll need to write for yourself if you wish to use an alternate source for authentication.
We will add two parts to this: the first will call out to get the information about the user, the second will parse the result to extract the claims. Both of these can live in your startup.cs class.
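Here’s a sketch of those two parts, patterned on the OAuth middleware samples of the era - the method names and the exact claim set are assumptions:

```csharp
// requires usings for System.Net.Http, System.Net.Http.Headers,
// System.Security.Claims and Newtonsoft.Json.Linq

private static async Task CreateGitHubAuthTicket(OAuthCreatingTicketContext context)
{
    // part one: ask GitHub's user endpoint about the authenticated user
    var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);

    var response = await context.Backchannel.SendAsync(request, context.HttpContext.RequestAborted);
    response.EnsureSuccessStatusCode();

    var user = JObject.Parse(await response.Content.ReadAsStringAsync());

    // part two: parse the result to extract the claims
    AddClaims(context, user);
}

private static void AddClaims(OAuthCreatingTicketContext context, JObject user)
{
    var identifier = user.Value<string>("id");
    if (!string.IsNullOrEmpty(identifier))
    {
        context.Identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, identifier));
    }

    var login = user.Value<string>("login");
    if (!string.IsNullOrEmpty(login))
    {
        context.Identity.AddClaim(new Claim(ClaimsIdentity.DefaultNameClaimType, login));
    }

    var email = user.Value<string>("email");
    if (!string.IsNullOrEmpty(email))
    {
        context.Identity.AddClaim(new Claim(ClaimTypes.Email, email));
    }
}
```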
Unfortunately, the base implementation of the OAuth provider does not support allowing us to request additional fields for the user; I’ll take a look at that in a future post. All you’re going to get are the basics with the above - so none of the account details beyond the email address, nor the ability to work with their repos/issues/PRs.
There you have it. All the chops you need to start exercising your OAuth muscle, and a basic implementation that you can leverage as a starting point. Trying this out will take you about 15 minutes, start to finish, provided you already have a GitHub account.
Finally, check out the Monsters’ video on Channel 9 where I code this live.
Happy Coding!