comments edit

Update: my terminology in this post is not quite correct. Whenever I refer to the server part of OData, I really mean to say ‘WCF Data Services’. OData is the protocol, and WCF Data Services is the specific implementation. So the ‘quirk’ we ran into is a WCF Data Services thing and not an OData thing.

As you may know, NuGet uses an OData feed for its packages. Whenever you install packages or search for packages from Visual Studio, everything goes through this feed.

If you’re a NuGet user, you may also have noticed that the perf of NuGet searches from Visual Studio had been quite bad in recent months. You’d go to the NuGet package dialog and type a search string, and it would take 10 or more seconds to give you results. Ouch! :(

It turns out that the perf issue was due to a nasty OData quirk that we’ve since worked around, and I thought it might be interesting to share this with others. I’m partly doing this as you might run into this yourself if you use OData, partly to poke a little fun at OData, and also to poke a little fun at ourselves, since we really should have caught that from day one.

A whole stack of query abstractions

When you make an OData query from a .NET client, you go through a whole bunch of abstraction layers before a SQL query is made. Let's say, for example, that you're looking for packages that have the string 'T4MVC' in their description. The query would roughly go through these stages:

First, in your .NET client, the OData client library would let you write something like:

var packages = context.Packages.Where(p => p.Description.Contains("T4MVC"));

Second, this code gets translated by the OData client LINQ provider into a URL with a query string that looks like this:


Third, this is processed by the OData server, which turns it back into a LINQ expression that should, in theory, look similar to what you had on the client:

var packages = context.Packages.Where(
    p => p.Description.Contains("T4MVC"));

Of course, the ‘context’ here is a very different beast from what it was in step 1, but from a LINQ expression tree point of view, there shouldn’t be much difference.

And finally, the Entity Framework LINQ provider turns this into a SQL query, with a WHERE clause that looks something like:

WHERE Description LIKE N'%T4MVC%'

And then it executes nice and fast (assuming a proper index), and all is well.

When the abstractions break down

Unfortunately, that clean sequence was not going as planned, resulting in much less efficient queries, which started to get really slow as our package count grew (and we're already at over 7,000 as of this writing!).

So which of these steps went wrong? For us, it turned out to be the third one, where the OData server code was creating a very complex LINQ expression.

To understand why, let’s first briefly discuss OData providers. When you write an OData DataService, you actually have the choice between three types of providers:

  1. An Entity Framework provider, which works directly over an EF ObjectContext

  2. A reflection provider, which works on an arbitrary context that exposes entity sets that are not tied to a specific database technology

  3. A custom provider, which is something so hard to write that almost no one has ever done it (maybe a slight exaggeration, but not by much!)

Given that we're using EF, #1 seems like the obvious choice. Unfortunately, the EF provider is very inflexible: it doesn't let you use any calculated properties on your entities. In other words, it only works if the only thing you want on your OData feed is fields that come straight from the database. So for most non-trivial apps it's not a very usable option, and it wasn't for us (we have some calculated fields, like ReportAbuseUrl).

So we ended up using the reflection provider, and wrapping the EF objects with our own objects which exposed whatever we wanted.

Functionally, this worked great, but what we didn't realize is that using the reflection provider causes OData to switch to a different LINQ expression tree generator which does 'crazy' things. Specifically, it makes the bad assumption that when you use the reflection provider, you must be using LINQ to Objects.

So it protects you with some 'null propagation' logic which makes sure that when you write p.Description.Contains("T4MVC"), it won't blow up if Description is ever null. It does this by inserting conditional checks into the LINQ expression. This is very useful if you are in fact using LINQ to Objects, but it's a perf disaster if you are using LINQ to Entities!
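To make the quirk concrete, here is a small standalone reconstruction (my own sketch, not NuGet's or WCF Data Services' actual code; the Package class and the exact shape of the guarded expression are hypothetical) contrasting the plain predicate with the null-guarded shape the reflection provider effectively produces:

```csharp
using System;
using System.Linq.Expressions;

class Package { public string Description { get; set; } }

class NullPropagationDemo {
    static void Main() {
        // What the client-side query roughly compiles to:
        Expression<Func<Package, bool>> plain =
            p => p.Description.Contains("T4MVC");

        // A reconstruction of what the reflection provider effectively
        // generates: a conditional guarding against a null Description,
        // which is what later turns into the CASE/WHEN SQL.
        Expression<Func<Package, bool>> guarded =
            p => (p.Description == null
                    ? (bool?)null
                    : (bool?)p.Description.Contains("T4MVC")) == true;

        // Against LINQ to Objects, the guard is genuinely useful:
        Console.WriteLine(guarded.Compile()(new Package()));          // False, no crash
        Console.WriteLine(plain.Compile()(
            new Package { Description = "has T4MVC inside" }));       // True
    }
}
```

Run against in-memory objects, the guarded form never throws on a null Description; pushed through LINQ to Entities, that same conditional is what bloats the generated SQL.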

Now, when translated into SQL, what should have been the simple WHERE clause above was in fact becoming something like this:

WHERE  1 = ( CASE 
               WHEN ( Description LIKE N'%T4MVC%' ) THEN 
               CAST(1 AS BIT) 
               WHEN ( NOT ( Description LIKE N'%T4MVC%' ) ) THEN 
               CAST(0 AS BIT) 
             END ) 

which was running significantly slower. Note that in reality we're querying over multiple fields at once, so the final SQL statement ended up being much scarier than this; I'm just using this simple case for illustration. And to make things worse, we learned that there was no way to turn off this behavior. What to do?

The solution: use some LINQ ninja skills to restore order

LINQ ninja David Fowler found this an irresistible challenge, and came up with a fix that is both crazy and brilliant: he wrote a custom LINQ provider that analyzes the expression tree generated by the OData LINQ provider, searches for the unwanted conditional null-check pattern, and eliminates it before the expression gets handed off to the EF LINQ provider.
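To give a flavor of the technique (this is my own deliberately simplified sketch, not the real ODataNullPropagationVisitor, which handles more patterns): an ExpressionVisitor walks the tree, spots a conditional whose branches are just the constants true and false, and collapses it back to the bare test.

```csharp
using System;
using System.Linq.Expressions;

// Simplified sketch: collapse "test ? true : false" conditionals,
// leaving just the original test expression.
class NullPropagationStripper : ExpressionVisitor {
    protected override Expression VisitConditional(ConditionalExpression node) {
        var ifTrue = StripConvert(node.IfTrue);
        var ifFalse = StripConvert(node.IfFalse);
        if (ifTrue is ConstantExpression t && true.Equals(t.Value) &&
            ifFalse is ConstantExpression f && false.Equals(f.Value)) {
            // (test ? true : false)  =>  test
            return Visit(node.Test);
        }
        return base.VisitConditional(node);
    }

    // The generated trees often wrap the constants in Convert nodes
    static Expression StripConvert(Expression e) =>
        e is UnaryExpression u && u.NodeType == ExpressionType.Convert ? u.Operand : e;
}

class Program {
    static void Main() {
        Expression<Func<string, bool>> bloated =
            s => (s.Contains("T4MVC") ? true : false);
        var pruned = new NullPropagationStripper().Visit(bloated);
        Console.WriteLine(pruned);
    }
}
```

The pruned predicate is semantically identical, but hands the EF LINQ provider a tree it can translate into the simple LIKE clause shown earlier.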

If you want to see the details of his fix, it’s all on github, split into two projects:

QueryInterceptor is a helper library that makes it easier to write this type of query modification code.

ODataNullPropagationVisitor builds on QueryInterceptor and specifically targets the removal of the unwanted null check.

Naturally, these are available via NuGet (with the second depending on the first). After importing those packages, all that’s left to do is add one small call to your IQueryable, e.g.

query = query.WithoutNullPropagation();

and your expression trees will be given a gardener’s special pruning :)

Lesson learned: always check your SQL queries

Some might conclude that all those query abstractions are just too dangerous, and we should just be writing raw SQL instead, where this never would have happened. But I think that would be way too drastic, and I certainly wouldn’t stop using abstractions because of this issue.

However, the wisdom we learned is that no matter what query abstractions you're using (LINQ, OData, or other), you should always run SQL Query Analyzer on your app to see what SQL statements actually get run. If you see any queries that don't completely make sense based on what your app is doing, get to the bottom of them and address them!

Of course, this is really ‘obvious’ advice, and the fact that we never did that is certainly a bit embarrassing. Part of the problem is that our tiny NuGet team is mostly focused on the NuGet client, and that the server hasn’t been getting enough love. But yes, these are just bad excuses, and in the end, we messed that one up. But now it’s fixed :)


Like many others, I have been playing around with Google+ to see what the new kid on the block has to offer. And it does have some good things going for it, with the concepts of Circles providing a pretty nice approach to privacy.

Unfortunately, it suffers from the same flaw that Facebook and Twitter have always had: it makes the naïve assumption that when you follow someone, you want to hear everything they have to say. In other words, it treats us as one-dimensional beings, which doesn’t match the ‘real world’.

This is something I have always found particularly painful on Twitter, both as a tweet consumer and a tweet producer.

As a consumer, I end up not following a bunch of interesting .NET folks because they’re too ‘noisy’, meaning they tweet about a lot of unrelated things that I don’t care about. I’ve tried to follow Scott Hanselman’s philosophy and let the river of crap wash over me, but it just didn’t work for me. I guess I couldn’t take the smell.

As a producer, I end up not tweeting many things I would want to say, because I know that a lot of my 2500 followers only care about the .NET side, and I don’t want to add crap to their rivers. For instance, I follow tennis closely, but I’m not going to tweet super insightful things like “OMG, Federer lost!!”, because I know most followers don’t care.

So to summarize, I’m missing out as a consumer, and repressed as a producer. Sad! :(

Aren’t Twitter hashtags the way to follow topics instead of users?

Twitter hashtags are an ugly hack over a weak platform, and don’t do much to solve this.

First of all, as a producer, it makes no difference to my followers, since they will see my tweets no matter what hashtags they contain.

As a consumer, hashtags fail pretty badly for a number of reasons. Many people don't use them correctly: they get misspelled, forgotten, and often conflict with unrelated things. But more importantly, they assume that you want to hear about that topic from everybody, while in many cases I only want to hear what a selected set of users are saying about that topic.

If I could set search criteria for each user that I follow, I might be getting somewhere, but that's just not an option today. And even that would work poorly given the inconsistent use of hashtags.

But don’t Google+ Circles solve this issue?

No, not one bit! Circles are about privacy and nothing else. The issue I’m discussing here has nothing to do with privacy; it’s about filtering of public information.

I see people saying that Google+ successfully merges what Facebook and Twitter are good at: connecting with friends and having a public voice. They are wrong! Let’s put that to the test…

Let's say I convince all my family to get on Google+ (a tough challenge, but bear with me). I add them to my 'family' circle and they do the same thing. We can share family things with great privacy; that's nice, and is where circles shine.

But now let’s say I’m also using Google+ the way I use twitter today, and write a whole bunch of things about .NET.

What happens when my family members click on their ‘family’ circle? They’re inundated with all that .NET stuff from me that they couldn’t care less about! Their first reaction is that they want to go back to Facebook, where they don’t see that ‘work’ stuff.

Now let's look at a second scenario: I want to publicly share various things about both .NET and tennis. The key word here is publicly. I don't want to have to add everyone who can read my tennis and .NET comments to two circles, since I want it to be wide open. Circles are just not meant to solve this.

The answer: Channels

One simple way to solve this is to add a concept called ‘channels’. Here is how it would work:

First everyone can (optionally) define a list of channels. In my case, I might create channels called ‘tech’, ‘tennis’, and ‘personal’. For each channel, you can write a one line ‘advertisement’ of what you generally discuss there. e.g. my tech channel might say ‘stuff I work on, mostly related to .NET and NuGet’.

Then whenever you share something, you can choose whether it should go to everyone or just some channel. Note that when I say ‘everyone’ here, I really mean ‘everyone that is allowed to see it’. Again, channels are not a privacy concept; they are orthogonal.

Finally, when you follow someone (i.e. add them to a circle), you get to choose whether you want the whole person, or only some of the channels. e.g. my mom would pick my ‘personal’ channel, while some .NET folks may choose ‘tech’, and others might leave it unfiltered and get it all (which would be the default, as it is today).

As an additional option, you could attach a channel to each circle. e.g. my 'family' circle would use the 'personal' channel, so I don't have to think about it when I share from there. Note that this setting only applies to what I share. For each family member that I follow, I can still select what I want from their channels (which are likely not named the same as mine).

This may seem a bit complicated, but I don’t think it would be in practice, because:

  • Users coming from Facebook who only use it to connect to friends would not define any channels.
  • When you start following someone, you’d typically follow the whole person, as you do today. Then if you start getting too much noise from them, an easy-to-find option would allow you to tune it down. e.g. the context menu on my ‘tennis’ comment would offer “Don’t show any more ‘tennis’ comments from this user”. Conceptually, this is similar to Facebook offering you to ignore Farmville entries from some users, and that’s an easy concept to understand.

So it would not make the platform any less approachable to newbies, but the extra power would be readily available when needed.
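As a thought experiment, the filtering rule described above fits in a few lines. This is purely a toy model of the proposal (all type and member names are made up for illustration): a post optionally carries a channel, a follow is optionally narrowed to channels, and the feed is the intersection.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record Post(string Author, string Channel, string Text);   // Channel == null means 'shared with everyone'
record Follow(string Author, HashSet<string> Channels);    // Channels == null means 'whole person'

static class FeedBuilder {
    public static IEnumerable<Post> BuildFeed(
            IEnumerable<Post> posts, IEnumerable<Follow> follows) {
        var byAuthor = follows.ToDictionary(f => f.Author);
        return posts.Where(p =>
            byAuthor.TryGetValue(p.Author, out var f) &&
            // unfiltered follow, channel-less post, or a channel I opted into
            (f.Channels == null || p.Channel == null || f.Channels.Contains(p.Channel)));
    }
}
```

e.g. a follower who picked only my 'tech' channel would see my .NET posts and my channel-less posts, but the "OMG, Federer lost!!" tennis post would never reach them.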

Good old blogs have had that forever

Interestingly, if you view ‘things that you share’ as ‘blog posts’, and ‘following someone’ as ‘subscribing to their RSS feed’, you find that the channel feature I describe here is almost identical to the concept of tags/labels in a blog.

e.g. you subscribe to my main feed to get all my posts, or to a tag-specific feed to only get my posts about NuGet.

So the basic concept is far from new, but for some reason the big social networks have not caught on to it.

Will this feature ever be available?

Well, that's a good question! My hope is that enough people want it that the big social networks will eventually want to implement something like it.

If I had to choose, I’d prefer Google+ to be the one offering this, since I think it has a model which lends itself to it best.

And if all else fails, I’ll just have to start a new social network. Or not! :)


Click here to find all the posts relating to the Razor Generator

A few days ago, I blogged about how you can use Razor Generator to precompile your MVC Razor views. In this post, I will demonstrate how you can then unit test your precompiled views. Note that this is still very much experimental, so at this point the primary goal is to get feedback on the concept.

Simple walkthrough to unit test views

After installing RazorGenerator, create an MVC 3 Razor app, using the ‘Internet Application’ template and including the unit test project.

In the previous post, we used precompiled views in a different library, so this time let's keep them in the MVC project to show something different. First, use NuGet to install the RazorGenerator.Mvc package in your MVC project. Then, as in the previous post, set the custom tool on Views\Home\Index.cshtml to 'RazorGenerator', causing Index.cs to be generated under it.

Now, let's do something new and use NuGet again to add the RazorGenerator.Testing package to the unit test project (not to the MVC app!).

And that's all it takes to get set up! Now we can write a unit test for our precompiled Index.cshtml view. e.g. create a Views\HomeViewsTest.cs (in the unit test project):

using HtmlAgilityPack;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MvcApplication1.Views.Home;
using RazorGenerator.Testing;

namespace MvcApplication1.Tests.Views {
    [TestClass]
    public class HomeViewsTest {
        [TestMethod]
        public void Index() {
            // Instantiate the view directly. This is made possible by
            // the fact that we precompiled it
            var view = new Index();

            // Set up the data that needs to be accessed by the view
            view.ViewBag.Message = "Testing";

            // Render it in an HtmlAgilityPack HtmlDocument. Note that
            // you can pass a 'model' object here if your view needs one.
            // Generally, what you do here is similar to how a controller
            // action sets up data for its view.
            HtmlDocument doc = view.RenderAsHtml();

            // Use the HtmlAgilityPack object model to verify the view.
            // Here, we simply check that the first <h2> tag contains
            // what we put in view.ViewBag.Message
            HtmlNode node = doc.DocumentNode.Element("h2");
            Assert.AreEqual("Testing", node.InnerHtml.Trim());
        }
    }
}

A few notes about unit testing views

Unit testing views in ASP.NET MVC is something that was very tricky to do before, due to the fact that the views are normally compiled at runtime. But the use of the Razor Generator makes it possible to directly instantiate view classes and unit test them.

Now the big question is whether unit testing views is desirable. Some people have expressed concerns that it would be a bit fragile due to the changing nature of the HTML output. My take here is that while it would be a bad idea to try to compare the entire HTML output, the test can be made pretty solid by selectively comparing some interesting fragments, as in the sample above.

That being said, I have not tried this in a real app, so there is still much to learn about how this will all play out. This is just a first step!

What about partial views?

When designing this view testing framework, we took the approach that we wanted to focus on the output of just one view at a time. Hence, if a view calls @Html.Partial(…) to render a sub-view, we don't let the sub-view render itself, and instead just render a token to mark where the sub-view would be. This seemed more true to the nature of what a unit test should be, compared to letting the whole composite page render itself, which would be more of a functional test (plus there were some tough challenges to making it work).

Where do we go from here?

Well, it'll be interesting to hear what people think about the general idea. We're interested in two types of feedback. First, what do you think about the overall concept of unit testing views using this approach? Second, please report any bugs that you run into. At this point, I expect it to be a bit buggy and probably blow up on some complex views. Treat it as a proof of concept! :)


Click here to find all the posts relating to the Razor Generator

A while back, I blogged about a single file generator I wrote that can precompile Razor helpers. A bit later, Chris van de Steeg started from that code base and extended it to support precompiling MVC views (see his post).

On my side, this project stayed dormant for a little while, but recently we started extending it to support a number of interesting new scenarios, including precompiling MVC views (albeit with a slightly different approach from Chris’s).

Most of the new code was written by Pranav Krishnamoorthy, a dev on the ASP.NET team. Many thanks to him for moving this project forward.

Installing the generator

It's on the VS extension gallery, so install it from there. It's called "Razor Generator" (not to be confused with "Razor Single File Generator for MVC", which is Chris's).



Walkthrough to precompile MVC views

You can use it to precompile MVC views either in a separate library or in the MVC project itself. I’ll demonstrate the separate library case, as it’s a bit more interesting.

To begin with, create a new MVC 3 app using Razor (and the ‘Internet Application’ template). Then add a new class library project to the solution (e.g. call it MyPrecompiledViews), and add a reference to it from the MVC project.

Update (4/26/2012): the best approach is actually to create an MVC project for that library, instead of a class library project. You'll never actually run it as an MVC app, but the fact that it comes with the right set of config files allows intellisense and other things to work a lot better than in a library project. See the CodePlex project for the latest info.

Now the fun part begins: using NuGet, install the RazorGenerator.Mvc package into your class library. This adds a number of things to the project:

  • A reference to RazorGenerator.Mvc.dll, which contains the view engine
  • Logic to register the view engine using WebActivator (in App_Start\PrecompiledMvcViewEngineStart.cs).
  • Two web.config files that are there to make intellisense work while you author your views (they’re not used at runtime)
  • A sample view, which you can later remove

Let’s take a closer look at that sample view:



Notice that it has a Custom Tool set to RazorGenerator, which causes it to generate a .cs file underneath itself (thanks to the generator we installed earlier).

This is just a sample, so now let’s move the Views\Home\Index.cshtml from the MVC project to the same folder in the class library (you can press Shift during the drag/drop to make it a move). Then set the generator to RazorGenerator as in test.cshtml. You’ll now get an Index.cs nested under Index.cshtml.

And that's it, you're done! You can now run your app, and it will be using the precompiled version of Home\Index.cshtml.

Why would you want to do that?

One reason to do this is to avoid any runtime hit when your site starts, since there is nothing left to compile at runtime. This can be significant in sites with many views.

Also, you no longer need to deploy the cshtml files at all, resulting in a smaller deployment file set.

Another cool benefit is that it gives you the ability to unit test your views, which has always been something very difficult with the standard runtime compilation model. I’ll cover that in more details in a future post.

Generating files at design time vs. build time

The way the generation works is very similar to the T4 templates you may have in your project. The generation happens as soon as you save the file. You can also force it to regenerate by right-clicking on the .cshtml file and choosing Run Custom Tool.

Generally, the guidance is to commit those generated files along with the cshtml file, the same way that you commit all your ‘hand-written’ source files. If you do that, everything will run just fine in an automated build environment.

Another reason to commit the generated files is that it allows you to write code against them with full VS intellisense. e.g. if you use this technique to write Razor helpers that you want to call from other views, you really want VS to know about the generated file at design time. Ditto if you want to write unit tests against your views.

That being said, if you really want to postpone the generation until build time, we’re working on an MsBuild task that will do that. For now, you can find it by getting the RazorGenerator sources on CodePlex.

If you want to help or report issues

This project is hosted on CodePlex under the Apache 2.0 license, so feel free to contribute! You can also use CodePlex to discuss and report issues.


One thing we hear occasionally from users is that it would be nice to be able to install NuGet packages from the command line (e.g. this issue, that one, and this thread). There are good reasons why this isn’t supported today, which I will discuss in this post.

What does installing a package even mean?

Before we go further, we need to be clear about exactly what we mean by ‘installing a package’. The reason this needs to be clarified is that there are really two definitions, which can cause confusion.

  1. Getting the bits onto the machine: 'installing a NuGet package' is sometimes used to mean the act of getting the contents of the package onto your hard drive.
  2. Making a project use a package: more commonly, it refers to not only downloading the package bits, but also 'applying' them to a project.

Item #1 is something that is fully supported today outside of Visual Studio using nuget.exe (see my previous post). NuGet also supports restoring packages outside of VS so you don’t have to commit them.

But for the sake of this post, I am strictly referring to #2, and that’s what I mean any time I use the term ‘installing a package’ below.

Now that we have this out of the way, let’s discuss why installing a package outside Visual Studio is non-trivial, as well as why it is in most cases not useful at all, although a case can be made for updating packages.

What makes installing a package outside Visual Studio non-trivial

Installing a NuGet package into a project (e.g. a csproj file) is a rich operation which does a lot more than just copying files. Here is a rough list of what NuGet can do when you install a package from VS (whether using the Package Manager Console or the NuGet Dialog):

  1. Add references to assemblies contained in the package
  2. Add references to framework assemblies in the GAC
  3. Add content files to an app (e.g. JavaScript files, code files, …)
  4. Add assembly binding redirects to deal with version incompatibilities
  5. Perform config transformations, typically to add settings related to the package
  6. Bring in tools that can then be run from Package Manager Console
  7. Run PowerShell scripts which can do arbitrary things by automating the DTE object model

Now let’s think about what it would take to perform those operations outside of VS.

The first 3 involve making modifications to the csproj file. When done within VS, it happens automatically by calling DTE methods, but outside of VS it would need to be done using custom parsing logic. While it’s clearly possible, it needs to be done carefully to avoid corrupting the csproj file. e.g. a GAC reference should not be added if it’s already there.
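To illustrate the kind of careful csproj surgery this would take, here is a hypothetical sketch (my own illustration, not NuGet code; the CsprojEditor name is made up) of adding an assembly reference directly to the project XML while guarding against duplicates:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Hypothetical sketch of "add a reference outside VS": edit the csproj
// XML directly, taking care not to add a duplicate entry.
static class CsprojEditor {
    static readonly XNamespace Ns = "http://schemas.microsoft.com/developer/msbuild/2003";

    public static bool AddReference(XDocument csproj, string assemblyName) {
        bool exists = csproj.Descendants(Ns + "Reference")
            .Any(r => string.Equals((string)r.Attribute("Include"), assemblyName,
                                    StringComparison.OrdinalIgnoreCase));
        if (exists) return false; // already referenced; leave the project alone

        // Reuse an ItemGroup that already holds references, or create one
        var itemGroup = csproj.Root.Elements(Ns + "ItemGroup")
            .FirstOrDefault(g => g.Elements(Ns + "Reference").Any())
            ?? new XElement(Ns + "ItemGroup");
        if (itemGroup.Parent == null) csproj.Root.Add(itemGroup);

        itemGroup.Add(new XElement(Ns + "Reference",
            new XAttribute("Include", assemblyName)));
        return true;
    }
}
```

Even this toy version has to handle the duplicate case explicitly, which hints at how much more work the real operations (content files, binding redirects, transforms) would be.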

Items #4 to #6 should not be too different from doing it in VS.

Item #7 is basically impossible, since you cannot really 'fake' the DTE to let those scripts run.

So conceivably, with some good amount of work, we could support all scenarios except #7. It would be a little quirky as some packages would not fully work, but in many cases it would work.

But let’s now discuss how useful it would be.

Why installing packages outside of Visual Studio rarely makes sense

So let’s say we had this feature and it fully worked. What would it let you do that you can’t do today?

You could use the command line outside VS to install a Foo package in your project, but that in itself is rarely useful. e.g. suppose the package brings in a Foo.dll. You now have that assembly added as a reference in your project, but you don’t have any code using it. You now need to go in VS to write code against that new assembly, so it would have been simpler to just add it from VS in the first place!

And that’s generally the case for most packages: the logical step after installing them is to go to VS and actually use them, which mostly negates any benefits you may find by installing it outside of VS.

Admittedly, there are exceptions, like the Elmah package which is more or less ‘ready to run’ after you install it. But for the wide majority of packages, there is no direct ‘install and run’ workflow.

What about package updates?

If package installs don’t make sense outside of VS, what about package updates?

So you have this Foo package that you installed from VS, but now you want to update it to a new version from the command line. Does that make sense?

I think it does make a lot more sense than the install scenario, because by that point, you (presumably) already wrote some code that uses the package. So by updating it, you might get a newer Foo.dll with bug fixes, but all the code you wrote is still valid and ready to run against.

In particular, update could work well in the constrained scenario where the new version of the package just updates an assembly but doesn't do much else.

On the other hand, it would be hard to support in the general case, since in theory, the updated package can be completely different from the older one. e.g. suppose the new package contains some new install-time PowerShell scripts. We’d be right back with the same tough issues discussed above.

Where do we go from here? You tell us!

My take is that we need to experiment with supporting package update outside of VS for at least a subset of scenarios. The big question is deciding how far this needs to go to reach a sufficiently useful state.

The first step would be to start with the ‘only the assembly changed’ scenario, which is relatively simple, and probably is the 90+% case.
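One way to gate that first step (purely my own sketch of the idea, not a NuGet API; the UpdateChecker name and the exact rule are illustrative assumptions) is to compare the old and new package file lists and declare the update safe only when nothing besides lib assemblies changed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical check for the 'only the assembly changed' scenario: an
// update is safe to apply outside VS if the only differences between the
// two packages are assemblies under lib\ (no new content files, config
// transforms, or PowerShell scripts).
static class UpdateChecker {
    public static bool IsAssemblyOnlyUpdate(
            IEnumerable<string> oldFiles, IEnumerable<string> newFiles) {
        static bool IsLibAssembly(string path) =>
            path.StartsWith(@"lib\", StringComparison.OrdinalIgnoreCase) &&
            path.EndsWith(".dll", StringComparison.OrdinalIgnoreCase);

        // Everything that isn't a lib assembly must be exactly unchanged
        var oldOther = oldFiles.Where(f => !IsLibAssembly(f)).OrderBy(f => f);
        var newOther = newFiles.Where(f => !IsLibAssembly(f)).OrderBy(f => f);
        return oldOther.SequenceEqual(newOther, StringComparer.OrdinalIgnoreCase);
    }
}
```

A new install-time .ps1 or content file in the updated package would fail this check, falling back to "please update from VS".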

If you have some thoughts on this, we'd love to hear them! Would you use such a feature, and would limiting it to updating assembly references be enough for your needs?


The standard way to create NuGet packages today is to:

  • Create a nuspec file with all the metadata and package dependencies
  • Lay out the files that you want to include
  • Run ‘nuget pack’ to create the package
  • Run ‘nuget push’ to push them to the gallery

See Phil's post for more details on those steps.

While this is pretty straightforward, it can be made yet easier if we take advantage of the fact that your VS project already contains a lot of information that shouldn’t have to be repeated.

Today, we are releasing a new nuget.exe feature that makes this a lot easier.

Debugging support via SymbolSource.org

The other really exciting thing we'd like to announce today is that we have partnered with the folks at SymbolSource.org to offer a really simple way of publishing your sources and PDBs along with your package.

Up until now, there really wasn’t a great way for package authors to let their users debug into the package’s binaries. The user would have needed to download the sources separately from wherever the project is hosted, making sure that they exactly match the state of the binary. They would also need to locate the matching PDBs. That’s pretty hard to get right, and most users would generally not bother.

Now with almost no additional effort, package authors can publish their symbols and sources, and package consumers can debug into them from Visual Studio.

What the package author needs to do

Let's first play the part of the author of a package that contains an assembly, which itself makes use of a library from another package. Let's say that other package is Clay as an example.

Step 1: create a project

Let's start by creating a new Class Library project in VS. I'll call it DavidSymbolSourceTest.

Step 2: set some metadata on it

This is an often forgotten step, but it is important to set some basic metadata on your assembly. As you'll see later, it's particularly important with this workflow. To do this, just open the Properties\AssemblyInfo.cs file and change a few values:

[assembly: AssemblyTitle("DavidSymbolSourceTest")]
[assembly: AssemblyDescription("David's little test package to demonstrate easy package creation")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("David Ebbo")]
// Stuff omitted
[assembly: AssemblyVersion("0.5.0.0")]

Here, I set a description for the assembly, and put my name as the 'Company' (which is basically the author). I also changed the assembly version to 0.5.

Step 3: bring in our dependencies using NuGet

Next, let's use NuGet to bring in our Clay dependency. I assume you're familiar with the steps to do this, but in case you're not, start here.

Note that because Clay itself has dependencies, this ends up bringing in 4 packages:

Step 4: let's write some code!

In our library, we'll just write some simple code that uses Clay:

namespace DavidSymbolSourceTest {
    public class Demo {
        public static dynamic GetPersonObject(string first, string last) {
            dynamic New = new ClaySharp.ClayFactory();

            return New.Person(new {
                FirstName = first,
                LastName = last
            });
        }
    }
}
It just has a little test method which builds a Clay object based on two fields. Pretty boring stuff, but enough to demonstrate the concepts.

Step 5: save your access key

From here on, we'll be using the NuGet.exe command line tool. Make sure you get the latest from here, or if you already have an older build, run 'nuget update' to self-update it.

Now go to the gallery website to get your access key, and use nuget.exe to save it so you don't have to deal with it every time (this is a one-time step, not something you do for every project!). e.g.

D:\>nuget setapikey 5a50d497-522a-4436-bf90-b65362e65f52
The API Key '5a50d497-522a-4436-bf90-b65362e65f52' was saved for the NuGet
gallery feed and the symbol server.

Note: no, this is not actually my key, but thanks for asking! :)

Step 6: specify additional metadata using a nuspec file

In step 2, we added some metadata in AssemblyInfo.cs, which NuGet can directly understand. Unfortunately, some of the NuGet concepts don’t have a matching CLR attribute yet, so we still need a nuspec file to specify the rest.

To create one, just run ‘nuget spec’ from the folder where the csproj is.

D:\DavidSymbolSourceTest\DavidSymbolSourceTest>nuget spec
Created 'DavidSymbolSourceTest.nuspec' successfully.

NuGet.exe detects that the nuspec file is meant as a ‘companion’ to a VS project, and will generate a file with replacement tokens. e.g.

<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <description>$description$</description>
    <tags>Tag1 Tag2</tags>
  </metadata>
</package>

Note how a number of the fields use a token syntax like $version$. This basically means: I don’t want to repeat what’s already in my AssemblyInfo.cs, so just get values from there.
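Conceptually, the token replacement is just a substitution pass over the nuspec text, with values pulled from the assembly’s attributes. Here is a rough Python sketch of the idea (the token names and metadata values are illustrative; this is not NuGet’s actual implementation):

```python
import re

def expand_tokens(nuspec_text, assembly_metadata):
    """Replace $token$ placeholders with values from the assembly metadata."""
    def substitute(match):
        token = match.group(1)
        # Leave unknown tokens untouched in this sketch
        return assembly_metadata.get(token, match.group(0))
    return re.sub(r"\$(\w+)\$", substitute, nuspec_text)

# Values that would come from AssemblyInfo.cs (AssemblyTitle, AssemblyVersion, ...)
metadata = {"id": "DavidSymbolSourceTest", "version": "0.5", "author": "David Ebbo"}

print(expand_tokens("<id>$id$</id><version>$version$</version>", metadata))
# -> <id>DavidSymbolSourceTest</id><version>0.5</version>
```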

Now all you need to do is:

  • Fill in the fields you care about, like the project URL.
  • Remove the ones you don’t care about, e.g. the icon URL if you don't have an icon.

Note that technically, this whole step is optional, and you can omit the nuspec file entirely if you don’t need any metadata other than what’s in AssemblyInfo.cs. However, since all packages are supposed to specify things like a license and a project URL, in practice it's not a step you'll want to skip.

Step 7: create the package

This is where the new and exciting stuff really starts. Go to the folder where the csproj file is and run:

D:\DavidSymbolSourceTest\DavidSymbolSourceTest>nuget pack -sym DavidSymbolSourceTest.csproj
Attempting to build package from 'DavidSymbolSourceTest.csproj'.
Building project for target framework '.NETFramework,Version=v4.0'.
Packing files from 'D:\DavidSymbolSourceTest\DavidSymbolSourceTest\bin\Release'.
Found packages.config. Using packages listed as dependencies
Successfully created package 'D:\DavidSymbolSourceTest\DavidSymbolSourceTest\DavidSymbolSourceTest.0.5.nupkg'.

Attempting to build symbols package for 'DavidSymbolSourceTest.csproj'.
Building project for target framework '.NETFramework,Version=v4.0'.
Packing files from 'D:\DavidSymbolSourceTest\DavidSymbolSourceTest\bin\Release'.
Found packages.config. Using packages listed as dependencies
Successfully created package 'D:\DavidSymbolSourceTest\DavidSymbolSourceTest\DavidSymbolSourceTest.0.5.symbols.nupkg'.

Note that we are passing the -sym flag to the ‘nuget pack’ command, and that we’re giving it as input the csproj file!

The command will build the project if needed, and then create both a regular package (DavidSymbolSourceTest.0.5.nupkg) and a ‘symbolsource’ package (DavidSymbolSourceTest.0.5.symbols.nupkg).

Note how it used the version we had specified in AssemblyInfo.cs in step 2. Likewise, the Author and Description in the package came from there. This happens because of the token replacement logic from step 6.

In addition to the metadata inherited from AssemblyInfo.cs, the package will contain the metadata you explicitly added to the nuspec file, like the Project Url.

And one more thing: it also found our dependency on Clay and added that to the package, again without having to add it explicitly to the nuspec file!

Step 8: push the packages

Now that we created the packages, we just need to push them out: one goes to the NuGet gallery and the other to the SymbolSource symbol server. This can all be done in one command:

D:\DavidSymbolSourceTest\DavidSymbolSourceTest>nuget push DavidSymbolSourceTest.0.5.nupkg
Pushing DavidSymbolSourceTest 0.5 to the NuGet gallery feed...
Publishing DavidSymbolSourceTest 0.5 to the NuGet gallery feed...
Your package was published.

Pushing DavidSymbolSourceTest 0.5 to the symbol server...
Publishing DavidSymbolSourceTest 0.5 to the symbol server...
Your package was published.

Note that we ran ‘nuget push’ on the main package, and it automatically pushed the symbol package at the same time. And now we’re done: our package is live and ready to be installed from NuGet and debugged with full sources!

What the package Consumer needs to do

Now let’s play the part of the package Consumer that uses this package. Here I’ll demonstrate using a simple Console app, though the steps apply equally well to other apps.

Important note: these steps are more interesting when done on a different machine than the ‘Package Author’ steps! If you do them on the same machine, rename or delete the Author project to make sure VS doesn’t take any shortcuts on you when debugging (which it will!).

Step 1: set up the VS debugger settings

This is a one time setup step. In VS, go under Debug / Options and Settings, and make a few changes:

  • Under General, turn off “Enable Just My Code”

  • Under General, turn on “Enable source server support”. You may have to Ok a security warning.

  • Under Symbols, add the SymbolSource server URL to the list. The dialog will look like this:


Step 2: create a test console app

Make sure you set its Target Framework to the full profile rather than the Client Profile (see my previous post).

Step 3: use NuGet to bring in our test package

Here is what you’ll see in the Online tab of the NuGet dialog:


Notice that not only does our new package show up on the public feed, but all the metadata and package dependencies are there as well!

Now click Install to install the package and its dependencies.

Step 4: write some test code to use the package

We’ll just call the method we defined and display some output:

using System;

namespace ConsoleApplication12 {
    class Program {
        static void Main(string[] args) {
            var person = DavidSymbolSourceTest.Demo.GetPersonObject("David", "Ebbo");
            Console.WriteLine("{0} {1}", person.FirstName, person.LastName);
        }
    }
}
Step 5: debug into the package!

This is the final step that makes it all worth it! Set a breakpoint on the line that calls our GetPersonObject method and press F5 to start debugging.

When you hit the breakpoint, press F11 and be amazed!


Here we are debugging into our new package, with both the sources and PDB files coming straight from SymbolSource!

Registering with SymbolSource

Note that in all the steps above, we never actually went to the SymbolSource web site. The nice thing is that everything can work without even setting up an account there. Note that SymbolSource does verify that you own the package by checking with the NuGet gallery using your key.

But even though the registration step is optional, it is recommended that you register with the site in order to be able to manage the symbol packages that you upload there. To register, just go to the SymbolSource site and follow the instructions. During registration, you’ll be asked for your NuGet key, which is how your account will get associated with your submissions.

comments edit

When you run under UAC (User Account Control), nothing runs as admin by default, and that’s a good thing! But sometimes, you do need to run some things as administrator.

There are a few well known ways of doing this. You can right click on an EXE and choose ‘Run As Admin’. Or if you have the app pinned on your taskbar, you can Ctrl-Shift click it to run as admin. If you don’t know those tricks, you should learn them as they often come in handy.

However, there is one common scenario for which there is no well documented technique: how do you launch a program as admin from a data file? Taking a particularly interesting example, how do you launch Visual Studio as admin from a .sln file?

First, you try the obvious and right click it, hoping to see the familiar ‘Run As Administrator’ item. But no luck there:


While this at first appears hopeless, it turns out that there is a way to do this by adding some simple things to your registry.

The general technique is explained here (thanks to @meligy for pointing me to it). The post describes how to do it for any file type, but I can save you a bit of time by giving you the reg change you need to make (and it’s not scary!):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\VisualStudio.Launcher.sln\shell\runas\command]
@="\"C:\\Program Files (x86)\\Common Files\\Microsoft Shared\\MSEnv\\VSLauncher.exe\" \"%1\""

Just save that in a foo.reg file somewhere and run it. After you do that, right clicking on a .sln file will look like this:


And that’s it, just what we wanted!

Final note: my reg file above is hard coded to “C:\Program Files (x86)”, which won’t work on all systems so you may need to adjust things. I tried to change it to use the ProgramFiles(x86) env variable but I couldn’t make that work in the registry. Seems default values can’t be REG_EXPAND_SZ? Let me know if you know how to do this!
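One way around the hardcoded path is to generate the .reg file from the machine’s own environment instead of trying to expand the variable inside the registry. A rough Python sketch (the registry key path mirrors the snippet above and is an assumption; on a real machine you would pass `os.environ["ProgramFiles(x86)"]`):

```python
def build_vslauncher_reg(program_files_x86):
    """Build a .reg payload pointing at VSLauncher.exe under the given Program Files folder."""
    # Assumed Windows path layout, joined with backslashes explicitly
    exe = "\\".join([program_files_x86, "Common Files", "Microsoft Shared",
                     "MSEnv", "VSLauncher.exe"])
    # .reg syntax requires doubled backslashes inside quoted values
    escaped = exe.replace("\\", "\\\\")
    return ("Windows Registry Editor Version 5.00\n\n"
            # Assumed key path for the .sln 'Run as administrator' verb
            "[HKEY_CLASSES_ROOT\\VisualStudio.Launcher.sln\\shell\\runas\\command]\n"
            '@="\\"' + escaped + '\\" \\"%1\\""\n')

print(build_vslauncher_reg("C:\\Program Files (x86)"))
```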

comments edit

Update: In case that’s not totally obvious, this post was an April Fool’s hoax. A number of people got briefly fooled on the day, which was the idea! Now that April 1st is behind us, I may as well make this super clear :)

Since we launched NuGet in January, its popularity has grown faster than we anticipated. A couple days ago, we reached an important milestone with over 1000 unique packages (see the gallery stats for more fun numbers).

Up until now, NuGet has been an entirely free offering, both for package authors and consumers. This free model has surely helped contribute to the growth of NuGet, and we are glad to have taken this approach.

Now that NuGet is more mature and has reached fairly wide acceptance, we have decided to switch to a pay model. This is actually something we had been planning from the start, but we chose not to announce early as it may have hindered our initial growth.

How will the pay model work

For the most part, it will be similar to the pricing models used in popular phone app stores (though apparently we can’t use that term!). There will be a mix of free and pay packages, based on the author’s decision.

As an example, the basic Fart package might be free, while SuperFarter will be priced at 99c. The more feature rich MegaFarter package may cost as much as $2.99 with all the bells and whistles, though in the end the pricing decision is up to the author.

When you buy a package, you will be allowed 10 downloads of that package for free, after which you will be able to ‘reload’ your purchase at the original price to get 5 more downloads. Why not allow unlimited downloads once you buy a package? Based on our studies, we found that not only will the patented ‘reload’ model end up being more profitable, but that by making the user think a bit more about where they choose to use a package, the average quality of the resulting software actually increases (this surprised us as well, but the results were clear).

So that’s the story for the consumer, but what about the author? We deliberated for a while on this, and decided on sharing as much as 5% of the package income with the author. That number was partially inspired by the fact that it is tax season, and that the Beatles wisely wrote in their Taxman song:

Let me tell you how it will be. There’s one for you, nineteen for me. Should five per cent appear too small. Be thankful I don’t take it all.

The last part is key, as we will in fact reserve the right to take it all in certain scenarios.

Everyone is #winning!

While this is a departure from the current model, this new pricing model will end up benefiting everyone:

Authors win by getting a share of the revenue. To put the 5% in perspective, for every 20 million downloads of your 99c package, you’ll be making almost a million dollars. We’re talking real money!

Consumers win by getting higher quality packages. With the current free model, there isn’t as much incentive for authors to put hard work into their packages. But with money at stake, we expect it’ll be a different landscape, and packages will do things that you don’t even know are possible today.

Last but not least, the NuGet team wins. Most of the money will end up there, so that should be self-evident. But we will put it to good use. For instance we will upgrade our cars, which will allow us to get to work fresher in the morning, and make NuGet even more awesome than it is today.

I hope you are all sharing our excitement about this game changing announcement!

comments edit

While you can push NuGet packages from the gallery web site, it is often more convenient to do it straight from the command line using nuget.exe.

Phil did a nice post on how that works, which you should read first if you have not done this before.

The one pain point about this technique is that you need to find your API key every time you push a package. I have had to go to the gallery site each time to locate my key, copy it and paste it on the command line. It gets old quickly! :)

The good news is that the newest version of nuget.exe (get it here) lets you save it once and for all! Credit goes to Matthew Osborn for this new feature.

Here is how it works.

Saving your key

First, you run the new SetAPIKey command, e.g.

D:\test>nuget SetApiKey 78a53314-c2c0-45c6-9d92-795b2096ae6c
The API Key '78a53314-c2c0-45c6-9d92-795b2096ae6c' was saved for the source ''.

This encrypts the key and saves it in a config file under your %APPDATA% folder. e.g. mine ends up in C:\Users\davidebb\AppData\Roaming\NuGet\NuGet.Config. This file contains:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <apikeys>
    <add key="" value="AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAnMGkdu4+rkqpSdQUWwjfIgAAAAACAAAAAAADZgAAwAAAABAAAAA5gG4wxeb8Vn4X0Y0p//OvAAAAAASAAACgAAAAEAAAAF/llublBpBgL9lSFaE9/A0oAAAAC4NVHflYsUU5UgVgOq+h3t1jwY6l2BEji6Td4F0lvxsZcZ73L2m6BRQAAABJ0TZLKdIYStn8DWawbtzdo3mrKg==" />
  </apikeys>
</configuration>

Note that the key is saved per server URL, with the server defaulting to (you can pass -src to change that).
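The per-source behavior can be modeled as a small map from source URL to key, with the gallery as the default. A toy Python sketch (the names and default-source string are hypothetical, and unlike the real nuget.exe this stores the key unencrypted):

```python
DEFAULT_SOURCE = "nuget-gallery"  # stand-in for the real default feed URL

class ApiKeyStore:
    """Toy model of a per-source API key store, one entry per server URL."""
    def __init__(self):
        self._keys = {}

    def set_api_key(self, key, source=DEFAULT_SOURCE):
        # Rough equivalent of 'nuget SetApiKey <key> [-src <source>]'
        self._keys[source] = key

    def get_api_key(self, source=DEFAULT_SOURCE):
        # What a push would look up before publishing to that source
        return self._keys.get(source)

store = ApiKeyStore()
store.set_api_key("78a53314-c2c0-45c6-9d92-795b2096ae6c")
store.set_api_key("other-key", source="my-private-feed")
print(store.get_api_key())
# -> 78a53314-c2c0-45c6-9d92-795b2096ae6c
```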

Using the saved key

Once you have done this one-time step, pushing packages becomes a breeze, as the only thing you need to pass is your package file. e.g.

D:\test>nuget push DavidTest.1.0.nupkg
Publishing DavidTest 1.0 to the live feed...
Your package was published to the feed.

Likewise, if you want to delete a package, you’d do:

D:\test>nuget delete -noprompt DavidTest 1.0
Deleting DavidTest 1.0 from the server.
DavidTest 1.0 was deleted from the server

Hopefully this will make your NuGet package management experience a little bit easier!

comments edit

Update (8/16/2011): also check out this **newer post** that describes an easier way to set up this workflow.

The current NuGet workflow has always been to commit the Packages folder into source control. The reasoning is that it matches what developers typically do when they don’t have NuGet: they create a ‘Lib’ or ‘ExternalDependencies’ folder, dump binaries into there and commit them to source control to allow others to build.

While this has worked fine for some users, we have also heard from many that committing packages into source control is not what they want to do. When using a DVCS like Mercurial or Git, committing binaries can grow the repository size like crazy over time, making cloning more and more painful. In fact, this has been one of the top requests on the NuGet issue tracker.

The good news is that NuGet now offers a workflow which goes a long way to solving this problem. It isn’t 100% automated yet, but with some minimal pain you can set up your project to do this.

Running ‘nuget install’ on a packages.config file

Earlier, I blogged about how you can install NuGet packages from the command line by using NuGet.exe.

Get NuGet.exe from here if you don’t already have it, and run ‘nuget update’ to self-update it.

This lets you install one package at a time, e.g.

D:\Mvc3Application>nuget install NHibernate -o Packages

As an aside, the -o flag is new and lets you specify where the package is installed.

But the big new thing is that you can now run it on a packages.config file. packages.config is a file that NuGet creates at the root of every project that has packages installed. So if you install the ‘EFCodeFirst.Sample’ package in your app, you’ll find a packages.config next to the .csproj file, and it will contain:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="EFCodeFirst" version="0.8" />
  <package id="EFCodeFirst.Sample" version="0.8" />
</packages>

So this holds all the information about what packages are needed for your project. Suppose you don’t commit your Packages folder (which lives under the solution folder), and another developer clones your repository. They can now run:

D:\Mvc3Application>nuget i Mvc3Application\packages.config -o Packages
Successfully installed 'EFCodeFirst 0.8'.
Successfully installed 'EFCodeFirst.Sample 0.8'.

And the Packages will be restored! The other nice thing is that this command is smart enough not to do any expensive work if they are already installed, e.g.

D:\Mvc3Application>nuget i Mvc3Application\packages.config -o Packages
All packages listed in packages.config are already installed.

This completes very quickly with no network requests.
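The restore logic boils down to: parse packages.config, skip anything already present in the Packages folder, and fetch the rest. A simplified Python sketch of that decision (the actual download step is left out):

```python
import xml.etree.ElementTree as ET

def missing_packages(packages_config_xml, installed):
    """Return (id, version) pairs listed in packages.config but not yet installed."""
    root = ET.fromstring(packages_config_xml)
    needed = [(p.get("id"), p.get("version")) for p in root.iter("package")]
    # Only these would trigger network requests; the rest are no-ops
    return [pkg for pkg in needed if pkg not in installed]

config = """<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="EFCodeFirst" version="0.8" />
  <package id="EFCodeFirst.Sample" version="0.8" />
</packages>"""

# If EFCodeFirst 0.8 is already in the Packages folder, only the sample is fetched
print(missing_packages(config, installed={("EFCodeFirst", "0.8")}))
# -> [('EFCodeFirst.Sample', '0.8')]
```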

Integrating package restore into msbuild

Integrating this into your build is a simple matter of adding a Pre-build event.

First, I would suggest committing nuget.exe into your solution, e.g. under a Tools folder. Once you do that, you can then add the following Pre-build event:

$(SolutionDir)Tools\nuget install $(ProjectDir)packages.config -o $(SolutionDir)Packages

Note how packages.config lives under the project folder while the Packages folder lives under the solution folder.

And that’s it, you’re done! Now each time you build, NuGet will first make sure that you have all the packages that you need, and will download anything that’s missing from the live feed.

If your solution has multiple projects that use NuGet, add the same Pre-Build event to each project.

As an alternative, you can use an msbuild custom build target to achieve the same thing. Check out Danny Tuppeny’s post for details on that. This worked better for him when using App Harbor.

We want your feedback

This is new, so it’s possible that it doesn’t quite work perfectly in all cases. Please let us know how it works for you: bugs, feedback, suggestions. Thanks!

comments edit

NuGet has drastically simplified the process of getting .NET libraries into your projects. What used to be an error prone and painful process has become as simple as adding an assembly reference.

While it has solved an important part of the developer workflow, it has the potential to also solve another key piece of the puzzle: helping users learn to use libraries.

I found these cool packages, but now what?

There are tons of cool packages available on NuGet today, and the number is growing daily. I’ve heard of a number of users who go down the list and install all kinds of packages into their projects to try them out. But if you’re not familiar with a library, how do you get started with it?

As an example to illustrate the discussion, let’s take the nifty little Clay package written by the Orchard guys. Say you have installed it into your project and want to start using it. Here is what you might do:

  • The NuGet dialog gives you a link to the ‘project URL’. Typically, it’s a link to where the project is hosted on CodePlex/BitBucket/github, and indeed this one takes you to
  • Once you’re there, you try clicking on the Documentation tab. Unfortunately, many projects don’t have much there. But this one at least has a pointer to Bertrand’s blog posts on the topic. So you now go to his post.
  • You read through it, and after a while, you can piece together enough bits and pieces to know what it’s about and start using it in your code.

I took Clay as an example, but this is a fairly typical experience. The fact is that a lot of knowledge about immature (yet useful) projects only exists in ‘blog post series’ rather than in any formal documentation. Not ideal, but that’s how things happen.

NuGet to the rescue with Sample Packages

Luckily, there is a simple and effective solution to this problem: use NuGet to distribute basic samples that get your users on the right path with less pain.

So to illustrate this post, I went ahead and created one such package for Clay: Clay.Sample. This package depends on Clay, such that installing it also installs Clay (as well as other things Clay depends on, like Castle).

It’s a ‘source only’ package, meaning that it doesn’t contain any binaries of its own. So let’s go ahead and try it in a brand new Console app (and change it NOT to use the Client Profile). Go into NuGet’s ‘Add Library Reference’ dialog and search for Clay. You’ll get this:


After you install it, your project will look like this:


First, note how you got all the expected references to Clay and to its dependencies: Castle.* and log4net.

But more interestingly, it also brought in a ClaySamples source file under Samples\Clay. It contains a number of Clay samples, which I shamelessly copied from Bertrand’s post. Here is one example:

public static void AnonymousObject() {
    dynamic New = new ClayFactory();

    var person = New.Person(new {
        FirstName = "Louis",
        LastName = "Dejardin"
    });

    Console.WriteLine("{0} {1}", person.FirstName, person.LastName);
}

There are about 10 such samples in there, which demonstrate everything that the post discusses. Now go to your Console Main and make a call to a method that runs all the samples:

class Program {
    static void Main(string[] args) {
        // Call the method in ClaySamples that runs all the samples
    }
}
While there is nothing in there that’s not in the blog post, the big advantage is that you can trivially get it into your project via NuGet, and you can then directly run/debug the samples without having to piece them together.

Of course, the blog post (or documentation) may still be worth reading for extra insight. But you may find that the samples give you all you need for now, and save the deeper reading for later.

Call to packages authors: write Sample packages!

I think this type of packages can have a huge impact on developer productivity. But for that to actually happen, those packages need to be created! And while I created the one for Clay, I am not volunteering to create all the sample packages :) Clearly, the best person to do that is the author of the package, though anyone who knows it well enough can certainly do it as well.

So if you own a NuGet package, please try to take on that task. It’s super easy, and your users will thank you for it!

Conventions, conventions, conventions

I recently blogged about using the App_Start convention for WebActivator startup code and got a great response, with almost all WebActivator users converting their existing packages to use this.

The situation here is quite similar, and calls for a similar convention, which is what I showed above. In a nutshell:

  • If your package is named Blah, call the sample package Blah.Sample. If you want multiple sample packages, you can call them Blah.Sample.Something and Blah.Sample.SomethingElse.
  • Make your Blah.Sample package dependent on Blah.
  • Within that package, just include source files. Place those files under Samples\Blah. You can have one or more, and call them whatever you think makes sense.
  • The code in there is up to you, but the general idea is to include whatever you think will help the user get started. Try to make the sample code easily runnable without too much extra setup. This may be harder for some packages, but do your best :)

Creating the package

Taking Clay as an example, here is the structure of the files before packing them into a nupkg:


So there are just two files, the nuspec and the preprocessed sample file. Here is the nuspec:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Clay.Sample</id>
    <version>1.0</version>
    <authors>Outercurve Foundation</authors>
    <owners>Outercurve Foundation</owners>
    <description>This package contains samples that demonstrate the use of the Clay library.</description>
    <dependencies>
      <dependency id="Clay" version="1.0" />
    </dependencies>
  </metadata>
</package>

The interesting parts here are the package Id, the description, and the dependency on Clay.

Then ClaySamples.cs.pp is a normal source file, except for a tiny bit of preprocessing for the namespace, e.g.

using System;
using ClaySharp;

namespace $rootnamespace$.Samples.Clay {
    public static class ClaySamples {
        // Sample code here
    }
}

And that’s it! Once you have that, just run ‘nuget pack’ from the folder with the nuspec, and you’ll have a sample package ready to be pushed to the feed.

comments edit

Disclaimer: running a custom build of MVC 3 is not recommended in most scenarios. Please run against the official MVC 3 bits unless you really cannot. Be aware that using custom builds will make it harder to use 3rd party libraries built against the official bits (you’ll need to rebuild those libraries).

One nice thing about ASP.NET MVC is that you can get the full sources on CodePlex and compile them yourself.

Once you copy the sources locally, you can open the WebRuntime solution in VS 2010 and build it. As an aside, note that this solution also contains the ASP.NET WebPages sources, since MVC 3 makes use of some of that logic (mostly for Razor support).

So this lets you modify the sources and build everything. However, one thing that makes the use of the resulting assemblies a bit tricky is that unlike the official bits, the bits produced by this solution are unsigned.

Let’s take this step by step.

Step 1: Create a new MVC 3 application

So let’s start by creating a new MVC 3 Web Application using the regular project template that comes from installing MVC 3.

This gives you a working app, but obviously at this point you’re still using the official MVC 3 bits.

Step 2: Reference your custom assemblies

The next step is to reference your custom MVC assemblies. Start by removing the System.Web.Mvc and System.Web.WebPages references. Instead, reference the version you’ve built of those same assemblies, which you’ll find under mvc3\bin\Debug (from the root of the WebRuntime solution).

Once you do that, your MVC project will build fine. However, if you try running it, you’ll get some scary looking runtime compilation error. Something like:

CS0433: The type 'System.Web.Mvc.WebViewPage<TModel>' exists in both 'c:\Windows\Microsoft.NET\assembly\GAC_MSIL\System.Web.Mvc\v4.0_3.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll' and 'c:\Users\David\AppData\Local\Temp\Temporary ASP.NET Files\root\d305385c\948d4291\assembly\dl3\ef116fd6\5f110ce1_44dccb01\System.Web.Mvc.DLL'

The reason this happens is that while you’ve changed the project references to point to your assembly, the two web.config files that come with the project template are still pointing to the official assemblies left and right. Which leads us to…

Step 3: Fix up your web.config files

The project comes with two web.config files, and they each contain all kinds of references to the official assemblies (which are strong-named). e.g. in the root web.config, you’ll find:

<add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
<add assembly="System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

Luckily, the fix is as simple as yanking the strong name from a few assemblies. This is easily done using a project-wide search/replace. Specifically, do the following three replacements:

  1. Replace all instances of (excluding the quotes!)

“System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35”

with “System.Web.Mvc”

  2. Replace all instances of

“System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35”

with “System.Web.WebPages”

  3. Replace all instances of

“System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35”

with “System.Web.WebPages.Razor”
And that should be it. Your app will now be up and running against your custom MVC bits.
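The three replacements above can also be scripted over the web.config text; here is a rough Python sketch (the version numbers follow the fully-qualified names shown earlier and are assumptions for MVC 3's assemblies):

```python
# Fully-qualified names to strip down to simple names (assumed MVC 3 versions)
STRONG_NAMES = [
    "System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35",
    "System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35",
    "System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35",
]

def strip_strong_names(config_text):
    """Replace each fully-qualified assembly name with its simple name."""
    for strong_name in STRONG_NAMES:
        simple_name = strong_name.split(",")[0]
        config_text = config_text.replace(strong_name, simple_name)
    return config_text

line = '<add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />'
print(strip_strong_names(line))
# -> <add assembly="System.Web.Mvc" />
```

Note that the Razor entry is listed after the plain WebPages one but is matched as a whole string, so the two never clash.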

comments edit

[Please see the WebActivator wiki for the latest docs] When I first blogged about WebActivator, I showed in my example using a source file named AppStart_SparkWebMvc.cs.pp under the Content folder in the package, which means when you install it you end up with a file named AppStart_SparkWebMvc.cs at the root of your web project.

Now suppose you install a few more packages that use the same WebActivator pattern, and you would end up with something like that at the root of your project:

More files...

That starts getting really ugly, and most devs like to keep the root of their app free of clutter.

We need a better convention!

The solution is simply to agree on a different convention where we put all this startup code into a folder. To match ASP.NET conventions, the obvious name to pick is App_Start. And once we do that, we no longer need to prefix the file names with AppStart, so we would have:

More files...

Likewise, the full class names would change from WebApplication1.AppStart_SQLCEEntityFramework to WebApplication1.App_Start.SQLCEEntityFramework. Note that the namespace doesn’t matter a whole lot since you won’t call this code explicitly. But since existing convention is to have the namespace match the folder structure, we may as well do that here.

As of today, there are 17 packages that use WebActivator, so I’ll need to try to convince all the authors to follow this. Fun time ahead! :)

But note that it’s just a convention, with no code changes to enforce it. Nothing written here breaks any existing packages. It’s just something where by agreeing on a better convention, we make NuGet yet a little bit better!

An example: EFCodeFirst.SqlServerCompact

As an example, here is what I ended up with for the EFCodeFirst.SqlServerCompact package using this pattern.

The source file transform in the package is in Content\App_Start\SQLCEEntityFramework.cs.pp, and contains:

// namespaces, etc...

[assembly: WebActivator.PreApplicationStartMethod(
    typeof($rootnamespace$.App_Start.SQLCEEntityFramework), "Start")]

namespace $rootnamespace$.App_Start {
    public static class SQLCEEntityFramework {
        public static void Start() {
            // Startup code here
        }
    }
}
Note the use of $rootnamespace$ and of App_Start in the namespace.

comments edit

[Please see the WebActivator wiki for the latest docs]

Back in October, I blogged about the WebActivator NuGet package, which allows packages to bring in some source code that runs on startup in a Web Application. It’s been a pretty popular package, as there are many scenarios where running startup logic is important. The alternative of forcing the user to modify their global.asax is just not compelling.

There have also been a few feature requests since the initial 1.0 release, and I will describe them here.

Ability to run code after Application_Start

When you use a WebActivator PreApplicationStartMethod attribute, the method it points to runs before your global.asax’s Application_Start. Yep, that’s fairly obvious from the name PreApplicationStart :)

But in some cases, that’s just too early. Scott Hanselman ran into that when trying to register MVC areas, and I added this feature as a result (see his post).

This feature works exactly the same as the PreApplicationStartMethod attribute, except using a different attribute named… drums rolling… PostApplicationStartMethod! e.g.

[assembly: WebActivator.PostApplicationStartMethod(
typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")]

So when does that run exactly? It runs at the time the very first HttpModule gets initialized. Internally, it’s using the dynamic module registration mechanism I blogged about recently.

Ability to run code when the app shuts down

WebActivator can also help you execute cleanup logic when the app shuts down. This is done via yet another attribute that works much like the other two, e.g.

[assembly: WebActivator.ApplicationShutdownMethod(
typeof(TestLibrary.MyStartupCode), "CallMeWhenAppEnds")]

This code runs at the time Dispose is called on the last HttpModule in the app.

Support for code in App_Code in Web Sites

In a Web Site (as opposed to a Web Application), you typically put your shared code in the App_Code folder. Now if you have code in there that uses the PostApplicationStartMethod attribute, it will get called when the app starts, giving Web Sites some WebActivator love.

Please note that you can only use PostApplicationStartMethod in App_Code, and not PreApplicationStartMethod. The reason is that when PreApplicationStartMethod fires, the App_Code folder has not even been compiled!

Support for invoking the start methods outside of ASP.NET

This change came courtesy of Jakub Konecki, who needed it for unit testing purposes. This comes as a set of static methods that you can use to invoke the startup methods:

// Run all the WebActivator PreStart methods
WebActivator.ActivationManager.RunPreStartMethods();

// Run all the WebActivator PostStart methods
WebActivator.ActivationManager.RunPostStartMethods();

// Run all the WebActivator start methods (Pre and Post)
WebActivator.ActivationManager.Run();

// Run all the WebActivator shutdown methods
WebActivator.ActivationManager.RunShutdownMethods();

You can find the WebActivator sources on bitbucket.

comments edit

Disclaimer: let me start by saying that the technique described in this blog is experimental, and is meant as a first step to see where this might take us. This is not in any way an officially supported technique!

If you are an ASP.NET user, you are likely aware that there are two different types of apps that you can create: Web Sites and Web Applications. Here is a quick summary of how they differ:

Web Sites

In web sites, all compilation is done at runtime rather than design time. They don’t use any VS project systems, and msbuild is never involved.

Advantages: very dynamic. You can just FTP files to the server, and everything just works. In that sense, it’s similar to ASP Classic and PHP.

Disadvantages: lack of fine control over the build process; hard to unit test; often slower in VS; not available for MVC.

Web Applications

In Web Applications, all the source code is built by VS at design time using a standard .csproj file and msbuild. Pages and views (.aspx, .cshtml, …) are still built dynamically at runtime, so it’s sort of a mixed-mode model.

Advantages: full power of msbuild, easy to unit test code, fast build in VS.

Disadvantages: once you xcopy your built app to the server, you can’t modify the code by just changing files (though you can still do this for pages & views).

What if we could get the best of both worlds?

I was recently chatting with my coworker Louis DeJardin about compilation models, and he put out the idea that we might get something interesting if we were to run msbuild on the server, which is where this came from.

In a sense, it’s sort of an ‘obvious’ thing to try if you look at the Pros and Cons of Web Sites and Web Applications. We want the full power of msbuild, but we also want the more dynamic nature of Web Sites, so the only logical thing to do is to run msbuild dynamically on the server!

Try it now using NuGet!

Before I give you more details, let me show you how you can try this in no time via NuGet:

  • Create a new MVC app
  • Install my ‘WebAppBuilder’ NuGet package
  • Run the app
  • Change the message in Controllers\HomeController.cs, and don’t rebuild
  • Refresh the page in the browser (and then again per the message you’ll get)
  • Now try to make a change with a compile error and refresh again

How does it all work?

There really isn’t much code at all to make this work. First, it uses the technique I described in my previous post to dynamically register a module. This is what allows it to kick in without any registration.

Whenever the appdomain starts, the module looks for the csproj file and builds it. Doing this is quite simple since msbuild is well exposed to managed code (take a look at Microsoft.Build.Execution.BuildManager). Note that it always does that on startup, with the assumption that the incremental build will be super fast if there is nothing to build.

Then if something actually got built, it sends back a simple page telling the user to refresh. This is a bit ugly as it effectively takes two refreshes to get the result, but it’s necessary because we can’t use the freshly built assembly in the same appdomain that built it (creating it causes a domain unload).

The other thing it does is listen to file change notification so it can unload the domain if any source files change. Then on the next request things get built as above.

There may be smarter ways of doing this, but this works pretty well as a proof of concept.
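To illustrate just the change-notification piece, here’s a rough stdlib-only sketch (the class and file names are hypothetical; the real module reacts to the change by unloading the appdomain so the next request triggers the Microsoft.Build.Execution.BuildManager step described above):

```csharp
using System;
using System.IO;
using System.Threading;

class SourceChangeWatcherSketch {
    static void Main() {
        // Stand-in for the web app's source folder
        string dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(dir);

        bool sourceChanged = false;
        using (var watcher = new FileSystemWatcher(dir, "*.cs")) {
            // In the real module, this callback would unload the appdomain
            // (e.g. via HttpRuntime.UnloadAppDomain) so the next request rebuilds
            watcher.Created += (s, e) => sourceChanged = true;
            watcher.Changed += (s, e) => sourceChanged = true;
            watcher.EnableRaisingEvents = true;

            // Simulate the user editing a controller
            File.WriteAllText(Path.Combine(dir, "HomeController.cs"), "// edited");

            // Give the watcher time to fire its (asynchronous) event
            for (int i = 0; i < 50 && !sourceChanged; i++) Thread.Sleep(100);
        }
        Console.WriteLine(sourceChanged);
    }
}
```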

You can find the code on bitbucket.

Caveat: requires full trust

One big caveat of this approach is that it doesn’t work in partial trust, because launching msbuild requires full trust. This is not something that I think can be worked around easily, so I’d say it’s an inherent limitation.

Where can we take this?

Well, I’m not really sure yet, but it is certainly interesting to think about the possibilities of using this type of build model in ASP.NET.

Let me know if you think this is crazy or may have potential :)

comments edit

In ASP.NET 4, we added the concept of a PreApplicationStart method that an assembly can use to execute code early on in the appdomain without any configuration. Phil Haack covered it a while back in this post. It’s pretty simple to use. You just define a class that looks like:

public class PreApplicationStartCode {
    public static void Start() {
        // Your startup code here
    }
}
And then you add an assembly level attribute pointing to it:

[assembly: PreApplicationStartMethod(typeof(PreApplicationStartCode), "Start")]

With the release of MVC3 and ASP.NET Web Pages, we added another little gem: a RegisterModule() API that lets you dynamically register an IHttpModule without touching config. Sadly, the method is hidden so deep that it is hard to find by accident (it’ll get cleaned up in the next framework version).

By combining the two techniques, you have everything you need to register a module dynamically, e.g.

public class PreApplicationStartCode {
    public static void Start() {
        // Register our module
        Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(
            typeof(MyModule));
    }
}
I warned you it was well hidden! :)

Update: in recent versions of the framework, the method is much easier to find. Just call HttpApplication.RegisterModule.

The module type that you pass in to that is just a standard IHttpModule, e.g. here is a basic module that writes to the response on every request:

class MyModule : IHttpModule {
    public void Init(HttpApplication context) {
        context.BeginRequest += (sender, e) => {
            var response = ((HttpApplication)sender).Response;
            response.Write("Hello from MyModule!"); // sample output on every request
        };
    }

    public void Dispose() { }
}

The beauty of this is that it allows you to create fully encapsulated assemblies that you can just drop into a web app’s bin folder and have them light up without having to add any ugly registration to the app.

And yes, all this works fine in partial trust!

You can download a minimal sample from here.

comments edit

When you create a Console app in Visual Studio 2010, it gets created in a way that targets the “.NET Framework 4 Client Profile”. What that means is that it can’t use any ASP.NET components, which for ASP.NET developers is rather useless.

The standard way to fix this is to go to the Project’s properties and change the Target Framework:


You then get an alert telling you that your project needs to be reloaded:


And once you click yes, you can be on your way to other greatness.
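If you’d rather fix this by hand, the Target Framework dropdown just toggles one property in the .csproj (fragment below; the surrounding project elements are elided):

```xml
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <!-- The default Console template generates this; removing the element
       (which is what the dropdown does) targets the full framework -->
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
</PropertyGroup>
```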

While this works, it’s frankly painful when you have to do this many times a day. To make things worse, if you forget to do it, you often get strange failures which don’t make it obvious what the issue is, leading to frustration or worse.

I have no idea who was behind the decision to make the default be the client profile, but I’ll go on record saying that it was a dumb idea! :)

Fix this permanently using a custom project template

Luckily, it’s pretty easy to fix this by using a custom VS Project Template. @luhmann sent one to me, so I didn’t even have to write it :)

Here is what you need to do:

  • Go to this folder: %USERPROFILE%\Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\
  • Under that, create a ‘Windows’ folder if you don’t already have one (you probably don’t)
  • Download the custom template from here, and save it into that Windows folder (but don’t unzip it!).

Now when you need to create a C# Console app, you’ll see a new entry from the custom template:


If you use that, your console app won’t be using the evil Client Profile, which will lead to greater happiness.

Note that if you really wanted, you could replace the default template with that one, but I like seeing both entries side by side as a reminder of what’s going on. And who knows, some day I might just want to use the Client Profile!

comments edit

Note: this post is a bit outdated. Checkout this other post for more up to date information on this topic.

Dynamic Data works out of the box with Entity Framework, but it takes a small trick to get it working with the latest EF Code First bits (known as CTP5).

Here is quick walk through of what you need to do.

As a first step, create a new ASP.NET Dynamic Data Entities Web Application. Then, let’s use NuGet to add EF Code First to your project (I never miss a chance to pitch my new product!). We’ll use it with SQL Compact, and also bring in a sample to get started.

Right click on References and choose ‘Add Library Package Reference’ to bring in the NuGet dialog. Go to the Online tab and type ‘efc’ (for EFCodeFirst) in the search box. Then install the EFCodeFirst.SqlServerCompact and EFCodeFirst.Sample packages:


Now we need to register our context with Dynamic Data, which is the part that requires special handling. The reason it doesn’t work the ‘usual’ way is that when using Code First, your context extends DbContext instead of ObjectContext, and Dynamic Data doesn’t know about DbContext (as it didn’t exist at the time).

I will show you two different approaches. The first is simpler but doesn’t work quite as well. The second works better but requires using a new library.

Approach #1: dig the ObjectContext out of the DbContext

The workaround is quite simple. In your RegisterRoutes method in global.asax, just add the following code (you’ll need to import System.Data.Entity.Infrastructure and the namespace where your context lives):

public static void RegisterRoutes(RouteCollection routes) {
    DefaultModel.RegisterContext(() => {
        return ((IObjectContextAdapter)new BlogContext()).ObjectContext;
    }, new ContextConfiguration() { ScaffoldAllTables = true });

    // ... rest of the default route registration ...
}

So what this does differently is provide a lambda that digs the ObjectContext out of your DbContext, instead of just passing the context type directly.

And that’s it, your app is ready to run!


One small glitch you’ll notice is that you get this EdmMetadatas entry in the list. This is a table that EF creates in the database to keep track of schema versions, but since we told Dynamic Data to Scaffold All Tables, it shows up. You can get rid of it by turning off ScaffoldAllTables, and adding a [ScaffoldTable(true)] attribute to the entity classes that you do want to see in there.

Another issue is that this approach doesn’t work when you need to register multiple models, due to the way the default provider uses the ObjectContext type as a key. Since we don’t actually extend ObjectContext, all contexts end up claiming the same key.

Approach #2: use the DynamicData.EFCodeFirstProvider library

This approach is simple to use, but just requires getting a library with a custom provider. If you don’t already have NuGet, get it from here.

Then install the DynamicData.EFCodeFirstProvider package in your project:

PM> Install-Package DynamicData.EFCodeFirstProvider
'EFCodeFirst 0.8' already installed.
Successfully installed 'DynamicData.EFCodeFirstProvider'.
WebApplicationDDEFCodeFirst already has a reference to 'EFCodeFirst 0.8'.
Successfully added 'DynamicData.EFCodeFirstProvider' to WebApplicationDDEFCodeFirst.

After that, this is what you would write to register the context in your global.asax:

DefaultModel.RegisterContext(
    new EFCodeFirstDataModelProvider(() => new BlogContext()),
    new ContextConfiguration() { ScaffoldAllTables = true });

And that’s it! This approach allows registering multiple contexts, and also fixes the issue mentioned above where EdmMetadatas shows up in the table list.

comments edit

Yesterday, I blogged about how the NuGet command line tool can now be used to bring down packages without using VS.

Another cool new trick that it just gained is the ability to update itself. What that means is that after you get the tool on your machine (e.g. get the latest from here), keeping it up to date becomes super easy.

I’ll demonstrate how it works by example. First, let’s run nuget.exe with no params just to see what version we have:

D:\>nuget
NuGet Version: 1.1.2120.136
usage: NuGet <command> [args] [options]
Type 'NuGet help <command>' for help on a specific command.

We’re running 1.1.2120.136. Now let’s check for updates:

D:\>nuget update
Checking for updates from
Currently running NuGet.exe v1.1.2120.136.
Updating NuGet.exe to 1.1.2121.140.
Update successful.

And now let’s make sure we’re running the new one:

D:\>nuget
NuGet Version: 1.1.2121.140
usage: NuGet <command> [args] [options]
Type 'NuGet help <command>' for help on a specific command.

And just like that, we’re now running the newer build!

How is the update performed

Being a package manager, it’s pretty natural for NuGet to be able to do that, as NuGet.exe is itself a package in its own feed! The package is named NuGet.CommandLine.

To perform the in-place update, nuget.exe simply renames itself to nuget.exe.old, and downloads the new one as nuget.exe. The old file can then be deleted, or if for whatever reason you’re not happy with the newer build, you can simply delete it and rename nuget.exe.old back into nuget.exe.
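The rename dance is simple enough to sketch with plain file operations (the `Update` helper below is hypothetical; the real nuget.exe downloads the new build from the feed rather than taking a byte array):

```csharp
using System;
using System.IO;

class SelfUpdateSketch {
    static void Update(string exePath, byte[] newBuild) {
        string oldPath = exePath + ".old";
        if (File.Exists(oldPath)) File.Delete(oldPath);
        // On Windows, a locked, running executable can still be renamed,
        // which is what makes the in-place update possible
        File.Move(exePath, oldPath);
        File.WriteAllBytes(exePath, newBuild);
    }

    static void Main() {
        string dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(dir);
        string exe = Path.Combine(dir, "nuget.exe");

        File.WriteAllBytes(exe, new byte[] { 1 }); // stand-in for the old build
        Update(exe, new byte[] { 2 });             // stand-in for the new build

        Console.WriteLine(File.Exists(exe + ".old")); // old build kept around
        Console.WriteLine(File.ReadAllBytes(exe)[0]); // new build is in place
    }
}
```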

What about updates to the NuGet Visual Studio add-in?

Just a final note in case you’re wondering why update is done this way for nuget.exe, but not for the NuGet VS integration. Since the VS tooling is a standard extension, it gets an update story ‘for free’ via the VS Extension Manager. In VS, just go into Tools / Extension Manager and go to the Updates tab, which will tell you if there are updates available to any of the extensions you have installed.

comments edit

Most of the coverage around NuGet revolves around its clean integration with Visual Studio, which makes adding references to packages as easy as adding references to local assemblies. While this is indeed a key scenario, it is important to note that the core of NuGet is completely decoupled from Visual Studio, and was designed with that goal from day 1.

If we look back at the early days of NuGet, it was in many ways inspired by the ‘Nu’ project (whose members have since joined NuGet). What Nu had was a solid command line driven experience for bringing .NET bits down to your machine. In their case, it was based on Ruby Gems, but that is an implementation detail. Take a look at Rob Reynolds’s original screencast to see what the Nu experience was about.

While we have been planning all along to provide the same experience with NuGet (in addition to the VS experience of course), it’s something that had somewhat fallen off the radar, and it just had not been done. This was unfortunate, because we already had all the plumbing to make it happen, and all it needed was about 10 lines of code to expose this!

So I’m happy to say that we have now filled this little hole by implementing a new ‘install’ command in our NuGet.exe command line tool. Using it couldn’t be any easier, and I’ll walk you through an example.

Where do I get NuGet.exe?

You first need to get NuGet.exe. This is the same tool that package authors have been using to create packages and upload them to the gallery.

The easiest way to get it is to download it from CodePlex.

You can also obtain it via NuGet itself by installing the package named NuGet.CommandLine (using Visual Studio).

How do I run it?

The best way to demonstrate it is to just show a sample session.

D:\>md \Test

D:\>cd \Test

D:\Test>nuget list nhi
NHibernate.Linq 1.0
NHWebConsole 0.2
SolrNet.NHibernate 0.3.0

D:\Test>nuget install NHibernate
'Iesi.Collections (≥ 1.0.1)' not installed. Attempting to retrieve dependency from source...
'Antlr (≥' not installed. Attempting to retrieve dependency from source...
'Castle.Core (≥ 2.5.1)' not installed. Attempting to retrieve dependency from source...
Successfully installed 'Iesi.Collections 1.0.1'.
Successfully installed 'Antlr'.
Successfully installed 'Castle.Core 2.5.2'.
Successfully installed 'NHibernate'.

D:\Test>tree
Folder PATH listing
Volume serial number is 26FF-2C8A
D:.
├───Antlr.
│   └───lib
├───Castle.Core.2.5.2
│   └───lib
│       ├───NET35
│       ├───NET40ClientProfile
│       ├───SL3
│       └───SL4
├───Iesi.Collections.1.0.1
│   └───lib
└───NHibernate.
    └───lib

D:\Test>dir Antlr.\lib
Volume in drive D has no label.
Volume Serial Number is 26FF-2C8A

Directory of D:\Test\Antlr.\lib

01/20/2011  05:06 PM           117,760 Antlr3.Runtime.dll


Why would you want to use this instead of the Visual Studio integration?

For most users, the Visual Studio integration will be the right choice. But suppose you want to work much more ‘manually’, and not deal with VS or even with a .csproj file. e.g. all you want is to bring down nhibernate.dll so you can write some code against it, and compile it manually using ‘csc /r:nhibernate.dll MyCode.cs’.

In this scenario, you just want NuGet to download the assemblies for you, and leave the rest to you. It still saves you a lot of time by letting you easily download the bits and all their dependencies, but it doesn’t force you into a development model that may not be what you want.

So I don’t think it’s a feature that the majority of users will use, but it is important to have it for those who need it.