

I ordered a carbon monoxide detector from a 3rd party Amazon seller and they sent me one that's way past its expiration date. I tried to write a negative seller review, and found myself sucked into the Kafkaesque world of Amazon Support. In the end, I was never able to get my review posted, despite trying pretty much everything.

I don't know if this is particularly interesting to others, but in any case, this is the story...

The perfect seller

October 5th: My carbon monoxide detector had reached the end of its life, so I did what I always do: I went to Amazon and looked for a replacement. I quickly settled on this model which has good reviews.

Where it gets interesting is when I found I could save a tiny bit by going with a 3rd party seller instead of Amazon. The difference was under a dollar, but it was just as simple, so my cheapness sense took over, and I went with that.

Of course, I checked the seller ratings: 100% positive over the past 3 months, and 99% lifetime. Surely, I had found the world's most trustworthy seller: Great Lake Toys:

Seller ratings

Dreams shattered

It arrived quickly (Go Prime!), and I proceeded to install it. As I was writing the date on it (always a good habit), I noticed a disturbing detail: it had been manufactured on July 19, 2007!

Expired detector

So wait, this detector is rated good for 5 years, but was made over 7 years ago? For a device that's supposed to save my family in case of a CO leak, that's not exactly what I want to see. Time to give them a one-star review:

I ordered a Carbon Monoxide detector. When I opened it, I saw that the date of manufacture was July 2007, more than 7 years ago!! For a safety device that is supposed to last 5 years, that is a scary finding. I'm guessing that it's been sitting in a garage for 7 years, and that it will not be performing well. And I'm not seeing an obvious way to contact the seller to get a fresher replacement :(


October 13th: I get a response from Amazon that my review has been rejected, with the message:

Feedback submitted was a product review and not related to seller


October 19th: I write back to Amazon

My review of this product was removed with the comment "Feedback submitted was a product review and not related to seller".

This is not a correct resolution. I'm sure the product in general is fine. The problem is that the specific detector that the seller sent me was manufactured in July 2007, and is way too old to be used as a life safety device.

This is not a product review, but a case of the seller sending an item that is too old to be sold to customer. Hence it is absolutely a seller issue.

Also, there appears to be no way of contacting the seller.

I would appreciate if you could restore my review, and let me know what my options are.

October 20th: Amazon replies

I've forwarded the details you sent us to our investigations team. Each report they receive is investigated and the appropriate action is taken. However, we won't be able to release the outcome of the investigation.

Anytime you buy from a seller, we encourage you to rate the seller's performance through feedback. You have 90 days from the date of your order to leave a rating and comments on your experience. I'd suggest you to please resubmit your review.

Best regards, Yogesh P

That was the first promise to send my case to the 'investigation team'.

As for resubmitting my review, brilliant idea, except for one small detail: the button to submit a seller review is no longer there...

October 21st:

Hi Yogesh,

Thanks for your reply. I’m not finding a way to re-submit my feedback. I see the item under ‘order history’, but there is no ‘Leave Seller Feedback’ button.

Please advise how I can restore my feedback.

Thanks, David

I then 'pinged' them twice more as I was not hearing back. I can be persistent :)

October 26th:


I understand that you want to re submit the feedback with the seller.

I'm sorry; once feedback for a seller has been removed or posted, it can't be restored or re-submitted as new feedback for that seller.

Best regards, Swathi

Ok, now they're starting to piss me off.


I think there is a misunderstanding here. I did not remove the seller review. It was removed by Amazon, and it was done mistakenly as discussed. Can you please re-enable the ability for me to repost my review?

Thanks, David

Same day, I hear back:


I have forwarded this issue to our community review department, they receive is investigated and the appropriate action is taken. Your comments and suggestions will help us improve our store and offer better service to our customers.

They will write back to you after investigate the issue.

Best regards, Shalu S

That was the second promise to have the review department take a look. Still, I'm a nice guy, so I show some appreciation for her fine work:

Thank you Shalu, your help is appreciated.


And within minutes, I hear back:


Thanks for writing back with your kind words. I'm glad to hear that my colleague, Shalu, was able to assist you, and I'll be sure to forward your message.

We look forward to seeing you again soon.

Best regards, Inderjeet S

So now this guy is acting like all my issues have been resolved, and he's super happy that his colleague was able to help me. That's very nice.

Only problem is: I never hear back from them, nor from the now-infamous (and probably non-existent) 'review department'.

November 6th

For the fun of it, I decide to post my original seller review as a product review. After all, the reason it got rejected in the first place was that they claimed it was a product review, so let's play along!

Later that day, I get a reply:

Hello David,

I hope this email finds you well. This is Kath from the Communities team.

We encourage reviews on website, both positive and negative.

However, your recent review titled "Received expired carbon monoxide detector" for "First Alert CO600 Plug In Carbon Monoxide Alarm " doesn't comply with our customer reviews guidelines. Specifically, your review isn't a review of the product itself, but rather a review of the seller from which you ordered.

Best regards, Kath A.

You don't say!

Just give me a new one

November 7th

I finally get around to returning the item, and ordering a new one from real Amazon (i.e. not a 3rd party seller). The returning part was smooth enough, and the new detector was as fresh as it gets.

New detector

It was manufactured on September 24th 2014, less than two months earlier! Very nice, and it confirms how unacceptably stale the initial detector was.

Chat time

Now you might say that my problem is solved, as I'm in possession of a freshly minted detector. But part of me really wants to answer the question: is it at all possible to post a negative seller review?

So I decide to keep the fun going a bit. This time, I choose to go with a direct chat with customer service.

You are now connected to Ravi from

Me:I'm trying to write a Seller review for this order (107-3573704-4046606) but I'm not able to. Can you please help?

Ravi:Hello, my name is Ravi. I'll certainly try to help regarding your concern.

Me:I wrote a review back on 10/19, but it was incorrectly removed with the comment "Feedback submitted was a product review and not related to seller"

Ravi:Hi, David. I'm so sorry to hear this from you. Let me help you. Please provide the issue in detail.

Me:When I opened it, I saw that the date of manufacture was July 2007, more than 7 years ago. But the detector is only good for 5 years, so it is expired.

Ravi:I'm so sorry for the inconvenience. Are you referring to this item? First Alert CO600 Plug In Carbon Monoxide Alarm.

Me:Yes, that is correct. It's a life safety device, so I don't want to take chances with such an old item.

Ravi:Let me help you in this issue. I'll go ahead and request for the return label to return the item for full refund. I suggest you to please return the expired item back to us for the full refund. I can assure that this won't happen again.

Me:I've already initiated the return today. But my issue here is that I'd like to leave a Seller review and I'm not able to.

Ravi:This item is fulfilled by Amazon. So we'll will take care of this issue. We're the responsible for this inconvenience. In addition to our large selection, one of the benefits we try very hard to offer our customers is convenience. I'm very sorry for the inconvenience you experienced in this case.

Me:On the seller's page, I see reviews by other users who have bought from them, so it is possible to leave reviews for the specific seller. But right now I am not able to. The problem is that my review was incorrectly removed saying it was a 'product review', which it wasn't. It's a seller review. But now the site won't let me resubmit it

Ravi:I can understand your concern.

Me:This is the page with user reviews: link

Ravi:I'll go ahead and forward this feedback to the investigation team.

Me:They have lots of new reviews everyday by users. I just want to add mine. Note that I made that same request earlier and they also said they'd forward to investigation team. But I did not hear back. Is there a way you can check the status of previous investigation?

Ravi:Sure, David. I've forward this issue to our investigation team.

Me:Thanks Ravi. Will I hear back from them via email?

Ravi:The investigation team will take the appropriate steps and should hear within 1-2 business days.

Me:Ok, thanks Ravi for your help. I will be awaiting follow up from them.

Ravi:Sure. You're most welcome. Is there anything else I can do for you today?

Me:No, thank you, that was it.

Ravi:You're welcome! It's certainly my pleasure assisting you. Thanks for contacting Amazon. We hope to see you again soon. Have a good one! bye!

Ok, so that's now the third promise to have the investigation team look into it!

November 16th

I haven't heard anything back, so time for another chat with my Amazon friends!

You are now connected to Jeric from

Me:I had a chat with customer service (Ravi) relating this item on 11/7. He said an investigation was being launched and that they would get back to me within 1-2 business days, but I have not heard back. I'd like to check the status.

Jeric:Hello David , my name is Jeric. I'm sorry to know that you haven't yet received the response. No worries, I'll be glad to check this for you and see what I can do to help.

Me:Thank you Jeric

Jeric:It's my pleasure David. I will need to gather additional information may I place you on hold for a couple of minutes?

Me:Yes, no problem.

Jeric:Thanks David, please hold on. Please bear with me David. Thank you for holding David. Just to be sure you are referring to your feedback to the item you received expired, right David ?

Me:Yes, specifically the seller feedback that I have not been able to re-submit I did initially but it had been mistakenly removed

Jeric:Thanks for verifying David. I checked the issue and I am sorry that you haven't yet received the response. I'll go ahead and forward this issue to our team.

Me:Were you able to find a record of the previous request for investigation?

Jeric:Yes David but I am not able to find the response yet. I'll go ahead and resubmit your concern about the issue. I also suggest that you can write your review for the item.

Me:I cannot submit it right now, because the button to do this is gone. That's in fact the reason I had to contact service. If I was able to post the review, I would have just done it.

Jeric:Oh. I am so sorry to know that David.

Me:Please make it clear to the investigation team that this is now the 3rd time that a request is made to them. Each time I'm being told that I'll hear back soon, and it never happened.

Jeric:David can you please try this link: link

Me:This link is for a product review. I want to post a seller review. This is the seller link: link

Jeric:Thanks David. Can you please try if this link will work: link

Me:this seems different from a seller review. A seller review is one that would end up showing up on the seller page for others to see. While this one appears to be a way to contact Amazon.

Jeric:Thanks so much David. I will also re-forward your issue to our team fro the investigations.Customer feedback like yours really helps us continue to improve our store and provide better service to our customers.Customer feedback like yours really helps us continue to improve our store and provide better service to our customers.

Me:Thank you. When should I expect to hear back?

Jeric:The team will responds once the investigations is done. According to my experience it takes 1-2 business days David.

Me:Ok, I will wait for this. Thanks for your help.

Jeric:You're always welcome. It's my pleasure to help you. Did I cover all your concerns for today? If you have another issue I'll be more than happy to assist you with this.

Me:That was the only issue, thanks.

Jeric:It's my pleasure David. Thanks for visiting and have a great day! Bye for now :)

So now for the fourth time, my case is sent to the hard working investigation team. And of course, I never hear back. This is the point where I decide that it isn't much fun anymore, and I concede defeat against an insurmountable enemy.


If you've read this far, the first conclusion is that you have no life, and that you are thinking the same thing about me. That puts us on even ground.

The second conclusion is that Amazon Seller ratings are basically a scam. Positive reviews go in (if they're even real), while legitimate negative reviews get rejected. Naturally, that leads to questionable sellers getting extremely high ratings.

I've been wondering what incentive Amazon has to protect 3rd party seller ratings so much. My guess is that there is some shady money deal between them that leads to that. It does seem strange that a company like Amazon would do this, but it's not implausible.

But then again, as the saying goes "Never attribute to malice that which can be adequately explained by stupidity". So it's conceivable that it is a case of extreme incompetence. But that's a lot less fun, so I'll go with the conspiracy theory.

Time to go to sleep and breathe in some deadly odorless gas.


We just added some preliminary support for deploying ASP.NET vNext projects to Azure Websites using git.

You can easily try it yourself: Clone my test project. Then Create an Azure Website with git enabled. Then just push the repo to Azure.

And that's it, your ASP.NET vNext site is up and running in Azure!

Here are various additional notes:

  • the support is alpha-level at this point, just like everything relating to ASP.NET vNext
  • it only supports vNext solutions created by Visual Studio 2014 CTP
  • we've only tested with the alpha3 release of the K runtime
  • the first deployment takes a bit over a minute as it has to download a bunch of NuGet packages. Subsequent deployments are quite a bit faster
  • when running the deployed site, the cold start time is on the slow side. vNext is still very new, and there are lots of things left to tune and optimize!

Anyway, give it a try, and let us know what you think! Please report issues to


My preferred method of deploying to an Azure Web Site is using git, partially because this is a feature I've been involved with from the beginning (it's known as Kudu).

However, in some cases, I just need to deploy a bunch of files from my local machine with minimal fuss, and using git is overkill. For these scenarios WebDeploy (aka msdeploy) really shines.

The problem with WebDeploy is that using it from the command line can be rather challenging. After you download the publishing profile (from the dashboard), you have to extract a bunch of chunks from it. From those chunks, you can piece together this charming command line, which will deploy a local folder to your Azure Web Site:
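To give a sense of the pieces involved, here is a sketch (Python, purely for illustration; the profile values are hypothetical) that pulls the relevant chunks out of a .PublishSettings file and assembles the msdeploy arguments. Treat the exact command line as an approximation:

```python
import xml.etree.ElementTree as ET

def build_msdeploy_args(publish_settings_xml, source_folder):
    """Extract the Web Deploy chunks from a .PublishSettings file and
    piece together the msdeploy command line (sketch)."""
    root = ET.fromstring(publish_settings_xml)
    # A .PublishSettings file has one <publishProfile> per publish method;
    # the one we want here is MSDeploy.
    profile = next(p for p in root.iter("publishProfile")
                   if p.get("publishMethod") == "MSDeploy")
    url = profile.get("publishUrl")      # e.g. mysite.scm.azurewebsites.net:443
    site = profile.get("msdeploySite")   # the IIS site name
    user = profile.get("userName")
    pwd = profile.get("userPWD")
    return ('msdeploy.exe -verb:sync'
            f' -source:contentPath="{source_folder}"'
            f' -dest:contentPath="{site}"'
            f',ComputerName="https://{url}/msdeploy.axd?site={site}"'
            f',UserName="{user}",Password="{pwd}",AuthType="Basic"')

# Hypothetical profile, shaped like the one the portal hands out
sample = """<publishData>
  <publishProfile publishMethod="MSDeploy"
                  publishUrl="mysite.scm.azurewebsites.net:443"
                  msdeploySite="mysite"
                  userName="$mysite" userPWD="secret" />
</publishData>"""

print(build_msdeploy_args(sample, r"c:\FolderToDeploy"))
```

That's a lot of ceremony just to copy some files up, which is the point of the next section.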


WAWSDeploy to the rescue

To make things easier, I wrote a little tool which makes this as simple as it can be. You still need to download the Publish Profile, but then you simply run:

WAWSDeploy c:\FolderToDeploy MyAzureSite.PublishSettings

So basically, you tell it where your files are, and where they need to go.

As an alternative, you can have it deploy from the content of a zip file, e.g.

WAWSDeploy c:\ MyAzureSite.PublishSettings

To get the tool, you can either build it yourself from the sources, or get it from Chocolatey.

Random notes:

  • it's best used for simple sites that don't need any build steps (so not for ASP.NET MVC)
  • it's just a fun little tool I wrote on the side, and not a supported Microsoft thing
  • WebDeploy only works on Windows, so WAWSDeploy has the same limitation

Let me know if this is useful, and feel free to send a PR if you find issues or want to improve it.


Yesterday, I tweeted:

Every Time someone enables #nuget package restore on a solution, a kitten dies. Learn the new workflow!

It got a lot of RTs and Favs, but based on a number of comments, I think I may have caused some confusion, because it is in fact a confusing topic.

So first, let's get one thing out of the way: I am not suggesting that you commit your NuGet packages into your git repo! That is the worst thing you can do, and if you do that, you've moved on to killing unicorns.

The NuGet docs explain it all, but it's a little hard to read through, so I'll try to summarize the key points here. I'm keeping things concise, so please read that page for the complete story.

The old way to do package restore

In the old way, you right click on your solution in VS and choose Enable package restore. This causes VS to modify your csproj files, and create a .nuget folder containing nuget.exe and some other files.

After that, your packages get restored as part of msbuild when you build your project.

Don't do this! I hope the NuGet team will remove that option soon, and point people to...

The Right way to do package restore

What the NuGet team is now recommending is both a lot cleaner and a lot simpler. In short, you don't do anything special, and it just happens! This is because NuGet now always restores packages before building in VS. So no changes whatsoever are needed on your files, which is beautiful!

Note: when building from the command line, you need to run 'nuget restore' yourself before msbuild. You could argue that scenario became harder than before, but it's for the greater good.

Converting from the old way to the new way

The NuGet team has a document that takes you step by step. In an ideal world, it would be automated, but going forward, if people stop using the Old Way on new projects, the issue will disappear over time.

What if you have custom package sources?

All you need to do is create a NuGet.Config file next to your .sln file, containing:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="" value="" />
    <add key="aspnetwebstacknightlyrelease" value="" />
  </packageSources>
</configuration>

Note that if you have private package sources that you want to keep out of your repo, you can add them to %APPDATA%\NuGet\NuGet.Config instead (see this page for details).
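For example, a user-level NuGet.Config with a hypothetical private feed (the feed name and URL here are made up) follows the same shape:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Hypothetical private feed; lives in your profile, so it stays out of the repo -->
    <add key="MyPrivateFeed" value="https://example.com/nuget/" />
  </packageSources>
</configuration>
```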


Yesterday, I blogged about my reasons to move away from Blogger, and why I picked GitHub pages to replace it. Today, I'll discuss some of what I went through to port my old blog over.

I'm not going to go into great detail about parts that have been discussed everywhere, like using GitHub pages.

Porting the comments to Disqus

I thought this would be the hardest part, but it turned out to be quite easy. I already had a Disqus account that I needed earlier in order to comment on other sites. All I had to do was add my own site to it, and tell it to import all comments from Blogger. One little OAuth workflow later, all my comments were on Disqus!

Getting Jekyll installed

First, I had to get Jekyll running on my machine, which is quite a bit more painful on Windows than on Mac/Linux. I found a good post that guided me through all the steps, and there sure are quite a few of them!

Even though I have it working, it produces some weird errors/warnings that appear to be harmless:

D:\Code\Github\>jekyll build
Configuration file: D:/Code/Github/
            Source: D:/Code/Github/
       Destination: D:/Code/Github/
      Generating... C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/posix-spawn-0.3.8/lib/posix/spawn.rb:162: warning: cannot close fd before spawn
'which' is not recognized as an internal or external command,
operable program or batch file.

So, a cannot close fd before spawn warning, and a missing which (even though I have that on my PATH from git). Whatever, it still works, so I'm ignoring it.

Porting the posts

That's where things got nasty. The Jekyll import page makes it look really easy: install the jekyll-import gem and profit.

Sadly, I just couldn't get that damn gem to install, and after some investigation, I concluded that it's just busted on Windows (see this thread).

If I had any common sense, I would have switched to using my old MacBook Air, which probably would have worked. But I didn't have the Mac with me at that time, and I didn't want to wait. So I did the usual dumb thing that devs do: I wrote my own conversion tool from scratch!

To make things more interesting, I had decided ahead of time that I didn't want to keep my old posts as HTML (even though Jekyll supports that), and instead wanted everything as Markdown. Just because.

So here is the tool I wrote for the job.

Warning: it's all dirty, hacky, and comment free. I wrote it for this one-time purpose, it did the job, and now I'm done with it. I had fun doing it, too! If someone finds a need for it, be my guest, but expect roughness :)

High level, here is what it does:

  • reads the big XML file that I downloaded from blogger, which contains all the posts and comments (of course, I don't care about the comments at this point).
  • extracts all the post's metadata out of it: title, date, tags, ...
  • gets each post's content, and converts it from HTML to Markdown. I used the brilliant Html Agility Pack to parse the HTML (I'm not crazy enough to do this from scratch). And then I just went through the tags to convert them to Markdown.
  • writes out all the Jekyll/markdown files, preserving the original URLs
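To make the tag-walking idea concrete, here's a toy sketch of it in Python (the real tool was C# with the Html Agility Pack, and handled a few more tags than this):

```python
from html.parser import HTMLParser

# Toy HTML-to-Markdown converter in the spirit of the post's tool.
# It only knows about a handful of inline tags and list items.
class MarkdownConverter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.href = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("b", "strong"):
            self.out.append("**")
        elif tag in ("i", "em"):
            self.out.append("*")
        elif tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")
        elif tag == "li":
            self.out.append("\n- ")

    def handle_endtag(self, tag):
        if tag in ("b", "strong"):
            self.out.append("**")
        elif tag in ("i", "em"):
            self.out.append("*")
        elif tag == "a":
            self.out.append(f"]({self.href})")

    def handle_data(self, data):
        self.out.append(data)

def to_markdown(html):
    converter = MarkdownConverter()
    converter.feed(html)
    return "".join(converter.out)

print(to_markdown('Read <b>this</b> <a href="http://example.com">post</a>'))
# Read **this** [post](http://example.com)
```

Anything beyond these simple tags would need its own handler, which is exactly why arbitrary HTML produces vomit.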

It's only good enough to convert the pretty restricted HTML that my posts were using. I'm sure if you throw some arbitrary HTML at it, you'll get some quality vomit out of it.

Dealing with images

This is one part where I fell a bit short. The right thing to do is to bring all the images into the git repo so it's all self-contained.

But I got lazy, so I ended up continuing to point to their original blogger location, which I'm sure I will sorely regret in 2019 when they suddenly disappear.

Styling the blog

I have no talent for styling, and no CSS skills, so the goal was to find a fully ready theme. I started out with one theme, but then I decided I didn't like it, so I ended up just ripping off Phil Haack's blog, because I figured if it was good enough for him, it must be good enough for me (and he didn't mind).

If you look carefully, you might notice some hints of similarities :)


For the past few years, I've had my blog hosted on blogger, and for the most part, I hated it. While Windows Live Writer was helping make the authoring experience bearable, in the end there was no getting away from the fact that I hate HTML!

On the other hand, I love Markdown, so I knew I had to move to a system that let me just use that directly. But when I asked on Twitter, people threw all kind of interesting options at me, and I had to make a choice. In the end, I went with Jekyll/GitHub pages, so I'll use this post to discuss the thought process.

Here are some of the other options I looked at based on people's suggestions.


Ghost

Using Ghost was really tempting. It's new and shiny, it has a clean interface, and it has a very nice browser hosted Markdown editor. Also, it runs great on Azure Web Sites, which is what I work on.

But then I realized something else: I hate databases. And I love files :)

I just didn't want to deal with a system where my posts ended up somewhere in a database. So that was that.


Orchard

Orchard also has nice Markdown support, which looked like a potential option.

But for the same reason as Ghost, I didn't want to go down that route.


Sandra.Snow

Several folks suggested that I look at Sandra.Snow, which is a .NET based system inspired by Jekyll. Being a .NET guy, it was a tempting alternative to something based on Ruby/Python.

But this came with a big catch: if I used it with GitHub pages, I would need to locally generate the HTML, and then commit that to my repo. And the very thought of committing generated files to a repository makes me sad.

Another big one is that it would not have allowed me to tweak posts online and have them just go live.


Site44

Steve Marx suggested site44, which would let me publish my blog simply by adding files to a Dropbox folder. And that's certainly a cool way to publish files with no fuss.

But similarly to Sandra.Snow, I would have had to run Jekyll manually to create HTML files each time I want to publish, and I decided that wasn't for me.

GitHub pages with Jekyll solved most issues

While not perfect, using GitHub pages with Jekyll provides a workflow that best matched what I was looking for:

  1. No database: it's just a bunch of files. Yeah!
  2. No HTML: that's not completely true, as I did install Jekyll locally, and when I run it, I get local HTML files. But I think in most cases when I'll want to author a new post, I'll directly push my new Markdown file and let GitHub do the dirty work.
  3. Built-in history: it's a git repo. Enough said!
  4. Browser based editing: Github's editor is rather awful (e.g. compared to Ghost), but it's good enough to tweak existing posts. I hit save, and in under a minute, it's live on my blog. I can do this from my phone if I need to. This would not be possible with Sandra.Snow or Site44.
  5. Collaborative workflow: if someone finds a typo in my post, they can just send a pull request. And then I can accept it without leaving my browser. This is brilliant, and none of the other 4 solutions above provide this.

Well, it's too early to say that the end to end workflow is working great for me, but hopefully time will prove that it was a wise decision, as I'm not planning another move for a while!


Windows Azure Web Sites provide a nice selection of ways to deploy a site from source code. You can deploy from:

  • a local git repository
  • a TFS project
  • a git project on GitHub
  • a git or Mercurial project on Bitbucket
  • a git or Mercurial project on CodePlex
  • code in a Dropbox folder

One thing that all these approaches have in common is that you own the code. For example, in the GitHub case, you must be an admin on the project. The reason is that Azure needs to set up a hook in the project to enable continuous deployment, and only project owners can do that.

Deploying 'external' repositories you don't own

In some scenarios, it can be interesting to deploy a site based on sources that you don't own. For example, you might want to deploy your own instance of the NuGet gallery, but you have no intention to modify the source. You're happy with it as is, and you just want to deploy it.

To cater to this scenario, we added a new 'External repository' entry in the Azure portal:


Note: the 'External repository' entry is using the git icon, which is technically incorrect since it supports both git and Mercurial. We just didn't have time to come up with a better icon for it for this initial release! We'll probably change that later.

Once you pick that, the next page in the wizard is pretty simple: you just paste any http(s) git or Mercurial URL and you're good to go!


And as soon as you Ok the dialog, a deployment from that repository gets triggered.

What about updates?

One important point about this mode is that it doesn't support continuous deployment. This is because Azure cannot possibly register for change notifications on an arbitrary repo that you don't own.

Concretely, that means that your site will not be automatically deployed when the repo is updated. Instead, you need to tell it when you want to pick up changes, by clicking the Sync button in the Deployments page:


While this feature may not see the same kind of usage as the full continuous deployment workflows, it has its uses and nicely completes the overall source deployment story in Azure Web Sites.


This topic is not specific to git publishing, but it's particularly useful in that scenario.

In its first release, the Azure portal had a couple of limitations relating to database connections:

  • The database name had to match the connection string, which was quirky
  • You could not add external connection strings

The good news is that both limitations have now been removed, as you'll see here.

Using a SQL Azure database associated with the site

Let's say you'd like to publish the awesome NuGet Gallery to an Azure Web Site. When you create the site in Azure, choose the 'Create with database' option:


You will see a dialog that looks like this:


Let's assume that you don't have a database yet, and you'd like one to be created for you. All you have to do here is give Azure your connection string name (highlighted above).

So where does this 'NuGetGallery' string come from? It's simply the name of the connection string from the app's web.config:
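In other words, web.config carries an entry along these lines; the name is what Azure matches on, while the connectionString value shown here is just an illustrative local placeholder:

```xml
<connectionStrings>
  <!-- The *name* is what matters to Azure; the local value gets overridden at runtime -->
  <add name="NuGetGallery"
       connectionString="Data Source=(local);Initial Catalog=NuGetGallery;Integrated Security=SSPI"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```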


This way, you don't need to change your sources to point to the SQL Azure database. You instead rely on Azure to use the right connection string at runtime.

From there, you can complete the wizard by either creating a new DB server or using an existing one. Note that the database itself can be named anything you like (or keep the random name), since it is now decoupled from the connection string name.

At this point, you can just 'git push azure master' the NuGet Gallery sources, and your site is up and running with no further configuration!

Now if you go into the Configure tab for your site, you'll see your associated connection string:


Note that it's hidden by default, but you can choose to display it if you need it (e.g. if you want to connect via SQL management studio). You can even edit it if you want to tweak it!

Working with external connection strings

In the scenario above, we were using a database that Azure created for us along with the site. In some cases, you will instead need to work with an existing database, which may or may not be hosted on Azure.

In that scenario, you'd create your site without any databases. Instead, you can manually add the connection string in the Configure tab, e.g.


Note: don't forget to click the Save button at the bottom of the page when you're done!

Note that as before, we're naming the connection string after the one in web.config. The only difference is that the value now comes from you instead of coming from Azure.


Update (9/17/2012): as of today, Azure Web Sites have direct support for A record, so the hack below is no longer necessary!

Warning: the following is just something that worked for me, and that others asked me about. It is not a Microsoft approved technique, and it could cause your cat to self-combust with no warning. I'm just a guy trying random things here.

Windows Azure Web Sites (WAWS) support custom domain names, as documented on this page. This makes it easy to use a name like, by setting up a CNAME record in your DNS.

But a lot of people are asking how to make the name just, which is known as a naked domain (aka a bare or root domain). Normally, this is done by setting up an A record, but this requires a stable IP address, which is not currently available in WAWS.

But it turns out that you can use a CNAME for the naked domain, even though many people say it's a bad idea (more on this below).

I'm not sure if this works with all registrars, but I use NameCheap, and it works with them. Here is what it looks like in the NameCheap DNS records:


So I'm doing two things here:

Then I have the following in the Configure tab of my WAWS:


Though really, I only need the last entry since I'm redirecting www to the naked domain. I just left the www entry in there because it doesn't hurt. The first one could go too.
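In zone-file terms, the setup amounts to something like the following (a sketch with a placeholder host name; the actual records are configured in NameCheap's UI, not a raw zone file):

```text
; Hypothetical zone fragment: both the naked domain and www point
; at the Azure-assigned host name via CNAME records.
@      IN  CNAME  mysite.azurewebsites.net.
www    IN  CNAME  mysite.azurewebsites.net.
```

The `@` entry is the unusual part: a CNAME at the zone apex is exactly what the RFC purists object to.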

So what's wrong with doing this?

If you search around, you'll find a number of pages telling you that it's unsupported, and breaks RFC1034 (e.g. see this page). And I'm sure that the experts will crucify me and call me an idiot for blogging this, but heck, I can live with that!

Personally, I don't care so much about breaking an RFC, as much as I care about breaking my award winning web site, which brings me most of my income.

So what might break? From what I'm told, doing this breaks MX records, which matters if you're running an email server under your host name. So if I wanted to receive email at my domain, I probably couldn't. But I don't, so I don't care. It might also affect other types of records that I'm not using.

All I can say is that so far, I've yet to find something broken about it, and I've heard from several others that they've been using this successfully for a while (not with WAWS, but that shouldn't matter).

Anyway, I think you get my point: try at your own risk! And sorry about your cat.

comments edit

A few days ago, I discussed how the git support in Azure Web Sites was written as an Open Source project called Kudu. This time, I'll give a few more details on how to run it, both locally and on Azure.

To be clear, you don't have to care about any of this if you just want to use git deployment to Azure. Just use it as is and it should work great!

This is for folks who are interested in modifying the git deployment engine itself, either to contribute some changes to the project, or just to play around with things.

Running Kudu locally

First, you can see it all in action in this 3 minute screencast!

Here are the basic steps to run Kudu locally. Note that this requires using IIS7, and will not work with IIS Express.

  • Clone the Kudu repository from GitHub
  • In Visual Studio, open Kudu.sln. Important: VS needs to run as administrator!
  • Set Kudu.Web as the Startup project
  • Ctrl-F5 to run
  • You'll get an admin page that lets you create sites, and gives you the git URL for them
  • Try git pushing a test project, and see it run!

Important note: the primary purpose of running Kudu locally is to make it easier to develop outside of Azure. Conceivably, you can take this non-Azure Kudu and host it on a VM, to have your own mini deployment server. However, it's missing a few features that would make it really usable there. e.g. it doesn't set up host names, and doesn't set up authentication. We would love to add these features, and welcome contributions!

Running a private Kudu build on Azure

First, see it in action in this 5 minute screencast.

This is the more crazy one. Suppose you want to make changes to the Kudu service, and make it even more awesome. :) You can make these changes locally and test them outside of Azure, per the previous section.

But wouldn't it be great if you could actually use your latest Kudu bits in Azure itself? Turns out you can, using a special hook that we put in for that exact purpose.

Here are the basic steps:

And you're done! If you now do a git push to your Azure site, you are using your very own git engine instead of the one that comes with Azure. How cool is that? :)

Important notes:

  • Doing this only affects this one site. If you have multiple sites where you want to use your private Kudu bits, you'll need to set up each of them the same way.
  • It probably goes without saying, but once you are running your own git deployment engine, you're in unsupported territory, so don't call product support if something doesn't work! However, the Kudu team will always be happy to talk to you on JabbR, or our MSDN forum, or on github. :)

comments edit

Yesterday, ScottGu unveiled the new Azure, which brings a whole list of exciting changes to the platform.

One of the most exciting new features is the ability to deploy Web Sites to Azure using git. Scott's post covers that, and I also did a screencast on that topic.

One part that has not yet been discussed is that the engine that powers this feature was developed as an Open Source project from the first line. The project is code named Kudu and can be found on GitHub. Kudu is a member of the Outercurve Foundation, and is released under the Apache License 2.0 (the same as NuGet).

This project is actually not tied to Azure, and can run standalone on any machine. In that mode, you can push projects and have them run in your local IIS.

So why is this project interesting to you?

There are a few reasons that you may be interested in this project.

The first is that it's a good place to file bugs that you run into when you git push your project to Azure. You can also use our forum to discuss things.

The second reason is that the associated wiki contains lots of useful info about it. Well, at this time there isn't all that much there, but the idea is that we'll grow it as we go forward. And of course, wiki contributions are welcome!

And finally, you may be interested in contributing to the project, as we do accept contributions!

comments edit

I have just moved T4MVC to a new CodePlex project, instead of it being part of the MvcContrib project. It now has its own dedicated home on CodePlex.

If you're a T4MVC user, that should not make much difference except that there is now a new place to discuss it and file bugs. NuGet is still the place to go to get T4MVC!

Note that T4MVC is still part of the MvcContrib effort, even if it doesn't share the same source tree. Here are the reasons for the move.

Reduce confusion

T4MVC is quite separate from the rest of MvcContrib, because it's just a T4 template, and not some code that's part of an assembly. Having the T4MVC files be in their own little island in the middle of a repo with many unrelated things has been a bit of a barrier to entry for people wanting to make a quick contribution.

Also, since all MvcContrib bugs are filed in the same place, there was always additional pain for me to filter T4MVC issues from unrelated ones.

Likewise, we'll now have our own discussion forum that only focuses on T4MVC. Most users have been using StackOverflow for T4MVC support, and you can continue to do that if you prefer.

Switch to git!

I've been increasingly using git over Mercurial (like everyone else it seems!), to the point where having to use Mercurial is becoming an annoyance. Since CodePlex now supports git, it was the perfect opportunity to switch to that!

comments edit

Mono comes with a really cool CSharp compiler as a service. The only problem is that no one seems to know about it!

I think the main reason for this is that anything related to Mono causes a fair bit of confusion to all the people who are not familiar with it. And that certainly includes myself, as I know very little about it besides what I'm discussing in this post!

Talking to various people, the general misconceptions are:

  • Mono only runs on Linux
  • Even if it runs on Windows, it doesn't use the CLR, so I can't use it
  • Mono is for strange people :)

And while that may be true for some aspects of Mono, it certainly isn't for Mono.CSharp.dll. In fact, it's a totally 'normal' library that you can use in your very 'normal' C# projects in Visual Studio.

The next hurdle is that it's not all that easy to just get Mono.CSharp.dll. You have to either install an 80MB setup from here, or get a big .tar.gz file with lots of other things from here. And a lot of people on Windows don't like dealing with tar.gz files (hint: use 7zip).

Now the good news: after chatting with Miguel de Icaza on Twitter, I put Mono.CSharp.dll on NuGet, making it totally trivial to use from VS. There goes that hurdle. (note: I'm the package owner for now, until some Miguel-blessed dev claims it).

Try Mono.CSharp in under 5 minutes

Just open VS and create a Console app, and add a NuGet package reference to Mono.CSharp. That takes a whole 30 seconds. And I'll re-emphasize that there is nothing 'Mono' about this Console app. It's just plain vanilla.

Now write some basic code to use the compiler. It all revolves around the Evaluator class. Here is the sample code I used (GitHub). It's quick and dirty with poor error handling, as the focus is to just demonstrate the basic calls that make things work:

using System;
using System.IO;
using Mono.CSharp;

namespace MonoCompilerDemo
{
    public interface IFoo { string Bar(string s); }

    class Program
    {
        static void Main(string[] args)
        {
            var evaluator = new Evaluator(
                new CompilerSettings(),
                new Report(new ConsoleReportPrinter()));

            // Make it reference our own assembly so it can use IFoo
            evaluator.ReferenceAssembly(typeof(IFoo).Assembly);

            // Feed it some code
            evaluator.Run(@"
                public class Foo : MonoCompilerDemo.IFoo
                {
                    public string Bar(string s) { return s.ToUpper(); }
                }");

            for (; ; )
            {
                string line = Console.ReadLine();
                if (line == null) break;

                object result;
                bool result_set;
                evaluator.Evaluate(line, out result, out result_set);
                if (result_set) Console.WriteLine(result);
            }
        }
    }
}

It feeds the evaluator some starter code, then starts a REPL loop to evaluate expressions. e.g. run it and try this: you type the first two lines, and the third is the output:

MonoCompilerDemo.IFoo foo = new Foo();
foo.Bar("Hello Mono.CSharp");
HELLO MONO.CSHARP

You get the idea!

What about Roslyn?

I blogged a few months back about using Roslyn to implement an MVC Razor view engine. I'm far from a Roslyn expert, and frankly haven't done much with it since that post. From what I read, Roslyn has the potential to enable some very compelling scenarios in the future.

But there is one major argument right now in favor of using the Mono compiler: it's pretty much feature complete today, while Roslyn is not even close. Totally understandable given that it's a CTP, and is only meant to give an early taste of the feature.

So anyway, I still know close to nothing about Mono, but if I need to dynamically compile some pieces of C# in a 'normal' non-Mono project, I know that Mono.CSharp is not far away!

comments edit

Update 12/29/2011: the Roslyn CTP is now available on NuGet, so it's no longer necessary to install it before running this sample!

Note: the code for this view engine sample is on Github.

The C# team has just announced the public availability of the first Roslyn CTP. See their post here, and download it from here. I really hope they can make it available on NuGet soon, but right now it's not there, so you'll have to run their setup. Sorry!

As you've probably heard from various conferences earlier this year, Roslyn offers a compiler as a service for C# and VB. Since we do a lot of compilation in ASP.NET land, I figured I'd play around with trying to write an MVC view engine that uses it instead of the standard compilation path.

Word of warning: the Roslyn CTP is still very rough and is missing a lot of key features, like dynamic, anonymous types, indexers and using statements (get the full list here). So while I did get something working, the language limitations prevent it from being useful in any real scenario. This is just an exercise to see how far we can get. Lower your expectations! :)

Why would we want to do this

When you have a standard MVC project, compilation happens at two different levels:

  • Your Controllers, Models, and most of your C# code get compiled by msbuild (or Visual Studio) into a single assembly which ends up in the 'bin' folder
  • All the Views (whether .aspx or .cshtml) get compiled dynamically at runtime by ASP.NET.

One drawback of compiling views at runtime is that it's pretty slow. And since it's slow, ASP.NET tries really hard to save assemblies to disk so it can reuse them across AppDomain cycles. Those assemblies all go under the infamous 'Temporary ASP.NET Files' folder. There is a huge amount of complexity to make this work, with settings like batching which can either help or hurt depending on the situation.

One thing I've been working on to avoid this dynamic compilation is RazorGenerator, which lets you precompile your views into the same assembly as your controllers. This works quite nicely, but it does have one big drawback: you can't just update a view and have it get picked up at runtime. Instead, you need to rebuild using msbuild (or VS), just like you would when you change a controller file.

What would be nice is to be able to support dynamic compilation of the views, but with a much lighter system than what the standard ASP.NET Build Manager provides. Enter Roslyn!

Compile views using Roslyn: fast and lightweight

The main reason that the standard build manager is pretty slow is that it goes through CodeDom, which launches csc.exe for every compilation. csc.exe is actually very fast at compiling C# code, but the fact that we have to pay for the csc process startup time each time we compile anything ends up making things slow.

By contrast, Roslyn gives us an API to compile code in memory, without ever having to launch another process, making things much faster. In fact, it is so fast that the incentive that we had to preserve compiled assemblies in 'Temporary ASP.NET Files' mostly disappears.

Instead, we can take a much simpler approach: whenever we need to compile a view, we just compile it on the fly in memory using Roslyn, and then cache it for the lifetime of the AppDomain. But we never need to cache it to disk, and generally don't use the disk at all.
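The compile-and-cache idea can be sketched as follows (a hypothetical helper for illustration; the real view engine in the sample repo is more involved):

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

// Hypothetical cache: compile each view at most once per AppDomain,
// keyed by its path, and never touch the disk.
class ViewCompilationCache
{
    private readonly ConcurrentDictionary<string, Assembly> _cache =
        new ConcurrentDictionary<string, Assembly>();

    // 'compile' stands in for the Razor + Roslyn pipeline that turns
    // a view file into an in-memory assembly.
    public Assembly GetOrCompile(string viewPath, Func<string, Assembly> compile)
    {
        // Cached for the lifetime of the AppDomain; when the AppDomain
        // recycles, everything is simply recompiled on demand.
        return _cache.GetOrAdd(viewPath, compile);
    }
}
```

Because the compilation itself is cheap, losing the cache on an AppDomain recycle is no longer a big deal, which is what makes this simpler design viable.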

In preliminary tests, I have measured the perf of compiling pages using Roslyn to be more than 50 times faster than doing it via CodeDom. So it's looking quite promising!

So to summarize, the benefits of using Roslyn to implement a view engine are:

  • Fast dynamic compilation
  • No need to cache assemblies to disk, leading to a much simpler and lighter weight system.
  • New shiny thing! :)

More detail about the code

The code for my sample view engine is on GitHub, so I'll mostly let you check it out there. All the interesting code is in RoslynRazorViewEngine.cs.

Here are the main steps that it goes through to turn a Razor file into an Assembly:

  • First it uses the Razor Engine to generate a CodeCompileUnit from the Razor file.
  • It then uses CodeDom to turn the CodeCompileUnit into C# source code. Note that we only use CodeDom as a code generator here, and not to actually compile anything.
  • We then use Roslyn to compile the source code into a byte[]. That byte array is basically an in-memory copy of what would normally be a .dll file.
  • Finally, we call Assembly.Load to load that byte[] into a runtime Assembly.
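Using today's Microsoft.CodeAnalysis API rather than the 2011 CTP, the compile-to-memory step might look roughly like this (a sketch under that assumption, not the sample's actual code):

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class InMemoryCompiler
{
    public static Assembly Compile(string source)
    {
        var compilation = CSharpCompilation.Create(
            assemblyName: "DynamicView_" + Guid.NewGuid().ToString("N"),
            syntaxTrees: new[] { CSharpSyntaxTree.ParseText(source) },
            references: new[]
            {
                // Reference the core library via a well-known type.
                MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
            },
            options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var ms = new MemoryStream())
        {
            // Emit the assembly to memory; no .dll ever hits the disk.
            var result = compilation.Emit(ms);
            if (!result.Success)
                throw new InvalidOperationException("Compilation failed");

            return Assembly.Load(ms.ToArray());
        }
    }
}
```

The shape of the API has changed since the CTP, but the key property is the same: source in, byte[] out, no external process.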

How restrictive are the limitations in the Roslyn CTP?

As I mentioned above, there are lots of limitations, which make this little more than a proof of concept.

To begin with, it doesn't support dynamic, which MVC uses pretty heavily. By default, MVC views extend WebViewPage<dynamic>, so I had to add '@model object' at the top of my test view to get around that.

Then there is ViewBag, which is also dynamic, and allows writing things like '@ViewBag.Message'. I tried replacing that by '@ViewData["Message"]', only to find out that indexers were not supported either. Duh!

And then it doesn't support anonymous objects, which MVC uses quite a bit...

So don't even think of trying to use this for anything real at this time. Still, the approach feels pretty sound, and whenever Roslyn becomes more feature complete, I have good hope that it can help us improve the ASP.NET compilation system.

comments edit

Note (12/22/2011): in NuGet 1.6 or later, this feature is built in, so you no longer need to use the NuGetPowerTools. Just right click on the Solution and choose 'Enable NuGet Package Restore'.

A few months ago, I described a workflow that lets you use NuGet without committing the packages to source control. This has been a very popular workflow, and generally works quite well.

The down side is that it's a bit painful to set up: you have to get NuGet.exe and add it to your tree, then you have to add a pre-build event for every project that uses NuGet.

The good news is that the ever-resourceful David Fowler has come up with a much easier way to set that up, using his NuGetPowerTools package. Here is the way to do it:

Let's assume that you have a solution that is either already using NuGet, or planning to use it, and that you want to set up the no-commit workflow.

Now, you just need to go to the Package Manager Console and run a couple commands:

PM> Install-Package NuGetPowerTools
Successfully installed 'NuGetPowerTools 0.28'.

- To enable building a package from a project use the Enable-PackageBuild command
- To enable restoring packages on build use the Enable-PackageRestore command.
- When using one of the above commands, a .nuget folder will be added to your
solution root. Make sure you check it in!
- For more information, see

PM> Enable-PackageRestore
Attempting to resolve dependency 'NuGet.CommandLine (≥ 1.4)'.
Successfully installed 'NuGet.CommandLine 1.4.20615.182'.
Successfully installed 'NuGet.Build 0.16'.

Copying nuget.exe and msbuild scripts to D:\Code\StarterApps\Mvc3Application\.nuget
Successfully uninstalled 'NuGet.Build 0.16'.
Successfully uninstalled 'NuGet.CommandLine 1.4.20615.182'.

Don't forget to commit the .nuget folder
Updated 'Mvc3Application' to use 'NuGet.targets'
Enabled package restore for Mvc3Application

And you're done! So basically, the first command installs a NuGet package which brings in some helpful commands, and the second one runs one of those commands.

After doing this, you'll notice a new .nuget folder under your solution, containing nuget.exe plus a couple msbuild target files. Make sure you commit that folder to source control! You'll also find a few changes in your csproj files to trigger the restore functionality when you build.
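Under the covers, Enable-PackageRestore wires each project into the new targets file; the csproj change looks roughly like this (a sketch; the exact line varies by version):

```xml
<!-- Added near the bottom of the .csproj, after the existing
     Microsoft.CSharp.targets import -->
<Import Project="$(SolutionDir)\.nuget\NuGet.targets" />
```

That import is what makes msbuild run nuget.exe to restore missing packages before compiling.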

I have now become a strong supporter of the don't commit packages workflow, and if you're going to use it, this is the way to do it!

comments edit

Update: my terminology in this post is not quite correct. Whenever I refer to the server part of OData, I really mean to say 'WCF Data Services'. OData is the protocol, and WCF Data Services is the specific implementation. So the 'quirk' we ran into is a WCF Data Services thing and not an OData thing.

As you may know, NuGet uses an OData feed for its packages. Whenever you install packages, or search for packages from Visual Studio, it goes through this feed, which you can find at

If you're a NuGet user, you may also have noticed that the perf of NuGet searches from Visual Studio had been quite bad in recent months. You'd go to the NuGet package dialog and type a search string, and it would take 10 or more seconds to give you results. Ouch! :(

It turns out that the perf issue was due to a nasty OData quirk that we've since worked around, and I thought it might be interesting to share this with others. I'm partly doing this as you might run into this yourself if you use OData, partly to poke a little fun at OData, and also to poke a little fun at ourselves, since we really should have caught that from day one.

A whole stack of query abstractions

When you make an OData query from a .NET client, you go through a whole bunch of abstraction layers before a SQL query is made. Let's say for example that you're looking for packages that have the string 'T4MVC' in their description. It would roughly go though these stages:

First, in your .NET client, the OData client library would let you write something like:

var packages = context.Packages.Where(p => p.Description.Contains("T4MVC"));

Second, this code gets translated by the OData client LINQ provider into a URL with a query string that looks like this:


Third, this is processed by the OData server, which turns it back into a LINQ expression, which in theory will look similar to what you had on the client, which was:

var packages = context.Packages.Where(
    p => p.Description.Contains("T4MVC"));

Of course, the 'context' here is a very different beast from what it was in step 1, but from a LINQ expression tree point of view, there shouldn't be much difference.

And finally, the Entity Framework LINQ provider turns this into a SQL query, with a WHERE clause that looks something like:

WHERE Description LIKE N'%T4MVC%'

And then it executes nice and fast (assuming a proper index), and all is well.

When the abstractions break down

Unfortunately, that clean sequence was not going as planned, resulting in much less efficient queries, which started to get really slow as our package count started to get large (and we're already at over 7000 as of writing this post!).

So which of these steps went wrong? For us, it turned out to be the third one, where the OData server code was creating a very complex LINQ expression.

To understand why, let's first briefly discuss OData providers. When you write an OData DataService, you actually have the choice between three types of providers:

  • An Entity Framework provider which works directly over an EF ObjectContext

  • A reflection provider which works on an arbitrary context that exposes entity sets that are not tied to a specific database technology

  • A custom provider, which is something so hard to write that almost no one has ever done it (maybe a slight exaggeration, but not by much!)

Given that we're using EF, #1 seems like the obvious choice. Unfortunately, the EF provider is very inflexible, as it doesn't let you use any calculated properties on your entities. In other words, it only works if the only thing you want on your OData feed are fields that come straight from the database. So for most non-trivial apps, it's not a very usable option, and it wasn't for us (we have some calculated fields like ReportAbuseUrl).

So we ended up using the reflection provider, and wrapping the EF objects with our own objects which exposed whatever we wanted.

Functionally, this worked great, but what we didn't realize is that the use of the reflection provider causes OData to switch to a different LINQ expression tree generator which does 'crazy' things. Specifically, it makes the bad assumption that when you use the reflection provider, you must be using LINQ to Objects.

So it protects you by using some 'null propagation' logic which makes sure that when you write p.Description.Contains("T4MVC"), it won't blow up if the Description is ever null. It does this by inserting some conditional checks in the LINQ expression. This is very useful if you are in fact using LINQ to Objects, but it's a perf disaster if you are using LINQ to EF!

Now, when translated into SQL, what should have been the simple WHERE clause above was in fact becoming something like this:

WHERE  1 = ( CASE 
               WHEN ( Description LIKE N'%T4MVC%' ) THEN 
               CAST(1 AS BIT) 
               WHEN ( NOT ( Description LIKE N'%T4MVC%' ) ) THEN 
               CAST(0 AS BIT) 
             END ) 

which was running significantly slower. Note that in reality, we're querying for multiple fields at once, so the final SQL statement ended up being much scarier than this. I'm just using this simple case for illustration. And to make things worse, we learned that there was no way of turning off this behavior. What to do?

The solution: use some LINQ ninja skills to restore order

LINQ ninja David Fowler found this an irresistible challenge, and came up with a fix that is both crazy and brilliant: he wrote a custom LINQ provider that analyses the expression tree generated by the OData LINQ provider, searches for the unwanted conditional null-check pattern, and eliminates it before the expression gets handed out to the EF LINQ provider.
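The core idea can be sketched with a custom ExpressionVisitor (a much-simplified illustration of the technique, not the actual ODataNullPropagationVisitor code):

```csharp
using System.Linq.Expressions;

// Simplified illustration: rewrite "cond ? true : false" back into
// plain "cond", undoing the null-propagation wrapper before EF
// translates the tree into SQL.
class NullPropagationPruner : ExpressionVisitor
{
    protected override Expression VisitConditional(ConditionalExpression node)
    {
        var ifTrue = node.IfTrue as ConstantExpression;
        var ifFalse = node.IfFalse as ConstantExpression;

        if (ifTrue != null && true.Equals(ifTrue.Value) &&
            ifFalse != null && false.Equals(ifFalse.Value))
        {
            // The conditional adds nothing: its test is the real predicate.
            return Visit(node.Test);
        }
        return base.VisitConditional(node);
    }
}
```

With the conditional gone, EF is back to generating the simple `WHERE Description LIKE N'%T4MVC%'` clause.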

If you want to see the details of his fix, it's all on github, split into two projects:

QueryInterceptor is a helper library that makes it easier to write this type of query modification code.

ODataNullPropagationVisitor builds on QueryInterceptor and specifically targets the removal of the unwanted null check.

Naturally, these are available via NuGet (with the second depending on the first). After importing those packages, all that's left to do is add one small call to your IQueryable, e.g.

query = query.WithoutNullPropagation();

and your expression trees will be given a gardener's special pruning :)

Lesson learned: always check your SQL queries

Some might conclude that all those query abstractions are just too dangerous, and we should just be writing raw SQL instead, where this never would have happened. But I think that would be way too drastic, and I certainly wouldn't stop using abstractions because of this issue.

However, the wisdom we learned is that no matter what query abstractions you're using (LINQ, OData, or other), you should always run a SQL query analyzer on your app to see what SQL statements get run in the end. If you see any queries that don't completely make sense based on what your app is doing, get to the bottom of it and address it!

Of course, this is really 'obvious' advice, and the fact that we never did that is certainly a bit embarrassing. Part of the problem is that our tiny NuGet team is mostly focused on the NuGet client, and that the server hasn't been getting enough love. But yes, these are just bad excuses, and in the end, we messed that one up. But now it's fixed :)

comments edit

Like many others, I have been playing around with Google+ to see what the new kid on the block has to offer. And it does have some good things going for it, with the concepts of Circles providing a pretty nice approach to privacy.

Unfortunately, it suffers from the same flaw that Facebook and Twitter have always had: it makes the naïve assumption that when you follow someone, you want to hear everything they have to say. In other words, it treats us as one-dimensional beings, which doesn't match the 'real world'.

This is something I have always found particularly painful on Twitter, both as a tweet consumer and a tweet producer.

As a consumer, I end up not following a bunch of interesting .NET folks because they're too 'noisy', meaning they tweet about a lot of unrelated things that I don't care about. I've tried to follow Scott Hanselman's philosophy and let the river of crap wash over me, but it just didn't work for me. I guess I couldn't take the smell.

As a producer, I end up not tweeting many things I would want to say, because I know that a lot of my 2500 followers only care about the .NET side, and I don't want to add crap to their rivers. For instance, I follow tennis closely, but I'm not going to tweet super insightful things like “OMG, Federer lost!!”, because I know most followers don't care.

So to summarize, I'm missing out as a consumer, and repressed as a producer. Sad! :(

Aren't Twitter hashtags the way to follow topics instead of users?

Twitter hashtags are an ugly hack over a weak platform, and don't do much to solve this.

First of all, as a producer, it makes no difference to my followers, since they will see my tweets no matter what hashtags they contain.

As a consumer, hashtags fail pretty badly for a number of reasons. First of all, many people don't use them correctly. They get misspelled, forgotten, and often conflict with unrelated things. But more importantly, they assume that you want to hear about that topic from everybody, while in many cases I only want to hear what a selected set of users are saying about that topic.

If I could set a search criteria for each user that I follow, I might be getting somewhere, but that's just not an option today. And even that would work poorly given the inconsistent use of hashtags.

But don't Google+ Circles solve this issue?

No, not one bit! Circles are about privacy and nothing else. The issue I'm discussing here has nothing to do with privacy; it's about filtering of public information.

I see people saying that Google+ successfully merges what Facebook and Twitter are good at: connecting with friends and having a public voice. They are wrong! Let's put that to the test…

Let's say I convince all my family to get on Google+ (a tough challenge, but bear with me). I add them to my 'family' circle and they do the same thing. We can share family things with great privacy; that's nice, and is where circles shine.

But now let's say I'm also using Google+ the way I use twitter today, and write a whole bunch of things about .NET.

What happens when my family members click on their 'family' circle? They're inundated with all that .NET stuff from me that they couldn't care less about! Their first reaction is that they want to go back to Facebook, where they don't see that 'work' stuff.

Now let's look at a second scenario: I want to publicly share various things about both .NET and tennis. The key word here is publicly. I don't want to have to add everyone who can read my tennis and .NET comments to two circles, since I want it to be wide open. Circles are just not meant to solve this.

The answer: Channels

One simple way to solve this is to add a concept called 'channels'. Here is how it would work:

First everyone can (optionally) define a list of channels. In my case, I might create channels called 'tech', 'tennis', and 'personal'. For each channel, you can write a one line 'advertisement' of what you generally discuss there. e.g. my tech channel might say 'stuff I work on, mostly related to .NET and NuGet'.

Then whenever you share something, you can choose whether it should go to everyone or just some channel. Note that when I say 'everyone' here, I really mean 'everyone that is allowed to see it'. Again, channels are not a privacy concept; they are orthogonal.

Finally, when you follow someone (i.e. add them to a circle), you get to choose whether you want the whole person, or only some of the channels. e.g. my mom would pick my 'personal' channel, while some .NET folks may choose 'tech', and others might leave it unfiltered and get it all (which would be the default, as it is today).

As an additional option, you could attach a channel to each circle. e.g. my 'family' circle would use the 'personal' channel, so I don't have to think about it when I share from there. Note that this setting only applies to what I share. For each family member that I follow, I can still select what I want from their channels (which are likely not named the same as mine).

This may seem a bit complicated, but I don't think it would be in practice, because:

  • Users coming from Facebook who only use it to connect to friends would not define any channels.
  • When you start following someone, you'd typically follow the whole person, as you do today. Then if you start getting too much noise from them, an easy-to-find option would allow you to tune it down. e.g. the context menu on my 'tennis' comment would offer “Don't show any more 'tennis' comments from this user”. Conceptually, this is similar to Facebook offering you to ignore Farmville entries from some users, and that's an easy concept to understand.

So it would not make the platform any less approachable to newbies, but the extra power would be readily available when needed.

Good old blogs have had that forever

Interestingly, if you view 'things that you share' as 'blog posts', and 'following someone' as 'subscribing to their RSS feed', you find that the channel feature I describe here is almost identical to the concept of tags/labels in a blog.

e.g. you subscribe to my main feed to get all my posts, or to a tag-specific feed to only get my posts about NuGet.

So the basic concept is far from new, but for some reason the big social networks have not caught on to it.

Will this feature ever be available?

Well, that's good question! My hope is that enough people want it that the big social networks will eventually want to implement something like it.

If I had to choose, I'd prefer Google+ to be the one offering this, since I think it has a model which lends itself to it best.

And if all else fails, I'll just have to start a new social network. Or not! :)

comments edit

Click here to find all the posts relating to the Razor Generator

A few days ago, I blogged about how you can use Razor Generator to precompile your MVC Razor views. In this post, I will demonstrate how you can then unit test your precompiled views. Note that this is still very much experimental, so at this point the primary goal is to get feedback on the concept.

Simple walkthrough to unit test views

After installing RazorGenerator, create an MVC 3 Razor app, using the 'Internet Application' template and including the unit test project.

In the previous post, we used precompiled views in a different library, so this time let's keep them in the MVC project to show something different. First, use NuGet to install the RazorGenerator.Mvc package in your MVC project. Then, as in the previous post, set the custom tool on Views\Home\Index.cshtml to 'RazorGenerator', causing Index.cs to be generated under it.

But now, let's do something new and use NuGet again to add the RazorGenerator.Testing package to the unit test project (not to the MVC app!).

And that's all it takes to get set up! Now we can write a unit test for our precompiled Index.cshtml view. e.g. create a Views\HomeViewsTest.cs (in the unit test project):

using HtmlAgilityPack;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MvcApplication2.Views.Home;
using RazorGenerator.Testing;

namespace MvcApplication1.Tests.Views {
    [TestClass]
    public class HomeViewsTest {
        [TestMethod]
        public void Index() {
            // Instantiate the view directly. This is made possible by
            // the fact that we precompiled it
            var view = new Index();

            // Set up the data that needs to be accessed by the view
            view.ViewBag.Message = "Testing";

            // Render it in an HtmlAgilityPack HtmlDocument. Note that
            // you can pass a 'model' object here if your view needs one.
            // Generally, what you do here is similar to how a controller
            // action sets up data for its view.
            HtmlDocument doc = view.RenderAsHtml();

            // Use the HtmlAgilityPack object model to verify the view.
            // Here, we simply check that the first <h2> tag contains
            // what we put in view.ViewBag.Message
            HtmlNode node = doc.DocumentNode.Element("h2");
            Assert.AreEqual("Testing", node.InnerHtml.Trim());
        }
    }
}

A few notes about unit testing views

Unit testing views in ASP.NET MVC is something that was very tricky to do before, due to the fact that the views are normally compiled at runtime. But the use of the Razor Generator makes it possible to directly instantiate view classes and unit test them.

Now the big question is whether unit testing views is desirable at all. Some people have expressed concerns that it would be a bit fragile due to the changing nature of the HTML output. My take here is that while it would be a bad idea to try to compare the entire HTML output, the test can be made pretty solid by selectively comparing some interesting fragments, as in the sample above.

That being said, I have not tried this in a real app, so there is still much to learn about how this will all play out. This is just a first step!
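For a strongly-typed view, the pattern is the same except that you hand the model to RenderAsHtml (as noted in the comments of the sample above). A hedged sketch, where the 'Details' view class and the 'Product' model are invented names for illustration:

```csharp
using HtmlAgilityPack;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using RazorGenerator.Testing;

[TestClass]
public class ProductViewsTest {
    [TestMethod]
    public void Details() {
        // 'Details' is a hypothetical precompiled Views\Product\Details.cshtml
        var view = new Details();

        // Build the model the same way a controller action would
        var model = new Product { Name = "Widget" };

        // Pass the model to RenderAsHtml
        HtmlDocument doc = view.RenderAsHtml(model);

        // Verify a selected fragment rather than the whole output
        HtmlNode node = doc.DocumentNode.Element("h2");
        Assert.AreEqual("Widget", node.InnerHtml.Trim());
    }
}
```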

What about partial views?

When designing this view testing framework, we took the approach that we wanted to focus on the output of just one view at a time. Hence, if a view calls @Html.Partial(…) to render a sub-view, we don't let the sub-view render itself, and instead just render a token to mark where the sub-view would be. This seemed more true to the nature of what a unit test should be, compared to letting the whole composite page render itself, which would be more of a functional test (plus there were some tough challenges to making it work).

Where do we go from here?

Well, it'll be interesting to hear what people think about the general idea. We're interested in two types of feedback. First, what do you think about the overall concept of unit testing views using this approach? Second, please report any bugs that you run into. At this point, I expect it to be a bit buggy and probably blow up on some complex views. Treat it as a proof of concept! :)

comments edit

Click here to find all the posts relating to the Razor Generator

A while back, I blogged about a single file generator I wrote that can precompile Razor helpers. A bit later, Chris van de Steeg started from that code base and extended it to support precompiling MVC views (see his post).

On my side, this project stayed dormant for a little while, but recently we started extending it to support a number of interesting new scenarios, including precompiling MVC views (albeit with a slightly different approach from Chris's).

Most of the new code was written by Pranav Krishnamoorthy, a dev on the ASP.NET team. Many thanks to him for moving this project forward.

Installing the generator

It's on the VS extension gallery, so install it from there. It's called “Razor Generator” (not to be confused with “Razor Single File Generator for MVC”, which is Chris's).


Walkthrough to precompile MVC views

You can use it to precompile MVC views either in a separate library or in the MVC project itself. I'll demonstrate the separate library case, as it's a bit more interesting.

To begin with, create a new MVC 3 app using Razor (and the 'Internet Application' template). Then add a new class library project to the solution (e.g. call it MyPrecompiledViews), and add a reference to it from the MVC project.

Update (4/26/2012): the best approach is actually to create an MVC project for that library, instead of a library project. You'll never actually run it as an MVC app, but the fact that it comes with the right set of config files allows intellisense and other things to work a lot better than in a library project. See the project page for the latest info.

Now the fun part begins: using NuGet, install the RazorGenerator.Mvc package into your class library. This adds a number of things to the project:

  • A reference to RazorGenerator.Mvc.dll, which contains the view engine
  • Logic to register the view engine using WebActivator (in App_Start\PrecompiledMvcViewEngineStart.cs).
  • Two web.config files that are there to make intellisense work while you author your views (they're not used at runtime)
  • A sample view, which you can later remove

Let's take a closer look at that sample view:



Notice that it has a Custom Tool set to RazorGenerator, which causes it to generate a .cs file underneath itself (thanks to the generator we installed earlier).

This is just a sample, so now let's move the Views\Home\Index.cshtml from the MVC project to the same folder in the class library (you can press Shift during the drag/drop to make it a move). Then set the generator to RazorGenerator as in test.cshtml. You'll now get an Index.cs nested under Index.cshtml.

And that's it, you're done! You can now run your app, and it will be using the precompiled version of Home\Index.cshtml.
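If you're curious what the nested Index.cs contains: it is essentially the view compiled into a class that the view engine can instantiate directly. Below is a simplified, hand-written approximation; the real generated code differs in its details (helpers, pragmas, base class plumbing), so treat this only as a mental model:

```csharp
// Hand-written approximation of what the RazorGenerator custom tool
// emits for Views\Home\Index.cshtml; not the actual generated output.
using System.Web.Mvc;
using System.Web.WebPages;

namespace MyPrecompiledViews.Views.Home
{
    // Tells the view engine which virtual path this class stands in for
    [PageVirtualPath("~/Views/Home/Index.cshtml")]
    public partial class Index : WebViewPage<dynamic>
    {
        public override void Execute()
        {
            // Each chunk of Razor markup becomes Write/WriteLiteral calls
            WriteLiteral("<h2>");
            Write(ViewBag.Message);
            WriteLiteral("</h2>\n");
        }
    }
}
```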

Why would you want to do that?

One reason to do this is to avoid any runtime hit when your site starts, since there is nothing left to compile at runtime. This can be significant in sites with many views.

Also, you no longer need to deploy the cshtml files at all, resulting in a smaller deployment file set.

Another cool benefit is that it gives you the ability to unit test your views, which has always been something very difficult with the standard runtime compilation model. I'll cover that in more details in a future post.

Generating files at design time vs. build time

The way the generation works is very similar to T4 templates in your project. The generation happens as soon as you save the file. You can also force it to regenerate by right clicking on the .cshtml file and choosing Run Custom Tool.

Generally, the guidance is to commit those generated files along with the cshtml file, the same way that you commit all your 'hand-written' source files. If you do that, everything will run just fine in an automated build environment.

Another reason to commit the generated files is that it allows you to write code against them with full VS intellisense. e.g. if you use this technique to write Razor helpers that you want to call from other views, you really want VS to know about the generated file at design time. Ditto if you want to write unit tests against your views.

That being said, if you really want to postpone the generation until build time, we're working on an MsBuild task that will do that. For now, you can find it by getting the RazorGenerator sources on CodePlex.

If you want to help or report issues

This project is hosted on CodePlex under the Apache License 2.0, so feel free to contribute! You can also use CodePlex to discuss and report issues.

comments edit

One thing we hear occasionally from users is that it would be nice to be able to install NuGet packages from the command line (e.g. this issue, that one, and this thread). There are good reasons why this isn't supported today, which I will discuss in this post.

What does installing a package even mean?

Before we go further, we need to be clear about exactly what we mean by 'installing a package'. The reason this needs to be clarified is that there are really two definitions, which can cause confusion.

1. Getting the bits onto the machine: 'installing a NuGet package' is sometimes used to mean the act of getting the contents of the package onto your hard drive.
2. Making a project use a package: more commonly, it refers to not only downloading the package bits, but also 'applying' them to a project.

#1 is something that is fully supported today outside of Visual Studio using nuget.exe (see my previous post). NuGet also supports restoring packages outside of VS so you don't have to commit them.
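For reference, here is what that first definition looks like from the command line today. A sketch using nuget.exe with the 'Foo' placeholder package name from this post (exact flags may vary by NuGet version):

```shell
# Download the Foo package (and its dependencies) into a local
# 'packages' folder, without touching any project file.
nuget install Foo -OutputDirectory packages

# Restore all packages listed in a project's packages.config,
# so the packages folder doesn't need to be committed.
nuget install packages.config -OutputDirectory packages
```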

But for the sake of this post, I am strictly referring to #2, and that's what I mean any time I use the term 'installing a package' below.

Now that we have this out of the way, let's discuss why installing a package outside Visual Studio is non-trivial, as well as why it is in most cases not useful at all, although a case can be made for updating packages.

What makes installing a package outside Visual Studio non-trivial

Installing a NuGet package into a project (e.g. a csproj file) is a rich operation which does a lot more than just copying files. Here is a rough list of what NuGet can do when you install a package from VS (whether using the Package Manager Console or the NuGet Dialog):

1. Add references to assemblies contained in the package
2. Add references to framework assemblies in the GAC
3. Add content files to an app (e.g. JavaScript files, code files, …)
4. Add assembly binding redirects to deal with version incompatibilities
5. Perform config transformations, typically to add settings related to the package
6. Bring in tools that can then be run from the Package Manager Console
7. Run PowerShell scripts which can do arbitrary things by automating the DTE object model
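As an example of the config transformation step: a package can ship a web.config.transform file in its content, and NuGet merges those elements into the app's web.config at install time. A minimal sketch (the key and value shown are invented for illustration):

```xml
<!-- web.config.transform shipped inside a package.
     On install, NuGet merges these elements into the app's web.config. -->
<configuration>
  <appSettings>
    <add key="FooApiUrl" value="http://example.com/api" />
  </appSettings>
</configuration>
```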

Now let's think about what it would take to perform those operations outside of VS.

The first 3 involve making modifications to the csproj file. When done within VS, it happens automatically by calling DTE methods, but outside of VS it would need to be done using custom parsing logic. While it's clearly possible, it needs to be done carefully to avoid corrupting the csproj file. e.g. a GAC reference should not be added if it's already there.

#4 to #6 should not be too different from doing it in VS.

#7 is basically impossible, since you cannot really 'fake' the DTE to let those scripts run.

So conceivably, with some good amount of work, we could support all scenarios except #7. It would be a little quirky as some packages would not fully work, but in many cases it would work.

But let's now discuss how useful it would be.

Why installing packages outside of Visual Studio rarely makes sense

So let's say we had this feature and it fully worked. What would it let you do that you can't do today?

You could use the command line outside VS to install a Foo package in your project, but that in itself is rarely useful. e.g. suppose the package brings in a Foo.dll. You now have that assembly added as a reference in your project, but you don't have any code using it. You now need to go in VS to write code against that new assembly, so it would have been simpler to just add it from VS in the first place!

And that's generally the case for most packages: the logical step after installing them is to go to VS and actually use them, which mostly negates any benefits you may find by installing it outside of VS.

Admittedly, there are exceptions, like the Elmah package which is more or less 'ready to run' after you install it. But for the wide majority of packages, there is no direct 'install and run' workflow.

What about package updates?

If package installs don't make sense outside of VS, what about package updates?

So you have this Foo package that you installed from VS, but now you want to update it to a new version from the command line. Does that make sense?

I think it does make a lot more sense than the install scenario, because by that point, you (presumably) already wrote some code that uses the package. So by updating it, you might get a newer Foo.dll with bug fixes, but all the code you wrote is still valid and ready to run against.

In particular, update could work well in the constrained scenario where the new version of the package just updates an assembly but doesn't do much else.

On the other hand, it would be hard to support in the general case, since in theory, the updated package can be completely different from the older one. e.g. suppose the new package contains some new install-time PowerShell scripts. We'd be right back with the same tough issues discussed above.

Where do we go from here? You tell us!

My take is that we need to experiment with supporting package update outside of VS for at least a subset of scenarios. The big question is deciding how far this needs to go to reach a sufficiently useful state.

The first step would be to start with the 'only the assembly changed' scenario, which is relatively simple, and probably is the 90+% case.

If you have some thoughts on this, we'd love to hear them! Would you use such a feature, and would limiting it to updating assembly references be enough for your needs?