Unit Testing Powershell and Hello Pester by Matt Wrock

It has been a long time since I have blogged. Too long. I have changed roles at Microsoft, moving to Cloud Developer Services, and have taken on two new open source projects, Chocolatey and Pester, joining as a project committer to both. RequestReduce is still alive but has gone far undernourished. Fortunately it is pretty stable except for a few edge cases, and I hope to get back to enhancements and fixes soon. All these commitments, in addition to my Boxstarter project, keep me working nearly 24/7. It’s been a lot of fun, but it’s not a sustainable lifestyle I can recommend.

A year of Powershell

This has been my year of Powershell. I was introduced about three years ago and was immediately impressed with its power and also a bit daunted by the learning curve. Unless you work with it often, some things are hard to get used to and are not obvious to a curmudgeonly C# developer like myself. About a year ago, I decided that I was tired of complaining about my organization’s deployment practices and began to revamp the process from a 50 page document of manual steps to a few thousand lines of Powershell. I learned a lot.

You can pretty much do anything in Powershell. There really is no excuse for manual deployments. If you must have manual steps, then there is something wrong with your process or tool set.

The Pain of a large Powershell Code Base

One interesting thing about Powershell is that it lends itself extremely well to small but powerful functions and modules. Once you reach a certain level of competence, it’s almost addictive to automate everything and add lots of scripts to your profile. Over time, these small scripts can grow, and I have now worked on about a half dozen Powershell projects that have become full blown applications.

As a Powershell application becomes larger and more complex, the pain of having no compiler (one might call the compiler the first unit test) and a thin unit testing ecosystem begins to rear its ugly head. As a C# developer practicing TDD (test driven development, where you write tests before implementation code) in my day job, I have grown to appreciate it as both a design tool and a productivity tool. Without it, it is easy for a growing code base to get sloppy and to get mired in regression bugs as you add features. I like to think that V1 (or at least the first prototype) always ships faster without tests, but with each release, the lack of strong unit test coverage slows feature work exponentially. After several years where unit tests are written later, if at all, it’s easy to end up with a maintenance nightmare where teams are simply chasing bugs.

Powershell is not immune and needs and deserves a strong Unit Testing toolset.

Unit testing Powershell? Seems a bit overkill, doesn’t it?

Maybe. I do think there is a place for the ability to rattle off ad hoc scripts without the compelling need for unit testing. What also makes Powershell unique is that it tends to work in the domain of “external dependencies” that are typically mocked out in traditional unit testing. Look at most Powershell scripts and you will find that databases, file systems, the registry and large enterprise systems are central areas of concern. How do you unit test something that simply acts as glue between these things?

So maybe you don’t. I’m not going to advise that every line of Powershell needs to be unit tested. However, once you start writing scripts with lots of conditionals and alternate paths and flows, I think unit testing is a must, and all too often so much time goes by without tests that catching up becomes painful. So if you think your Powershell app is going to hold some weight, it’s best to start unit testing right away. Of course in languages like C#, Java, and Ruby this is a no brainer. There is a rich set of tools and vibrant communities committed to polishing unit testing patterns. There are countless unit testing frameworks, mocking frameworks, test runners and now “continuous testing” tools to choose from. Powershell? Not so much. The tooling is new, the domain is often difficult to test and a large part of the community lacks a unit testing background.

How do you do it?

It’s at this point where we should all bow our heads and offer up a minute of silence as we send warm rays of gratitude toward Scott Muc (@scottmuc). Scott created a great tool called Pester, a BDD style unit testing framework for Powershell. I do know a bit about BDD (behavior driven development) but not enough to pontificate about it on the internets. I think that’s ok. Regardless, I’m going to demonstrate how you use Pester to test Powershell scripts.

Chocolatey: A Pester Case Study

For the past few months, I have been an active contributor to Chocolatey – an awesome Windows application package management command line app originated by Rob Reynolds (@ferventcoder), one of the very early team members of what we now call Nuget back when it was called Nu and distributed via Ruby Gems. In fact, Chocolatey is largely developed on top of Nuget.

Chocolatey is written in 100% Powershell. As applications go, it is relatively small, but it is a fairly large Powershell application. Chocolatey deals a lot with calling out to Nuget.exe and managing downloads from various package and download feeds. So there is a lot of “heavy machinery glue” tying together black boxes, but there is also a lot of raw logic involved in deciding which black box to attach to what and which entry points to use to enter those black boxes. Do we call Nuget or WebPI or Ruby Gems? How do we deal with the package versioning APIs? Are we raising appropriate exceptions when things go wrong? This is where unit testing helps us and where Pester helps us with unit testing. We only have about 125 tests right now. We got started a little late, but we are catching up.

Let’s Write a Test!

Describe "When calling Get-PackageFoldersForPackage against multiple versions and other packages" {  $packageName = 'sake'
  $packageVersion1 = '0.1.3'
  $packageVersion2 = '0.1.3.1'
  $packageVersion3 = '0.1.4'
  Setup -File "chocolatey\lib\$packageName.$packageVersion1\sake.nuspec" ''
  Setup -File "chocolatey\lib\$packageName.$packageVersion2\sake.nuspec" ''
  Setup -File "chocolatey\lib\sumo.$packageVersion3\sake.nuspec" ''
  $returnValue = Get-PackageFoldersForPackage $packageName
  $expectedValue = "$packageName.$packageVersion1 $packageName.$packageVersion2"

  It "should return multiple package folders back" {
    $returnValue.should.be($expectedValue)
  }    

  It "should not return the package that is not the same name" {
    foreach ($item in $returnValue) {
      $item.Name.Contains("sumo.$packageVersion3").should.be($false)
    }
  }  
}

Here we are asking Chocolatey to tell us what versions it has for a given package. If Chocolatey has three packages, two being separate versions of the one we are asking about and the third for an entirely different package, we would expect to get back the two version packages for the package we are querying. There are several more tests around this feature alone but we will focus on this one.

The first thing to call out is the Describe block. This often contains the “Given” and the “When” clauses of BDD’s Given-When-Then idiom. The Chocolatey team are not BDD purists. Otherwise the test might read:

Given a caller for package folders

When there are multiple versions of that package

Then it should return multiple packages

This might also be considered the “Arrange-Act” of the “Arrange-Act-Assert” pattern. Here we are setting up the conditions of our test to match the scenario we want to replicate. Finally, the “It” block performs our validation. It’s the “Then” of Given-When-Then or the “Assert” of Arrange-Act-Assert. If the validation does not hold, the test fails.
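
Stripped to its bones, the mapping looks something like this (a hypothetical skeleton, not code lifted from the Chocolatey suite):

# Describe holds the Given and When (Arrange and Act)
Describe "When there are multiple versions of an installed package" {
  # ...setup and the call under test go here...
  $returnValue = Get-PackageFoldersForPackage 'sake'

  # It holds the Then (Assert)
  It "should return multiple package folders back" {
    $returnValue.should.be('sake.0.1.3 sake.0.1.3.1')
  }
}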

Make it simple, use lots of functions

As I mentioned above, I see unit testing first and foremost as a design tool. This is largely because it is downright painful to test code that follows poor design principles. One way to make your Powershell scripts a bear to test is to write long, procedural scripts. As it so happens, that also makes the code a bear to maintain and add features to. So just like in other languages, use the power of encapsulation and the Single Responsibility Principle. The pattern that the Chocolatey code follows is to break the code up into a file per function and another file for all the tests of that function. The actual functions are dot-sourced into the test file.

At the top of most test functions, you will find:

$here = Split-Path -Parent $MyInvocation.MyCommand.Definition
$common = Join-Path (Split-Path -Parent $here)  '_Common.ps1'
. $common

This dot sources _Common.ps1, which is responsible for dot sourcing all of the Chocolatey functions and includes some other test setup logic.
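
I won’t reproduce the real _Common.ps1 here, but the gist is easy to sketch. Assume each function lives in its own file under a functions folder (the actual Chocolatey layout differs):

# A rough sketch of what a _Common.ps1 can do: dot source every function file
# so the tests exercise the real implementations. The folder name is an assumption.
$here = Split-Path -Parent $MyInvocation.MyCommand.Definition
$functionDir = Join-Path $here '..\src\functions'

Get-ChildItem $functionDir -Filter *.ps1 | ForEach-Object {
  . $_.FullName   # dot source each function into the current scope
}

# ...plus any shared test setup (temp directories, helper variables, etc.)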

Mocking

I personally got involved with the Pester project when I wanted a true mocking framework. We were stubbing out all of the Chocolatey functions and used flags to signal whether the real function or a stub should be invoked. This got very tedious to set up and maintain. The Chocolatey code base had well over a thousand lines of this stubbing code. Given the dynamic nature of Powershell, it seemed to me that it would be straightforward to create a true mocking framework. It also looked like a lot of fun (the main reason why I do anything) and a way to fill in some knowledge gaps I had with Powershell. This all proved to be true (which does not often happen for me).

Essentially you can tell the mocking functions to endow any Powershell command (or custom function) with new behavior. You can also have it track or record what is being called and with what parameters. Here is a test that uses mocking:

Describe "Install-ChocolateyVsixPackage" {
  Context "When not Specifying a version and version 10 and 11 is installed" {
    Mock Get-ChildItem {@(@{Name="path\10.0";Property=@("InstallDir");PSPath="10"},@{Name="path\11.0";Property=@("InstallDir");PSPath="11"})}
    Mock get-itemproperty {@{InstallDir=$Path}}
    Mock Get-ChocolateyWebFile
    Mock Write-Debug
    Mock Write-ChocolateySuccess
    Mock Install-Vsix

    Install-ChocolateyVsixPackage "package" "url"
    It "should install for version 11" {
        Assert-MockCalled Write-Debug -ParameterFilter {$message -like "*11\VsixInstaller.exe" }
    }
  }

  Context "When not Specifying a version and only 10 is installed" {
    Mock Get-ChildItem {@{Name="path\10.0";Property=@("InstallDir");PSPath="10";Length=$false}}
    Mock get-itemproperty {@{InstallDir=$Path}}
    Mock Get-ChocolateyWebFile
    Mock Write-Debug
    Mock Write-ChocolateySuccess
    Mock Write-ChocolateyFailure
    Mock Install-Vsix

    Install-ChocolateyVsixPackage "package" "url"
    It "should install for version 10" {
        Assert-MockCalled Write-Debug -ParameterFilter {$message -like "*10\VsixInstaller.exe" }
    }
  }
}

These are tests of the Chocolatey VSIX installer. A VSIX is a Visual Studio extension. Before focusing on the mocking, let’s look at a new bit of syntactic sugar here that calls out the “When” as a first class expression using the Context block. It’s essentially a nested Describe but expresses the Given-When-Then pattern more naturally.

A user can specify the version of Visual Studio to install the extension into if that user has more than one version installed. If they do not specify a version, we want to install it into the most recent version. That is what the first context is testing. The way one checks is to inspect the registry for the versions installed and their locations. That’s where mocking comes in. We do not want to have to install and uninstall multiple versions of Visual Studio on our dev boxes if we do not have them. We certainly do not want to do this on a CI server. We also do not want to be futzing with the Windows Registry and trying to feed it fake data.

Instead we can mock the Get-ChildItem cmdlet that we use to query the registry. Using the Mock function, we state that when Get-ChildItem is called, it should return a hashtable that “looks” just like the registry key collection that a real Get-ChildItem would give us when querying VS versions on a machine with multiple versions. Then you see several functions mocked with no alternate behavior specified. Specifying no behavior is the same as specifying simply {}, or do nothing. We do this because none of those functions are relevant to this test and we don’t want the real code to execute. For instance, we do not want Get-ChocolateyWebFile to make a network call for a file.

In the It block, you can see how we use Assert-MockCalled to verify that our code called into a mocked function as expected. Here, if things go as planned, a debug message will record the path of the VsixInstaller used. Since we previously declared Write-Debug as a mocked function, the mocking framework monitors its calls. Using Assert-MockCalled we state that Write-Debug should be called at least once. There is further functionality that allows us to specify an exact number of calls, or zero calls. Using the ParameterFilter, we state that only calls where the message argument contains the VS 11 path should be counted.
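
For example, assertions like these are possible (the parameter names come from the Pester help and may differ slightly across versions, so treat this as a sketch):

# Assert Write-Debug was called exactly twice with a matching message
Assert-MockCalled Write-Debug -Exactly -Times 2 `
    -ParameterFilter { $message -like "*VsixInstaller.exe" }

# Assert Get-ChocolateyWebFile was never called at all
Assert-MockCalled Get-ChocolateyWebFile -Times 0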

If you are curious about the implementation details of the mocking in Pester, the code is only a couple hundred lines.

And now a few words on state vs. behavior based testing

A common cause for debate in testing discussions is the use of state based vs. behavior based testing. It is generally agreed that it is best to test the state of a function and its collaborators once the function has completed. Too often we instead test the behavior of a function. We reach deeper into the function and assume we know more than we should about what the function is doing. We test the implementation details of the function rather than the outcome of its state. Behavior based tests add fragility to a test suite since changes in implementation often require changing the tests. Mocking can lead developers to overuse behavior based tests since it makes it so easy to test a function’s interactions at various points in its logic.

I very much agree with the need to test state. This can be more difficult, I think, given the nature and ecosystem of Powershell. Powershell is largely built for connecting and controlling the interactions of “black boxes,” and often the behavior being tested is blurred with state. I think we can get better at this, and I admit to being new to this kind of unit testing. I hope to evolve patterns more indicative of state based tests as I learn in this space.

Tying Powershell Test Results into your CI Builds

One great feature of Pester is its ability to output test results according to the NUnit XML schema. While I personally have difficulty using the words “great” and “XML schema” in the same sentence, here it is sincere. This is particularly cool because most continuous integration servers understand this schema and provide tooling that surfaces these results for easy visibility. Both Pester and Chocolatey use the CodeBetter TeamCity server to run builds after commits to the master repo and leverage this feature. There is a page on the Pester wiki explaining how to set this up.
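
Turning it on is basically a matter of passing an output flag to Invoke-Pester. The parameter names have changed across Pester versions (older builds used -OutputXml), so check Get-Help Invoke-Pester for your version, but it looks roughly like this:

# Run the tests under the current directory and emit NUnit-style XML
# that TeamCity (and most CI servers) can pick up as a test report.
Invoke-Pester -OutputFile .\TestResults.xml -OutputFormat NUnitXml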

Here is a shot of how this looks:

[image: TeamCity test results showing which tests failed and how long they took]

I can see what tests failed and how long the tests took.

Learning more

We just scratched the surface of Pester and in no way toured all of its great features. The Pester GitHub wiki has lots of detailed information on its API and usage. When you install the Pester Powershell module, you also have access to the command line help, which is pretty thorough. I’d also encourage you to look at both Pester and Chocolatey for a suite of example tests that demonstrate how to use all of the functionality in Pester. Also, both Pester and Chocolatey have a Google Groups discussion page for raising issues and questions.

I hope you have found this post informative and please do let us know what you think of Pester and how you would like to see it improved.

Also, if you are attending the Powershell Summit this spring, please catch my talk on this exact subject.

Released AutoWrockTestable: Making test class composition easier by Matt Wrock

Late last year I blogged about a unit testing pattern I have been using for the past couple of years. It’s a pattern that I initially learned from Matt Manela (@mmanela). I adapted the pattern to use automocking with Richard Cirerol’s wrapper. Over the last week I have been working to plug this into Visual Studio as a template that can easily add test classes and make one’s unit testing work flow more efficient.

I could, should, maybe will, but probably won’t write a separate post dedicated to making the Visual Studio extension. Ironically, while I am a member of the Visual Studio Gallery team, this is the first public extension I have written. While it is a trivial extension as extensions go, there were some interesting learnings that turned what I thought would be a couple nights’ worth of work into a week of my spare time. Despite some frustrating incidents, it was a lot of fun.

Now let’s dive into AutoWrockTestable!

What’s better than AutoMocking? Why of course, AutoWrocking!

Visual Studio integration makes composing tests a snap!

Here is how to effortlessly add test classes to your solution with all mockable dependencies mocked:

1. Download the Visual Studio extension from Codeplex or the Visual Studio Gallery.
The extension will also install Nuget if you do not already have it and will add Structuremap, Structuremap.Automocking and Moq to your Nuget repository.

2. Create a skeleton of your implementation class.

public class OAuthTokenService
{
    private readonly IWebClientWrapper webClientWrapper;
    private readonly IRegistryWrapper registry;

    public OAuthTokenService(IWebClientWrapper webClientWrapper,
        IRegistryWrapper registry)
    {
        this.webClientWrapper = webClientWrapper;
        this.registry = registry;
    }

    public string GetAccessToken(string clientId, IOauthUrls oauthUrls)
    {
        return null;
    }
}

3. Click on the "Add Testable..." menu item in Solution Explorer's "Add" context menu.

ContextMenu.png

4. Enter the name of the class you want to test. You can enter any name but the text box will auto complete using all class files open in the editor. The first class in the active file is initially selected.

Wizard.png


5. AutoWrockTestable creates a new class file with the same name as your implementation class appending "Tests" to the name and containing this code:

using AutoWrockTestable;

namespace SkyCli.Facts
{
    class OAuthTokenServiceTests
    {
        class TestableOAuthTokenService : Testable<SkyCli.OAuth.OAuthTokenService>
        {
            public TestableOAuthTokenService()
            {
            }
        }
    }
}

Writing tests using Testable<ClassToTest>

The Testable class has its dependencies automatically mocked. Now you can start to write test methods using your Testable. I like to use nested classes (a class for every method I want to test) to organize my tests. Here is how a test might look:

class OAuthTokenServiceTests
{
    class TestableOAuthTokenService : Testable<SkyCli.OAuth.OAuthTokenService>
    {
        public TestableOAuthTokenService()
        {
        }
    }

    public class GetAccessToken
    {
        [Fact]
        public void WillReturnTokenFromRegistryIfAFreshOneIsFoundThere()
        {
            var testable = new TestableOAuthTokenService();
            var registryValues = new Dictionary<string, string>();
            registryValues.Add("access_token", "token");
            registryValues.Add("expires_in", "3600");
            registryValues.Add("grant_time", DateTime.Now.Ticks.ToString());
            testable.Mock<IRegistryWrapper>().Setup(x => x.GetValues("path"))
                .Returns(registryValues);

            var result = testable.ClassUnderTest.GetAccessToken("clientId", null);

            Assert.Equal("token", result);
        }
    }
}

See Using the Testable<T> Class for a complete explanation of the Testable<T> API.

For more information on the Testable pattern and Auto Mocking in general see The Testable Pattern and Auto Mocking Explained or see my previous blog post on the subject.

Turn off Internet Explorer Enhanced Security by Matt Wrock

If you enjoy lots of dialog boxes that require you to take action before you can review any unique URL, then you will not want to use this:

$AdminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\`    {A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"$UserKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\`    {A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"Set-ItemProperty -Path $AdminKey -Name "IsInstalled" -Value 0Set-ItemProperty -Path $UserKey -Name "IsInstalled" -Value 0Stop-Process -Name Explorer -ForceWrite-Host "IE Enhanced Security Configuration (ESC) has been disabled." -ForegroundColor Green

If, on the other hand, IE enhanced security makes you want to stick your fist in an activated blender, do yourself a favor and step away from the blender and then invoke the above script.

If you walk the fence between these two, invoke it anyways and you’ll be glad you did.

You are welcome.

The Perfect Build Revisited by Matt Wrock


About two and a half years ago I wrote a series of posts documenting the work my team had done to automate our build process. We had completed a migration from VSS to SVN and used a combination of nAnt and CruiseControl to facilitate continuous integration and push button deployments to any of our environments including production.

Over the last couple of months, I’ve had the opportunity to put together an automated deployment process for my current organization at Microsoft. Throughout my career, I’ve worked on a few projects that were essentially a rewrite of a similar project I had worked on in the past for a different employer. What I love about these kinds of projects is that they are a great opportunity to do so many things better. I can remember the architectural decisions I had made and regretted but was too far in to easily change (usually a smell of an architectural weakness itself). Now I can avoid them and approach the problem from the angle I wished I had taken before. In a way this was a similar situation.

While I felt good about the system I had put together before, I now had better tools at my disposal. I still think nAnt and CruiseControl are fine tools, but now I’m using PowerShell with PSake instead of nAnt, TeamCity instead of CruiseControl, and our source code is in Mercurial instead of SVN. The other major difference between the system I’m building now and the one I had worked on before is that this system also includes the automation of server setup and configuration, taking a clean OS to a fully functioning application node serving any tier in the app (web, db, admin, etc.).

This post is intended to provide an overview of the new system and I may follow up with future posts that dive into more detailed coverage of various parts of the system.

Do you really need an automated build and deployment system?

Yes. You do.

You may be thinking that while an automated system sounds neat and all, you simply don’t have time to build one. While I tend to be very pragmatic in my approach to software architecture, I definitely see automated deployments as a must have and not a “nice to have.” The reason I say this is that over several deployments, more time is lost in the mechanics of deploying, more risk is taken on with every manual deployment, and more time is spent troubleshooting deployments than it would take to automate them.

Often, teams do not recognize the value of automated deployments until they experience it. Once they work with one, they can’t imagine going back. With automated builds and deployments, the drama of deployments is reduced to a simple routine task, teams have more time to focus on building features, and the business has more confidence that its features will move forward reliably and consistently. If you want to release more often and perhaps extend continuous integration to continuous deployment, you simply must automate the deployment process.

If they are so important, why did it take you over two years to start building one?

Fair question. I don’t intend to enumerate the political reasons, of which there are many, here. That will have to wait for my memoir due out in 2042, “My Life, a Love Song.” Please keep an eye out for that one.

Throughout my tenure in the MSDN/Technet org at Microsoft, deployments have been managed by a combination of test and a “build team” in the Ops group. While I have certainly been vocal in pushing for more automation, the fact that other people did most of the work, and that there was resistance from some to automating the process, caused me to direct my focus on other things. There were certainly pain points along the way. There was a lot of ceremony involved in preparing for a deployment and in scheduling “hot fixes” with the build team. When there were problems with a deployment, it could be difficult to determine where things went wrong.

Recently, we transitioned to a new offshore vendor company. One of their responsibilities would be deployments and setting up new environments. Because these were mostly done manually, the logistics involved were often communicated verbally and via large step by step Word documents.

A side note: Many cultures have built a very rich and vibrant heritage around Oral history and story telling. I do not in any way want to disrespect these traditions. On the contrary, we should celebrate them. I do not believe that oral histories lend themselves well to automated builds and deployments.

Without going into the details, a lot fell through the cracks as the new team came on board. I do not fault the people on this team; I wouldn’t expect anyone to be able to build an environment for a complex app that they have never worked on before based on a few phone conversations and a SharePoint wiki. Our environment setups and deployments suddenly started having problems. Because a large part of the code I am involved with spans several apps, I am often approached when things go wrong here, and before long I found myself spending most of my time troubleshooting and fixing environments and their deployments. It soon became crystal clear that until an automated system was in place, this would continue to stand in the way of getting real feature work done. And instead of whining and complaining about it, I decided to just do it.

What exactly does an automated build and deployment system do?

For the system I set out to build, the following key components are included:

  1. Application compilation and packaging
  2. Deployment of application packages to various environments
  3. Bootstrap scripts for setting up a new server or environment

The last one has inspired a new personal side project, Autobox, that sets out to automate the building of a developer machine (or any kind of personal machine) from a bare OS via a single command line. After all, if I can create a test server with SQL Server, AppFabric caching, various Windows services, and web applications, along with all the file permissions and firewall rules involved, certainly I can create my own machine with all my preferred apps and settings ready to go.

Let’s examine each of these individually.

Application compilation and packaging

This is essentially the process that transforms the raw application bits with all of its code files, static assets, sql scripts, config files, and other application specific files into a zip file that can be consumed by the deployment scripts. This package in our case is typically composed of a directory for each application tier. Here is the package for our Galleries application:

[image: directory listing of the Galleries application package, one folder per application tier]

The packaging process is responsible for the actual compilation, which typically involves a call to msbuild that invokes the appropriate msbuild tasks from the original Visual Studio solution. In addition to transforming source files to compiled DLLs, the packaging process copies everything needed to deploy the application into a coherent directory structure and nothing more. This typically includes powershell scripts and various command line tools that run sql scripts to update the database with any schema changes, add metadata to lookup tables or migrate old data to conform to new schema or logic. It may also include scripts responsible for transforming web.configs and app.configs with settings appropriate for the environment.
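
The real packaging scripts belong to the individual application teams, but a stripped down psake packaging task might look something like this (project names, paths, and the 7za.exe location are made up; exec is psake’s helper that fails the build on a non-zero exit code):

task Package -depends Clean {
    # compile via the Visual Studio solution (assumes msbuild is on the path)
    exec { msbuild .\src\MyApp.sln /p:Configuration=Release /v:minimal }

    # lay out only what the deployment scripts need
    new-item .\package\web, .\package\tools, .\artifacts -type directory -force | out-null
    copy-item .\src\Web\bin, .\src\Web\web.config -destination .\package\web -recurse
    copy-item .\deployment\*.ps1, .\sql\*.sql -destination .\package\tools

    # zip it up so TeamCity can publish it as a build artifact
    exec { & .\tools\7za.exe a ".\artifacts\MyApp-$BuildNumber.7z" .\package\* }
}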

This first step of the build and deployment process had been in place for quite some time, so I just had to make some minor tweaks here and there. The individual application teams in my group are responsible for keeping the packaging scripts up to date, and it is wired into our continuous integration process. Every push of source code to the central Mercurial repository forces our build server, TeamCity, to invoke a set of scripts that include compilation, running unit tests and finally packaging. TeamCity then saves the zipped package and makes it available to the deployment scripts. If you are familiar with TeamCity, you know these are the build “artifacts.”

Deployment of application packages to various environments

Here is where my work largely started. Until recently, we had a script that TeamCity would invoke twice a day which would collect the packages of each app and aggregate them into another package for each deployable environment. This uses TeamCity dependent builds which will pull the build artifacts of the last successful application build into the deployment script’s working directory. Here are my Deployment Build settings that declare the dependencies:

[image: TeamCity artifact dependency settings for the deployment build]

So in our case, we would have application packages for Forums, Search, Profile and various internal services as seen above and these would all be rolled into a single 7z file for each environment including test, staging, production, etc. This packaging script was also responsible for the final transformation of the configuration files. It would merge settings specific to each environment into the web and app configs so that the final package, say prod-7791.7z (7791 being the build number), had the exact web and app configs that would end up in production.
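
The real transformation logic is more involved, but the core idea can be sketched in a few lines of powershell (the $envSettings values and file paths here are hypothetical):

# Overlay environment specific appSettings values onto a web.config
$envSettings = @{ "cdnHost" = "cdn.prod.contoso.com"; "logLevel" = "Error" }

$configPath = ".\package\web\web.config"
$xml = [xml](Get-Content $configPath)
foreach ($key in $envSettings.Keys) {
    $node = $xml.SelectSingleNode("//appSettings/add[@key='$key']")
    if ($node -ne $null) { $node.SetAttribute("value", $envSettings[$key]) }
}
$xml.Save((Resolve-Path $configPath))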

This aggregation script would take two and a half hours to run. Back in the day it was fairly fast, but as environments got added, the process took longer and longer. It would then take the build team a couple of hours to take this package and deploy its bits to each server, run the database upgrade scripts, stop and restart services, smoke test, etc. This became more and more painful the closer we got to release because as dev fixed bugs, it could take one to two days before they received feedback from test on those fixes.

Revamping this was fairly straightforward. I rewrote this script to transform the configs for only a single environment, which it would receive via a command parameter from TeamCity. I created a separate build config in TeamCity to make this very clear:

[image: TeamCity build configurations, one deployment configuration per environment]

Each of these build configurations runs the exact same script, but they each pass different command line arguments to the build script indicating their environment. Also, some are wired to different version control branches. For example, our Int (integration) environment builds off of the Release Candidate branch while the others build off of Trunk. Finally, there is an “Ad Hoc” config where anyone can run a custom build with custom command line parameters. If the Ad Hoc build fails, no one is notified and we don’t get particularly alarmed. Here is how the command line parameters are wired up for custom builds in TeamCity:

[image: TeamCity custom build parameter configuration]

The script is a normal powershell script that gets called via psake. Psake provides a very nice powershell based container for running builds. Think of it as an alternative to writing an MSBuild script. While MSBuild is XML based and very declarative in nature, PSake allows you to script out all of your build tasks in powershell, which makes a lot of sense for the type of things a build script does - such as copying files around. I’m not going to dive into a PSake tutorial here, but here is a snippet of my PSake script:

properties {
    $current = Resolve-Path .\default.ps1 | Split-Path
    $path = $current
    $BuildNumber = 0
    $ConfigDrop = ".\_configs"
    $WebDrop = "http"
    $Environment = "DEFAULT"
    $configVariables = New-Object System.Collections.Queue
}

Include .\psake\teamcity.ps1

task default -depends Package

task Configs -depends Copy-readme, SetupEnvironment, Configure-Social,
    Configure-StoApps, Configure-Services, Configure-SocialServices

task Package -depends SetupEnvironment, Clean, Configs, Database, preparesearch,
    SocialSites, StoApps, SocialServices, StopServices, Services, CopyConfigs,
    StartServices, FlushRequestReduce, RestartIIS, RestartAppFabric, TestPages,
    CleanEnvironment

TaskSetup {
    TeamCity-ReportBuildStart "Starting task $($psake.context.Peek().currentTaskName)"
}

TaskTearDown {
    TeamCity-ReportBuildFinish "Finishing task $($psake.context.Peek().currentTaskName)"
}

Task SetupEnvironment {
    .\Utilities\EnvironmentSetup.ps1 $current $Environment $configVariables
}

This is not any kind of special scripting language. It is normal powershell. PSake provides a Powershell module which exposes several functions like Task, Properties, etc. Many of these take script blocks as parameters. The PSake module really is not very large, and therefore it does not take much investment to understand what it does and what functionality it provides. It really does not provide much “functionality” at all in terms of utility methods, but it provides a very nice framework for organizing the various parts of your build script and specifying dependencies.

The snippet above is the beginning of my deployment script. The properties section defines and sets script wide variables, and these can be overridden via command line parameters when calling PSake. Next are my tasks. Tasks might actually do something, like the SetupEnvironment task at the bottom, or they might alias a group of tasks to be run in a specific order, like the default, Configs and Package tasks. If you are familiar with msbuild, these are simply the equivalent of msbuild targets.

When you call PSake, you can tell it to run a specific task or, if you do not, it will run the default task. Even though I am not including most of my script here, it is not difficult to tell what the deployment script does by simply looking at the dependencies of the default task. It first sets up the environment by calling another powershell script that sets a bunch of global environment variables specific to the Environment property. It performs a clean of any previous build, transforms the configs, and runs the database scripts. Then it executes several tasks that copy different directories to the web server, stops some windows services, copies the services code, starts the services, restarts IIS, runs some quick tests to make sure the apps are loading and finally cleans up after itself.
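
For example, invoking the runner directly (the same ./psake/psake.ps1 wrapper the setup bootstrapper later in this post uses), running just the Configs task against a specific environment would look something like this (property values are illustrative):

# Run only the Configs task against the Next environment.
# Omitting -taskList runs the default task, which depends on the full Package.
.\psake\psake.ps1 .\default.ps1 -taskList Configs -properties @{Environment='Next'; BuildNumber=42}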

One nice thing about this script is that it does not use any kind of remoting, which can be important in some environments. The script can be run directly from the build agent (the server running the TeamCity Build Agent service) and target any environment. It does require that the service identity under which TeamCity runs is an administrator on the target web servers and sql servers. To give you a glimpse into what is going on here, I specify all the server names specific to each environment in a config file named after the environment. So our Next (daily build) environment has a file called Next.ps1 that, among many other things, contains:

$global:WebServers     = "RR1STOCSAVWB18", "RR1STOCSAVWB17"
$global:ServicesServer = "RR1STOCSAVWB17"
 
Then my RestartIIS task looks like this:
Task RestartIIS {
    Restart-IIS $global:WebServers
}

function Restart-IIS([array] $servers) {
    foreach ($server in $servers) {
        .\Utilities\RemoteService.ps1 ($server -split "\\")[0] restart -service "W3SVC"
    }
}

RemoteService.ps1 contains a bunch of functions that make working with services on remote servers not so painful.
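
I won’t include the real file, but the heart of it is not much more than something like this (a simplified sketch; the actual script handles more actions and error cases):

# Stop, start or restart a Windows service on a remote machine.
param([string]$server, [string]$action, [string]$service)

$svc = Get-Service -ComputerName $server -Name $service
switch ($action) {
    "stop"    { $svc.Stop();  $svc.WaitForStatus("Stopped", [TimeSpan]::FromMinutes(2)) }
    "start"   { $svc.Start(); $svc.WaitForStatus("Running", [TimeSpan]::FromMinutes(2)) }
    "restart" { $svc.Stop();  $svc.WaitForStatus("Stopped", [TimeSpan]::FromMinutes(2))
                $svc.Start(); $svc.WaitForStatus("Running", [TimeSpan]::FromMinutes(2)) }
}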

 

Did the deployment succeed?

At any point in the scripts, if an error occurs, the build will fail. However, I also want to have some way to quickly check each application and ensure it can at least load. It is very possible that the build script will complete just fine, but there may be something in the latest app code or some change to the environment that causes an application to fail. If this happens, I want to know which app failed, fail the build, and provide straightforward reporting to testers to discover where things broke down. Yes, each app build has its own set of unit tests. Most apps have thousands, but there are a multitude of issues, both code related and server or network related, that can slip through the cracks and cause the app to fail.

At the end of every deployment, a series of URLs are “pinged” and expected to return a 200 HTTP status code. Currently we have 28 URLs in our tests. Now, a big reason for overhauling this system was to make it faster, so a big concern is that launching a bunch of app URLs will profoundly slow the build. To make this as efficient as possible, we use powershell jobs to multi thread the http requests and set a 5 minute timeout that will automatically fail all tests that do not complete before the timeout.

Here is the testing script:

task TestPages -depends SetupEnvironment {
    . .\tests.ps1
    Wait-Job -Job $openRequests -Timeout 300
    foreach ($request in $openRequests) {
        TeamCity-TestStarted $request.Name
        $jobOutput = (Receive-Job $request)
        if($jobOutput -is [system.array]) {$jobOutput = $jobOutput[-1]}
        $testParts = $jobOutput -split " ::: "
        if($testParts.Length -eq 2) {
            $testMessage=$testParts[1]
            $testTime=$testParts[0]
        }
        else {
            $testMessage=$testParts[0]
            $testTime=300
        }
        if($request.state -ne "Completed") {
            TeamCity-TestFailed $request.Name "Timeout" "Test did not complete within timeout."
        }
        Remove-Job $request -Force
        if ($testMessage -like "Failed*") {
            TeamCity-TestFailed $request.Name "Did not Receive a 200 Response" $testMessage
        }
        TeamCity-TestFinished $request.Name $testTime
    }
}

function Ping ([string] $pingUrl) {
    $jobArray = @()
    $job = Start-Job -scriptblock {param($url)
        $host.UI.RawUI.BufferSize = New-Object System.Management.Automation.Host.Size(8192,50)
        $ms = (Measure-Command {
            $web=[net.httpwebrequest]::create($url)
            $web.AllowAutoRedirect = $true
            $web.PreAuthenticate = $true
            $web.Timeout = 300000
            $systemWebProxy = [net.webrequest]::GetSystemWebProxy()
            $systemWebProxy.Credentials = [net.CredentialCache]::DefaultCredentials
            $web.Proxy = $systemWebProxy
            $web.Credentials = [net.CredentialCache]::DefaultCredentials
            try {
                $resp=$web.GetResponse()
            }
            catch [System.Net.WebException]{
                $resp=$_.Exception.Response
                $outerMessage = $_.Exception.Message
                $innerMessage = $_.Exception.InnerException
            }
        }).TotalMilliseconds
        $status = [int]$resp.StatusCode
        if ($status -ne 200) {
            $badServer = $resp.Headers["Server"]
            Write-Output "$ms ::: Failed to retrieve $url in $ms ms with status code: $status from server: $badServer"
            Write-Output $outerMessage
            Write-Output $innerMessage
        }
        else {
            Write-Output "$ms ::: Succeeded retrieving $url in $ms ms"
        }
    } -name "$pingUrl" -ArgumentList $pingUrl
    $jobArray += $Job
    return $jobArray
}

The individual test URLs are in the dot sourced tests.ps1:

$openRequests += Ping "http://$global:ServicesUrl/ChameleonService/Api.svc"
$openRequests += Ping "http://$global:AdminUrl/ChameleonAdmin/"
$openRequests += Ping "http://$global:ServicesUrl/SearchProviderServices/SearchProviderService.svc"
$openRequests += Ping "http://$global:ProfileApiUrl/ProfileApi/v1/profile/displayname/vamcalerts"
$openRequests += Ping http://$global:UserCardLoaderUrl...

An interesting thing to note here is the use of the functions beginning with TeamCity-. These come from a module provided by the psake-contrib project that exposes several functions for interacting with TeamCity’s messaging infrastructure. The functions I am using here write standard output messages formatted in such a way that TeamCity treats them like test reporting output: when a test started and finished, whether it succeeded or failed, and how long it took.
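
Under the covers these helpers just emit TeamCity “service messages,” which TeamCity parses out of the build log. Roughly speaking (the exact escaping rules are covered in the TeamCity docs), the calls above boil down to output like this:

Write-Output "##teamcity[testStarted name='http://myservice/Api.svc']"
Write-Output "##teamcity[testFailed name='http://myservice/Api.svc' message='Did not Receive a 200 Response']"
Write-Output "##teamcity[testFinished name='http://myservice/Api.svc' duration='1234']"

What is really nice about all of this is that these tests now light up in TeamCity’s test reporting: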

 
[image: TeamCity test report listing the pinged URLs]

I can zoom in on my failed tests to see why they failed:

[image: details of a failed URL test in TeamCity]

Pretty slick eh?

Bootstrap scripts for setting up a new server or environment

In my original Perfect Build series, I did not include automation around setting up servers or environments. However, one of the habits I picked up from the teams I work with at Microsoft is the inclusion of a build.bat file at the root of every source code repo that can build a development environment from scratch. In the past I had never followed this practice. I had not really used powershell and was not aware of all the possibilities available, which, it turns out, is basically that you can do pretty much anything in powershell. I’ll admit there is a learning curve involved, but it is well worth it. Being able to fire up a development environment for an app with a single command has proven to be a major time saver and a great way to “document” application requirements.

Now it’s one thing to get a dev environment up and running, but getting a true server environment up can be more challenging. Since many organizations don’t give developers access to the server environments, setting these up often falls to server operations. This may involve dev sending ops instructions or sitting down with an ops engineer to get a server up and running. A lot of time can be lost here, and it’s easy to neglect keeping these instructions up to date. I have personally spent an aggregate of weeks troubleshooting environments that were not set up correctly.

One solution commonly employed here is to use VM images. Once you get an environment set up the way it is supposed to be inside of a VM, take a snapshot and simply apply that snapshot whenever you need to set up a new server. I don’t like this approach. It is too easy for VM images to become stale, and they don’t serve well to “document” all of the requirements of an application. The fact is, just about anything can be scripted in powershell, and in my opinion, if it cannot be scripted then you have probably made a poor choice in technology. Powershell scripts can replace “deployment documents” or server setup documents. They should be readable by both developers and server support engineers. Even if one is not well versed in powershell, I believe any technical professional should at least be able to read a powershell script and deduce the gist of what it is doing.

For my applications, I put together a script, again in psake format, that can build any application tier from a bare OS. It can also build a complete environment on a standalone server. To provide an idea of what my script can do, here is the head of the psake script:

properties {
    $currentDir = Resolve-Path .\cleansetup.ps1 | Split-Path
    $profile = "$currentDir\CommApps\Profile"
    $forums = "$currentDir\CommApps\Forums"
    $search = "$currentDir\CommApps\Search"
    $chameleon = "$currentDir\CommApps\Chameleon"
    $configVariables = New-Object System.Collections.Queue
    IF ( TEST-PATH d:\) { $httpShare="d:\http" } else { $httpShare="c:\http" }
    $env = "test"
    $appFabricShareName = "velocity"
    $buildServerIdentity = "Redmond\Idiotbild"
    $domain = "redmond.corp.microsoft.com"
    $buildServer = "EpxTeamCityBuild.redmond.corp.microsoft.com"
    $buildServerQueue = "bt51"
    $doNotNeedsBits = $false
    $addHostFileEntries = $false
    $sqlServer = $env:computername
    $appFabricServer = $env:computername
    $AdminServer = $env:computername
    $restartSuffix = ""
    $noProxy = $true
}

Include .\psake\teamcity.ps1

task default -depends standalone

task standalone -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits, Setup-Roles,
    Disable-InternetExplorerESC, Database-server, Install-IIS-Rewrite-Module,
    Install-Velocity, Setup-MSDTC, Install-Event-Sources, Install-Certificates,
    Setup-Response-Headers, Register-ASP, Wait-For-Bits, Setup-IIS, Add-DB-Perms,
    Configure-Velocity, Install-WinServices, Set-Queue-Perms

task WebAppCache-Server -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits,
    Setup-Roles, Configure-Group-Security, Install-IIS-Rewrite-Module, Install-Velocity,
    Setup-MSDTC, Install-Event-Sources, Install-Certificates, Setup-Response-Headers,
    Register-ASP, Wait-For-Bits, Setup-IIS, Add-DB-Perms, Configure-Velocity,
    Install-WinServices, Set-Queue-Perms

task AppFabric-Server -depends Setup-Proxy, Set-EnvironmentParams, Setup-Roles,
    Configure-Group-Security, Install-Velocity, Configure-Velocity

task Web-server -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits, Setup-Roles,
    Configure-Group-Security, Install-IIS-Rewrite-Module, Setup-MSDTC, Install-Event-Sources,
    Install-Certificates, Setup-Response-Headers, Register-ASP, Wait-For-Bits,
    Setup-IIS, Add-DB-Perms

task Admin-Server -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits, Setup-Roles,
    Configure-Group-Security, Install-IIS-Rewrite-Module, Setup-MSDTC, Install-Event-Sources,
    Setup-Response-Headers, Register-ASP, Wait-For-Bits, Setup-IIS, Add-DB-Perms,
    Install-WinServices, Set-Queue-Perms

task Database-Server -depends Set-EnvironmentParams, Configure-Group-Security,
    Install-SqlServer, Create-Databases

task Post-Restart-Full -depends Set-EnvironmentParams, Remove-Startup,
    Configure-Velocity, Install-WinServices, Set-Queue-Perms

task Post-Restart -depends Remove-Startup, Configure-Velocity

task Get-Bits -depends Set-EnvironmentParams, Pull-Bits, Wait-For-Bits

By looking at the tasks you can get a feel for all that’s involved at each tier. First let me say that this script took about 20x more effort to write than the deployment script. I’m proud to report that I mastered file copying long ago. Once I finally managed to figure out the difference between source and destination, it’s been smooth sailing ever since. This script really taught me a lot, not only about powershell, but also about how the Windows OS and many of the administrative apps work together.

If I had to identify the step that was the biggest pain in the butt to figure out, by far and away it was installing and configuring AppFabric. This is Microsoft’s distributed caching solution, formerly known as Velocity. One thing that makes it tricky is that, at least in my case, it requires a reboot after installation and before configuration. I certainly do not want to include our entire server setup script here, but let me include the AppFabric portion. Again, keep in mind this is coming from a psake consumable script, so the tasks can be thought of as the “entry points” of the script while the functions serve as the “private” helper methods familiar from more formal programming languages.

task Install-Velocity -depends Install-DotNet4 {
    $global:restartNeeded = $false
    Start-Service -displayname "Windows Update"
    if (!(Test-Path "$env:windir\system32\AppFabric")) {
        $dest = "appfabric.exe"
        if (Is64Bit) {
            $url = "http://download.microsoft.com/download/1/A/D/1ADC8F3E-4446-4D31-9B2B-9B4578934A22/WindowsServerAppFabricSetup_x64_6.1.exe"
        }
        else {
            $url = "http://download.microsoft.com/download/1/A/D/1ADC8F3E-4446-4D31-9B2B-9B4578934A22/WindowsServerAppFabricSetup_x86_6.1.exe"
        }
        Download-File $url (join-path $currentDir $dest)
        ./appfabric.exe /i "cachingservice,cacheclient,cacheadmin"

        Start-Sleep -s 10
        $p = Get-Process "appfabric"
        $p.WaitForExit()
        $global:restartNeeded = $true
    }
    else {
        Write-Host "AppFabric - Already Installed..." -ForegroundColor Green
    }
}

task Configure-Velocity -depends Create-Velocity-Share, Install-Velocity {
    if($global:restartNeeded -eq $true -or $global:restartNeededOverride -eq $true) { RebootAndContinue }
    Load-Module DistributedCacheConfiguration
    $clusterInfo = Get-CacheClusterInfo "XML" "\\$env:computername\$appFabricShareName"
    if( $clusterInfo.IsInitialized -eq $false ) {
        new-CacheCluster "XML" "\\$env:computername\$appFabricShareName" "Medium"
        Register-CacheHost -Provider XML -ConnectionString "\\$env:computername\$appFabricShareName" `
            -CachePort 22233 -ClusterPort 22234 -ArbitrationPort 22235 -ReplicationPort 22236 `
            -HostName $env:computername -Account "NT AUTHORITY\Network Service"
        Add-CacheHost -Provider XML -ConnectionString "\\$env:computername\$appFabricShareName" `
            -Account "NT AUTHORITY\Network Service"
        Load-Module DistributedCacheAdministration
        use-cachecluster -Provider XML -ConnectionString "\\$env:computername\$appFabricShareName"
        New-Cache ForumsCache -TimeToLive 1440
        Set-CacheClusterSecurity -SecurityMode None -ProtectionLevel None

        start-cachecluster
        netsh firewall set allowedprogram `
            $env:systemroot\system32\AppFabric\DistributedCacheService.exe APPFABRIC enable
    }
}

function Is64Bit {
    [IntPtr]::Size -eq 8
}

function Download-File([string] $url, [string] $path) {
    Write-Host "Downloading $url to $path"
    $downloader = new-object System.Net.WebClient
    $downloader.DownloadFile($url, $path)
}

function RebootAndContinue {
    $global:restartNeededOverride = $false
    Copy-Item "$currentDir\post-restart-$restartSuffix.bat" `
        "$env:appdata\Microsoft\Windows\Start Menu\programs\startup"
    Restart-Computer -Force
}

Now there are several ways to configure AppFabric and this just demonstrates one approach. This uses the XML provider and it only installs the caching features of AppFabric.

Installing applications with Chocolatey

One “rediscovery” I made throughout this process is an open source project built on top of Nuget called Chocolatey. This is the brainchild of Rob Reynolds, one of the original creators of what we know as Nuget today, back when it was called Nu, before development was handed off to Microsoft and Outercurve. I say “rediscovery” because I stumbled upon this a year ago but didn’t really get it. However, it really makes sense when it comes to build/setup automation, whether that be for an application server or your personal machine.

Chocolatey is a framework for installing and setting up applications via silent installations. Many of the apps that you and I are used to manually downloading, then launching the installer and clicking next, next, next, finish, are available via Chocolatey’s public feed. In addition to its own feed, it exposes the Web Platform Installer’s command line utility so that any application available via the Web Platform Installer can be silently installed with Chocolatey. Since it really just sits on top of Nuget, you can provide your own private feed as well.
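
To give a flavor of that, installs look roughly like this (the package names are just illustrations and availability depends on the feed):

# Install a package from the public Chocolatey feed
cinst 7zip

# Install something distributed via the Web Platform Installer
cinst IISExpress -source webpi

# Install from a private Nuget-style feed
cinst MyInternalTool -source \\server\share\chocolateyFeed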

So let’s look at exactly how this works by exploring my setup script’s bootstrapper:

param(
    [string]$task="standalone",
    [string]$environment="test",
    [string]$sqlServer = $env:computername,
    [string]$appFabricServer = $env:computername,
    [string]$AdminServer = $env:computername,
    [string]$domain = "redmond.corp.microsoft.com",
    [switch]$doNotNeedBits,
    [switch]$addHostFileEntries,
    [switch]$skipPrerequisites,
    [switch]$noProxy
)
iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))

if(-not $skipPrerequisites) {
    .$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install hg
    if( test-path "$env:programfiles\Mercurial" ) {
        $mPath="$env:programfiles\Mercurial"
    }
    else {
        $mPath = "${env:programfiles(x86)}\Mercurial"
    }

    if( -not( test-path $env:systemdrive\dev )) { mkdir $env:systemdrive\dev }
    set-location $env:systemdrive\dev
    if( test-path socialbuilds ) {
        set-location socialbuilds
        .$mPath\hg pull
        .$mPath\hg update
    }
    else {
        .$mPath\hg clone https://epxsource/SocialBuilds
        set-location socialbuilds
    }
}
if($task -eq "standalone") {$addHostFileEntries=$true}
if($task -ne "AppFabric-Server") {$restartSuffix="Full"}
./psake/psake.ps1 cleansetup.ps1 -tasklist $task -properties @{env=$environment;sqlServer=$sqlServer;
    appFabricServer=$appFabricServer;AdminServer=$AdminServer;domain=$domain;
    doNotNeedBits=$doNotNeedBits;addHostFileEntries=$addHostFileEntries;
    restartSuffix=$restartSuffix;noProxy=$noProxy}

Notice these key lines:
iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))

This downloads and installs Chocolatey, and then here is an example of using Chocolatey to install the Mercurial source control client:

.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install hg

I should point out that under most circumstances, the above line could simply be:

cinst hg

Chocolatey’s install puts itself in your path and creates some aliases that make this possible, but because I use Chocolatey here in the same script that installs Chocolatey, the environment variables it sets are not available to me yet. I’d need to open a new shell.

As a side note, I use Chocolatey all the time now. If I need to hop on a random box and install a tool or set of tools, I now just launch a few lines of powershell and it’s all there. At Microsoft I often get asked for the source code of my repos by fellow employees who are unfamiliar with Mercurial. I have found that sending an email like this is very effective:

Hi Phil,

You can get that from https://epxsource/Galleries. We use Mercurial. The easiest way to get everything you need is to launch this from Powershell as admin:

iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))

.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install hg

$env:programfiles\Mercurial\hg clone https://epxsource/Galleries

This will install Mercurial and clone the galleries repo.

Matt

How cool is that? No Mercurial tutorial needed, and sometimes I get a reply back telling me what a cool script that is. I should really forward the compliment to Rob Reynolds since he was the one who basically wrote it.

So this really makes the consumption of my server setup script simple. As you can see, it basically clones (or updates) my script repo on the target machine where the script runs. This also means that if I commit changes to my script, rerunning it on the box will automatically pull in those changes. To simplify things further, I provide a batch file wrapper so that the script can be launched from any command line:

@echo off

powershell -NonInteractive -NoProfile -ExecutionPolicy bypass -Command "& '%~dp0bootstrap\bootstrap.ps1' %*"

All this does is call the powershell bootstrap.ps1 script (the one listed above), but key to this call is:
-ExecutionPolicy bypass

Without this, and assuming this script is being run on a fresh box, the user would get an error trying to run most powershell scripts. This setting prevents any scripts from blocking and suppresses all warnings regarding the security of the scripts. Often you will see advice suggesting that you use “unrestricted”. However, I have found that “bypass” is better, especially since I have had issues with setting the execution policy to unrestricted on Windows 8. According to the documentation on execution policies:

Bypass

- Nothing is blocked and there are no warnings or prompts.

- This execution policy is designed for configurations in which a Windows PowerShell script is built in to a larger application or for configurations in which Windows PowerShell is the foundation for a program that has its own security model.

This seems to match the use case here.

The one-liner setup call

So now as long as I put my batch file and bootstrap.ps1 on a network share accessible to others who need to use it, simply typing this at any command prompt will kick off the script:

\\server\share\bootstrap.bat

By default with no command line parameters passed in, a standalone setup will be installed. In my case, it takes about an hour to complete and I have a fully functioning set of applications when finished.
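If you need one of the other task lists, the parameters can be passed straight through the batch file since it forwards %* to bootstrap.ps1. The parameter names below are inferred from the variables in the script shown earlier, so treat this strictly as an illustrative sketch:

\\server\share\bootstrap.bat -task AppFabric-Server -environment test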

Making this personal

Being really impressed with what I can get done in powershell, and with how easy it is to install many applications using Chocolatey, has inspired me to create a personal bootstrapper which I have been tweaking over the past several weeks. It is still very rough and there is much I want to add, but I’d like to craft it into a sort of framework that allows individuals to create “recipes” that will serve up an environment to their liking. We are all VERY particular about how our environments are laid out and there really is no one size fits all.

If you are interested in seeing where I am going with this, I have been keeping it at Codeplex here. Right now this is really about setting up MY box, but it does do some interesting things like downloading and installing windows updates, turning off UAC (that dialog box that you may have never clicked “no” on) and making windows explorer usable by changing the defaults and showing me hidden files and known extensions. Here is the script for the windows explorer “fix”:

function Configure-ExplorerOptions([switch]$showHidenFilesFoldersDrives,
                                   [switch]$showProtectedOSFiles,
                                   [switch]$showFileExtensions) {
    $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced'
    if($showHidenFilesFoldersDrives) {Set-ItemProperty $key Hidden 1}
    if($showFileExtensions) {Set-ItemProperty $key HideFileExt 0}
    if($showProtectedOSFiles) {Set-ItemProperty $key ShowSuperHidden 1}
    Stop-Process -processname explorer -Force
}
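A hypothetical call, using the switches defined above, might look like this; it flips the registry values for hidden files and file extensions and then kills the explorer process so the new settings take effect when it comes back:

Configure-ExplorerOptions -showHidenFilesFoldersDrives -showFileExtensions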

So I hope you have found this helpful. I may dive into further detail in later posts or provide some short posts where I may include little “tidbits” of scripts that I have found particularly helpful. Then again, I may not.

Released RequestReduce 1.8: Making website optimization accessible to even more platforms by Matt Wrock

This week RequestReduce 1.8 was released expanding its range of platform compatibility along with some minor bug fixes.

Key Features Released

  • Syncing generated sprites and bundles across multiple web servers using Sql Server is now .Net 3.5 compatible. Thanks to Mads Storm (@madsstorm) for migrating the Entity Framework 4 implementation over to PetaPoco
  • Added support for Azure CDN end points. See below for API details needed to enable this.
  • Fixed dashboard and cache flushing to function on IIS6
  • Ability to manually attach the RequestReduce Response Filter earlier in the request processing pipeline via a new API call.
  • Fixed .Less implementation to pass querystring parameters. Thanks to Andrew Cohen (@omegaluz) for this bug fix.

There were a couple of bugs caught by some users on the day of release, but those were fixed in the first 24 hours and all is stable now. You can now get this version from Nuget or RequestReduce.com. It’s been very satisfying hearing from users who use RequestReduce on platforms such as classic ASP and even PHP on IIS, and I’m glad to be able to expand this usage even further.

Why RequestReduce is no longer using Entity Framework

The short answer here is compatibility with .Net 3.5. It may seem odd, as we stand on the precipice of the release of .Net 4.5, that this would be a significant concern, but I have received several requests to support Sql Server synchronization on .Net 3.5. A lot of shops are still on 3.5 and the Sql Server option is a compelling enterprise feature. It’s what we use at Microsoft’s EPX organization to sync the generated bundles and sprites across approximately 50 web servers. Since Entity Framework Code First is only compatible with .Net 4.0, we had to drop it in favor of a solution that would work with .Net 3.5.

The reason I originally chose to implement this feature using Entity Framework was mainly to become more familiar with how it worked and how it compared to the ORM that I have historically used, nHibernate. The data access needs of RequestReduce.SqlServer are actually quite trivial, so I felt it would be a good project to test out this ORM with little risk. In the end, I achieved what I wanted, which was to understand how it worked at a nuts and bolts level beyond the white papers and podcasts I had been exposed to. I have to say that it had come a long way since my initial exposure to it a few years back. The code first functionality felt very much like my nHibernate/Fluent nHibernate work flow. It still has some maturing to do, especially in regard to caching.

Mads Storm was kind enough to submit a pull request overhauling the EF implementation using a Micro ORM called PetaPoco. While I certainly could have ported RequestReduce to straight ADO given its simple data needs, the PetaPoco migration was simple given that it follows a similar pattern to Entity Framework. I would definitely recommend PetaPoco to anyone interested in a Micro ORM that needs .Net 3.5 compatibility. I had previously been interested in using a framework like Massive, Simple.Data or Dapper; however, all of these make use of the .Net 4 dynamic type. PetaPoco is the only micro ORM that I am aware of that is compatible with .Net 3.5.

How to integrate RequestReduce with Azure CDN Endpoints

Azure’s CDN (content delivery network) implementation is a little different from most standard CDNs like Akamai. My experience working with a couple of the major CDN vendors has been that you point your URLs to the same URL that you would use locally, with the exception that the host name is one dedicated to static content and whose DNS points to your CDN provider. The CDN provider serves your content from its own cache which is geographically located close to the requesting browser. If the CDN does not have the content cached, it makes a normal HTTP call to the “origin” server (your local server) using the same URL it was given but with the host name of your local site. Azure follows this same model with the exception that it expects your CDN content to reside in a directory (physical or virtual) explicitly named “CDN”.

Standard Implementation:

Browser –> http://cdn.yoursite.com/images/logo.png –> CDN Provider (cold cache) –> http://www.yoursite.com/images/logo.png

Azure Implementation:

Browser –> http://azurecdn.com/images/logo.png –> CDN Provider (cold cache) –> http://www.yoursite.com/cdn/images/logo.png

RequestReduce allows applications to serve its generated content via a CDN or cookie-less domain by specifying a ContentHost configuration setting. When this setting is provided, RequestReduce serves all of its generated javascript and css, and any local embedded resources in the CSS, using the host provided in the ContentHost setting. However, because not only the host but also the path differs when using Azure CDN endpoints, this solution fails: http://longazurecdnhostname.com/images/logo.png fails to get content from http://friendlylocalhostname.com/images/logo.png since the content is actually located at http://friendlylocalhostname.com/cdn/images/logo.png. RequestReduce’s ContentHost setting will now work with Azure as long as you include this API call somewhere in your application’s startup code:

RequestReduce.Api.Registry.UrlTransformer = (x, y, z) => z.Replace("/cdn/", "/");

This tells RequestReduce to remove the CDN directory from the path when it generates a URL.

Attaching the RequestReduce response filter early in the request

RequestReduce uses a Response Filter to dynamically analyze your web site’s markup and manipulate it by replacing multiple css and javascript references with bundled javascript and css files, transforming the background images in the CSS into sprites where it can. RequestReduce waits until the last possible moment of the request processing pipeline to attach itself to the response so that it has all of the information about the response needed to make an informed decision as to whether or not it should attach itself. This works well in almost all cases.

There are rare cases where an application may have another response filter that either does not play nice with other response filters by failing to chain its neighboring filter correctly, or that manipulates the content of the response in such a way that RequestReduce must filter the content after that filter has performed its manipulations.

I ran into this last week working with the MSDN and Technet Dev Centers in their adoption of RequestReduce. They have a ResponseFilter that gets attached in an MVC controller action filter, which runs before RequestReduce attaches itself. The nature of chained response filters is that the first filter to attach itself is the last filter to receive the response. Since the dev center ResponseFilter explicitly removes some excess css and javascript, it is important that RequestReduce receives the content last and is therefore attached first. To accommodate this scenario, I added the following API method that they were able to call in their action filter just before attaching their own filter:

RequestReduce.Api.Registry.InstallResponseFilter();

This tells RequestReduce to attach itself now.

Now excuse me as I slip into my 100% polyester leisure suit…

So what are you waiting for? Head over to Nuget and download RequestReduce today! It will make your site faster or my name isn’t Matt Wrock. Oh…and it’s Freeeeeeeeeeeeeee!!!!