Released AutoWrockTestable: Making test class composition easier by Matt Wrock

Late last year I blogged about a unit testing pattern I have been using for the past couple of years. It’s a pattern I initially learned from Matt Manela (@mmanela). I adapted the pattern to use automocking with Richard Cirerol’s wrapper. Over the last week I have been working to plug this into Visual Studio as a template that can easily add test classes and make one’s unit testing work flow more efficient.

I could, should, maybe will, but probably won’t write a separate post dedicated to building the Visual Studio extension. Ironically, while I am a member of the Visual Studio Gallery team, this is the first public extension I have written. While it is a trivial extension as extensions go, there were some interesting learnings that turned what I thought would be a couple of nights’ worth of work into a week of my spare time. Despite some frustrating incidents, it was a lot of fun.

Now let’s dive into AutoWrockTestable!

What’s better than AutoMocking? Why of course, AutoWrocking!

Visual Studio integration makes composing tests a snap!

Here is how to effortlessly add test classes to your solution with all mockable dependencies mocked:

1. Download the Visual Studio extension from CodePlex or the Visual Studio Gallery.
The extension will also install NuGet if you do not already have it and will add StructureMap, StructureMap.AutoMocking and Moq to your NuGet packages.

2. Create a skeleton of your implementation class.

public class OAuthTokenService
{
    private readonly IWebClientWrapper webClientWrapper;
    private readonly IRegistryWrapper registry;

    public OAuthTokenService(IWebClientWrapper webClientWrapper,
        IRegistryWrapper registry)
    {
        this.webClientWrapper = webClientWrapper;
        this.registry = registry;
    }

    public string GetAccessToken(string clientId, IOauthUrls oauthUrls)
    {
        return null;
    }
}

3. Click on the "Add Testable..." menu item in Solution Explorer's "Add" context menu.

[Screenshot: the "Add Testable..." item in Solution Explorer's "Add" context menu]

4. Enter the name of the class you want to test. You can enter any name but the text box will auto complete using all class files open in the editor. The first class in the active file is initially selected.

[Screenshot: the Add Testable wizard with the auto-completing class name text box]


5. AutoWrockTestable creates a new class file with the same name as your implementation class appending "Tests" to the name and containing this code:

using AutoWrockTestable;

namespace SkyCli.Facts
{
    class OAuthTokenServiceTests
    {
        class TestableOAuthTokenService : Testable<SkyCli.OAuth.OAuthTokenService>
        {
            public TestableOAuthTokenService()
            {
            }
        }
    }
}

Writing tests using Testable<ClassToTest>

The Testable class has its dependencies automatically mocked. Now you can start to write test methods using your Testable. I like to use nested classes (a class for every method I want to test) to organize my tests. Here is how a test might look:

class OAuthTokenServiceTests
{
    class TestableOAuthTokenService : Testable<SkyCli.OAuth.OAuthTokenService>
    {
        public TestableOAuthTokenService()
        {
        }
    }

    public class GetAccessToken
    {
        [Fact]
        public void WillReturnTokenFromRegistryIfAFreshOneIsFoundThere()
        {
            var testable = new TestableOAuthTokenService();
            var registryValues = new Dictionary<string, string>();
            registryValues.Add("access_token", "token");
            registryValues.Add("expires_in", "3600");
            registryValues.Add("grant_time", DateTime.Now.Ticks.ToString());
            testable.Mock<IRegistryWrapper>().Setup(x => x.GetValues("path"))
                .Returns(registryValues);

            var result = testable.ClassUnderTest.GetAccessToken("clientId", null);

            Assert.Equal("token", result);
        }
    }
}

See Using the Testable<T> Class for a complete explanation of the Testable<T> API.

For more information on the Testable pattern and Auto Mocking in general see The Testable Pattern and Auto Mocking Explained or see my previous blog post on the subject.

Turn off Internet Explorer Enhanced Security by Matt Wrock

If you enjoy lots of dialog boxes that require you to take action before you can review any unique URL, then you will not want to use this:

$AdminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\`    {A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"$UserKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\`    {A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"Set-ItemProperty -Path $AdminKey -Name "IsInstalled" -Value 0Set-ItemProperty -Path $UserKey -Name "IsInstalled" -Value 0Stop-Process -Name Explorer -ForceWrite-Host "IE Enhanced Security Configuration (ESC) has been disabled." -ForegroundColor Green

If on the other hand, IE enhanced security makes you want to stick your fist in an activated blender, do yourself a favor and step away from the blender and then invoke the above script.

If you walk the fence between these two, invoke it anyway and you’ll be glad you did.

You are welcome.

The Perfect Build Revisited by Matt Wrock


About two and a half years ago I wrote a series of posts documenting the work my team had done to automate our build process. We had completed a migration from VSS to SVN and used a combination of nAnt and CruiseControl to facilitate continuous integration and push button deployments to any of our environments including production.

Over the last couple of months, I’ve had the opportunity to put together an automated deployment process for my current organization at Microsoft. Throughout my career, I’ve worked on a few projects that were essentially a rewrite of a similar project I had worked on in the past for a different employer. What I love about these kinds of projects is that they are a great opportunity to do so many things better. I can remember those architectural decisions I had made and regretted but was too far in to easily change (usually a smell of an architectural weakness itself). Well, now I can avoid them and approach the problem from the angle I wished I had before. In a way, this was a similar situation.

While I felt good about the system I had put together before, I now had better tools at my disposal. I still think nAnt and CruiseControl are fine tools, but now I’m using PowerShell with psake instead of nAnt, TeamCity instead of CruiseControl, and our source code is in Mercurial instead of SVN. The other major difference between the system I’m building now and the one I had worked on before is that this system also includes the automation of server setup and configuration, taking a clean OS to a fully functioning application node serving any tier in the app (web, db, admin, etc.).

This post is intended to provide an overview of the new system and I may follow up with future posts that dive into more detailed coverage of various parts of the system.

Do you really need an automated build and deployment system?

Yes. You do.

You may be thinking that while an automated system sounds neat and all, you simply don’t have time to build one. While I tend to be very pragmatic in my approach to software architecture, I definitely see automated deployments as a must have and not a “nice to have.” The reason I say this is that over several deployments, more time is lost in the mechanics of deploying, there is far more risk of a bad deployment, and far more difficulty and time is spent troubleshooting deployments than if the process were automated.

Often, teams do not recognize the value of automated deployments until they experience it. Once they work with one, they can’t imagine going back. With automated builds and deployments, the drama of deployments is reduced to a simple routine task, teams have more time to focus on building features, and the business has more confidence that its features will move forward reliably and consistently. If you want to release more often and perhaps extend continuous integration to continuous deployment, you simply must automate the deployment process.

If they are so important, why did it take you over two years to start building one?

Fair question. I don’t intend to enumerate the political reasons here, of which there are many. That will have to wait for my memoir due out in 2042, “My Life, a Love Song.” Please keep an eye out for that one.

Throughout my tenure in the MSDN/Technet org at Microsoft, deployments have been managed by a combination of test and a “build team” in the Ops group. While I have certainly been vocal in pushing for more automation, the fact that other people did most of the work and that there was resistance from some to automating the process caused me to direct my focus to other things. There were certainly pain points along the way. There was a lot of ceremony involved in preparing for a deployment and in scheduling “hot fixes” with the build team. When there were problems with a deployment, it could sometimes be difficult to determine where things went wrong.

Recently, we transitioned to a new offshore vendor company. One of their responsibilities would be deployments and setting up new environments. Because these were mostly done manually, the logistics involved were often communicated verbally and via large step by step Word documents.

A side note: Many cultures have built a very rich and vibrant heritage around Oral history and story telling. I do not in any way want to disrespect these traditions. On the contrary, we should celebrate them. I do not believe that oral histories lend themselves well to automated builds and deployments.

Without going into the details, a lot fell through the cracks as the new team came on board. I do not fault the people on this team; I wouldn’t expect anyone to be able to build an environment for a complex app that they have never worked on before based on a few phone conversations and a SharePoint wiki. Our environment setups and deployments suddenly started having problems. Because a large part of the code I am involved with spans several apps, I am often approached when things go wrong here, and before long I found myself spending most of my time troubleshooting and fixing environments and their deployments. It soon became crystal clear that until an automated system was in place, this would continue to stand in my way of getting real feature work done. And instead of whining and complaining about it, I decided to just do it.

What exactly does an automated build and deployment system do?

For the system I set out to build, the following key components are included:

  1. Application compilation and packaging
  2. Deployment of application packages to various environments
  3. Bootstrap scripts for setting up a new server or environment

The last one has inspired a new personal side project, Autobox, that sets out to automate the building of a developer machine (or any kind of personal machine) from bare OS via a single command line. After all, if I can create a test server with sql server, app fabric caching, various windows services, and web applications along with all the file permissions and firewall rules involved, certainly I can create my own machine with all my preferred apps and settings ready to go.

Let’s examine each of these individually.

Application compilation and packaging

This is essentially the process that transforms the raw application bits with all of its code files, static assets, sql scripts, config files, and other application specific files into a zip file that can be consumed by the deployment scripts. This package in our case is typically composed of a directory for each application tier. Here is the package for our Galleries application:

[Screenshot: directory structure of the Galleries application package, with a folder for each application tier]

The packaging process is responsible for the actual compilation, which typically involves a call to msbuild that invokes the appropriate msbuild tasks from the original Visual Studio solution. In addition to transforming source files into compiled DLLs, the packaging process copies everything needed to deploy the application into a coherent directory structure and nothing more. This typically includes powershell scripts and various command line tools that run sql scripts to update the database with any schema changes, add metadata to lookup tables or migrate old data to conform to new schema or logic. It may also include scripts responsible for transforming web.config and app.configs with settings appropriate for the environment.
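To make that concrete, here is a minimal sketch of what such a packaging task could look like in psake. The solution name, folder layout, 7-Zip path, and the $stageDir and $packageDir variables are hypothetical stand-ins for whatever your application actually uses and would be defined in the script's properties block alongside $current and $BuildNumber:

task Package -depends Compile {
    # stage only what the deployment needs into a clean folder
    if (Test-Path $stageDir) { Remove-Item $stageDir -Recurse -Force }
    New-Item "$stageDir\sql" -ItemType Directory -Force | Out-Null
    Copy-Item "$current\MyApp.Web\http" "$stageDir\http" -Recurse
    Copy-Item "$current\DeployScripts\*.ps1" $stageDir
    Copy-Item "$current\Database\*.sql" "$stageDir\sql"

    # zip the staged directory so TeamCity can publish it as a build artifact
    exec { & "$current\Tools\7za.exe" a "$packageDir\MyApp-$BuildNumber.zip" "$stageDir\*" }
}

task Compile {
    # run the same msbuild targets Visual Studio uses for the solution
    exec { msbuild "$current\MyApp.sln" /t:Build /p:Configuration=Release }
}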

This first step of the build and deployment process had been in place for quite some time, so I just had to make some minor tweaks here and there. The individual application teams in my group are responsible for keeping the packaging scripts up to date, and it is wired into our continuous integration process. Every push of source code to the central Mercurial repository forces our build server, TeamCity, to invoke a set of scripts that include compilation, running unit tests and finally packaging. TeamCity then saves the zipped package and makes it available to the deployment scripts. If you are familiar with TeamCity, you know these are the build “artifacts.”

Deployment of application packages to various environments

Here is where my work largely started. Until recently, we had a script that TeamCity would invoke twice a day which would collect the packages of each app and aggregate them into another package for each deployable environment. This uses TeamCity dependent builds which will pull the build artifacts of the last successful application build into the deployment script’s working directory. Here are my Deployment Build settings that declare the dependencies:

[Screenshot: TeamCity dependency settings for the deployment build]

So in our case, we would have application packages for Forums, Search, Profile and various internal services as seen above and these would all be rolled into a single 7z file for each environment including test, staging, production, etc. This packaging script was also responsible for the final transformation of the configuration files. It would merge settings specific to each environment into the web and app configs so that the final package, say prod-7791.7z (7791 being the build number), had the exact web and app configs that would end up in production.

Well, this would take two and a half hours to run. Back in the day it was fairly fast, but as environments got added, the process took longer and longer. It would then take the build team a couple of hours to take this package and deploy its bits to each server, run the database upgrade scripts, stop and restart services, smoke test, etc. This became more and more painful the closer we got to release because as dev fixed bugs, it could take one to two days before they received feedback from test on those fixes.

Revamping this was fairly straightforward. I rewrote this script to transform the configs for only a single environment, which it would receive via a command parameter from TeamCity. I created a separate build config in TeamCity to make this very clear:

[Screenshot: the per-environment deployment build configurations in TeamCity]

Each of these build configurations runs the exact same script, but each passes different command line arguments to the build script indicating its environment. Also, some are wired to different version control branches. For example, our Int (Integration) environment builds off of the Release Candidate branch while the others build off of Trunk. Finally, there is an “Ad Hoc” config where anyone can run a custom build with custom command line parameters. If the Ad Hoc build fails, no one is notified and we don’t get particularly alarmed. Here is how the command line parameters are wired up for custom builds in TeamCity:

[Screenshot: custom build parameters for the Ad Hoc configuration in TeamCity]

The script is a normal powershell script that gets called via psake. Psake provides a very nice powershell based container for running builds. Think of it as an alternative to writing an MSBuild script. While MSBuild is more XML based and very declarative in nature, PSake allows you to script out all of your build tasks in powershell which makes a lot of sense for the type of things that a build script does - such as copying files around. I’m not going to dive into a PSake tutorial here but here is a snippet of my PSake script:

properties {
    $current = Resolve-Path .\default.ps1 | Split-Path
    $path = $current
    $BuildNumber = 0
    $ConfigDrop = ".\_configs"
    $WebDrop = "http"
    $Environment = "DEFAULT"
    $configVariables = New-Object System.Collections.Queue
}

Include .\psake\teamcity.ps1

task default -depends Package

task Configs -depends Copy-readme, SetupEnvironment, Configure-Social,
    Configure-StoApps, Configure-Services, Configure-SocialServices

task Package -depends SetupEnvironment, Clean, Configs, Database, preparesearch,
    SocialSites, StoApps, SocialServices, StopServices, Services, CopyConfigs,
    StartServices, FlushRequestReduce, RestartIIS, RestartAppFabric, TestPages,
    CleanEnvironment

TaskSetup {
    TeamCity-ReportBuildStart "Starting task $($psake.context.Peek().currentTaskName)"
}

TaskTearDown {
    TeamCity-ReportBuildFinish "Finishing task $($psake.context.Peek().currentTaskName)"
}

Task SetupEnvironment {
    .\Utilities\EnvironmentSetup.ps1 $current $Environment $configVariables
}

This is not any kind of special scripting language. It is normal powershell. PSake provides a Powershell module which exposes several functions like Task, Properties, etc. Many of these take script blocks as parameters. The PSake module really is not very large and therefore it does not take much investment to understand what it does and what functionality it provides. It really does not provide much “functionality” at all in terms of utility methods but it provides a very nice framework for organizing the various parts of your build script and specifying dependencies.

The snippet above is the beginning of my deployment script. The Properties section defines and sets script-wide variables, and these can be overridden via command line parameters when calling psake. Next are my tasks. Tasks might actually do something, like the SetupEnvironment task at the bottom, or they might alias a group of tasks to be run in a specific order, like the default, Configs and Package tasks. If you are familiar with msbuild, these are simply the equivalent of msbuild targets.
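Since the properties can be overridden when psake is invoked, TeamCity (or anyone at a PowerShell prompt) can point the same script at any environment. A call might look something like this; the property values are purely illustrative:

# run the Package task against the staging environment with a specific build number
.\psake\psake.ps1 .\default.ps1 -taskList Package -properties @{Environment="Staging"; BuildNumber=7791}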

When you call PSake, you can tell it to run a specific task or if you do not, it will run the default task. Even though I am not including most of my script here, it is not difficult to tell what the deployment script does by simply looking at the dependencies of the default task. It first sets up the environment by calling another powershell script that will set a bunch of global environment variables specific to the Environment property. It performs a clean of any previous build, it transforms the configs, and runs the database scripts. Then it executes several tasks that copy different directories to the web server, stops some windows services, copies the services code, starts the services, restarts IIS, runs some quick tests to make sure the apps are loading and finally cleans up after itself.

One nice thing about this script is that it does not use any kind of remoting, which can be important in some environments. The script can be run directly from the build agent (the server running the TeamCity Build Agent service) and target any environment. It does require that the service identity under which TeamCity runs is an administrator on the target web servers and SQL servers. To give you a glimpse into what is going on here, I specify all the server names specific to each environment in a config file named after the environment. So our Next (daily build) environment has a file called Next.ps1 that among many other things contains:

$global:WebServers     = "RR1STOCSAVWB18", "RR1STOCSAVWB17"
$global:ServicesServer = "RR1STOCSAVWB17"
 
Then my RestartIIS task looks like this:
Task RestartIIS {
    Restart-IIS $global:WebServers
}

function Restart-IIS([array] $servers) {
    foreach ($server in $servers) {
        .\Utilities\RemoteService.ps1 ($server -split "\\")[0] restart -service "W3SVC"
    }
}

RemoteService.ps1 contains a bunch of functions that make working with services on remote servers not so painful.
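I won’t paste the whole file, but to give a flavor of what lives in there, a helper along these lines (a simplified sketch, not the actual script) covers the stop/start/restart cases by leaning on the ServiceController objects that Get-Service returns:

function Invoke-RemoteServiceAction([string] $server, [string] $action, [string] $service) {
    # Get-Service can talk to a remote machine without any remoting setup
    $controller = Get-Service -Name $service -ComputerName $server
    if ($action -eq "stop" -or $action -eq "restart") {
        $controller.Stop()
        $controller.WaitForStatus("Stopped", "00:01:00")
    }
    if ($action -eq "start" -or $action -eq "restart") {
        $controller.Start()
        $controller.WaitForStatus("Running", "00:01:00")
    }
}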

 

Did the deployment succeed?

At any point in the scripts, if an error occurs, the build will fail. However, I also want to have some way to quickly check each application and ensure it can at least load. It is very possible that the build script will complete just fine, but there may be something in the latest app code or some change to the environment that causes an application to fail. If this happens, I want to know which app failed, fail the build and provide straightforward reporting to testers to discover where things broke down. Yes, each app build has its own set of unit tests; most apps have thousands. But there is a multitude of issues, both code related and server or network related, that can slip through the cracks and cause the app to fail.

At the end of every deployment, a series of URLs are “pinged” and expected to return a 200 HTTP status code. Currently we have 28 URLs in our tests. Now, a big reason for overhauling this system was to make it faster, so a big concern is that launching a bunch of app URLs will profoundly slow the build. To make this as efficient as possible, we use PowerShell jobs to multithread the HTTP requests and set a five minute timeout that will automatically fail all tests that do not complete before the timeout.

Here is the testing script:

task TestPages -depends SetupEnvironment {
    . .\tests.ps1
    Wait-Job -Job $openRequests -Timeout 300
    foreach ($request in $openRequests) {
        TeamCity-TestStarted $request.Name
        $jobOutput = (Receive-Job $request)
        if($jobOutput -is [system.array]) {$jobOutput = $jobOutput[-1]}
        $testParts = $jobOutput -split " ::: "
        if($testParts.Length -eq 2) {
            $testMessage=$testParts[1]
            $testTime=$testParts[0]
        }
        else {
            $testMessage=$testParts[0]
            $testTime=300
        }
        if($request.state -ne "Completed") {
            TeamCity-TestFailed $request.Name "Timeout" "Test did not complete within timeout."
        }
        Remove-Job $request -Force
        if ($testMessage -like "Failed*") {
            TeamCity-TestFailed $request.Name "Did not Receive a 200 Response" $testMessage
        }
        TeamCity-TestFinished $request.Name $testTime
    }
}

function Ping ([string] $pingUrl) {
    $jobArray = @()
    $job = Start-Job -scriptblock {param($url)
        $host.UI.RawUI.BufferSize = New-Object System.Management.Automation.Host.Size(8192,50)
        $ms = (Measure-Command {
            $web=[net.httpwebrequest]::create($url)
            $web.AllowAutoRedirect = $true
            $web.PreAuthenticate = $true
            $web.Timeout = 300000
            $systemWebProxy = [net.webrequest]::GetSystemWebProxy()
            $systemWebProxy.Credentials = [net.CredentialCache]::DefaultCredentials
            $web.Proxy = $systemWebProxy
            $web.Credentials = [net.CredentialCache]::DefaultCredentials
            try {
                $resp=$web.GetResponse()
            }
            catch [System.Net.WebException]{
                $resp=$_.Exception.Response
                $outerMessage = $_.Exception.Message
                $innerMessage = $_.Exception.InnerException
            }
        }).TotalMilliseconds
        $status = [int]$resp.StatusCode
        if ($status -ne 200) {
            $badServer = $resp.Headers["Server"]
            Write-Output "$ms ::: Failed to retrieve $url in $ms ms with status code: $status from server: $badServer"
            Write-Output $outerMessage
            Write-Output $innerMessage
        }
        else {
            Write-Output "$ms ::: Succeeded retrieving $url in $ms ms"
        }
    } -name "$pingUrl" -ArgumentList $pingUrl
    $jobArray += $Job
    return $jobArray
}

The individual test URLs are in the dot sourced tests.ps1:

$openRequests += Ping "http://$global:ServicesUrl/ChameleonService/Api.svc"
$openRequests += Ping "http://$global:AdminUrl/ChameleonAdmin/"
$openRequests += Ping "http://$global:ServicesUrl/SearchProviderServices/SearchProviderService.svc"
$openRequests += Ping "http://$global:ProfileApiUrl/ProfileApi/v1/profile/displayname/vamcalerts"
$openRequests += Ping http://$global:UserCardLoaderUrl
...

An interesting thing to note here is the use of the functions beginning with TeamCity-. These functions come from a module provided by the psake-contrib project that exposes several functions allowing you to interact with TeamCity’s messaging infrastructure. The functions I am using here create standard output messages formatted in such a way that TeamCity will treat them like test output, reporting when a test starts and finishes, whether it succeeded or failed, and how long it took. What is really nice about all of this is that now these tests light up in TeamCity’s test reporting:

 
[Screenshot: the URL ping tests showing up in TeamCity's test report]

I can zoom in on my failed tests to see why they failed:

[Screenshot: failure details for one of the URL tests in TeamCity]

Pretty slick eh?
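For reference, those TeamCity- functions from psake-contrib are just writing TeamCity “service messages” to standard output. If you ever need to emit one without the module, the raw messages look something like this (the test name and values are only an example):

Write-Output "##teamcity[testStarted name='http://www.yoursite.com/ChameleonService/Api.svc']"
Write-Output "##teamcity[testFailed name='http://www.yoursite.com/ChameleonService/Api.svc' message='Did not receive a 200 response']"
Write-Output "##teamcity[testFinished name='http://www.yoursite.com/ChameleonService/Api.svc' duration='4375']"

TeamCity watches the build output for these ##teamcity[] blocks and turns them into the test results you see above.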

Bootstrap scripts for setting up a new server or environment

In my original Perfect Build series, I did not include automation around setting up servers or environments. However, one of the habits I picked up from the teams I work with at Microsoft is the inclusion of a build.bat file at the root of every source code repo that can build a development environment from scratch. In the past I had never followed this practice. I had not really used powershell and was not aware of all the possibilities available, which is basically that you can do pretty much anything in powershell. I’ll admit there is a learning curve involved, but it is well worth it. Being able to fire up a development environment for an app with a single command has proven to be a major time saver and a great way to “document” application requirements.

Now it’s one thing to get a dev environment up and running, but getting a true server environment up can be more challenging. Since many organizations don’t give developers access to the server environments, setting these up often falls under server operations. This may involve dev sending ops instructions or sitting down with an ops engineer to get a server up and running. A lot of time can be lost here and it’s easy to not properly update these instructions. I have personally spent an aggregate of weeks troubleshooting environments that were not set up correctly.

One solution commonly employed here is to use VM images. Once you get an environment set up the way it is supposed to be inside of a VM, take a snapshot and simply apply that snapshot whenever you need to setup a new server. I don’t like this approach. It is too easy for VM images to become stale and they don’t serve well to “document” all of the requirements of an application. The fact is, just about anything can be scripted in powershell and in my opinion, if it cannot be scripted then you have probably made a poor choice in technology. Powershell scripts can replace “deployment documents” or server setup documents. They should be readable by both developers and server support engineers. Even if one is not well versed in powershell, I believe any technical professional should at least be able to read a powershell script and deduce the gist of what it is doing.

For my applications, I put together a script, again in psake format, that can build any application tier from a bare OS. It can also build a complete environment on a stand alone server. To provide an idea of what my script can do, here is the head of the psake script:

properties {
    $currentDir = Resolve-Path .\cleansetup.ps1 | Split-Path
    $profile = "$currentDir\CommApps\Profile"
    $forums = "$currentDir\CommApps\Forums"
    $search = "$currentDir\CommApps\Search"
    $chameleon = "$currentDir\CommApps\Chameleon"
    $configVariables = New-Object System.Collections.Queue
    IF ( TEST-PATH d:\) { $httpShare="d:\http" } else { $httpShare="c:\http" }
    $env = "test"
    $appFabricShareName = "velocity"
    $buildServerIdentity = "Redmond\Idiotbild"
    $domain = "redmond.corp.microsoft.com"
    $buildServer = "EpxTeamCityBuild.redmond.corp.microsoft.com"
    $buildServerQueue = "bt51"
    $doNotNeedsBits = $false
    $addHostFileEntries = $false
    $sqlServer = $env:computername
    $appFabricServer = $env:computername
    $AdminServer = $env:computername
    $restartSuffix = ""
    $noProxy = $true
}

Include .\psake\teamcity.ps1

task default -depends standalone

task standalone -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits, Setup-Roles,
    Disable-InternetExplorerESC, Database-server, Install-IIS-Rewrite-Module,
    Install-Velocity, Setup-MSDTC, Install-Event-Sources, Install-Certificates,
    Setup-Response-Headers, Register-ASP, Wait-For-Bits, Setup-IIS, Add-DB-Perms,
    Configure-Velocity, Install-WinServices, Set-Queue-Perms

task WebAppCache-Server -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits,
    Setup-Roles, Configure-Group-Security, Install-IIS-Rewrite-Module, Install-Velocity,
    Setup-MSDTC, Install-Event-Sources, Install-Certificates, Setup-Response-Headers,
    Register-ASP, Wait-For-Bits, Setup-IIS, Add-DB-Perms, Configure-Velocity,
    Install-WinServices, Set-Queue-Perms

task AppFabric-Server -depends Setup-Proxy, Set-EnvironmentParams, Setup-Roles,
    Configure-Group-Security, Install-Velocity, Configure-Velocity

task Web-server -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits, Setup-Roles,
    Configure-Group-Security, Install-IIS-Rewrite-Module, Setup-MSDTC, Install-Event-Sources,
    Install-Certificates, Setup-Response-Headers, Register-ASP, Wait-For-Bits,
    Setup-IIS, Add-DB-Perms

task Admin-Server -depends Setup-Proxy, Set-EnvironmentParams, Pull-Bits, Setup-Roles,
    Configure-Group-Security, Install-IIS-Rewrite-Module, Setup-MSDTC, Install-Event-Sources,
    Setup-Response-Headers, Register-ASP, Wait-For-Bits, Setup-IIS, Add-DB-Perms,
    Install-WinServices, Set-Queue-Perms

task Database-Server -depends Set-EnvironmentParams, Configure-Group-Security,
    Install-SqlServer, Create-Databases

task Post-Restart-Full -depends Set-EnvironmentParams, Remove-Startup,
    Configure-Velocity, Install-WinServices, Set-Queue-Perms

task Post-Restart -depends Remove-Startup, Configure-Velocity

task Get-Bits -depends Set-EnvironmentParams, Pull-Bits, Wait-For-Bits

By looking at the tasks you can get a feel for all that’s involved at each tier. First let me say that this script took about 20x more effort to write than the deployment script. I’m proud to report that I mastered file copying long ago. Once I finally managed to figure out the difference between source and destination, it’s been smooth sailing ever since. This script really taught me a lot about not only powershell but also about how the Windows OS and many of the administrative apps work together.

If I had to identify the step that was the biggest pain in the butt to figure out, far and away it was installing and configuring AppFabric. This is Microsoft’s distributed caching solution formerly known as Velocity. One thing that makes it tricky is that, at least in my case, it requires a reboot after installation and before configuration. I certainly do not want to include our entire server setup script here, but let me include the AppFabric portion. Again, keep in mind this is coming from a psake consumable script. So the tasks can be thought of as the “entry points” of the script while the functions serve as the “private” helper methods familiar from more formal programming languages.

task Install-Velocity -depends Install-DotNet4 {
    $global:restartNeeded = $false
    Start-Service -displayname "Windows Update"
    if (!(Test-Path "$env:windir\system32\AppFabric")){
        $dest = "appfabric.exe"
        if (Is64Bit){
            $url = "http://download.microsoft.com/download/1/A/D/1ADC8F3E-4446-4D31-9B2B-9B4578934A22/WindowsServerAppFabricSetup_x64_6.1.exe"
        } else{
            $url = "http://download.microsoft.com/download/1/A/D/1ADC8F3E-4446-4D31-9B2B-9B4578934A22/WindowsServerAppFabricSetup_x86_6.1.exe"
        }
        Download-File $url (join-path $currentDir $dest)
        ./appfabric.exe /i "cachingservice,cacheclient,cacheadmin"

        Start-Sleep -s 10
        $p = Get-Process "appfabric"
        $p.WaitForExit()
        $global:restartNeeded = $true
    } else {
        Write-Host "AppFabric - Already Installed..." -ForegroundColor Green
    }
}

task Configure-Velocity -depends Create-Velocity-Share, Install-Velocity {
    if($global:restartNeeded -eq $true -or $global:restartNeededOverride -eq $true) { RebootAndContinue }
    Load-Module DistributedCacheConfiguration
    $clusterInfo = Get-CacheClusterInfo "XML" "\\$env:computername\$appFabricShareName"
    if( $clusterInfo.IsInitialized -eq $false ) {
        new-CacheCluster "XML" "\\$env:computername\$appFabricShareName" "Medium"
        Register-CacheHost -Provider XML -ConnectionString "\\$env:computername\$appFabricShareName" `
            -CachePort 22233 -ClusterPort 22234 -ArbitrationPort 22235 -ReplicationPort 22236 `
            -HostName $env:computername -Account "NT AUTHORITY\Network Service"
        Add-CacheHost -Provider XML -ConnectionString "\\$env:computername\$appFabricShareName" `
            -Account "NT AUTHORITY\Network Service"
        Load-Module DistributedCacheAdministration
        use-cachecluster -Provider XML -ConnectionString "\\$env:computername\$appFabricShareName"
        New-Cache ForumsCache -TimeToLive 1440
        Set-CacheClusterSecurity -SecurityMode None -ProtectionLevel None

        start-cachecluster
        netsh firewall set allowedprogram $env:systemroot\system32\AppFabric\DistributedCacheService.exe APPFABRIC enable
    }
}

function Is64Bit {
    [IntPtr]::Size -eq 8
}

function Download-File([string] $url, [string] $path) {
    Write-Host "Downloading $url to $path"
    $downloader = new-object System.Net.WebClient
    $downloader.DownloadFile($url, $path)
}

function RebootAndContinue {
    $global:restartNeededOverride = $false
    Copy-Item "$currentDir\post-restart-$restartSuffix.bat" `
        "$env:appdata\Microsoft\Windows\Start Menu\programs\startup"
    Restart-Computer -Force
}

Now there are several ways to configure AppFabric and this just demonstrates one approach. This uses the XML provider and it only installs the caching features of AppFabric.

Installing applications with Chocolatey

One “rediscovery” I made throughout this process is an open source project built on top of NuGet called Chocolatey. This is the brainchild of Rob Reynolds, one of the original creators of what we know as NuGet today, which was once called Nu before development was handed off to Microsoft and Outercurve. I say “rediscovery” because I stumbled upon this a year ago but didn’t really get it. However, it really makes sense when it comes to build/setup automation, whether that is for an application server or your personal machine.

Chocolatey is a framework around installing and setting up applications via silent installations. Many of the apps that you and I are used to manually downloading, then launching the installer and clicking next, next, next, finish are available via Chocolatey’s public feed. In addition to its own feed, it exposes the web platform installer’s command line utility so that any application available via the web platform installer can be silently installed with Chocolatey. Since it really just sits on top of NuGet, you can provide your own private feed as well.
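To give a feel for what that looks like once Chocolatey is on a machine, here are a few representative installs. The package ids and share path are illustrative, so check the feed for the exact names:

cinst 7zip
cinst IIS7 -source webpi
cinst MyInternalPackage -source \\server\packages

The first pulls from the public Chocolatey feed, the second hands off to the Web Platform Installer, and the third resolves against a private NuGet-style feed.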

So let’s look at exactly how this works by exploring my setup script’s bootstrapper:

param(
    [string]$task="standalone",
    [string]$environment="test",
    [string]$sqlServer = $env:computername,
    [string]$appFabricServer = $env:computername,
    [string]$AdminServer = $env:computername,
    [string]$domain = "redmond.corp.microsoft.com",
    [switch]$doNotNeedBits,
    [switch]$addHostFileEntries,
    [switch]$skipPrerequisites,
    [switch]$noProxy
)

iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))

if(-not $skipPrerequisites) {
    .$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install hg
    if( test-path "$env:programfiles\Mercurial" ) {
        $mPath="$env:programfiles\Mercurial"
    }
    else {
        $mPath = "${env:programfiles(x86)}\Mercurial"
    }

    if( -not( test-path $env:systemdrive\dev )) { mkdir $env:systemdrive\dev }
    set-location $env:systemdrive\dev
    if( test-path socialbuilds ) {
        set-location socialbuilds
        .$mPath\hg pull
        .$mPath\hg update
    }
    else {
        .$mPath\hg clone https://epxsource/SocialBuilds
        set-location socialbuilds
    }
}

if($task -eq "standalone") {$addHostFileEntries=$true}
if($task -ne "AppFabric-Server") {$restartSuffix="Full"}

./psake/psake.ps1 cleansetup.ps1 -tasklist $task -properties @{env=$environment;sqlServer=$sqlServer;
    appFabricServer=$appFabricServer;AdminServer=$AdminServer;domain=$domain;
    doNotNeedBits=$doNotNeedBits;addHostFileEntries=$addHostFileEntries;
    restartSuffix=$restartSuffix;noProxy=$noProxy}

Notice these key lines:
iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))

This downloads and installs Chocolatey, and then here is an example of using Chocolatey to install the Mercurial source control client:

.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install hg

I should point out that under most circumstances, the above line could simply be:

cinst hg

Chocolatey’s install puts itself in your path and creates some aliases that make this possible, but because I use Chocolatey here in the same script that installs Chocolatey, the environment variables it sets are not available to me yet. I’d need to open a new shell.

As a side note, I use Chocolatey all the time now. If I need to hop on a random box and install a tool or set of tools, I now just launch a few lines of powershell and it’s all there. At Microsoft I often get asked for the source code to my repos by fellow employees who are unfamiliar with Mercurial. I have found that sending an email like this is very effective:

Hi Phil,

You can get that from https://epxsource/Galleries. We use Mercurial. The easiest way to get everything you need is to launch this from Powershell as admin:

iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))

.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install hg

$env:programfiles\Mercurial\hg clone https://epxsource/Galleries

This will install Mercurial and clone the galleries repo.

Matt

How cool is that. No Mercurial tutorial needed and sometimes I get a reply back telling me what a cool script that is. I should really forward the compliment to Rob Reynolds since he was the one who basically wrote it.

So this really makes the consumption of my server setup script simple. As you can see, it basically clones (or updates) my script repo on the target machine where the script runs. This also means that if I commit changes to my script, rerunning this script on the box will automatically pull in those changes. To simplify things further, I provide a batch file wrapper so that the script can be launched from any command line:

@echo off

powershell -NonInteractive -NoProfile -ExecutionPolicy bypass -Command "& '%~dp0bootstrap\bootstrap.ps1' %*"

All this does is call the powershell bootstrap.ps1 script (the one listed above), but the key to this call is:
-ExecutionPolicy bypass

Without this, and assuming this script is being run on a fresh box, the user would get an error trying to run most powershell scripts. This setting prevents any scripts from blocking and suppresses all warnings regarding the security of the scripts. Often you will see advice suggesting that you use “unrestricted”. However, I have found that “bypass” is better, especially since I have had issues with setting the execution policy to unrestricted on Windows 8. According to the documentation on execution policies:

Bypass - Nothing is blocked and there are no warnings or prompts. This execution policy is designed for configurations in which a Windows PowerShell script is built in to a larger application or for configurations in which Windows PowerShell is the foundation for a program that has its own security model.

This seems to match the use case here.

The one liner setup call

So now as long as I put my batch file and bootstrap.ps1 on a network share accessible to others who need to use it, simply typing this at any command prompt will kick off the script:

\\server\share\bootstrap.bat

By default, with no command line parameters passed in, a standalone setup will be installed. In my case, it takes about an hour to complete and I have a fully functioning set of applications when finished.
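Any of the bootstrap.ps1 parameters can also be passed straight through the batch file since it forwards %*. For example, standing up just a web tier against an existing database server might look something like this (the server and environment names are made up):

\\server\share\bootstrap.bat -task Web-server -environment staging -sqlServer STAGINGSQL01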

Making this personal

Being really impressed with what I can get done in powershell and how easy it is to install many applications using Chocolatey has inspired me to create a personal bootstrapper, which I have been tweaking over the past several weeks. It is still very rough and there is much I want to add, but I’d like to craft it into a sort of framework allowing individuals to create “recipes” that will serve up an environment to their liking. We are all VERY particular about how our environments are laid out and there really is no one size fits all.

If you are interested in seeing where I am going with this, I have been keeping it at Codeplex here. Right now this is really about setting up MY box, but it does do some interesting things like downloading and installing windows updates, turning off UAC (that dialog box that you may have never clicked “no” on) and making windows explorer usable by changing the defaults and showing me hidden files and known extensions. Here is the script for the windows explorer “fix”:

function Configure-ExplorerOptions([switch]$showHidenFilesFoldersDrives,
                                   [switch]$showProtectedOSFiles,
                                   [switch]$showFileExtensions) {
    $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced'
    if($showHidenFilesFoldersDrives) {Set-ItemProperty $key Hidden 1}
    if($showFileExtensions) {Set-ItemProperty $key HideFileExt 0}
    if($showProtectedOSFiles) {Set-ItemProperty $key ShowSuperHidden 1}
    Stop-Process -processname explorer -Force
}
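Calling it is just a matter of flipping on the switches you care about, for example:

# show hidden files and real file extensions, then restart explorer to pick up the change
Configure-ExplorerOptions -showHidenFilesFoldersDrives -showFileExtensions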

So I hope you have found this helpful. I may dive into further detail in later posts or provide some short posts where I may include little “tidbits” of scripts that I have found particularly helpful. Then again, I may not.

Released RequestReduce 1.8: Making website optimization accessible to even more platforms by Matt Wrock

This week RequestReduce 1.8 was released expanding its range of platform compatibility along with some minor bug fixes.

Key Features Released

  • Syncing generated sprites and bundles across multiple web servers using SQL Server is now .Net 3.5 compatible. Thanks to Mads Storm (@madsstorm) for migrating the Entity Framework 4 implementation over to PetaPoco.
  • Added support for Azure CDN endpoints. See below for API details needed to enable this.
  • Fixed dashboard and cache flushing to function on IIS6
  • Ability to manually attach the RequestReduce Response Filter earlier in the request processing pipeline via a new API call.
  • Fixed the .Less implementation to pass querystring parameters. Thanks to Andrew Cohen (@omegaluz) for this bug fix.

There were a couple of bugs caught by some users the day of release, but those were fixed in the first 24 hours and all is stable now. You can now get this version from NuGet or RequestReduce.com. It’s been very satisfying hearing from users who use RequestReduce on platforms such as classic ASP and even PHP on IIS, and I’m glad to be able to expand this usage even further.

Why RequestReduce is no longer using Entity Framework

The short answer here is compatibility with .Net 3.5. It may seem odd, as we stand on the precipice of the release of .Net 4.5, that this would be a significant concern, but I have received several requests to support SQL Server synchronization on .Net 3.5. A lot of shops are still on 3.5 and the SQL Server option is a compelling enterprise feature. It’s what we use at Microsoft’s EPX organization to sync the generated bundles and sprites across approximately 50 web servers. Since Entity Framework Code First is only compatible with .Net 4.0, we had to drop it in favor of a solution that would work with .Net 3.5.

The reason I originally chose to implement this feature using Entity Framework was mainly to become more familiar with how it worked and how it compared to the ORM that I have historically used, nHibernate. The data access needs of RequestReduce.SqlServer are actually quite trivial, so I felt like it would be a good project to test out this ORM with little risk. In the end, I achieved what I wanted, which was to understand how it worked at a nuts and bolts level beyond the white papers and podcasts I had been exposed to. I have to say that it has come a long way since my initial exposure to it a few years back. The code first functionality felt very much like my nHibernate/Fluent nHibernate work flow. It still has some maturing to do, especially in regard to caching.

Mads Storm was kind enough to submit a pull request overhauling the EF implementation using a micro ORM called PetaPoco. While I certainly could have ported RequestReduce to straight ADO given its simple data needs, the PetaPoco migration was simple given that it follows a similar pattern to Entity Framework. I would definitely recommend PetaPoco to anyone interested in a micro ORM that needs .Net 3.5 compatibility. I had previously held interest in using a framework like Massive, Simple.Data or Dapper. However, all of these make use of the .Net 4 dynamic type. PetaPoco is the only micro ORM that I am aware of that is compatible with .Net 3.5.

How to integrate RequestReduce with Azure CDN Endpoints

Azure’s CDN (content delivery network) implementation is a little different from most standard CDNs like Akamai. My experience working with a couple of the major CDN vendors has been that you point your URLs to the same URL that you would use locally, with the exception that the host name is one dedicated to static content and whose DNS points to your CDN provider. The CDN provider serves your content from its own cache, which is geographically located close to the requesting browser. If the CDN does not have the content cached, it makes a normal HTTP call to the “origin” server (your local server) using the same URL it was given but with the host name of your local site. Azure follows this same model with the exception that it expects your CDN content to reside in a directory (physical or virtual) explicitly named “CDN”.

Standard Implementation:

Browser –> http://cdn.yoursite.com/images/logo.png –> CDN Provider (cold cache) –> http://www.yoursite.com/images/logo.png

Azure Implementation:

Browser –> http://azurecdn.com/images/logo.png –> CDN Provider (cold cache) –> http://www.yoursite.com/cdn/images/logo.png

RequestReduce allows applications to serve its generated content via a CDN or cookie less domain by specifying a ContentHost configuration setting. When this setting is provided, RequestReduce serves all of its generated javascript and css and any local embedded resources in the CSS using the host provided in the ContentHost setting. However, because not only the host but also the path differs when using Azure CDN endpoints, this solution fails because http://longazurecdnhostname.com/images/logo.png fails to get content from http://friendlylocalhostname.com/images/logo.png since the content is actually located at http://friendlylocalhostname.com/cdn/images/logo.png. RequestReduce’s ContentHost setting will now work with Azure as long as you include this API call somewhere in your application’s startup code:

RequestReduce.Api.Registry.UrlTransformer = (x, y, z) => z.Replace("/cdn/", "/");

This tells RequestReduce to remove the CDN directory from the path when it generates a URL.

Attaching the RequestReduce response filter early in the request

RequestReduce uses a Response Filter to dynamically analyze your web site’s markup and manipulate it by replacing multiple css and javascript references with bundled javascript and css files transforming the background images in the CSS with sprites where it can. RequestReduce waits until the last possible moment of the request processing pipeline to attach itself to the response so that it has all of the information about the response needed to make an informed decision as to whether or not it should attach itself. This works well in almost all cases.

There are rare cases where an application may have another response filter that either does not play nice with other response filters by not chaining its neighboring filter correctly, or that manipulates the content of the response in such a way that RequestReduce needs to filter the content after this filter has performed its manipulations.

I ran into this last week working with the MSDN and Technet Dev Centers in their adoption of RequestReduce. They have a response filter that gets attached in an MVC controller action filter, which is before RequestReduce attaches itself. The nature of chained response filters is that the first filter to attach itself is the last filter to receive the response. Since the dev center response filter explicitly removes some excess css and javascript, it is important that RequestReduce receives the content last and is therefore attached first. To accommodate this scenario, I added the following API method that they were able to call in their action filter just before attaching their own filter:

RequestReduce.Api.Registry.InstallResponseFilter();

This tells RequestReduce to attach itself Now.

Now excuse me as I slip into my 100% polyester leisure suit…

So what are you waiting for? Head over to Nuget and download RequestReduce today! It will make your site faster or my name isn’t Matt Wrock. Oh…and its Freeeeeeeeeeeeeee!!!!

What you should know about running ILMerge on .Net 4.5 assemblies targeting .Net 4.0 by Matt Wrock

I might have also entitled this:

“How to avoid TypeLoadException: Could not load type 'System.Runtime.CompilerServices.ExtensionAttribute' “

But I didn’t.

First, the moral of this story

I am about to take you on a debugging journey that will make some laugh and others cry and a fortunate few will travel through the entire spectrum of human emotion that will take them down into the seven bowels of hades only to be resurrected into the seven celestial states of ultimate being that will consummate in a catharsis that unifies soul, body and mind with your favorite My Little Pony character. If this does not appeal to you then know this:

If you use ILMerge to merge several assemblies into one on a machine with .Net 4.5 Beta installed and intend to have this merged assembly run on a machine running .Net 4.0, DO NOT use the following TargetPlatform switch value:

/targetplatform:"v4,c:\windows\Microsoft.NET\Framework\v4.0.30319"
 
Instead use this:
/targetplatform:"v4,C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0"
If you are interested in learning some details about targeting different frameworks, picking up some nice IL debugging tips, or understanding what an “in place” upgrade like 4.5 (one that does not officially change the runtime version) means, along with some techniques for debugging such interesting scenarios, then read on.

Twas the night before Beta

So it seems serendipitous that I write this on the eve of the Visual Studio 11 launch. This last weekend I installed the beta bits on my day-to-day development environments. As a member of the team that owns the Visual Studio Gallery and the MSDN Code Samples Gallery and their integration with the Visual Studio IDE, I’ve been viewing bits hot out of the oven for some time now. The product seems stable and everyone seems to feel comfortable installing it side by side with VS 10. You can target .Net 4.0, so why not just dev on it full time and enjoy the full, rich dogfooding experience? At home where I do development on my OSS project RequestReduce, I work on a 5 year old Lenovo T60P laptop. Its name is Dale. So the perf and memory footprint improvements of VS11 have a special appeal to me. Oh, and to Dale too. Right Dale? Thought so.

Most solutions can be loaded in both VS10 and VS11 without migration

So day 1, Saturday: after some initial pain getting an XUnit test runner up and running, it looks like I’m ready to go. I load up RequestReduce and all projects load fine. I’m also happy that after loading them in VS11, they still load and compile in VS10. Next I build in VS11 and all unit and integration tests pass. Sweet! Let’s get to work and make it happen.

Cut on the bleeding edge

So I had been exchanging emails all week with a developer having issues with getting RequestReduce to play nice with the Azure CDN. Turns out Azure handles CDN URLs differently from the other CDNs I have worked with in the past by requiring all CDN content to be placed in a CDN directory on the origin server. However the CDN URL should not include the CDN directory in the path. There were some minor changes I needed to make to get the RequestReduce API to work nice with this setup. Just to be sure my changes were good, I spun up a new Azure instance and created a CDN endpoint. Then I deployed my test app with RequestReduce plugged in to do bundling and minification and WHAT?!

 

[TypeLoadException: Could not load type 'System.Runtime.CompilerServices.ExtensionAttribute' from assembly 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.]

This doesn’t look good and my first reaction is “Stupid Azure.” But I have users who report that RequestReduce runs fine on Azure. To double check, I run the same test app on my dev box; all is good. So despite the promises that assemblies built with .Net 4.5 and VS11 will run fine on .Net 4.0, I create a clean VM with .Net 4.0 and run my test app there. Yep. I get the same error. I promptly shout to my secretary, “Marge, get me building 41 and cancel my tennis match with Soma!” Perhaps a cunning power play like this will motivate him to light a proverbial fire under the sorry devs that are causing these exceptions to be thrown. Then it all hits me: it’s Saturday, I have no secretary and, let’s be honest, I won’t be playing tennis with Soma (head honcho at MS DevDiv that pumps out Visual Studio) any time soon. I weep.

Debugging Begins

I soon wake up recalling that I don’t even like tennis and I’m ready to fight. I’m gonna win this. From the stack trace I can see that the error is coming from my StructureMap container setup. StructureMap is a very cool IOC container used for managing dependency injection. It allows you to write code that references types by their interfaces so that you can easily swap out concrete implementation classes via configuration or API. There are many reasons for doing this that are beyond the scope of this post, but the most common application I use it for is testing. If I have all my services and dependencies handed to me from StructureMap as interfaces, then I can create tests that also pass in interfaces with mock implementations. Now I have tests that don’t have to worry about whether all that code in the dependent services actually works. I have other tests that test those. I can control the data that these services will pass into the method I am testing, and that allows my test to hyper focus on just the code in the method I’m testing. If this sounds unfamiliar or confusing, I urge you to research the topic. It transformed the way I write code for the better.

So one thing StructureMap has to do to make all of this possible at app startup time is scan my assemblies for interfaces and the concrete classes that I tell it to use. It is here where I am running into this TypeLoadException. So who are you 'System.Runtime.CompilerServices.ExtensionAttribute' and what is your game? Why do you play with me?

After some googling, I get a little dirt on this type. Apparently it is an attribute that decorates any class that contains an extension method. In C#, you will never have to include this attribute directly. The compiler will do it for you. You will see it in the IL. Well, this type has moved assemblies in .Net 4.5. Apparently it was too good for System.Core and has moved to an executive suite in mscorlib. This is all complicated by the fact that the upgrade from 4.0 to 4.5 is what they call in “the industry” an In Place Upgrade. That means that you will not see a v4.5.078362849 folder in your c:\windows\Microsoft.Net\Framework directory. No. The 4.5 upgrade gets paved over the 4.0 bits and simply updates the DLLs in the C:\Windows\Microsoft.NET\Framework\v4.0.30319 folder. Not a real fan of this and I don’t know what the reasoning is, but that’s how it’s done.
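A quick way to see where the type lives on any given machine is to ask it from a PowerShell prompt (assuming the console is running on the CLR you care about):

# on a box with 4.5 installed this reports mscorlib; on an older framework it resolves from System.Core
[System.Runtime.CompilerServices.ExtensionAttribute].Assembly.GetName().Name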

So now I’m thinking that this must be some edge case caused by StructureMap using reflection in a way that demands loading types from the assemblies that were present at compile time. I should also mention that I have a class in my assembly that has an extension method. So I find a way to tell StructureMap to go ahead and scan the assemblies but not worry about any attributes. Since you can decorate classes with special StructureMap attributes that tell StructureMap a class can be plugged into a specific interface, it will try to load every attribute it finds to see if it is one of these special attributes. Well, I don’t use those, so I tell StructureMap to IgnoreStructureMapAttributes().
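The registration ends up looking something like this; treat it as a sketch of the idea using the StructureMap 2.x scanning API rather than RequestReduce’s exact configuration:

ObjectFactory.Initialize(x =>
    x.Scan(scan =>
    {
        scan.TheCallingAssembly();
        scan.WithDefaultConventions();
        // skip loading custom attributes during the scan so the relocated
        // ExtensionAttribute never has to be resolved
        scan.IgnoreStructureMapAttributes();
    }));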

Ahhh. I am convinced that I “have it.” While my code builds and deploys to my VM, all on my 5-year-old laptop (remember Dale?), I have time to file a Connect bug, nice and snug in the self-righteous knowledge that I am doing the right thing. I have been wronged by these bleeding-edge bits but I’m not angry. I am simply informing the authorities of the incident in the hopes that others will not need to suffer the same fate.

Ok. My code has now been built and deployed and is ready to run. It’s gonna be great. I’ll see my ASP.NET web page with bundled and minified JavaScript and I can move on with my weekend chores. I launch the test app in a browser and…Criminey! Same error but different stack trace. Now it’s coming from code that instantiates a ResourceManager. This is not instilling confidence. This isn’t even technically my code. It is code auto-generated by VS when you add resources to the Resources tab in the VS project properties. Really? VS’s own code isn’t even backwards compatible? The rubbish I have to work with. It turns out the ResourceManager does something similar to StructureMap’s initialization: it scans an assembly for resources. It iterates over every type to see if it matches what you have told the ResourceManager to look for. Ok, ok. I guess I’m just gonna have to refactor this too. And what next? When does it stop? When is enough enough?!

So I do refactor my Resource.

using (var myStream = Assembly.GetExecutingAssembly()
    .GetManifestResourceStream("RequestReduce.Resources.Dashboard.html"))
{
    using (var myReader = new StreamReader(myStream))
    {
        dashboard = myReader.ReadToEnd();
    }
}

If you are familiar with RequestReduce, this is the HTML page that is the RequestReduce dashboard. I embed it as a resource in the RequestReduce DLL. Now I load it through a manifest resource stream into a static string, and to be honest this seems much more efficient than scanning every class for resources when I know exactly which file contains my resource and only need to load the contents of that file.

So I build and deploy again. And now, sweet victory, I see the beautiful blue background that is the default ASP.NET project home page and I see my minified CSS. But wait…oh no…something’s not right. The navigation tabs are stacked on top of each other and don’t look like tabs at all. Where is my JavaScript? It is completely gone. If RequestReduce ends up with a zero-byte string after minification (maybe it was just a comment that gets minified out), it will remove the script altogether. So, diving deeper into the code and running several tests to narrow down the possibilities, I discover that the call into the MS Ajax minifier returns an empty string. Now that’s minification!!

It’s not .NET 4.5’s fault

So all of a sudden I begin to wonder. Oh no! Could it be that it’s not the Framework itself causing this mayhem but rather my use of ILMerge.exe, which takes RequestReduce.dll, AjaxMin.dll, StructureMap.dll and nQuant.dll and merges them all into one RequestReduce.dll? I do this because the principle behind RequestReduce is to make website optimizations as easy and automatic as dropping a single DLL into your bin. After I replace my merged DLL with the original unmerged ones in my test app on my .NET 4.0 VM, everything magically works. So I now know that it is either a problem in ILMerge or perhaps still a problem in the Framework that is surfaced when interacting with ILMerge. Either way, I want to get back up and running, so I need to figure out what is going on with AjaxMin specifically. Is there something I can fix in the way I use AjaxMin, or can I figure out the root problem with the Framework or ILMerge? Wouldn’t it be great if I were using an old version of ILMerge and simply updating it would fix everything?

I am using a version of ILMerge that was last updated in May, and there have been two updates since, the latest being in November. I’m hopeful that since November was after the preview release of Visual Studio 11, this latest update will address .NET 4.5 issues. I update my ILMerge bits and, alas, the problem still exists. So now I’m hoping that some thumbing through the ILMerge documentation or searching online will turn up some clues. Nothing. This is always the problem with working with bleeding-edge bits. You don’t often get to learn from other people’s problems. You are the other people. I write this today as that “other person” who will hopefully shine light on your problem.

In a last-ditch effort I send an email to Mike Barnett, the creator of ILMerge, in the hope that he may have come across this before and can provide guidance to get me around the issue and on my way to running code. He responded as I expected. He hadn’t played with the 4.5 bits yet and was not surprised that there would be issues, especially given the in-place upgrade. He was gracious enough to offer to look at the problem if I could provide him with the breaking code and access to a machine with .NET 4.5 installed.

I’m not one to quickly hand off problems to someone else: first because it is rude, and second because I enjoy being able to solve problems on my own, especially these kinds. In fact, I had walked into exactly the kind of problem that (while frustrated that my code was not running) I enjoy the most and often have a knack for getting to the bottom of. This is not because I am smart but because I am stubborn and off balance and will stick with problems long after smarter people have moved on with their lives.

The first thing I do is pull down the source code of the Ajax Minifier. I have a suspicion that the minifier is stumbling on the same exception I have been fighting with, catching it, and gracefully returning nothing. I discover that if there are any internal errors in the minification of content given to the minifier, it will store these errors away in a collection accessible from the minifier’s ErrorList property. When I inspect this property, there is one error reporting a type initialization error in Microsoft.Ajax.Utilities.StringMgr. So I look up that class and bang:

// resource manager for retrieving strings
private static readonly ResourceManager s_resourcesJScript =
    GetResourceManager(".JavaScript.JScript");
private static readonly ResourceManager s_resourcesApplication =
    GetResourceManager(".AjaxMin");

// get the resource manager for our strings
private static ResourceManager GetResourceManager(string resourceName)
{
  string ourNamespace = MethodInfo.GetCurrentMethod().DeclaringType.Namespace;
  // create our resource manager
  return new ResourceManager(
    ourNamespace + resourceName,
    Assembly.GetExecutingAssembly()
    );
}

My friend the ResourceManager again. Unfortunately, because this is not my code, I can’t refactor it as easily. Sure, it is an open source project that I think takes pull requests and whose owner, Ron Logan, is very responsive to bug fixes, but refactoring these run-ins with ExtensionAttribute is beginning to feel like an unwinnable game of Whack-A-Mole, and since the error does not occur without ILMerge, I need to figure out what is going on there instead of cleaning up after the mess. As far as I’m concerned at this point, the only viable options are to find a way to work with ILMerge that will prevent these errors or to gather enough data that I can hand to either Mike Barnett or the .NET team to pursue further. I’m hoping for the former but thinking the latter scenario is more likely.
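For anyone hitting something similar, this is roughly how those internal errors can be surfaced instead of just getting an empty string back; a small sketch against the AjaxMin Minifier API and its ErrorList property described above, with the file name made up:

using System;
using System.IO;
using Microsoft.Ajax.Utilities;

class MinifyCheck
{
    static void Main()
    {
        var javascript = File.ReadAllText("site.js"); // any script file will do
        var minifier = new Minifier();
        var minified = minifier.MinifyJavaScript(javascript);

        // an empty result plus entries in ErrorList means the minifier swallowed
        // an internal failure, not that the input legitimately minified to nothing
        if (minified.Length == 0)
        {
            foreach (var error in minifier.ErrorList)
                Console.WriteLine(error);
        }
    }
}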

Isolate and minify the problem space

I often find with these sorts of problems that the best thing to do at this point is to whittle down the problem space to as small a surface area as possible. I create a new VS11 solution with two projects:

Project 1

static class Program
{
    static void Main()
    {
        System.Console.Out.WriteLine(Resources.String1);
    }

    public static string AnotherTrim2(this string someString)
    {
        return someString.Trim();
    }
}
 
This project also contains a simple resource string and the auto-generated file produced by Visual Studio containing the following line of code, which reproduces the error:
global::System.Resources.ResourceManager temp =
    new global::System.Resources.ResourceManager(
        "ReourceManUnMerged.Properties.Resources",
        typeof(Resources).Assembly
    );

Project 2

An empty class file. I just need a second assembly produced that I can merge.

I ILMerge them together on my .NET 4.5 machine, then copy ILMerge.exe and my unmerged bits to my .NET 4.0 VM and merge the same bits on the 4.0 platform. I then run both merged versions on .NET 4.0 and, sure enough, the one that was merged on the .NET 4.5 machine breaks while the one merged on .NET 4.0 runs just fine. I now know I can work with these assemblies to troubleshoot. With the minimal code, there is a lot less to look at and get confused by. I did mention that I get easily confused, right? Hell, I’m confused as I type right now. Did I also mention that I am one of the individuals responsible for deploying MSDN Win 8/VS11 Beta documentation in the next hour? Don’t let my boss know about the whole confusion thing. Some things are better kept a secret.
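For reference, the merge of the repro itself is nothing exotic; it is along these lines on each machine, where EmptyClassLibrary.dll is a placeholder name for the second project’s output and the /targetplatform directory is the machine’s framework folder, just as in my normal build:

# merge the minimal repro; run the same command once on the 4.5 box and once on the 4.0 VM
.\ilmerge.exe `
    /t:exe `
    /out:merged\ReourceManUnMerged.exe `
    /targetplatform:"v4,$env:windir\Microsoft.NET\Framework\v4.0.30319" `
    ReourceManUnMerged.exe `
    EmptyClassLibrary.dll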

Pop open the hood and look at IL

The first thing I want to do is look at both assemblies using a new tool put out by JetBrains, the maker of popular tools like ReSharper, called dotPeek. This is the equivalent of the highly popular tool Reflector, except it is free. It lets you view the C# source code decompiled from the IL of any .NET assembly. A very handy tool when you cannot access the source of an assembly and want to peek inside. I’m curious whether ILMerge reassembled these assemblies in such a way that they have different source that would clue me in to something useful.

They do not. The only difference between the two sets of source code is that the assembly that works includes a reference to System.Core – the .NET 4.0 home of the offending ExtensionAttribute. While this is interesting at one level, it’s not very actionable data since my source code explicitly references System.Core. So I can’t just add the reference and expect things to fix themselves.

Next I use ILDasm.exe, a tool that ships with the .NET SDK and can disassemble a .NET DLL down to its IL code. I’ve mentioned IL a couple of times now. This is the Intermediate Language that all .NET languages get compiled to. I use this tool to view the actual IL emitted by ILMerge. You can access this tool from any VS command prompt or from C:\Program Files\Microsoft SDKs\Windows. Now I’m seeing something more interesting. The two sets of IL are identical except that the one merged on 4.5 contains three instances of this line:

.custom instance void [mscorlib]
    System.Runtime.CompilerServices.ExtensionAttribute::.ctor()
    = ( 01 00 00 00 )

and the one merged on 4.0 has the same line but slightly different:

.custom instance void [System.Core]
    System.Runtime.CompilerServices.ExtensionAttribute::.ctor()
    = ( 01 00 00 00 )

See the difference? Now I’m thinking that one option I have is to modify my build script to use ILDasm to extract the IL, do a simple string replacement to get ExtensionAttribute to load from System.Core, and then use ILAsm to reassemble the transformed IL back into a usable assembly. This is certainly a workable option but it does not feel ideal. What would be ideal is to find some way to tell ILMerge to use System.Core instead of mscorlib. Oh ILMerge, can’t we work together on this? Why must we fight? I love you, you love me, we’re a happy fam… Whoa. Where am I…oh yeah. When do I need to deploy those Beta bits?
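Roughly, that build step would have looked something like the sketch below. Treat it as an assumption-laden outline: $ildasm points at the SDK copy of ildasm.exe, the file names and replacement pattern are mine, and a real script would also need to feed the .res file that ildasm emits back into ilasm.

# disassemble, patch the attribute's assembly reference, and reassemble
& $ildasm RequestReduce.dll /out=RequestReduce.il

(Get-Content RequestReduce.il) |
    ForEach-Object { $_ -replace '\[mscorlib\](?=System\.Runtime\.CompilerServices\.ExtensionAttribute)', '[System.Core]' } |
    Set-Content RequestReduce.il

& "$env:windir\Microsoft.NET\Framework\v4.0.30319\ilasm.exe" RequestReduce.il /dll /output=RequestReduce.dll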

What’s the compiler doing and how to get it to target 4.0

So I ask myself: what is VS doing differently when you tell it to target .NET 4.0 as opposed to 4.5? Since it seems that the outdated 4.0 bits are simply blown away when you install 4.5, how does the compiler know to use the 4.0 mscorlib when targeting 4.0? If I could answer this question, maybe that would reveal something I could work with. To discover this I go to Tools/Options… in VS11 and then select Projects and Solutions –> Build and Run on the left. This gives me options to adjust the verbosity of the MSBuild output displayed in the output window at compile time. I switch this from Detailed to Diagnostic. I want to see the actual call to csc.exe, the C# compiler, and the switches Visual Studio is passing to it. Thankfully I get just that. When targeting 4.0, the command-line call looks like this:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe
    /noconfig
    /nowarn:1701,1702,2008
    /nostdlib+
    /platform:AnyCPU
    /errorreport:prompt
    /warn:4
    /define:DEBUG;TRACE
    /errorendlocation
    /highentropyva-
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\Microsoft.CSharp.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\mscorlib.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Core.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Data.DataSetExtensions.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Data.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Xml.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Xml.Linq.dll"
    /debug+
    /debug:full
    /filealign:512
    /optimize-
    /out:obj\Debug\ReourceManUnMerged.exe
    /resource:obj\Debug\ReourceManUnMerged.Properties.Resources.resources
    /target:exe
    /utf8output
    Program.cs
    Properties\AssemblyInfo.cs
    Properties\Resources.Designer.cs
    "C:\Users\mwrock\AppData\Local\Temp\.NETFramework,Version=v4.0.AssemblyAttributes.cs" (TaskId:26)

When targeting 4.5 I get this:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe
    /noconfig
    /nowarn:1701,1702,2008
    /nostdlib+
    /platform:AnyCPU
    /errorreport:prompt
    /warn:4
    /define:DEBUG;TRACE
    /errorendlocation
    /highentropyva+
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\Microsoft.CSharp.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\mscorlib.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Core.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Data.DataSetExtensions.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Data.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Xml.dll"
    /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Xml.Linq.dll"
    /debug+
    /debug:full
    /filealign:512
    /optimize-
    /out:obj\Debug\ReourceManUnMerged.exe
    /resource:obj\Debug\ReourceManUnMerged.Properties.Resources.resources
    /target:exe
    /utf8output
    Program.cs
    Properties\AssemblyInfo.cs
    Properties\Resources.Designer.cs
    "C:\Users\mwrock\AppData\Local\Temp\.NETFramework,Version=v4.5.AssemblyAttributes.cs"
    obj\Debug\\TemporaryGeneratedFile_E7A71F73-0F8D-4B9B-B56E-8E70B10BC5D3.cs
    obj\Debug\\TemporaryGeneratedFile_036C0B5B-1481-4323-8D20-8F5ADCB23D92.cs (TaskId:21)

Telling ILMerge where to find mscorlib – the devil in the details

There are a few differences here. The one that is most notable is where the framework references are being pulled from. They are not pulled from the Framework directory in c:\windows\Microsoft.Net which is where I would expect and where one usually looks for framework bits. Instead they are coming from c:\Program Files\Reference Assemblies. The MSBuild team talks about this folder here.

When you call ILMerge to merge your assemblies, you pass it a /targetplatform switch which tells it which platform to build for. Currently this switch can take v1, v1.1, v2 or v4 followed by the Framework directory. When I build for 4.0 I use this command-line call via PowerShell:

.\Tools\ilmerge.exe `
    /t:library `
    /internalize `
    /targetplatform:"v4,$env:windir\Microsoft.NET\Framework$bitness\v4.0.30319" `
    /wildcards `
    /out:$baseDir\RequestReduce\Nuget\Lib\net40\RequestReduce.dll `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\RequestReduce.dll" `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\AjaxMin.dll" `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\StructureMap.dll" `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\nquant.core.dll"

Most people point this directory, as I do, to the actual platform directory that can always be found under %windir%\Microsoft.NET. It’s the obvious location to use. Oh wait…hold on…

Ok, I’m back. The Win8 and VS11 beta docs are now deployed. Now where was I? Oh yeah…the framework directory passed to ILMerge. According to the ILMerge documentation, it just needs the directory containing the correct version of mscorlib.dll. So I’m thinking, let’s use this Reference Assemblies directory. I do that and re-merge on .NET 4.5. I run on my 4.0 VM and…Hallelujah!! There’s my JavaScript in all of its minified glory.
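For completeness, the adjusted call is the same as before with only the /targetplatform directory swapped out for the reference assemblies folder. This is a sketch rather than the literal build script; on a 64-bit machine the path may live under Program Files (x86), so adjust accordingly:

.\Tools\ilmerge.exe `
    /t:library `
    /internalize `
    /targetplatform:"v4,C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0" `
    /wildcards `
    /out:$baseDir\RequestReduce\Nuget\Lib\net40\RequestReduce.dll `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\RequestReduce.dll" `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\AjaxMin.dll" `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\StructureMap.dll" `
    "$baseDir\RequestReduce\bin\v4.0\$configuration\nquant.core.dll"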

I hope this helps someone out because I didn’t find any help on this error.

Also, if you have made it this far, as I promised in the beginning, you should now have a sense of unity with your favorite My Little Pony character. Mine is Bon Bon. What’s yours?