Released: RequestReduce 1.2 with Javascript merge and minify by Matt Wrock

With this week’s release of RequestReduce 1.2, RequestReduce is even more effective at reducing the HTTP requests a browser makes when visiting your website. Most significantly, this release adds minification and merging of external JavaScript resources. A big thanks goes to Matt Manela for providing the initial JavaScript reduction code! Other notable features in this release include:

  • Automatic expansion of CSS @import. All imports will be downloaded and injected into the calling CSS, then minified and merged with the rest of the page’s head CSS.
  • Wider support for lossless PNG compression. RequestReduce now passes the -fix parameter to optipng, which will attempt to fix PNGs with extraneous data.
  • 404 and 500 failures of individual resources are ignored. In other words, if one script or stylesheet fails to crunch, the rest will still succeed.
  • Several bug fixes to the overall response filtering.

RequestReduce 1.2 is available via NuGet or for download at http://www.RequestReduce.com. I will discuss the new features in some detail, but first…

What makes RequestReduce the best static resource cruncher available?

OK. Glad I got that off my chest. Now for the details of this release’s features:

Javascript Crunching: What exactly is crunched

RequestReduce will minify all external JavaScript files that have a valid JavaScript MIME type, do not have a Cache-Control header set to No-Cache or No-Store, and do not have an Expires or max-age of less than a week. Furthermore, you can explicitly tell RequestReduce which scripts to skip by supplying a comma-separated list of URL substrings. For example, on the Microsoft properties I work on, I have this set to /tiny_mce since the way tiny_mce loads its plugins does not play nice when the scripts are not in the directory where tiny_mce expects them to be.

By default, if no urls are provided in this list, RequestReduce will ignore the Google and Microsoft jQuery CDNs. It does this because most browsers will likely already have those resources cached from visiting a Google or Microsoft property in the past, so they will not need to download that content again.

You can also apply more complicated or customized rules in code. RequestReduce exposes a Func for approving the processing of urls. You can hook into this by assigning a lambda or delegate to this Func in your Global.asax startup code. Here is an example:

var javascriptResource = RRContainer.Current.GetInstance<JavaScriptResource>();
javascriptResource.TagValidator = (tag, url) =>
{
    if (url.EndsWith(".myjss") || tag.Contains("data-caching=dont"))
        return false;
    return true;
};

The first input parameter is the full script tag element and the second is the JavaScript src url.

Important considerations in the placement of your javascripts

In a perfect world, I would crunch all your scripts together and dynamically inject them into the DOM to load asynchronously. In fact, in that perfect world, I could even defer the loading of the scripts until after document complete. But alas, the world is not perfect. RequestReduce knows absolutely nothing about your code and takes the safest approach to ensure that your page does not break. Therefore, RequestReduce will make sure that any dependencies your markup has on your scripts, or that your scripts have on other scripts, are not broken.

RequestReduce will only merge blocks of scripts that appear contiguously on the page. For example, assume you have the following scripts on a page:

<html>
    <head>
        <link href="http://localhost:8877/Styles/style1.css" rel="Stylesheet" type="text/css" />
        <link href="/Styles/style2.css" rel="Stylesheet" type="text/css" />
        <script src="http://localhost:8877/Scripts/script1.js" type="text/javascript"></script>
        <script src="/Scripts/script2.js" type="text/javascript"></script>
    </head>
    <body>
        <div class="RatingStar">&nbsp;</div>
        <div class="Up">&nbsp;</div>
        <div class="Rss">&nbsp;</div>
        <div class="Search">&nbsp;</div>
        <script src="http://localhost:8877/Scripts/script3.js" type="text/javascript"></script>
        <script src="http://www.google-analytics.com/ga.js" type="text/javascript"></script>
        <script src="/Scripts/script4.js" type="text/javascript"></script>
    </body>
</html>

You will end up with four scripts:

  1. The scripts in the <head/> will be merged together and remain in the head.
  2. Script 3 will be minified but will not be merged since it sits just below common markup and above a near-future-expiring script (Google Analytics).
  3. The Google Analytics script remains entirely untouched since it is a near-future-expiring script.
  4. Script 4 will be minified but not merged since it is not adjacent to any other non-near-future-expiring script.

Although scripts three and four do not benefit from merging, they are at least minified and served with a far-future expiration and a valid ETag.

With all this in mind, you want to group together scripts that are not near-future expiring and can sit together without load-order dependency issues, as shown in the sketch below. Also make sure that you do not have a setting that causes all of your JavaScript to be served with a near-future expiration unless you have a particularly good reason to do so.
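For instance, assuming none of the scripts in the earlier example need to execute before the markup renders (an assumption you must verify for your own pages), moving them all to the bottom of the body ahead of the Google Analytics script would give RequestReduce one contiguous block to work with:

<html>
    <head>
        <link href="http://localhost:8877/Styles/style1.css" rel="Stylesheet" type="text/css" />
        <link href="/Styles/style2.css" rel="Stylesheet" type="text/css" />
    </head>
    <body>
        <div class="RatingStar">&nbsp;</div>
        <div class="Up">&nbsp;</div>
        <div class="Rss">&nbsp;</div>
        <div class="Search">&nbsp;</div>
        <script src="http://localhost:8877/Scripts/script1.js" type="text/javascript"></script>
        <script src="/Scripts/script2.js" type="text/javascript"></script>
        <script src="http://localhost:8877/Scripts/script3.js" type="text/javascript"></script>
        <script src="/Scripts/script4.js" type="text/javascript"></script>
        <script src="http://www.google-analytics.com/ga.js" type="text/javascript"></script>
    </body>
</html>

Arranged this way, scripts one through four can be reduced to a single minified request, and only the Google Analytics script is left untouched at the very end.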

I intend to roll out more functionality here to let site owners provide hints via data attributes in script tags indicating which scripts can be loaded asynchronously or deferred. I also intend to use some different approaches to discovering changed content on the origin server, which may let me merge and minify near-future-expiring content. The fact is that most near-future-expiring content does not change often but wants to ensure that clients check with the server frequently just in case there is a change.

Plugging in alternative minifiers and settings

RequestReduce attempts to follow a decoupled architecture that allows developers to swap out certain parts with their own behavior. To override RequestReduce's use of the Microsoft Ajax Minifier library, you simply create a class that implements IMinifier. There is not much to IMinifier:

public interface IMinifier
{
    string Minify<T>(string unMinifiedContent) where T : IResourceType;
}

Here is RequestReduce's implementation:

public class Minifier : IMinifier
{
    private readonly Microsoft.Ajax.Utilities.Minifier minifier = new Microsoft.Ajax.Utilities.Minifier();
    private readonly CodeSettings settings = new CodeSettings { EvalTreatment = EvalTreatment.MakeAllSafe };

    public string Minify<T>(string unMinifiedContent) where T : IResourceType
    {
        if (typeof(T) == typeof(CssResource))
            return minifier.MinifyStyleSheet(unMinifiedContent);
        if (typeof(T) == typeof(JavaScriptResource))
            return minifier.MinifyJavaScript(unMinifiedContent, settings);

        throw new ArgumentException("Cannot Minify Resources of unknown type", "unMinifiedContent");
    }
}

It's not difficult to imagine how you would change this implementation to use something like the YUI Compressor for .NET. Let's say you had a YuiMinifier class that you want RequestReduce to use instead of its own minifier. You would just need to add the following code to your startup code:

 
RRContainer.Current.Configure(x => x.For<IMinifier>().Use<YuiMinifier>());

That's it. Now your minification code will be used.
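For completeness, here is a rough sketch of what such a YuiMinifier might look like. It assumes the static Compress methods exposed by older releases of the Yahoo.Yui.Compressor package; verify the API against the version you are actually using:

public class YuiMinifier : IMinifier
{
    public string Minify<T>(string unMinifiedContent) where T : IResourceType
    {
        // Dispatch on the resource type, mirroring RequestReduce's own Minifier
        if (typeof(T) == typeof(CssResource))
            return Yahoo.Yui.Compressor.CssCompressor.Compress(unMinifiedContent);
        if (typeof(T) == typeof(JavaScriptResource))
            return Yahoo.Yui.Compressor.JavaScriptCompressor.Compress(unMinifiedContent);

        throw new ArgumentException("Cannot Minify Resources of unknown type", "unMinifiedContent");
    }
}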

Suddenly CSS @import is not quite so evil

Just about anyone worth their salt in web development will tell you never to use CSS @import. Except maybe the internet latency monster. He loves CSS @import but will quickly bite off your toes when he sees you using it. Suffice it to say that the internet latency monster is a meanie.

RequestReduce can now identify @import usages and automatically expand them so that no extra latency is incurred. I still do not think it is a good idea to use @import, but now it has lost its sting.

So if you have not started using RequestReduce already, I strongly encourage you to do so. I would also greatly appreciate any feedback on features you would like to see added or bugs you are running into. I’d also like to hear your success stories. You can submit all feature requests and bug reports to the github page at https://github.com/mwrock/RequestReduce/issues?sort=comments&direction=desc&state=open.

nQuant Reduces The Visual Studio Gallery and MSDN Code Samples page size down by 10% by Matt Wrock

Today the Microsoft Galleries team, where I work and which supports the Visual Studio Extensions Gallery and the MSDN Code Samples Gallery among many others, began quantizing its sprited images with nQuant and realized a 10% reduction in page size.

image

A few months ago, the Visual Studio Gallery and the MSDN Code Samples Gallery adopted my OSS project RequestReduce, which merges and minifies CSS as well as automatically sprites and optimizes background images. As I reported then, we experienced a 20% improvement in global page load times. At that time RequestReduce performed lossless PNG compression, which can dramatically reduce the size of the sprited images RequestReduce generates. I had also played with some “lossy” open source command line utilities that further reduced the size of PNG images, sometimes dramatically and often without perceptible quality loss. However, when I integrated these utilities into RequestReduce and applied it to some of the galleries that the Microsoft Galleries team develops (most notably the Silverlight and Azure galleries), the lossy optimization quality was simply unacceptable.

I did quite a bit of research on the topic of image quantization, which is the process of removing colors from an image to produce a much smaller file while using sophisticated algorithms to make this color loss imperceptible (or nearly imperceptible) to the human eye. It quite possibly may even be just as effective on alien eyes but to date, we lack the empirical evidence. You can count on me to update this post as more data accumulates in that exciting area of study.
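To make the idea concrete, here is a deliberately naive sketch of the remapping half of the problem, assuming a 256-color palette has already been chosen. This is nothing like Wu's algorithm, which does the genuinely hard work of picking the palette itself; it only illustrates what "removing colors" means:

// Naive illustration: remap a pixel to the nearest entry in a fixed palette.
// Real quantizers like nQuant's Wu implementation earn their keep by choosing
// that palette intelligently from the image's color distribution.
static Color NearestPaletteColor(Color pixel, Color[] palette)
{
    var best = palette[0];
    var bestDistance = int.MaxValue;
    foreach (var candidate in palette)
    {
        // Squared Euclidean distance in RGB space
        var distance = (pixel.R - candidate.R) * (pixel.R - candidate.R)
                     + (pixel.G - candidate.G) * (pixel.G - candidate.G)
                     + (pixel.B - candidate.B) * (pixel.B - candidate.B);
        if (distance < bestDistance)
        {
            bestDistance = distance;
            best = candidate;
        }
    }
    return best;
}

Run that over every pixel of a 32-bit image and you have an 8-bit image; the quality of the result depends almost entirely on how well the palette was chosen, which is where the sophistication lives.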

While investigating this, I came across an algorithm developed by Xiaolin Wu that appeared to optimize RGB images (without transparency) with a quality unmatched by any other algorithm I had experimented with. Unfortunately, the algorithm was not immediately compatible with the transparent PNGs generated by RequestReduce. After several weeks of tinkering during very odd hours, I managed to adapt the algorithm to convert 32-bit transparent PNGs to 8-bit, 256-color PNGs with quality far superior to that produced by many popular C command line tools. Furthermore, nQuant is a C# library that can be easily integrated into any other .NET assembly. nQuant also provides a command line wrapper which can be used for build tasks or ad hoc optimizations.

If you would like to see how nQuant can optimize images, head on over to nquant.codeplex.com where you can download either the compiled assembly and command line utility or the full source code. The site also provides complete instructions on the proper use of the nQuant API. It is dead simple. Here is an example of how to quantize a single image from within C#:

// sourcePath, targetPath, alphaTransparency and alphaFader are supplied by the caller
var quantizer = new WuQuantizer();
using (var bitmap = new Bitmap(sourcePath))
{
    using (var quantized = quantizer.QuantizeImage(bitmap, alphaTransparency, alphaFader))
    {
        quantized.Save(targetPath, ImageFormat.Png);
    }
}

Using the command line, you would issue a command like this:

nQuant myimage.png /o mynewimage.png

If you would like to not only optimize your images but also minify and merge your CSS as well as sprite your CSS background images into a single sprite, then check out RequestReduce. Unlike other similar optimization tools, you do not need to change your code or rearrange your folder structure and you do not need to supply a wealth of config options to get started. It works with multiple server environments and supports content on a CDN. RequestReduce also includes nQuant which it uses to reduce the size of the images it produces.

For more details on the algorithm included in the nQuant library and my efforts to adapt it to ARGB transparent images, read my post on this subject.

www.orangelightning.co.uk makes 40% improvement in page load performance using RequestReduce by Matt Wrock

This week I worked with Phil Jones (@philjones88) of www.orangelightning.co.uk to get RequestReduce up and running on his web site hosted on AppHarbor. There were a couple of issues specific to AppHarbor’s configuration that prevented RequestReduce’s default configuration from working. It’s actually a fairly typical setup where the load balancers forward requests to the web servers on different ports. RequestReduce then assumed that the site was publicly accessible on this non-standard port, which it was not, and things quickly stopped working. It was easy to work around, and by doing so I was able to make RequestReduce all the more adaptable.

So now that Phil has orangelightning up and running on RequestReduce, its Google Page Speed score went from 81 to 96 and its YSlow grade went from a low B at 83 to a solid A at 95.

image

image

The total number of HTTP requests was cut in half, from 13 to 6, and the page size dropped from 93K to 54K.

And of course the bottom line is page load times. Using http://www.webpagetest.org, I tested from the Eastern United States (orangelightning is hosted in the UK) over three runs. Here are the median results:

With RequestReduce

image

Without RequestReduce

image

RequestReduce is free, requires very little effort to install, and supports everything from small blogs to large multi-server, CDN-based enterprises. You can download it from http://requestreduce.com/ or, even easier, simply enter:

Install-Package RequestReduce

from the NuGet Package Manager Console right inside Visual Studio. Source code, a wiki with thorough documentation, and bug reporting are available from my github page at https://github.com/mwrock/RequestReduce.

RequestReduce now fully compatible with AppHarbor and Medium Trust hosting environments by Matt Wrock

Now even more sites can take advantage of automatic CSS merging and minification as well as image spriting and color optimization with no code changes or directory structure conventions.

This week I rolled out two key features which add compatibility to RequestReduce’s core functionality and some popular hosting environments. In a nutshell, here is what has been added:

  • Support for web server environments behind proxies. No extra configuration is needed. It just works.
  • Full support for AppHarbor applications. If you have not heard of AppHarbor, I strongly encourage you to check it out. It ties into your Git repository and automatically builds and deploys your Visual Studio solution upon git push.
  • RequestReduce now runs in Medium Trust environments such as GoDaddy. Some features will not work there, such as image color and compression optimizations and multi-server synchronization scenarios, but the core out-of-the-box functionality of CSS merging, minification and on-the-fly background image spriting will work in these environments.

And, as from the beginning, RequestReduce will run on ANY IIS-hosted environment including ASP.NET Web Forms, all versions and view engines of MVC, WebMatrix “Web Pages” and even static html files.

So download the latest bits from www.RequestReduce.com or simply enter:

Install-Package RequestReduce

from the NuGet Package Manager Console to get these features added to your site with no change to your code, almost no configuration, and no rearranging of files and stylesheets into arbitrary folder conventions. As long as your background images are marked no-repeat and have explicit widths in their CSS classes, RequestReduce does all of the tedious work for you on the fly and makes sure that these resources have far-future Expires headers and multi-server-friendly ETags allowing browsers to properly cache your content.
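For example, a sprite-friendly rule for the RatingStar class from the earlier markup example might look like this (the image path and pixel dimensions here are illustrative, not taken from any real site):

.RatingStar
{
    /* no-repeat plus an explicit width is what lets RequestReduce safely
       point this class at a region of the generated sprite sheet */
    background: url(/images/star.png) no-repeat;
    width: 16px;
    height: 16px;
}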

 
Do you need multi-server synchronization and CDN support? RequestReduce has got you covered.

Resolving InvalidCastException when two different versions of Structuremap are loaded in the same appdomain by Matt Wrock

Last week I was integrating my automatic css merge, minify and sprite utility, RequestReduce, into the MSDN Forums and search applications. Any time you integrate a component into a new app, there are new edge cases to explore and new bugs to surface, since no two apps are exactly the same, especially when the application has any level of complexity.

The integration went pretty smoothly until I started getting odd StructureMap exceptions in the search application that I had never encountered before. I had a type using the HybridHttpOrThreadLocalScoped lifecycle, and when StructureMap attempted to create this type I received the following error:

System.InvalidCastException: Unable to cast object of type 'StructureMap.Pipeline.MainObjectCache' to type 'StructureMap.Pipeline.IObjectCache'

Well that’s odd, since MainObjectCache implements IObjectCache. This smelled to me like some sort of version conflict. The hosting application also uses StructureMap, version 2.6.1, while my component RequestReduce uses 2.6.3. I use ILMerge to merge RequestReduce and its dependencies into a single dll, RequestReduce.dll. While NuGet does make deployment much simpler, I still like having just a single dll for consumers to drop into their bin.
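For context, that merge step is a plain ILMerge invocation along these lines (the output path and exact assembly list here are illustrative, not RequestReduce's actual build script; /internalize makes the merged dependencies' types internal to the output assembly):

ilmerge /internalize /out:merged\RequestReduce.dll RequestReduce.dll StructureMap.dll AjaxMin.dll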

Unfortunately, searching online for this exception turned up absolutely nothing, so I turned to Reflector. The exception was coming from the HttpContextLifecycle class and it did not take long to track down what was happening. HttpContextLifecycle includes the following code:

public static readonly string ITEM_NAME = "STRUCTUREMAP-INSTANCES";

public void EjectAll()
{
    FindCache().DisposeAndClear();
}

public IObjectCache FindCache()
{
    IDictionary items = findHttpDictionary();

    if (!items.Contains(ITEM_NAME))
    {
        lock (items.SyncRoot)
        {
            if (!items.Contains(ITEM_NAME))
            {
                var cache = new MainObjectCache();
                items.Add(ITEM_NAME, cache);

                return cache;
            }
        }
    }

    return (IObjectCache) items[ITEM_NAME];
}

public string Scope { get { return InstanceScope.HttpContext.ToString(); } }

public static bool HasContext()
{
    return HttpContext.Current != null;
}

public static void DisposeAndClearAll()
{
    new HttpContextLifecycle().FindCache().DisposeAndClear();
}

protected virtual IDictionary findHttpDictionary()
{
    if (!HasContext())
        throw new StructureMapException(309);

    return HttpContext.Current.Items;
}

It’s ITEM_NAME that is the culprit here. This static readonly field is the key to the object cache stored in HttpContext.Items, and there is no way to change or override it. So whichever version of StructureMap creates the cache first, the other version will always throw an error when retrieving it: while both store an IObjectCache, they are different versions of IObjectCache and therefore different types altogether, which leads to an InvalidCastException when one is cast to the other.

The workaround I came up with was to create a new class that has the same behavior as HttpContextLifecycle but uses a different key:

public class RRHttpContextLifecycle : ILifecycle
{
    public static readonly string RRITEM_NAME = "RR-STRUCTUREMAP-INSTANCES";

    public void EjectAll()
    {
        FindCache().DisposeAndClear();
    }

    protected virtual IDictionary findHttpDictionary()
    {
        if (!HttpContextLifecycle.HasContext())
            throw new StructureMapException(309);

        return HttpContext.Current.Items;
    }

    public IObjectCache FindCache()
    {
        var dictionary = findHttpDictionary();
        if (!dictionary.Contains(RRITEM_NAME))
        {
            lock (dictionary.SyncRoot)
            {
                if (!dictionary.Contains(RRITEM_NAME))
                {
                    var cache = new MainObjectCache();
                    dictionary.Add(RRITEM_NAME, cache);
                    return cache;
                }
            }
        }
        return (IObjectCache)dictionary[RRITEM_NAME];
    }

    public string Scope
    {
        get { return "RRHttpContextLifecycle"; }
    }
}
 
As you can see, I copy most of the code from HttpContextLifecycle but use a different string for the key and the scope. To get this all wired up correctly with HybridHttpOrThreadLocalScoped, I also need to subclass HttpLifecycleBase. Here is the code from HttpLifecycleBase:
 
public abstract class HttpLifecycleBase<HTTP, NONHTTP> : ILifecycle
    where HTTP : ILifecycle, new()
    where NONHTTP : ILifecycle, new()
{
    private readonly ILifecycle _http;
    private readonly ILifecycle _nonHttp;

    public HttpLifecycleBase()
    {
        _http = new HTTP();
        _nonHttp = new NONHTTP();
    }

    public void EjectAll()
    {
        _http.EjectAll();
        _nonHttp.EjectAll();
    }

    public IObjectCache FindCache()
    {
        return HttpContextLifecycle.HasContext()
                   ? _http.FindCache()
                   : _nonHttp.FindCache();
    }

    public abstract string Scope { get; }
}

All HybridHttpOrThreadLocalScoped does is derive from HttpLifecycleBase and use HttpContextLifecycle as the HTTP cache, so I need to do the same using RRHttpContextLifecycle instead:
 
public class RRHybridLifecycle : HttpLifecycleBase<RRHttpContextLifecycle, ThreadLocalStorageLifecycle>
{
    public override string Scope
    {
        get { return "RRHybridLifecycle"; }
    }
}
 
Then I change my container configuration code from:
 
x.For<SqlServerStore>().HybridHttpOrThreadLocalScoped().Use<SqlServerStore>()
    .Ctor<IStore>().Is(y => y.GetInstance<DbDiskCache>());

to
 
x.For<SqlServerStore>().LifecycleIs(new RRHybridLifecycle()).Use<SqlServerStore>()
    .Ctor<IStore>().Is(y => y.GetInstance<DbDiskCache>());

This does feel particularly dirty. Copying and pasting code always feels wrong. What happens if StructureMap changes the implementation of HttpContextLifecycle and I do not update my code to match? You can see how this could become fragile. It would be nice if ITEM_NAME were not static and derived types had a way to override it, or if the key name were at least suffixed with the version of the StructureMap assembly.
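A version-qualified key along those lines might look like this purely hypothetical sketch (this is not in StructureMap today; it is just what the suggestion would amount to):

// Hypothetical: suffix the cache key with the executing StructureMap version
// so two side-by-side versions would no longer fight over the same entry.
public static readonly string ITEM_NAME =
    "STRUCTUREMAP-INSTANCES-" + typeof(HttpContextLifecycle).Assembly.GetName().Version;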

Well, until such changes are made in StructureMap, I see no better alternative to my workaround.
 
I hope this is helpful to others who have experienced this scenario. I am also very open to suggestions for a better workaround. In the meantime, I have submitted a pull request to the StructureMap repository that appends the assembly version to the HttpContext.Items key name.