Continued Evolution of Minification in my Solutions

In a previous post I detailed how I added JavaScript and stylesheet minification. The changes here are based upon that setup.

JavaScript and Stylesheets

The changes are as follows:

  1. I've upgraded to version 5.7 of the Microsoft Ajax Minifier.

  2. I still use a Batch script, but now it is a pre-build step rather than a post-build step. (You can refer to step 4 in the original post.)

  3. I am now using a manifest XML file rather than individual invocations within the Batch script. Overall, this seems to do the job faster than before.

    The manifest allows all of the same options as the command line, but I can put all of the minification tasks in a single manifest, and it is easier for me to read than a Batch script command line.

    Details on the format can be found in the documentation (XML File Formats).

                      <output path="css\wickedstrategery.min.css" type="css">
                        <input path="css\layout.css" />
                        <input path="css\style.css" />
                      </output>
                      <output path="scripts\wickedstrategery.min.js" type="js">
                        <arguments>-clobber -strict:true -global:$</arguments>
                        <norename name="$" />
                        <symbolMap name="v3" path="scripts\" />
                        <input path="scripts\jquery.codeformatter.js" />
                      </output>

    The neat thing about the minifier is that it will combine multiple source files into a single output file. This means that only a single HTTP round-trip is needed to download the content, which, thanks to the minifier, is now smaller.

    And the Batch script has changed to:

                    set PATH=%~dp0;"C:\Program Files (x86)\Microsoft\Microsoft Ajax Minifier\"
                    pushd "<full path to my project here>"
                    ajaxmin -xml ajaxmin.manifest

I still get the same benefits as before, only now it's easier for me to read and add new things to be minified.


Additionally, all of the PNG images have been processed via PNGGauntlet, and there was a substantial difference in file size without a noticeable difference in how the images look.

Minifying JavaScript and Cascading Style Sheets

I think I've finally figured out how to work JavaScript and stylesheet minification into my development workflow. I've been looking for something that preserves the ability to work in a formatted file, generates a minified file, and doesn't leave Visual Studio complaining about missing files.

The setup involves the Microsoft Ajax Minifier, a batch file, and some Visual Studio setup.

  1. First download and install Microsoft Ajax Minifier. Make note of the install folder (it was "C:\Program Files (x86)\Microsoft\Microsoft Ajax Minifier\" for me).

  2. Add a Batch script to the Visual Studio project. Make sure to set the "Build Action" to "None". I'm calling my Batch script "minify.bat".

  3. The Batch script is what minifies the JavaScript and stylesheets for me. The contents look like:

                    set PATH=%~dp0;"C:\Program Files (x86)\Microsoft\Microsoft Ajax Minifier\"
                    pushd "<full path to my project here>"
                    ajaxmin -clobber
                            -o Styles\wickedstrategery.min.css

    I've broken out the lines for readability only.

  4. Set up the project to run the Batch script when the project is built, via the Post-build event under Build Events in the Project Properties:

                    call "$(ProjectDir)minify.bat"

  5. Run the script once at the command line (or build in Visual Studio) and the minified files will be created.

  6. Back in Visual Studio, use the "Show All Files" in the Solution Explorer to expose the newly created external files. I then imported them into the solution, and made sure the "Build Action" was set to "Content".

  7. Go to each of the source JavaScript and stylesheet files and set their "Build Action" to "None". They will stay in the project, but will not show up when the solution is built. The post-build step will regenerate the minified versions and copy them to the output folder, leaving the output to contain the minified files but not the originals.

  8. From there, the last step is to go through the project (really just my MasterPage) and ensure the references to the originals are replaced with references to the minified version.
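As an illustration, the swap in the MasterPage looks something like the following (the original file names here are hypothetical stand-ins; the minified name is the one produced by the Batch script above):

```html
<!-- Before: references to the formatted originals. -->
<link rel="stylesheet" type="text/css" href="Styles/layout.css" />
<link rel="stylesheet" type="text/css" href="Styles/style.css" />

<!-- After: a single reference to the minified output. -->
<link rel="stylesheet" type="text/css" href="Styles/wickedstrategery.min.css" />
```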

The inspiration for getting this all running was Scott Hanselman's write-up The Importance (and Ease) of Minifying your CSS and JavaScript and Optimizing PNGs for your Blog or Website.

The end result is that my project still contains the nicely formatted scripts and stylesheets, but outputs minified versions. The minification is part of the build and doesn't interrupt my work. I haven't added any steps to publishing the site either. And Visual Studio's built-in sanity check for files that exist continues to function.

Google Reader & The Importance of Data Portability


The closure of Google Reader has once again highlighted the importance of data portability. Fortunately, Google Reader supports export to OPML, and every alternative I've looked at supports import from OPML. As a replacement, I've set up Tiny Tiny RSS in a virtual machine. The base image was the TurnKey LAMP Application and the installation was straightforward.
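For reference, an OPML subscription list is just a small XML document; a minimal example (with a made-up feed) looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>My Subscriptions</title>
  </head>
  <body>
    <outline type="rss" text="Example Feed"
             xmlUrl="http://example.com/feed.xml"
             htmlUrl="http://example.com/" />
  </body>
</opml>
```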

The ability to export and import my subscription list has been crucial to switching between the feed readers I've tried. It reinforces the need for standards-compliant tools, and really drives home the fact that I should support the tools that allow portability and create standards-compliant tools myself.


The DataPortability Project does a better job of explaining the importance than I could. There are several good reads on why the idea of data portability is important, both from a technology standpoint and a business standpoint, as well as what people can do to promote the idea.


A Fast Alternative to MethodInfo.Invoke

Not surprisingly, calls to MethodInfo.Invoke are slow for the same reasons outlined in A Fast Alternative to Activator.CreateObject.

The same approach can be used to generate methods that will invoke the MethodInfo instance, providing a substantial performance boost by handling validation up front, once, rather than upon every invocation.

            public static Func<TTarget, TResult> GenerateFastInvoker<TTarget, TResult>(MethodInfo method)
            {
                #if DEBUG
                    ParameterExpression targetExpression = Expression.Parameter(typeof(TTarget), "target");
                #else
                    ParameterExpression targetExpression = Expression.Parameter(typeof(TTarget));
                #endif

                Expression<Func<TTarget, TResult>> expression = Expression.Lambda<Func<TTarget, TResult>>(
                    Expression.Call(targetExpression, method),
                    targetExpression);
                Func<TTarget, TResult> functor = expression.Compile();
                return functor;
            }
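As a quick sketch of using the generated invoker - here with a parameterless instance method that I know exists on string:

```csharp
// Generate the invoker once and cache it; each call afterwards is a plain delegate call.
MethodInfo method = typeof(string).GetMethod("ToUpperInvariant", Type.EmptyTypes);
Func<string, string> fastInvoker = GenerateFastInvoker<string, string>(method);

// Equivalent to calling "hello".ToUpperInvariant() directly.
string result = fastInvoker("hello");
```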

The same caveats apply as before, but the performance increase is well worth it. Further, the same alterations used to create weakly-typed fast activators can be applied to generate weakly-typed fast invokers.

A Fast Alternative to Activator.CreateObject

I'm working on performance issues and tracked down a place where the code was repeatedly instantiating objects via Activator.CreateInstance. The use case doesn't allow me to change that behavior - the instances must be created. So, I need a faster alternative. Jon Skeet wrote an excellent article titled Making reflection fly and exploring delegates that discusses the problem in depth, as well as various alternatives.

Basically, by generating a method I incur the costs of evaluation once, rather than repeatedly, as Activator.CreateInstance does.

I know the types of the parameters I need to use at compile time, but the actual type instantiated is not known until runtime. However, I can generate a method at runtime to do the instantiation for me. Better still, I can use LINQ expressions to generate that method and not have to emit raw IL.

            public static class FastActivator
            {
                public static Func<T1, TResult> Generate<T1, TResult>()
                {
                    ConstructorInfo constructorInfo = typeof(TResult).GetConstructor(new Type[] { typeof(T1), });

                    #if DEBUG
                        ParameterInfo[] parameters = constructorInfo.GetParameters();
                        ParameterExpression parameterExpression = Expression.Parameter(typeof(T1), parameters[0].Name);
                    #else
                        ParameterExpression parameterExpression = Expression.Parameter(typeof(T1));
                    #endif

                    Expression<Func<T1, TResult>> expression = Expression.Lambda<Func<T1, TResult>>(
                        Expression.New(constructorInfo, parameterExpression),
                        parameterExpression);
                    Func<T1, TResult> functor = expression.Compile();
                    return functor;
                }
            }

The code is for a simple case - instantiating a type via a constructor that accepts a single argument. But I can easily add additional overloads to handle constructors that accept more parameters.
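As a sketch, a two-parameter overload would follow the same pattern (the DEBUG parameter-naming is omitted here for brevity):

```csharp
public static Func<T1, T2, TResult> Generate<T1, T2, TResult>()
{
    ConstructorInfo constructorInfo = typeof(TResult).GetConstructor(
        new Type[] { typeof(T1), typeof(T2), });

    ParameterExpression parameter1 = Expression.Parameter(typeof(T1));
    ParameterExpression parameter2 = Expression.Parameter(typeof(T2));

    // Same shape as before; the lambda simply takes two parameters now.
    Expression<Func<T1, T2, TResult>> expression = Expression.Lambda<Func<T1, T2, TResult>>(
        Expression.New(constructorInfo, parameter1, parameter2),
        parameter1, parameter2);
    return expression.Compile();
}
```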

The preprocessor directives in the middle just make it a little easier to debug. When building in a configuration specifying the DEBUG flag, I will get parameters with names that I can use when looking over an Expression's DebugView property. In a build that does not have the flag, I can skip over that unnecessary information.

So now we have a different problem. I have a strongly-typed Func<,> that I can't really use, since the resultant type isn't known at compile time. So let's create a weakly-typed variant that will do the same work, in a form that allows me to reference the delegate and cache it.

            public static class WeakFastActivator
            {
                public static Func<object, object> Generate(Type resultantType, Type parameterType)
                {
                    ConstructorInfo constructorInfo = resultantType.GetConstructor(new Type[] { parameterType, });

                    #if DEBUG
                        ParameterInfo[] parameters = constructorInfo.GetParameters();
                        ParameterExpression parameterExpression = Expression.Parameter(typeof(object), "boxed_" + parameters[0].Name);
                    #else
                        ParameterExpression parameterExpression = Expression.Parameter(typeof(object));
                    #endif

                    Expression<Func<object, object>> expression = Expression.Lambda<Func<object, object>>(
                        Expression.Block(
                            Expression.IfThen(
                                Expression.Not(Expression.TypeIs(parameterExpression, parameterType)),
                                Expression.Throw(Expression.Constant(new ArgumentException("Parameter type mismatch.", parameterExpression.Name)))),
                            Expression.Convert(
                                Expression.New(constructorInfo,
                                    Expression.Convert(parameterExpression, parameterType)),
                                typeof(object))),
                        parameterExpression);
                    Func<object, object> functor = expression.Compile();
                    return functor;
                }
            }

Now I have a method that I can use. I can store it away in a private member and invoke it repeatedly to instantiate the objects I need. The weakly-typed variant includes some parameter checking as well, so we have better information in case something goes wrong.

            public static class Program
            {
                // A contrived example of how to use the WeakFastActivator.
                public static void Main()
                {
                    Type runtimeKnownType = ...; // Some type whose constructor accepts an integer.

                    Func<object, object> weakFastActivator =
                        WeakFastActivator.Generate(runtimeKnownType, typeof(int));

                    // Now the fast activator can be repeatedly used.
                    for (int x = 0; x < int.MaxValue; x++)
                    {
                        // This will throw an ArgumentException if we do not supply an int.
                        object instance = weakFastActivator(x);
                        // Do something with the instance here.
                    }
                }
            }

There are three cases we can directly test - using the Activator, using a FastActivator delegate, and invoking the constructor directly.

I have this mock object for testing:

            public sealed class Mock
            {
                private readonly int _parameter;

                public Mock(int parameter)
                {
                    this._parameter = parameter;
                }

                public int Parameter
                {
                    get { return this._parameter; }
                }
            }
And three test cases to evaluate:

            private static void Case1()
            {
                Type mockType = typeof(Mock);
                for (int x = 0; x < ITERATION_COUNT; x++)
                {
                    Mock mock = (Mock)Activator.CreateInstance(mockType, new object[] { x, });
                    Debug.Assert(mock.Parameter == x, "Parameter does not match the expected value.");
                }
            }

            private static void Case2()
            {
                Func<int, Mock> fastActivator = FastActivator.Generate<int, Mock>();
                for (int x = 0; x < ITERATION_COUNT; x++)
                {
                    Mock mock = fastActivator(x);
                    Debug.Assert(mock.Parameter == x, "Parameter does not match the expected value.");
                }
            }

            private static void Case3()
            {
                ConstructorInfo constructor = typeof(Mock).GetConstructor(new Type[] { typeof(int), });
                for (int x = 0; x < ITERATION_COUNT; x++)
                {
                    Mock mock = (Mock)constructor.Invoke(new object[] { x, });
                    Debug.Assert(mock.Parameter == x, "Parameter does not match the expected value.");
                }
            }

And the results of the test - these are from a Release configuration running without a debugger attached, where ITERATION_COUNT = 1000. Each test performs a single warm-up execution and then 1000 timed runs of the iteration loop.

Case1: 24226.428 ticks/iteration (22459-29665; σ=688.468)
  Bucket # 1 (22459.000-23179.700):  17
  Bucket # 2 (23179.700-23900.400): 389
  Bucket # 3 (23900.400-24621.100): 300
  Bucket # 4 (24621.100-25341.800): 238
  Bucket # 5 (25341.800-26062.500):  45
  Bucket # 6 (26062.500-26783.200):   9
  Bucket # 7 (26783.200-27503.900):   1
  Bucket # 8 (27503.900-28224.600):   0
  Bucket # 9 (28224.600-28945.300):   0
  Bucket #10 (28945.300-29666.000):   1
Case2: 1673.090 ticks/iteration (1305-9792; σ=585.248)
  Bucket # 1 (1305.000-2153.800): 964
  Bucket # 2 (2153.800-3002.600):  16
  Bucket # 3 (3002.600-3851.400):   0
  Bucket # 4 (3851.400-4700.200):  10
  Bucket # 5 (4700.200-5549.000):   4
  Bucket # 6 (5549.000-6397.800):   2
  Bucket # 7 (6397.800-7246.600):   2
  Bucket # 8 (7246.600-8095.400):   1
  Bucket # 9 (8095.400-8944.200):   0
  Bucket #10 (8944.200-9793.000):   1
Case3: 11153.252 ticks/iteration (10115-14759; σ=565.624)
  Bucket # 1 (10115.000-10579.500):  26
  Bucket # 2 (10579.500-11044.000): 545
  Bucket # 3 (11044.000-11508.500): 263
  Bucket # 4 (11508.500-11973.000):  74
  Bucket # 5 (11973.000-12437.500):  47
  Bucket # 6 (12437.500-12902.000):  29
  Bucket # 7 (12902.000-13366.500):   8
  Bucket # 8 (13366.500-13831.000):   3
  Bucket # 9 (13831.000-14295.500):   1
  Bucket #10 (14295.500-14760.000):   4

We can see that, overall, Case2 runs the fastest: it has the lowest per-run time at 1673.090 ticks, and it is consistently fast, with 96.4% of the runs falling under 2153.800 ticks, which clearly beats the other two approaches.

It's a bit difficult to read through, and really does seem a bit too clever for my own good, but the performance benefits are too good to be ignored.

As for solving the original performance problem, the generated method can be cached and repeatedly reused, which allows for a significant performance increase. There is a one-time cost upon the first invocation when a dynamic assembly is created and loaded, but it's well worth the cost.
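One way to do that caching - a sketch, assuming a ConcurrentDictionary keyed by the resultant type (the CreateInstance wrapper here is illustrative, not part of the original code):

```csharp
private static readonly ConcurrentDictionary<Type, Func<object, object>> _activators =
    new ConcurrentDictionary<Type, Func<object, object>>();

public static object CreateInstance(Type resultantType, int argument)
{
    // Generate (and pay the dynamic-method compilation cost) only once per type.
    Func<object, object> activator = _activators.GetOrAdd(
        resultantType,
        t => WeakFastActivator.Generate(t, typeof(int)));
    return activator(argument);
}
```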


My name is Doug Jenkinson. I write and ramble on about whatever comes to mind.


I'm currently employed as a Development Team Lead and Architect with Hyland Software in Cleveland, Ohio.



Looking for an experienced and talented software engineer in the Akron or Cleveland area? Be sure to look at my résumé.