VerificationException: “Operation could destabilize the runtime” An exception with a bark much worse than its bite by Matt Wrock

As a software engineer my greatest ambition is to produce code that will have a lasting impact on my fellow humans. I want to make good things happen to both good and bad people. The last thing I want to do is destabilize the runtime. Whether it be my runtime or your runtime, if you cant run and there is no time…hmmm…sounds kinda like death which of course has its own baggage and negative set of connotations.

So, you can now understand the horror I must have experienced yesterday morning when I got this:

[Screenshot: a VerificationException — "Operation could destabilize the runtime" — with "[No relevant source lines]" in the stack trace]

Couple things worth noting here: "[No relevant source lines]". Oh that's helpful. Is the source so potentially harmful that the runtime has deemed the offending code "irrelevant"? Yeah? Well I think YOU are irrelevant, .NET! I'm gonna put your worker threads in a warehouse and shut down your I/O completion ports. Minutes pass. Absolute silence. .NET remains unmoved by my threats and I realize that I must use what little intelligence I have to slog through this and figure it out for myself. The rest of this post is a narrative account of just that process. So sit back, reach for that glass of pinot and enjoy this tale of VerificationExceptions in Medium Trust on the .NET 2.0 runtime. Mmmm. You can tell already it's gonna be good.

What is a VerificationException?

According to the official MSDN Library documentation, a VerificationException is “The exception that is thrown when the security policy requires code to be type safe and the verification process is unable to verify that the code is type safe.” Good enough for me. But for the obtuse, we will explore further.

Well, unfortunately it doesn't take much exploring to discover that there is not a lot of detailed explanation of this error, and that it can be raised by a large variety of very different scenarios. What follows is what I could piece together from all the fragments of blogs and forums I found that touched on various permutations of my scenario.

So what is my Scenario?

Occurs in .net 2.0 runtime under Medium Trust on x64

I have an assembly that I want to be able to run in .net 3.5 and up and in hosting environments that are restricted to Medium Trust. Here is the line that triggers this exception:

AuthorizedUserList = config == null || string.IsNullOrEmpty(config.AuthorizedUserList)
    ? Anonymous
    : config.AuthorizedUserList.Split(',').Length == 0
        ? Anonymous
        : config.AuthorizedUserList.Split(',');

It's noteworthy that the variables AuthorizedUserList and Anonymous are both typed IEnumerable<string>.

This exception is only thrown when running this line on .net 3.5 in Medium trust. What I find particularly odd and don’t have an explanation for is that it worked fine in .net 3.5 Medium trust on my 32 bit machine but throws this exception on my 64 bit work laptop. I’m not convinced that it is the bitness level and not some other environment issue that makes the difference here. Sometimes things are just more fun when they remain a mystery. Especially in software don’t you think?

How to find the root of the problem

So looking at the above line of code, my first reaction was that there was something wrong with the debug symbol mapping to the assembly I was using. I mean, how does this line look harmful? Wrap it in fur and stuff it with cotton and you'd want to do no less than squeeze it close to your bosom and sing soft lullabies to it. So I proceeded to play with compile configurations and change references, which proved entirely futile.

The golden nugget that I was missing was a tool that ships with the .net SDK called PEVerify. I made this connection reading this StackOverflow answer. One key thing to be aware of is that each version of the .NET runtime has its own version of PEVerify so make sure to use the one that ships with the version of the runtime you are getting this exception with. In my case I needed the .Net 2.0 SDK which you can find here.

PEVerify is a command line utility that verifies that the IL in an assembly is type safe in a particular runtime environment. Why the .NET 2 compiler can't report these as warnings, I do not know. So entering:

C:\Program Files (x86)\Microsoft Visual Studio 8\SDK\v2.0\Bin>peverify "C:\RequestReduce\RequestReduce\bin\v3.5\debug\RequestReduce.dll" /verbose

I got this output:

Microsoft (R) .NET Framework PE Verifier.  Version  2.0.50727.42
Copyright (c) Microsoft Corporation.  All rights reserved.

[IL]: Error: [C:\RequestReduce\RequestReduce\bin\v3.5\debug\RequestReduce.dll :RequestReduce.Configuration.RRConfiguration::.ctor][mdToken=0x60000ce][offset 0x000000BB][found ref 'System.Collections.IEnumerable'][expected ref 'System.Collections.Generic.IEnumerable`1[System.String]'] Unexpected type on the stack.

1 Error Verifying C:\RequestReduce\RequestReduce\bin\v3.5\debug\RequestReduce.dll

Ahhh. It's all making perfect sense now. Well, if you think about it (a practice that I'm quite rusty with but sometimes still capable of), it does help. As I mentioned above and is illustrated here, AuthorizedUserList expects an IEnumerable<string>. The PEVerify output complains that it is getting a plain old IEnumerable (not its generic cousin). This does make sense since config.AuthorizedUserList.Split(',') returns a string[].

Obviously, under most circumstances, there will be no problem implicitly casting the string[] to an IEnumerable<string>. The code does run in .NET 2 and 4 in Full Trust and has never caused a problem. But the .NET 2.0 runtime apparently cannot verify that this conversion is type safe, and when the code runs in partial trust, that failed runtime verification causes this exception to be thrown.
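To see the shape of the problem in isolation, here is a minimal sketch (contrived names, not the actual RequestReduce source) of the same pattern: a conditional whose branches are IEnumerable<string> and string[].

```csharp
using System.Collections.Generic;

public static class Repro
{
    static readonly IEnumerable<string> Anonymous = new List<string> { "Anonymous" };

    public static IEnumerable<string> Users;

    public static void Assign(string list)
    {
        // One branch yields IEnumerable<string>, the other string[].
        // The compiler leans on the implicit reference conversion from
        // string[] to IEnumerable<string>, which the .NET 2.0 verifier
        // cannot prove type safe -- hence the exception in partial trust.
        Users = string.IsNullOrEmpty(list)
                    ? Anonymous
                    : list.Split(',');
    }
}
```

In Full Trust this runs fine; it is only when verification is actually enforced, as in partial trust, that the unverifiable conversion surfaces as a VerificationException.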

Forcing a cast using ToList() fixes the problem

So adding nine characters to the end of the line fixes everything:

AuthorizedUserList = config == null || string.IsNullOrEmpty(config.AuthorizedUserList)
    ? Anonymous
    : config.AuthorizedUserList.Split(',').Length == 0
        ? Anonymous
        : config.AuthorizedUserList.Split(',').ToList();

And now the output of PEVerify is:

C:\Program Files (x86)\Microsoft Visual Studio 8\SDK\v2.0\Bin>peverify "C:\RequestReduce\RequestReduce\bin\v3.5\debug\RequestReduce.dll" /verbose

Microsoft (R) .NET Framework PE Verifier.  Version  2.0.50727.42
Copyright (c) Microsoft Corporation.  All rights reserved.

All Classes and Methods in C:\RequestReduce\RequestReduce\bin\v3.5\debug\RequestReduce.dll Verified.

So that’s it. Hopefully someone will find this useful but from what I can tell from others who have experienced this same error, the causes can vary widely.

Looking back on this title, I’m wondering if the bark really is worse than the bite. I suppose the good news is that in the end, no runtimes were destabilized in the making of this blog post or the events that led up to it.

Being Nice by Matt Wrock

One of the qualities that I think make a great Software Development Engineer, not to mention a good human being, is the ability to be nice. In fact, if I had to list the top 5 things that have helped me in my career, being nice would be one of them. I'm not saying that I have been particularly successful in the art of being nice (yes, it is an art), and I can think of far too many times when I was not nice, but I have made being nice a guiding principle from which I draw to color my decisions and actions, and it is an incredibly powerful tool. This may be obvious, and if you are thinking the same, that's a good sign for you and your employer. One may think it's a shame that such a concept even deserves a blog post, but alas…

The world has far too many meanies

I came head to head with this reality many, many moons ago when I worked as a waiter. Up until that point I was certainly aware that there was a good share of not nice and downright mean and inconsiderate people in the world. However, I thought they were a small minority. Waiting tables at a "just a notch above Denny's scale" restaurant quickly brought home the reality that there are a lot more unpleasant people than I had ever thought. I had never suspected that there could be so many rude, impatient, and downright awfully behaved adults around me.

Well, fortunately I run into fewer not nice people per capita in the software industry, but they are there. However, this post is not really intended to focus on mean programmers, but on developers who go the extra mile to be nice. Sure, a lot of us can be nice, or at least tolerable, but some of us are lacking when it comes to being nice backed with intention.

Being nice adds value

When I think of the top dozen people that have been a true joy to work with and taught me the most in terms of leadership and software best practices, they were all nice. Being nice provides a wonderful delivery mechanism for all sorts of value beyond just being nice. While I believe there are rare exceptions, being nice is the best means by which you can color your personal interactions and effectively drive home quality. Of course quality can certainly be delivered without being nice – certainly without going out of one’s way to be nice, but I guarantee you that if you add niceness to your interactions, it will positively influence your ability to drive home your philosophies, to promote adoption of your software and to unlock career opportunities.

Going back to thinking about the people in the industry I admire, whether "celebrity" types I don't personally know or people I have worked extensively with, their ability to be nice really is more than a nuance. It's a concrete part of their character that is obvious, something I specifically call out as a thing I like about that person, and it motivates me to want to interact with them again.

It has to be genuine

The last thing I want is for someone to interpret this as a Machiavellian attempt to gain favor from others or get others to do your bidding. We have all been exposed to those types. You’ll see this a lot in recruiters or sales individuals. You can often smell the insincerity. They like to say your name a lot in a way that makes you want to shower with a wire brush after each utterance.

No. When I think about these nice value packed individuals, they are nice because it’s a core part of who they are. They didn’t just read a pamphlet of nice pointers. They are nice because they want to be nice and they enjoy being nice.

Nice. A definition.

So what is this nice I speak of? Perhaps nice is not the best word for the point I am trying to get across. I'm talking about more than just good manners, but I'm not necessarily talking about Jesus Christ either. Nor am I talking about Santa Claus. But he does seem super super nice.

I'm talking about an intentional, proactive way of communicating and relating with others that seeks out their benefit (among perhaps several other benefits) and is blind to status or other artificial caste systems in your organization. Interacting with others as an equal and trying to be helpful. Here's a small but perhaps tangible example: the difference between someone that answers all their IMs with "Hey" or "Yeah" compared with "Hi Matt." This can be incredibly subtle. Here is why this example resonates with me. I used to work with a developer in Chile. Remotely, of course. He was incredibly competent and he was very nice. He never sent me flowers or showed huge concern if I was sick, but there was a tone to every IM conversation and email he sent. A tone of sincere helpfulness. Every initial answer to an IM was "Hi Matt!" His emails were often genius. But there was no ego in them; they were plain, to the point, clear and thorough, and had an air of "I hope that this is helpful" to them. Everyone thought he was nice, but then no one would say, "That Fernando, he is one of the nicest guys I know." He was pretty ordinary, but an example of how simply being nice can go a long, long way.

In an environment where the dev team hated to work with offshore vendors, everyone loved working with this person. He was one of the most highly paid engineers on staff. At the time he reported to me, he made more money than I made and he was worth every penny. In large part because he was nice. Honestly, when I think back to working with this individual, that was one of his most valuable characteristics. Sure, his technical competencies were superb, but colored with his niceness, they were effective as well.

Can brilliance be an excuse for not being nice?

I know there are some who will argue that although being nice is indeed a desirable quality, there are some who are certainly not nice but are "brilliant." I agree. There are some truly brilliant minds who are not nice. Yes, these people have often managed to be successful and influential. But generally speaking, I'm not interested in working with them. I may value a lot of what they have to say, but I'm not particularly keen to really engage with them.

When it comes to recruiting, I simply will not hire someone (or suggest hiring someone) who comes across as a jerk. While we should strive to produce quality software and attract candidates who are great problem solvers and solution architects, above that I want quality of life. I want to work with others who are pleasant to be around. It's good for me, it's good for the product and it's good for the team. Software development competence is easily taught; being nice is built on a lifestyle developed over a lifetime. Having interviewed hundreds of candidates, I could put them into three buckets: not nice, somewhat nice (basically not not nice) and truly nice. The ones in the not nice bucket just don't get hired. The somewhat nice group have a fair chance of being hired, proportional to their standard competencies. The nice ones often get hired, and people are excited to hire them and look for ways to justify hiring them. Don't get me wrong. They have to show promise as developers, but being nice is like the malt vinegar on fish and chips. It's the more than subtle nuance that makes me say Mmmm.

Things that make me go Mmm (Things that make me go Mmm) Things that make me go Mmm, yeah-eah-eah.Things that make me go Mmm, Mmm, Mmm (Mmm). Things that make me go Mmm.

I apologize. That was embarrassing. I will now commence with the blog post.

Compare two developers next to each other, one is difficult to be around but extremely competent and the other is very pleasant and competent but not as much as the former. I would argue that the potential of the second individual’s success is greater than the first.

The consequences of impolite behavior are often more destructive than valuable

As software engineers, many of us were not popular when we were younger. We escaped to a world of code and video games to avoid others and feed our spirit with challenges and opportunities for private success. Many of us were not stars on the soccer field. We were asked to kick the ball out (I was) instead of risking what we might actually do with that ball in the extremely unlikely event that we manage to actually gain control of the ball at all (I never did).

So it should come as no surprise that many of us share the urge to put others down in order to enhance our own lacking sense of self importance. Just spend a day reading your technically saturated twitter feed. There is a lot more complaining and snark than truly helpful tweets. But I'd ask anyone to think back on their own snarky remarks. The ones they thought were true gems. They really put a person, or the followers of a particular technology, in their place. A place much inferior to their own views. Now consider how valuable the comment was. How many were likely to change their attitudes or stumble upon better practices after hearing such remarks? Did the remark have an impact? Yeah, it felt good to say at the time, but looking back did it add or subtract from your net value as a contributor to your craft? I'm gonna guess that it deducted more value than it added. At least that is my own experience.

When I think back on some of the scenarios where I publicly put someone down or spoke condescendingly to someone as I complained about their technology or practices, in the end I really only made myself feel good and look bad at the same time. And none of those moments were moments that “made a difference” in moving my team, my values or my career forward.

I hope you have found this to be a nice post.

Bug fixes and enhancements included in RequestReduce 1.7.26 by Matt Wrock

I usually don’t blog on bug fix releases. However the bug fix release I deployed today addresses a couple serious bugs (albeit edge cases) and their fixes forced a few significant enhancements I want to call out.

  1. CSS that references the same image twice could produce sprite sheets that cut out some of the images at the end of the sprite sheet.
  2. The logic that determines which css selectors can be included within another css selector sometimes breaks with child selectors (e.g. .parent-class > .child).
  3. Upgraded to v4.44 of the MS Ajax Minifier which addresses a bug that causes a JS error in IE8 with Google’s Prettify.js. While IE8 throws an error, all other browsers suffer from the fact that this bug essentially breaks the intentions of prettify.js.
  4. RequestReduce was not handling @font-face urls or some data URIs. These are now supported.

There were other bugs but these are the really important ones worth mentioning. At work (Microsoft MSDN and Technet web properties) we are preparing for our February release and there have been a couple minor issues raised in addition to the bugs above that have forced the following enhancements.

  1. RequestReduce now handles background-position properties that use pixels, percentages or the directional attributes of top, left, right, center, bottom. Previously, RequestReduce did not consider percentage values or positive pixel units.
  2. Some tweaks to the nQuant quantizer parameters increase the quality of the images without sacrificing file size. While RequestReduce generally produces high quality png8 sprites, there are rare cases where images may appear grainy or overly opaque. Today's fix addresses most of these instances. There may still be times where image quality is subpar. This is most likely to happen in very smooth gradients that have a lot of variance in transparency levels. The best way to deal with these is to either disable the image quantization or decrease the spriteColorCount setting. The fewer colors the nQuant quantizer has to reduce to 255, the higher the quality of the final image. There is really no hard and fast rule here. RequestReduce defaults to 5000, but depending on your images, 10,000 may be fine or you may need to reduce to 2000.
  3. Thanks to PureKrome, the RequestReduce dashboard is a little easier on the eyes when displaying lists of urls and it includes exception info for failed requests. I really like this and it improved my quality of life doing some debugging last night.
  4. Added an API hook to allow one to transform the absolute url generated by RequestReduce. A popular application of this is to add prefixed subdomains to CDN hosts to allow browsers to load more content at the same time. See the API Wiki for details.
  5. Upgraded the SassAndCoffee dependency to 2.0.2. This eliminates some of the x64 instability issues of the former version and uses the new IE Chakra JS engine if available. It no longer uses V8.

Released RequestReduce 1.7.0: Giving the RequestReduce onboarding story a happy beginning by Matt Wrock

About six weeks ago I blogged about an issue with RequestReduce and its limitations with resolving the image properties of each CSS class. To recap: until today, RequestReduce treated each CSS class as an atomic unit and ignored any other classes it might inherit from. The worst side effect of this is a page that already has sprites but uses one class to specify the image, width, height, and repeatability, and then uses several separate classes each containing the background-position property of each image in the sprite sheet. Something like this:

.nav-home a span{
    display:block;
    width:110px;
    padding:120px 0 0 0;
    margin:5px;
    float:left;
    background:url(../images/ui/sprite-home-nav.png?cdn_id=h37) no-repeat 0 1px;
    cursor:pointer
}

.nav-home a.get-started span{background-position:0 1px}

.nav-home a.download span{background-position:-110px 1px}

.nav-home a.forums span{background-position:-220px 1px}

.nav-home a.host span{background-position:-330px 1px}

What RequestReduce would do in a case like this is resprite .nav-home a span because it has all of the properties needed in order to construct the viewport and parse out the sprite correctly. However, once this was done, the lower four classes containing the positions of the actual images rendered a distorted image. This is because RequestReduce recreated a new sprite image with the original images placed in different positions than they were on the original sprite sheet. So the background positions of the other nav-home images point to invalid positions.

If you are creating a site that pays tribute to abstract art, you may be pleasantly surprised by these transformations. You may be saying, “If only RequestReduce would change my font to wing dings, it would be the perfect tool.” Well, unfortunately you are not the RequestReduce target audience.

RequestReduce should never change the visual rendering of a site

One of the primary underlying principles I try to adhere to throughout the development of RequestReduce is to leave no visible trace of its interaction. The default behavior is always to optimize as much as possible without risk of breaking the page. For example, I could move scripts to the bottom or dynamically create script tags in the DOM to load them asynchronously and in many cases improve rendering performance but very often this would break functionality. Any behavior that could potentially break a page must be “requested” or opted in to via config or API calls.

This spriting behavior violated this rule all too often. I honestly did not know how widespread this pattern was. My vision is to have people drop RequestReduce onto their site and have it "just work" without any tweaking. What I had been finding was that many, many sites, and most if not all "sophisticated" sites already using some spriting, render with unpleasant side effects when they deploy RequestReduce without adjustments. While I have done my best to warn users of this in my docs and provide guidance on how to prepare a site for RequestReduce, I had always thought that the need for this documentation and guidance would be more the exception than the rule.

I have now participated in onboarding some fairly large web properties at Microsoft onto RequestReduce. The process of making these adjustments really proved to be a major burden. It's not hard like rocket science; it's just very tedious and time consuming. I think we'd all rather be building rockets than twiddling with css classes.

Locating other css properties that may be used in another css class

It just seemed to me that given a little work, one could discover other properties from one css class that could be included in another. So my first stab at this was a very thorough reverse engineering of css inheritance and specificity scoring. For every class, I determined all the other classes that could potentially “contribute” to that class. So given a selector:

h1.large .icon a

DOM elements that can inherit from this class could also inherit from:

a
.icon a
h1 .icon a
.large .icon a
.large a
etc...

For every class that had a "transformable" property (background-image or background-position), I would iterate over all other classes containing properties I was interested in (width, height, padding, background properties) and order them by specificity. The rules of css specificity can be found here. Essentially, each ID in a selector is given a score of 100, each class and pseudo-class a score of 10, and each element and pseudo-element a score of 1. Inline styles get a score of 1000 (but I can't see the dom), and the "universal" selector * is given a score of 0. When two selectors have a matching score, the winner is the one that appears last in the css.
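For illustration, that scoring scheme could be sketched roughly like this (a simplified, hypothetical helper, not the actual RequestReduce code; it ignores attribute selectors and lumps pseudo-elements in with classes):

```csharp
using System.Text.RegularExpressions;

public static class Specificity
{
    // 100 per #id, 10 per .class or :pseudo-class, 1 per element name.
    // The universal selector (*) contributes nothing.
    public static int Score(string selector)
    {
        int ids = Regex.Matches(selector, @"#[\w-]+").Count;
        int classes = Regex.Matches(selector, @"[.:][\w-]+").Count;
        int elements = Regex.Matches(selector, @"(^|[\s>+~])[a-zA-Z][\w-]*").Count;
        return ids * 100 + classes * 10 + elements;
    }
}
```

So "h1.large .icon a" scores 22 (two classes plus two elements) and would be outranked by any selector containing even a single ID.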

Once I had this sorted list, I would iterate down the list stealing missing properties until all my properties were occupied or I hit the end of the list.

At first this worked great and I thought I was really on to something, but I quickly realized that this was breaking the experience all too often. Given the endless possibilities of dom structures, there is just no way to calculate, without knowledge of the dom, which class is truly representative. Eventually I settled on only matching up a selector that has a background-position but no background-image with the most eligible and specific selector containing a background-image. While even this strategy could break down, so far every page I throw this at renders perfectly.

Although this feature does not add any optimization to other sites and only assists larger sites in adopting RequestReduce, I'm excited to provide a smoother adoption path. As a project owner who wants his library to be used, I want that adoption to be as smooth and frictionless as possible.

What else is in v1.7.0?

Here is a list of the other features that made it into this release:

  1. Improved performance of processing pages with lots of sprites. This is done by loading each master sprite sheet into memory only once and not each time an individual sprite image is found.
  2. Prevent RequestReduce from creating an empty file when it processes a single script or css file containing only a comment (after minification, such a file is empty).
  3. Support Windows Authentication when pulling script/css/sprite resources.

What’s Next?

Good question. Probably being able to process scripts due to expire in less than a week. Soon after that I want to start tackling foreground image spriting.

Reflecting on two years as a Microsoft employee by Matt Wrock

So its New Years Day and I’m thinking maybe its appropriate to write a post that’s deep and introspective. Something that speaks to a broad audience and asks the reader to stop, reach deep within. Real deep. Ok even deeper…deeper still. Wait. Uh oh we’ve gone too deep now. Pull back. Further. Keep going. Ugh…now I’m just tired.

Anyhoo, I really have been wanting to write about the things I have learned since joining Microsoft. Things I have learned about working at Microsoft in general and things I have learned about software engineering. So I’ll start with some observations about my employee experience at Microsoft and then get a bit more technical talking about practices I have learned and found valuable. These are not necessarily “Microsoft practices” but just things I have learned by working with a new team and new people.

This dovetails nicely into my first point:

One Microsoft Way is just an address

It's very common to hear non-Microsoft employees say things like "Microsoft employees are…" or "That's very typical Microsoft." People often think of Microsoft as an organization that acts in one accord and that all Microsoft employees, managers and practices can be codified within a single set of characteristics, values and practices. I largely subscribed to this perspective before becoming employed at Microsoft.

The reality is that Microsoft is like a collection of many small to mid sized companies, and each can have dramatically different practices and employee profiles. There are teams that follow a variety of practices, from traditional waterfall to "scrumerfall" to textbook scrum to strict TDD and pair programming XP-like disciplines. There are teams where everyone works in their own office, others that always work in an open team room, and others that work sometimes in a team room but have their own office if they want some "alone time." There are teams who follow strict policies prohibiting the use of any Open Source Software, others who actively seek out OSS to incorporate into their code base, and others who look for opportunities to open source their own projects.

You have microsofties who will only ever use Microsoft tools and others carrying around iPhones and wearing Chrome T-shirts. I think this is an important fact to keep in mind, and most likely typical of other large companies. I think perhaps 15 years ago the culture may have been more homogeneous, but it is far from that now. There is a lot going on behind Microsoft source control that would surprise a lot of anti-MS geeks.

Per Capita, Microsoft engineers are the brightest and most passionate I have ever worked with

Honestly, this has been both a curse and a blessing for me, but by far mostly a blessing. Overall, the caliber of the engineers I work with is higher than at the startups I have worked for in the past. I had been used to easily obtaining the role of "rock star" developer at previous gigs. This is not because I am particularly smart or clever. Far from it. I just work incredibly hard (too hard) and really, really like what I do. Being around so many great developers was difficult to adjust to because it blends poorly with my self-conscious nature. It's normal for me to go through a 3 month period of "Oh shit, I'm gonna get fired today," and this period was dramatically prolonged at Microsoft.

The flipside is that I get to come to work everyday and blab for long periods about technology and developing practices and disciplines with others who are equally if not more informed and enthusiastic. I am constantly learning new technical tidbits and insightful disciplines and interesting ways of looking at problems.

There is a “Microsoft Bubble” and it must be popped

This may appear to contradict my first point. I still stand by my statement that the Microsoft employee population and vast array of different business units cannot be pigeonholed. However, I have found a surprisingly strong tide of what some call "old Microsoft." What is old Microsoft? Well, I've only been there two years so I can't speak with much authority here. Some think it's a group of grey haired and clean shaven engineers hunkering down in the basement of Building One, writing UML diagrams and a huge Gantt chart behind a technology to bring down all continuous integration servers, DVCS repositories and instances of Firebug. I'm keeping backups of all of these in case this is true.

In all honesty, "old Microsoft" to me is waterfall processes, large, monolithic architectures and a "not invented here" mentality. What is interesting to me is that this is not an active tide striving to beat down any ideology opposed to it. Rather, it's sheer ignorance resulting from a simple lack of awareness of what goes on outside of Microsoft. I have noticed that, especially among the upper ranks, it is infrequent to see outsiders recruited. There are a lot of very seasoned engineers who have been in the industry for years and years, and almost every one of those years has been at Microsoft. Some of these individuals simply have not had exposure to other organizational practices and have grown comfortable with what they have practiced for years.

These are not evil people. They are smart and simply need to be educated. I need to be educated. We all need education every day, and a diverse one at that. If others do not approach this "old guard" and introduce them to evolving and progressive practices, because they are intimidated or afraid of demonstrating a lack of company loyalty, it is mainly Microsoft who will suffer. Fortunately there are some very influential and some not so influential folks doing just this. As a result you are seeing more groups releasing earlier and more often, using things like OSS instead of the stock MS tools, and using tools like Mercurial instead of TFS. These engineers are more loyal to quality than they are to the Microsoft brand. I believe it is employees like these that an employer should seek out. An employee loyal to quality and engineering efficiencies should never be perceived as undermining Microsoft's interests but rather as raising the bar to higher standards and continuous improvement.

Some Valuable Technical Practices

Here are some purely technical practices I have picked up by working with my team over the past couple years. As I said before, these are not practices unique to Microsoft; they are simply new tools I have learned, just as I would learn from any other new team.

Test Driven Development: It supports and is not opposed to rapid development

For years before joining Microsoft I had been highly intrigued by TDD (Test Driven Development). It was a practice I truly believed in and wanted to master. Unfortunately I was too mired in managerial responsibilities to really master it and teach it to others. Also, TDD is one of those practices that is difficult to learn on your own, and it is very easy to adopt anti-patterns without knowing it. One great example is understanding the difference between unit tests and integration tests. If you look at the “unit tests” written by someone without any guidance, these tests are often actually integration tests and not unit tests at all. Developers end up frustrated writing these tests because they are fragile, time consuming, and sometimes nearly impossible to write before writing implementation code. A typical response you hear from developers struggling to adopt TDD is something like, “we tried it but we ran out of time” or “management did not want us spending the time writing tests.”

Having lived in a strict TDD culture and learning a lot of the tricks and principles of true TDD, I now see that I don’t have time NOT to write unit tests. Yes, I do believe that unit tests will make V1 longer to develop. Some may disagree. But with each new version or release, unit testing incrementally increases the velocity of getting new features to market. When done right, unit tests are the safety harness any new team member needs when adding code to an existing codebase. I have worked with code bases that had parts of code that developers were scared to death to touch for fear of breaking some functionality that they were not aware of. On the other hand, if there is good test coverage, I can be fairly confident that if I break existing functionality, tests will fail and alert me to this fact.

One of the key learnings for me about TDD was the principle of testing ONLY the functionality of the method being tested. Too often, tests try to test all the way down the stack and you end up with a lot of tests repeating one another. The tests take longer to write and take much longer to rework when refactoring code. Learning to mock or fake underlying services here is key. If you have an MVC action method that writes to a logging service, there is no need to test the logging in the tests built around the action method. You do that in the tests you write for the logging service.
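To make that concrete, here is a minimal C# sketch of faking a logging dependency. The names (ILogger, GreetController, FakeLogger) are hypothetical illustrations, not from any codebase mentioned in this post; the test verifies only the action method’s own behavior and leaves the real logging service to its own tests.

```csharp
// Hypothetical interface and controller, for illustration only.
public interface ILogger
{
    void Log(string message);
}

public class GreetController
{
    private readonly ILogger _logger;
    public GreetController(ILogger logger) { _logger = logger; }

    public string Greet(string name)
    {
        _logger.Log("Greet called for " + name);
        return "Hello, " + name;
    }
}

// The fake swallows log calls so the test exercises only Greet itself.
public class FakeLogger : ILogger
{
    public int Calls;
    public void Log(string message) { Calls++; }
}

[Test]
public void Greet_ReturnsGreeting_WithoutTouchingRealLogging()
{
    var controller = new GreetController(new FakeLogger());
    Assert.AreEqual("Hello, Matt", controller.Greet("Matt"));
}
```

A mocking framework would do the same job with less ceremony, but a hand-rolled fake makes the principle visible: the logging service never runs inside the controller’s tests.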

Perhaps above all, the virtue of TDD is that it imposes a requirement of designing decoupled code, each component having as few responsibilities as possible. That’s just good design. However I will say as an OSS contributor to a code base with no dedicated test engineers, I am indebted to the QA virtues as well.

Always create a Build.bat file that can setup, build and deploy your entire app

This has been so incredibly helpful. I never create anything, either at work or on my own, without this. The bat file itself is a very small batch script that invokes a much larger PowerShell script, ensuring that the build can be launched from PowerShell as well as from an old school DOS command console. As PowerShell gains momentum, the batch file may eventually become unnecessary. I plan to devote a future post to this topic alone.
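As a sketch of that wrapper pattern (the file name build.ps1 is just an assumed convention here), the entire .bat file can be little more than a hand-off to PowerShell:

```bat
:: Build.bat -- thin wrapper so the build launches from an old school
:: DOS console as well as from PowerShell. All real work lives in the
:: PowerShell script it invokes.
@echo off
powershell -NoProfile -ExecutionPolicy Bypass -File "%~dp0build.ps1" %*
exit /b %ERRORLEVEL%
```

The `%~dp0` prefix makes the wrapper runnable from any working directory, and `%*` forwards any arguments (a build target, for example) straight through to the PowerShell script.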

This script should ideally set up ALL dependencies including IIS site configuration, database creation, installation of any services, certificates, etc. All of this can be done via PowerShell and other command line tools. If there is something that has to be manually set up, I would argue that either you do not fully understand the scripting capabilities of PowerShell or of the technology you need to configure, or you are using a technology that you should not be using.

This script (or collection of scripts) should also be able to package all build artifacts. Typically this is the script invoked by your continuous integration system.

The script may take some time to write, but it will pay off fast and will drastically reduce the time needed for onboarding new team members. You no longer have to sit with a new dev and hold their hand to get an app up and running, or hand them an outdated 20 page document full of frustrating mistakes. The script itself is now your build and deployment document. If the build script fails, it means no new code is written until it is fixed.

For an example of what such a script looks like in the wild, check out Build.bat in my RequestReduce GitHub repo and follow it through to its PowerShell script. I use PSake to manage the build. I like its native support for PowerShell over MSBuild or NAnt, but those will work too.

Use DVCS for source control

The biggest downside to using a DVCS (Distributed Version Control System) like Mercurial at Microsoft is that if you ever find yourself on a team that uses TFS, and inevitably you will, you will find yourself craving to dine on broken glass in an effort to mask the pain of bad merges, locked files and an abysmal disconnected user experience.

My team uses Mercurial because it is more Windows friendly than Git. I do use Git for my personal project RequestReduce. I’ll admit that when I was migrating from SVN to Mercurial 18 months ago, there were some learning curve issues. I often find that it takes a while for DVCS newbies to “get it.” But once they do, they will swear by it. Let’s face it, this is the future of source control.

From an install and setup perspective, the comparison of Mercurial or Git to TFS is stark indeed. TFS is a monster (and not one of those cute and friendly monsters in Monsters Inc.), while Mercurial or Git is refreshing to set up in comparison (like a young Golden Retriever puppy – so small and cute you just want to spoon it – perhaps I have said too much here). However, this is the least of the benefits of DVCS. To me the true beauty is having a complete local repository that I can commit to, branch, and merge, all locally. Also, the complete history of the repo, including all activity both local and remote, is stored locally. I can play and merge and commit as granularly as I want without disturbing or being disturbed by others. And merging is rock solid. I get far fewer merge conflicts than I ever had on SVN, or other weirdness on TFS.
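That entirely local workflow looks something like this (shown with Git; the Mercurial commands are analogous, and the file and branch names are made up for illustration):

```shell
# Everything below happens on local disk -- no server is ever contacted.
mkdir demo-repo && cd demo-repo
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git checkout -q -b main                 # name the initial branch
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"       # local commit
git checkout -q -b feature              # local branch
echo "v2" >> app.txt
git commit -q -am "feature work"
git checkout -q main
git merge -q feature                    # local merge
git log --oneline                       # the entire history lives locally
```

Only when you choose to push or pull does any remote repository get involved; until then you can branch and merge as experimentally as you like.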

My other big tip here is to learn the command line. This may seem daunting if you are used to a GUI, but once you learn it, I assure you that you will have a better understanding of what it is doing and how it works, which can be very important. Also, even to an appalling typist like myself, it is much faster.

Microsoft: A far from perfect great place to work

To sum up my time so far at Microsoft, it’s been a really good place to work. Overall it has been a very positive experience and I have no regrets. It’s like working in a technical playground. Ever seen a young kid in a big playground? That’s me when I realize I get to build apps for a day. However, it’s far from perfect. Working for Microsoft had been a long time dream, one I had long idealized. I had spent my career in startup mode and always wondered how outfits like Microsoft handled things like data center deployments and recruiting. I had always assumed that they handled those things in a far superior manner than I ever had. Turns out there are a lot of kinks even at Microsoft, probably because these are really hard things to get right. There are times when I laugh at how much I idealized and even romanticized Microsoft. It’s run by a bunch of humans like most other companies, and humans can only be so right so often.

My next place of employment will be run by aliens. That should really take things to the next level.