Course Correction by Matt Wrock


March 1st marks a significant one year milestone for me. Over the past year I have made several lifestyle changes as a sort of major "course correction" that has had a profound impact on my general well-being and outlook on life. I made the first intentional and tangible change on March 1, 2017. However, as I write this, I am remembering other actions taken slightly earlier that seem more significant now than they did at the time. Still, March 1 seems like a good solid checkpoint, mainly because I can actually remember it!

One byproduct of these changes has been a near halt in blogging over the last year as well as a significant cut back in open source contributions made on my own time. So I thought this one year mark may make a fine occasion to share the changes I have made, what led me to make them and how I think those changes are shaping a new life for me now.

This is not going to be a technical post by any means, but I think what I have to share may resonate with others who find themselves in self-defeating patterns: spending far too much time working at their jobs, contributing to open source, or chipping away at other "side projects" in their free time, while feeling a lack of connection and meaning in their lives like I was feeling a year ago. This post is also an opportunity for me personally to process the last year, make sense of what has transpired, and better understand where I am today so I can plot a path forward.

So let's go back to February, 2017. What was going on at that time that would prompt me to change course? I had not exercised regularly or even semi-regularly in 5 years. It's worth mentioning that just prior to then I was an ultra-marathoner and had completed 100 and 50 mile events. I was heavier than I had ever been - not morbidly obese but uncomfortably overweight, well over a healthy BMI. I worked constantly - not necessarily work I was being paid for - but it could indeed be called work: blogging, open source contributions, answering forum questions, troubleshooting packer builds. Every day I brought my laptop to bed and would usually work myself to sleep. Sometimes I would wake later in the night and work some more, and then I would start working again just after waking. I was always disappointed when weekends or a vacation showed up and was relieved on Mondays. Oh, and I was generally miserable and knew it. I felt like a failure in every area of my life.

I could now spend several paragraphs going into some detail about how the events of the preceding 10 years led me to this state. Believe me, I did, and I just cut them all out. You don't want to read all that. Let me see if I can sum it all up in a couple sentences. I thought and hoped that maybe if I worked hard enough, I could create something great. This started by dedicating some personal time to an open source project, and I loved the experience but also stopped exercising to buy more time for the project. Eventually one project led to another, and soon I was contributing to several projects, actively blogging, and averaging 4 to 5 hours of sleep a night.

As time went on I lost track of what I was trying to accomplish. I was working constantly and had no clear vision of where I wanted to go. Eventually all the constant work simply became habit and a new default state. In time I became conscious of the fact that I had lost any clear long term goal. I was just chasing multiple "personal" assignments and feeling like I was drifting about, getting nowhere. Simultaneously I felt totally out of shape, uncomfortably overweight and a failure as a father and husband. Something had to change.

I had a general idea of what initial changes I needed to make for years prior to this. It was simple: change my diet, start exercising regularly, sleep like a normal person, and sit down and think hard about what I wanted for myself and my family and figure out what I needed to do to get there. Again, I could get very philosophical and explain over many paragraphs why it took years for me to actually make a move. I just couldn't let go of the terrible habits I had acquired. I was afraid of what I might be giving up. What if stopping my work plunged me into a void of mediocrity? Well, on March 1, 2017, I made the first move and have kept on going ever since.

As I mentioned above, there were actually a couple important changes I made prior to March, 2017. At the end of October, 2016 I changed teams at work, which allowed me to focus on some technology that truly stimulated and interested me during work hours and made me feel less compelled to seek technical satisfaction after hours. As I very slowly weaned myself off of some open source projects, I soon instituted a new personal rule: don't bring my laptop to bed. At the time that was not part of some great plan to alter my habits, it just seemed like the right thing to do, but the impact was huge. You see, in my line of work, it's really super hard to work without a computer.

I have made a lot of changes between last March and now. These all transpired rather gradually. The first changes were all physical. The very first change was cutting out my daily habit of drinking two glasses of wine with dinner. This wasn't so much about eliminating alcohol consumption. Rather, it was a strategy to keep me from eating too much. After a couple glasses of wine, my self-control would disappear, my appetite would spike, and I'd slip into a semi trance state of eating fatty foods and sipping wine. In terms of improving my eating habits, this move seemed like the lowest-hanging fruit of all and a good place to start. My rule was simply no drinking at home. That seemed like it would squelch my nightly binge habit but allow me the occasional drink at social events. I had been wanting to make this particular change for months but could just never do it. By the time dinner rolled around, the thought of denying myself those two glasses of wine just seemed cruel.

Well, for whatever reason I was now properly motivated and managed to successfully drop the habit. After just a few days I was feeling better and, perhaps more importantly, felt like I had dug myself just a tiny bit out of my hole. Every week I would make some other change to my diet, like replacing my breakfast of Starbucks lattes and pastries with home brewed coffee and oatmeal. After a few months, my diet had settled into pretty much what it is today: mostly whole, unprocessed, plant-based foods. I'll eat dairy or fish when I'm out or if someone else cooks it, but not as a staple.

In addition to dietary changes, I managed to carve out a daily exercise routine. This had been a real struggle over the past several years. I went from running 60 miles a week for years to intentionally dropping to zero so I could get more work done. Then, years later, having realized how bad an idea that was, I just couldn't maintain a regular exercise habit. Over time my fitness regressed to where I could not run more than a couple miles without injury, and then I couldn't run at all. Well, in March 2017 I started a daily walking habit that became a mix of walking and running, and by mid June I was running four miles a day. Oh man, this brought me so much joy and I remember ending those runs feeling so much gratitude. I had thought my running days were over but now I was clearly back in the saddle.

While these physical changes in diet and fitness were super great, I still felt an uncertainty and an overall lack of vision regarding the forward momentum of my life. For so long I had been razor focused on open source projects with a hope in the back of my head that eventually I would just fall into some great opportunity that would provide moderate wealth and total independence. In March, along with the health related adjustments, I mostly suspended my "extracurricular" open source involvement. Part of my overall plan was to completely reassess my goals and essentially recalibrate my personal mission. I was and am still passionate about Windows server automation, but I wanted to envision an end game, and perhaps it would be something bigger and broader than writing code. I knew I needed to explore what it was I wanted to contribute to the world in my lifetime as well as what was the life I wanted for me and my family. Then I needed to determine what path was going to get me from where I am to that future vision.

This turned out to be a very difficult endeavor. I just didn't really know how to answer many of the questions that needed answering and I was the only person who could possibly answer them. I knew some of the basics: I wanted financial independence, to provide a nurturing environment for my wife and children, have more time to spend with family, and generally make the world a better place. These are great things to want but do not make for very actionable goals in themselves. I felt incredibly antsy and restless. I wanted something tangible I could do and apply myself toward that would propel me in the direction of obtaining all of these things, but I had no idea what that thing or activity could or should be.

I'm pretty good at setting goals and achieving them. I'm not always good at choosing the right goals. This has especially been the case in the past few years. Most of my life has been a migration from one obsession to another. I find an interest and fully submerge myself in it. It's both my biggest strength and weakness. So being in a state with no obsession to nurse felt empty and unsatisfying. Now, all that being said, I felt oddly on the right track. Despite my restlessness, I felt the most positive I had in years. With my newly recovered health, I felt like I was standing on a solid foundation and like I was observing life and my surroundings through a new and clearer lens. With this more centered outlook, I was confident some action plan would reveal itself in time.

This search for a "mission" led me to make more changes to my daily routine. If diet and exercise changes could make me feel this much better, what other positive changes could I make to keep this trend going? First, I replaced listening to technical podcasts during workouts and while driving with listening to books related to a variety of self-development topics. I've listened to about 40 to 50 books over the last year. The topics have been all over the place: popular psychology, philosophy, finance, religion. I've listened to some incredible books and also some real duds. All in all it has been a true journey. One book will introduce new concepts or authors which will lead me to another set of books. These have taught me a lot about a variety of topics and exposed me to a ton of new ideas.

In June, another new routine I took up was meditation. Years ago I had a daily meditation practice and I stuck with it for several years. However as my career blossomed, it eventually dropped away. But now, as I found myself seeking to learn about myself and discover a new path in life, it seemed like a good activity to take up again. Remembering back to my previous practice years ago, I recalled the honest introspection it could cultivate. This seemed like something sorely needed now. As I looked around myself for a meaningful way forward, I wanted to proceed with brutal honesty and authenticity. I did not want to choose goals that just made me feel good or would allow me to gain approval from others, I wanted to find and live my unique self, grounded in what was transpiring around me and not a fantasy of some future state to which I wanted to escape.

I am going to assume that the audience reading this blog may not have direct experience with meditation. That is totally ok and I will try to describe it in enough detail that you can have a sense of what I am talking about. The topic of meditation is immensely broad. There are a multitude of different meditation disciplines and traditions. Some differ so much from one another that it is hard to say that both are the same thing and many others may appear almost identical. While I dabbled in a few forms of meditation in my early twenties, I began what I would call a formal Zen meditation practice in the mid-nineties. I lived less than a mile from the San Francisco Zen Center and practiced there regularly for a few years until I moved back to Southern California where I continued to practice on my own for several more years. Zen meditation, from a "logistical" perspective is very simple. You typically sit on a cushion but may also sit in a chair or on anything that allows you to sit in an erect posture with your back straight.  As you sit, one typically places their concentration upon their breath - paying close attention to each inhale and exhale. The intent is not to find or discover some "understanding" but to remain in the present moment. Inevitably thoughts will arise. Thoughts about some event or interaction that happened or about some future fantasy or dread. In meditation, we don't try to avoid these thoughts because that is futile, rather we observe these thoughts without attaching to them or repelling them. At least that is the idea. In actual practice, attachment and repulsion are vibrant realities that are yet more fodder for observation. We catch our mind wandering and becoming absorbed in various thoughts and emotions and then gently bring ourselves back to the breath.

That's all I'm going to cover on the mechanics of meditation. If it is something that interests you or you are curious to learn more, there are a ton of books, blogs, and YouTube videos on the topic that can do a far better job explaining things than I can. There are several "flavors" of meditation that all follow roughly the same technique I described above. They are sometimes grouped under their more contemporary and secular label, mindfulness practice - so you might include that in your googling. A couple resources I think are great for beginners: Mindfulness: An Eight-Week Plan for Finding Peace in a Frantic World and the audible lecture series The Science of Mindfulness: A Research-Based Path to Well-Being.

This practice proved and continues to prove itself very powerful. I don't know if I just forgot my experience of meditating years ago but this time things seemed more focused, energetic and penetrating. Honestly, I think the experience of the past few years brought a sort of brokenness that breathed a deeper level of honesty and surrender into my practice.

Just before I started meditating again, I began taking walks with my dog Ocean in the afternoon and evening. Shortly after beginning a daily morning routine of sitting meditation, I started treating these afternoon and evening walks as a mindfulness exercise. I'd try to focus on being present during the walk instead of daydreaming about the future or obsessing about something that happened that day. Of course every day I have varying degrees of success and failure with that intention, but that’s OK. It's the intention that is important.

These new non-physical habits have unfolded a surprisingly fascinating internal journey. They have helped me identify some of the warped ways I interpret my experiences and gain healthier insights on how to view my life and how to act in the world, but I really feel like I have just scratched the surface. This does not at all downplay the benefits of my changes to diet and exercise, and I sort of think I would have never gotten off the ground without the changes made to my health. While this was by no means my strategy, they gave me small attainable goals that had a tangible and measurable impact. This not only made me feel better physically but it gave me confidence in myself and left me wanting more positive change and positivity in general.

So now, after a year of making all these changes, have I found my mission in life? Have I received transmission of my grand path and redefined life purpose? Well, not really, but that does not indicate failure. On the one hand, learning to just allow myself to live more fully in the present moment without the need to constantly focus on some future state is a sort of "goal" in itself. That sounds really paradoxical and may be a completely wrong way to phrase it, but I honestly believe we can get stuck in becoming "human doings" and lose sight of what it is to be a human being. I could almost describe my entire workaholic epic as just that. I was stuck thinking I needed to do, do, do in order to achieve some incredibly vague idea of myself in an unknown future state that was never real at all. That does not discount everything I did or judge all my actions as misguided, but the predominant energy I was tapping into was an energy of supporting and chasing an image of myself that was entirely illusory.

Maybe tomorrow I will wake up and I'll have a lightning flash of insight into "the thing" I need to do, or maybe over the next several years, circumstances around me will shape themselves and guide me unknowingly into an entirely different future from the present I live in now. Either scenario may be equally valid, but I believe that in either case, a genuine "calling" emerges from an understanding of our true self, and such an understanding best arises out of a spirit of surrender and letting go into the present.

Please don't get me wrong. There are absolutely some who are in a bad place and need to take responsibility and act in order to get themselves to somewhere else ASAP.

Here is another possibility: maybe the ultimate path of truth is right in front of us, doing exactly what we are doing now. As we let go into the present, we become transformed from the inside and the outside starts to look very different. Maybe as I learn to live in the present, I approach my current day job as an opportunity for meaningful global change no matter what that job is. It's where I am right now and is therefore the absolute best place for me to be and exercise my unique talents. I, like all of us, bring something unique to my present that absolutely no one else has, and by embracing that truth, we may become truly great at what we do. How many of us are climbing a ladder to nowhere and feel like utter failures because we have not arrived at a somewhere we cannot even define? Maybe we need to just dust ourselves off, fall off the ladder, and be rescued by right where we are now.

Retiring the Boxstarter Web Launcher by Matt Wrock


The "Web Launcher" which installs and runs Boxstarter using a "click once" install URL, will soon be retiring. This post will discuss why I have decided to sunset this feature and how those who regularly use this feature can access the same functionality via other methods.

What is the Web Launcher

When I originally wrote boxstarter, one of the primary design goals was that one could jump on a fresh Windows OS install and launch their configuration scripts with almost no effort or pre-bootstrapping. The click once install technology seemed like a good fit and indeed, I think it has served this purpose well. With a simple, easy to remember URL, one can install boxstarter and run a boxstarter package. This only works when invoked via Internet Explorer, and while I do not use IE as my default browser, this restriction is completely viable for a clean install where IE is guaranteed to be present.

Why retire a good thing?

Again, the click once installer has been a very successful boxstarter feature. The only hassle it has really caused has been for users wanting to use it from Chrome or Firefox. It has also been known to trigger false positive malware detection from Windows Smart Screen for reasons that usually baffle me. Both of these issues are really minor.

I am retiring it due to cost and time. Using click-once requires that I maintain a Software Signing certificate. I used to be able to obtain one for free, but the provider I have used has started to charge and made the renewal process particularly burdensome. The friction is not unreasonable given the nature of the company and I am truly grateful for the years of free service. Further, the click once installer requires some server side logic requiring me to pay hosting fees. As a former Microsoft Employee, I could host this on Azure for free but I no longer benefit from free Azure services.

I don't at all mean to come off like I'm on the brink of bankruptcy or anything like that. However, it seems unwise to pay hundreds of dollars a year for cert renewals and hosting fees when the fact of the matter is that almost all of this value can be accessed for free.

When will the Web Launcher retire?

I do not intend to yank the installer off the site right away. I'll likely keep it there for at least a few months. However, I will not be renewing the code signing certificate which means that starting June 28th 2017, Windows will warn users that the certificate is from an untrusted publisher.

I have removed documentation from the website that talks about the Web Launcher and replaced that documentation with new instructions for installing Boxstarter over the web and installing packages via boxstarter.

How can I install Boxstarter and install packages via the web without the Web Launcher?

Actually quite easily thanks to powershell. For some time now, I have shipped a bootstrapper.ps1 embedded in a setup.bat file downloadable from the website. I am making some minor enhancements to this bootstrapper that will make it easy to install the boxstarter modules by simply running:

. { iwr -useb } | iex; get-boxstarter -Force

This will install Chocolatey and even .Net 4.5 if either are not already installed and then install all of the necessary boxstarter modules and even import them into the current shell. The installer will terminate with a warning if you are not running as an administrator or have a Restricted Powershell Execution Policy.
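If the installer does stop on the execution policy warning, relaxing the policy for just the current elevated session before re-running it is usually enough. This is not part of the bootstrapper itself, just a standard Powershell command you can run first:

Set-ExecutionPolicy Bypass -Scope Process -Force   # affects only the current process, not the machine-wide policy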

Once this runs successfully, one can use the Install-BoxstarterPackage command to install their package or gist URL:

Install-BoxstarterPackage -PackageName <package name or gist URL> -DisableReboots

One can consult the command line help or the boxstarter website for details on how to use the command.

I understand this is a tiny bit more involved than the Web Launcher. You cannot both install boxstarter and run a package in a single command, and there is more to type than a simple URL.

The reason I did not expose the bootstrapper like this in the first place was that Powershell v3, which introduced Invoke-WebRequest (aliased iwr), was not at all the norm at the time, and the command that accomplishes the same in Powershell v2 was more verbose and awkward:

iex ((New-Object System.Net.WebClient).DownloadString('')); get-boxstarter -Force

Now I suspect that the majority of boxstarter users are on Powershell 3 or more likely even higher. If you are still on version 2, you can use the longer command above.

Habitat application portability and understanding dynamic linking of ELF binaries by Matt Wrock

I do not come from a classical computer science background and have spent the vast majority of my career working with Java, C# and Ruby - mostly on Windows. So I have managed to evade the details of exactly how native binaries find their dependencies at compile time and runtime on Linux. It just has not been a concern in the work that I do. If my app complains about missing low level dependencies, I find a binary distribution for Windows (99% of the time these exist and work across all modern Windows platforms) and install the MSI. Hopefully when the app is deployed, those same binary dependencies have been deployed on the production nodes, and it would be just super if it's the same version.

Recently I joined the Habitat team at Chef and one of the first things I did to get the feel of using Habitat to build software was to start creating Habitat build plans. The first plan I set out to create was .NET Core. I would soon find out that building .NET Core from source on Linux was probably a bad choice for a first plan. It uses clang instead of GCC, it has lots of cmake files that expect binaries to live in /usr/lib, and it downloads built executables that do not link to Habitat packaged dependencies. Right out of the gate, I got all sorts of build errors as I plodded forward. Most of these errors centered around a common theme: "I can't find X." There were all sorts of issues beyond linking that I won't get into here, but I'm convinced that if I had known the basics of what this post will attempt to explain, I would have had a MUCH easier time with all the errors and pitfalls I faced.

What is linking and what are ELF binaries?

First let's define our terms:


There are no "Lord of the Rings" references to be had here. ELF is the Extensible and linkable format and defines how binary files are structured on Linux/Unix. This can include executable files, shared libraries, object files and more. An ELF file contains a set of headers and a number of sections for things like text, data, etc. One of the key roles of an ELF binary is to inform the operating system how to load a program into memory including all of the symbols it must link to.


Linking is a key part of the process of building an executable. The other key part is compiling. Often we refer to both jointly as "compiling" but they are really two distinct operations. First the compiler takes source code files and turns them into machine language instructions in the form of object files. These object files alone are not very useful for running a program.

Linking takes the object files (some might be from source code you wrote) and links them together with external library files to create a functioning program. If your source code calls a function from an external library, the compiler gleefully assumes that function exists and moves on. If it doesn't exist, don't worry, the linker will let you know.
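To make those two distinct steps concrete, here is a hypothetical example of building a small program against ZeroMQ with GCC; main.c and the -lzmq library name are placeholders for whatever your project actually uses:

# Step 1: compile - translate source into an object file; unresolved external symbols are fine here
gcc -c main.c -o main.o

# Step 2: link - resolve those symbols against external libraries to produce a runnable program
gcc main.o -o myapp -lzmq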

Often when we hear about linking, two types are mentioned: static and dynamic. Static linking takes the external machine instructions and embeds them directly into the built executable. If all external dependencies of a program were statically linked, there would be only one executable file and no need for any dependent shared object files to be referenced.

However, we usually dynamically link our external dependencies. Dynamic linking does not embed the external code into the final executable. Instead it just points to an external shared object (.so) file (or .dll file on Windows) and loads that code into the running process at runtime. This has the benefit of being able to update external dependencies without having to ship and package your application each time a dependency is updated. Dynamic linking also results in a smaller application binary since it does not contain the external code.
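Continuing the hypothetical example above, you can see the difference by building the same program both ways and inspecting the results with ldd (the static build assumes a static archive such as libzmq.a is available):

# Dynamic (the default): the executable only records references to shared objects
gcc main.o -o myapp -lzmq
ldd myapp            # lists the .so files that will be loaded at runtime

# Static: the library's machine code is copied into the executable itself
gcc -static main.o -o myapp-static -lzmq
ldd myapp-static     # reports "not a dynamic executable"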

On Unix/Linux systems, the ELF format specifies the metadata that governs what libraries will be linked. These libraries can be in many places on the machine and may exist in more than one place. The metadata in the ELF binary will help determine exactly what files are linked when that binary is executed.

Habitat + dynamic linking = portability

Habitat leverages dynamic linking to provide true application portability. It might not be immediately obvious what this means, why it is important, or whether it is even a good thing. So let's start by describing how applications typically load their dependencies in a normal environment and the role that configuration management systems like Chef play in these environments.

How you manage dependencies today

Let's say you have written an application that depends on the ZeroMQ library. You might use apt-get or yum to install ZeroMQ, and its binaries are likely dropped somewhere into /usr. Now you can build and run your application and it will consume the ZeroMQ libraries installed there. Unless it is told otherwise, the linker will scan the trusted Linux library locations for shared object files to link.

To illustrate this, I have built ZeroMQ from source, which produced a shared library and put it in /usr/local/lib. If I examine that shared object with ldd, I can see where it links to its dependencies:

mwrock@ultrawrock:~$ ldd /usr/local/lib/
        => (0x00007ffffe05f000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f7e92370000)
        => /usr/local/lib/ (0x00007f7e92100000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e91ef0000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e91cd0000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e91ac0000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f7e917a0000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e91490000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e910c0000)
        /lib64/ (0x00007f7e92a00000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e90e80000)
        => /lib/x86_64-linux-gnu/ (0x00007f7e90c60000)

They are all linked to the dependencies found in the Linux trusted library locations.

Now the time comes to move to production and just like you needed to install the ZeroMQ libraries in your dev environment, you will need to do the same on your production nodes. We all know this drill and we have probably all been burned at some point - something new is deployed to production and either its dependencies were not there or they were but they were the wrong version.

Configuration Management as solution

Chef fixes this, right? Kind of. It's complicated.

You can absolutely have Chef make sure that your application's dependencies are installed with the correct versions. But what if you have different applications or services on the same node that depend on a different version of the same dependency? It may not be possible to have multiple versions coexist in /usr/lib. Maybe your new version will work or maybe it won't. Especially for some of the lower level dependencies, there is simply no guarantee that compatible versions will exist. If anything, there is one guarantee: different distros will have different versions.

Keeping the automation with the application

Even more important - you want these dependencies to travel with your application. Ideally I want to install my application and know that, by virtue of installing it, everything it needs is there and has not stomped over the dependencies of anything else. I do not want to delegate the installation of its dependencies and the knowledge of which version to install to a separate management layer. Instead, Habitat binds dependencies with the application so that there is no question what your application needs, and installing your application includes the installation of all of its dependencies. Let's look at how this works and see how dynamic linking is at play.

When you build a Habitat plan, you specify each dependency required by your application in the plan:

pkg_deps=(core/glibc core/gcc-libs core/libsodium)
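For context, that pkg_deps line lives in a plan.sh next to the rest of the package metadata. The following is only a rough sketch of what a ZeroMQ-style plan might look like, not the actual core plan; the origin, version, source URL and callback bodies are illustrative:

pkg_name=zeromq
pkg_origin=mwrock
pkg_version=4.1.4
pkg_source="https://example.com/zeromq-${pkg_version}.tar.gz"   # placeholder URL
pkg_deps=(core/glibc core/gcc-libs core/libsodium)
pkg_build_deps=(core/gcc core/make)

do_build() {
  ./configure --prefix="$pkg_prefix"
  make
}

do_install() {
  make install
}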

Then when Habitat packages your build into its final, deployable artifact (.hart file), that artifact will include a list of every dependent Habitat package (including the exact version and release):

[35][default:/src:0]# cat /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/DEPS
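The output of that cat is not reproduced above, but the DEPS manifest is simply a plain-text list of fully qualified package identifiers (origin/name/version/release), one per line. Based on the plan dependencies and the package paths that appear in the ldd output below, it would contain entries along these lines (illustrative, not copied from a real build):

core/glibc/2.22/20160612063629
core/gcc-libs/5.2.0/20161208223920
core/libsodium/1.0.8/20161214075415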

At install time, Habitat installs your application package and also the packages included in its dependency manifest (the DEPS file shown above) in the pkgs folder under Habitat's root location. Here it will not conflict with any previously installed binaries on the node that might live in /usr. Further, the Habitat build process links your application to these exact package dependencies and ensures that at runtime, these are the exact binaries your application will load.

[36][default:/src:0]# ldd /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib/
        (0x00007fffd173c000)
        => /hab/pkgs/core/libsodium/1.0.8/20161214075415/lib/ (0x00007f8f47ea4000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f8f47c9c000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f8f47a7e000)
        => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/ (0x00007f8f47704000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f8f47406000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f8f47061000)
        => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/ (0x00007f8f46e4b000)
        /hab/pkgs/core/glibc/2.22/20160612063629/lib64/ (0x0000560174705000)

Habitat guarantees that the same binaries that were linked at build time, will be linked at run time. Even better, it just happens and you don't need a separate management layer to enforce this.

This is how a Habitat package provides portability. Installing and running a Habitat package brings all of its dependencies with it. They do not all live in the same .hart package, but your application's .hart package includes the necessary metadata to let Habitat know what other packages to download and install from the depot. These dependencies may or may not already exist on the node with varying versions, but it doesn't matter because a Habitat application only relies on the packages that reside within Habitat. And even within the Habitat environment, you can have multiple applications that rely on the same dependency but different versions, and these applications can run side by side.

The challenge of portability and the Habitat studio

So when you are building a Habitat plan into a hart package, what keeps that build from pulling dependencies from the default Linux lib directories? What if you do not specify these dependencies in your plan and the build links them from elsewhere? That would break our portability. If your application links against libraries that magically resolve from a non-Habitat controlled location, then there is no guarantee that those dependencies will land on the node when you install your application elsewhere. Habitat constructs a build environment called a "studio" to protect against this exact scenario.

The Habitat studio is a clean room environment. The only libraries you will find in this environment are those managed by Habitat. You will find /lib and /usr/lib totally empty here:

[37][default:/src:0]# ls /lib -la
total 8
drwxr-xr-x  2 root root 4096 Dec 24 22:46 .
drwxr-xr-x 26 root root 4096 Dec 24 22:46 ..
lrwxrwxrwx  1 root root    3 Dec 24 22:46 lib -> lib
[38][default:/src:0]# ls /usr/lib -la
total 8
drwxr-xr-x 2 root root 4096 Dec 24 22:46 .
drwxr-xr-x 9 root root 4096 Dec 24 22:46 ..
lrwxrwxrwx 1 root root    3 Dec 24 22:46 lib -> lib

Habitat installs several packages into the studio including several familiar Linux utilities and build tools. Every utility and library that Habitat loads into the studio is a Habitat package itself.

[1][default:/src:0]# ls /hab/pkgs/core/
acl       cacerts    gawk      gzip            libbsd         mg       readline    vim
attr      coreutils  gcc-libs  hab             libcap         mpfr     sed         wget
bash      diffutils  glibc     hab-backline    libidn         ncurses  tar         xz
binutils  file       gmp       hab-plan-build  linux-headers  openssl  unzip       zlib
bzip2     findutils  grep      less            make           pcre     util-linux

This can be a double edged sword. On the one hand it protects us from undeclared dependencies being missed by our package. The darker side is that your plan may be building source that has build scripts that expect dependencies or other build tools to exist in their "usual" homes. If you are unfamiliar with how the standard Linux linker scans for dependencies, discovering what is wrong with your build may be less than obvious.

The rules of dependency scanning

So before we go any further, let's take a look at how the linker works and how Habitat configures its build environment to influence where it finds dependencies at both build and run time. The linker looks at a combination of environment variables, CLI options and well known directory paths, in a strict order of precedence. Here is a direct quote from the man page for ld (the linker binary):

The linker uses the following search paths to locate required shared libraries:

1. Any directories specified by -rpath-link options.
2. Any directories specified by -rpath options.  The difference between -rpath and -rpath-link is that directories specified by -rpath options are included in the executable and used at runtime, whereas the -rpath-link option is only effective at link time. Searching -rpath in this way is only supported by native linkers and cross linkers which have been configured with the --with-sysroot option.
3. On an ELF system, for native linkers, if the -rpath and -rpath-link options were not used, search the contents of the environment variable "LD_RUN_PATH".
4. On SunOS, if the -rpath option was not used, search any directories specified using -L options.
5. For a native linker, search the contents of the environment variable "LD_LIBRARY_PATH".
6. For a native ELF linker, the directories in "DT_RUNPATH" or "DT_RPATH" of a shared library are searched for shared libraries needed by it. The "DT_RPATH" entries are ignored if "DT_RUNPATH" entries exist.
7. The default directories, normally /lib and /usr/lib.
8. For a native linker on an ELF system, if the file /etc/ld.so.conf exists, the list of directories found in that file.

At build time Habitat sets the $LD_RUN_PATH variable to the library path of every dependency that the building plan depends on. We can see this in Habitat's build output when we build a Habitat plan:

zeromq: Setting LD_RUN_PATH=/hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib:/hab/pkgs/core/glibc/2.22/20160612063629/lib:/hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib:/hab/pkgs/core/libsodium/1.0.8/20161214075415/lib

This means that at run time, when you run your application built by Habitat, it will load from the "habitized" packaged dependencies. This is because setting $LD_RUN_PATH influences how the ELF metadata is constructed and causes it to point to these Habitat package paths.
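If you want to verify this on a built artifact yourself, readelf (part of binutils) can dump the dynamic section and show the RPATH/RUNPATH entry that LD_RUN_PATH produced. The libzmq.so file name here is just an example of whatever the plan actually built:

readelf -d /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib/libzmq.so | grep -Ei 'runpath|rpath'
# expect a RUNPATH entry listing the /hab/pkgs/.../lib directories from LD_RUN_PATH above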

Patching pre-built binaries

Habitat not only allows one to build packages from source but also supports "binary-only" packages. These are packages that are made up of binaries downloaded from some external binary repository or distribution site. These are ideal for closed-source software or software that may be too complicated or takes too long to build. However, Habitat cannot control the linking process for these binaries. If you try to execute these binaries in a Habitat studio, you may see runtime failures.

The dotnet-core package is a good example of this. I ended up giving up on building that plan from source and instead just downloaded the binaries from the public .NET distribution site. Running ldd on the dotnet binary, we see:

[8][default:/src:0]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet
/hab/pkgs/core/glibc/2.22/20160612063629/bin/ldd: line 117:
No such file or directory

Well that's not very clear. This isn't even able to show us any of the linked dependencies because the glibc interpreter the ELF metadata says to use is not where the metadata says it is:

[9][default:/src:1]# file /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet
ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked,
interpreter /lib64/, for GNU/Linux 2.6.32,
BuildID[sha1]=db256f0ac90cd718d8ec2d157b29437ea8bcb37f, not stripped

That interpreter path under /lib64/ does not exist. We can manually fix this, even after a binary is built, with a tool called patchelf. We declare a build dependency on core/patchelf in our plan and then we can use the following command:

find -type f -name 'dotnet' \
  -exec patchelf --interpreter "$(pkg_path_for glibc)/lib/" {} \;

Now let's try ldd again:

[16][default:/src:130]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225151837/bin/dotnet
        (0x00007ffe421eb000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007fcb0b2cc000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007fcb0b0af000)
        => not found
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007fcb0adb1000)
        => not found
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007fcb0aa0d000)
        /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007fcb0b4d0000)

This is better. It now links our glibc dependencies to the Habitat packaged glibc binaries, but there are still a couple dependencies that the linker could not find. At least now we can see more clearly what they are.

There is another argument we can pass to patchelf, --set-rpath, which can edit the ELF metadata as if $LD_RUN_PATH had been set when the binary was built:

find -type f -name 'dotnet' \
  -exec patchelf --interpreter "$(pkg_path_for glibc)/lib/" --set-rpath "$LD_RUN_PATH" {} \;
find -type f -name '*.so*' \
  -exec patchelf --set-rpath "$LD_RUN_PATH" {} \;

So we set the rpath to the $LD_RUN_PATH set in the Habitat environment. We will also make sure to do this for each *.so file in the directory where we downloaded the distributable binaries. Finally ldd now finds all of our dependencies:

[19][default:/src:130]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225152801/bin/dotnet
        (0x00007fff3e9a4000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f1e68834000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f1e68617000)
        => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/ (0x00007f1e6829d000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f1e67f9f000)
        => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/ (0x00007f1e67d89000)
        => /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f1e679e5000)
        /hab/pkgs/core/glibc/2.22/20160612063629/lib/ (0x00007f1e68a38000)

Every dependency, down to something as low level as glibc, is a Habitat packaged binary declared in our own application's (dotnet-core here) dependencies. This should be fully portable across any 64 bit Linux distribution.

Creating a Docker container Host on Windows Nano Server with Chef by Matt Wrock

This week Microsoft launched the release of Windows Server 2016 along with its ultra light headless deployment option - Nano Server. The Nano server images are many times smaller than what we have come to expect from a Windows server image. A Nano Vagrant box is just a few hundred megabytes. These machines also boot up VERY quickly and require fewer updates and reboots.

Earlier this year, I blogged about how to run a Chef client on Windows Nano Server. Things have come a long way since then and this post serves as an update. Now that the RTM Nano bits are out, we will look at:

  • How to get and run a Nano server
  • How to install the chef client on Windows Nano
  • How to use Test-Kitchen and Inspec to test your Windows Nano Server cookbooks.

The sample cookbook I'll be demonstrating here will highlight some of the new Windows container features in Nano server. It will install docker and allow you to use your Nano server as a container host where you can run, manipulate and inspect Windows containers from any Windows client.

How to get Windows Nano Server

You have a few options here. One thing to understand about Windows Nano is that there is no separate Windows Nano ISO. Deploying a Nano server involves extracting a WIM and some powershell scripts from a Windows 2016 Server ISO. You can then use those scripts to generate a .VHD file from the WIM, or you can use the WIM to deploy Nano to a bare metal server. There are some shortcuts available if you don't want to mess with the scripts and prefer a more instantly gratifying experience. Let's explore these scenarios.

Using New-NanoServerImage to create your Nano image

If you mount the server 2016 ISO (free evaluation versions available here), you will find a "NanoServer\NanoServerImageGenerator" folder containing a NanoServerImageGenerator powershell module. This module's core function is New-NanoServerImage. Here is an example of using it to produce a Nano Server VHD:

Import-Module NanoServerImageGenerator.psd1
$adminPassword = ConvertTo-SecureString "vagrant" -AsPlainText -Force

New-NanoServerImage `
  -MediaPath D:\ `
  -BasePath .\Base `
  -TargetPath .\Nano\Nano.vhdx `
  -ComputerName Nano `
  -Package @('Microsoft-NanoServer-DSC-Package','Microsoft-NanoServer-IIS-Package') `
  -Containers `
  -DeploymentType Guest `
  -Edition Standard `
  -AdministratorPassword $adminPassword

This will generate a Nano Hyper-V capable image file of a Container/DSC/IIS ready Nano server. You can read more about the details and other options of this function in this Technet article.

Direct EXE/VHD download

As I briefly noted above, you can download evaluation copies of Windows Server 2016. Instead of downloading a full multi gigabyte Windows ISO, you could choose the exe/vhd download option. This will download an exe file that will extract a pre-made vhd. You can then create a new Hyper-V VM from the vhd. With that VM, just log in to the Nano console to set the administrative password and you are good to go.


This is my installation method of choice. I use a packer template to automate the download of the 2016 server ISO, the generation of the image file and finally package the image both for Hyper-V and VirtualBox Vagrant providers. I keep the image publicly available on Atlas via mwrock/WindowsNano. The advantage of these images is that they are fully patched (key for docker to work with Windows containers), work with VirtualBox and enable file sharing ports so you can map a drive to Nano.

Vagrant Nano bug

One challenge of working with Nano Server and cross platform automation tools such as vagrant is that Nano exposes a Powershell.exe with no -EncodedCommand argument, which many cross platform WinRM libraries leverage to invoke remote Powershell on a Windows box.

Shawn Neal and I rewrote the WinRM ruby gem to use PSRP (powershell remoting protocol) to talk powershell and allow it to interact with Nano server. This has been integrated with all the Chef based tools and I will be porting it to Vagrant soon. In the meantime, a "vagrant up" will hang after creating the VM. Know that the VM is in fact fully functional and connectable. I'll mention a hack you can apply to get Test-Kitchen's vagrant driver working later in this post.

Connecting to Windows Nano Server

Once you have a Nano server VM up and running, you will probably want to actually use it. Note: there is no RDP available here. You can connect to Nano and run commands either using native Powershell Remoting from a Windows box (powershell on Linux does not yet support remoting) or using knife-windows' "knife winrm" from Windows, Mac or Linux.

Powershell Remoting:

$ip = "<ip address of Nano Server>"

# You only need to add the trusted host once
Set-Item WSMan:\localhost\Client\TrustedHosts $ip
# use username and password "vagrant" on the mwrock vagrant box
Enter-PSSession -ComputerName $ip -Credential Administrator


Knife winrm:

# mwrock vagrant boxes have a username and password "vagrant"
# add "--winrm-port 55985" for local VirtualBox
knife winrm -m <ip address of Nano Server> "your command" --winrm-user vagrant --winrm-password vagrant

Note that knife winrm expects "cmd.exe" style commands by default. Use "--winrm-shell powershell" to send powershell commands.

Installing Chef on Windows Nano Server

Quick tip: Do not try to install a chef client MSI. That will not work.

Windows Nano server jettisons many of the APIs and subsystems we have grown accustomed to in order to achieve a much more compact and cloud friendly footprint. This includes the removal of the MSI subsystem. Nano server does support the newer appx packaging system currently best known as the format for packaging Windows Store Apps. With Nano Server, new extensions have been added to the appx model to support what is now known as "Windows Server Applications" (aka WSAs).

At Chef, we have added the creation of appx packages into our build pipelines but these are not yet exposed by our Artifactory and Bintray fed Omnitruck delivery mechanism. That will happen, but in the meantime, I have uploaded one to a public AWS S3 bucket. You can grab the current client (as of this post) here. To install this .appx file (note: if using Test-Kitchen, this is all done automatically for you):

  1. Either copy the .appx file via a mapped drive or just download it from the Nano server using this powershell function.
  2. Run "Add-AppxPackage -Path <path to .appx file>"
  3. Copy the appx install to c:\opscode\chef:
  $rootParent = "c:\opscode"
  $chef_omnibus_root - Join-Path $rootParent "chef"
  if(!(Test-Path $rootParent)) {
    New-Item -ItemType Directory -Path $rootParent

  # Remove old version of chef if it is here
  if(Test-Path $chef_omnibus_root) {
    Remove-Item -Path $chef_omnibus_root -Recurse -Force

  # copy the appx install to the omnibus_root. There are serious
  # ACL related issues with running chef from the appx InstallLocation
  # This is temporary pending a fix from Microsoft.
  # We can eventually just symlink
  $package = (Get-AppxPackage -Name chef).InstallLocation
  Copy-Item $package $chef_omnibus_root -Recurse

The last item is a bit unfortunate but temporary. Microsoft has confirmed this to be an issue with running simple zipped appx applications. The ACLs on the appx install root are seriously restricted and you cannot invoke the chef client from that location. Until this is fixed, you need to copy the files from the appx location to somewhere else. We'll just copy to the well known Chef default location on Windows c:\opscode\chef.

Running Chef

With the chef client installed, it's easiest to work with chef when it's on your path. To add it run:

$env:path += ";c:\opscode\chef\bin;c:\opscode\chef\embedded\bin"

# For persistent use, will apply even after a reboot.
setx PATH $env:path /M

Now you can run the chef client just as you would anywhere else. Here I'll check the version using knife:

C:\dev\docker_nano_host [master]> knife winrm -m "chef-client -v" --winrm-user vagrant --winrm-password vagrant
Chef: 12.14.60

Not all resources may work

I have to include this disclaimer. Nano is a very different animal than our familiar 2012 R2. I am confident that the newly launched Windows Server 2016 should work just as 2012 R2 does today, but Nano has had APIs stripped away that we have previously leveraged heavily in Chef and Inspec. One example is Get-WmiObject. This cmdlet is not available on Nano Server, so any usage that depends on it will fail.
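Where code does need that kind of system information on Nano, the CIM cmdlets are the usual substitute. This is just an illustrative equivalent, not a description of how Chef itself was patched:

# Fails on Nano Server - the WMI cmdlets are not present
Get-WmiObject -Class Win32_OperatingSystem

# Works on Nano Server - the CIM cmdlets are available there
Get-CimInstance -ClassName Win32_OperatingSystem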

Most of the crucial areas surrounding installing and invoking chef are patched and tested. However, there may be resources that either have not yet been patched or will simply never work. The windows_package resource is a good example. It's used to install MSIs and EXE installers, which are not supported on Nano.

Test-Kitchen and Inspec on Nano

The WinRM rewrite to leverage PSRP allows our remote execution ecosystem tools to access Windows Nano Server. We have also overhauled our mixlib-install gem to use .Net core APIs (the .Net runtime supported on Nano) for the chef provisioners. With those changes in place, Test-Kitchen can install and run Chef, and Inspec can test resources on your Nano instances.

There are a few things to consider when using Test-Kitchen on Windows Nano:

Specifying the Chef appx installer

As I mentioned above, the "OmniTruck" system is not yet serving appx packages to Nano. However, you can tell Test-Kitchen in your .kitchen.yml to use a specific .msi or .appx installer. Here is some example yaml for running Test-Kitchen with Nano:

driver:
  name: vagrant

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: windows-nano
    driver_config:
      box: mwrock/WindowsNano

Inspec requires no configuration changes.

Working around Vagrant hangs

Until I refactor Vagrant's winrm communicator, it cannot talk powershell with Windows Nano. Because Test-Kitchen and Inspec talk to Nano directly via the newly PSRP supporting WinRM ruby gem, they make Vagrant's limitation nearly unnoticeable. However, the RTM Nano bits exacerbated the Vagrant bug, causing it to hang when it does its initial winrm auth check. This can unfortunately hang your kitchen create. You can work around this by applying a simple "hack" to your vagrant install:

Update C:\HashiCorp\Vagrant\embedded\gems\gems\vagrant-1.8.5\plugins\communicators\winrm\communicator.rb (adjusting the vagrant gem version number as necessary) and change:

result = Timeout.timeout(@machine.config.winrm.timeout) do


result = Timeout.timeout(@machine.config.winrm.timeout) do

This should get your test-kitchen runs unblocked.

Running on Azure hosted Nano images

If you prefer to run Test-Kitchen and Inspec against an Azure hosted VM instead of vagrant, use Stuart Preston's excellent kitchen-azurerm driver:

driver:
  name: azurerm

driver_config:
  subscription_id: 'your subscription id'
  location: 'West Europe'
  machine_size: 'Standard_F1'

platforms:
  - name: windowsnano
    driver_config:
      image_urn: MicrosoftWindowsServer:WindowsServer:2016-Nano-Server-Technical-Preview:latest

See the kitchen-azurerm readme for details regarding azure authentication configuration. As of the date of this post, RTM images are not yet available but that's probably going to change very soon. In the meantime, use TP5.

Using Chef to Configure a Docker host

One of the exciting new features of Windows Server 2016 and Nano Server is their ability to host Windows containers. They can do this using the same Docker API we are familiar with from Linux containers. You could walk through the official instructions for setting this up, or you could just have Chef do this for you.

Updating the Nano server

Note that in order for this to work on RTM Nano images, you must install the latest Windows updates. My vagrant boxes come fully patched and ready, but if you are wondering how to install updates on a Nano server, here is how:

$sess = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession
Invoke-CimMethod -InputObject $sess -MethodName ApplyApplicableUpdates

Then just reboot and you are good.

A sample cookbook to install and configure the Docker service

I converted the above mentioned instructions for installing Docker and configuring the service into a Chef cookbook recipe. It's fairly straightforward:

powershell_script 'install Nuget package provider' do
  code 'Install-PackageProvider -Name NuGet -Force'
  not_if '(Get-PackageProvider -Name Nuget -ListAvailable -ErrorAction SilentlyContinue) -ne $null'
end

powershell_script 'install nano container package' do
  code 'Install-Module -Name xNetworking -Force'
  not_if '(Get-Module xNetworking -list) -ne $null'
end

zip_path = "#{Chef::Config[:file_cache_path]}/"
docker_config = File.join(ENV["ProgramData"], "docker", "config")

remote_file zip_path do
  source ""
  action :create_if_missing
end

dsc_resource "Extract Docker" do
  resource :archive
  property :path, zip_path
  property :ensure, "Present"
  property :destination, ENV["ProgramFiles"]
end

directory docker_config do
  recursive true
end

file File.join(docker_config, "daemon.json") do
  content "{ \"hosts\": [\"tcp://\", \"npipe://\"] }"
end

powershell_script "install docker service" do
  code "& '#{File.join(ENV["ProgramFiles"], "docker", "dockerd")}' --register-service"
  not_if "Get-Service docker -ErrorAction SilentlyContinue"
end

service 'docker' do
  action [:start]
end

dsc_resource "Enable docker firewall rule" do
  resource :xfirewall
  property :name, "Docker daemon"
  property :direction, "inbound"
  property :action, "allow"
  property :protocol, "tcp"
  property :localport, [ "2375" ]
  property :ensure, "Present"
  property :enabled, "True"
end

This downloads the appropriate docker binaries, installs the docker service and configures it to listen on port 2375.

To validate that all actually worked we have these Inspec tests:

describe port(2375) do
  it { should be_listening }
end

describe command("& '$env:ProgramFiles/docker/docker' ps") do
  its('exit_status') { should eq 0 }
end

describe command("(Get-service -Name 'docker').status") do
  its(:stdout) { should eq("Running\r\n") }
end

If this all passes, we know our server is listening on the expected port and that docker commands work.

Converge and Verify

So let's run these with kitchen verify:

C:\dev\docker_nano_host [master]> kitchen verify
-----> Starting Kitchen (v1.13.0)
-----> Creating <default-windows-nano>...
       Bringing machine 'default' up with 'hyperv' provider...
       ==> default: Verifying Hyper-V is enabled...
       ==> default: Starting the machine...
       ==> default: Waiting for the machine to report its IP address...
           default: Timeout: 240 seconds
           default: IP:
       ==> default: Waiting for machine to boot. This may take a few minutes...
           default: WinRM address:
           default: WinRM username: vagrant
           default: WinRM execution_time_limit: PT2H
           default: WinRM transport: negotiate
       ==> default: Machine booted and ready!
       ==> default: Machine not provisioned because `--no-provision` is specified.
       [WinRM] Established

       Vagrant instance <default-windows-nano> created.
       Finished creating <default-windows-nano> (1m15.86s).
-----> Converging <default-windows-nano>...


  Port 2375
     ✔  should be listening
  Command &
     ✔  '$env:ProgramFiles/docker/docker' ps exit_status should eq 0
  Command (Get-service
     ✔  -Name 'docker').status stdout should eq "Running\r\n"

Summary: 3 successful, 0 failures, 0 skipped
       Finished verifying <default-windows-nano> (0m11.94s).

OK, our Docker host is ready.

Creating and running a Windows container

First, if you are running Nano on VirtualBox, you need to add a port forwarding rule for port 2375 (a Vagrantfile sketch for this follows the install snippet below). Also note that you will need the docker client installed on the machine where you intend to run docker commands. I'm running them from my Windows 10 laptop. To install docker on Windows 10:

Invoke-WebRequest "" -OutFile "$env:TEMP\" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\" -DestinationPath $env:ProgramFiles

$env:path += ";c:\program files\docker"
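
Regarding the VirtualBox port forwarding rule mentioned above, here is a minimal sketch of what that might look like in a Vagrantfile. The guest and host port values are illustrative and simply assume the docker daemon is listening on 2375:

# Vagrantfile (sketch): forward the docker daemon port from the Nano guest
# so a docker client on the host can reach tcp://localhost:2375
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 2375, host: 2375
end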

No matter what platform you are running on, once you have the docker client, you need to tell it to use your Nano server as the docker host. Simply set the DOCKER_HOST environment variable to "tcp://<ipaddress of server>:2375".

So now let's download a nanoserver container image from the docker hub repository:

C:\dev\NanoVHD [update]> docker pull microsoft/nanoserver
Using default tag: latest
latest: Pulling from microsoft/nanoserver
5496abde368a: Pull complete
Digest: sha256:aee7d4330fe3dc5987c808f647441c16ed2fa1c7d9c6ef49d6498e5c9860b50b
Status: Downloaded newer image for microsoft/nanoserver:latest

Now let's run a command... heck, let's just launch an interactive powershell session inside the container with:

docker run -it microsoft/nanoserver powershell

Here is what we get:

Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS C:\> ipconfig

Windows IP Configuration

Ethernet adapter vEthernet (Temp Nic Name):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::2029:a119:3e4f:851a%15
   IPv4 Address. . . . . . . . . . . :
   Subnet Mask . . . . . . . . . . . :
   Default Gateway . . . . . . . . . :
PS C:\>

Ahhwwww yeeeeaaaahhhhhhh.

What's next?

So we have made a lot of progress over the last few months, but the story is not entirely complete. We still need to finish knife bootstrap windows winrm and plug in our Azure extension.

Please let us know what works and what does not work. I personally want to see Nano server succeed and of course we intend for Chef to provide a positive Windows Nano Server configuration story.

Released WinRM Gem 2.0 with a cross-platform, open source PSRP client implementation by Matt Wrock

Today we released the gems WinRM 2.0, winrm-fs 1.0 and winrm-elevated 1.0. I first talked about this work in this post and have since performed extensive testing (though I have confidence the first bug will be reported soon) and made several improvements. Today it's released and available to any consuming application that wants to use it, and we should see a Test-Kitchen release in the near future upgrading its winrm gems. Up next will be knife-windows and vagrant.
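
If you want to try the new gems in your own project, a Gemfile entry along these lines should pull them in (the pins simply reflect the versions released today):

gem 'winrm', '~> 2.0'
gem 'winrm-fs', '~> 1.0'
gem 'winrm-elevated', '~> 1.0'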

This is a near rewrite of the WinRM gem. It had gotten crufty over the years, and its API and internal structure needed some attention. This release fixes several bugs and brings some big improvements. You should read the readme to catch up on the changes, but here is how it looks in a nutshell (or an IRB shell):

mwrock@ubuwrock:~$ irb
2.2.1 :001 > require 'winrm'
 => true
2.2.1 :002 > opts = {
2.2.1 :003 >       endpoint: '',
2.2.1 :004 >       user: 'vagrant',
2.2.1 :005 >       password: 'vagrant'
2.2.1 :006?>   }
 => {:endpoint=>"", :user=>"vagrant", :password=>"vagrant"}
2.2.1 :007 > conn = WinRM::Connection.new(opts); nil
 => nil
2.2.1 :008 > conn.shell(:powershell) do |shell|
2.2.1 :009 >     shell.run('$PSVersionTable') do |stdout, stderr|
2.2.1 :010 >           STDOUT.print stdout
2.2.1 :011?>         STDERR.print stderr
2.2.1 :012?>       end
2.2.1 :013?>   end; nil

Name                           Value
----                           -----
PSVersion                      4.0
WSManStackVersion              3.0
CLRVersion                     4.0.30319.34209
BuildVersion                   6.3.9600.17400
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.2

Note this is run from an Ubuntu 14.04 host targeting a Windows 2012R2 VirtualBox VM. No Windows host required.

100% Ruby PSRP client implementation

So for the four people reading this that know what this means: yaaay! woohoo! you go girl!! we talk PSRP now. yo.

No...Really...why should I care about this?

I'll be honest, there are tons of scenarios where PSRP will not make any difference, but here are some tangible points where it undoubtedly makes things better:

  • File copy can be orders of magnitude faster. If you use the winrm-fs gem to copy files to a remote windows machine, you may see transfer speeds as much as 30x faster. This will be more noticeable transferring files larger than several kilobytes. For example, the PSRP specification PDF - about 4 and a half MB - takes about 4 seconds with this release vs 2 minutes with the previous release on my work laptop. For details as to why PSRP is so much faster, see this post. A short winrm-fs upload sketch also follows this list.
  • The WinRM gems can talk powershell to Windows Nano Server. The previous WinRM gem is unable to execute powershell commands against a Windows Nano server. If you are a test-kitchen user and would like to see this in action, clone the repo and:
bundle install
bundle exec kitchen verify

This will download my WindowsNanoDSC vagrant box, provision it, converge a DSC file resource and test its success with Pester. You should notice that not only does the nano server's .box file download from the internet MUCH faster, it boots and converges several minutes faster than its Windows 2012R2 cousin.

Stay tuned for Chef based kitchen converges on Windows Nano!

  • You can now execute multiple commands that operate in the same scope (runspace). This means you can share variables and imported commands from call to call because calls share the same powershell runspace, whereas before every call ran in a separate powershell.exe process. The winrm-fs gem is an example of how this is useful (and a small runspace-sharing sketch follows this list):
def stream_upload(input_io, dest)
  read_size = ((max_encoded_write - dest.length) / 4) * 3
  chunk, bytes = 1, 0
  buffer = ''
  shell.run(<<-EOS)
    $to = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath("#{dest}")
    $parent = Split-Path $to
    if(!(Test-path $parent)) { mkdir $parent | Out-Null }
    $fileStream = New-Object -TypeName System.IO.FileStream -ArgumentList @($to, [System.IO.FileMode]::Create)
  EOS

  while input_io.read(read_size, buffer)
    bytes += (buffer.bytesize / 3 * 4)
    shell.run(stream_command([buffer].pack(BASE64_PACK)))
    logger.debug "Wrote chunk #{chunk} for #{dest}" if chunk % 25 == 0
    chunk += 1
    yield bytes if block_given?
  end
  buffer = nil # rubocop:disable Lint/UselessAssignment
  shell.run('$fileStream.Dispose()')

  [chunk - 1, bytes]
end

def stream_command(encoded_bytes)
  <<-EOS
    $bytes = [Convert]::FromBase64String('#{encoded_bytes}')
    $fileStream.Write($bytes, 0, $bytes.length)
  EOS
end

Here we issue some powershell to create a FileStream, then in ruby we iterate over an IO object and write to that FileStream instance as many times as we need, and then dispose of the stream when done. Before, that FileStream would be gone on the next call and instead we'd have to reopen the file on each trip.

  • Non-administrator users can execute commands. Because the former WinRM implementation was based on winrs, a user had to be an administrator in order to authenticate. Now non-admin users, as long as they belong to the correct remoting users group, can execute remote commands.
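
To make the faster file copy in the first bullet above concrete, here is a minimal sketch of uploading a file with the winrm-fs gem. The endpoint, credentials and paths are placeholders, not values from this post:

require 'winrm'
require 'winrm-fs'

# placeholder connection details - point these at your own host
opts = {
  endpoint: 'http://nano-host:5985/wsman',
  user: 'vagrant',
  password: 'vagrant'
}

conn = WinRM::Connection.new(opts)
file_manager = WinRM::FS::FileManager.new(conn)

# streams the local file to the remote path in base64 chunks over PSRP
file_manager.upload('psrp_spec.pdf', 'c:/docs/psrp_spec.pdf')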

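And here is a small sketch of the shared runspace behavior described above: a variable set in one call is still visible on the next call because both run in the same powershell runspace. Again, the connection details are placeholders:

require 'winrm'

# placeholder connection details
conn = WinRM::Connection.new(
  endpoint: 'http://nano-host:5985/wsman',
  user: 'vagrant',
  password: 'vagrant'
)

conn.shell(:powershell) do |shell|
  # state created here lives in the runspace...
  shell.run('$greeting = "hello from the first call"')

  # ...and is still there on the next call
  result = shell.run('$greeting')
  puts result.stdout
end
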
This is just the beginning

In and of itself, a WinRM release may not be that exciting, but it lays the groundwork for some great experiences. I can't wait to explore testing infrastructure code on Windows Nano further, and, sure, sane file transfer rates sound pretty great too.