I'm so excited. I'm about to join Chef and I think I like it! by Matt Wrock

A little over a year ago, a sparkly devops princess named Jennifer Davis came to my workplace at CenturyLink Cloud and taught me a thing or two about this thing she called Chef. I had been heads down for the previous 3 years trying to automate windows. While wandering in the wilderness of windows automation I had previously bumped into Chocolatey, where I met my good friend Rob Reynolds, who told me stories about these tools called Puppet, Chef, and Vagrant. Those were the toys that the good little girls and boys on the other side of the mountain (i.e. linux land) used for their automation. Those tools sounded so awesome, but occupied with a day job at Microsoft that was not centered around automation and with my nights and weekends spent developing Boxstarter, I had yet to dabble in them myself.

So when the opportunity came to join a cloud computing provider, CenturyLink Cloud, and focus on data center automation for a mixed windows and linux environment, I jumped at it. That's where Jennifer entered the scene and opened a portal door leading from my desert to a land of automation awe and wonder, and oh the things I saw. Entire communities built, thriving and fighting, all about transforming server configuration into lines of code. To these people, "next" and "finish" were commands and not buttons. These were my people and with them I wanted to hang.

There were windows folk I spied through the portal. Jeffrey Snover had been there for quite some time in full robe and sandals, pulling windows folk kicking and screaming through to the other side of the portal. Steven Murawski was there too, translating Jeffrey's teachings and pestering Jeffrey's disciples to write tests because that's how it's done. Rob had been sucked through the portal door at least a year prior and was working for Puppet. Hey look, he's carrying a Mac just like all the others. Is that some sort of talisman? Is it the shared public key needed to unveil their secrets? I don't know...but it's so shiny.

So what the heck, I thought. Tonight we'll put all other things aside, and I followed Jennifer through the portal and never looked back. There was much ruby to be learned and written. It's not that different from powershell and has many similarities to C#, but it is indeed its own man and would occasionally hit back when I cursed it. One thing I found was that even more fun than using Chef to automate CenturyLink data centers was hacking on and around the automation tooling itself: Chef, Vagrant, Ruby's WinRM implementation, Test-Kitchen, etc.

There was so much work to be done here to build a better bridge between windows and the automation these tools had to offer. This was so exciting, and Chef seemed to be jumping all in to this effort as if proclaiming I know, I know, I know, I know, I want you, I want you. Before long Steven took up residence with Chef, and Snover seemed to be coming over for regular visits where he was working with Chef to feed multitudes on just a couple loaves of DSC. I helped a little here and there along with a bunch of other smart and friendly people.

Well, meanwhile on the CenturyLink farm, we made a lot of progress and added quite a bit of automation to building out our data centers. Recently things have been winding down for me on that effort and it seemed a good time to assess my progress and examine new possibilities. Then it occurred to me: if Jennifer can be a sparkly devops princess at Chef, could I be one too?

We may not know the answer to that question yet, but I intend to explore it and enter into the center sphere of its celestial force.

So I'll see my #cheffriends Monday. I'm so excited and I just can't hide it!

A dirty check for Chef cookbooks: testing everything that has changed and only when it is changed by Matt Wrock

I was recently pinged by a #cheffriend of mine, Alex Vinyar. A few months back, I sent him a bit of code that I wrote at CenturyLink Cloud that runs on our build agents to determine which cookbooks to test after a cookbook commit is pushed to source control. He just mentioned to me that he has gotten some positive feedback on that code, so I thought I'd blog about the details for others who may be interested. This can also be considered a follow-up to my post from almost a year ago on cookbook dependency management, where I shared some theory I was pondering at the time on this subject. This post covers how I followed up on that theory with working code.

I'm going to cover two key strategies:

  1. Testing all cookbooks that depend on a changed cookbook
  2. Ensuring that once a cookbook is tested and succeeds, it is never tested again as long as it and its dependencies do not change

I'll share the same code I shared with Alex (and more) in a bit.

Not enough to test just the cookbook changed

Let's consider this scenario: You manage infrastructure for a complex distributed application. You maintain about a baker's dozen cookbooks - see what I did there?...baker...chef...look at me go girl! You have just committed and pushed a change to one of your lower level library cookbooks that exposes functions used by several of your other cookbooks. We'd expect your pipeline to run tests on that library cookbook, but is that enough? What if the upstream cookbooks that depend on the library cookbook "blow up" when they converge with the new changes? This is absolutely possible, and not just for library cookbooks.

I wanted a way for a cookbook commit to trigger not only its own chefspecs and test-kitchen tests but also the tests of other cookbooks that depend on it.

Kitchen tests provide slow but valuable feedback, don't run more than needed

Here is another scenario: You are now testing the entire dependency tree touched by a change...yay! But failure happens and your build fails. However the failure may not have occurred at the root of your tree (the cookbook that actually changed). Here are some possibilities:

  1. Maybe there was some transient network glitch that caused an upstream cookbook to fail
  2. Gem dependency changes outside of your own managed cookbook collection (you know - the dark side of semver - minor version bumps that are really "major")
  3. Your build has batched more than one root cookbook change. One breaks but the others pass

So an hour's worth (or more) of testing 10 cookbooks results in a couple broken kitchen tests. You make some fixes, push again and have to wait for the entire set to build/test all over again.

I have done this before. Just look at my domain name!

However, many of these tests may now be unnecessary. Some of the tests that succeeded in the failed batch may be unaffected by the fix.

Finding dependencies

So here is the class we use at CenturyLink Cloud to determine when one cookbook has been affected by a recent change. This is what I sent to Alex. I'll walk you through some of it later.

require 'berkshelf'
require 'chef'
require 'digest/sha1'
require 'json'

module Promote
  class Cookbook
    def initialize(cookbook_name, config)
      @name = cookbook_name
      @config = config
      @metadata_path = File.join(path, 'metadata.rb')
      @berks_path = File.join(path, 'Berksfile')
    end

    # List of all dependent cookbooks including transitive dependencies
    # @return [Hash] a hash keyed on cookbook name containing versions
    def dependencies
      @dependencies ||= get_synced_dependencies_from_lockfile
    end

    # List of cookbook dependencies explicitly specified in metadata.rb
    # @return [Hash] a hash keyed on cookbook name containing versions
    def metadata_dependencies
      metadata.dependencies
    end

    def version(version = nil)
      metadata.version(version)
    end

    def version=(version)
      version_line = raw_metadata[/^\s*version\s.*$/]
      current_version = version_line[/('|").*("|')/].gsub(/('|")/, '')
      if current_version != version.to_s
        new_version_line = version_line.gsub(current_version, version.to_s)
        new_content = raw_metadata.gsub(version_line, new_version_line)
        save_metadata(new_content)
      end
    end

    # Stamp the metadata with a SHA1 of the last git commit hash in a comment
    # @param [String] commit SHA1 hash of commit to stamp
    def stamp_commit(commit)
      commit_line = "#sha1 '#{commit}'"
      new_content = raw_metadata.gsub(/#sha1.*$/, commit_line)
      if new_content[/#sha1.*$/].nil?
        new_content += "\n#{commit_line}"
      end
      save_metadata(new_content)
    end

    def raw_metadata
      @raw_metadata ||= File.read(@metadata_path)
    end

    def metadata
      @metadata ||= get_metadata_from_file
    end

    def path
      File.join(@config.cookbook_directory, @name)
    end

    # Performs a berks install if never run, otherwise a berks update
    def sync_berksfile(update = false)
      return if berksfile.nil?

      update = false unless File.exist?("#{@berks_path}.lock")
      update ? berksfile_update : berksfile.install
    end

    # Determine if our dependency tree has changed after a berks update
    # @return [TrueClass,FalseClass]
    def dependencies_changed_after_update?
      old_deps = dependencies
      new_deps = get_dependencies_from_lockfile

      # no need for further inspection if the
      # number of dependencies has changed
      return true unless old_deps.count == new_deps.count

      old_deps.each do |k, v|
        if k != @name
          return true unless new_deps[k] == v
        end
      end

      return false
    end

    # Hash of all dependencies. Guaranteed to change if
    # dependencies or their versions change
    # @param [String] environment_cookbook_name Name of environment cookbook
    # @return [String] The generated hash
    def dependency_hash(environment_cookbook_name)
      # sync up with the latest on the server
      sync_latest_app_cookbooks(environment_cookbook_name)
      hash_src = ''
      dependencies.each do |k, v|
        hash_src << "#{k}::#{v}::"
      end
      Digest::SHA1.hexdigest(hash_src)
    end

    # sync our environment cookbook dependencies
    # with the latest on the server
    # @return [Hash] the list of cookbooks updated
    def sync_latest_app_cookbooks(environment_cookbook_name)
      result = {}

      # Get a list of all the latest cookbook versions on the server
      latest_server_cookbooks = chef_server_cookbooks(1)
      env_cookbook = Cookbook.new(environment_cookbook_name, @config)
      # iterate each cookbook in our environment cookbook
      env_cookbook.metadata_dependencies.each do |key|
        cb_key = key[0] # cookbook name

        # skip if it's this cookbook or if it's not on the server
        next if cb_key == @name || !latest_server_cookbooks.has_key?(cb_key)
        latest_version = latest_server_cookbooks[cb_key]['versions'][0]['version']
        # if the server has a later version than this cookbook,
        # we do a berks update to true up with the server
        if dependencies.keys.include?(cb_key) &&
           latest_version > dependencies[cb_key].to_s
          berksfile_update(cb_key)
          result[cb_key] = latest_version
        end
      end

      # return the updated cookbooks
      result
    end

    def chef_server_cookbooks(versions = 'all')
      Chef::Config[:ssl_verify_mode] = :verify_none
      # chef_server_url comes from the promote config class (not shown)
      rest = Chef::REST.new(@config.chef_server_url)
      rest.get_rest("cookbooks?num_versions=#{versions}")
    end

    def berksfile
      @berksfile ||= get_berksfile
    end

    def get_berksfile
      return unless File.exist?(@berks_path)
      Berkshelf.set_format :null
      Berkshelf::Berksfile.from_file(@berks_path)
    end

    def save_metadata(content)
      File.open(@metadata_path, 'w') do |out|
        out << content
      end

      # force subsequent usages of metadata to refresh from file
      @raw_metadata = nil
    end

    def get_metadata_from_file
      metadata = Chef::Cookbook::Metadata.new
      metadata.from_file(@metadata_path)
      metadata
    end

    def get_synced_dependencies_from_lockfile
      # install to make sure we have a lock file
      berksfile.install
      get_dependencies_from_lockfile
    end

    # get the version of every dependent cookbook from berkshelf
    def get_dependencies_from_lockfile
      berks_deps = berksfile.list
      deps = {}

      berks_deps.each { |dep| deps[dep.name] = dep.locked_version }
      deps
    end

    def berksfile_update(cookbooks = nil)
      cookbooks.nil? ? berksfile.update : berksfile.update(cookbooks)
      @dependencies = nil
    end
  end
end
This class is part of a larger "promote" gem that we use to promote cookbooks along our delivery pipeline. The class represents an individual cookbook and can be instantiated given an individual cookbook name and a config.

The config class (not shown here) is an extremely basic data class that encapsulates the properties of the chef environment.

Some assumptions

This code will only work if a couple of conditions are true. One is fairly common and the other not so much. Even if these conditions do not exist in your environment, you should be able to grok the logic.

Using Berkshelf

You are using berkshelf to manage your dependencies. This class uses the berkshelf API to find dependencies of other cookbooks.

Using the Environment Cookbook Pattern

We use the environment cookbook pattern to govern which cookbooks are considered part of a deployable application. You can read more about this pattern here, but an environment cookbook is very simple. It is just a metadata.rb and a Berksfile.lock. The metadata file includes a depends for each of my cookbooks responsible for infrastructure used in the app. The Berksfile.lock file uses that to build the entire tree of not only "my own" cookbooks but all third party cookbooks as well.
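To make that concrete, here is a hedged sketch of what such an environment cookbook's metadata.rb might look like. The cookbook and version names are hypothetical (I'm reusing the p_* names that show up later in this post):

```ruby
# metadata.rb of a hypothetical environment cookbook.
# It declares nothing but the application's top level cookbooks;
# Berkshelf resolves these into the full tree in Berksfile.lock.
name    'my_app_environment'
version '1.0.0'

depends 'p_rabbitmq'
depends 'p_elasticsearch'
depends 'p_haproxy'
depends 'p_win'
depends 'p_couchbase'
```

That's the whole cookbook - no recipes, no attributes - just the dependency declarations that anchor the lock file.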

The dependency hash

For the purposes of this post, the dependency_hash method of the above class is the most crucial. This hash represents all cookbook dependencies and their version at the current point in time. If I were to persist this hash (more on that in a bit) and then later regenerate it with a hash that is at all different from the persisted one, I know something has changed. I don't really care where. Any change at all anywhere in a cookbook's dependency tree means it is at risk of breaking and should be tested.
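The idea is easy to sketch on its own. Here is a simplified standalone version of the hashing (not the exact CenturyLink implementation above; I've added a sort so enumeration order cannot affect the result):

```ruby
require 'digest/sha1'

# Fingerprint an entire dependency tree. Any added, removed, or
# re-versioned cookbook anywhere in the tree changes the hash.
def dependency_hash(dependencies)
  hash_src = ''
  dependencies.sort.each do |name, version|
    hash_src << "#{name}::#{version}::"
  end
  Digest::SHA1.hexdigest(hash_src)
end
```

Two trees with identical cookbook/version pairs always hash the same; bump any single version and the hash changes.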

The "Last Known Good" environment

Every time we test a cookbook and it passes, we store its current dependency hash in a special chef environment called LKG (last known good). This is really a synthetic environment with the same structure as any chef environment but the version constraints may look a bit strange:

{
  name: "LKG_platform",
  chef_type: "environment",
  json_class: "Chef::Environment",
  cookbook_versions: {
    p_rabbitmq: "",
    p_elasticsearch: "",
    p_haproxy: "",
    p_win: "1.1.121.fa2a465e8b964a1c0752a8b59b4a5c1db9074ee5",
    p_couchbase: "",
    elasticsearch_sidec: "0.1.7.baab30d04181368cd7241483d9d86d2dffe730db",
    p_orchestration: ""
  }
}

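These decorated version strings are easy to work with in code. Here is a minimal sketch of composing and decomposing them, assuming (as is true for us) that the plain cookbook version always has three segments:

```ruby
# Append the dependency hash to the plain cookbook version.
def lkg_version(version, dependency_hash)
  "#{version}.#{dependency_hash}"
end

# Split an LKG version back into [plain_version, dependency_hash].
# The SHA1 hash contains no dots, so everything after the third
# segment is the hash.
def split_lkg_version(lkg_version)
  parts = lkg_version.split('.')
  [parts[0..2].join('.'), parts[3]]
end
```

The dirty check further down relies on exactly this split to compare the plain version and the hash separately.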
We update this synthetic environment any time a cookbook test runs and passes. Here is a snippet of our Rakefile that does this:

cookbook = Promote::Cookbook.new(cookbook_name, config)
environment = Promote::EnvironmentFile.new("LKG", config)
lkg_versions = environment.cookbook_versions
# hash is this cookbook's current dependency hash, computed earlier in the build
cookbook_version = "#{cookbook.version.to_s}.#{hash}"

lkg_versions[cookbook_name] = cookbook_version


This is just using the cookbook and environment classes of our Promote gem to write out the new versions. Following this, the environment, like all of our environments, is committed to source control.

The dirty check

We have now seen how we generate a dependency hash and how we persist it on successful tests. Here is where we actually compare our current dependency hash and the last known hash. We need to determine, before running integration tests on a given cookbook, if it is any different than the last time we successfully tested it. If we previously tested it and neither it nor any of its dependencies (no matter how deep) have changed, we really do not need or want to test it again.

def is_dirty(source_environment, lkg_environment, cookbook_name, dependency_hash = nil)
  # Get the environment of the last good tests
  lkg_env_file = EnvironmentFile.new(lkg_environment, config)

  # if this cookbook is not in the last good tests, we are dirty
  unless lkg_env_file.cookbook_versions.has_key?(cookbook_name)
    return true
  end

  # get the current environment under test
  source_env_file = EnvironmentFile.new(source_environment, config)
  # get the version of this cookbook from the last good tests
  # it will look like 1.1.121.fa2a465e8b964a1c0752a8b59b4a5c1db9074ee5
  # this is the normal version followed by the dependency hash tested
  lkg_parts = lkg_env_file.cookbook_versions[cookbook_name].split('.')

  # Get the "plain" cookbook version
  lkg_v = lkg_parts[0..2].join('.')
  # the dependency hash last tested
  lkg_hash = lkg_parts[3]
  # we are dirty if our plain version or dependency hash has changed
  return source_env_file.cookbook_versions[cookbook_name] != lkg_v ||
    dependency_hash != lkg_hash
end
This method takes the environment currently under test, our LKG environment that has all the versions and their dependency hashes last tested, the cookbook to be tested and its current dependency hash. It returns whether or not our cookbook is dirty and needs testing.

Prior to calling this code, our build pipeline edits our "test" environment (the "source_environment" above). This environment holds the cookbook versions of all cookbooks being tested, including the last batch of commits. We use a custom kitchen provisioner plugin to ensure Test-Kitchen uses these versions and not just the versions in each cookbook's Berkshelf lock file. As I discussed in my dependency post from last year, it's important that all changes are tested together.

So there you have it. Now Hurry up and Wait for your dirty cookbooks to run their tests!

Detecting a hung windows process by Matt Wrock

A couple years ago, I was adding remoting capabilities to boxstarter so that one could set up any machine in their network and not just their local environment. As part of this effort, I ran all MSI installs via a scheduled task because some installers (mainly from Microsoft) would pull down bits via windows update and that always fails when run via a remote session. Sure, the vast majority of MSI installers will not need a scheduled task, but it was just easier to run all through a scheduled task instead of maintaining a whitelist.

I have run across a couple installers that were not friendly to being run silently in the background. They would throw up some dialog prompting the user for input and then the installer would hang forever until I forcibly killed it. I wanted a way to automatically tell when "nothing" was happening as opposed to an installer actually chugging away doing stuff.

It's all in the memory usage

This has to be more than a simple timeout based solution because there are legitimate installs that can take hours. One that jumps to mind rhymes with shequel shurver. We have to be able to tell if the process is actually doing anything without the luxury of human eyes staring at a progress bar. The solution I came up with and am going to demonstrate here has performed very reliably for me and uses a couple memory counters to track when a process is truly idle for long periods of time.

Before I go into further detail, let's look at the code that polls the current memory usage of a process:

function Get-ChildProcessMemoryUsage ($ID=$PID) {
    [int]$res = 0
    # find every process whose parent is $ID and tally its memory
    Get-WmiObject -Class Win32_Process -Filter "ParentProcessID=$ID" | % {
        if($_.ProcessID -ne $null) {
            $proc = Get-Process -ID $_.ProcessID -ErrorAction SilentlyContinue
            if($proc -ne $null) {
                $res += $proc.PrivateMemorySize + $proc.WorkingSet
                # recurse into this child's own children
                $res += (Get-ChildProcessMemoryUsage $_.ProcessID)
            }
        }
    }
    $res
}

Private Memory and Working Set

You will note that the memory usage "snapshot" I take is the sum of PrivateMemorySize and WorkingSet. Such a sum on its own really makes no sense, so let's see what it means. Private memory is the total number of bytes that the process is occupying in windows, and WorkingSet is the number of bytes that are not paged to disk. So if WorkingSet will always be a subset of private memory, why add them?

I don't really care at all about determining how much memory the process is consuming in a single point in time. What I do care about is how much memory is ACTIVE relative to a point in time just before or after each of these snapshots. The idea is that if these numbers are changing, things are happening. I tried JUST looking at private memory and JUST looking at working set and I got lots of false positive hangs - the counts would remain the same over fairly long periods of time but the install was progressing.

So after experimenting with several different memory counters (there are a few others), I found that if I looked at BOTH of these counts together, I could more reliably use them to detect when a process is "stuck."

Watching the entire tree

You will notice in the above code that the Get-ChildProcessMemoryUsage function recursively tallies up the memory usage counts for the process in question and all child processes and their children, etc. This is because the initial installer process tracked by my program often launches one or more subprocesses that do various bits of work. If I only watch the initial root process, I will get false hang detections again because that process may do nothing for long periods of time as it waits on its child processes.
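The recursive tally itself is simple to model. Here is a hedged Ruby sketch of the same walk (illustrative data structures standing in for the WMI queries, not the Boxstarter code): given a map of parent pid to child pids and a map of pid to its private memory + working set, sum a process with all of its descendants:

```ruby
# children: pid => array of pids it spawned
# memory:   pid => private memory + working set for that pid
def tree_memory_usage(pid, children, memory)
  total = memory.fetch(pid, 0)
  children.fetch(pid, []).each do |child_pid|
    # recurse so grandchildren (and deeper) are counted too
    total += tree_memory_usage(child_pid, children, memory)
  end
  total
end
```

Any activity in any descendant changes the tree's total, which is exactly what keeps the hang detector from firing while a quiet parent waits on busy children.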

Measuring the gaps between changes

So we have seen how to get the individual snapshots of memory being used by a tree of processes. As stated before, these are only useful in relationships between other snapshots. If the snapshots fluctuate frequently, then we believe things are happening and we should wait. However if we get a long string where nothing happens, we have reason to believe we are stuck. The longer this string, the more likely our stuckness is a true hang.

Here are some snapshots where the memory counts of the processes do not change:

VERBOSE: [TEST2008]Boxstarter: SqlServer2012ExpressInstall.exe 12173312
VERBOSE: [TEST2008]Boxstarter: SETUP.EXE 10440704
VERBOSE: [TEST2008]Boxstarter: setup.exe 11206656
VERBOSE: [TEST2008]Boxstarter: ScenarioEngine.exe 53219328
VERBOSE: [TEST2008]Boxstarter: Memory read: 242688000
VERBOSE: [TEST2008]Boxstarter: Memory count: 0
VERBOSE: [TEST2008]Boxstarter: SqlServer2012ExpressInstall.exe 12173312
VERBOSE: [TEST2008]Boxstarter: SETUP.EXE 10440704
VERBOSE: [TEST2008]Boxstarter: setup.exe 11206656
VERBOSE: [TEST2008]Boxstarter: ScenarioEngine.exe 53219328
VERBOSE: [TEST2008]Boxstarter: Memory read: 242688000
VERBOSE: [TEST2008]Boxstarter: Memory count: 1
VERBOSE: [TEST2008]Boxstarter: SqlServer2012ExpressInstall.exe 12173312
VERBOSE: [TEST2008]Boxstarter: SETUP.EXE 10440704
VERBOSE: [TEST2008]Boxstarter: setup.exe 11206656
VERBOSE: [TEST2008]Boxstarter: ScenarioEngine.exe 53219328
VERBOSE: [TEST2008]Boxstarter: Memory read: 242688000
VERBOSE: [TEST2008]Boxstarter: Memory count: 2

These are 3 snapshots of the child processes of the SQL Server 2012 installer on an Azure instance being installed by Boxstarter via Powershell remoting. The memory usage is the same for all processes, so if this persists, we are likely in a hung state.

So what is long enough?

Good question!

Having played with and monitored this for quite some time, I have come up with 120 seconds as my threshold - that's 2 minutes for those not paying attention. I think quite often that number can be smaller, but I am willing to err conservatively here. Here is the code that looks for a run of inactivity:

function Test-TaskTimeout($waitProc, $idleTimeout) {
    if($memUsageStack -eq $null){
        $script:memUsageStack = New-Object -TypeName System.Collections.Stack
    }
    if($idleTimeout -gt 0){
        $lastMemUsageCount = Get-ChildProcessMemoryUsage $waitProc.ID
        # a reading that differs from those on the stack means activity,
        # so start the idle count over
        if($lastMemUsageCount -eq 0 -or (($memUsageStack.ToArray() | ? { $_ -ne $lastMemUsageCount }) -ne $null)){
            $memUsageStack.Clear()
        }
        $memUsageStack.Push($lastMemUsageCount)
        if($memUsageStack.Count -gt $idleTimeout){
            KillTree $waitProc.ID
            throw "TASK:`r`n$command`r`n`r`nIs likely in a hung state."
        }
    }
    Start-Sleep -Second 1
}

This creates a stack of these memory snapshots. If the last snapshot is identical to the one just captured, we add the one just captured to the stack. We keep doing this until one of two things happens:

  1. A snapshot is captured that varies from the last one recorded in the stack. At this point we clear the stack and continue.
  2. The number of snapshots in the stack exceeds our limit. Here we throw an error - we believe we are hung.
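The two outcomes above can be modeled in a few lines of any language. Here is a hedged Ruby sketch of the same stack bookkeeping (the threshold is a count of snapshots rather than seconds, and HangDetector is my name for it, not Boxstarter's):

```ruby
# Tracks consecutive identical memory snapshots and reports a hang
# once the run of unchanged readings exceeds the idle threshold.
class HangDetector
  def initialize(idle_threshold)
    @idle_threshold = idle_threshold
    @stack = []
  end

  # Record one snapshot. Returns true if the process now looks hung.
  def record(snapshot)
    # any change from the readings on the stack means activity
    @stack.clear if @stack.any? { |s| s != snapshot }
    @stack.push(snapshot)
    @stack.size > @idle_threshold
  end
end
```

A single changed reading resets the run to length one, so only a genuinely flat stretch of snapshots longer than the threshold trips the detector.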

I hope you have found this information to be helpful and I hope that none of your processes become hung!

Creating windows base images using Packer and Boxstarter by Matt Wrock

I've written a couple posts on how to create vagrant boxes for windows and how to try and get the base image as small as possible. These posts explain the multiple steps of preparing the base box and then the process of converting that box to the appropriate format of your preferred hypervisor. This can take hours. I cringe whenever I have to refresh my vagrant boxes. There is considerable time involved in each step: downloading the initial ISO from Microsoft, installing the windows os, installing updates, cleaning up the image, converting the image to any alternate hypervisor formats (I always make both VirtualBox and Hyper-V images), compacting the image to the vagrant .box file, testing and uploading the images.

Even if all goes smoothly, the entire process can take around 8 hours. There is a lot of babysitting done along the way. Often there are small hiccups that require starting over or backtracking.

This post shows a way that ends much of this agony and adds considerable predictability to a successful outcome. The process can still be lengthy, but there is no babysitting. Type a command, go to sleep and you wake up with new windows images ready for consumption. This post is a culmination of my previous posts on this topic but on steroids...wait...I mean a raw foods diet - it's 100% automated and repeatable.

High level tool chain overview

Here is a synopsis of what this post will walk through:

  • Downloading free evaluation ISOs (180 day) of Windows
  • Using a Packer template to load the ISO in a VirtualBox VM and customize it using a Windows Autounattend.xml file
  • Optimizing the image with Boxstarter, installing all windows updates, and shrinking it as much as possible
  • Outputting a vagrant .box file for creating new VirtualBox VMs with this image
  • Sharing the box with others using Atlas.Hashicorp.com


If you want to quickly jump into things and forgo the rest of this post (or just read it later), read the bit about installing packer or go right to their download page, and then clone my packer-templates github repo. This has the packer template and all the other artifacts needed to build a windows 2012 R2 VirtualBox based Vagrant box for windows. You can study this template and its supporting files to discover all that is involved.

Want Hyper-V?

For the 7 other folks out there that use Hyper-V to consume their devops tool artifacts, I recently blogged how to create a Hyper-V vagrant .box file from a VirtualBox hard disk (.vdi or .vmdk). This post will focus on creating the VirtualBox box file, but you can read my earlier post to easily turn it into a Hyper-V box.

Say hi to Packer

Hi Packer.

In short, Packer is a tool that assists in the automation of machine images. Many are familiar with Vagrant, made by the same outfit - Hashicorp - which has become an extremely popular platform for creating VMs. Making the spinning up of VMs easy to do and easy to share has been huge. But there has remained a somewhat uncharted frontier - automating the creation of the images that make up the VM.

So much of today's automation tooling focuses on bringing a machine from bare OS to a known desired state. However, there is still the challenge of obtaining a bare OS, and perhaps one that is not so "bare". Rather than spending so many cycles building the base OS image up to desired state on each VM creation, why not just do this once and bake it into the base image?...oh wait...we used to do that and it sucked.

We created our "golden" images. We bowed down before them and worshiped at the altar of the instant environment. And then we forgot how we built the environment and we wept.

Just like Vagrant makes building a VM a source controlled artifact, Packer does the same for VM templates, and like Vagrant, it's built on a plugin architecture allowing for the creation of just about all the common formats, including containers.

Installing Packer

Installing packer is super simple. It's just a collection of .exe files on windows, and regardless of platform, it's just an archive that needs to be extracted; then you simply add the extraction target to your path. If you are on Windows, do yourself a favor and install it using Chocolatey.

choco install packer -y

That's it. There should be nothing else needed. As of release 0.8.1 (the current release at the time of this post), Packer has everything needed to build windows images with no need for SSH.

Packer Templates

At the core of creating images using Packer is the packer template file. This is a single json file that is usually rather small. It orchestrates the entire image creation process which has three primary components:

  • Builders - a set of pluggable components that create the initial base image. Often this is just the bare installation media booted to a machine.
  • Provisioners - these plugins can attach to the built, minimal image above and bring it forward to a desired state.
  • Post-Processors - components that take the provisioned image and usually bring it to its final usable artifact. We will be using the vagrant post-processor here, which converts the image to a vagrant .box file.

Here is the template I am using to build my windows vagrant box:

{
  "builders": [{
    "type": "virtualbox-iso",
    "vboxmanage": [
      [ "modifyvm", "{{.Name}}", "--natpf1", "winrm,tcp,,55985,,5985" ],
      [ "modifyvm", "{{.Name}}", "--memory", "2048" ],
      [ "modifyvm", "{{.Name}}", "--cpus", "2" ]
    ],
    "guest_os_type": "Windows2012_64",
    "iso_url": "iso/9600.17050.WINBLUE_REFRESH.140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9.ISO",
    "iso_checksum": "5b5e08c490ad16b59b1d9fab0def883a",
    "iso_checksum_type": "md5",
    "communicator": "winrm",
    "winrm_username": "vagrant",
    "winrm_password": "vagrant",
    "winrm_port": "55985",
    "winrm_timeout": "5h",
    "guest_additions_mode": "disable",
    "shutdown_command": "C:/windows/system32/sysprep/sysprep.exe /generalize /oobe /unattend:C:/Windows/Panther/Unattend/unattend.xml /quiet /shutdown",
    "shutdown_timeout": "15m",
    "floppy_files": [
      ...
    ]
  }],
  "post-processors": [{
    "type": "vagrant",
    "keep_input_artifact": true,
    "output": "windows2012r2min-{{.Provider}}.box",
    "vagrantfile_template": "vagrantfile-windows.template"
  }]
}
As stated in the tldr above, you can view the entire repository here.

Lets walk through this.

First you will notice the template includes a builder and a post-processor as mentioned above. It does not use a provisioner. Those are optional and we'll be doing most of our "provisioning" in the builder. I explain more about why later. Note that one can include multiple builders, provisioners, and post-processors. This one is pretty simple but you can find lots of more complex examples online.

  • type - This uses the virtualbox-iso builder which takes an ISO install file and produces VirtualBox .ovf and .vmdk files. (see packer documentation for information on the other built in builders).
  • vboxmanage - config sent directly to Virtualbox:
    • natpf1 - very helpful if building on a windows host where the winrm ports are already active. Allows you to define port forwarding to winrm port.
    • memory and cpus - these settings will speed things up a bit.
  • iso_url: The url of the iso file to load. This is the windows install media. I downloaded an eval version of server 2012 R2 (discussed below). This can be either an http url that points to the iso online or an absolute file path or file path relative to the current directory. I keep the iso file in an iso directory but because it is so large I have added that to my .gitignore.
  • iso_checksum and iso_checksum_type: These serve to validate the iso file contents. More on how to get these values later.
  • winrm values: as of version 0.8.0, packer comes with a winrm communicator. This means no need to install an SSH server. I use the vagrant username and password because this will be a vagrant box and those are the default credentials used by vagrant. Note that above in the vboxmanage settings I forward host port 55985 to port 5985 on the guest. 5985 is the default http winrm port, so I need to specify 55985 as the winrm port I am using. The reason I am using a non-default port is that the host I am using has winrm enabled and listening on 5985. If you are not on a windows host, you can probably just use the default port, but that would conflict on my host. I specify a 5 hour timeout for winrm. This is the maximum amount of time that packer will wait for winrm to be available. This is very important and I will discuss why later.
  • guest_additions_mode - By default, the virtualbox-iso builder will upload the latest virtualbox guest additions to the box. For my purposes I do not need this; it adds extra time, takes more space, and I have also had intermittent errors during the upload that hose the entire build.
  • shutdown_command: This is the command used to shut down the machine. Different operating systems may require different commands. I am invoking sysprep, which shuts down the machine when it ends. Sysprep, when called with /generalize as I am doing here, strips the machine of security identifiers, machine name and other elements that make it unique. This is particularly useful if you plan to use the image in an environment where many machines may be provisioned from this template and need to interact with one another. Without doing this, all machines would share the same name and user SIDs, which could cause problems especially in domain scenarios.
  • floppy_files: an array of files to be added to a floppy drive and made accessible to the machine. Here these include an answer file, and other files to be used throughout the image preparation.
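Pieced together, a minimal template exercising the settings above might look something like this. Treat it as a sketch: the iso url, checksum, and file paths are placeholders rather than the values from my repo.

```json
{
  "builders": [{
    "type": "virtualbox-iso",
    "communicator": "winrm",
    "winrm_username": "vagrant",
    "winrm_password": "vagrant",
    "winrm_port": "55985",
    "winrm_timeout": "5h",
    "guest_additions_mode": "disable",
    "iso_url": "iso/windows2012r2.iso",
    "iso_checksum_type": "md5",
    "iso_checksum": "0123456789abcdef0123456789abcdef",
    "vboxmanage": [
      ["modifyvm", "{{.Name}}", "--natpf1", "winrm,tcp,,55985,,5985"],
      ["modifyvm", "{{.Name}}", "--memory", "2048"],
      ["modifyvm", "{{.Name}}", "--cpus", "2"]
    ],
    "shutdown_command": "C:/windows/system32/sysprep/sysprep.exe /generalize /oobe /unattend:C:/Windows/Panther/Unattend/unattend.xml /quiet /shutdown",
    "floppy_files": [
      "answer_files/AutoUnattend.xml",
      "scripts/boxstarter.ps1"
    ]
  }],
  "post-processors": [{
    "type": "vagrant",
    "output": "windows2012r2-virtualbox.box"
  }]
}
```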

Obtaining evaluation copies of windows

Many do not know that you can obtain free and fully functioning versions of all the latest versions of windows from Microsoft. These are "evaluation" copies and only last 180 days. However, if you only plan to use the images for testing purposes like I do, that should be just fine.

You can get these from https://msdn.microsoft.com/en-us/evalcenter.aspx. You will just need to regenerate the image at least every 180 days. You are not bound to purchase after 180 days; you can simply download a new ISO.

Finding the iso checksums

As shown above, you will need to provide the checksum and checksum type for the iso file you are using. You can do this using a utility called fciv.exe. You can install it from chocolatey:

choco install fciv -y

Now you can call fciv and pass it the path to any file, and it will produce the checksum of that file, in md5 format by default; you can pass a different format if desired.
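If you would rather not install anything extra, recent versions of PowerShell (4.0 and later) can compute the same value with the built-in Get-FileHash cmdlet. The iso path here is illustrative:

```powershell
# md5 checksum suitable for the iso_checksum setting (iso_checksum_type "md5")
(Get-FileHash -Path .\iso\windows2012r2.iso -Algorithm MD5).Hash
```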

Significance of winrm availability and the winrm_timeout

The basic flow of the virtualbox packer build is as follows:

  1. Boot the machine with the ISO
  2. Wait for winrm to be accessible
  3. As soon as it is accessible via winrm, shutdown the box
  4. start the provisioners
  5. when the provisioners are complete, run the post processors

In a perfect world I would have the windows install process enable winrm early in my setup. This would result in a machine restart and then invoke provisioners that perform all of my image bootstrap scripts. However, there are problems with that sequence on windows, and the primary culprit is windows updates. I want to have all critical updates installed ASAP. However, windows updates cannot easily be run via a remote session, which is how the provisioners operate, and once they complete you will want to reboot the box, which can cause the provisioners to fail.

Instead, I install all updates during the initial boot process. This allows me to freely reboot the machine as many times as I need, since packer is just waiting for winrm and will not touch the box until it is available; I simply make sure not to enable winrm until the very end of my bootstrap scripts.

Also, these scripts run locally, so windows update installs can be performed without issue.

Bootstrapping the image with an answer file

Since we are feeding virtualbox an install iso, if we did nothing else, it would prompt the user for all of the typical windows setup options like locale, admin password, disk partition, etc. Obviously this is all meant to be scripted and unattended, and those prompts would just hang the install. This is what answer files are for. Windows uses answer files to automate the setup process, and there are all sorts of options one can provide to customize it.

My answer file is located in my repo here. Note that it is named AutoUnattend.xml and is added to the floppy drive of the booted machine. Windows will load any file named AutoUnattend.xml found in the floppy drive and use it as the answer file. I am not going to go through every line here, but do know there are additional options beyond what I have specified. I will cover some of the more important parts.

      <LocalAccount wcm:action="add">
        <Description>Vagrant User</Description>

This creates the administrator user vagrant with the password vagrant. These are the default vagrant username and password, so setting up this admin user will make vagrant box setups easier. This also allows the initial boot to auto logon as this user instead of prompting.

  <SynchronousCommand wcm:action="add">
     <CommandLine>cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\boxstarter.ps1</CommandLine>

You can specify multiple SynchronousCommand elements containing commands that should be run when the user first logs in. I find it easier to keep this already difficult to read file manageable by specifying just one powershell file to run and having that file orchestrate the entire bootstrapping.

This file boxstarter.ps1 is another file in my scripts directory of the repo that I add to the virtualbox floppy. We will look closely at that file in just a bit.

<settings pass="specialize">
  <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-ServerManager-SvrMgrNc" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
  <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-IE-ESC" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
  <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-OutOfBoxExperience" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">

In short, this customizes the image in such a way that will make it far less likely to cause you to kill yourself or others while actually using the image. So you are totally gonna want to include this.

This prevents the "server manager" from opening on startup and allows IE to actually open web pages without the need to ceremoniously click thousands of times.

A boxstarter bootstrapper

So given the above answer file, windows will install and reboot and then the vagrant user will auto logon. Then a powershell session will invoke boxstarter.ps1. Here it is:

$WinlogonPath = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
Remove-ItemProperty -Path $WinlogonPath -Name AutoAdminLogon
Remove-ItemProperty -Path $WinlogonPath -Name DefaultUserName

iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/mwrock/boxstarter/master/BuildScripts/bootstrapper.ps1'))
Get-Boxstarter -Force

$secpasswd = ConvertTo-SecureString "vagrant" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("vagrant", $secpasswd)

Import-Module $env:appdata\boxstarter\boxstarter.chocolatey\boxstarter.chocolatey.psd1
Install-BoxstarterPackage -PackageName a:\package.ps1 -Credential $cred

This downloads the latest version of boxstarter, installs the boxstarter powershell modules, and finally installs package.ps1 via a boxstarter install run. You can visit boxstarter.org for more information regarding all the details of what boxstarter does. The key feature is that it can run a powershell script and handle machine reboots by making sure the user is automatically logged back in and that the script (package.ps1 here) is restarted.

Boxstarter also exposes many commands that can tweak the windows UI, enable/disable certain windows options and also install windows updates.

Note the winlogon registry edit at the beginning of the boxstarter bootstrapper. Without this, boxstarter will not turn off the autologon when it completes. This is only necessary when running boxstarter from an auto-logged-on session like this one. Boxstarter takes note of the current autologon settings before it begins and restores them once it finishes, so in this unique case it would restore an autologon state.

Here is package.ps1, the meat of our bootstrapping:

Set-NetFirewallRule -Name RemoteDesktop-UserMode-In-TCP -Enabled True

Write-BoxstarterMessage "Removing page file"
$pageFileMemoryKey = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
Set-ItemProperty -Path $pageFileMemoryKey -Name PagingFiles -Value ""

Update-ExecutionPolicy -Policy Unrestricted

Write-BoxstarterMessage "Removing unused features..."
Remove-WindowsFeature -Name 'Powershell-ISE'
Get-WindowsFeature | 
? { $_.InstallState -eq 'Available' } | 
Uninstall-WindowsFeature -Remove

Install-WindowsUpdate -AcceptEula
if(Test-PendingReboot){ Invoke-Reboot }

Write-BoxstarterMessage "Cleaning SxS..."
Dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

# The array of paths to remove was truncated in this excerpt; the entries
# below are illustrative examples of the kind of cruft being deleted.
@(
    "C:\Windows\Temp\*",
    "C:\Windows\Logs"
) | % {
        if(Test-Path $_) {
            Write-BoxstarterMessage "Removing $_"
            Takeown /d Y /R /f $_
            Icacls $_ /GRANT:r administrators:F /T /c /q  2>&1 | Out-Null
            Remove-Item $_ -Recurse -Force -ErrorAction SilentlyContinue | Out-Null
        }
    }

Write-BoxstarterMessage "defragging..."
Optimize-Volume -DriveLetter C

Write-BoxstarterMessage "0ing out empty space..."
wget http://download.sysinternals.com/files/SDelete.zip -OutFile sdelete.zip
# ZipFile lives in System.IO.Compression.FileSystem, which is not loaded by default
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory("$pwd\sdelete.zip", "$pwd")
./sdelete.exe /accepteula -z c:

mkdir C:\Windows\Panther\Unattend
copy-item a:\postunattend.xml C:\Windows\Panther\Unattend\unattend.xml

Write-BoxstarterMessage "Recreate pagefile after sysprep"
$System = GWMI Win32_ComputerSystem -EnableAllPrivileges
$System.AutomaticManagedPagefile = $true
$System.Put()

Write-BoxstarterMessage "Setting up winrm"
Set-NetFirewallRule -Name WINRM-HTTP-In-TCP-PUBLIC -RemoteAddress Any
Enable-WSManCredSSP -Force -Role Server

Enable-PSRemoting -Force -SkipNetworkProfileCheck
winrm set winrm/config/client/auth '@{Basic="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'

The goal of this script is to fully patch the image and then get it as small as possible. Here is the breakdown:

  • Enable Remote Desktop. We do this with a bit of shame but so be it.
  • Remove the page file. This frees up about a GB of space. At the tail end of the script we turn it back on which means the page file will restore itself the first time this image is run after this entire build.
  • Update the powershell execution policy to unrestricted because who likes restrictions? You can set this to what you are comfortable with in your environment but if you do nothing, powershell can be painful.
  • Remove all windows features that are not enabled. This is a new feature in 2012 R2 called Features on Demand and can save considerable space.
  • Install all critical windows updates. There are about 118 at the time of this writing.
  • Restart the machine if reboots are pending and the first time this runs, they will be.
  • Run the DISM cleanup that cleans the WinSxS folder of rollback files for all the installed updates. Again this is new in 2012 R2 and can save quite a bit of space. Warning: it also takes a long time but not nearly as long as the updates themselves.
  • Remove some of the random cruft. This is not a lot but why not get rid of what we can?
  • Defragment the hard drive and 0 out empty space. This will allow the final act of compression to do a much better job compressing the disk.
  • Lastly, and it is important this is last, enable winrm. Remember that once winrm is accessible, packer will run the shutdown command and in our case here that is the sysprep command:
C:/windows/system32/sysprep/sysprep.exe /generalize /oobe /unattend:C:/Windows/Panther/Unattend/unattend.xml /quiet /shutdown

This will cause a second answer file to fire after the machine next boots. That will not happen until after this image is built, likely just after a "vagrant up" of the image. That file can be found here and it's much smaller than the initial answer file that drove the windows install. This second unattend file mainly ensures that the user will not have to reset the admin password at initial startup.
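I have not reproduced the whole file here, but the heart of it is the oobeSystem pass. A fragment along these lines (abbreviated, with the namespace and architecture attributes omitted) suppresses the out of box experience prompts so no one has to click through them at first boot:

```xml
<settings pass="oobeSystem">
  <component name="Microsoft-Windows-Shell-Setup">
    <OOBE>
      <HideEULAPage>true</HideEULAPage>
      <HideLocalAccountScreen>true</HideLocalAccountScreen>
      <NetworkLocation>Work</NetworkLocation>
      <ProtectYourPC>1</ProtectYourPC>
    </OOBE>
  </component>
</settings>
```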

Packaging the vagrant file

So now we have an image and the builder's job is done. On my machine this all takes just under 5 hours. Your mileage may vary, but make sure that your winrm_timeout is set appropriately. If the timeout is less than the length of the build, the entire build will fail and be forever lost.

The vagrant post-processor is what generates the vagrant box. It takes the artifacts of the builder and packages them up. There are other post-processors available; you can string several together, and you can create your own custom post-processors. You can make your own provisioners and builders too.

One post-processor worth looking at is the atlas post-processor, which you can add after the vagrant post-processor to upload your vagrant box to Atlas and share it with others.
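Chaining it after the vagrant post-processor looks roughly like this; the artifact name is hypothetical, and nesting both entries in one inner array is what lets the box artifact flow from one post-processor to the next:

```json
"post-processors": [
  [
    {
      "type": "vagrant",
      "output": "windows2012r2-virtualbox.box"
    },
    {
      "type": "atlas",
      "artifact": "yourname/windows2012r2",
      "artifact_type": "vagrant.box",
      "metadata": { "provider": "virtualbox" }
    }
  ]
]
```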

Next steps

I have just gotten to the point where this is all working. So this is rough around the edges but it does work. There are several improvements to be made here:

  • Make a boxstarter provisioner for packer. This could run after the initial os install in a provisioner and AFTER winrm is enabled. I would have to leverage boxstarter's remoting functionality so that it does not fail when interacting with windows updates over a winrm session. One key advantage is that the boxstarter output would bubble up to the console running packer giving much more visibility to what is happening with the build as it is running. As it stands now, all output can only be seen in the virtualbox GUI which will not appear in headless environments like in an Atlas build.
  • As stated at the beginning of this post, I create both virtualbox and Hyper-V images. The Hyper-V conversion could use its own post-processor. Now I simply run this in a separate powershell command.
  • Use variables to make the scripts reusable. I'm just generating a single box now but I will definitely be generating more: client SKUs, server core, nano, etc. Packer allows you to specify variables and better templatize the build.

Thanks Matt Fellows and Dylan Meissner

I started looking into all of this a couple months ago and then got sidetracked until last week. During my initial investigation in May, packer 0.8.1 had not yet been released and there was no winrm support out of the box. Matt and Dylan both contributed to those efforts and also provided really helpful pointers as I was having issues getting things working right.

Creating a Hyper-V Vagrant box from a VirtualBox vmdk or vdi image by Matt Wrock

I personally use Hyper-V as my hypervisor on windows, but I use VirtualBox on my Ubuntu work laptop, so it is convenient for me to create boxes in both formats. However, it is a major headache to do so, especially when preparing Windows images. I either have to run through creating the image twice (once for VirtualBox and again for Hyper-V) or I have to copy multi-gigabyte files across my computers and then convert them.

This post will demonstrate how to automate the conversion of a VirtualBox disk image to a Hyper-V compatible VHD and create a Vagrant .box file that can be used to fire up a functional Hyper-V VM. This can be entirely done on a machine without Hyper-V installed or enabled. I will be demonstrating on a Windows box using Powershell scripts but the same can be done on a linux box using bash.

The environment

You cannot have VirtualBox and Hyper-V comfortably coexist on the same machine. I have Hyper-V enabled on my personal windows laptop, so I needed some "bare metal" on which to install VirtualBox. As luck would have it, I was able to salvage my daughter's busted old laptop. Its video is hosed, which is just fine for my purposes: I'll be running it as a headless VM host. Here is how I set it up:

  • Repaved with Windows 8.1 with Update 1 Professional, fully patched
  • Enabled RemoteDesktop
  • Installed VirtualBox, Vagrant, and Packer with Chocolatey

Of course I used Boxstarter to set it all up. After all I was not born in a barn.

Creating the VirtualBox image

I am currently working on another post (hopefully due to publish this week) that will go into detail on creating Windows images with Packer and Boxstarter, covering many gory details around unattend.xml files, sysprep, and tips to get the image as small as possible. This post will not cover the creation of the image; I'm assuming you are able to create a VirtualBox VM, and that is all you need to get started here. The guest OS does not matter. You just need a .VDI or .VMDK file, which is what is automatically generated when you create a VirtualBox VM.

Converting the virtual hard disk to VHD

This is a simple one liner using VBoxManage.exe which installs with VirtualBox. You just need the path to the VirtualBox VM's hard disk.

$vboxDisk = Resolve-Path "$baseDir\output-virtualbox-iso\*.vmdk"
$hyperVDir = "$baseDir\hyper-v-output\Virtual Hard Disks"
$hyperVDisk = Join-Path $hyperVDir 'disk.vhd'
$vbox = "$env:programfiles\oracle\VirtualBox\VBoxManage.exe"
.$vbox clonehd $vboxDisk $hyperVDisk --format vhd

This uses the clonehd command and takes the location of the VirtualBox disk and the path of the vhd to be created. The vhd --format is also supplied. The conversion takes a few minutes to complete on my system.

Note that this will likely produce a vhd file that is much larger than the vmdk or vdi image from which it was converted. That's OK. My 3.8GB vmdk produced a 9GB vhd. This is simply because the VHD is uncompressed, and we'll take care of that in the last step. It is highly advisable to "0 out" all unused disk space as the last step of the image creation to assist the compression. On windows images, the Sysinternals tool sdelete does a good job of that.

Laying out the Hyper-V and Vagrant metadata

So you now have a vhd file that can be sucked into any Hyper-V VM. However, the vhd alone is not enough for Vagrant to produce a Hyper-V VM. There are two key bits we need to create: a Hyper-V xml file that defines the VM metadata, and the vagrant metadata json, plus an optional Vagrantfile. These all have to be archived in a specific folder structure.

The Hyper-V XML

Prior to Windows 10, Hyper-V stored virtual machine metadata in an XML file. Vagrant expects this XML file and inspects it for several bits of metadata that it uses to create a new VM when you "vagrant up". It's still completely compatible with Windows 10 since Vagrant will simply use the Hyper-V Powershell cmdlets to create a new VM. However, it does mean that exporting a Windows 10 Hyper-V vm will no longer produce this xml file; instead it produces two binary files in vmcx and vmrs formats that are no longer readable or editable. However, if you happen to have access to an xml vm file exported from v8.1/2012R2 or earlier, you can still use that.

Here I am using just that: an xml file exported from a 2012R2 VM. It's fairly large, so I will not display it in its entirety here, but you can view it on github here. You can edit things like vm name, available ram, switches, etc. This one should be compatible out of the box on any Hyper-V host. The most important thing is to make sure that the file name of the hard disk vhd referenced in this metadata matches the filename of the vhd you produced above. Here is the disk metadata:

   <iops_limit type="integer">0</iops_limit>
    <iops_reservation type="integer">0</iops_reservation>
    <pathname type="string">C:\dev\vagrant\Win2k12R2\.vagrant\machines\default\hyperv\disk.vhd</pathname>
    <persistent_reservations_supported type="bool">False</persistent_reservations_supported>
    <type type="string">VHD</type>
    <weight type="integer">100</weight>
    <pathname type="string"></pathname>
    <type type="string">NONE</type>

Another thing to take note of here is the "subtype" of the VM. This is known to most as the "generation." Later Hyper-V versions support both generation one and generation two VMs. If you are from the future, there may be more. Typically vagrant boxes will do fine on generation 1 and will also be more portable and accessible to older hosts, so I make sure this is specified.

  <creation_time type="bytes">j3iz2977zwE=</creation_time>
  <global_id type="string">0AA394FA-7C4A-4070-BA32-773D43B28A68</global_id>
  <highly_available type="bool">False</highly_available>
  <last_powered_off_time type="integer">130599959406707888</last_powered_off_time>
  <last_powered_on_time type="integer">130599954966442599</last_powered_on_time>
  <last_state_change_time type="integer">130599962187797355</last_state_change_time>
  <name type="string">2012R2</name>
  <notes type="string"></notes>
  <subtype type="integer">0</subtype>
  <type_id type="string">Virtual Machines</type_id>
  <version type="integer">1280</version>

Again, it's the subtype that is of relevance here: 0 is generation 1. Also, here is where you would change the name of the vm if desired.

If you are curious about exactly how Vagrant parses this file to produce a VM, here is the powershell that does that. I don't think there is any reason why this same xml file would not work for other OSes, but you would probably want to change the vm name. In a bit, I'll show you where this file goes.

Vagrant metadata.json

This is super simple but 100% required. The packaged vagrant box file needs a metadata.json file in its root. Here are the exact contents of the file:

{
  "provider": "hyperv"
}

Optional Vagrantfile

This is not required but especially for windows boxes it may be helpful to include an embedded Vagrantfile. If you are familiar with vagrant, you know that the Vagrantfile contains all of the config data for your vagrant box. Well if you package a Vagrantfile with the base box as I will show here, the config in this file will be inherited by any Vagrantfile that consumes this box. Here is what I typically add to windows images:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.guest = :windows
  config.vm.communicator = "winrm"

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true
    vb.memory = "1024"
  end

  config.vm.provider 'hyperv' do |hv|
    hv.ip_address_timeout = 240
  end
end

First, this is pure ruby. That likely does not matter at all, and those of you unfamiliar with ruby can hopefully grok what this file is specifying; but if you want to go crazy with ifs, whiles, and dos, go right ahead.

I find this file provides a better windows experience across the board by specifying the following:

  • As long as winrm is enabled on the base image vagrant can talk to it on the right port.
  • VirtualBox will run the vm in a GUI console
  • Hyper-V typically takes more than 2 minutes to become accessible, and this prevents timeouts.

Laying out the files

Here is how all of these files should be structured in the end:
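Based on the paths used earlier in this post, and on the folder names vagrant's hyperv provider expects, the archive contents end up like this (box.xml stands in for whatever your exported vm xml file is called):

```
hyper-v-output/
├── metadata.json
├── Vagrantfile
├── Virtual Hard Disks/
│   └── disk.vhd
└── Virtual Machines/
    └── box.xml
```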

Creating the .box file

In the end vagrant consumes a tar.gz file with a .box extension. We will use 7zip to create this file.

."$env:chocolateyInstall\tools\7za.exe" a -ttar package-hyper-v.tar hyper-v-output\*
."$env:chocolateyInstall\tools\7za.exe" a -tgzip package-hyper-v.box package-hyper-v.tar

If you have chocolatey installed, Rejoice! you already have 7zip. If you do not have chocolatey, you should feel bad.

This simply creates the tar archive and then compresses it. It takes about 20 minutes on my machine.

Testing the box

Now copy the final box file to a machine that runs Hyper-V and has vagrant installed.

C:\dev\vagrant\Win2k12R2\exp\Win-2012R2> vagrant box add test-box .\package-hyper-v.box
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'test-box' (v0) for provider:
    box: Unpacking necessary files from: file://C:/dev/vagrant/Win2k12R2/exp/Win-2012R2/package-hyper-v.box
    box: Progress: 100% (Rate: 98.8M/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'test-box' (v0) for 'hyperv'!
C:\dev\vagrant\Win2k12R2\exp\Win-2012R2> vagrant init test-box
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
C:\dev\vagrant\Win2k12R2\exp\Win-2012R2> vagrant up
Bringing machine 'default' up with 'hyperv' provider...
==> default: Verifying Hyper-V is enabled...
==> default: Importing a Hyper-V instance
    default: Cloning virtual hard drive...
    default: Creating and registering the VM...
    default: Successfully imported a VM with name: 2012R2Min
==> default: Starting the machine...
==> default: Waiting for the machine to report its IP address...
    default: Timeout: 240 seconds
    default: IP:
==> default: Waiting for machine to boot. This may take a few minutes...
    default: WinRM address:
    default: WinRM username: vagrant
    default: WinRM transport: plaintext
==> default: Machine booted and ready!
==> default: Preparing SMB shared folders...
    default: You will be asked for the username and password to use for the SMB
    default: folders shortly. Please use the proper username/password of your
    default: Windows account.
    default: Username: matt
    default: Password (will be hidden):
==> default: Mounting SMB shared folders...
    default: C:/dev/vagrant/Win2k12R2/exp/Win-2012R2 => /vagrant

Enjoy your boxes!