A destructible and repeatable chef workstation environment by Matt Wrock

Several months ago I posted on the CenturyLink Cloud blog about how we reproduce our chef development environment with Vagrant, using the chef solo provisioner to consume the same cookbook as our build agents and create an environment for testing cookbooks. This post dives into the technical details of such a Vagrantfile/cookbook and kicks it up a notch by publicly sharing a repo that does just that, stripped of any CenturyLink-specific references like our Nexus server, Berkshelf API client setup, or internal gem installs. Here I will share a chef_workstation repo that anyone can use to quickly spin up an environment that's awesome for developing and testing cookbooks. You can find it in my github chef_workstation repo.

Why a vagrant box for chef development?

Many use vagrant for creating test nodes that converge cookbooks, but it is less common to hear about using vagrant to create chef workstations/development environments. The fact is that ChefDK goes a long way toward providing an isolated, pre-built chef environment. However, every environment is different even if many use the ChefDK as a starting point. It's not uncommon to be changing or creating gems, adding tools, and tweaking rake tasks before you want to either "capture" these changes to be easily shared or blow them away and start fresh from a known, working base.

We have found that being able to easily reproduce shared dev environments has added a ton of value to our workflow. So let's dive in and explore what exactly this repo provides.

What's in the box?

I've tried to make sure the current readme describes the details of this box, what it contains, and how to use it. So please feel free to review that. Here is a high level list of what the chef_workstation repo provides:

  • chefdk
  • docker
  • nano text editor
  • git
  • squid proxy cache
  • SSH agent forwarding
  • generated knife.rb
  • Rakefile with tasks for cookbook testing
  • Support for VirtualBox, Parallels, Hyper-V and VMWare
  • Secret energizing ingredient guaranteed to infuse your soul with the spirit of automation!!

The repo contains a Vagrantfile at the root that builds an Ubuntu 12.04 image and sets up all of the above. I've been updating this repo in preparation for this post, and I can say there is a fair amount that goes into getting this setup just right. Some of it seems trivial to me today, but when I was just getting used to chef and linux a year ago, much of this seemed very mysterious.

We use vagrant to develop but not for testing

Let's get this out of the way right now. On the automation team at CenturyLink Cloud, we use docker to test almost all linux based cookbooks and a custom vsphere kitchen driver for everything else. We don't use kitchen-vagrant. I do use kitchen-vagrant often on personal projects, but it just does not make sense at work. We ARE a cloud, so spinning up our own cloud resources needs to be central to our testing. It's also simpler for us to manage test instances in our own large scale capacity cloud than on a much thinner build agent's hard drive.

So if you primarily use kitchen-vagrant for testing, you may not benefit from all the features here, but you may want to stick around before you run off. There is more here than just a test-kitchen environment, and if you work mostly with linux infrastructure, I can't stress enough the benefits of containers for testing productivity and shorter feedback cycles. This workstation aims to reduce the friction of setting this up repeatably.

Cache all the things!

One feature of this environment that is very popular is its use of squid and the vagrant-cachier plugin. The first time you run vagrant up with this environment, it may take about 10 minutes, give or take. This setup caches most internet-downloaded artifacts on the host so that all subsequent vagrant ups build much more quickly.

This feature is not available on windows hosts, but if you run a mac or a linux desktop, it works wonders to bring down box build times.
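For the curious, wiring the plugin into a Vagrantfile takes only a couple of lines. This is a sketch, not the repo's exact Vagrantfile; the guard keeps hosts without vagrant-cachier working:

```ruby
# Vagrantfile excerpt (sketch): enable vagrant-cachier when the plugin is present
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
  if Vagrant.has_plugin?("vagrant-cachier")
    # share one download cache across all machines built from this box
    config.cache.scope = :box
  end
end
```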

Standardize on a base tools install while maintaining freedom of dev tools

I've historically preferred not to work in VMs as a dev environment. I either need to make sure all my "stuff" is installed there or cope with an environment that has minimal tooling. For those who use vagrant, you know this is where it shines. It syncs folders seamlessly across guest and host regardless of differing operating systems so I can dev on the same files in my favorite text editor or IDE on the host and then run those bits immediately in the shared guest environment.

This environment standardizes on Ubuntu 12.04 as a guest for a few reasons. It's used on many of our production servers, and the popular hashicorp/precise64 vagrant box image has providers for VirtualBox, Hyper-V, VMWare Fusion and Parallels. This makes it easy to share with any member of our org; I don't think there is anyone who cannot accommodate at least one of these hypervisors. We don't have to tell anyone that their OS is not welcome.

Decoupling the "shell" from the "core"

This repository is intended to serve as one's "master" chef cookbook repository, but it allows one to keep individual cookbooks in separate repos or in a single repo separate from this one. This repository contains only one cookbook, which actually builds the environment. Any additional cookbook added to the cookbooks directory is gitignored and should not be maintained in this repo. Simply clone your cookbooks into the cookbooks directory to maintain them separately. As long as they are in the cookbooks directory, they are included in the vagrant folder syncing and immediately available in your vagrant guest.

Common rake tasks

The readme goes into the specifics of how to call all of the available tasks. I removed quite a lot from this Rakefile that existed in our own CenturyLink version and tied into our internal CI/CD pipeline, with tasks for syncing dependencies, bumping versions, promoting cookbooks to different environments, etc. However, what remains are tasks that most would expect to be present in any basic chef repo.

The tasks can work on a single cookbook or all cookbooks in the cookbooks directory in a single task invocation. The kitchen task also allows you to provide your own cookbook level Rakefile if you need to "orchestrate" the test suites for more complex test behavior.
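To make "single cookbook or all cookbooks" concrete, here is a sketch of how such a task can take an optional cookbook argument. The task body and names are illustrative, not the repo's actual Rakefile:

```ruby
require "rake"
include Rake::DSL

# Pick the cookbooks a task should operate on: one named cookbook,
# or every directory under cookbooks/ when none is given.
def cookbooks_to_run(requested, cookbooks_dir = "cookbooks")
  requested ? [requested] : Dir.glob("#{cookbooks_dir}/*").map { |d| File.basename(d) }
end

# rake chefspec            -> spec every cookbook
# rake chefspec[couchbase] -> spec just one
task :chefspec, [:cookbook] do |_t, args|
  cookbooks_to_run(args[:cookbook]).each { |cb| sh "rspec cookbooks/#{cb}" }
end
```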

Chef server friendly

The idea here is that if you work with a chef server, upon bringing up this environment, knife commands should "just work" with your account with no gymnastics or secret winking patterns (not that I think secret winking patterns are an anti-pattern -- I'm a huge fan!).

The chef_workstation cookbook generates a knife.rb in the .chef directory of the repo, but only if no other knife.rb file exists there. By default, the username of the logged-in user on the host is used, and the chef server url points at the publicly hosted chef server but is missing the org in the path. Both the user and server url are configurable in the Vagrantfile. It is then up to the user to add their .pem file to the top level .chef directory with the same name as the user. The entire .chef directory is gitignored so you can add keys without fear of committing them to source control.
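The generated file ends up looking something like this (values illustrative; the real template lives in the chef_workstation cookbook, and YOUR_ORG stands in for the org segment you fill in):

```ruby
# .chef/knife.rb (sketch -- generated once, never overwritten)
current_dir = File.dirname(__FILE__)
user        = "mwrock"   # defaults to the host's logged-in user

node_name       user
client_key      "#{current_dir}/#{user}.pem"
chef_server_url "https://api.opscode.com/organizations/YOUR_ORG"
cookbook_path   ["#{current_dir}/../cookbooks"]
```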

A brief walk through of building the box and testing with docker

Getting setup

See the readme for the exact prerequisites and install instructions. Chances are high you already have them: at the very least you want vagrant and one of the supported hypervisors.

Unless you are on windows, I strongly suggest installing vagrant-cachier, but it's completely optional. Finally, clone the chef_workstation repo.

Make it your own

This wisdom applies equally to American Idol performances and customized chef workstation vagrant boxes. Edit the Vagrantfile's chef.json property, which injects node attributes into the chef_workstation cookbook that provisions the box. These can include the chef server user name and url, gems you want added beyond the default installed gems, or additional deb packages you want included. All the possible attributes are mentioned here.
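In the Vagrantfile that amounts to something like this; the attribute names below are illustrative, so check the readme for the real ones:

```ruby
# Vagrantfile excerpt (sketch -- attribute names illustrative)
config.vm.provision "chef_solo" do |chef|
  chef.add_recipe "chef_workstation"
  chef.json = {
    "chef_workstation" => {
      "user"            => "mwrock",
      "chef_server_url" => "https://api.opscode.com/organizations/my_org",
      "gems"            => ["knife-vsphere"],
      "deb_pkgs"        => ["tree"]
    }
  }
end
```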

Also add your own cookbooks that you intend to work with to the cookbooks directory. So far, everything we have mentioned in the walk through you only have to do once.

For this walk through, I will add the same cookbook covered a couple posts ago in my multi node testing with test-kitchen and docker post. It's a fork of the couchbase cookbook that can add nodes to a cluster.

cd cookbooks
git clone https://github.com/mwrock/couchbase -b multi-node

Build the box

Now run:

vagrant up

If you are using a hypervisor other than virtualbox, make sure to either:

  • Set the VAGRANT_DEFAULT_PROVIDER environment variable to the provider you want to use
  • Specify it in the --provider argument to vagrant up

In a few minutes the box should be ready.

Add your server key if using a server

If you use a chef server, copy your private key used to authenticate to the .chef directory in the root of the repo. The box provisioning should have created this directory. This file should be named {user_name}.pem. You may of course edit the knife.rb file if you prefer to use a different name. Once generated, the knife.rb file will never be overwritten when provisioning the box, and as mentioned previously, it is in the .gitignore file and thus excluded from source control.

Log on to the box and navigate to the couchbase cookbook

vagrant ssh
cd cookbooks/couchbase

Test a kitchen suite

We won't go through the whole multi node test cycle here. Refer to my post on that topic if you are interested in running through it; this environment supports it without a problem.

First let's run our chefspecs just to make sure unit tests are happy:

vagrant /chef-repo $ rake chefspec[couchbase]

2 deprecation warnings total

Finished in 0.90515 seconds (files took 2.34 seconds to load)
243 examples, 0 failures

vagrant /chef-repo $

Looks good. Now let's see what kitchen suites we have available. There are quite a few. Here is a portion of the suites at the top of the list:

vagrant /chef-repo/cookbooks/couchbase $ kitchen list
Instance                          Driver   Provisioner  Verifier  Transport  Last Action
server-community-debian-76        Docker   Nodes        Busser    Ssh        <Not Created>
server-community-ubuntu-1204      Docker   Nodes        Busser    Ssh        <Not Created>
server-community-centos-65        Docker   Nodes        Busser    Ssh        <Not Created>
server-community-windows-2012R2   Vagrant  Nodes        Busser    Winrm      <Not Created>
second-node-debian-76             Docker   Nodes        Busser    Ssh        <Not Created>
second-node-ubuntu-1204           Docker   Nodes        Busser    Ssh        <Not Created>

We will run the server-community-ubuntu-1204 suite:

kitchen verify server-community-ubuntu-1204

You will likely see a lot of text, and this may take several minutes to complete, at least the first time, as the base ubuntu docker image downloads. If you are using vagrant-cachier, the next time will be much faster. Hopefully this should end in a successful converge and serverspec tests...yup:

       Finished in 0.13151 seconds (files took 0.54705 seconds to load)
       13 examples, 0 failures

       Finished verifying <server-community-ubuntu-1204> (0m5.89s).
-----> Kitchen is finished. (5m1.32s)

Add the cookbook to our server

First we'll do a berks vendor to grab all dependencies:

vagrant /chef-repo/cookbooks/couchbase $ berks vendor /tmp/couchbase-vendor
Resolving cookbook dependencies...
Fetching 'couchbase' from source at .
Fetching 'couchbase-tests' from source at test/integration/cookbooks/couchbase-tests
Using apt (2.7.0)
Using chef-sugar (3.1.0)
Using chef_handler (1.1.6)
Using couchbase (1.3.0) from source at .
Using couchbase-tests (0.1.0) from source at test/integration/cookbooks/couchbase-tests
Using export-node (1.0.1)
Using minitest-handler (1.3.2)
Using openssl (4.0.0)
Using windows (1.36.6)
Using yum (3.6.0)
Vendoring apt (2.7.0) to /tmp/couchbase-vendor/apt
Vendoring chef-sugar (3.1.0) to /tmp/couchbase-vendor/chef-sugar
Vendoring chef_handler (1.1.6) to /tmp/couchbase-vendor/chef_handler
Vendoring couchbase (1.3.0) to /tmp/couchbase-vendor/couchbase
Vendoring couchbase-tests (0.1.0) to /tmp/couchbase-vendor/couchbase-tests
Vendoring export-node (1.0.1) to /tmp/couchbase-vendor/export-node
Vendoring minitest-handler (1.3.2) to /tmp/couchbase-vendor/minitest-handler
Vendoring openssl (4.0.0) to /tmp/couchbase-vendor/openssl
Vendoring windows (1.36.6) to /tmp/couchbase-vendor/windows
Vendoring yum (3.6.0) to /tmp/couchbase-vendor/yum

Now we will upload to our server. Mine is my personal, free hosted chef server:

vagrant /chef-repo/cookbooks/couchbase $ knife cookbook upload -a -o /tmp/couchbase-vendor
Uploading apt            [2.7.0]
Uploading chef-sugar     [3.1.0]
Uploading chef_handler   [1.1.6]
Uploading couchbase      [1.3.0]
Uploading couchbase-tests [0.1.0]
Uploading export-node    [1.0.1]
Uploading minitest-handler [1.3.2]
Uploading openssl        [4.0.0]
Uploading windows        [1.36.6]
Uploading yum            [3.6.0]
Uploaded all cookbooks.

Rinse and repeat

Now I am free to destroy this box, and my synced repo folder will remain intact. We do this all the time. Boxes get stale as other team members rev shared internal gems, or you are experimenting with alternate configurations and want to just start over.

There is something to be said for the freedom that comes with knowing you can walk on a tight rope since there is a net just beneath you to catch your fall (except when there's not).

Multi node Test-Kitchen tests and working with Vagrant NAT addressing with VirtualBox by Matt Wrock

This is a gnat. Let the observer note that it is different from NAT


This is now the third post I have written on creating multi node test-kitchen tests. The first covered a windows scenario building an active directory controller pair and the last one covered a multi docker container couchbase cluster using the kitchen-docker driver. This will likely be the last post in this series and will cover perhaps the most commonly used test-kitchen setup using the kitchen-vagrant driver with virtualbox.

In one sense there is nothing particularly special about this setup and all of the same tests demonstrated in the first two posts can be run through vagrant on virtualbox. In fact that is exactly what the first post used for the active directory controllers although it also supports Hyper-V. However there is an interesting problem with this setup that the first post was lucky enough to avoid. If you were to switch from the docker driver to the vagrant driver in the second post that built a couchbase cluster in docker containers, you may have noticed a recipe at the top of the runlist in each suite: couchbase-tests::ipaddress.

Getting nodes to talk to one another

We will soon get to the purpose behind that recipe, but first I'll lay out the problem. When you use the kitchen-vagrant driver to build test instances with the virtualbox provider without configuring any networking properties on the driver, your instance will have a single interface with an ip address of It's going to be difficult to do any kind of multi node testing with this. For one thing, if you have two nodes, they will not be able to talk to each other over these interfaces. From the outside, the host can talk to these nodes using its own localhost address and the forwarded ports. But to another node? They are dead to each other.

The trick to get them to be accessible to one another is to add an additional network interface to the nodes by adding a network in the kitchen.yml:

driver:
  network:
    - ["private_network", { type: "dhcp" }]

Now the nodes will have a dhcp assigned ip address that both the host and each node can use to access the other.

One could then use my kitchen-nodes provisioner, which derives from the chef-zero provisioner, so that normal chef searches can find the other nodes and access their ip addresses.

Just pull down the gem:

gem install kitchen-nodes

Add it to your .kitchen.yml:

provisioner:
  name: nodes

Now chef searches from inside your nodes can find one another, as long as both nodes have been created. (The ?? in the query below is a pair of single-character wildcards standing in for the :: in the recipe name, which the search syntax treats specially.)

other_ip = search(:node, "run_list:*couchbase??master*")[0]['ipaddress']

Missing the externally reachable ip address in ohai

While this allows the nodes to see one another, one may be surprised when inspecting a node's own ip address from inside that node.

ip = node['ipaddress'] # will be

The ip will be the localhost ip and not the same ip address that the other node will see. This may be fine in many scenarios, but for others you may need to know the externally reachable ip. You would like the ohai attributes to expose the second NIC's address and not the one belonging to the localhost interface.
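To picture the data involved: ohai exposes each interface's addresses as a nested hash, and the address we want lives on the second NIC. A trimmed sketch with illustrative values:

```ruby
# trimmed-down shape of ohai's network data on such a box (illustrative values)
interfaces = {
  "lo"   => { "addresses" => { ""    => { "family" => "inet" } } },
  "eth0" => { "addresses" => { ""    => { "family" => "inet" } } },  # NAT: what node['ipaddress'] reports
  "eth1" => { "addresses" => { "" => { "family" => "inet" } } },  # the private_network NIC
}

# the externally reachable address is eth1's inet entry
ip, _params = interfaces["eth1"]["addresses"].find { |_a, p| p["family"] == "inet" }
puts ip  # →
```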

This was the wall I hit in my kitchen-docker post: I was registering couchbase nodes in a couchbase cluster, the cluster could not add multiple nodes with the same ip (, and each node needed to register an ip that the master node could use to reach it.

I googled for a while to see how others dealt with this scenario. I did not find much, but what I did find were posts explaining how to create a custom ohai plugin to expose the right ip. Most posts contained just a fraction of the information needed to create the plugin, and once I did manage to assemble it all, it honestly felt like quite a bit of ceremony for such a simple assignment.

Overwriting the 'automatic' attribute

So I thought that instead of using an ohai plugin I'd find the second interface's address in the ohai attributes and then set the ['automatic']['ipaddress'] attribute to that reachable ip. This seemed to work just fine, and as long as it's done at the front of the chef run, any subsequent call to node['ipaddress'] in the run returns the desired address.

Here is the full recipe that sets the correct ipaddress attribute:

kernel = node['kernel']
auto_node = node.automatic

# linux
if node["virtualization"] && node["virtualization"]["system"] == "vbox"
  interface = node["network"]["interfaces"].select do |key|
    key == "eth1"
  end
  unless interface.empty?
    interface["eth1"]["addresses"].each do |ip, params|
      if params['family'] == 'inet'
        auto_node["ipaddress"] = ip
      end
    end
  end
# windows
elsif kernel && kernel["cs_info"] && kernel["cs_info"]["model"] == "VirtualBox"
  interfaces = node["network"]["interfaces"]
  interface_key = interfaces.keys.last
  auto_node["ipaddress"] = interfaces[interface_key]["configuration"]["ip_address"][0]
end

First, this is not a lot of code and it's confined to a single file. It seems a lot simpler than the wire-up required for an ohai plugin.

This is designed to work on both windows and linux. I have only used it on windows 2012R2 and ubuntu, but it's likely fairly universal. The code only sets the ip if virtualbox is the host hypervisor, so you can safely include the recipe in multi-driver suites; non-virtualbox environments will simply ignore it.

Get it in the hurry-up-and-test-cookbook

In case others would like to use this, I have included the recipe in a new cookbook I am using to collect recipes helpful in testing cookbooks. The cookbook is called hurry-up-and-test and is available on the chef supermarket. It also includes the export-node recipe I have shown in a couple posts, which allows one to access a node's attributes from inside test code.
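Pulling it into a cookbook's test setup is just a Berksfile entry away (assuming the supermarket name above):

```ruby
# Berksfile
source "https://supermarket.chef.io"

cookbook "hurry-up-and-test"
```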

I hope others find this useful and I'd love to hear if anyone thinks there is a reason to package this as a full blown ohai plugin instead.

Now what are you waiting for? Hurry up and test!!

Lamentations of the OSS consumer: I'd like to read the $@#%ing manual but no one has written it by Matt Wrock

I've been an active open source consumer and contributor for the past few years, and being involved in both roles has been one of the peak experiences of my career; I only wish I had discovered open source much, much sooner. However, it's not all roses, and things can be rough on both sides of the pull request, especially for those new to these ecosystems, and even more so if you come from a heritage of tools and culture not originally friendly to open source.

Yesterday I received a great email from someone asking how to navigate a very popular infrastructure testing project where the documentation can be sparse in some respects. The project is ServerSpec, a really fantastic tool that many gain value from every day. The question came from someone new to the chef community, and ServerSpec is a key tool in the chef ecosystem. The questioner immediately won my respect: they were curious but not bitter (or at least did not admit to being so) and wanted to know how to learn more, start to contribute, and get to a point of writing more creative tests.

This inspired me because I love interacting with people who are passionate about this craft and who, like myself, want to learn and improve. It also struck a nerve, since I have a lot of opinions about approaching OSS projects and empathy for those new to the playing field who are perhaps feeling a bit awkward. This individual, like myself, comes from a windows background, so I think I have some insight into where he is coming from.

I thought it might be interesting to transform my responses to a blog post. Here are some modified excerpts from my replies.

When Windows is the edge case

I think one issue windows suffers from in this ecosystem is that it is the "edge case". The vast majority of those using, testing and contributing to this project are linux users. So when a minor version bump occurs, the PR notes explicitly call out that it won't break current tests, and yet things break on windows, that's clear evidence that windows was not tested. One can argue that windows is just not much of a player here (but that's of course changing).

I'd look at this differently if this were code in a chef-owned project, where I would expect them to be paying more attention to windows. Regarding ServerSpec, a wholly open source project with no funding, shepherded by a community member whose full time job is not ServerSpec, I tend to be more forgiving, but it can definitely make for a frustrating development experience at times.

I'm really hoping that more windows folks get involved and contribute more in this ecosystem, both with code and documentation and also just by filing issues, and I hope that their employers support them in these efforts. They stand to gain a lot in doing so.

There may be no manual but there are always the codes

One thing I have found in the ruby world, and much of the OSS world outside of ruby, is that sometimes the best way to figure something out is to read the source code. The obvious downside is that it's hard to read a language we may not be familiar with when we just want to write our test and move on with our lives.

So I find myself going back and forth. Some days I do some quick "code spelunking," don't find anything that clearly points out how to do what I want, and take an uglier "brute force" approach. On other days, depending on mood and the barometric pressure in the office, I might be inclined to spend the time and dig deeper. It would be awesome if the authors took the time to spell out how to write custom matchers and resource types, but many OSS projects lack this level of detailed documentation. I'm guessing it's because no one is paying them to write it, and in the end they, like us, have a problem that needs solving and little time to document.

One consolation is that these ruby libraries tend to be relatively small. Compare the ServerSpec code base, including its sister project Specinfra, to something like XUnit in C#: it's a lot less code. Of course, it may take 3x longer to grok if you are a ruby beginner. What I often find is that, given the motivation to learn and be more proficient, you eventually reach a point of minimal comfort with the codebase where you get it enough to see what needs to be added to make the thing do what you want, and that's when you start making contributions.

Heh. I totally have a weird love/hate relationship with this stuff. There are days when I curse these libraries because I just want to do something that seems so simple and I have no desire or time to make an investment, and there are other times when I am totally into the code, loving the sense that I am gaining an understanding of new patterns and coding constructs, and realizing I can make the code better not only for myself but for others as well.

In the end it's all just a constant slog through the marshes of learning, and as software engineers, that's our sweet spot: the ability to live in a state of learning, not so much to bask in what we have learned.

Multi node testing with Test-Kitchen and Docker containers by Matt Wrock

Two docker containers created and tested with Kitchen-Docker


My last post provided a walk through of some of the new Windows functionality available in the latest Test-Kitchen RC and demonstrated those features by creating and testing a Windows Active Directory domain controller pair. This post will also be looking at testing multiple nodes but instead of windows, I'll be spinning up multiple docker containers. I'm going to be using a Couchbase cluster as my example. Note that while I am using docker containers, there is nothing special happening here preventing one from running the same tests on multiple linux or windows VMs using the Kitchen-Vagrant driver. Couchbase runs on windows too.

Why run tests with containers when my production nodes are VMs?

There are some really interesting things being done with containers in production environments, but even if you are not using containers in production, there are clear benefits to using them for testing infrastructure development. The biggest value is faster provisioning. Using the kitchen-docker driver over vagrant or another cloud based driver can potentially save several minutes per test. You might wonder, "what's a couple minutes?" But when you are iterating over a problem and need to reprovision several times, a couple minutes or more can add up quickly.

You still want to test provisioning to VMs if that is what your production infrastructure runs, but that can sit later in your testing pipeline. You will save a lot of time, money and tears (you'll need those later) by keeping your feedback cycles short early in your development process.

Setting things up

To get started you will need to have the docker engine installed and the latest RC of test-kitchen.

Docker Install

There are a few approaches one can take to installing docker. Some are more complicated than others, and much depends on your host operating system. I'm using an Ubuntu 14.04 desktop OS on my laptop. Ubuntu 14.04 has no prerequisites, and you simply run:

wget -qO- https://get.docker.com/ | sh

Ubuntu 12.04 requires a kernel upgrade and several packages before the above install will work. The docker installation documentation provides instructions for most operating systems. If you are running windows or a mac, you will want to run the docker engine from inside a linux vm. You can either set up a vm of your favorite linux distro and then install docker following the instructions on the docker site, or you can install Boot2Docker, which will install a local docker CLI, VirtualBox, and a stripped down, tiny core linux image.

This post does not aim to explore the different ways of installing docker. If you do not already have docker, or a vm from which you can install it friction-free, take a look at my chef_workstation repo, which includes a Vagrantfile that provisions a workable chef-enabled workstation environment with docker installed. It should work with VirtualBox, Hyper-V, or Parallels on a mac. I believe it also works for VMWare Fusion users, but I have not validated that for a while.

A multi-node enabled cookbook to test

To demonstrate multi node testing with test-kitchen, I have forked the community couchbase cookbook. I'll be sending a PR with these changes:

  • Compatibility with docker (the current version uses netstat to validate a listening port, and that's not installed on the default ubuntu container)
  • Extends the couchbase-cluster resource to allow other nodes to be joined to a cluster
  • Fixes the cookbook on windows which is unrelated to this post but aligns well with one of my personal missions in life

Clone my fork and checkout the multi-node branch:

git clone -b multi-node https://github.com/mwrock/couchbase

If you are using the vagrant box in my chef_workstation repo, cd to the cookbooks directory just below the directory you land in from vagrant ssh and clone from there.

Using the right gems

To help facilitate testing multiple nodes, this cookbook uses a custom test-kitchen provisioner plugin that builds on functionality exposed in the latest test-kitchen RC. The cookbook therefore includes a Gemfile that references both of these gems along with other important dependencies. To ensure that you are testing with all of the correct gems, cd into the root of the couchbase cookbook and run:
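The Gemfile boils down to something like this; no version pins are shown here, since the cookbook's own Gemfile pins what it actually needs:

```ruby
# Gemfile (sketch)
source "https://rubygems.org"

gem "test-kitchen"     # the RC with the multi-node friendly features
gem "kitchen-docker"   # docker driver
gem "kitchen-nodes"    # provisioner that registers mock chef nodes
gem "berkshelf"
gem "serverspec"
```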

bundle install

Converge and test the first node

We are now ready to create, converge and test the first node of our couchbase cluster. Make sure to run with bundle exec so that we use all of the correct gem versions:

bundle exec kitchen verify server-community-ubuntu

This will start a new container running ubuntu 12.04, install Couchbase and initialize a new cluster. Then a serverspec test will ensure that the service is running and configured the way we want it.

Joining an additional node to the cluster

To get the full multi-node effect, let's now ask test-kitchen to run our second-node suite:

bundle exec kitchen converge second-node-ubuntu

This brings up a new container that will post to the couchbase rest endpoint of our first node, asking to join the cluster. Then its serverspec test pulls the list of cluster nodes exposed by the original node and checks that our second node is included in the list.

Discovering the original node

One possible strategy could be to set an attribute specifying the ip or host name of the initiating couchbase node. However, this assumes it is a known and constant value. You may prefer your infrastructure to dynamically query for an existing couchbase node. In our test scenario, we really can't predict the ip or host name, since we are getting ips from dhcp and docker is handing out a unique hash for a host name.

Note that we could tweak the driver configuration in our .kitchen.yml to expose predictable hostnames that can link to other containers. Here is an example of a possible config for our node suites:

suites:
  - name: server-community
    driver:
      publish_all: true
      instance_name: first_cluster
    run_list:
      - recipe[couchbase::server]
    attributes:
      couchbase:
        server:
          password: "whatever"

  - name: second-node
    driver:
      links: "first_cluster:first_cluster"
    run_list:
      - recipe[couchbase-tests::default]
    attributes:
      couchbase:
        server:
          password: "whatever"
      cluster_to_join: first_cluster

Here the first node uses the kitchen-docker configuration to ask the docker engine to expose its container with a specific name, "first_cluster." The second node is asked to link the name "first_cluster" to the "first_cluster" instance. This way, any requests from the second container to the dns name first_cluster will resolve to our first container. Finally, we would create a node attribute named cluster_to_join that tells our second node which cluster to join.

This may work for your scenario, and that's great. However, it may break down for others. First, it's not very portable. This cookbook supports windows, and locking in docker specific options will run into problems for windows tests that leverage vagrant here:

- name: windows-2012R2
  driver:
    name: vagrant
    network:
      - ["private_network", { type: "dhcp" }]
    gui: true
    box: mwrock/Windows2012R2Full
    customize:
      memory: 1024
  transport:
    name: winrm

Furthermore, our test logic needs to match production logic. If production nodes will be querying the chef server for a node to send cluster join requests to, our tests must validate that this strategy works.

The kitchen-nodes provisioner plugin

In my last post I demonstrated a strategy that uses chef search to find a chef node based on a run list recipe. It used my kitchen-nodes provisioner plugin to create mock chef nodes of each kitchen suite so that a chef search can find other suite test instances during converges. Since that example was creating a windows active directory controller pair, the plugin was windows specific. I have since extended it to support most *Nix scenarios, including docker.

First we tell test-kitchen to use the kitchen-nodes plugin as a provisioner for the suites that test our couchbase servers:

- name: server-community
  provisioner:
    name: nodes
  run_list:
  - recipe[couchbase-tests::ipaddress]
  - recipe[couchbase::server]
  - recipe[export-node]
  attributes:
    couchbase:
      server:
        password: "whatever"

- name: second-node
  provisioner:
    name: nodes
  run_list:
  - recipe[couchbase-tests::ipaddress]
  - recipe[couchbase-tests::default]
  - recipe[export-node]
  attributes:
    couchbase:
      server:
        password: "whatever"

The default recipe of the couchbase-tests cookbook used by our second node can now find the first node using chef search:

primary = search_for_nodes("run_list:*couchbase??server* AND platform:#{node['platform']}")
node.normal["couchbase-tests"]["primary_ip"] = primary[0]['ipaddress']

The search_for_nodes method is defined in our couchbase-tests library:

require 'timeout'

def search_for_nodes(query, timeout = 120)
  nodes = []
  Timeout::timeout(timeout) do
    nodes = search(:node, query)
    until nodes.count > 0 && nodes[0].has_key?('ipaddress')
      sleep 5
      nodes = search(:node, query)
    end
  end

  if nodes.count == 0 || !nodes[0].has_key?('ipaddress')
    raise "Unable to find nodes!"
  end

  nodes
end

Here we are using a chef search to find a node that includes the couchbase server recipe and has the same OS platform as the current node. Matching on platform is important if your .kitchen.yml is designed to test more than one platform, as ours is.

Chef-zero and chef search

The kitchen-nodes plugin derives from the chef-zero test-kitchen provisioner. Using chef-zero we can issue a chef search for nodes without being hooked up to a real chef server. Chef-zero accomplishes this by storing information on each node in a json file in its nodes folder. The test-kitchen chef-zero provisioner wires all of this up by copying all files under test/integration/nodes to {test-kitchen temp folder on test instance}/nodes. So you can create a json file for each test suite in your local nodes folder, and chef search calls will effectively treat those node files as the chef server database.
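To make those mechanics concrete, here is a small plain-Ruby sketch of the idea (an illustration only, not chef-zero's actual implementation): a folder of node json files can back a run_list search with nothing more than a glob-style filter over the parsed records.

```ruby
require 'json'

# Treat every *.json file in nodes_dir as one node record and return the
# records whose run_list contains an item matching the glob pattern.
def local_node_search(nodes_dir, run_list_pattern)
  Dir[File.join(nodes_dir, "*.json")]
    .map { |file| JSON.parse(IO.read(file)) }
    .select do |node|
      node.fetch("run_list", []).any? do |item|
        File.fnmatch(run_list_pattern, item)
      end
    end
end
```

A call like `local_node_search("test/integration/nodes", "*couchbase*server*")` would then play the same role as the chef search in the recipe shown earlier.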

The kitchen-nodes plugin automatically generates a node file when a test instance is provisioned by test-kitchen. Provisioning occurs at the very beginning of the converge operation. kitchen-nodes populates the node's json file with ip address, platform, and run list. Here are the two nodes' json files generated in my tests:

{
  "id": "server-community-ubuntu-1204",
  "automatic": {
    "ipaddress": "",
    "platform": "ubuntu"
  },
  "run_list": [
    "recipe[couchbase-tests::ipaddress]",
    "recipe[couchbase::server]",
    "recipe[export-node]"
  ]
}

{
  "id": "second-node-ubuntu-1204",
  "automatic": {
    "ipaddress": "",
    "platform": "ubuntu"
  },
  "run_list": [
    "recipe[couchbase-tests::ipaddress]",
    "recipe[couchbase-tests::default]",
    "recipe[export-node]"
  ]
}

During provisioning, kitchen-nodes will use either SSH or WinRM, depending on the test instance platform, to interrogate the instance's interfaces for an IP that is accessible to the host. On windows this information is retrieved using a few powershell cmdlets, and on *Nix instances either ifconfig or ip addr show is used depending on what is available on that distro. There may be several interfaces, but kitchen-nodes will only choose an IPv4 address that can be pinged from the host.
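The *Nix flavor of that interface probing can be sketched in a few lines of Ruby. This is only an illustration of the approach, not the plugin's code; it assumes raw `ip addr show` output and takes the reachability check as a block so the real ping is pluggable:

```ruby
# Pull candidate IPv4 addresses out of `ip addr show` output, skip
# loopback, and return the first one the supplied block deems reachable.
def reachable_ipv4(ip_addr_output, &pingable)
  # Without a block, fall back to shelling out to ping on the host.
  pingable ||= ->(ip) { system("ping -c 1 -W 1 #{ip} > /dev/null 2>&1") }
  ip_addr_output.scan(/inet (\d+\.\d+\.\d+\.\d+)/)
                .flatten
                .reject { |ip| ip.start_with?("127.") }
                .find { |ip| pingable.call(ip) }
end
```

Tests can inject a fake reachability check instead of pinging, which is also handy when the host firewall blocks ICMP.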

Testing that we joined the correct cluster

So how do we test that we actually found the correct node? We can't write a serverspec test using a hard-coded IP. We use a testing recipe, export-node, that dumps the entire node object to a json file. Our test recipe run by the second node stores the primary node's IP in a node attribute, as we saw further above.

Here is an instant replay:

node.normal["couchbase-tests"]["primary_ip"] = primary[0]['ipaddress']

So when the export-node cookbook dumps the node data, that IP address will be included. Here is the test that validates the node join:

require 'json'
require 'net/http'

describe "cluster" do
  let(:node) { JSON.parse(IO.read(File.join(ENV["TEMP"] || "/tmp", "kitchen/chef_node.json"))) }
  let(:response) do
    resp = Net::HTTP.start node["normal"]["couchbase-tests"]["primary_ip"], 8091 do |http|
      request = Net::HTTP::Get.new "/pools/default"
      request.basic_auth "Administrator", "whatever"
      http.request request
    end
    JSON.parse(resp.body)
  end

  it "has found the primary node and it is not itself" do
    expect(node["normal"]["couchbase-tests"]["primary_ip"]).not_to eq(node['automatic']['ipaddress'])
  end

  it "has joined the primary cluster" do
    joined = false
    response['nodes'].each do |cluster_node|
      if cluster_node['hostname'] == "#{node['automatic']['ipaddress']}:8091"
        joined = true
      end
    end
    expect(joined).to be true
  end
end

The export-node cookbook dumps the node json to a file named chef_node.json in the kitchen temp folder, and our test pulls the IP returned by the chef search from there. It makes sure that this is in fact a different IP from its own and then issues a couchbase API request to that node to return all nodes in its cluster. Our test passes as long as the second node is included in the returned node list.
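Stripped of the cookbook machinery, the export-node trick is just a JSON round trip: the recipe serializes the node object to a file, and the test deserializes it. As a plain-Ruby sketch (hypothetical helper names, not the actual recipe or test code):

```ruby
require 'json'

# Write side: roughly what the export-node recipe does at converge time.
def dump_node(node, path)
  File.write(path, JSON.pretty_generate(node))
end

# Read side: roughly what the serverspec test's `let(:node)` does.
def load_node(path)
  JSON.parse(IO.read(path))
end
```

Anything the converge stores in a node attribute, such as the discovered primary IP, survives the round trip and becomes assertable in the test.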

Testing all the things

I find it helpful and reassuring to be able to include node interactions in my tests. Test-Kitchen's coverage can indeed extend well beyond the boundaries of a single node.

Orchestrating multi node Windows tests in Test-Kitchen Beta! by Matt Wrock

Primary and backup active directory controllers in their own kitchen.local domain


This week marks an important milestone in the effort to bring Test-Kitchen to windows. All the work that has gone into this over the past nine months has been merged into the master branch, and prerelease gems have been cut for both the test-kitchen repo and kitchen-vagrant. So this post serves as another update on getting started with these new bits on windows (there are some changes) and expands on some of my previous posts by illustrating a multi node converge and test. The same technique can be applied to linux boxes as well, but I'm going to demonstrate this with a cookbook that builds a windows primary and backup active directory domain controller pair.


In order to make the cookbook tests in this post work, the following needs to be preinstalled:

  • Vagrant 1.6 or later; I strongly recommend an even more recent release to pick up various bug fixes and enhancements around windows.
  • Either VirtualBox or Hyper-V hypervisor
  • git
  • A ruby environment with bundler. If you don't have this, I strongly suggest installing the chefdk.
  • Enough local compute resources to run 2 windows VMs. I use a first generation lenovo X1 with an i7 processor, 8GB of ram, an SSD and build 10041 of the windows 10 technical preview. I have also run this on a second generation X1 running Ubuntu 14.04 with the same specs.

The great news is that now your host can be linux, mac, or windows.


Install the vagrant-winrm plugin. Assuming vagrant is installed, this will download and install vagrant-winrm:

vagrant plugin install vagrant-winrm

Clone my windows-ad-pair cookbook:

git clone https://github.com/mwrock/windows-ad-pair.git

At the root of the repo, bundle install the necessary gems:

bundle install

This will grab the necessary prerelease test-kitchen and kitchen-vagrant gems along with their dependencies. It will also grab kitchen-nodes, a kitchen provisioner plugin I will explain later.

Using Hyper-V

I use Hyper-V when testing on my windows laptop. Vagrant will use VirtualBox by default but can use many other virtualization providers. To force it to use hyper-v you can:

  • Add the provider option to your .kitchen.yml:
    driver:
      box: mwrock/Windows2012R2Full
      communicator: winrm
      vm_hostname: false
      provider: hyperv
  • Add an environment variable, VAGRANT_DEFAULT_PROVIDER, and assign it the value "hyperv".

I prefer the latter option since, given a particular machine, I will always want to use the same provider, and I want to keep my .kitchen.yml portable so I can share it with others regardless of their hypervisor preferences.

A possible vagrant/hyper-v bug?

I've been seeing intermittent crashes in powershell during machine create on the box used in this post. I had to create new boxes for this cookbook. One reason is that using the same image for multiple nodes in the same security domain required the box to be sysprepped to clean all SIDs (security identifiers) from the base images. This means that when vagrant creates the machine, there is at least one extra reboot involved and I think this may be confusing the hyperv plugin.

I have not dug into this but I have found that immediately reconverging after this crash consistently succeeds.

Converge and verify

Due to the sensitive nature of standing up an Active Directory controller pair (ya know...reboots), rather than calling kitchen directly, we are going to orchestrate with rake. We'll dive deeper into this later but to kick things off run:

bundle exec rake windows_ad_pair:integration

Now go grab some coffee. 


No, no, no. I meant get in your car and drive to another town for coffee and then come back.

What just happened?

Test kitchen created a primary AD controller, rebooted it and then spun up a replica controller. All of this uses the winrm protocol that is native to windows so no SSH services needed to be downloaded and installed. This pair now manages a kitchen.local domain that you could actually join additional nodes to if you so choose.

Where are these windows boxes coming from?

These are evaluation copies of windows 2012R2. They will expire a little under six months from the date of this post. They are not small and weigh in at about 5GB. I typically use smaller boxes where I strip away unused windows features, but I needed several features to remain for this cookbook and it was easiest to just package new boxes without any features removed. I keep the boxes accessible on Hashicorp's Atlas site but the bytes live in Azure blob storage.

Is there an SSH server running on these instances?

No. Thanks to Salim Afiune's work, there is a new abstraction in the Test-Kitchen model: a Transport. The transport governs communication between test-kitchen on the host and the test instance. The methods defined by the transport handle authentication, transferring files, and executing commands. For those familiar with the vagrant model, the transport is the moral equivalent of the vagrant communicator. Test-kitchen 1.4 includes built-in transports for winrm and ssh. I could imagine other transports, such as a vmware vmomi transport that would leverage the vmware client tools on a guest.
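The shape of that abstraction can be illustrated with a toy transport. All names below are invented for this sketch and are not the real Test-Kitchen API; the point is that callers only ever see "run a command, copy a file" and never the underlying protocol:

```ruby
# A toy stand-in for a transport: it hands back a connection object that
# records what it was asked to do. A real transport would open a winrm
# or ssh session instead of appending to a log.
class RecordingTransport
  class Connection
    attr_reader :log

    def initialize
      @log = []
    end

    def execute(command)
      @log << "run: #{command}" # real transports execute over winrm/ssh
    end

    def upload(local, remote)
      @log << "copy: #{local} -> #{remote}" # real transports stream files
    end
  end

  def connection(_state)
    Connection.new
  end
end
```

Swapping winrm for ssh then means swapping the transport, with no changes to the provisioner or verifier code that drives the connection.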

How does the backup controller locate the primary controller?

One of the challenges of performing multi node tests with test-kitchen, be they windows or not, is orchestrating node communication without hard coding endpoint URIs into your cookbooks or having to populate attributes with these endpoints. Ideally you want nodes to discover one another based on some metadata. At CenturyLink we use chef search to find nodes operating under a specific runlist or role. In this cookbook, the backup controller issues a chef search for a node with the primary recipe in its runlist and then grabs its IP address from the ohai data.

primary = search_for_nodes("run_list:*windows_ad_pair??primary*")
primary_ip = primary[0]['ipaddress']

The key here is the search_for_nodes method found in this cookbook's library helper:

require 'timeout'

def search_for_nodes(query, timeout = 120)
  nodes = []
  Timeout::timeout(timeout) do
    nodes = search(:node, query)
    until nodes.count > 0 && nodes[0].has_key?('ipaddress')
      sleep 5
      nodes = search(:node, query)
    end
  end

  if nodes.count == 0 || !nodes[0].has_key?('ipaddress')
    raise "Unable to find any nodes meeting the search criteria '#{query}'!"
  end

  nodes
end

Does Test-Kitchen host a chef-server?

Mmmmm....kind of. Test-kitchen supports both chef solo and chef zero provisioners. Chef zero supports a solo-like workflow, allowing one to converge a node locally with no real chef server, and it also supports search functionality. This is facilitated by adding json files to a nodes directory underneath the test/integration folder.

The node files are named using the same suite and platform combination as the kitchen test instances. The contents of the node look like:

{
  "id": "backup-windows-2012R2",
  "automatic": {
    "ipaddress": ""
  },
  "run_list": [
    "recipe[windows_ad_pair::backup]"
  ]
}

You can certainly create these files manually but there is no guarantee that the ip address will always be the same especially if others use this same cookbook. Wouldn't it be nice if you could dynamically create and save this data at node provisioning time? I think so.

We use this technique at CenturyLink and have wired it into some of our internal drivers. I've been working to improve on this, making it more generalized and extracting it into its own dedicated kitchen provisioner plugin, kitchen-nodes. It's included in this cookbook's Gemfile and is wired into test-kitchen in the .kitchen.yml:

provisioner:
  name: nodes

It's still a work in progress, and I came across a scenario in this cookbook where I had to add functionality to support vagrant/VirtualBox and temporarily make this plugin windows specific. I'll be changing that later. There is really not much code involved and here is the meat of it:

def create_node
  node_dir = File.join(config[:test_base_path], "nodes")
  Dir.mkdir(node_dir) unless Dir.exist?(node_dir)
  node_file = File.join(node_dir, "#{instance.name}.json")

  state = Kitchen::StateFile.new(config[:kitchen_root], instance.name).read
  ipaddress = get_reachable_guest_address(state) || state[:hostname]

  node = {
    :id => instance.name,
    :automatic => {
      :ipaddress => ipaddress
    },
    :run_list => config[:run_list]
  }

  File.open(node_file, 'w') do |out|
    out << JSON.pretty_generate(node)
  end
end

def get_reachable_guest_address(state)
  ips = <<-EOS
    Get-NetIPConfiguration | % { $_.ipv4address.IPAddress}
  EOS
  session = instance.transport.connection(state).node_session
  session.run_powershell_script(ips) do |address, _|
    address = address.chomp unless address.nil?
    next if address.nil? || address == ""
    return address if Net::Ping::External.new.ping(address)
  end
  nil
end

The class containing this derives from the ChefZero provisioner class. It reads the node's IP address from the state file that test-kitchen maintains and then ensures that this IP is reachable externally, otherwise trying an address from a different NIC on the node. It then adds that IP and the node's run list to the node json file.

Dealing with reboots

Standing up active directory controllers presents some awkward challenges to an automated workflow in that each must be rebooted before it can recognize and work with the domain it controls. Ideally we would simply be able to kick off a kitchen test which would converge each node and run its tests. However, if we did this here, the results would be disappointing unless you like failure - just don't count on failing fast. So we have to "orchestrate" this. A rather fancy term for the method I'm about to describe.

I'll warn this is crude and I'm sure there are better ways to do this but this is simple and it consistently works. It includes using a custom rake task to manage the test flow. It looks like this:

desc "run integration tests"
task :integration do
  system('kitchen destroy')
  system('kitchen converge primary-windows-2012R2')
  system("kitchen exec primary-windows-2012R2 -c 'Restart-Computer -Force'")
  system('kitchen converge backup-windows-2012R2')
  system("kitchen exec backup-windows-2012R2 -c 'Restart-Computer -Force'")
  system('kitchen verify primary-windows-2012R2')
  system('kitchen verify backup-windows-2012R2')
end

This creates and converges the primary node and then uses kitchen exec to reboot that instance. While it is rebooting, the backup instance is created and converged. Of course there is no guarantee that the primary node will be available by the time the backup node tries to join the domain, but I have never seen the backup node add itself to the domain before the primary node completed its restart. Remember that the backup node has to go through a complete create first, which takes minutes. The backup node then reboots after its converge, and while it is doing that, the primary node begins its verify process. The kitchen verify calls can be run independently and do not need both instances to be up.

If this were a production cookbook running on a build agent, I'd raise an error if any of the system calls failed (returned false) and I'd include an ensure clause at the end that destroyed both instances.
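Such a hardened variant might look something like this sketch (hypothetical helper names; it only shells out to kitchen when actually invoked):

```ruby
# Fail fast when a kitchen command returns false, and always clean up.
def run!(cmd)
  raise "command failed: #{cmd}" unless system(cmd)
end

# Hypothetical hardened flow for a build agent: any failed step raises,
# and the ensure clause destroys the instances no matter what happened.
def integration(primary, backup)
  run!("kitchen converge #{primary}")
  run!("kitchen exec #{primary} -c 'Restart-Computer -Force'")
  run!("kitchen converge #{backup}")
  run!("kitchen exec #{backup} -c 'Restart-Computer -Force'")
  run!("kitchen verify #{primary}")
  run!("kitchen verify #{backup}")
ensure
  system('kitchen destroy')
end
```

On a build agent this makes a red build out of any failed converge or verify instead of silently running the next step.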

Vagrant networking

With the vagrant virtualbox provider, vagrant creates a NAT based network and forwards host ports for the transport protocol. This is a great implementation for single node testing, but for multi node tests you may need a real IP, issued statically or via DHCP, that one node can use to talk to another. This is not always the case, but it is for an active directory pair, mainly because creating a domain is tied to DNS and simply forwarding ports won't suffice. In order for the backup node to "see" the domain created by the primary node ("kitchen.local" here), we assign the primary node's IP address as the primary DNS server of the backup node's NIC:

powershell_script 'setting primary dns server to primary ad' do
  code <<-EOS
    Get-NetIPConfiguration | ? {
    } | Set-DnsClientServerAddress -ServerAddresses '#{primary_ip}'
  EOS

  guard_interpreter :powershell_script
  not_if "(Get-DnsClientServerAddress -AddressFamily IPv4 | ? { $_.ServerAddresses -contains '#{primary_ip}' }).count -gt 0"
end

Simply forwarding ports would not work. We need a unique IP from which the domain's DNS records can be queried.

Most non-virtualbox providers will not need special care here and certainly not hyper-v. Hyper-V uses an external or private virtual switch which maps to a real network adapter on the host. For virtualbox, you can provide network configuration to tell vagrant to create an additional NIC on the guest:

driver:
  network:
    - ["private_network", { type: "dhcp" }]

This will simply be ignored by hyper-v.

Testing windows infrastructure - the future is now

I started using all of this last July and have been blogging about it since August. That first post was entitled Peering into the future of windows automation. Well, here we are with prerelease gems available for use. It's really been exciting to see this take shape, and working with Salim Afiune and Fletcher Nichols on this code base has been super helpful as I learn a new language, ruby. That learning process is far from over.

By the way, I'll be at ChefConf nearly all week next week carrying both of my laptops (linux and windows) with all of this code. If anyone wants a demo to see this in action or just wants to talk Test Driven Infrastructure shop, I'd love to chat!