Posts Tagged ‘devops’

Getting started with Chef (and Vagrant)

March 21, 2014

I’m a big fan of configuration management, and the whole “infrastructure as code” approach. Currently, I’m managing hundreds of machines with Puppet, but I decided to take a look at Chef…just because.

I find a lot of the “getting started with Chef” articles and blog posts dive too deep into tool magic before giving a real understanding of the basics. When I learn a new programming language, I start with “Hello, World,” and I want the same kind of thing for learning Chef. I just googled “Getting started with Chef,” and the first three results all say “now install knife and use this magic command to go get a bunch of cookbooks that other people have written.”

Maybe it’s just me, but I really don’t like typing “do this magic thing” and ending up with a ton of configuration files I don’t fully understand. That’s one of the reasons I don’t use IDEs when I develop software. I prefer to start from the bottom and work upwards, only adding a layer when I’m confident I understand the layers I’ve already built. That’s a slower approach, but we’re talking about the code that builds crucial pieces of my systems’ infrastructures, and there’s no way I’m just downloading a long, complex recipe and running it without fully understanding what it’s doing to my machine, and why (I have another rant about the complexity of most of the publicly available puppet/chef recipes you find on directories, but I’ll save that for another post).

From that point of view, the very best article I found about starting out with Chef was this one, from Jo Liss:

Chef Solo tutorial: Managing a single server with Chef

I also really like Vagrant as a way to iterate on puppet/chef recipes quickly, and create throwaway VMs to play around with stuff. So, I’ve created the simplest possible Chef project I could build, which also has a Vagrantfile and a bootstrap script. This can be used as the basis for any server which you plan to manage using Chef. It also installs ruby 2.0 as the system ruby.

Here’s how it works:

Clone the repo:

git clone myproject

Make it a new git project of your own:

cd myproject
rm -rf .git
git init

Edit the Vagrantfile and replace my SSH public key at the top with one of your own. This public key will be installed in your Vagrant VM so that you have passwordless root access, which lets the script do everything else without pausing to prompt you for a password.
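As a sketch, the key ends up wired in something like this (the constant name and the inline shell provisioner here are illustrative, not necessarily the repo’s exact code):

```ruby
# Illustrative Vagrantfile excerpt -- the constant name and inline
# provisioner are my own sketch of the idea, not the repo's exact code.
SSH_PUBLIC_KEY = "ssh-rsa AAAA...replace-with-your-own... you@yourmachine"

Vagrant.configure("2") do |config|
  config.vm.box = "precise64"  # Ubuntu 12.04

  # Install the key for root, so the later ssh steps don't prompt for a password
  config.vm.provision "shell", inline: <<-SCRIPT
    mkdir -p /root/.ssh
    echo "#{SSH_PUBLIC_KEY}" > /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
  SCRIPT
end
```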

Now for the fun part. This assumes you already have Vagrant, and that you already have the Ubuntu 12.04 “box.” If you don’t, there are instructions in the readme. It also assumes you’re running ruby 1.9 or greater. If you’re not, you’ll need to tweak the Vagrantfile to change “foo: bar” to “:foo => bar”:
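For clarity, those two hash spellings mean exactly the same thing; `foo: bar` is just the Ruby 1.9 shorthand for `:foo => bar`:

```ruby
# Ruby 1.9 introduced a shorthand for hashes with symbol keys.
new_style = { timeout: 0 }     # ruby 1.9+ only
old_style = { :timeout => 0 }  # works on 1.8 and 1.9

puts new_style == old_style  # prints "true"
```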


Here’s what that does:

  1. Start an Ubuntu 12.04 vagrant VM (creating it if it doesn’t already exist), with a fixed IP address
  2. Install your public key into the root account (/root/.ssh/authorized_keys)
  3. cd into the local “chef” directory and run, targeting the new VM
  4. This uploads the contents of the chef directory to the VM, unpacks it (removing any chef directory that was there before), and executes the “” script.
  5. Bootstraps the box to the point where it has ruby 2.0 and chef-solo, then runs the chef run list defined in server.json

This should take around 10 minutes, assuming a reasonably fast machine with a good internet connection.

I’ve left the run list almost empty. It just installs NTP and sets the server timezone to GMT, for the sake of having Chef do something.
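A recipe that does just that might look something like the following. This is a hypothetical sketch of a minimal Chef recipe (Chef resources are Ruby DSL), not necessarily the one in the repo:

```ruby
# Hypothetical minimal recipe: install NTP and set the timezone to GMT.
package "ntp"

service "ntp" do
  action [:enable, :start]
end

# On Ubuntu, the timezone lives in /etc/timezone; "Etc/UTC" is GMT
file "/etc/timezone" do
  content "Etc/UTC\n"
  notifies :run, "execute[reconfigure-tzdata]", :immediately
end

execute "reconfigure-tzdata" do
  command "dpkg-reconfigure -f noninteractive tzdata"
  action :nothing
end
```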

If you don’t want to use Vagrant, you can use exactly the same process with any Ubuntu 12.04 server to which you can ssh as root, either by providing a password or by using an ssh key. Just start at step 3.

Whenever you make changes to your chef recipes, just run from step 3 again. After the first setup is completed, any subsequent runs should be quite quick, depending on your chef changes, so you should be able to iterate very quickly.

Anytime you want to go back and start from scratch again, just run:

vagrant destroy

Being able to use the same code to deploy a local development VM or a live server is very handy (although I would never recommend leaving root access open on your live servers), and it’s a big help in getting to the point where you have a Walking Skeleton of the system you’re building.




Icinga REST ruby gem

October 8, 2011

I’ve just published a gem to simplify access to the Icinga REST API.

The Icinga REST API can be used to allow nodes in a multi-server system to get information about the overall state of the system from the monitoring server, without requiring them to have detailed information about the other nodes in the system. This can be quite handy.

For example, let’s say one server in a multi-server system wants to take itself out of the active server pool to carry out some long-running, processor-intensive task, and then put itself back into service once it has finished. This is fine, unless too many other, similar servers try to do the same thing at the same time. In that case, there might be too few active servers left to handle the real-time load on the system.

One option is for the server to say “I want to go out of service, but I’ll only do that if fewer than N of my siblings are currently out of service.” Assuming that our Icinga monitoring server knows about every node (which it should), then we could do something like this:

    #!/usr/bin/env ruby

    require 'rubygems'
    require 'icinga_rest'

    check = IcingaRest::ServiceCheck.new(
      :host    => '',
      :authkey => 'mysecretapikey',
      :filter  => [
        {:host_name    => 'web*'},
        {:service_name => 'In Service', :state => :critical}
      ]
    )

    puts check.count

Then, we can make a decision based on the value of check.count to see if this server is allowed to take a break.
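The decision itself could be as simple as the following (the threshold and method name here are mine, for illustration — they’re not part of the gem):

```ruby
# How many siblings may be out of service at once before we must stay in.
# The threshold is arbitrary, for illustration.
MAX_SIBLINGS_OUT = 2

def may_go_out_of_service?(out_of_service_count)
  out_of_service_count < MAX_SIBLINGS_OUT
end

# Fed with the value of check.count from the snippet above:
puts may_go_out_of_service?(1)  # prints "true"  -- safe to take a break
puts may_go_out_of_service?(2)  # prints "false" -- too many already out
```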

That’s about all that the gem can be used for right now. I might extend it as I think of more ways to use the monitoring server to coordinate the activities of the various servers in a system.

Updated: The code is up on Github, here.

Updated: Thanks to a tip from Erik Eide, the gem no longer has to shell out to wget to call the Icinga REST API. The Addressable gem can handle the malformed URLs that the API requires.

Install ruby1.9.2 from source using Puppet

October 6, 2011

I usually use Ubuntu 10.04 as my server platform. Now that I’m switching to ruby 1.9.2 in production, the utter crapness of the built-in Ubuntu packages has become unsupportable (ruby 1.9.1 doesn’t work with Bundler, for example).

So, I wanted a way to install ruby 1.9.2 using Puppet. This is what I came up with. The files fit together like this:

In my site.pp file, I’ve got this:

import "ruby192"
include ruby

The init.pp file just contains this;

import "*"

The real fun is in the ruby.pp file:

class ruby {

  exec { "apt-update":
    command => "/usr/bin/apt-get update",
  }

  # pre-requisites
  package { [ ... ]:
    ensure  => "installed",
    require => Exec["apt-update"],
  }

  # put the build script in /root
  file { "/root/":
    ensure => "present",
    source => "puppet:///modules/ruby192/",
    mode   => 755,
  }

  # run the build script
  exec { "build-ruby192":
    command => "/root/",
    cwd     => "/root",
    timeout => 0,
    creates => "/usr/bin/ruby",
    require => File["/root/"],
  }

  # update rubygems
  exec { "update-rubygems":
    command => "/usr/bin/gem update --system",
    unless  => "/usr/bin/gem -v |/bin/grep ^1.8",
    require => Exec["build-ruby192"],
  }
}

As you can see, it updates the apt cache, installs some pre-requisites and then runs a script to build ruby 1.9.2 from source. The “timeout => 0” line is important. Without it, puppet will not allow enough time for the build script to run to completion. Here’s the build script:

#!/bin/bash

wget "${RUBY_VERSION}.tar.gz"
tar xzf ${RUBY_VERSION}.tar.gz
cd ${RUBY_VERSION}
./configure --prefix=/usr && make && make install

That will install ruby 1.9.2 and rubygems, so all that remains for the ruby.pp module is to update rubygems to the latest version.