Moving to GitHub Pages

March 29, 2014

I’ve decided that WordPress.com isn’t the right place for this blog.

I’m a hacker, so I should blog like one.

So, I’m moving this blog here. I hope you’ll come visit.

Playing with logstash

March 29, 2014

I wanted to play with logstash, ideally using my current favourite tools of Vagrant and Chef.

I googled around, but the projects I found that use these tools were too complex for my taste, so I rolled my own.

First, I wanted to start from a very simple Vagrant + Chef Solo + Ubuntu 12.04 configuration. Here’s one I made earlier;


git clone https://github.com/digitalronin/chef_project_template.git logstash
cd logstash
rm -rf .git
./bootstrap_vagrantvm

This will take a few minutes.

Now we have a Vagrant VM, based on Ubuntu 12.04, with Ruby 2.0 as the system ruby, and a basic configuration using Chef Solo.

For more information, check out this post.

Now to add logstash.

We’re going to install logstash via the apt package manager, from the Elasticsearch package repository.

mkdir -p chef/cookbooks/logstash/recipes

vi chef/cookbooks/logstash/recipes/default.rb

Here’s the content we need;

execute "add-logstash-repo-key" do
  command "wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | apt-key add -"
  not_if "apt-key list | grep Elasticsearch"
end

execute "add-logstash-repo" do
  command "echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' >> /etc/apt/sources.list"
  not_if "grep packages.elasticsearch.org.logstash /etc/apt/sources.list"
end

execute "apt-get update"

package "logstash"

Now add “recipe[logstash]” to the runlist in chef/server.json;

{
  "run_list": [
    "recipe[ntp]",
    "recipe[logstash]"
  ]
}

…and apply our new configuration;


cd chef
./deploy.sh root@192.168.11.11

That’s it. We now have a Vagrant VM with logstash 1.4.0 installed.

Let’s see what files logstash created;

dpkg -L logstash

Among others, you’ll see this file;

/opt/logstash/bin/logstash

Let’s try it out;

root@myserver:~# /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Run that command, and then type something. You’ll have to wait a little to see the output, presumably because logstash is batching things up;

root@myserver:~# /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Hello, world
2014-03-29T13:42:38.437+0000 myserver Hello, world

Press Ctrl-D to exit.

Now let’s add elasticsearch. The installation recipe is very similar to that for logstash;

vi chef/cookbooks/elasticsearch/recipes/default.rb

execute "add-elasticsearch-repo-key" do
  command "wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | apt-key add -"
  not_if "apt-key list | grep Elasticsearch"
end

execute "add-elasticsearch-repo" do
  command "echo 'deb http://packages.elasticsearch.org/elasticsearch/0.90/debian stable main' >> /etc/apt/sources.list"
  not_if "grep packages.elasticsearch.org.elasticsearch /etc/apt/sources.list"
end

execute "apt-get update"

package "elasticsearch"

Don’t forget to add it to our server.json runlist;

{
  "run_list": [
    "recipe[ntp]",
    "recipe[logstash]",
    "recipe[elasticsearch]"
  ]
}

…and apply the new configuration;

cd chef
./deploy.sh root@192.168.11.11

Elasticsearch should now be running. You can confirm that by logging onto the VM via ssh and running this;

wget -O - 'http://localhost:9200/_search?pretty'

…which should produce output something like this;

{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}

You can also fire up a web browser on your host machine and visit http://192.168.11.11:9200/
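If you’d rather script the check than use a browser, here’s a minimal Ruby sketch. The live fetch is left as a comment (the IP comes from the Vagrantfile), and the sample body below just mirrors the response shown above:

```ruby
require 'json'

# Returns true if the _search response parsed and didn't time out.
def responding?(json_body)
  doc = JSON.parse(json_body)
  doc["timed_out"] == false && doc.key?("hits")
end

# To fetch a live response from the VM you could use:
#   require 'net/http'
#   body = Net::HTTP.get(URI("http://192.168.11.11:9200/_search?pretty"))
sample = '{"took":0,"timed_out":false,"hits":{"total":0,"max_score":0.0,"hits":[]}}'
puts responding?(sample)  # prints "true"
```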

That’s as far as I’m going to go in this blog post, mainly because I don’t know much about Logstash and Elasticsearch (yet). More information is available here;

http://logstash.net/docs/1.4.0/tutorials/getting-started-with-logstash

Just remember that our logstash executable is /opt/logstash/bin/logstash when you work through their examples.

All the code for this blog post is available on GitHub;

https://github.com/digitalronin/chef-logstash-elasticsearch

Getting started with Chef (and Vagrant)

March 21, 2014

I’m a big fan of configuration management, and the whole “infrastructure as code” approach. Currently, I’m managing hundreds of machines with Puppet, but I decided to take a look at Chef…just because.

I find a lot of the “getting started with Chef” articles and blog posts dive too deep into tool magic before giving a real understanding of the basics. When I learn a new programming language, I start with “Hello, World,” and I want the same kind of thing for learning Chef. I just googled “Getting started with Chef,” and the first three entries all have “now install knife and use this magic command to go get a bunch of cookbooks that other people have written.”

Maybe it’s just me, but I really don’t like typing “do this magic thing” and ending up with a ton of configuration files I don’t fully understand. That’s one of the reasons I don’t use IDEs when I develop software. I prefer to start from the bottom and work upwards, only adding a layer when I’m confident I understand the layers I’ve already built. That’s a slower approach, but we’re talking about the code that builds crucial pieces of my systems’ infrastructures, and there’s no way I’m just downloading a long, complex recipe and running it without fully understanding what it’s doing to my machine, and why (I have another rant about the complexity of most of the publicly available puppet/chef recipes you find on directories, but I’ll save that for another post).

From that point of view, the very best article I found about starting out with Chef was this one, from Jo Liss;

Chef Solo tutorial: Managing a single server with Chef

I also really like Vagrant as a way to iterate puppet/chef recipes quickly, and create throwaway VMs to play around with stuff. So, I’ve created the simplest possible Chef project I could build, which also has a Vagrantfile and a bootstrap script. This can be used as the basis for any server which you plan to manage using Chef. It also installs ruby2.0 as the system ruby.

Here’s how it works;

Clone the repo;

git clone https://github.com/digitalronin/chef_project_template.git myproject

Make it a new git project of your own;

cd myproject
rm -rf .git
git init

Edit the Vagrantfile and replace my SSH public key at the top with one of your own. This public key will be installed in your Vagrant VM so that you have passwordless root access, which lets the script do everything else without pausing to prompt you for a password.

Now for the fun part. This assumes you already have Vagrant, and that you already have the Ubuntu 12.04 “box.” If you don’t, there are instructions in the readme. It also assumes you’re running ruby 1.9 or greater. If you’re not, you’ll need to tweak the Vagrantfile to change “foo: bar” to “:foo => bar”;

./bootstrap_vagrantvm

Here’s what that does;

  1. Start an Ubuntu 12.04 vagrant VM (creating it, if it doesn’t already exist), with IP number 192.168.11.11
  2. Install your public key into the root account (/root/.ssh/authorized_keys)
  3. cd into the local “chef” directory and run deploy.sh, targeting the new VM
  4. The deploy.sh uploads the contents of the chef directory to the VM, unpacks it (removing any chef directory that was there before), and executes the “install.sh” script.
  5. install.sh bootstraps the box to the point where it has ruby2.0 and chef-solo, and then runs the chef run list defined in server.json
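Steps 3 and 4 boil down to a handful of shell commands. Here’s a hedged Ruby sketch of the same idea — the filenames and exact commands are my guesses, not the repo’s actual deploy.sh:

```ruby
# Build the command list that a deploy like steps 3-4 implies.
# NOTE: illustrative only - the real deploy.sh lives in the repo.
def deploy_commands(target)
  [
    "tar czf chef.tgz chef",                 # pack the local chef directory
    "scp chef.tgz #{target}:/root/",         # upload it to the VM
    # remove any previous chef directory, unpack, then bootstrap
    "ssh #{target} 'rm -rf chef && tar xzf chef.tgz && sh chef/install.sh'"
  ]
end

deploy_commands("root@192.168.11.11").each { |cmd| puts cmd }
```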

This should take around 10 minutes, assuming a reasonably fast machine with a good internet connection.

I’ve left the run list almost empty. It just installs NTP and sets the server timezone to GMT, for the sake of having Chef do something.
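For reference, a minimal recipe along those lines might look like this — a sketch only, not necessarily the repo’s actual NTP recipe:

```
# Sketch of an NTP + GMT timezone recipe (assumed, not the repo's exact code)
package "ntp"

service "ntp" do
  action [:enable, :start]
end

execute "set-timezone-gmt" do
  command "echo 'Etc/GMT' > /etc/timezone && dpkg-reconfigure -f noninteractive tzdata"
  not_if "grep -q 'Etc/GMT' /etc/timezone"
end
```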

If you don’t want to use Vagrant, you can use exactly the same process with any Ubuntu 12.04 server to which you can ssh as root, either by providing a password or by using an ssh key. Just start at step 3.

Whenever you make changes to your chef recipes, just run from step 3 again. After the first setup is completed, any subsequent runs should be quite quick, depending on your chef changes, so you should be able to iterate very quickly.

Anytime you want to go back and start from scratch again, just run;

vagrant destroy
./bootstrap_vagrantvm

Being able to use the same code to deploy a local development VM or a live server is very handy (although I would never recommend leaving root access open on your live servers), and it’s a big help in getting to the point where you have a Walking Skeleton of the system you’re building.
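One last footnote on the Ruby 1.9 hash syntax mentioned earlier: the two forms build identical hashes. A quick demonstration in plain Ruby (nothing to do with the repo):

```ruby
# Ruby 1.9+ shorthand for symbol keys...
new_style = { forwarded_port: 9200 }
# ...and the Ruby 1.8-compatible hash-rocket form
old_style = { :forwarded_port => 9200 }

puts new_style == old_style  # prints "true"
```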


Habit-forming

March 10, 2014

I’ve been thinking a lot about habits, recently. In particular, how to create good habits in myself. I remember reading somewhere that it takes two or three weeks of daily repetitions to form a new habit. However, it seems this might be an underestimate.

The advantage of a habit, or a good habit, anyway, is that it seems to require so much less effort to do whatever it is once it’s habitual. In many ways, not doing an habitual task seems more effortful than doing it. It becomes harder to break the habit than it is to perform the task. Cory Doctorow puts it well when he says that “Habits are things you get for free.”

Regardless of the exact number, it’s clear that it takes some time, and some number of regular repetitions before something can be considered truly habitual.

Being a geek, I wanted to keep a record so I could monitor my progress. For a while, I tried recording in a text file the days on which I had or hadn’t done something I wanted to turn into a habit – in this case, daily Spanish practice using duolingo.com. But then I ended up with either multiple lists for multiple daily tasks, or one list that was difficult to read and annoying and time-consuming to maintain.

So, I created a little web application to help me manage the tasks I wanted to do every day, and to keep track of the days when I had completed those tasks, or not.

They say that naming things is one of only two hard problems in computer science (the second being cache-expiry, and the third off-by-one errors), so I don’t feel too bad about the uninspired name I chose for it. If you’re interested, go take a look;

http://mydailytodolist.org

Now I get to tick off “Blog” on today’s list.

Let me know what you think in the comments.


It’s been a while

March 7, 2014

I can’t believe I haven’t posted anything on this blog since 2011.

Actually, I can, but it’s still pretty shocking.

I’m going to try to make an effort to start posting again, partially inspired by this post;

You Should Start a Blog Right Now

Will I manage to stick to it, this time? Who knows?

Actually, if you’re reading this much after March 2014, then you already know.

How did I do?

David

Icinga REST ruby gem

October 8, 2011

I’ve just published a gem to simplify access to the Icinga REST API.

The Icinga REST API can be used to allow nodes in a multi-server system to get information about the overall state of the system from the monitoring server, without requiring them to have detailed information about the other nodes. This can be quite handy.

For example, let’s say one server in a multi-server system wants to take itself out of the active server pool to carry out some long-running, processor-intensive task, and then put itself back into service once it has finished. This is fine, unless too many other, similar servers try to do the same thing at the same time. In that case, there might be too few active servers left to handle the realtime load on the system.

One option is for the server to say “I want to go out of service, but I’ll only do that if fewer than N of my siblings are currently out of service.” Assuming that our Icinga monitoring server knows about every node (which it should), then we could do something like this;

    #!/usr/bin/env ruby

    require 'rubygems'
    require 'icinga_rest'

    check = IcingaRest::ServiceCheck.new(
      :host    => 'my.icinga.host',
      :authkey => 'mysecretapikey',
      :filter  => [
        {:host_name    => 'web*'},
        {:service_name => 'In Service', :state => :critical}
      ]
    )

    puts check.count

Then, we can make a decision based on the value of check.count to see if this server is allowed to take a break.
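That decision is then a one-liner. Here’s a sketch with a stand-in for the check object — the threshold name and value are mine, not part of the gem:

```ruby
require 'ostruct'

MAX_SIBLINGS_OUT = 2  # assumed policy: at most 2 servers out of service at once

# Stand-in for the IcingaRest::ServiceCheck result above
check = OpenStruct.new(count: 1)

if check.count < MAX_SIBLINGS_OUT
  puts "taking a break"
else
  puts "staying in service"
end
```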

That’s about all that the gem can be used for right now. I might extend it as I think of more ways to use the monitoring server to coordinate the activities of the various servers in a system.

Updated: The code is up on GitHub, here.

Updated: Thanks to a tip from Erik Eide, the gem no longer has to shell out to wget to call the Icinga REST API. The Addressable gem can handle the malformed URLs that the API requires.

Install ruby1.9.2 from source using Puppet

October 6, 2011

I usually use Ubuntu 10.04 as my server platform. Now that I’m switching to ruby 1.9.2 in production, the utter crapness of the built-in Ubuntu packages has become unsupportable (ruby 1.9.1 doesn’t work with Bundler, for example).

So, I wanted a way to install ruby 1.9.2 using Puppet. This is what I came up with. The files fit together like this;

In my site.pp file, I’ve got this;

import "ruby192"
include ruby

The init.pp file just contains this;

import "*"

The real fun is in the ruby.pp file;

class ruby {

  exec { "apt-update":
    command => "/usr/bin/apt-get update"
  }

  # pre-requisites
  package { [
      "gcc",
      "g++",
      "build-essential",
      "libssl-dev",
      "libreadline5-dev",
      "zlib1g-dev",
      "linux-headers-generic"
    ]:
    ensure => "installed",
    require => Exec["apt-update"]
  }

  # put the build script in /root
  file { "/root/build-ruby.sh":
    ensure => "present",
    source => "puppet:///modules/ruby192/build-ruby.sh",
    mode => "0755"
  }

  # run the build script
  exec { "build-ruby192":
    command => "/root/build-ruby.sh",
    cwd => "/root",
    timeout => 0,
    creates => "/usr/bin/ruby",
    require => File["/root/build-ruby.sh"]
  }

  # update rubygems
  exec { "update-rubygems":
    command => "/usr/bin/gem update --system",
    unless  => "/usr/bin/gem -v |/bin/grep ^1.8",
    require => Exec["build-ruby192"]
  }

}

As you can see, it updates the apt cache, installs some pre-requisites and then runs a script to build ruby 1.9.2 from source. The “timeout => 0” line is important. Without it, puppet will not allow long enough for the build script to run completely. Here’s the build script;

#!/bin/bash

RUBY_VERSION='ruby-1.9.2-p290'

wget "http://ftp.ruby-lang.org/pub/ruby/1.9/${RUBY_VERSION}.tar.gz"
tar xzf ${RUBY_VERSION}.tar.gz
cd ${RUBY_VERSION}
./configure --prefix=/usr && make && make install

That will install ruby 1.9.2 and rubygems, so all that remains for the ruby.pp module is to update rubygems to the latest version.

3D-printed vertebrae!

September 26, 2011

WARNING: This post contains very intimate images of parts of my anatomy 😉

I was unlucky enough to have an accident in a circus class in June, and fractured my spine. There followed a dull 3 months wearing a spine brace, but fortunately there doesn’t seem to be any long-term damage.

Being a geek, I asked the hospital for a digital copy of my CT scans, which they gave me on a DVD. The DVD comes with a basic HTML front-end to view the pictures, like this.

That’s not very informative without several years of medical training, and there was a whole bunch of other stuff on the disk which, I assumed, was the original source data from the CT scanner. So, I had a look for an open source viewer for that data.

I found OsiriX, which is a truly amazing program. After a few minutes of fiddling, I was playing with a 3D representation of my spine, rotating it, zooming in and out.

(You can see the damage to the lower of the 2 vertebrae shown – it should be the same size and shape as the one above)

So, that was fun. But then I decided to geek it up a level and see if I could get a 3D print.

You can export from OsiriX in a number of standard 3D formats. By exporting a .obj file, I could pull that into Meshlab, another great open source program, and clean up the 3D model a bit – deleting some “floating” parts and closing a couple of holes. I’m by no means a 3D modelling expert, and it shows, but I managed to tidy things up a little bit. You’ll need a fairly powerful machine – the vertex map of my scan had nearly 600,000 vertices, which takes quite a bit of memory and CPU to manipulate.

Finally, I had a clean enough .obj file to send off to be printed.

I chose i.materialise. Their “3D Print Lab” has a really nice interface and lets you change scale and choose from several printing materials, allowing you to see the costs and properties of each. I chose to print my model at 50% scale in polyamide, which came in at around €35.

After submitting the job, Dmitriy at i.materialise was really helpful, further cleaning up the model before sending it to print (it turns out my ribs would have fallen off – who knew?).

So, here’s the finished product.

It’s of no practical use whatsoever, but it was a fun bit of geeking about.

PS: Don’t ever fracture your spine – it’s a really bad idea.

IP Ranges gem

September 10, 2011

I just published a gem to help manage ranges of IP numbers.

It allows you to take lots of arbitrary IP data like this;

  • 1.2.3.4
  • 1.2.1.254..1.3.4.4
  • 1.2.0.0/16

…and find out which ranges include or overlap with others. In this case, it provides output like this;

1.2.3.4 is contained by range 1.2.1.254..1.3.4.4
1.2.3.4 is contained by range 1.2.0.0/16
1.2.1.254..1.3.4.4 overlaps with 1.2.0.0/16
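If all you need are the containment checks, the output above can be reproduced with nothing but Ruby’s standard-library IPAddr — this isn’t the gem’s API, just stdlib:

```ruby
require 'ipaddr'

block = IPAddr.new("1.2.0.0/16")
host  = IPAddr.new("1.2.3.4")
span  = IPAddr.new("1.2.1.254")..IPAddr.new("1.3.4.4")

puts block.include?(host)  # prints "true" - 1.2.3.4 is inside 1.2.0.0/16
puts span.cover?(host)     # prints "true" - 1.2.3.4 is inside the range
# two ranges overlap if either contains the other's first address
puts span.cover?(block.to_range.first) || block.to_range.cover?(span.first)  # prints "true"
```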

Here’s the source.