Thursday, October 20, 2005

This blog has moved

I've migrated the blog to its new home, including copying over most of the posts. Blogspot was just getting to be too difficult to write in, and I wanted the blog to be on that domain rather than on Blogspot.

Thursday, October 13, 2005


Now that the Web 2.0 Conference is over, it's time to get back to Puppet development. The main problem I'm still trying to solve is service management. I'm not quite sure there's a single abstraction that will work, since each and every *nix does service startup slightly differently: some use a single script for starting all services; some make links for everything but then use shell-script config files (ugh) to determine whether a service actually starts; and some use something like Solaris's SMF or Mac OS X's launchd to do it for you.

Trying to do service management has convinced me that the real answer is not to worry about getting it right just yet, since it's going to take many iterations to get it right on all of the platforms. Instead, I'm going to focus on making Puppet easy to update, so that as things improve, existing customers can easily pick up the better functionality.

I need to come up with some kind of versioning system within Puppet; I'm thinking that the Puppet framework itself will have one version, and then each primitive (e.g., 'file', 'package', etc.) will have its own version. This might not work out that well in the long run, since we'll often update a primitive for one platform (e.g., add a new packaging type) without updating it for the other platforms, but it seems to be an acceptable compromise for now.

The first step is to come up with a way of registering and checking versions (which should be pretty easy, since I'm already registering all of the primitives). Then I just need to add a 'versioncheck' method or something to the primary configuration protocol, and finally some kind of self-update mechanism (yay!).
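To make the idea concrete, here's a rough sketch in Ruby of what that registration and checking might look like. Everything here is hypothetical -- the class name, the 'versioncheck' signature, and the version numbers are all invented for illustration, not actual Puppet APIs:

```ruby
# Hypothetical sketch of per-primitive version registration; none of
# these names are real Puppet APIs.
class VersionRegistry
  FRAMEWORK_VERSION = "0.1.0"

  def initialize
    @primitives = {}  # primitive name => version string
  end

  # Called once per primitive as it is registered with the framework.
  def register(name, version)
    @primitives[name] = version
  end

  # Returns the primitives whose server-side version is newer than the
  # client's copy, i.e. the ones a self-update should fetch.
  def versioncheck(client_versions)
    @primitives.select { |name, version|
      client_versions[name].nil? ||
        Gem::Version.new(client_versions[name]) < Gem::Version.new(version)
    }.keys
  end
end

registry = VersionRegistry.new
registry.register("file", "1.2")
registry.register("package", "1.0")

stale = registry.versioncheck("file" => "1.1", "package" => "1.0")
# 'file' is out of date on this client; 'package' is current.
```

The interesting part is the return value: the list of stale primitives is exactly the set a self-update mechanism would need to fetch.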


Thursday, October 06, 2005

Web 2.0 so far

I have to say that overall the conference is amazing, but I'm disappointed that the Web 2.0 concepts are not being broken down structurally as much as I'd hoped.

This is a huge marketplace, though, especially for me -- all of these social software companies are going to be building relatively large hosting setups, and they don't want to suffer from those setups any more than they have to. I've already talked to tons of potential customers, and am basically out of cards.

What I haven't been able to get, though, is an in-depth discussion of applying the principles of Web 2.0 to something very different, or even what the principles are. I've also been pretty disappointed that not many of the panels are talking about the structural aspects of Web 2.0 and instead are talking a lot more about the market aspects. In fact, it seems like quite a few of the panelists really don't understand why they're on the stage or what this conference is supposed to be about. I've seen a few questions already that seemed pretty obvious but were met with blank faces.

It now looks relatively unlikely that I'll be able to delve deep into the technology and principles before I leave, but I would love to find someone who's really interested in the principles themselves and then spend an hour or so hashing it out.

Tuesday, October 04, 2005

HTML Sucks

So, Blogspot has already lost two of my posts, and it forces me to write in HTML instead of a simpler markup language like reStructuredText, so I'm writing in reStructuredText on the side and sending the rst2html output up to Blogspot. That's why the HTML looks insanely bad.

I don't expect to be on this site that much longer, considering that 1) it's a pretty big pain to post, and 2) it is fond of losing posts, especially when I'm using Camino.

Anyone have any better recommendations? Especially something that lets me write in a simplified markup like ReST, rather than having to write real HTML?

Pervasive Tagging

I'm writing this on the plane to the Web 2.0 conference.

I now have basic tagging working in Puppet. Tags are generated on the server when a configuration is provided, and they're stored with the individual objects in the configuration. For instance, here is what some of the tags look like in a short example configuration on my OS X laptop:

remotefile base nodebase tsetse puppet file /etc/issue
base nodebase tsetse puppet file /etc/motd
remotefile base nodebase tsetse puppet file /root
base nodebase tsetse puppet file /tmp/screens/.
remotefile base nodebase tsetse puppet file
base nodebase tsetse puppet file /var/spool/cron
darwin nodebase tsetse puppet symlink /etc/resolv.conf

I'll break down the configuration that generated this admittedly basic list of objects and tags.

  • tsetse: the laptop's name

  • base: the base class; all nodes are members

  • nodebase: the, um, base node; all nodes inherit from it, and it basically just loads the node's operatingsystem class and the base class

  • puppet: the top-level collection of the entire configuration. This tag may not stay, since it would currently be on every single object.

  • remotefile: a simple wrapper function to encapsulate my primary method of copying files around my network

  • darwin: the laptop's operating system class (based on the output of uname -s)

For each line, the last two tags are the object type and the object's name (here paths, though they could also be service names, user names, etc.). I think I should also add a tag for each of the files that mention the object, along with the respective lines from each file.

The tags associated with a specific configuration are sent with the configuration to the appropriate server, but they're also merged into a central repository, so as each node connects and generates its tag list, the central tag repository gets more comprehensive. (Each node's configuration needs to be compiled before its tag list can be generated, and the configuration requires information from the client before it can be compiled, so each node needs to connect before its tags exist.)
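Here's a toy Ruby sketch of that central merging -- the data structures are invented for illustration, not how Puppet actually stores tags:

```ruby
# Sketch of merging per-node tag lists into a central repository
# (hypothetical structure, not Puppet's actual implementation).
# The repository maps each tag to the set of objects carrying it,
# growing as each node connects and compiles its configuration.
require "set"

central = Hash.new { |h, tag| h[tag] = Set.new }

def merge_tags(central, node_objects)
  node_objects.each do |object, tags|
    tags.each { |tag| central[tag] << object }
  end
  central
end

# Tags as generated for two objects on the laptop 'tsetse':
merge_tags(central,
  "file:/etc/motd"           => %w[base nodebase tsetse puppet],
  "symlink:/etc/resolv.conf" => %w[darwin nodebase tsetse puppet])

# Now any tag can be resolved back to the objects that carry it:
central["darwin"].to_a  # just the resolv.conf symlink
```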

Possible uses

The tags are currently unused, but their mere existence has got me thinking:

  • Logging

    I have been expecting that most people would just use syslog to pass logs around -- it's easy, it's pervasive, and Puppet has been developed specifically to be compatible with it. However, it would be difficult to shoehorn tags into syslog, or at least it would be annoying to get them back out; it would have to be based on pattern-matching specific lines, which is never pleasant. I'm thinking instead that I could write a log server capable of receiving these log messages and then enhancing them to store the tags associated with the node that generated each message.

    Imagine being able to trivially find every log message generated by any darwin node in a given time period, or the error messages from a specific network associated with DNS. Build a database of all of these log messages, with the fields and tags set up appropriately, slap a Rails interface on it, and you could get that pretty easily.

  • Metrics

    Similarly, I have been expecting people to ship their Puppet metrics off to a specific performance app, but there probably aren't any apps out there that gracefully handle associating arbitrary tags with each metric. Build a simple server to accept the metrics that Puppet generates, and suddenly you can generate change-count reports on a specific server class like 'webserver', or on how many out-of-sync security directives there are in the DMZ.

  • Ticketing

    I know I'd like to have each of these tags stored with each ticket generated from Puppet (yes, I hope to eventually have Puppet autogenerate tickets). Query for all outstanding tickets on the web farm, or any tickets related to the recent Solaris upgrade.

  • Reference

    With a web-based annotation system for the configuration, you could use tags as a kind of wiki keyword system -- selecting a tag in a log message automatically finds the definition of that tag (as a server class or functional component or node or whatever).
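To make the logging idea concrete, here's a minimal Ruby sketch of a tag-indexed log store. The names and storage are invented for illustration; a real server would sit behind a syslog-like transport and a proper database:

```ruby
# Toy tag-indexed log store, illustrating the idea above; every name
# here is hypothetical, not part of Puppet.
LogEntry = Struct.new(:time, :message, :tags)

class TagLog
  def initialize
    @entries = []
  end

  # Store a message along with the tags of the node that generated it.
  def record(message, tags, time = Time.now)
    @entries << LogEntry.new(time, message, tags)
  end

  # Every message whose node carried all of the given tags.
  def query(*tags)
    @entries.select { |e| (tags - e.tags).empty? }
  end
end

log = TagLog.new
log.record("named restarted", %w[darwin tsetse dns])
log.record("httpd config synced", %w[webserver culain apache])

log.query("darwin").map(&:message)    # only the named restart
log.query("webserver", "apache").size # only the httpd sync
```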

Tag injections

One of the things unaddressed in the current proof-of-concept system is that servers will often want to generate their own tags. In particular, I can see adding an 'unmanaged' tag to objects that get mentioned in a configuration (e.g., as a requirement) but that are not managed, or storing the sync state as at least a local tag for an object (e.g., error, synced). Should these tags make their way to the central system, or should users at least be able to get access to those tags?

State tags

The idea of nodes injecting tags brings up the possibility of using tags as a means of storing state, or rather, of turning the state of an object into just another tag. Again, this would be useful mostly for reports, but it could certainly help quickly find all nodes in an error state, along with the objects associated with those nodes, and this would be especially useful if it were updated live. It might make sense to just support this as a live query against the system -- it should be pretty lightweight, at least compared to, say, querying the actual configuration.

This most likely makes sense to use as either a last resort or as a verification system. If you've got a ticketing system for reporting errors, Puppet could automatically generate tickets when an object fails, and then the ticket system could automatically verify that the object is no longer in an error state before it allows the user to close the ticket.

Collaborative tagging

One of my primary design goals in Puppet is to encourage and support configuration sharing. Configuration management will never advance as a field when every organization has to create its own definition for every server class, because those definitions, like servers managed without automation, present unnecessary and often arbitrary variation. So, I know that collaboration will already be critical to the future of Puppet.

How can tagging affect that collaboration? Would it be beneficial if people published the tag lists associated with each of the objects they manage? Could seeing someone else's tags on an object provide new ideas for organization or configuration? Would people who are unwilling to share their whole configuration be willing to just share their tags?

Configuration discovery

Could tags be used to discover a node's existing configuration? Maybe provide a few important tags through autodiscovery, like operating system, host name, and network, and then use that to start collecting a larger configuration over time.

This seems pretty damn unlikely.

Tags or classes?

Bjork or Goldthwait? Sorry, couldn't resist. Just like in cfengine (although I definitely took the long way around), classes in Puppet can generally be considered as booleans, i.e., tags. There's a heckuvalot of additional semantics associated with these tags compared to, say, a Flickr tag, since adding a tag causes work to happen on the system in question, but it can still be considered just a tag.

As configurations get both more complex and more dynamic (changing often, either because users designate them to do so or because they automatically react to network state, time, etc.), it might make the most sense to have a static configuration in one place that does not mention nodes at all, and then almost a scratch space for the nodes, where you can dynamically pin tags on a host and watch the chaos ensue.

At the very least, if you had a configuration that was changing constantly, you would almost definitely benefit from a map that showed the tags on a given host.

Typed tags

Is it worth deemphasizing the boolean aspect of tags and adding typing instead? For instance, is it worth differentiating server-class tags from component tags? Or, like so much else, does it just make sense to pick class and component names intelligently, so that the distinction is relatively apparent?

Obviously the easy solution is to start without typed tags, but it's something to keep in mind. I intuitively think that a lot of the power of tags comes specifically from their simplicity, and adding complexity would probably tend to decrease the flexibility.

Monday, October 03, 2005

Tag and flatten

I am attending the Web 2.0 Conference this week, so I'm preparing for it by thinking about how it can apply to Puppet. I'm guessing that I'm ignoring a significant portion of what's considered to be a crucial aspect of Web 2.0, because I'm focusing on the immense value-add of tagging objects and then providing simple interfaces for sharing and browsing tags, which I think of as the 'tag and flatten' method. That is, rather than building up specific, static hierarchies based on whatever categories or criteria, just tag every object in the hierarchy with whatever details you think matter and then flatten the hierarchy, letting the tags themselves draw the patterns out.

A good example of self-discovered patterns is Flickr's tag clusters, which are algorithmically determined groups of related tags -- that is, tags that are determined to be related by 1) individuals tagging individual items with whatever they want, and then 2) computers assessing those tags and finding sets that seem "related", via whatever definition the algorithm uses.

How can Puppet take advantage of this? I mentioned it briefly in my letter to the O'Reilly Radar, but it's pretty simple. One way to talk about Puppet's goals is that it is empowering sysadmins to normalize object specifications across an entire network. Given any configurable element on any machine on your network -- file, user, package, cron job, IP address -- that individual element should only be mentioned one time in your configuration, and then every host or host class that needs that element (e.g., to install a package or provision a user) just imports that portion of the specification.

The top-down way of looking at this importing is that it presents a bit of a hierarchy, from server, to server class, to service, to element -- except that it's not one hierarchy, it's a myriad of hierarchies, one for every node and every element. Try to draw a map of these relationships in anything resembling a real hierarchy and you will soon need more dimensions than string theory. Tag-and-flatten that same set of hierarchies, though, so that each configurable element is tagged with the host names, services, and server classes that mention it, and you are no longer forced into one way of seeing the data. You can draw out whatever structures you think are appropriate, and sufficient tools should be able to draw them out for you, just like Flickr's tag clusters.
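As a sketch of tag-and-flatten in Ruby (with an invented nested-hash shape standing in for the hierarchies): walk each hierarchy, tag every leaf element with the names of the containers above it, then throw the hierarchy away and keep only the element-to-tags map:

```ruby
# Tag-and-flatten sketch: hierarchies in, flat tag map out.
# The nested-hash input shape is invented for illustration.

# Walk nested containers down to leaf elements, tagging each leaf
# with every container name on the path above it.
def tag_and_flatten(tree, path = [], flat = Hash.new { |h, k| h[k] = [] })
  tree.each do |name, children|
    if children.is_a?(Hash)
      tag_and_flatten(children, path + [name], flat)
    else
      children.each { |element| flat[element] |= (path + [name]) }
    end
  end
  flat
end

hierarchy = {
  "culain" => { "webserver" => { "apache" => ["package:apache", "file:/etc/apache"] } },
  "tsetse" => { "base"      => { "apache" => ["package:apache"] } },
}

flat = tag_and_flatten(hierarchy)
flat["package:apache"]  # tagged by both hosts and all containing classes
```

Once flattened, the hierarchy question disappears: any element can be found from any of its tags, regardless of which hierarchy originally contained it.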

If you do this for your whole network, you could go to this tagged-and-flattened list of elements and quickly determine which hosts have a given user on them, or which server classes require Apache. And because you have these tags on the elements themselves, you could subsequently tag those elements' data with the same tags. Centralized Apache logs are great, but they lose a lot of implicit information. Imagine a log centralizer that also tagged each log message with all of the Apache element's tags (localized to the specific host, of course -- that is, the tag list would not include every host name with Apache in it or all of the server classes that use Apache, just the tags related to Apache running locally) -- go to your log database and see all Apache logs related to a specific server class, or a specific server, or network, or whatever you want.

Of course, those add-on tags require quite a bit of additional infrastructure -- standard means of referring to an individually configurable element (hah! just try to standardize the definition of an element, I dare you!), plus standard ways of retrieving the tags on them, and then tools that actually do so. I'm guessing that this is not worth it for many purposes or many organizations, but I think it is worth it for some, and I also think it adds a helluvalot more value than is necessarily obvious.

Puppet is not quite ready to support this level of pervasive tagging, at least partially because I've been stupidly focused on making it actually do work, but... maybe this is what I will do in the 36 hours or so I have between now and when the conference starts.

Wednesday, September 28, 2005

Web 2.0 and Puppet

Here's a letter I sent to O'Reilly Radar, in relation to my attendance of the Web 2.0 conference:

I just announced the beta release of my software startup's main product, Puppet, which is a GPLed configuration management solution written in Ruby (no, the point of this email isn't a pitch; it's to ask two questions related to Web 2.0). At this point Puppet is analogous to cfengine, although I believe I've created a significantly superior product, especially in Puppet's language. I actually wrote a couple of cfengine articles last year, and I spent three years doing cfengine consulting, along with spending four months trying (and failing) to rewrite cfengine's parser, so I know cfengine and its language pretty well.

The reason that I'm writing the O'Reilly Radar about Puppet is that I have plans to significantly develop the Puppet client to create a kind of Puppet mesh network, and while I am convinced that there is some value-add to doing this that's analogous to the value-add in Web 2.0 sites, I can't quite pin it down.

I'm attending the Web 2.0 conference in October, and I'd like to show up with, at the least, some extra contacts so that my time in the hallway track (as it's called at LISA) is a bit more valuable, but I'd especially love to have a bit of a dialog about this idea before I show up, so that the time at the conference is especially valuable.

So, what am I thinking of? Here are some of the important aspects of the setup:

* Each puppet daemon will be modeling the entire configuration of the server on which it's running, using higher-level elements like packages, services, and files. In addition to the normal elements, though, the client will also be modeling all of the relationships between objects -- if a service requires a file, the client will know it.

* Each daemon will also be doing significant monitoring and record-keeping on the client and will have information available on what work it has done -- what packages it has installed or upgraded, which files it had to fix permissions or ownership of, which services had to be restarted, etc.

* Daemons will eventually also be able to model relationships between different servers, although that's not going to happen until probably 2.0. So, what we have now is a mesh capable of modeling not just a single host's configuration but that of the entire network, including hopefully all of the interrelationships between hosts and the services on the hosts. In addition, we have historical information about what things previously looked like and what we've had to do to keep the configuration correct.

How can we throw some Web 2.0 goodness into the picture? Well, I'm not really sure myself, at least partially because the definition of Web 2.0 doesn't seem very clear, but it does kind of necessarily imply humans visiting websites, and here we have neither humans nor websites. So we first have to ask whether it makes sense even to talk about Web 2.0 without those two key features -- or rather, whether the general principles of Web 2.0 extend beyond the web and into general connectedness.

I think they do. Let's take a simple example: Puppet has classing capabilities, where you collect objects and name the collection:

define apache {
    service { apache: running => true }
    package { apache: install => latest }
    file { "/etc/apache":
        source => "puppet://server/source",
        recurse => true
    }
}
Like most things, it makes sense to create the configuration in this kind of hierarchical style, but also like most things, we want to be able to get more out of the configuration than a simple hierarchy. Let's take the Flickr route, then, and consider each of these elements to be tagged with 'apache', and then maybe also tag each of them with the name of every server to which the 'apache' definition is applied. This doesn't seem too useful to start with, but if we extend it all the way up -- every element on a system is tagged with each class or definition that includes it, and since both classes and definitions can be hierarchical, this could get pretty big:

import "apache" # the definition above

class webserver {
    apache {}
}

case $hostname {
    culain: { webserver {} }
}

This would result in each of the objects in the apache definition getting tagged with 'apache', 'webserver', and 'culain'.

Let's go one better, though; this is marginally interesting on one client, but let's extend it to the whole network and normalize all configuration elements across it. Do the same tag-and-flatten to every element on every node on the whole network (ignore where we do this for now, whether on a central server or wherever) -- you now have, continuing with our example, a single apache package element (or maybe one for each major rev), tagged with every host that has apache installed, along with a tag for every class or definition that refers to an apache package.

Now take this tagged-and-flattened list and make it available to every node, and add some CLI tools to access it. Now you can connect to any machine on the network and query for tags related to any element, and you'll get back all kinds of metadata -- what hosts also have that element, what classes care about it, what elements depend on it, that kind of thing.
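A toy Ruby sketch of what such a query could return, with the flattened data and its hosts/classes split invented purely for illustration:

```ruby
# Toy query against a flattened element list; the data shape and the
# field names are hypothetical, not part of Puppet.
FLATTENED = {
  "package:apache" => {
    hosts:   %w[culain tsetse],
    classes: %w[webserver apache],
    deps:    %w[service:apache file:/etc/apache],
  },
}

# Look up an element and render its network-wide metadata.
def query_element(name)
  meta = FLATTENED.fetch(name) { return "unknown element: #{name}" }
  "#{name}\n" \
  "  hosts:   #{meta[:hosts].join(', ')}\n" \
  "  classes: #{meta[:classes].join(', ')}\n" \
  "  depended on by: #{meta[:deps].join(', ')}"
end

puts query_element("package:apache")
```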

Would this be useful? I can only think it would be, even with just that. But then take this and start doing all kinds of weird things, like looking for groupings along the lines of Flickr's tag clusters -- I can basically guarantee that you'll find all kinds of interesting patterns and clusters in this flattened-and-tagged list.

So, my two questions to the O'Reilly Radar team are:

1) Is this Web 2.0?

2) Are any of you interested in having a bit of a conversation about this? If not, is there a forum that it makes sense to bring this to? I think Puppet will be a useful and popular tool regardless of whether I can apply Web 2.0 to it, but if I can take all of this information and really do interesting things with it, I think I could have a great tool, and I could seriously affect how systems are managed and monitored.

No idea if you're interested, but feel free to post this on the blog if you think it would generate interesting discussion.