Nov 11 2009


Hacker-in-Residence Update and Call to Action

The Hacker-in-Residence program at Relevance has been a great success so far. Our current crop, Corey Haines and Nick Ang, have been a blast to have at the office, and we've learned a lot from each of them. The best thing about the program is the extended chance to see another style and approach; with 6 to 12 weeks to interact, you can really start to grok how that person approaches technical problems in a way that a day or even a week can't quite illuminate.

We've got open slots coming up, so if you are interested, here's the setup again:

  • Relevance:
    • pays you (this isn't an internship)
    • provides you a place to stay at our expense
    • makes you part of the team, and treats you like any other team member
  • You:
    • get here (Durham, NC)
    • feed and clothe yourself
    • give us 6-12 weeks of ass-kicking hacking
    • teach us what you know
    • hopefully learn something from us too

This crop of hackers has gotten to work with Ruby, Rails, Clojure, RSpec, Cucumber, Spread, Redis, and more. They've worked on giant projects for huge customers, and 1-week projects for immediate deployment. They've sat in on our team retrospectives, risk meetings and retros with clients, strategic planning sessions, and a lot of lunches filled with Indian food and technical rambling. Which means they've just been part of the Relevance team.

If you are a lone wolf looking for a place to hang for a while, or you and your employer have an "open relationship" where you can see other people, and you like the cut of our jib, drop us an email at contact AT thinkrelevance DOT com.

Oct 20 2009


The Power of Names

Last week I had a great time at the Latin America Rails Summit. (Thanks to the organizers for inviting me!) During breaks in the action here and there, I spent time working on Tarantula, and for a lot of that time I was sitting with Chad Fowler and David Chelimsky. David was working on RSpec, and asked Chad and me for help coming up with a good name for a method. The ensuing conversation struck all of us as a great example of the power of good names.

It's often very difficult to find just the right name for something in a program. It's also hard to understand why names are so important. Names are just symbols, after all, and in some sense they're arbitrary. Certainly the computer doesn't care what name you use for something, as long as you're consistent. And programmers are fairly good at dealing with arbitrary names; think of all the acronyms we use without ever thinking of what the letters stand for, and repurposed words like "bus," "mask," and "string," with technical meanings that bear little resemblance to the original English meanings.

So if we can deal with non-optimal names just fine, and choosing good names is really difficult, it hardly seems worth it. But good programmers understand the value of names. This story illustrates how hard it can be to find just the right name, and also how good names can help you to understand your own program and domain in deeper ways.

Wrapping Assertions

David was working on RSpec's built-in DSL for defining matchers. A matcher in RSpec is the object that actually tests a particular condition to determine whether or not it holds, along with a method that's used to select the desired matcher. There was a time when creating a matcher was a bit cumbersome, but David and other RSpec contributors have made it much easier over time, and part of that is a little DSL. Here's an example, from the RSpec docs:

    Spec::Matchers.define :be_in_zone do |zone|
      match do |player|
        player.in_zone?(zone)
      end
    end

That snippet creates a matcher called be_in_zone that checks whether a supplied player (the actual value, in traditional unit-testing parlance) is in the expected zone. With that matcher defined, a programmer can write an expectation in this form:

    bob.should be_in_zone(3)

In some situations, RSpec needs to work with assertions defined for test/unit. That works just fine; when using RSpec with Rails, for example, you can use Rails' custom assertions like assert_select and it just works. But RSpec users would like to be able to easily wrap an assertion in an RSpec matcher, so that their tests are more consistent.

That's the feature David was working on: support in the matcher DSL for calling a test/unit assertion and mapping that to RSpec semantics. The matcher needs to call the assertion, determine the result, and report that result the way an RSpec matcher is supposed to.

David wrote the first version without thinking about names very much (just to get it working) and came up with the wrapped_assertion method, used like this:

    Spec::Matchers.define :equal do |expected|
      match do |actual|
        wrapped_assertion(AssertionFailedError) do
          assert_equal expected, actual
        end
      end
    end

Both test/unit and RSpec recognize three results of a check: success, failure, and error. Success is when everything happens as expected. An error occurs when the code under test (or perhaps the test itself) raises some unexpected exception. Failure, on the other hand, happens when the code all seems to work (in the sense of not raising any exceptions) but one of the expectations is not met. In test/unit, failure is indicated when an assertion raises AssertionFailedError.

So two of the three situations are indicated when an exception is raised; the distinction lies in which exception. The definition of wrapped_assertion is explicit about that: it takes an argument that is the exception class representing failure.

    def wrapped_assertion(error_to_rescue)
      yield
      true
    rescue error_to_rescue
      false
    end

As you can see, the matcher is supposed to indicate success by returning true and failure by returning false. Any unexpected exception just bubbles out of the method, indicating an error.
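To see the three outcomes concretely, here is a self-contained sketch (the local AssertionFailedError class is a stand-in for test/unit's, defined here only for illustration):

```ruby
# Stand-in for test/unit's failure exception, for illustration only.
class AssertionFailedError < StandardError; end

# Same shape as the wrapped_assertion described above: true on success,
# false when the expected exception class is raised, anything else bubbles.
def wrapped_assertion(error_to_rescue)
  yield
  true
rescue error_to_rescue
  false
end

# Success: the block raises nothing, so the matcher returns true.
puts wrapped_assertion(AssertionFailedError) { :ok }

# Failure: the expected exception class was raised, so it returns false.
puts wrapped_assertion(AssertionFailedError) { raise AssertionFailedError }

# Error: any other exception just bubbles out of the method.
begin
  wrapped_assertion(AssertionFailedError) { raise ArgumentError, "boom" }
rescue ArgumentError => e
  puts "bubbled out: #{e.message}"
end
```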

When that was working, David checked it in and asked Chad and me for our thoughts about a better name than wrapped_assertion.

He started by explaining the situation and refreshing our memory about the semantics. We promptly threw out some possibilities: fail_when, fail_on, fail_on_raise ... I don't remember all of the suggestions, but I know those were in the mix. And I think we settled on fail_when_raises, because that made sense even if the parameter was optional and omitted: fail_when_raises will fail when any exception is thrown, whereas fail_when_raises(AssertionFailedError) will fail only when that explicit error is thrown.

That was that, we thought. It was dinner time, so we closed our laptops and forgot about the problem.

Keeping the Domains Straight

The next day, however, we found ourselves around a different table, back to working on our three different projects. David realized he wasn't happy with the method name, for a subtle reason.

This is a method that bridges two domains: the test/unit domain and the RSpec domain. Inside the method, test/unit rules: we call an assert method and interpret the result. Outside the method, however, is the domain of RSpec. The method name needs to be expressed in RSpec terminology that's appropriate for the situation.

The test/unit terminology of success, failure, and error does apply to RSpec, but not at the level of matchers. A match expression is not a complete expectation in RSpec; it only constitutes an expectation when matched with a should or should_not, like this:

    actual.should match(expected)
    # or
    actual.should_not match(expected)

So the matcher either matches or not, and returns true or false to indicate that. The presence of should or should_not indicates whether that match was expected, and translates it to success or failure. (This allows a single matcher to handle both positive and negative expectations, as opposed to test/unit assertions, which tend to come in positive/negative pairs such as assert_equal and assert_not_equal.)
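That division of labor can be sketched in a few lines of standalone Ruby (this is not RSpec's real implementation; the class and method names here are hypothetical):

```ruby
# A matcher only answers true or false; it knows nothing about
# success or failure.
class EqualMatcher
  def initialize(expected)
    @expected = expected
  end

  def matches?(actual)
    actual == @expected
  end
end

# These two methods play the roles of should and should_not:
# they translate the match result into an expectation outcome.
def should_pass?(actual, matcher)
  matcher.matches?(actual)
end

def should_not_pass?(actual, matcher)
  !matcher.matches?(actual)
end

puts should_pass?(3, EqualMatcher.new(3))      # prints true
puts should_not_pass?(3, EqualMatcher.new(4))  # prints true
```

One matcher thus serves both the positive and negative forms, with no assert_equal/assert_not_equal-style pairing.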

Method names including the words "fail" or "succeed" were letting the test/unit domain leak out of the method body. A better name would be expressed in RSpec terminology. We toyed with false_when_raises until Chad suggested match_unless_raises and we all knew it was right.

Deeper Understanding

At that point we all thought we were done, and I mentioned that it would be good to blog about the whole dialogue; many programmers don't take names very seriously, and it would be good to show the real experience of three good programmers spending more than 5 minutes, in two separate conversations spread across two days, on a name for one simple method.

But the big payoff was yet to come. Once the correct name was in place, it helped David to see something important about his problem domain.

Let's look at that original example again, now with the new name:

    Spec::Matchers.define :equal do |expected|
      match do |actual|
        match_unless_raises(AssertionFailedError) do
          assert_equal expected, actual
        end
      end
    end

There's repetition there---the parallel structure of match and match_unless_raises---that is a clue to something wrong. David noticed it immediately and realized that match_unless_raises should be promoted to the same level of the API as match. Now the example is beautifully simple:

    Spec::Matchers.define :equal do |expected|
      match_unless_raises(AssertionFailedError) do |actual|
        assert_equal expected, actual
      end
    end

That's all it takes to write a matcher that wraps a test/unit assertion.

A lot has been written about the importance of names in programming, including Tim Ottinger's classic Ottinger's Rules for Variable and Class Naming. Good names help us communicate with other programmers, and with our customers. The notion of the ubiquitous language that features so prominently in domain-driven design is based largely on the choice of good, meaningful, shared names.

But good names also reflect the clarity of our thinking, and help us to communicate with ourselves about our own thoughts. And when we get the names right, our code and the problem domain can speak clearly to us, revealing accidental complexity and the underlying simplicity we're searching for.

Good names are worth the trouble.

Oct 19 2009


The Case for Clojure

The case for Clojure is richly detailed and well-documented. But sometimes you just want the elevator pitch. OK, but I hope your building has four elevators:

  • "Concurrency is not the problem! State is the problem. Clojure's sweet spot is any application that has state."
  • "Don't burn your legacy code! Clojure is a better Java than Java."
  • "Imperative programming and gratuitous complexity go hand in hand. Write functional Clojure and get shit done."
  • "Design Patterns are a disease, and Clojure is the cure."

Any one of these ideas would justify giving Clojure a serious look. The four together are a perfect storm. Over the next few months I will be touring the United States and Europe to expand on each of these themes:

  • "Clojure's sweet spot is any application that has state." At Oredev, I will be speaking on Clojure and Clojure concurrency. I will demonstrate how Clojure's functional style simplifies code, and how mere mortals can use Clojure's elegant concurrency support.
  • "Clojure is a better Java than Java." I will be speaking on Clojure and other languages at NFJS in Atlanta and Reston.
  • "Design Patterns are a disease, and Clojure is the cure." I will sing a siren song of Clojure to my Ruby brethren at RubyConf.
  • "Write functional Clojure and get shit done." At QCon, I will give an experience report on Clojure in commercial development. Our results so far: green across the board. Both Relevance and our clients have been pleased with Clojure.

Want to learn about all four of these ideas, and more? Early in 2010, I will be co-teaching the Pragmatic Studio: Clojure with some fellow named Rich Hickey.

If you can't make it to any of the events listed above, at least you can follow along with the slides and code. All of my Clojure slides are Creative-Commons licensed, and you can follow their development on Github. Likewise, all the sample code is released under open-source licenses, including

And remember: think globally, but act only through the unified update model.

Oct 07 2009


Brian's Functional Brain, Take 1.5

Last week, Lau wrote two excellent sample apps (and blog posts) demonstrating Brian's Brain in Clojure. Continuing with the first version of that example, I am going to demonstrate

  • using different data structures
  • visual unit tests
  • JMX integration (gratuitous!)
  • an approach to Clojure source code organization

Make sure you read through Lau's original post first, and understand the code there.

Data Structures

In a functional language like Clojure, it is easy to experiment with using different structures to represent the same data. Rather than being hidden in a rat's nest of mutable object relationships, your data is right in front of you in simple persistent data structures. In Lau's implementation, the board is represented by a list of lists, like so:

  (([:on 0 0] [:off 0 1] [:on 0 2]) 
   ([:on 1 0] [:on 1 1] [:off 1 2]))

Each cell in the list knows its state (:on, :off, or :dying), and its x and y coordinates on the board. The board data structure is used for two purposes:

  • the step function applies the rules of the automaton, returning the board's next state
  • the render function draws the board on a Swing panel

These two functions have slightly different needs: the step function cares only about the state of adjacent cells, and can ignore the coordinates, while the render function needs both.

How hard would it be to convert the data to a form that stores only the state? Not hard at all:

  (defn without-coords [board]
    (for [row board]
      (for [[state] row] state)))

You could write without-coords as a one-liner using map, but I prefer how the nested fors visually call out the fact that you are manipulating two-dimensional data.

Without the coords, the board is easier to read:

  ((:on :off :on)
   (:on :on :off))

If you choose to store the board this way, you will need to get the coordinates back for rendering. That's easy too, again using nested fors to demonstrate that you are transforming two-dimensional data:

  (defn with-coords [board]
    (for [[row-idx row] (indexed board)]
      (for [[col-idx val] (indexed row)]
           [val row-idx col-idx])))

So, with a couple of tiny functions, you can easily convert between two different representations of the data. Why not use both formats, picking the right one for each function's needs?

When performance is critical, there is another advantage to using different data formats: caching. Consider the step function, which uses the rules function to determine the next value of each cell:

  (defn rules
    [above [_ cell _ :as row] below]
    (cond
     (= :on    cell)                              :dying
     (= :dying cell)                              :off
     (= 2 (active-neighbors above row below))     :on
     :else                                        :off))

The "without coordinates" format used by step and rules passes only exactly the data needed. As a result, the universe of legal inputs to rules is small enough to fit in a small cache in memory. And in-memory caching is trivial in Clojure: simply call memoize on a function. (It turns out that for this particular example, the calculation is simple enough that memoize won't buy you anything. Lau's second post demonstrates more useful optimizations: transients and double-buffering. But in some problems a cacheable function result is a performance lifesaver.)

If you use comprehensions such as Clojure's for to convert inputs to exactly the data a function needs, your functions will be simpler to read and write. This "caller makes right" approach is not always appropriate. When it is appropriate, it is far less tedious to implement than the related adapter pattern from OO programming.

Since multiple data formats are so easy, you can use yet another format for testing.


Visual Unit Tests

Since Brian's Brain is a simulation in two dimensions, it would be nice to write tests with a literal, visual, 2-d representation. In other words:

  ; this sucks
  (is (= :on (rules (cell-with-two-active-neighbors))))

  ; this rocks
  ...  => O     

In the literal form above the O is an :on cell, and the . is an :off cell.

Creating this representation is easy. The board->str function converts a board to a compact string form:

  (defn board->str
    "Convert from board form to string form:

     O.O         [[ :on     :off  :on    ]
     |.|     ==   [ :dying  :off  :dying ]
     O.O          [ :on     :off  :on    ]]"
    [board]
    (str-join "\n" (map (partial str-join "") (board->chars board))))

The board->chars helper is equally simple:

  (def state->char {:on \O, :dying \|, :off \.})
  (defn board->chars [board]
    (map (partial map state->char) board))

With the new stringified board format, you can trivially write tests like this:

  (deftest test-rules
    (are [result boardstr] (= result (apply rules (str->board boardstr)))
         :dying  "...
                  .O.
                  ..."

         :off    "O.O
                  |.|
                  O.O"

         :on     "|||
                  O.O
                  |||"))

The are macro makes it simple to run the same tests over multiple inputs, and with liberal use of whitespace the tests line up visually. It isn't perfect, but I think it is good enough.

One last note: the string format used in tests is basically ASCII art, so you can have a console based GUI almost for free:

  (defn launch-console []
    (doseq [board (iterate step (new-board))]
      (println (board->str board))))

JMX Integration

Ok, JMX integration is gratuitous for an example like this. But clojure.contrib.jmx is so easy to use I couldn't resist. You can store the total number of iterations performed in a thread-safe Clojure atom:

  (def status (atom {:iterations 0}))

Then, just expose the atom as a JMX mbean.

  (defn register-status-mbean []
    (jmx/register-mbean (Bean. status) "lau.brians-brain:name=Automaton"))

Yes, it is that easy. Create any Clojure reference type, point it at a map, and register a bean. You can now access the iteration counter from a JMX client such as the jconsole application that ships with the JDK.

To make the mbean report real data, wrap the automaton's iterations in an update-stage helper function that both does the work, and updates the counter.

  (defn update-stage
    "Update the automaton (and associated metrics)."
    [stage]
    (swap! stage step)
    (swap! status update-in [:iterations] inc))

If you haven't seen update-in (and its cousins get-in and assoc-in) before, go and study them for a moment now. They make working with non-trivial data structures a joy.

You might disagree with my choice of atoms. With a pair of references, you could keep the iteration count exactly coordinated with the simulator. Or, with a reference plus an agent, you could push the work of updating the iteration count out of the main loop. Whatever you choose, Clojure makes it easy to both (a) implement state and (b) keep the statefulness separate from the bulk of your code.

Source Code Organization

Lau's original code weighed in at a trim 67 lines. Now that the app supports three different data formats, a console UI, and JMX integration, it is up to around 150 lines. How should we organize such a monster of an app? Two obvious choices are:

  • put everything in one file and one namespace
  • split out namespaces by functional area, e.g. automaton, swing gui, and console gui

I don't love either approach. The single file approach is confusing for the reader, because there are multiple different things going on. The multiple namespace approach is a pain for callers, because they get weighed down under a bunch of namespaces to do a single thing.

A third option is immigrate. With immigrate you can organize your code into multiple namespaces for the benefit of readers, and then immigrate them all into a blanket namespace for casual users of the API. But immigrate may be too cute for its own good.

Instead, I chose to use one namespace for the convenience of callers, and multiple files to provide sub-namespace organization for readers of the code. I mimicked the structure Tom Faulhaber used in clojure-contrib's pprint library: a top level file that calls load on several files in a subdirectory of the same name (minus the .clj extension):


I also used this layout for clojure-contrib's JMX library.

Parting Shots

Over the course of Lau's two examples and this one, you have seen:

  • an initial working application in under 100 lines of code
  • transforming data structures for performance optimization
  • transforming data structures for readability
  • a second (console) gui
  • optimizing the Swing gui with double buffering
  • optimizing with transients
  • visual tests
  • easy addition of monitoring with JMX

And here are some things you haven't seen:

  • classes
  • interfaces
  • uncontrolled mutation
  • broken concurrency

Would it be possible to write a threadsafe Brian's Brain using mutable OO? Of course. Is there a benefit to doing so? I would love to hear your thoughts on the subject, especially in the form of code.

Oct 06 2009


NFJS, the Retro (Twin Cities Edition)

We had a terrific conference retrospective this weekend at the Twin Cities NFJS show. Eight people participated, with me facilitating. The conversation was good, and everyone left excited to implement the SMART goals that we chose.

As usual, we followed the basic structure from the Agile Retrospectives Book: Setting the Stage, Gathering Information, Generating Insights, Deciding What To Do, and Closing.

Setting the Stage

The NFJS conference retro is an odd beast: a sixty-minute retro embedded in a ninety-minute talk about using retros in your own organization. This poses a few special challenges:

  • The team needs to have a working agreement to occasionally "go meta" and talk about the mechanics of the retro during the retro. This risks reducing the value of the retro, but so far attendees have found it effective.
  • The team isn't really a team: it is a subset of the conference attendees thrown together for the first time to do the retro. So, the facilitator has to choose the activities in real time to match the size and composition of the group.

Sticking to the time box, we covered these ground rules in the first five minutes.

Gathering Information

To gather information, we used the team radar activity from the book. A few minutes of brainstorming identified about ten possible items to rate. That is too many to actually rate, so we used dot voting to narrow down to four items. Each participant used a Sharpie to make three dots next to the items they considered most important. As usual, moving about the room helped people get engaged physically and mentally.

The items and their individual ratings (1=worst, 10=best) follow:

  • meaningful content: 8, 7, 8, 7, 7, 7, 7
  • speaker quality: 9, 8, 8, 8, 8, 8, 9
  • track organization/session selection: 7, 8, 6, 6, 6, 6, 5
  • price/value: 8, 7, 6, 6, 8, 7, 6, 8

These numbers are quite high; overall possibly the highest I have seen at a conference retro. Also, the lowest scores were on track organization. When selecting talks for a conference it is almost impossible to please everyone, so while the scores were lower I was not optimistic about finding a useful recommendation. (Note that as the facilitator I kept these musings strictly to myself during the retro!)

Generating Insights

To generate insights we used the learning matrix. This exercise uses four quadrants (happy/sad/ideas/appreciation) to guide brainstorming. In the interest of space I will not reproduce the matrix results here: the information would be meaningless without pages of explanatory text.

Unusually, the "ideas" section filled up first. In fact, to hit the time box we didn't even manage to fill the other quadrants.

Deciding What To Do

To keep to our sixty-minute time box, we agreed to dot vote for two items on the ideas list, and convert them to SMART goals. A SMART goal needs to be *S*pecific, *M*easurable, *A*chievable, *R*elevant, and *T*imely.

We ended up selecting these goals:

  • By February 2010, update the website and printed materials to include a single line of text describing the "recommended audience." Free text might seem dumb compared to some kind of tagging, and it is deliberately so. We started to discuss a system of categories and realized that this would make the goal more complex, and therefore less likely to be implemented. Keep it simple.

  • By February 2010, run a birds-of-a-feather (BOF) session titled "What To Do on Monday!" Everyone agreed that the conference talks were inspiring, and included ideas that could create massive improvements back at the day job. But massive improvements are often beyond the immediate control of conference attendees, and it is frustrating to go back to work on Monday and not be able to start a new practice. The BOF will allow attendees and speakers to look across all the conference topics and generate a list of bite-sized ideas. If the BOF goes well, we will use it to generate an outline for a formal talk.

February 2010 might seem like a long way off, but since the retro was a breakout team that did not include the people who would actually have to implement the recommendations, it seemed wise to allow the work to be done during the winter off-season for the NFJS tour.


Closing

To close the retro, we did a quick thumbs up/down/sideways check to see if the participants found the retro useful. Votes were seven up, one sideways.

All in all, a good sixty minutes. I am excited about implementing the new ideas, and I think that the regular recurring nature of NFJS provides a great opportunity to close the feedback loop faster than at a conference that runs less frequently.

Sep 30 2009


10 Must-Have Rails Plugins and Gems (2009 Edition)

When Paul Graham wrote that the "list of n things" is a degenerate case of the essay, our first thought was "Wow! That's for us!" And we're going one step farther: we're recycling an old "list of n things" essay from last year.

Seriously, we've been thinking of revisiting 10 must-have Rails plugins for a while now. There is a place for lists like that, and the Rails plugin and add-on space has been moving quickly. We are always looking for better ways to do things, so we try out a lot of the plugins that come along. Our list of favorites---the ones that we use on almost every project---is almost completely different than last year's model.

There's one important change in focus: the plugins and gems that are solely related to testing are gone from this list. Of course, that doesn't mean we're down on testing. On the contrary, we built RunCodeRun because we think testing is so vital. We're saving the testing tools for the RunCodeRun blog; we'll be writing another degenerate essay there as a counterpart to this one.

There are numerous other plugins we use for special needs, such as PDF generation or attachment handling. But our favorites are the ones that we use on almost every project. So here they are, along with brief comments explaining why you want to check them out:

  • Inherited Resources: eliminates most of the boilerplate code from our controllers. (The new controller responder feature in Rails 3 is similar in intent.)
  • Formtastic: takes most of the pain out of writing the markup for HTML forms. (Together, Inherited Resources and Formtastic make a nice alternative to scaffolding frameworks like Streamlined and ActiveScaffold.)
  • CapGun: provides easy build notifications (see this previous post for more info).
  • Faker: helps us generate fake data. We use it for testing, but mostly for providing demo data for development and staging environments.
  • Clearance: feature-rich authentication and signup.
  • Safe ERB: helps ensure that our apps are not vulnerable to cross-site scripting attacks. (We look forward to similar functionality being baked into Rails 3.)
  • RedHill on Rails Core: we use this primarily to declare foreign key references in our database schemas. Telling the database about table relationships adds a small cost to our projects, but we've found that the benefits outweigh that cost. (It's unclear where this plugin lives at the moment, but there are numerous forks of it on GitHub.)
  • RPM: Rails Performance Management from New Relic; wonderful for discovering and diagnosing performance problems.
  • will_paginate: the nicest, easiest pagination plugin we've seen.
  • hoptoad: great, customer-friendly notifications about exceptions that happen in the app.

Don't reinvent the wheel! Use these plugins (or others like them), and definitely consider contributing to them if they fall short of what you need!

Sep 23 2009


Quick and Easy Logging with LogBuddy

Good logging is crucial for effective problem diagnosis in production. Plus, easy logging remains a terrific debugging technique during development. As helpful as "real debuggers" are, sometimes a debugging log statement is exactly what you need to find a problem quickly.

LogBuddy is a little gem that makes good logging easier. It helps even in Rails, which already has good logging support, and it's a bigger help when building gems and standalone Ruby apps, where you have to start from scratch with logging.

What does LogBuddy do for you that Rails doesn't already? There are numerous small features, but here are the big ones:

  1. It ensures that logger is available everywhere, not just in classes that extend parts of the framework.
  2. It makes it extremely easy to add informative debugging messages with annotated output and full exception info.

Installing and Adding LogBuddy to Your Project

To install the latest release of LogBuddy, just install the gem:

$ gem install relevance-log_buddy --source

Then add a require 'log_buddy' statement to your app. If it's a Rails app, it's best to add this to environment.rb:

  config.gem 'relevance-log_buddy',
             :source => "", 
             :lib => "log_buddy"

Finally, initialize LogBuddy. In Rails apps, we usually put this in config/initializers/logging.rb:

  LogBuddy.init

(You can pass some options to LogBuddy.init; we'll get back to that in a bit.)

LogBuddy creates and initializes a logger for the app to use. In Rails apps, it simply uses RAILS_DEFAULT_LOGGER unless you tell it differently.

Using LogBuddy

LogBuddy mixes a couple of methods into every object in the system: logger and d. Here's how they work.

The logger method is no surprise at all. It simply returns the Logger instance, and you can log by calling debug, info, warn, error, or log methods on it. LogBuddy's logger doesn't usually do anything special; the benefit is that, since it's mixed into Object, it's available everywhere, automatically. (In a typical Rails app, there are numerous contexts where logger doesn't work, and you have to explicitly use RAILS_DEFAULT_LOGGER.)
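The core trick can be sketched in a few lines of plain Ruby (this is a simplified illustration, not LogBuddy's actual code; the module and variable names are hypothetical):

```ruby
require 'logger'

# Mix a logger method into Object so that every object, in every
# context, can log without any per-class setup.
module SimpleLogMixin
  def logger
    $simple_logger ||= Logger.new($stdout)
  end
end

class Object
  include SimpleLogMixin
end

# Now logger is available from any context:
"a plain string".logger.info("logging from a String")
42.logger.debug("logging from an Integer")
```

Because the method is memoized through a single shared logger, every object logs to the same destination, which is the behavior you want from an app-wide logger.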

My personal favorite LogBuddy feature is the d method. Like logger, it's available everywhere. But the d method is designed just for debugging messages. You can call it with an explicit string, or with some object you want to see the value of:

  d some_exception
  d result

Strings are logged the same way logger.debug would do it. Exceptions are logged with all of the information you might want: the message, exception class name, and backtrace. Finally, if you pass any other object, d calls that object's inspect method and logs the resulting string.
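That dispatch-by-type behavior can be approximated in a standalone sketch (again, not LogBuddy's real implementation, and this version just prints rather than logging):

```ruby
# Hypothetical sketch of d's type-based dispatch: strings pass through,
# exceptions get full detail, everything else is inspected.
def d(arg)
  case arg
  when String
    puts arg
  when Exception
    puts "#{arg.message} (#{arg.class})\n#{(arg.backtrace || []).join("\n")}"
  else
    puts arg.inspect
  end
end

d "plain message"
d ArgumentError.new("bad input")
d [1, 2, 3]
```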

Where d really shines is when you want to log several values at once, with annotations to distinguish them. Just pass a single-line block to d, like this:

  d { first; current; last }

That produces these three log lines:

  first = "foo"
  current = "bar"
  last = "baz"

The values you log can be any Ruby expression:

  d { 3; name; @model; RAILS_ENV; options[:limit] }

and you'll get just what you want out of that:

  3 = 3
  name = "primary"
  @model = #<Contact id: 14, name: "Joe">
  RAILS_ENV = "development"
  options[:limit] = 5

There are some restrictions if you use this feature. The entire call to d must fit on one line, and you must use the curly-brace style of block, rather than the do/end style. Finally, if you want to log multiple values, separate them with semicolons, not commas.

(You may be wondering how LogBuddy accomplishes that trick. The answer is left as an exercise for the reader ... especially since there are some hints in the restrictions just mentioned. Of course, you can always read the source.)
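For the curious, here's one way such a trick could work (a sketch of the general technique, using Proc#source_location rather than LogBuddy's actual implementation): use the block's file and line number to read the source line back from disk, pull out the text between the braces, and eval each semicolon-separated expression in the block's binding. That's also why the restrictions exist.

```ruby
# Sketch only: recover the source text of a one-line { ... } block.
# This is why the call must fit on one line, use curly braces, and
# separate expressions with semicolons. (It only works for blocks
# written in a file on disk -- not in irb or eval'd strings.)
def d(&block)
  file, line = block.source_location          # where the block was written
  source = File.readlines(file)[line - 1]     # read that line back
  inner = source[/\{(.*)\}/, 1]               # the text between the braces
  inner.split(";").map do |expr|
    expr = expr.strip
    # Evaluate each expression in the block's own scope and label it.
    "#{expr} = #{eval(expr, block.binding).inspect}"
  end
end
```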

LogBuddy Initialization Options

The LogBuddy.init method takes an options hash. Here are the permissible options:

  • :logger -- you can supply a logger instance for LogBuddy to use. If you don't supply one, LogBuddy uses RAILS_DEFAULT_LOGGER if it's defined; otherwise, it creates a new logger that writes to standard output.
  • :log_to_stdout -- by default, messages from the d method are logged (using logger.debug) and also written to standard output. Set this option to false to only use the logger.
  • :disabled -- set this option to true to turn off the output from the d method. It's common to set it this way:
    LogBuddy.init :disabled => Rails.env.production?
  • :log_gems -- if you set this option to true, LogBuddy watches gem activation and logs information about each gem. This can be useful for tracking down gem activation errors.
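Putting a few of those together, an initializer might look like this (the particular option values are just an illustration):

```ruby
# config/initializers/logging.rb
LogBuddy.init :logger        => RAILS_DEFAULT_LOGGER,
              :log_to_stdout => false,
              :disabled      => Rails.env.production?,
              :log_gems      => true
```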

Try It Out!

LogBuddy is really easy to set up, and then it's there when you need it most: when you're focused on a problem and just need to get the details quickly. Please try it and let us know what you think!

Sep 16 2009


Easy Build Notifications with CapGun

Most of us get notified about lots of little events on our projects: commits, build failures (and fixed builds), and runtime exceptions, for example. But deployment is where the rubber meets the road on web development projects. Deployment to staging means new functionality to try out and test, and of course deployment to production is even more important.

CapGun is a gem we use to send email notifications whenever we deploy one of our projects. It works with Capistrano and uses ActionMailer to let every interested party know when a deployment happens.

Here's how to work with CapGun in a Rails project. Add this to config/environment.rb:

  config.gem 'relevance-cap_gun', 
             :source => "", 
             :lib => "cap_gun"

Then install and unpack the gem:

  $ rake gems:install
  $ rake gems:unpack

Finally, edit your deployment script (usually config/deploy.rb) and add this:

  require File.join(RAILS_ROOT,

  set :cap_gun_action_mailer_config, {
    :address => "",
    :port => 587,
    :user_name => "", # deploy bot email address
    :password => "outtacontrol",  # deploy bot email password
    :authentication => :plain
  }

  set :cap_gun_email_envelope, {
    :recipients => %w[],
    :from => "Foo Project Deployment Bot <>"
  }

  after "deploy:restart", "cap_gun:email"

We usually use GMail as our MTA for this purpose, and CapGun includes support for secure, authenticated email connections so that you can do the same. It's best to create a special-purpose, throwaway email account just for things like this; that way there's little risk in putting the email password in the project deployment script. (But there's also nothing stopping you from reading it from another file that's not checked into source control, just like you would do with your production database passwords. The deploy.rb file is just Ruby code, after all. And if the source is going to be in a public repository, you should definitely do that.)
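For example, you could keep the SMTP settings in a YAML file that's excluded from source control and load them in deploy.rb. (This is a sketch; the config/email.yml name and its keys are our invention, not part of CapGun.)

```ruby
require "yaml"

# Sketch: load SMTP settings from a YAML file kept out of source control.
# config/email.yml might contain:
#
#   address: smtp.gmail.com
#   port: 587
#   user_name: deploybot@example.com
#   password: outtacontrol
#   authentication: plain
#
def load_mailer_config(path)
  config = YAML.load_file(path)
  # ActionMailer expects symbol keys, so convert them. (You may also want
  # to symbolize the :authentication value itself.)
  config.inject({}) { |memo, (key, value)| memo[key.to_sym] = value; memo }
end

# In deploy.rb:
#   set :cap_gun_action_mailer_config, load_mailer_config("config/email.yml")
```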

The deployment email has a brief summary at the top (in fact, the subject often tells you all you need to know). But there are useful details in the body. Here's an example of what you might see from the configuration above:

From: Foo Project Deployment Bot <>
Date: Tue, 15 Sep 2009 17:27:57 -0400
Subject: [DEPLOY] fooproj deployed to production

fooproj was deployed to production by george at September 15th, 2009 5:27 PM EDT.

Nerd details
Release: /var/apps/fooproj/releases/20090915212721
Release Time: September 15th, 2009 5:27 PM EDT
Release Revision: 69246739f2a1cbcef5ca76f1842c21849db7778a

Previous Release: /var/apps/fooproj/releases/20090915194707
Previous Release Time: September 15th, 2009 3:47 PM EDT
Previous Release Revision: 9ceb19b1777470d1d5133f433fdd5d1873c7a4c0

Deploy path: /var/apps/fooproj
Branch: main

Commits since last release
d37a15c:who put a blink tag in here?
f1b603e:Added favicon

Setting up CapGun only takes a few minutes, and you'll love the increased visibility into your team's deployments. (And even if you're a team of one, you'll appreciate having the emails as a record of your deployments.) Try it out, and let us know what you think!

Sep 09 2009


Keep Models out of Your Views' Business! (Part 2)

Last week, I wrote about using Rails’ I18N facilities to break dependencies between your models and views, by changing how model and attribute names are displayed. But there’s another place where the implementation of models sometimes peeks through into views: validation error messages.

Depending on how you use them, Rails’ error message helpers may insert attribute names into messages, and the changes from last week will take care of that automatically. But it goes farther than that. Often, validation error messages express things in a way that is not appropriate for end users. The usual solution is to override the default messages with `:message => '...'` options on the validation declarations. But again, that means the model is actively involved in presentation issues, and we’d like to avoid that if possible.

Whole plugins have been written to try to solve this problem, but Rails internationalization support has made it much easier. When validation error messages have to refer to the name of a model class or attribute, they will use the locale definitions we’ve already supplied. And it turns out that they look in the locale for error message text, too.

Overriding Validation Error Messages

Last week, we used a typical Rails example: a blogging app with a model called `Post` with attributes `title` and `body`. (And we used I18N to relabel `Post` as `Article`, and `title` as `headline`.) Let’s continue with that example, assuming this declaration in `Post`:

  validates_presence_of :title

The default error message (with the re-labeling we’ve already done) is “Headline can’t be blank.” How would we change that to something friendlier, like “Headline should not be empty; please supply one”?

Let’s add a little more to `en.yml`—an “errors” section that includes customized validation error messages:

  en:
    activerecord:
      # models and attributes sections carried over from
      # previous article
      models:
        post: "Article"
      attributes:
        post:
          title: "Headline"
      # errors section is new
      errors:
        models:
          post:
            attributes:
              title:
                blank: "should not be empty; please supply one"

We’re now several levels deep in this YAML structure, but it should be fairly easy to follow. Under `activerecord.errors` there’s a section for `models`, within which we find our model (`post`). Of its `attributes` we’ve supplied a custom error message for the `title` attribute; in particular, we’ve customized the `blank` message.

With that change in place, the error messages will read as we’d like them to, with no change to the model or the views.

Note that we restricted this change to just the `title` attribute of `Post`. If we wanted to change all “can’t be blank” messages for all attributes of `Post`, we could do that as well:

  en:
    activerecord:
      errors:
        models:
          post:
            blank: "should not be empty; please supply one"

Finally, if we wanted to make the change across all the models, here’s how:

  en:
    activerecord:
      errors:
        messages:
          blank: "should not be empty; please supply one"

Which Messages Can I Override?

You can see the current default messages in the source code for the activerecord gem; just consult the file `lib/active_record/locale/en.yml`. But here’s the entire list, mapped to the validation methods that use them. (Some validations use different messages depending on the circumstances.)

validates_acceptance_of
`:accepted` (“must be accepted”)
validates_associated
`:invalid` (“is invalid”)
validates_confirmation_of
`:confirmation` (“doesn’t match confirmation”)
validates_exclusion_of
`:exclusion` (“is reserved”)
validates_format_of
`:invalid` (“is invalid”)
validates_inclusion_of
`:inclusion` (“is not included in the list”)
validates_length_of (with :minimum option)
`:too_short` (“is too short (minimum is {{count}} characters)”)
validates_length_of (with :maximum option)
`:too_long` (“is too long (maximum is {{count}} characters)”)
validates_length_of (with :is option)
`:wrong_length` (“is the wrong length (should be {{count}} characters)”)
validates_numericality_of
`:not_a_number` (“is not a number”)
validates_numericality_of (with :odd option)
`:odd` (“must be odd”)
validates_numericality_of (with :even option)
`:even` (“must be even”)
validates_numericality_of (with :greater_than option)
`:greater_than` (“must be greater than {{count}}”)
validates_numericality_of (with :greater_than_or_equal_to option)
`:greater_than_or_equal_to` (“must be greater than or equal to {{count}}”)
validates_numericality_of (with :equal_to option)
`:equal_to` (“must be equal to {{count}}”)
validates_numericality_of (with :less_than option)
`:less_than` (“must be less than {{count}}”)
validates_numericality_of (with :less_than_or_equal_to option)
`:less_than_or_equal_to` (“must be less than or equal to {{count}}”)
validates_presence_of
`:blank` (“can’t be blank”)
validates_uniqueness_of
`:taken` (“has already been taken”)

Interpolating values

You probably noticed that some of the default messages contain the string `{{count}}`. Validations that use some threshold value pass that value into the I18n library as the option `count`, and I18n interpolates the count value into the message string.

In addition to `count`, all of the validations supply three other values that can be interpolated: `model` and `attribute` (the model and attribute names, already humanized as discussed in part 1) and `value` (the erroneous value of the attribute). You can write your error messages to make use of any of those values, using the same double-curly-brace syntax.
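For example, a message that uses the count and value interpolations might look like this in en.yml (an illustrative override, not a Rails default):

```yaml
en:
  activerecord:
    errors:
      messages:
        too_short: "needs at least {{count}} characters ({{value}} is too short)"
```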

Writing custom validations

If you write your own validation methods, whether just for your project or in a plugin, you should make use of the same mechanisms.

When a validation detects an error, it reports that error by calling the `#add` method on the `errors` object. Here’s an example (taken from `validates_inclusion_of`, and slightly modified to make more sense out of context):

  record.errors.add(attr_name, :inclusion, 
                               :value => value, 
                               :default => options[:message]) 

The first parameter is the name of the attribute being validated. Second is the symbol that’s used to look up the default message text from the I18n message repositories. Finally there’s an options hash, which includes the current attribute value (for possible interpolation into messages) and any new message text that may have been supplied directly on the call to `validates_inclusion_of`.

(That last bit is somewhat confusing; the option to `add` is called `default`, but it’s actually a specific override. That bit of weirdness is there to maintain some internal compatibility for the benefit of plugins that were written for older versions of Rails. I wouldn’t be surprised to see that cleaned up in Rails 3, although for now it still works the same way.)

All you need to do in your custom validation is to use the `#add` method in the same way, and supply default error message text in a message repository, under the `en.activerecord.errors.messages` key. You can do it directly in `config/locales/en.yml`, like this:

  en:
    activerecord:
      errors:
        messages:
          bozo: "can't be ridiculous"

Plugins and gems can supply their own YAML file. They just need to tell Rails about it by pushing the file path onto `I18n.load_path`.


As I said last week, the fact that Rails uses model and attribute names in views isn’t a bad thing; it’s a useful convention that helps developers move quickly. The test of a framework like Rails is how easy it is to break those links when you need to. Rails’ internationalization support makes it easy to solve this particular problem, with the added benefit that it makes your application easier to localize, should you need to go down that path.

I’ve completely stopped overriding validation error messages directly in the model code. You should, too!

Sep 02 2009


Keep Models out of Your Views' Business!

One of the core goals of the MVC architecture is to separate the presentation of a model from its implementation and semantics. And yet Rails, out of the box, takes the representation of models---even the names of tables and columns in the database---right through to the user. The default behavior, for form labels and other places where models and attributes are referred to, is simply to use the name of the model or attribute.

Does this mean Rails' implementation of MVC is broken? Not at all! My favorite definition of "architecture" in software is "the set of decisions that will be hard to change later." And the best architectures are the ones that keep that set small, allowing developers to move quickly without fear of being boxed-in later in the project. Rails uses its default behavior to let you move quickly, but leaves your options open for changing that behavior later.

I think Rails' defaults work really well in most cases. After all, we should be working with a domain language that we share with customers and domain experts, so it's likely that our schema and models will use names that are appropriate for presentation. Nevertheless, it's common to reach a point where the user interface needs to use different terms for some of the domain concepts.

So let's see how Rails can help. Assume we've developed the "canonical" Rails application, a blogging engine. The primary model is called Post, and each post has a title and body. And then, the customer decides that the users will prefer writing "articles" rather than "posts", and "headlines" instead of "titles".

What can we do?

Breaking the Link: The Obvious Ways

The most obvious, brute-force solution is to avoid the defaults, instead supplying literal names in all of our views. While there are places where it will be appropriate to supply literal labels, this option should be reserved for specific places where the context requires a different label. Using this approach to change labels throughout the application would be way too much work, and create a maintenance nightmare.

You could actually change your model---rename the table, or column, along with every place that name appears in your Ruby code. But that approach would be tedious and error-prone if the name were central to your model. More importantly, it would be a mistake to change the domain model understood by those involved in the project just to satisfy presentation constraints. It's at this point that the MVC architecture needs to pull its weight by helping you deal with this problem.

Digging a little deeper, you realize that the places in ActionView that use model and attribute names (e.g., the label helper) transform them to "human" form first, removing underscores, adding spaces, and capitalizing appropriately. They do that by calling the human_name method (for model names) and human_attribute_name (for attributes). Every ActiveRecord model inherits those methods from ActiveRecord::Base. So one answer is to override those methods. That's much better than either of the other options, but it's still not very good. If you solve your problem this way, the model will be actively involved in presentation issues. We want to work with the MVC architecture, not against it.

Breaking the Link: The Right Way

The answer to our problem can be found in the documentation for ActiveRecord::Base#human_attribute_name:

This used to be deprecated in favor of humanize, but is now preferred, because it automatically uses the I18n module now.

We can use Rails' internationalization support to supply the new names we want to present to users. Think about your current problem as specifying the English localization for your application.

If it's not already there, create the file config/locales/en.yml, with the following contents:

  en:
    activerecord:
      models:
        post: "Article"
      attributes:
        post:
          title: "Headline"

Under the en.activerecord key there are two sections that define presentation names for the locale: models contains names for models, and attributes contains names for ... well, I think you get the idea. So in our example, the new name for a post is "Article", and the new name for a post's title is "Headline". Note that you specify attribute names in the context of a model class, so if there were another title attribute in your system on a different model class, it would not be affected.

Suddenly, throughout your application, those changes will take effect. You can still supply different labels in particular contexts as needed, but the default is to use the names in the locale file.

Update: As commenter Tim Watson pointed out, it's not currently automatic everywhere. It does automatically happen in error messages, but not in labels (making labels handle this automatically is planned for an upcoming Rails point release). You can supply the proper label text on your own (using the human_attribute_name method). If you're using custom FormBuilders, you can easily encapsulate this change; otherwise, see commenter Priit Tamboom's solution for doing a quick monkeypatch of the label helper until Rails handles it properly.

As you may have gathered from the documentation excerpt above, you should use Post.human_attribute_name('title') instead of calling humanize on an attribute name when writing views. Likewise, you should use post.class.human_name rather than post.class.name or the literal 'Post' for model class names.

Also, associations are treated as attributes, rather than inheriting the new name of their target model. So if some model had an association to multiple posts, you would need to define an attribute renaming in the appropriate model; it would not automatically be called "Articles".

Finally, this approach doesn't deal with controller names in URLs; for that, you should modify the routes in config/routes.rb.

More To Come

There's more to this story, because there's another place where your models peek through into the user interface in undesirable ways. I'll address that in a follow-up post.
