Mar 25 2009


Agile Retrospectives

Agile teams manage change and risk by adapting. But to adapt, you must identify opportunities for change and take them. Retrospectives are a fun, cost-effective way for your team to learn and change.

In this talk, we will begin by conducting a mini-retrospective, so that you get a feel for the basic process. Next, we will review the core principles of a retrospective, and use these principles to compare and contrast a variety of retrospective activities from the book Agile Retrospectives.

Next, we will explore a few retrospective activities in greater detail. These are some of the favorites that we use regularly at Relevance:

  • team radar
  • prioritize with dots
  • learning matrix

Finally, we will talk about how to tune retrospectives to the needs of your team at a specific moment in time. No two retrospectives are alike, and an experienced facilitator adds value by adapting a retrospective to meet the current need.

Mar 03 2009


Fork Everything!

It is now reasonable for some agile teams to fork most or all of their dependencies. Here's why:

A Brief History of Forking

Traditionally, forking a software project was a big deal. You forked when you disagreed about direction and philosophy. Disagreed a lot--so much that you were willing to make a big effort to do things your own way. And while forking didn't have to imply bad blood between forkers and those getting forked, that was sometimes the case as well.

So what is the big forking deal? Forking causes problems at several levels:

  • API Compatibility: How does a prospective user of a library deal with multiple forks with differing APIs?
  • Quality: If there are multiple forks of a project, how do you know which one is good?
  • Configuration: How do you track dependencies when multiple forks of a project have the same name?
  • Documentation: What's the difference between the forks?

And, of course, all of these problems increase for larger code bases--possibly faster than linearly with code size.

So, for a long time, it was safe to assume that forking was a big deal, and best employed rarely.

Forking Pain

To understand what changed, it is best to go back and revisit the problems of forking and ask how they might be exacerbated or mitigated:

  • API compatibility is a problem that grows if API differences are large, or expensive to discover. On the other hand, API changes are much less of a problem when they are small, or cheap to respond to.
  • Evaluating code quality is a problem if QA is expensive, either for a library itself, or for the integration between that library and your system. On the other hand, if there was a cheap test that said "This fork of X works correctly with my project," then forking X yourself looks more reasonable.
  • Configuration is a problem to the extent that it is expensive to track and manage forks. On the other hand, if you can easily review your dependencies, and switch between forks of a project in a few seconds, then configuration becomes less of a problem.
  • Documentation is a problem. What if a fork fixes a problem, or adds a feature, but the documentation is not up to date? On the other hand, if there is a cheap way to discover what a fork does, then a buffet of forks to choose from can be a good thing.

So forking is more reasonable, to the extent that dealing with API compatibility, code quality, configuration, and documentation/discovery is cheap and fast. Different people can reasonably disagree about "how cheap" and "how fast," but everyone should be able to agree that there is a continuum, and that particular development practices could make forking easier or harder.

Reaching a Tipping Point

Over the last several years, we at Relevance have participated in several trends, each of which lowers the cost of forking.

  • Good unit tests help with several of the items above. A good unit test suite will document the API and demonstrate the code's quality.
  • Good integration tests make it cheap to discover API compatibility issues. A good integration test makes it easy to evaluate alternative configurations that use different forks of dependent libraries. Finally, a good integration test can quickly prove that fork X works with your project.
  • Continuous integration helps you keep on top of the code health of a large number of projects. We developed RunCodeRun in part to keep all of our forked projects building cleanly.
  • Low-ceremony languages make everything smaller. Code is smaller and the tests are smaller. Documentation can be done almost entirely in the code itself.
  • Test-driven development helps produce good APIs between subsystems, by making bad ideas painful during development. Good APIs have a smaller surface area, and need to change less.
  • Distributed version control systems such as Git make forking itself cheap and easy. More importantly, you can quickly update your configuration to switch between different forks. Also, it is much easier to manage merges and maintain your own personal fork that pulls needful pieces from multiple other forks.
  • Relentless refactoring keeps code readable, so the prospect of evaluating a fork by reading or diffing its source code is much more palatable.
  • Open source libraries and a culture of source-code deployment make it easy to manage all your dependencies at the source code level. Most of our Ruby projects vendor everything, so it is a simple Git operation to switch to a different commit (or a different fork!) of any dependency.
  • Social sites like GitHub make it easy to discover and track forks of a particular project, and for many different forks to pull from each other. Tools like the Network Graph Visualizer show how your fork differs from other forks, and can help you quickly locate commits you may be interested in.
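The "cheap test" idea from the list above can be made concrete. Here is a minimal, entirely hypothetical sketch: the slugger library and its bug are invented, and in practice the library code would live in your vendored fork rather than inline.

```ruby
# Hypothetical example: suppose we forked a (made-up) "slugger" library to
# fix how it collapses runs of punctuation. A stand-in for the vendored,
# forked code:
module Slugger
  def self.slugify(title)
    title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  end
end

# The cheap regression check itself, run whenever we switch forks. It both
# documents why the fork exists and proves the fix is still present.
expected = "forking-is-cheap"
actual   = Slugger.slugify("Forking... is CHEAP!")
raise "fork regression: got #{actual.inspect}" unless actual == expected
```

Pointed at each candidate fork in turn, a check like this answers "does this fork of X work with my project?" in seconds.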

The combination of all these factors makes forking way easier than I would have believed possible, even a few years ago.

Fork Your Dependencies!

As a result of all these trends, we now regularly fork the third-party dependencies in our projects. Imagine the following scenario: You are nearing a project deadline, and you discover a bug in a third-party library. Here are some possible reactions:

  • The closed-source way: Call a paid support line for your commercial software, explain your problem to a drone, and hope for a fix in the next release 18 months out.
  • The open-source way: Contact the project maintainers and convince them of the justice of your cause. If that is taking too long, and you are in a language that enables it, monkey patch.
  • The "Fork 'em!" way: Fork the project and fix it yourself. The original owners can debate your ideas in their own time (or not). Who cares? You are back up and running.

The cost of forking is now so low that it isn't even limited to fixing critical bugs. We fork to fix minor bugs, to add features, or to make usability improvements to APIs. In short, any kind of change we might make in our own code, we might also be willing to fork and make in somebody else's.

If you had asked me 18 months ago if the "fork 'em" approach would work, I would have said "no," and written you off as a crank. Luckily, nobody asked me! Rob just started doing it, and quickly showed that it worked.

Will this work for everybody? Absolutely not. You need to have a lot of other practices in place first. Otherwise, casual forking will only lead to trouble. But if you are running agile the right way, producing and consuming clean, tight, tested code, you may discover forking around is downright healthy for your code.

Feb 25 2009


Developer Day -- Coming to Durham

Come one, come all to Durham's own Developer Day. Relevance and Viget Labs are happy to be co-hosting a one-day, high-density technical day in the heart of downtown Durham.

For the low, low price of $50, you get eight deep technical talks; breakfast, lunch, and snacks; awesome hallway chatter; and a chance to come out to happy hour and hang out with people JUST LIKE YOU. Awesome!

As you can tell from the pricing, we're not in this for the money. We're just trying to gather the technical community in the Triangle and have some good times. Spitting in the face of the Great Depression and all. So register already! What are you waiting for?

Feb 17 2009


Programming Clojure Beta 7 Is Out

Programming Clojure Beta 7 is now available.

What's new:

  • The book now has a foreword, kindly provided by Rich Hickey.
  • The index-of-any example has been completely rewritten. The new version does a much better job of showing off functional style.
  • A new section in the FP Chapter: "Persistent Data Structures" explains structural sharing.
  • Two new sections in the Concurrency Chapter: "How STM Works: MVCC" explains MVCC, and sets the stage for commute. "The Unified Update Model" makes explicit the common update model shared by refs, agents, and atoms.
  • Better example implementation of blank?.
  • Hundreds of smaller changes based on reviewer feedback and reader suggestions.

What's coming:

The book is basically done, and is off for indexing, copyediting, etc. However, I will update any parts of the book affected by the new fully lazy sequences, once the dust has settled.

To make sure you have the latest, greatest version of the sample code from the book, go and grab the GitHub repo.

Thanks again to everyone who has been offering feedback. Keep the feedback coming!

Feb 16 2009


2008 - A Fine Year

I don't often write "corporate update" style posts, but I'm so happy with how 2008 went that I really feel driven to catalog it in some way, if for no other reason than to remind ourselves of the fun we've had over the last year. The simplest metric for how our year went is size; at the end of 2007, we were seven strong. By the end of 2008, we are fifteen (point 2, if you count the Hacker in Residence program). I could not be happier with the people we've brought to the family in 2008. And we're always on the lookout for more geniuses and ninjaz. Although I'd really prefer both; nothing worse than a couple of dumb ninjas.

As for what the team actually accomplished, let's start with our new baby.


We launched RunCodeRun as a free service to the open source community, and in private beta for commercial code. Not only are we free for open source Ruby and Rails projects, but we supported the Rails Rumble last year, offering an account and round-the-clock(ish) support to anybody participating in the Rumble.

We launched RunCodeRun because we feel strongly about code quality, and having the watchful eye of a good Continuous Integration solution is integral to the way we work. Since we kept having problems with the infinite variety of self-hosted solutions we were using, we decided it was high time somebody came up with a hosted version. We looked around, and couldn't find anybody. Never one to let a yak go unshaved, we launched RunCodeRun. We're very excited by what 2009 holds as we ramp up towards coming out of private beta and going fully commercial. But don't forget, we're forever free for open source, so if your OS Ruby code needs some integration of the continuous kind, come get some.

Our Own Open Source

When we founded Relevance, we had a simple rule for time management: 20% of your "billable" hours had to go towards open source development. Which, it turns out, is a fine idea, except when your team is so dedicated to your clients that they simply won't take the time out of their week to do it. So in late '07, we made a switch: instead of 20% of your time, we made every Friday "Open Source Friday". As it turns out, that's the secret sauce. We set the expectation with our partners that Fridays are for the community, and in turn we encourage each other by all working on open source together on Fridays.

Some of that effort goes to open source projects of our own devising, and some to projects founded elsewhere. Of our own projects from '08, we're particularly proud of Tarantula (we love testing and security assurance so much we made a tool to help other teams feel the painlove), Vasco (an all-JavaScript-and-HTML explorer for your RESTful routes), our headless javascript_testing plugin, and Castronaut (Sinatra-based CAS implementation for single sign-on).

Security Book, Auditing, Open Source

Speaking of Castronaut, '08 was a big year for Relevance in the security space. Aaron's PeepCode on Rails Security Audit was released and seemed popular.

At the same time, we introduced our Security Audit service (to complement our Code Audit and Performance Audit offerings). Our auditing services have been a big hit, and have introduced us to some great teams, including the great folks at the Naval Research Lab. While working with them, we discovered the need for a simpler, Ruby-based CAS implementation, and Castronaut was born from that project collaboration.

Company Process

It feels funny to talk about our corporate process in this way, but I'm stoked about the way we work at Relevance. Our semi-annual retreats give the team some time to really focus on what we do and how we do it, and also on discovering the real differences between Speyside and Islay. Our holiday party (this time, at ACME Food and Beverage Co.) is a great time, but the two-day retreat is the real gem of the holidays.

It isn't just that, though. We've completely imbibed the agile zen and we have a full company retrospective every two weeks. For an hour and a half, we discuss what's going well, what could be going better, who's happy, who's not, and what we can do about it. This has become such a heartbeat for the company that missing or postponing one has the exact same effect on us that an arrhythmia has on your own heart: it doesn't feel good, and you think something might be really wrong.

On top of that, we encourage everybody at the company to schedule personal retrospectives, which take the place of progress reviews. The rules are simple:

  • a personal retro takes half an hour
  • only you can schedule a retro for you
  • you can invite everybody, or just a few people, entirely up to you
  • send out a list of questions you want people thinking about a week beforehand
  • be prepared for honest, sometimes brutally honest, feedback

Sometimes, people have to be prodded to schedule a retro (Stu, I'm looking right at you, pal) but in general, everybody is anxious to do them. I bet we average 3 a year per person, and universally, everyone thinks they are enormously helpful. I wouldn't want to do it any other way; mandatory progress meetings with the boss seem so tepid by comparison.

And, finally, we added the Hacker in Residence program as a way to expand our reach, work with great people, and introduce outside folks both to Relevance, and, frankly, to North Carolina. We give room, board, and a nice consulting rate, and you give us 6-12 weeks of ass kicking. Hopefully, we both learn lots. So far, it is working beautifully.

Technical Trends

Lastly, we've made some technology decisions in '08 that have turned out to be great choices, and we've written some great material that we're proud of. First of all, we switched over to Git (and GitHub) which we've found to be quite a success. Not just for our internal process, but for the communal aspect of it as well. Heck, we like it so much we based the initial release of RunCodeRun around GitHub.

We've also adopted Clojure, as Stuart's beta (and very soon not-beta) book, Programming Clojure, shows. Clojure is an elegant entry into the general programming toolset, and one we've been quite happy to see grow and mature. If sales figures on the book are a useful indicator of the community's interest in the technology, '09 should be quite a boom year for Mr. Hickey.

On the custom development side, we are proud to have launched a Rails + MySQL app that works with over 100GB of data. With its share-nothing architecture, Rails scales just fine when there are passionate developers on the job. A special shoutout to Mark Imbriaco for coming out to lunch one day and validating some of our strategies.

Our training arm delivered a wide variety of classes last year, from Ruby and Rails to JavaScript, Grails, Java Security, and more. We've been expanding our training offerings to include more advanced topics in Ruby and Rails, as well as new technologies like Clojure.

Lastly, we had some really great blog series last year that stood out to me for quality and depth of coverage. Jason Rudolph really went over the top in his Testing anti-patterns series. And Stuart's series spawned some excellent discussion and was a great launching pad for development of his book.


We could not do what we do without the ongoing dedication and excellence of our partners. This means, first of all, our customers. We've got an awesome group of people we're working with, doing great stuff, and we couldn't be happier. I'd call you all out by name if those pesky NDAs didn't get in the way, but you know who you are. In 2008, we worked on projects in the legal, medical, financial, advertising, fitness, government and professional services industries. We developed social networking platforms for customers like TradeKing, and worked with customers like the Naval Research Lab on single sign-on issues. Our custom development in 2008 has included solutions in Rails, Ruby, Java, .NET, a little Windows automation, and a whole lot of database, security and messaging thrown in for good measure.

In addition, we have a variety of other groups we work with who do a tremendous job for and/or with us.

The Upshot

We set out, at the beginning of 2008, with a few goals for the year:

  • grow big enough to support our current customers as well as tackle the kinds of projects we are interested in
  • don't grow so big that our culture and capabilities suffer under its weight
  • adopt the best new technologies
  • make an impact through open source
  • project our values

I definitely feel like we grew the right way in 2008. Our team is big enough to handle multiple complex projects and still provide flexibility, but is not so large that we've forgotten who we are or what we are trying to accomplish. We've adopted technologies (like Clojure and Sinatra, as just a couple of examples) that expand our capabilities and represent alternatives to standard technical approaches to business problems. Focusing our efforts on "Open Source Friday" has vastly amplified our ability to consume, evaluate and contribute to the community. And RunCodeRun is as clear a projection of our values as can be made: we love testing, and we love CI, and we love hosted solutions, and we love Ruby. Doesn't get much clearer than that.

Thanks to everyone who was part of our year last year, and here's to everyone having fun, challenge and success in '09.

Feb 04 2009


Taking Agile from Tactics to Strategy

When it comes to software project management, Agile has won. Oh, sure … most projects are still doing things in a more traditional way, using a waterfall process (or, in many cases, no recognizable process at all). But the best teams are using agile processes, and the thought leaders in software methodologies are focusing their work on agile processes.

But Agile is much more than just a way of running software projects; at its core, it’s a philosophy of how to approach any endeavor. Just take a look at the Agile Manifesto:

Manifesto for Agile Software Development

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Software is mentioned, but most of the manifesto is applicable to any business endeavor.

Many teams have successfully used agile principles to improve the way they run projects. But can we use those principles to do more? Can we use agile principles to improve how we plan for projects? How we identify and choose projects? How we recruit and hire? How we run our entire department, or business, or enterprise?

More Than Projects

Most discussion of agile has been focused on individual projects, or even smaller scales. For many teams, iterations or sprints represent the primary focus; a project is organized as a series of iterations, and little attention is given to managing the project as a whole.

One of the strengths of the agile mindset is that you can get good results that way, and a focus on iterations is not a bad thing! It’s always easier for people to focus on short-term goals, and if doing so yields good results, everybody wins.

But we can do better if we also use agile principles to improve our practices at the scale of the entire project, and even larger. Using agile at more strategic levels can help in two ways.

First, it can help us to avoid project failures. Even projects that are managed according to agile principles can fail, and when that happens, it’s often due to factors that lie beyond the short horizons that we focus on during iterations. To cite just one example, the project that gave birth to Extreme Programming, Chrysler’s C3 project, eventually failed. Kent Beck, the project’s longtime leader and the primary inventor of Extreme Programming, explained that over the course of the project, two crucial customer responsibilities diverged: responsibility for specifying project requirements was given to one customer representative, whereas responsibility for evaluating the project’s performance was given to another. The team failed to notice this risky situation, and ultimately the project was deemed to be a failure, even though the team had been successfully doing all that was asked of it. To avoid such situations, an agile project must periodically lift its gaze toward longer horizons and more distant connections, staying aware of the larger context in which the project exists.

There’s a second way that strategic application of agile principles can help. Our goal as software developers should not be merely to have gigs today; we want to have careers. Each project can be a success or failure by itself, but we should also measure it by the position it leaves us in—whether we are well placed to land the next project, and whether we’ve learned things that will make us even better at what we do.

In much of his writing about agile, Alistair Cockburn compares software development to a “cooperative game”—that is, a game where players collaborate to see how much they can achieve together, rather than competing to see who “wins”.

One particularly compelling example of a cooperative game is rock climbing. A big part of rock climbing is “setting up for the next win”. Experienced climbers plan sequences of moves so that, when they’ve traversed one section of the climb, they’re properly “set up” for the next section. That can involve the particular positions of hands and feet, so that the body is properly situated to reach for the next hand- or footholds. It can also involve ending up at the right place on the cliff face, so that there’s a feasible path onward.

Additionally, rock climbers follow planned progressions in their climbs, choosing new climbs that will expose them to new challenges and help them learn new skills. This is how climbers prepare for more challenging climbs to come.

So in several ways, part of the game of rock climbing is keeping the game going. That applies to our goal in software development. We want to succeed in the narrow senses of fulfilling all the requirements, keeping costs and timelines in check, and having delighted users. But we’d also like to succeed in much bigger ways:

  • Having our customers think “We’d like to work with them again!”
  • Getting references and recommendations from our customers.
  • Most importantly, getting better with each project, improving our skills and practices and not making the same mistakes over again.

Without sacrificing any of the short-term excellence our customers are paying for, we want to use each project to set up for the next win—by providing us the opportunity to play again, and by improving the way we do things so that we can take on bigger challenges and do an even better job.

That’s why we want to take agile from tactics to strategy.

Feedback and Economy

Before we get into the details, though, it’s important to understand the fundamentals of Agile. Many people tend to think of Agile in terms of a particular process—XP, say, or perhaps Scrum, or some hybrid, in-house agile process. But those processes are just manifestations of agile principles in the context of software project management. At a deeper level, what’s Agile all about?

It’s well known that feedback is a crucial part of agile processes. Under scrutiny, agile processes begin to look like feedback engines. Our friend Venkat Subramaniam has referred to agile development as “feedback-driven development”.

You can see the emphasis on feedback in the opening statement of the manifesto:

We are uncovering better ways of developing software by doing it and helping others do it.

Instead of sitting back and thinking about how to do software development better, the Agile Manifesto emphasizes getting to work and paying attention to the results as a way of improving.

But not just any feedback will do. There are many kinds of feedback mechanisms in software processes, and some of them are far from agile. In some cases the feedback comes too late to help; in others, the cost of gathering the feedback overwhelms the process. (And all too often, both of those things are true.) How do we choose the right feedback mechanisms?

The next line of the manifesto gives us a clue:

Through this work we have come to value …

Value: we should focus on what’s valuable, and what provides greater value. Value decisions can be about ethics and morals, but in business contexts they’re also about what gives the most bang for the buck. We should assign greater value to feedback mechanisms that provide more for less cost.

Feedback in agile processes should be both timely and economical. The cost of the feedback mechanism should be matched to the size of the decisions being evaluated. What that means is that, for small-scale, tactical decisions, we need very cheap feedback mechanisms, because we’ll be engaging those mechanisms frequently. And for larger-scale strategic decisions, it’s reasonable for the feedback mechanisms to be a little more costly, because we’ll be looking at larger timeframes and reaping similar benefit from the lessons we learn.

From Iteration to Project

As mentioned above, many teams now embrace agile at the tactical level. And that’s definitely the right place to begin. Starting with the idea of iterative development, it’s very common for teams to embrace practices like:

  • daily standups
  • pair programming
  • spiking
  • behavior- or test-driven development
  • refactoring
  • technical expertise
  • continuous integration
  • customer always available

Those practices make iterations run well, and help the team keep up a steady rhythm of productivity. But it’s common to see teams that don’t really take a similarly agile approach to managing the project as a whole. Practices that contribute to project agility include:

  • well-understood roles
  • story point estimation
  • risk analysis
  • burndown tracking
  • acceptance testing

Story point estimation and risk analysis can work together to contain risk on the project. Many teams don’t like to estimate stories until the iteration when they’re to be worked on. And there are some good arguments for that approach; often the team doesn’t know enough about some stories to give accurate estimates several iterations in advance. However, advance story point estimation helps to identify high-risk features or areas of uncertainty. And risk analysis builds on that, looking explicitly at project risks, helping the team to identify them early and plan for them. Risky areas of the project, by their very nature, threaten to take longer or require more effort than might be expected, so effective risk management involves addressing risks as early as possible in the project, leaving time to react and adjust.

At Relevance, we usually do early estimation of every story, and track that number as the “original estimate”. Later, when we are ready to schedule the story for the current iteration, we estimate the task again; that number is called the “planning estimate”. By that time we know more about the task itself, as well as the current state of the system, so the planning estimate is usually much more accurate. Some projects spend more time than others on original estimates, but we do track both numbers so that we can improve our estimating skills.
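As a hedged sketch of what tracking both numbers might look like (the field names and figures are invented for illustration, not our actual tooling), each story carries its original and planning estimates, and the drift between them tells us how our early estimates trend:

```ruby
# Hypothetical sketch: record an "original estimate" and a later "planning
# estimate" for each story, then measure how far off the early numbers run.
Story = Struct.new(:name, :original_estimate, :planning_estimate) do
  # Relative drift of the early estimate against the better-informed one.
  def drift
    (planning_estimate - original_estimate).to_f / original_estimate
  end
end

stories = [
  Story.new("CSV export",    2, 3),
  Story.new("SSO login",     5, 8),
  Story.new("Audit logging", 3, 3),
]

# A positive average means our early estimates tend to run low.
average_drift = stories.sum(&:drift) / stories.size
puts format("original estimates drift %+.0f%% by planning time", average_drift * 100)
```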

Story point estimation is a great example of how practices in agile processes support and reinforce one another, and how small changes in a practice can increase its value. Breaking features up into small tasks, or “stories”, is a key part of many agile processes. However, it’s sometimes difficult to know whether stories are sufficiently small to be well understood, or whether a story might be hiding poorly understood complexity.

Rough estimates in terms of “points” can help. A story that the team agrees is just one or two points is clearly small and simple, and almost certainly well understood; it might correspond to anywhere from one to four hours of work.1 But what about a story that seems like four points? Five? Ten?

There are problems lurking there. A four-point story will probably take, at minimum, half a day, and more likely an entire day or more. Day-long tasks are fairly big; a lot of things need to be done, and any one of them could turn out to be more difficult than expected.

Additionally, there is error in every estimate, and granularity matters. The difference between a one- and a two-point task is 100%, but the difference between a four- and a five-point task is just 25%. And when you get up to nine and ten points, the difference is 10% or so. Can you really estimate a two- or three-day task with that degree of precision?

To deal with these issues, many teams force larger estimates to be less precise. Some teams only allow using powers of two as estimate values (1, 2, 4, 8, etc.). At Relevance, we've settled on using Fibonacci numbers (1, 2, 3, 5, 8, 13, and so on). Either series will work, and both have important advantages. For one thing, there are few time-consuming arguments about one point here or there on large stories. Additionally, this system makes it difficult to hide risk. Larger stories tend to blow up to much larger point values (for example, because nobody's quite comfortable calling it a five, and the next alternative is eight). That helps highlight the risk, and is an acknowledgement of the uncertainty inherent in large stories. Ideally, any story over five points will be split into multiple smaller stories. Doing that might involve some risk analysis, or a conversation with customers to clarify the requirements, or perhaps a quick spike to explore the uncertain technical aspects of a story. That entire process is unusually effective risk management.
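Snapping a raw estimate up to the allowed scale is simple enough to sketch in a few lines. The Fibonacci values match the ones above; the helper itself is an invented illustration, not a prescribed tool:

```ruby
# Hedged sketch: snap a raw, gut-feel estimate up to the team's allowed
# scale of values.
FIB_POINTS = [1, 2, 3, 5, 8, 13, 21].freeze

def snap_estimate(raw_points)
  FIB_POINTS.find { |p| p >= raw_points } || FIB_POINTS.last
end

# A "feels like a 6" story can't hide as a 5 -- it becomes an 8,
# which flags it as a candidate for splitting.
snap_estimate(4)  # => 5
snap_estimate(6)  # => 8
```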

Story estimation is also essential for burndown tracking. Burndown charts are valuable tools for project managers, customers, and developers alike, but far too few agile projects employ them or other analytical tools to help gauge the progress of the entire project over the course of several iterations.
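As an illustration of the kind of analysis a burndown enables (all numbers here are invented), computing the remaining points per iteration, plus a naive completion projection, takes only a few lines:

```ruby
# Starting from the total estimated points, subtract the points completed
# in each iteration to get the burndown series.
total_points            = 100
completed_per_iteration = [12, 9, 14, 11]

remaining = completed_per_iteration.each_with_object([total_points]) do |done, series|
  series << series.last - done
end
# remaining is now [100, 88, 79, 65, 54] -- the y-values of a burndown chart

# A naive projection: points left divided by average velocity so far.
velocity        = completed_per_iteration.sum.to_f / completed_per_iteration.size
iterations_left = (remaining.last / velocity).ceil
puts "about #{iterations_left} iterations to go at current velocity"
```

Even this crude projection, recomputed every iteration, gives customers and developers a shared, whole-project view that iteration-level tracking alone can't provide.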

Longer Terms, Longer Scales

The real focus of this article, though, is applying agile principles beyond even the project scale. We can use agile principles to build better teams and software businesses:

  • improving, project over project,
  • finding and choosing new projects,
  • leveraging existing projects and relationships,
  • observing and embracing change in our field and the industry around us.

Here are the things we’ve been doing at Relevance. We’re still learning how to do them well—that’s a big part of what being agile is all about—but we’re convinced that this is what agile principles look like when applied strategically.

Pair Everything

Pair programming has taught us something unintuitive: two people can be more productive working on a single task, guiding and checking each other as they work, than they can be working separately on two tasks. There are many reasons why pairing is good for productivity, but one crucial realization is that pairing helps eliminate mistakes that ordinarily wouldn’t be detected until later, when detecting and correcting them is more costly. Pair programming is a low-cost, immediate feedback mechanism.

It’s an interesting exercise to compare pair programming with “programming by committee”. If pair programming is good, wouldn’t group programming be better?

My colleague Stu Halloway frequently gives conference talks that include live coding, and makes jokes about taking pair programming to the next level, to “giant-room-full-of-people programming”. That turns out to work fairly well in Stu’s talks, but that contest is rigged: Stu carefully plans and rehearses those demos, guides the discussion with an experienced hand, and relies on the audience only to help him out of rare slipups.

True “group programming” doesn’t really work well at all. Just like any committee endeavor, more often than not the group gets bogged down in petty disagreements about inconsequential decisions, or becomes mired in “analysis paralysis”. In programming, one person making decisions is good, and two together are better—and beyond that things quickly fall apart.2

That lesson can apply equally well at the strategic level. I don’t mean that two people should sit side-by-side, hour after hour, day by day, thinking “strategic thoughts.” As discussed above, we should strive for economy by matching our practices to the scale of our decisions and feedback loops. For strategic practices and decisions, we should “pair” by seeking someone to plan with, bounce ideas off of, and muse about lessons learned.3 At Relevance, for example, we have “PM buddies”—each project manager has someone they consult with periodically to hold them accountable, help them stay focused on what’s important, suggest ideas for dealing with problems, and try to make sense of confusing patterns. PM buddy conversations happen weekly (more or less), and are incredibly valuable.

Use the Right Medium

Part of any successful agile project, we believe, is choosing the right technologies for the project. We do believe agile principles can still help you be successful no matter what tech mix you’re working with. Nevertheless, quality technology that helps you to both move quickly and respond to change really amplifies your success.

That’s doubly true when taking a strategic view. In the past, many consultants and consulting firms have tried the opposite approach: they’ll look for a technology that is complicated and difficult to work with and teach, become expert in that technology, and then help to promote that technology, helping to create a market for their specialized knowledge. That strategy has worked well for some, but we think customers are wising up to that approach. Furthermore, it’s not a recipe for true success. We want customers that continue to work with us, and tell their friends about us, because they enjoy the working relationship we have and because we build software that rocks—not because they’re kind of stuck with us because their employees don’t understand (or want to understand) what we do.

We look for tools and technologies that are as agile as we want to be. That’s why we love low-ceremony languages like Ruby, Clojure, Smalltalk, and Haskell. That’s also why we like platforms like Rails and Seaside, and simple, agile project management tools like Mingle, Pivotal Tracker, and Lighthouse.

Know Your Communication Pipeline

Some people prefer voice, some email, some IM; others strongly prefer face-to-face communication. Spend time figuring out what your customer’s style is and adapt to it! It’s worth spending time trying out new ways of communicating so that you can choose the right communication channel for different types of information.

Adding steps or other channels to the communication pipeline is a project risk, but in the real world it happens. We have learned many surprising things from asking project owners to walk through how they present the project to their stakeholders. One customer had an absentee stakeholder who never logged into Mingle, but who received a printed (on paper!) summary of Mingle activity from the accessible stakeholder. This is not an isolated case. Taking customer preferences like that into account, we place a much higher value on printing Mingle cards than we would for ourselves.

We also had a customer who used Mingle card transitions as the primary indicator of progress, while the team was pushing updates via email and updating Mingle infrequently. What happened there? The team had adapted to one customer (who preferred direct communication) and carried that same style of communication over to a subsequent project.

Now we understand that different customers have different preferences, and we try to have multiple options. We also spend time at the start of a new project figuring out what channels will work best for the particular mix of participants. Communication channels are important enough to deserve explicit attention.

Be “Generalizing Specialists”

Another traditional pattern in software development is to specialize, specialize, specialize. “Nobody hires generalists,” we’ve heard. And there’s some truth in that. But what we’ve noticed is that, while perhaps few teams go out in search of a generalist for a new position, they all want badly to keep any generalists they have on staff.

So specialize, yes, but look for opportunities to broaden your base of skills. Is your project tripping over a problem that’s outside your skill set? Spend at least a little time trying to solve it yourself, rather than waiting for someone else to save the day, or going in search of another expensive specialist. Over time you can build a broad base of general skills that will be applicable across many projects.

There’s a balance to be struck here: if there’s an expert on the team who can solve that specialized problem in a few minutes, your two days trying to figure it out yourself is just a waste of the project’s time. But you can ask to pair on the task with the expert, so that maybe you can handle simple things yourself in the future. (And as a team, you should foster a culture where experts want to share their expertise, not hoard it.)

One recent project at Relevance presented unusual challenges in the area of database query performance. We’ve all got experience with query optimization, but the sheer amount of data and the complexity of the queries in this case went beyond what we knew. We found ourselves balancing conflicting goals:

  • We wanted to solve the problem in a timely and cost-effective way for our customer. That goal led us to consider bringing in an outside specialist.
  • We also wanted to gain this experience ourselves, so that we would be better prepared to tackle similar situations in the future.

We ended up using a mix of billable hours and slack time to research and explore some solutions, and then consulted an outside expert to evaluate and critique the solution we’d identified. Since we were broadening our skills, it made sense to spend a few non-billable hours on the task in addition to the hours the customer paid for. Bringing in the outside expert to critique a proposed solution was much more effective than handing over the entire problem. Everyone was happy with the way the process worked.

Additionally, carefully weigh the future value of new skills. If something strikes you as an unusual problem that you may never encounter again in a software development career, then by all means seek out an expert to solve the problem in a hurry so you can get back to what you do best. But in most cases where we’ve heard “That’s not my job” or “I don’t do that stuff”, the problems are the kind that recur on project after project.

One of the reasons we love Rails is that it assumes that developers will be comfortable with all kinds of different tasks: design, HTML, CSS, programming, database design, SQL, query tuning and optimization, testing, automation, and deployment. Hyper-specialization may in some cases be good for an individual’s career, but more sensible specialization—a few core areas of expertise built on a broad foundation of general development skills—is even better, for the individual and for the team. Seek to build a team of generalizing specialists.

The “spike” (or “spike solution”) is a term used in agile processes for a very quick, exploratory experiment or prototype that is used to rapidly, cheaply determine whether some planned design or implementation strategy will work. Sometimes a spike is used to compare two or more possible strategies. Ordinarily a spike should take an hour or two, or at most half a day.

But the real point of a spike is not a particular time box; instead, the point is that the spike should take considerably less time than the full-fledged solution. The goal of a spike is to either prove or disprove an idea for a very small cost, to avoid spending the larger cost for a full implementation (and the plans that go with it) for an idea that ultimately will not work well.

The same idea can be applied to more strategic, process-oriented solutions. It’s often possible to try out a long-term idea—a new organization of a larger group into teams, for example, or a new project estimation strategy—more quickly, and at a lower cost, than would be required for the full-scale solution. Try the idea out for just one iteration, for example, with minimal overhead, and evaluate the result. Just as with a conventional spike, everyone should be aware of the limitations, and that the result of the spike will not have all of the characteristics of the full implementation. But even the spike can reflect the strengths or weaknesses of the idea, and potentially save a lot of money. (Spikes are most valuable, after all, when they show that an idea will not work.)

Be a Partner

Customers aren’t nearly so valuable to your business as good partners. I think that’s obvious enough.

What doesn’t seem to be obvious is that your customers feel the same way about their vendors.

Don’t limit your role on a software project based on what you assume your customer wants. Don’t just passively accept requirements and implement them as faithfully as possible. Instead, learn your customer’s business. Think about, and ask questions about, why features are important and how they will be used. Suggest improvements to features, or entirely new features.

Suggest entirely new projects that you can do together—projects that will help their business, and (obviously) will help yours, if the customer pays you to develop them.

Unlikely, you say? You think they won’t take such suggestions from you? Perhaps that’s because you haven’t taken the partnership far enough. Have you convinced them they don’t need a requested feature yet? Have you explained why a simpler, less costly approach will have more benefits? Have you suggested that they might not really need an entire new project they’ve been thinking about? That’s the kind of thing a partner does.

This is a crucial point. As a partner, you have a stake in the benefit side of the cost/benefit analysis, as well as the cost side. A colleague tells this story: “One of our most disappointing projects hit all the features requested, at planned cost, while shipping real production releases every iteration! Imagine our chagrin when, at the end of the project, the customer performed a benefit analysis (using data we had never seen) and concluded that no custom software needed to be written.”

The bit in the Agile Manifesto about customer collaboration isn’t just a cliché you can use to avoid having to make a detailed, long-term plan. Your customers want you to help them not just by building software for them; they want you to help them understand what software can do, how it can help them—and also how it can’t help them, what its limitations are. We programmers aren’t the only ones who sometimes try to solve people problems with technology; our customers do it all the time. Your customers want you to move around, sit on the same side of the table with them, understand their businesses and their problems, and help to solve them. If you do that, they’ll keep coming back, and they’ll tell others.

Leave Some Slack

It may seem like nonsense, but it’s the truth: not all waste is wasteful.

I won’t spend much time on this point, because it’s been done so well in Tom DeMarco’s wonderful book, Slack. But here it is in a nutshell: people and organizations need some slack—rest, relaxation, downtime, and diversion—to be at their most effective and productive. The most obvious reason that we need slack is so that we’ll have time left in our schedules to take advantage of new, unexpected opportunities, or to adapt to sudden changes. However, the importance of slack is even more basic than that. Slack allows us to be creative, to learn and explore new things, to have the physical and mental energy for bursts of sustained productivity.

At Relevance, we provide for slack time in two ways. First, every Friday is “open-source day”. We use Fridays to work on things that matter to us, and usually that means writing (or contributing to) open-source software projects that help us to work better—such as Castronaut, Tarantula, Micronaut, and Vasco. Some of our Friday time is spent exploring new technologies and tools.

Another way we ensure that we have slack time is by carefully avoiding “death march” schedules. We don’t “sprint” through iterations; we work at a sustainable pace that allows us to stay highly effective throughout the project.

Certain kinds of inefficiencies are deadly, and should be eliminated ruthlessly: busy work, things a well-programmed tool could do for us, things that require our energy and concentration without providing sufficient return. But other inefficiencies, such as having fun at work, pursuing our passions, learning broadly and exploring the nooks and crannies surrounding our work, constitute the slack that gives us room—and the spark—to move.

Lift your gaze beyond your current project, your current technology, and even beyond your current business model (and, for that matter, the business models of your customers and competitors). Look for trends that point to broad-based changes. Software development has changed a lot over the years, and it will change again.

Don’t, however, be naive about trends. Sometimes they represent once-and-for-all sea changes, but more often they’re just parts of larger cycles. For example, we’ve seen a gradual but almost universal turn away from desktop applications toward web apps, and it would be easy to believe that, with connectivity becoming more nearly universal, traditional desktop applications will all but disappear. Take a step back, however, and this looks like part of a cycle, or possibly a step up to a new equilibrium. Architecturally, web apps aren’t so different from how systems were built back in the days of time sharing, and all of a sudden there’s tremendous buzz around building rich apps for phones.

Furthermore, trends can be derailed by unexpected events. For that matter, not all trends are good trends; sometimes the industry might be making a wrong turn, and the best strategy is to find a way to ride it out, solidifying your experience with superior techniques while the industry tries something and eventually finds it wanting. (EJBs come to mind.)

Lift your gaze, spot the trends … and evaluate them carefully. You might lead your customers on a new and profitable road, or perhaps be their guide through some rocky territory. Or you might need to advise them to stay away.

Learning from mistakes doesn’t always just happen; sometimes you have to plan for it. People are so good at noticing things and learning from them at very small time scales that, during activities like pair programming, a lot of the learning happens without our noticing it. Over periods of days, weeks, or months, however, we seem to be equally good at forgetting mistakes, successes, and what we should be learning from them.

That’s why it’s important to have periodic retrospectives—run by someone who understands how to do them well. At Relevance, we hold project retrospectives regularly throughout each project’s lifecycle. More important, perhaps, for strategic thinking as a company, we do company-wide retrospectives every two weeks, and more in-depth retrospectives twice a year. That might seem like a lot, and some of us thought it was overkill when the idea was first proposed—but we’ve all come to find these retrospectives immensely valuable.

Two good books about running retrospectives are Agile Retrospectives by Esther Derby and Diana Larsen, and Project Retrospectives by Norm Kerth.

Build for Tomorrow (But Not Next Year)

A common practice on agile projects is to build just what you need for the requirements you’re implementing today; don’t build infrastructure or glue to support future requirements, because those requirements might change or even disappear altogether. “You Ain’t Gonna Need It” is the mantra.

But experienced developers, even while embracing that practice in general, have always been aware that there are occasional exceptions. When you’re building something very similar to what you’ve built several times before, your experience can tell you that there’s a very high likelihood that you will need something extra, and it will be cheaper to build it now. A few years ago I wrote about one example that I still strongly believe in: if your system needs a configuration language, it’s nearly always a good idea to go ahead and use a real, full-fledged programming language for that task right from the start. On the other hand, only a few systems need that kind of external configurability at all. YAGNI should still be the default position.
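As a small illustration of that configuration-language idea (the names and settings here are invented, not from any particular system), Ruby itself can serve as the configuration format:

```ruby
# Using a real language (Ruby itself) as the configuration format,
# instead of inventing an ad-hoc config syntax. The block below *is*
# the config file's contents; `instance_eval` runs it against a
# Config instance so bare `set` calls work.
class Config
  attr_reader :settings

  def initialize(&block)
    @settings = {}
    instance_eval(&block)
  end

  def set(key, value)
    @settings[key] = value
  end
end

config = Config.new do
  set :workers, 4
  set :log_dir, "/var/log/app"
  # Because this is real code, computed values come free:
  set :threads, 2 * @settings[:workers]
end

puts config.settings.inspect
```

The payoff is everything a real language brings: variables, arithmetic, conditionals per environment, and no parser to write or maintain.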

At the process level, projects tend to be much more alike than they are down at the level of code and tests. Our experience as project managers and participants tells us that some of the problems we encounter on current projects are quite likely to occur on future projects.

As a result, when we build things to help us solve our project’s problems—whether technical solutions or process elements—we consider taking the time to build a little more generality or communicate it to the larger team in such a way that it can be an ongoing, repeating part of our process. We tell the other project teams at the next retrospective, for example, or check the automation tool into a company-wide tools repository rather than the project-specific repo.

At the same time, though, we want to ensure that such things don’t become onions in the varnish. We take to heart Alistair Cockburn’s admonishments that all process elements and tools have costs of their own, and that they tend to outlive the problems that they were intended to solve, eventually becoming costly drags on efficiency. So we consider our processes during retrospectives, pruning them ruthlessly of procedures that cost more than they’re worth.

Change, Don’t Churn

As technologists, most of us want to stay on the cutting edge, using tools and techniques that represent the best we currently know how to do. However, there are some real dangers in riding the wave too aggressively.

The first danger is that we might be bitten by hidden flaws. New things often have compelling advantages that are obvious, paired with serious problems that are much more subtle and might only become clear over time. Jumping too early to a new tool or technique, without sufficient experience, can leave you vulnerable when those flaws decide to bite.

The other danger is that of constant churn: being so busy changing that you never build the deep expertise that lets you move really quickly.

You absolutely should stay abreast of new technologies and techniques, evaluating them for inclusion in your toolset. And you probably should be more aggressive about adopting them than the average software team; stagnation is a bigger problem in our industry than excessive change. But be aware of the cost of change itself. If a new tool or framework is 15% better along some axis than the one you’re currently using, a wholesale switch is a very questionable proposition. There may be weaknesses you’re unaware of that offset the obvious gains. Furthermore, deep expertise counts for a lot more than a 15% benefit, so during the period when you’re learning the new tool, you’ll be moving more slowly.

This is part of what we use slack time for: exploring new ideas and technologies on play projects and internal tasks, trying to stay abreast of new things so that, when something clearly has a really substantial benefit over current practice (as Rails did a few years back), we’ll be ready.

Measure the Unmeasurable (Skeptically)

Programmers are always pointing out that things “can’t be measured”—and often we’re right about that. In our field, especially as scales grow beyond tiny, controlled experiments and toy programs, there are way too many variables to know for sure what’s being measured. Additionally, we deal with nebulous things like “features” and “requirements” that don’t even have clear definitions, in many cases.

Tom Gilb is a software consultant who is most famous for stating what has become known as “Gilb’s Law”:

Anything you need to quantify can be measured in some way that is superior to not measuring it at all.

Gilb’s law points out that “that’s not measurable” is often a cop-out. There’s always some way to measure things. The measurement may be inaccurate or questionable in many ways, but it can still give us valuable information that we can’t get if we don’t even try to measure.

Here’s an example: people with chronic pain report their pain level at each medical check-in, using a number from 1 to 10. That number is totally subjective, but more meaningful to the patient than any number of objective measures. Moreover, the way the number changes over time matters much more than the specific value on a particular day.

We can do the same thing in software. During retrospectives we ask every participant for several subjective, 1-to-10 metrics like that. It turns out that asking about happiness with the pace of the project and getting a chorus of “eight!” responses is a better indicator than even a beautifully predictive burndown chart.
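A hypothetical sketch of that kind of tracking, with made-up scores: collect the 1-to-10 answers at each retrospective and watch the direction of movement, which matters more than any single value.

```ruby
# Classify the trend of a series of subjective 1-to-10 scores
# collected at successive retrospectives. The direction of the
# average change is the signal; the individual values are noise.
def trend(scores)
  deltas = scores.each_cons(2).map { |a, b| b - a }
  average = deltas.sum.to_f / deltas.size
  if average.positive?
    :improving
  elsif average.negative?
    :declining
  else
    :flat
  end
end

puts trend([7, 7, 8, 8, 9])  # => improving
puts trend([8, 7, 7, 6, 5])  # => declining
```

A steadily declining series is worth a conversation even if every individual score still looks acceptable.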

That said, it’s important to recognize the limitations of such “measurements of the unmeasurable”. A few years ago, I blogged about some of the limitations and hazards hidden in Gilb’s Law. Later, Jason Yip formulated a brilliant corollary of his own:

Anything you need to quantify can be measured in some way that is inferior to not measuring it at all.

Taken by itself, “Yip’s Corollary” is so obvious as to be useless. But combined with Gilb’s Law, it is a vital dose of reality and wisdom. Clearly there must be something that’s better than nothing—but just picking something at random could easily be worse!

I think the lesson is clear: measure the unmeasurable, yes—but be very careful about what metrics you choose, and avoid using the results to draw firm conclusions. Instead, use your measurements to frame hypotheses that you then test through the other mechanisms we’ve discussed.

Want to Succeed

Finally, of course, none of this will matter unless you really want to succeed and make it work. None of it is difficult to understand. What’s difficult is actually doing it, day to day. Doing things consistently is hard. Reflecting while you’re working—thinking about what works well and what doesn’t, and how you can improve—is hard. Leading and participating in honest, open retrospectives is hard. Leaving slack time in your schedule when there’s money that could be made is hard. Being honest with your customers when doing so means giving up short-term revenue—or even (gasp!) admitting you made a mistake that cost them money—is hard.

You can succeed without doing the hard stuff, but the odds are against you. Similarly, you can fail while doing the hard stuff, but the odds are decidedly in your favor. By following agile principles, you can have:

  • a happy, talented team,
  • solid, working systems for your customers,
  • customers who enjoy working with you, and
  • nimbleness and readiness to roll with change,

and you’ll maximize your chance of success as individuals and as a team.

Is that what you want?


This article is a companion to a conference talk of the same name. To see it, try to catch one of Stuart Halloway’s appearances at a No Fluff, Just Stuff show this year. Alternatively, you can contact us to have one of us come speak to your group.

Comments and suggestions are most welcome.

Thanks to Stuart Halloway and Jess Martin for thoughtful comments on an earlier draft of this article.

  1. The way teams think of points as corresponding to real units of time is itself a complex topic. We will discuss that more in our upcoming article about Iteration Zero.

  2. This does not mean that having multiple pairs review a critical code path is a bad idea, though. One of the most interesting lessons of the conference talks is that occasionally somebody will approach the speaker in private after the talk and propose a better solution than anyone in the room came up with!

  3. In fact, in some ways development is the least important thing to pair on! Development has unit tests, and leaves the clearest artifacts for review later. Business analysis, customer interaction, interface design, sales, marketing, and invoicing have nothing as good as unit tests for a single person to use, and leave more ambiguous artifacts for later review.

Feb 01 2009


Six minute "Why git?"

When you successfully deploy a useful technology, you will see one of two things happen:

  • incremental (and possibly large) improvements to how things get done
  • a transformation to new ways of doing things that were unimaginable before

Distributed version control, such as git, is clearly in the latter category. I could make all sorts of detailed arguments about why this is true. (And in fact, I will be speaking about git as often as Jay will let me on the NFJS tour this year.)

But talk is cheap. Just watch this visualization of the Rails commit history. It is only six minutes long, so take the time to watch it full-screen, start to finish, with audio. Rails switched from svn to git in April 2008, just past the five minute mark in the video.

If you run a significant open source project that is not on a distributed SCM, it is a clear warning sign that you are a dinosaur. And this economy may be your meteor.

Jan 31 2009


Clojure Interview on InfoQ

Werner Schuster over at InfoQ has interviewed me about Clojure. Check it out.

Jan 30 2009


Programming Clojure Beta 6 is Out

Programming Clojure Beta 6 is now available. What's new:

  • A new chapter, "Functional Programming," explains recursion, TCO, laziness, memoization, and trampolining in Clojure.
  • A new section, "Creating Java Classes in Clojure," shows how to create and compile Java classes from Clojure.
  • The snake example has been moved into the concurrency chapter. The snake demonstrates how to divide your model layer into two parts: a functional model and a mutable model.
  • The STM example in the introduction now uses alter instead of commute, since commute allowed a race condition. Given the problem domain of the example, the race condition was acceptable. However, explaining this in the introductory chapter would have been distracting.
  • The lancet runonce function now uses locking instead of an agent. Yes, you can do plain old-fashioned locking in Clojure! Agents are unsuitable for the kind of coordination lancet requires, because you cannot await an agent while inside another agent.

While I am talking about lancet: it now has its own repository. I have also added new integration, so lancet can now call either Ant tasks or other applications. It is still far from being a replacement for Ant or Rake, but maybe your contributions can change that!

The book is now prose-complete, so if you have been waiting for the whole story, this is it. A number of readers have made suggestions for additional topics. If you have suggestions, please add them as comments to this post. For reasons of space and time, most new topics will appear in a series of articles on this blog, not in the book itself.

Clojure and clojure-contrib continue to evolve. To make sure you have the latest, greatest version of the sample code from the book, go and grab the github repo.

Thanks again to everyone who has been offering feedback. I have cleared almost 400 entries from the errata/suggestion page.  Keep the feedback coming!

Jan 26 2009


It's OK to Break the Build

Here at Relevance we are passionate about testing and about Continuous Integration (CI). So passionate, in fact, that we have started a separate RunCodeRun blog for all things testing and CI. Go and check out the latest entry, It's OK to Break the Build.

And if your Ruby project isn't on CI yet, give RunCodeRun a look.
