We had a terrific conference retrospective this weekend at the Twin Cities NFJS show. Eight people participated, with me facilitating. The conversation was good, and everyone left excited to implement the SMART goals that we chose.
As usual, we followed the basic structure from the Agile Retrospectives Book: Setting the Stage, Gathering Information, Generating Insights, Deciding What To Do, and Closing.
Setting the Stage
The NFJS conference retro is an odd beast: a sixty-minute retro embedded in a ninety-minute talk about using retros in your own organization. This poses a few special challenges:
- The team needs to have a working agreement to occasionally "go meta" and talk about the mechanics of the retro during the retro. This risks reducing the value of the retro, but so far attendees have found it effective.
- The team isn't really a team: it is a subset of the conference attendees thrown together for the first time to do the retro. So, the facilitator has to choose the activities in real time to match the size and composition of the group.
Sticking to the time box, we covered these ground rules in the first five minutes.
Gathering Information
To gather information, we used the team radar activity from the book. A few minutes of brainstorming identified about ten possible items to rate. That is too many to rate meaningfully, so we used dot voting to narrow the list to four items. Each participant used a Sharpie to make three dots next to the items they considered most important. As usual, moving about the room helped people get engaged physically and mentally.
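The dot-vote mechanics are simple enough to sketch in a few lines. The item names and ballots below are hypothetical stand-ins, not the actual votes from the retro; each participant distributes three dots, and the four most-dotted items survive.

```python
from collections import Counter

# Hypothetical candidate items from the brainstorm (not the real list).
candidates = ["content", "speakers", "tracks", "price", "venue",
              "food", "wifi", "schedule", "slides", "networking"]

# Hypothetical ballots: each of eight participants places three dots.
ballots = [
    ["content", "speakers", "price"],
    ["content", "tracks", "price"],
    ["speakers", "tracks", "content"],
    ["price", "speakers", "tracks"],
    ["content", "speakers", "price"],
    ["tracks", "content", "speakers"],
    ["price", "speakers", "content"],
    ["tracks", "price", "content"],
]

# Tally every dot, then keep the four most popular items.
tally = Counter(dot for ballot in ballots for dot in ballot)
top_four = [item for item, _ in tally.most_common(4)]
print(top_four)
```

With these made-up ballots, "content", "speakers", "price", and "tracks" advance to the rating step, mirroring how the group narrowed ten items to four.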
The items and their individual ratings (1=worst, 10=best) follow:
- meaningful content: 8, 7, 8, 7, 7, 7, 7
- speaker quality: 9, 8, 8, 8, 8, 8, 9
- track organization/session selection: 7, 8, 6, 6, 6, 6, 5
- price/value: 8, 7, 6, 6, 8, 7, 6, 8
These numbers are quite high; possibly the highest I have seen at a conference retro. The lowest scores were on track organization. When selecting talks for a conference it is almost impossible to please everyone, so while those scores were lower I was not optimistic about finding a useful recommendation. (Note that as the facilitator I kept these musings strictly to myself during the retro!)
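For readers who want the arithmetic made explicit, a quick sketch averaging the scores listed above (Python, not part of the retro itself) bears this out: speaker quality averages about 8.3, while track organization sits around 6.3.

```python
# Team radar ratings transcribed from the retro (1 = worst, 10 = best).
ratings = {
    "meaningful content": [8, 7, 8, 7, 7, 7, 7],
    "speaker quality": [9, 8, 8, 8, 8, 8, 9],
    "track organization/session selection": [7, 8, 6, 6, 6, 6, 5],
    "price/value": [8, 7, 6, 6, 8, 7, 6, 8],
}

# Average each item and list them from strongest to weakest.
averages = {item: sum(scores) / len(scores) for item, scores in ratings.items()}
for item, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{avg:.1f}  {item}")
```

The sort makes the gap visible at a glance: every item averages above 6, which is why the group's attention went to the relatively weak track organization score.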
Generating Insights
To generate insights, we used the learning matrix. This exercise uses four quadrants (happy/sad/ideas/appreciation) to guide brainstorming. In the interest of space I will not reproduce the matrix results here: the information would be meaningless without pages of explanatory text.
Unusually, the "ideas" section filled up first. In fact, to hit the time box we didn't even manage to fill the other quadrants.
Deciding What To Do
To keep to our sixty-minute time box, we agreed to dot vote for two items on the ideas list and convert them to SMART goals. A SMART goal needs to be *S*pecific, *M*easurable, *A*chievable, *R*elevant, and *T*imely.
We ended up selecting these goals:
By February 2010, update the website and printed materials to include a single line of text describing the "recommended audience." Free text might seem dumb compared to some kind of tagging, and it is deliberately so. We started to discuss a system of categories and realized that it would make the goal more complex, and therefore less likely to be implemented. Keep it simple.
By February 2010, run a birds-of-a-feather (BOF) session titled "What To Do on Monday!" Everyone agreed that the conference talks were inspiring, and included ideas that could create massive improvements back at the day job. But massive improvements are often beyond the immediate control of conference attendees, and it is frustrating to go back to work on Monday and not be able to start a new practice. The BOF will allow attendees and speakers to look across all the conference topics and generate a list of bite-sized ideas. If the BOF goes well, we will use it to generate an outline for a formal talk.
February 2010 might seem like a long way off, but since the retro was a breakout team that did not include the people who would actually have to implement the recommendations, it seemed wise to allow the work to be done during the winter off-season for the NFJS tour.
Closing
To close the retro, we did a quick thumbs up/down/sideways check to see whether the participants found the retro useful. Votes were seven up, one sideways.
All in all, a good sixty minutes. I am excited about implementing the new ideas, and I think that the regular recurring nature of NFJS provides a great opportunity to close the feedback loop faster than at a conference that runs less frequently.