The Product Owner View
This page describes the pieces of Relevance's software development process from the point of view of a product owner rather than a developer.
We'll begin by defining, in very simple terms, the overall philosophy that (we believe) makes Relevance's approach to software development work. Then we'll define roles more precisely, and continue with project structure and practices, from the overall view down to the day-to-day.
Relevance’s Approach in a Nutshell
Risk management is project management for grownups.
—Tom DeMarco and Timothy Lister, in Waltzing With Bears
There are two big risks in software development. The first, and most obvious, is not getting the thing done. The second is building the wrong thing.
One way to avoid the second problem is to postpone doing anything until you're sure you know what’s desired. But that really courts the first risk: not delivering anything.
At Relevance, we take a different approach that can be summed up in three simple rules:
- We are determined to deliver new business value every single day of the project.
- We ask our customers to give us feedback on everything we do, as soon as possible after we do it. Making adjustments costs very little if we can do it right away.
- We want to improve continuously. We ask both ourselves and our customers to do things just a little bit better every day.
Dividing people into two roles—product owner and developer—is somewhat arbitrary, so we'll start by being clear: what is a product owner? And what are some of the more specific roles that a product owner might fill?
A "product owner" is anyone who has an interest in the completed project. Obviously, in a broad sense, that includes us at Relevance and potential end users of the finished project. But we use the term in a slightly more restricted sense. Product owners are members of the customer organization who will be involved in specifying and accepting the software; in other words, those who are most interested in using the system to accomplish business or organizational goals.
Product owners are those who participate in any of these activities:
- making the decision to engage Relevance for a project
- specifying requirements; writing story cards
- participating in iteration planning meetings; setting priorities
- participating in daily standup meetings
- accepting tasks as completed
- answering questions from the development team to clarify requirements or priorities
Within the customer organization, there may be many product owners. And conflicting priorities for the project are natural in such a situation. Those conflicts and discussions are best handled by the customer. To improve communication and keep development moving ahead with clear priorities, we ask each of our customers to name a "project sponsor." That sponsor is the primary point of contact for Relevance. The sponsor has the final say about task priorities, clarifying requirements, and other decisions that belong to the customer.
None of that is to say that the sponsor is the only point of contact. Other product owners are welcome to attend daily standup meetings, and their perspective on issues is crucial. Part of the project sponsor's responsibility is to hold off development of features where there is serious debate among product owners about how the feature should work.
Business Analyst
The business analyst is responsible for elaborating and describing requirements so that they are detailed enough for development of a feature to begin. The analyst works with story cards and must be able to express each requirement in terms of objective acceptance criteria.
On some projects, the business analyst is a product owner; on others, one of Relevance's developers or the project manager takes on this role.
Project Manager / Coach
The project manager (also called "project coach") is a Relevance employee who is responsible for ensuring that the project continues to move forward toward the goal. The coach will schedule most of the necessary meetings and (except where a facilitator is involved) prepare any documents that should result from those meetings.
The coach should be watching for potential issues, including changing customer goals, technical gotchas, personnel scheduling challenges, business analysis lagging behind development capacity, and other things that could result in delays or misunderstandings. Resolution of those issues in a timely manner is the primary responsibility of the coach.
The Development Team
The development team, consisting of developers, architects, and designers, works together as one self-organizing, cross-functional team.
Members of the development team employ the following practices:
- pair programming
- test-driven and behavior-driven development (TDD and BDD)
- code review
- objective acceptance criteria
- continuous integration
- code coverage analysis
- distributed source control
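To make the connection between TDD and objective acceptance criteria concrete, here is a minimal sketch of what an executable acceptance test for a story card might look like. The story, the `AccountRegistry` class, and all names in it are hypothetical illustrations, not part of any real Relevance project:

```python
# Hypothetical acceptance test for a story card such as:
#   "A visitor can create an account with a unique email address."
# AccountRegistry is a toy stand-in for the system under test.

class DuplicateEmailError(Exception):
    """Raised when an account already exists for the given email."""

class AccountRegistry:
    def __init__(self):
        self._accounts = {}

    def create(self, email):
        if email in self._accounts:
            raise DuplicateEmailError(email)
        self._accounts[email] = {"email": email}
        return self._accounts[email]

def test_visitor_can_create_account():
    registry = AccountRegistry()
    account = registry.create("pat@example.com")
    assert account["email"] == "pat@example.com"

def test_duplicate_email_is_rejected():
    registry = AccountRegistry()
    registry.create("pat@example.com")
    try:
        registry.create("pat@example.com")
        assert False, "expected DuplicateEmailError"
    except DuplicateEmailError:
        pass

test_visitor_can_create_account()
test_duplicate_email_is_rejected()
print("all acceptance checks passed")
```

Because each assertion maps directly to a sentence in the card's acceptance criteria, a non-team member can read the tests and confirm the card is complete.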
Quality Assurance (QA)
Quality assurance is a role that team members assume as needed to verify that a completed story meets its objective acceptance criteria before showcasing that story to product owners.
Most of our testing is fully automated, in the form of executable unit tests or integration tests. But automated tests can't catch every possible error. The QA role should perform more "exploratory" testing, manually exercising the feature and looking for problems (including problems with the user interface). When possible, the developer in the QA role should identify additional automated tests that would be helpful and either write such tests (if that can be done quickly, as part of the QA process) or write a new story card for more involved tests, so that they can be prioritized and scheduled by the customer.
The developer assuming the QA role should not be one of the developers who completed the story in question. (Some simple stories can skip the QA step, not because there can't possibly be errors in them, but because the likelihood is deemed small enough that the cost of QA would exceed the benefit.)
Facilitator
Too often, in important meetings, the manager or someone else with a vested interest in the meeting's result also runs the meeting. We have found that this can lead to lower participation and morale—essentially a wasted meeting.
The facilitator is a meeting chauffeur, a servant of the group. Neutral and non-evaluating, the facilitator is responsible for making sure the participants are using the most effective methods for accomplishing their task in the shortest time.
—Michael Doyle and David Straus, in How to Make Meetings Work
Some project meetings are best run by a neutral party who is not a regular member of the project team. A facilitator acts in this neutral role, focusing on the mechanics of the meeting itself. In addition, a facilitator can provide outside counsel, calling the team's beliefs into question to prevent "groupthink".
Like QA, the facilitator role is transient; for each meeting that requires a facilitator, someone from another Relevance team will volunteer to be the facilitator. (Several Relevance employees have focused on learning the facilitator role, and have a lot of experience at doing it well.)
The two most important roles on any project are the project sponsor and project manager. The project sponsor owns the vision for the project, choosing priorities and accepting completed work. The project manager makes sure the process is followed, keeps the team communicating, and removes obstacles to the team's success.
Project Management Practices
Relevance uses a modified agile development methodology for ensuring development quality, timeliness, and utility. The components of our development strategy have been honed over years of experience; however, our methods are flexible enough to suit the needs of a variety of clients, and we tailor our approach to each new project as necessary.
A development engagement with Relevance consists of a series of iterations, and follows these general lifecycle stages:
- Iteration 0: An initial period of 2-10 days where the project team gains an understanding of the product owner's company and industry, their requirements, and their goals for the team and project. Exit criteria are focused on specific deliverables (different for each project) that indicate the key challenges are understood and the team is prepared to embark on the development process. This includes setting up project management and code repositories and establishing environments, services, and databases as needed. Additionally, 1-2 iterations' worth of story cards are prepared to enable immediate progress on the system.
- Iterations 1 - N: Development iterations, two weeks in length. At the beginning of each iteration we hold an iteration planning meeting to choose the highest priority functionality to be addressed. At the end of each iteration is a walkthrough of what was accomplished, as well as the delivery of a deployable application.
- Final Iteration: Final wrap-up of the functionality and deployment of the system to the production environment.
- Support Phase: Relevance works with the customer to deal with issues after the deployment of the application, addressing newly discovered defects or performance enhancements as needed.
The most important part of any development process is the management of communication among the entire team, from developers to business analysts to beta customers and beyond. We value communication above all else in our partnerships. But communication doesn't just happen, so we build many opportunities for it into the structure of the project:
- daily standup meetings,
- iteration reports and iteration planning meetings,
- online discussion and commentary through the project wiki and story cards,
- periodic project retrospectives and risk assessments,
- and of course, emails and phone calls when things are unclear.
You will read more detail about all of those during the rest of this document.
We use a variety of tools depending on client needs and preferences, including Mingle and Pivotal Tracker, as our project management repository and living documentation source. Requirements are documented as "User Stories" and represented within the system as cards. Each card has a title, a description, a detailed set of acceptance criteria, and an estimate of complexity. Comments, changes, and research are all captured directly on the card. As we proceed through iterations, cards are pulled into each iteration as their priority dictates.
In addition to the story cards themselves, we frequently provide a per-project wiki where we keep documentation about architecture, system design, risk analysis, status reports, etc. These documents comprise the system and process documentation for the entire project, and are viewable and editable by all members of the project team.
Here are the documents that are produced on a representative project, including evolving "point-in-time" documents that are usually delivered via email:
Discretion of the Project Team
No two projects are alike, and no project follows a documented process exactly; the team (not the individual) has the freedom to vary from any specific practice. On each project, the project manager is responsible for making sure the team knows and understands the process for that individual project.
Of course, this is kept simple and lightweight. The various feedback loops can be used to introduce, discuss, and adopt new processes.
Iterations and Stories
The most important concepts in our process are iterations (the unit of planning and delivery) and stories (the unit of estimation, development, and tracking).
Relevance projects are structured in two-week iterations. For each iteration, a set of story cards is selected that we believe can be developed during the two weeks. Then, during the iteration, those cards will be developed methodically, according to the priorities selected by the project sponsor.
As the cards are developed and showcased for the customer, they are accepted by the sponsor. At the end of the iteration (and possibly during the iteration if it will not be too disruptive), the new features will be deployed to production. Finally, if the customer has more requirements and wants to continue the project, the next iteration is planned.
Why are our iterations two weeks long? We've tried other lengths, and two-week iterations definitely work best for us. Four weeks is too long: we can deliver a lot of value in two weeks, and waiting another two weeks for feedback just courts risk. One week, on the other hand, is too short: project sponsors have trouble accepting cards fast enough to keep up, and the iteration overhead takes up too large a fraction of available development time.
Relevance manages requirements and tasks using story cards. Story cards are a common feature of agile methods; instead of a big requirements document in which small details can get lost, each requirement or task has its own card, which records the task name, details (including acceptance criteria), priority, estimate, and other information.
Story cards don't have to be real, physical cards (although on some large projects we've found it helpful to have a face-to-face meeting where real cards are used).
Cards have many advantages, as they push us toward very small granularity requirements, which improves estimates. It's also easy to see what’s been done and what’s left if you track via cards.
Each card has a status, and moves through a lifecycle:
- Ready for Development
- In Development
- Ready for Code Review (if developed by a single programmer)
- Ready for QA
- Ready for Showcase
- Accepted
The client is responsible for two of those transitions. The client must decide if the requirements are correct and move the card to "Ready for Development". Then, when the card is in the "Ready for Showcase" state, the client is responsible for final testing and either marking the card as "Accepted" or commenting on why it has not been accepted and moving it back into "Ready for Development".
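The lifecycle above can be pictured as a small state machine. The sketch below uses the state names from this document, but the transition rules are an illustrative simplification (for example, it assumes every card passes through QA):

```python
# Hypothetical sketch of the story-card lifecycle described above.
# Maps each status to the statuses a card may legally move to next.
LIFECYCLE = {
    "Ready for Development": ["In Development"],
    "In Development": ["Ready for Code Review", "Ready for QA"],
    "Ready for Code Review": ["Ready for QA"],
    "Ready for QA": ["Ready for Showcase"],
    # The client either accepts the card or sends it back with comments.
    "Ready for Showcase": ["Accepted", "Ready for Development"],
    "Accepted": [],
}

class Card:
    def __init__(self, title):
        self.title = title
        self.status = "Ready for Development"

    def move_to(self, new_status):
        if new_status not in LIFECYCLE[self.status]:
            raise ValueError(
                f"cannot move from {self.status!r} to {new_status!r}")
        self.status = new_status

card = Card("Account CRUD")
card.move_to("In Development")
card.move_to("Ready for QA")        # pair-developed, so code review is skipped
card.move_to("Ready for Showcase")
card.move_to("Accepted")
print(card.status)  # -> Accepted
```

Encoding the transitions this way makes the two client-owned transitions explicit: only the client moves a card out of "Ready for Showcase", either forward to "Accepted" or back to "Ready for Development".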
Objective Acceptance Criteria
If you don't know what you are building, you won't know when you are done—and you will almost certainly build the wrong thing. Objective acceptance criteria keep the team in sync as they define, implement, and accept story cards.
- A business analyst should believe that a story card has objective acceptance criteria before marking the card as Approved for Development.
- A project sponsor should verify the acceptance criteria before approving a card for development.
- A developer should verify the acceptance criteria before accepting a card for development.
We try not to go overboard in documenting the acceptance criteria. It's possible to keep documenting well beyond the point where the additional detail ceases to pay for itself. A good rule of thumb is "the smallest description that a non-team member could comfortably use to test that a card is complete."
For example, "Basic CRUD functionality for the account object" is too vague. However, "Basic CRUD functionality for account, using the User CRUD story as a guideline" might be enough, if the "User CRUD" story has already been accepted.
Our story and task estimates are not expressed in hours or days; they’re expressed in points. This acknowledges the inevitability of error in estimates, and of varying productivity, helping to avoid mismatches in expectations between Relevance and our customers.
At the level of tasks, estimates are kept separate from explicit notions of time. But at the level of iterations, points are mapped back onto time using the concept of velocity: the number of estimated points that we achieve in a single iteration. (That's one reason we like to stick to two-week iterations except when it's absolutely necessary to vary the iteration length. If all iterations are two weeks long, it makes velocity numbers directly comparable and much more useful.)
Points and velocity are vital concepts. As Relevance founder Stu Halloway points out, variance at the level of single cards—whether it's variance in productivity or accuracy of estimation—is like the weather, whereas velocity is like climate. That it's raining and windy today might be an annoyance, but it's no reason to pack up and move to another state unless it's like this frequently. Similarly, every programmer (and every team) will have days that aren’t as productive as we’d like, but we'll also have days where we really rock.
What matters much more is the climate: how fast we're moving, iteration by iteration, toward completing the project.
One aspect of our point estimates sometimes strikes our customers as odd. (Some have even thought we were joking!) Only some values are allowed as estimates; specifically, numbers in the Fibonacci sequence (1, 2, 3, 5, 8, …). Why would we do such a silly thing?
As tasks get larger, we have much less confidence in our estimates. It’s more difficult to grasp the full requirements, and to think about all of the implementation pieces that will be required. That means it’s also easier for unknown complexities or problems to lurk in such tasks. So as the size of the task grows, we force the estimates to fit a coarser granularity, acknowledging the greater likelihood of error.
What are the benefits?
First, it saves time we might otherwise spend arguing about whether a task is (say) 5 points or 6. Once we get to that size, it could just as likely be either.
Second, it helps us to resist natural pressure to estimate aggressively. "Surely it's not really an 8," someone might say—and if we could just knock it down to a 7 we might be tempted, but a 5 is clearly ridiculous, so it stays an 8. We'd rather come in under our estimates than exceed them. (That's not just vanity, or striving to impress—it's also much less disruptive to our customers' plans.)
Finally, it gives us an incentive to subdivide tasks, making them small enough for more accurate estimates. We've seen many times how smaller tasks are easier to estimate, develop, test, and track. (In fact, we won't permit a task larger than 8 points to stand; those get split right away. And even 8-point tasks must be split before development on them begins.)
The number of points that are targeted for work in the new iteration is based on the velocity from prior iterations: the number of estimated points that were actually achieved during those iterations. The notion of velocity helps us to calibrate the combination of our productivity and our estimation skill, both of which are variable in specific instances, but mostly stable in aggregate. That means, after the first iteration or two, we can make fairly accurate predictions of how much work can be done over a certain period of time, allowing the customer to budget and plan more easily.
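The arithmetic behind points and velocity is simple enough to show in a few lines. In this sketch, the rules (Fibonacci estimates, the 8-point cap, velocity as points completed per iteration) come from this document, but every number is invented for illustration:

```python
import math

# Legal estimates: Fibonacci values up to the 8-point cap described above.
# (Anything larger must be split before development begins.)
ALLOWED_ESTIMATES = [1, 2, 3, 5, 8]

def check_estimate(points):
    if points not in ALLOWED_ESTIMATES:
        raise ValueError(f"{points} is not a legal estimate; split or re-estimate")
    return points

# Velocity: estimated points actually completed in each prior iteration.
completed_points = [21, 18, 24]          # three hypothetical past iterations
velocity = sum(completed_points) / len(completed_points)   # 21.0 points

# Forecast: how many two-week iterations the remaining backlog needs.
backlog = [check_estimate(p) for p in [8, 5, 5, 3, 3, 2, 2, 1]]
remaining = sum(backlog)                                   # 29 points
iterations_needed = math.ceil(remaining / velocity)        # 2 iterations

print(f"velocity={velocity:.0f}, remaining={remaining}, "
      f"iterations={iterations_needed}")
```

This is why keeping iteration length fixed matters: if every iteration is two weeks, the three velocity samples are directly comparable, and the forecast translates straight into calendar time (here, about four weeks of work).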
We try not to spend time in meetings unless the meeting is really necessary, but of course some kinds of communication are best done in meetings. These are the kinds of meetings that occur regularly on a Relevance project:
The Daily Standup
Every project has a daily standup meeting. It's scheduled for the same time every day. Participants from Relevance include the project manager and developers who are currently involved in the project. From the customer, at least the project sponsor is present; others may be involved if they find the meeting useful.
The meeting covers what each participant or pair accomplished during the previous day, and what they plan to work on today. This may seem a little mundane, but with small teams it doesn't take much time, and it is very useful.
- It focuses on accomplishment, providing an incentive for everyone to have something to show for every day.
- It keeps everyone in the team aware of what is going on.
- It provides an opportunity to chime in if someone else on the project plans to work on something that you have some insight into.
Before the meeting concludes, there is a chance to raise "blockers": any situation that keeps a project member from making progress. Plans are made to address the blockers, to keep everyone moving.
We try to keep the standup meetings short: 15 minutes is the goal. Occasionally they may run slightly long, but topics that require extended discussion will become the subject of a more focused meeting.
Iteration Planning
At the beginning of each iteration, we engage in a planning exercise with the project sponsor and other interested customer representatives. The "Approved for Development" cards that will be the focus of the new iteration are chosen from the remaining pool.
As part of this exercise, Relevance provides initial estimates of how much work we think is involved in each card, in points. The customer’s role is to assign priorities to the cards, and then to choose the cards that should be worked on during the next two-week iteration.
Risk Assessments
Considering only the rosy scenario and building it into the project plan is real kid stuff. Yet we do that all the time.
—Tom DeMarco and Timothy Lister, in Waltzing With Bears
The projects we do are full of risk: our clients come to us because they're going into uncharted territory, exploring new lines of business. We know how to write code, but what we take special pride in is our awareness of risk and ability to handle it. Many of our practices deal with risk in an implicit way, but risk assessments are focused solely on risk.
Risk assessments are monthly meetings we have to discover, communicate, track and address risks. In a risk assessment we:
- Reassess the status of previous risks.
- Identify risks using various approaches (including brainstorming, analogy, and history-based techniques). Each risk has attributes including a name, a description, a consequence (an estimated cost), and a probability.
- Validate and analyze risks for relevance, ambiguity, and uniqueness.
- Rank the risks that the project faces.
- Decide how we are going to respond to each risk. (Valid responses include avoidance, acceptance, monitoring or mitigation.)
Retrospectives
Retrospectives help teams—even great ones—keep improving.
—Esther Derby and Diana Larsen, in Agile Retrospectives
Agile teams need to accept change and deal with it effectively. An important tool we use to ensure that we are aware of change and responding to it is a monthly retrospective.
A retrospective is a meeting with a specific goal in mind: to make everyone’s perspective explicit, inspect results, and come up with specific actions to improve or respond to changes, whether those changes come from outside the team or from within. We’ve had a variety of process and technical improvements that have come about due to retrospectives:
- Changing the direction of a project, based on discovering that client budgets have changed.
- Increased transparency with client developers.
- Deciding to implement alternative architectures based on newly revealed (or newly decided) future plans.
We do our own company-wide retrospectives periodically, but we do project-specific retrospectives as well. Product owner involvement is crucial for the project retrospectives. Discussing what’s working well and where things can be improved is an important part of ensuring that the next iterations are improved relative to the iterations that have been completed.
Staging and Production
One of our goals is to get software into production—even if the only early users are a few of the customer's product owners—as soon as possible in the development cycle. However, once early versions of the software are in production, it would be disruptive to push new-and-not-fully-verified features into the production system, which may be in active use either for production work or for demonstrations or training for potential users.
Therefore, we set up a second deployment environment: the staging environment. Features that are ready for showcase are deployed first to staging, where the project sponsor and other product owners can verify that the features work as expected. Features that aren't quite ready can be pushed back to development; others are marked as complete.
At least once an iteration, and more often with customer approval, completed features are pushed to the production environment.