Neal Freeland

Engineering/marketing manager, family guy. My personal blog with a few work thoughts mixed in.

Managing Product Development


When starting a business it can be fairly straightforward to create the initial product: generate an idea, build something, and see how it does. As a start-up progresses, however, things typically become more complicated. The current product needs to be maintained, it becomes difficult to develop on top of it, and new people join the company, adding more ideas about what to do next. Product development can start to feel like a random walk from daily crisis to idea-du-jour, making it challenging to focus and get things done.

At BuddyTV we overcame these challenges and maintained our productivity by adopting a version of the Agile Development methodology. Roughly each month we followed a three-step process: we created a plan, executed it, and then assessed our performance.

Phase 1: Plan

We wrote down all ideas, feature requests, and maintenance requirements in a list we called the product backlog. Each entry was given a task name and described in as much detail as needed. For example, instead of just writing “Improve Google Analytics,” which could be very ambiguous for the developer implementing the task, we wrote “Implement campaign tracking variables for paid media campaigns to measure total page views by source and facilitate ROI analysis,” and added links to best practice articles in the Google forums.

Occasionally some tasks were complex enough that we wrote multi-page specifications that contained user definitions, scenario descriptions, a list of included features, a list of non-included features (to table discussion), research data, flow diagrams, wireframes, design comps, dependencies, architecture decisions, open issues, and competitive analyses. The spec forced us to think things through thoroughly, facilitated group discussion, and helped us tightly define the task before we committed valuable development time to execute it. That said, we also recognized that much discovery would happen in the development process and allowed ourselves to adapt the plan as we progressed.

We also often wrote simple user stories that added another layer of detail to the task by striving to use customer language. For example, we wrote “The marketing manager wants to know how acquisition campaigns are performing, and access detail to help improve them.” It was surprising how useful this simple story could be when mired deep in the details: it helped us to remember to think like the customer. Finally, we assigned each task a priority assessment (High, Medium, Low), costing estimate (Hard, Medium, Easy), and success metric that described the hoped-for business result (examples: increased revenue, more page views, lowered cost, improved employee productivity).
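The backlog entries described above can be sketched as a simple data structure. This is an illustrative example only (we used a list, not code like this at BuddyTV); the field names and ordering rule are assumptions, not any actual tooling:

```python
from dataclasses import dataclass

# Hypothetical backlog entry carrying the fields described in the text:
# priority (High/Medium/Low), cost (Hard/Medium/Easy), and a success metric.
PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

@dataclass
class BacklogTask:
    name: str
    description: str
    priority: str       # High / Medium / Low
    cost: str           # Hard / Medium / Easy
    success_metric: str # hoped-for business result

def prioritize(backlog):
    """Return tasks ordered by priority for sprint planning."""
    return sorted(backlog, key=lambda t: PRIORITY_ORDER[t.priority])

backlog = [
    BacklogTask("Photo game SEO",
                "Canonical reference and no-index rules on subsequent photos",
                "Medium", "Easy", "more page views"),
    BacklogTask("GA campaign tracking",
                "Implement campaign tracking variables for paid media campaigns",
                "High", "Medium", "increased revenue"),
]
print([t.name for t in prioritize(backlog)])
```

Keeping the priority, cost, and metric alongside the task makes the later sprint-selection step a simple sort-and-cut rather than a fresh debate.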

With the backlog in place we began by looking at the upcoming sprint, which we usually defined as a month-long period. We budgeted 2-3 business days for code design, 12-13 days for development, and 4-6 days for testing and deployment. Some tasks would require more than one sprint, but we wanted to keep a rhythm of monthly sprints to force us to deliver working code regularly and keep our productivity up, while minimizing the amount of sprint planning overhead. We created a sprint calendar with two key milestones – our code complete and release dates – and also added in holidays, vacations, or other important dates like a launch event or partnership deal delivery date. We then selected a portion of the backlog we thought we could complete in the upcoming sprint, listed the tasks in priority order, and reviewed with the management team for input and consensus. With the sprint plan set, we moved on to the execution phase.
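The milestone arithmetic above can be sketched as a small business-day calculation. This is a minimal illustration using the rough budgets from the text (~3 design, ~13 development, ~5 test/deploy days); holiday and vacation handling, which we layered onto the real calendar, is omitted for brevity:

```python
from datetime import date, timedelta

def add_business_days(start, days):
    """Step forward `days` business days (Mon-Fri), skipping weekends."""
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 = Mon-Fri
            days -= 1
    return d

def sprint_milestones(start, design=3, dev=13, test=5):
    """Derive the two key milestones from the phase budgets."""
    design_done = add_business_days(start, design)
    code_complete = add_business_days(design_done, dev)
    release = add_business_days(code_complete, test)
    return {"code complete": code_complete, "release": release}

# Example: a sprint kicking off Monday, August 2, 2010.
ms = sprint_milestones(date(2010, 8, 2))
print(ms["code complete"], ms["release"])
```

With the default budgets, a sprint starting on a Monday lands code complete roughly three weeks in and release at the end of the month, which matches the monthly rhythm described above.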

Phase 2: Execute

We started the sprint with a short kick-off meeting that showed how the tasks fit into our product strategy. This gave a bigger picture view and helped motivate people. “Implement canonical reference and no-index rules on subsequent photos in photo rating game” could feel like grunt search engine optimization work unless we connected it to our strategic effort to create a great experience for organic visitors. We also discussed who would take on each task and changed assignments as needed based on interest or experience. The developers then took a few days to dive into the task and talk to stakeholders to fully understand the requirements. They created a technical design, which was reviewed with the lead developer and maybe one other person, as well as with our tester who wrote up the test case.

To monitor progress during development, we held a 15-minute stand-up scrum meeting twice a week. Developers shared what they had accomplished since the last scrum, what they were working on next, and identified any blockers that were holding them up. We used a whiteboard to keep track of the sprint tasks, and placed them into categories: backlog (not yet started), blocked, in progress, or completed. The brief meeting identified issues and facilitated communication with minimal overhead, keeping people focused on coding as much as possible. It also encouraged commitment to making steady progress, helping people avoid a stressful crash effort at the end of the sprint.
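The whiteboard's four columns amount to a simple state machine: every task sits in exactly one column at a time. A minimal sketch (the class and task names are made up for illustration; ours was literally a whiteboard):

```python
# The four columns used on the sprint board, per the text.
STATES = ("backlog", "blocked", "in progress", "completed")

class TaskBoard:
    """Tracks which column each sprint task currently occupies."""

    def __init__(self, tasks):
        # Every task starts in the backlog column (not yet started).
        self.state = {t: "backlog" for t in tasks}

    def move(self, task, state):
        if state not in STATES:
            raise ValueError(f"unknown column: {state}")
        self.state[task] = state

    def column(self, state):
        """List the tasks currently sitting in one column."""
        return [t for t, s in self.state.items() if s == state]

board = TaskBoard(["GA tracking", "photo SEO", "ad server upgrade"])
board.move("GA tracking", "in progress")
board.move("photo SEO", "completed")
print(board.column("backlog"))
```

Scanning the blocked column at each stand-up is what surfaces issues early, before they turn into an end-of-sprint crunch.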

Once we reached the code complete milestone, we moved to stabilization, enlisting all employees to use the code on our test servers in two 90-minute bug bash sessions. All bugs were filed in a tool and assigned a priority (P1 immediate, P2 fix prior to release, P3 important, P4 filed for later prioritization). While we focused on bugs that came up as part of the sprint, we also tried to address older bugs and reduce the total count in the tool.

When we judged our code quality to be sufficiently high (no P1 or P2 bugs), we proceeded to the release milestone and moved the code into production.
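The release gate above reduces to one predicate: ship only when no P1 or P2 bugs remain open. A hedged sketch (the function and bug IDs are hypothetical, not our actual bug tool):

```python
def ready_to_release(open_bugs):
    """Release gate from the text: no open P1 or P2 bugs.

    open_bugs: list of (bug_id, priority) tuples, priority in 'P1'..'P4'.
    """
    return not any(p in ("P1", "P2") for _, p in open_bugs)

# Only P3/P4 bugs remain, so this sprint can proceed to release.
bugs = [("#412", "P3"), ("#415", "P4")]
print(ready_to_release(bugs))  # True
```

Making the bar mechanical (priorities, not debate) kept the release decision quick at the end of each monthly sprint.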

Phase 3: Assess

Immediately after release we monitored our data to ensure there were no major disruptions to page load time, visits, page views, or feedback forms that required a roll-back. Otherwise, we allowed the release to settle in for a few days or even weeks, monitoring pre- and post-release data to assess how we performed against our success metrics. When we found that we were short of goals we would investigate by diving deeper into the data and talking to our fans through site comments, email, or even direct phone calls. This could produce immediate actions we could take, or new tasks that we would add to the backlog for prioritization in the next sprint.
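The pre/post comparison can be illustrated with a simple percent-change check on a key metric. The numbers and the 10% roll-back tolerance here are invented for the example; in practice the threshold depends on the metric and its normal variance:

```python
def percent_change(pre, post):
    """Percent change in the average of a daily metric, pre vs. post release."""
    pre_avg = sum(pre) / len(pre)
    post_avg = sum(post) / len(post)
    return (post_avg - pre_avg) / pre_avg * 100

def needs_rollback(pre, post, tolerance=-10.0):
    """Flag the release if the metric dropped more than `tolerance` percent."""
    return percent_change(pre, post) < tolerance

# Hypothetical daily page views for three days before and after a release.
pre_views = [100_000, 104_000, 96_000]
post_views = [82_000, 80_000, 84_000]
print(round(percent_change(pre_views, post_views), 1))  # -18.0
print(needs_rollback(pre_views, post_views))            # True
```

A drop inside the tolerance band just becomes a data point to watch; a drop outside it triggers the roll-back decision before the deeper investigation begins.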

We also closed the loop with the product team by holding a 30-minute post-mortem meeting to discuss what went well or could have been done better. We used this feedback to continuously improve our processes and communication. Finally, we had each team member fill out a short sprint eval, reviewing themselves on a 5-point scale (above, at, or below expectations) against their objectives (planning, code quality, teamwork, overall). They also recorded the outcome of the tasks they worked on (complete, partially complete, delayed), and added additional comments. Managers reviewed the self-assessment and added an overall review score and comments. Performance above expectations qualified for a small quarterly bonus and, if sustained, helped justify a bigger annual bonus and salary raise. Since we did this at the end of each monthly sprint, we strove to keep the overhead to a minimum but still wanted a quick 15-minute process to facilitate feedback and recognize achievement.


This three-phase process – plan, execute, assess – worked pretty well for us given our size and development environment. None of this was set in stone and, in keeping with Agile Development principles, we were always open to adding new steps or shedding ones that were no longer useful. That said, we found that codifying the process was very helpful in reducing randomization, increasing productivity, and ultimately making people feel more satisfied with their work.


Written by nealfreeland

August 3, 2010 at 4:28 pm

Posted in Uncategorized
