Delivery Methodology Checklist
This checklist covers critical components of a successful software delivery team.
Have we mapped the major components of the system and documented their role and accountability?
What are the major sub-components within the system and how do they depend on each other? Who is the technical authority for each component, and where are the links to learn more about the component and its use cases/requirements? Consider building a diagrammatic ‘map’ of the system illustrating its sub-components and their inter-dependencies. Below it, tabulate this information, indicating the technical authority/responsible individual for each component and linking to its overview page. This page can serve as an entry point for those wishing to learn more about the system and for newcomers to get their bearings.
Are we polling the team on a regular basis to measure their understanding across each area of the map?
We want to promote division of responsibilities/skills, yet we need to ensure everyone is aware and up to date with the whole system. Often knowledge begins to accrete around one or two individuals, or within one discipline rather than another. For example, SRE-style roles often understand the overall application architecture as well as the infrastructure that supports it, whereas developers are typically kept unaware of the infrastructure components. This is reinforced by the tendency for application developers to make nice demos for everyone. Consider sending a simple questionnaire asking the team to self-assess their knowledge of each area of the system map; the results should illustrate where the knowledge is concentrating.
Is the team estimating their work using a standardised, points-based method?
Estimates provide information about the team environment. If estimates are consistently large, then either the use cases and requirements are not well broken down, or the team is losing confidence in its ability to deliver small units of work within the given time unit (estimation inflation). If the ratio of planned points completed to unplanned points completed (work arising from support incidents) is volatile, then the team isn’t able to remain focussed on its roadmap objectives.
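As a rough illustration, the volatility of the planned-work share can be computed from a few sprints of data. This is only a sketch, and the sprint figures below are hypothetical.

```python
# Sketch: per-sprint planned vs unplanned points completed, with a
# volatility check on the planned-work share (all numbers hypothetical).
from statistics import mean, pstdev

# (planned points completed, unplanned points completed) per sprint
sprints = [(30, 5), (28, 4), (12, 20), (31, 6), (9, 25)]

shares = [planned / (planned + unplanned) for planned, unplanned in sprints]
volatility = pstdev(shares)  # standard deviation of the planned-work share

print(f"mean planned share: {mean(shares):.2f}, volatility: {volatility:.2f}")
```

A high standard deviation here suggests the team is being pulled between roadmap work and interrupts from sprint to sprint.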
Does each sprint have a goal providing purpose?
Focussing the team on a small number of objectives is much more satisfying than having the team thrash across multiple unrelated work streams. We want to keep the amount of work in progress relatively small. Work out what the sprint’s goals are and select stories that progress these goals to completion. Teams will have a greater sense of satisfaction if they feel like they are completing goals on each iteration.
Is the delivery team’s velocity stable and predictable?
A stable and predictable velocity within a team is a worthy goal; through this simple metric we can understand:
- If the team are getting high quality stories
- If the software is organised well enough to permit high quality estimates
- If the team has confidence in itself
- If the team is allowed to remain focussed on its committed points
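One simple way to put a number on “stable and predictable” is the coefficient of variation of recent sprint velocities; a minimal sketch with hypothetical figures:

```python
# Sketch: quantifying velocity stability across recent sprints using
# the coefficient of variation (sprint figures hypothetical).
from statistics import mean, pstdev

velocities = [32, 30, 35, 31, 33]  # completed points per sprint

cv = pstdev(velocities) / mean(velocities)  # lower = more predictable
print(f"mean velocity: {mean(velocities):.1f}, coefficient of variation: {cv:.2f}")
```

A low coefficient of variation over several sprints is one signal that the four conditions above are being met.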
Are we using retrospectives to identify problems and are the teams owning solutions under their control?
A team must learn together to organise itself. In my experience this is best achieved not through a top-down mandated process, but through a trial-error-reflect-adjust cycle in which the retrospective is an opportunity for the team to identify their own failings and those of other influences (dependent teams, stakeholders, architects), and to take actions to rectify them.
Are we tracking the team’s focus (planned work vs non-planned work)?
There are inevitable distractions affecting an engineering team that prevent them from dedicating all their time towards planned work. Examples include emergency high priority work (product support incidents), surprise requests from important customers, and the like. This is especially true if the team is also responsible for maintaining some production services.
By applying points (either through estimation or hours-tracking) to the non-planned work, we can understand focus. The focus should be tracked (85% feels about right for a service-supporting engineering team), and we should take measures to adjust the organisation so that teams neither lose focus nor ignore their routine responsibilities (e.g. training line-support engineers more, investing more heavily in paying back technical debt, etc.). We can use the previous 3 to 5 sprints to provide a forecast of the focus over the next few sprints.
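The focus calculation and the rolling forecast can be sketched in a few lines. All figures below are hypothetical, and a 3-sprint window is assumed.

```python
# Sketch: tracking focus (planned vs non-planned points) and forecasting
# it from the last few sprints (all figures hypothetical).
planned = [34, 30, 29, 36, 31]   # planned points completed per sprint
unplanned = [5, 8, 6, 4, 7]      # support/interrupt points per sprint

focus = [p / (p + u) for p, u in zip(planned, unplanned)]

# Forecast: average focus over the previous 3 sprints.
window = 3
forecast = sum(focus[-window:]) / window

print(f"latest focus: {focus[-1]:.0%}, forecast: {forecast:.0%}")
```

If the forecast drifts well below the target (e.g. 85%), that is the cue to adjust the organisation rather than simply exhort the team to concentrate.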
Does each team member understand their role and responsibility clearly, and do they have a set of medium term targets they are working towards?
It’s important that each individual is aware of what is expected of them. Take the time to document these expectations and discuss them in detail. Devise ways to make them SMART (Specific, Measurable, Assignable, Realistic, Time-related). Review them every 6 to 10 weeks.
Do we have a well-designed, transparent and clearly articulated employee growth and review plan, and is it applied consistently and regularly? Is the process owned by someone?
The review process is difficult to get right consistently over time, in part because it relies on a high quality of preparation to ensure that a framework for success is in place, and in part because it relies on a great deal of character strength from reviewers to deliver the review dispassionately. Failure on either of these two fronts results in distrust, perceived unfairness and the gradual erosion of the process altogether.
Given these pressures, devolving the process entirely to managers will result in inconsistency and ultimately an unfair implementation. We must therefore design and implement a framework that provides a structure within which the review process can take place. It is important that all reviewers complete mandatory training on the framework and on how to provide high-quality feedback. The framework should be oriented around growth, recognising that careers are a journey of continuous improvement.
A good example can be found here.
Do we have a clear idea of the team’s Objectives and Key Results (OKRs), and is the team lead reporting on them regularly?
Teams should be focused on outcomes rather than activities. Outcomes include selling a product, increasing customer satisfaction, retaining customers, increasing revenue and so on. Outcomes are achieved from a long chain of activities (reducing cloud spend, arranging sales presentations). Activities are an easy but ultimately fruitless thing to focus on because they give the sense of progress without necessarily achieving it. Teams must be continually aware of their outcomes and must orient themselves towards achieving them.
Do we maintain a risk register?
Creating and maintaining a risk register, and reviewing it regularly, is a useful endeavour for a team leader or manager. Managers should be acutely aware of risks so that mitigations can be put in place to prevent or recover from them.
Have we created a list of training opportunities and are we recording what training has happened per-member/team?
Training is the highest-leverage activity an organisation can undertake, since an improvement in performance of just 1% can compound into a significant upturn in quality when applied over a year or longer. Training also establishes clear expectations for performance and gives employees a reasonable opportunity to reach their objectives. Finally, training is an excellent way to progress an employee’s growth plan and to improve overall retention in the team.
Towards this, we must identify areas that require training, and either build training sessions to satisfy those areas or find external training providers/partners that can deliver them. We should also keep track, per employee, of what training they have undergone (this may also be useful for tax reasons).
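A per-employee training log need not be elaborate; a minimal sketch (names and course titles hypothetical):

```python
# Sketch: a minimal per-employee training record.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    employee: str
    completed: list = field(default_factory=list)  # (course, date) pairs

    def add(self, course: str, when: date) -> None:
        """Record a completed training course with its date."""
        self.completed.append((course, when))

record = TrainingRecord("A. Engineer")
record.add("Incident response fundamentals", date(2024, 3, 1))
print(record.completed)
```

A shared spreadsheet serves equally well; the point is that the record exists and is kept current.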
Are we demoing the delivered software regularly to stakeholders and product owner, and is the whole team involved in the demo?
Demos communicate a huge amount of information about the product in a short amount of time to a large number of people. They are extremely useful for stakeholders and others to grok the software and the use cases for which it was designed. Demos can be done at all levels and for all audiences, and can be as simple as demonstrating some working tests. Take care to define the audience and to choose a demo scope that is appropriate for that audience.
Do we have a team handbook describing how the team functions?
A natural body of implied knowledge will build during the course of a software project. This includes practical tasks such as creating new users or setting up the development environment, as well as cultural norms such as how issues are to be resolved and how meetings are to be conducted. Develop a handbook section of the wiki or project documentation and ensure it is well maintained; it will become useful for newcomers, too.