The Five Universal Project Checkpoints
By Preben Ormen
I have been working in or alongside projects for over 35 years now and have developed certain notions about how projects are alike and how they differ.
One way all projects are more alike than different is in what I think of as the five universal project checkpoints.
I think of each checkpoint as a process whereby a piece of work gets done and then formally closed out so we can get on to the next piece of work.
After each checkpoint we should know our back is clear so we can focus on the future and not worry about the past (well, OK, hopefully only as little as possible – we can never escape it entirely, as we will be held accountable).
So without further ado, my Five Universal Project Checkpoints are as follows:
1. Kick-Off

This checkpoint occurs when we conduct the kick-off (or start-up or initiation) meeting with the sponsor and the project team. This is where the project formally starts within the organization.
The road there can be long or short depending upon the project, but typically requires sorting out and getting formal agreement, approval and signature on the project charter.
The project charter may have different names in different places, but essentially is the contract between the project manager and the sponsor. The standard content of a project charter is:
- Project scope and objectives
- Delivery approach
- Assumptions and dependencies
- Risk management plan
- Issue and change control approach
Other items may be included, but this is the basic content. If we don’t get agreement on this up front, we are at risk of endless discussions without a baseline or reference point.
The charter (the contract) assists in these discussions because we can refer to it and say that whatever is in it we have to do or find ways to do, and whatever is not in it requires a change order. The nuance comes when assumptions change for things that are in the charter, where we may or may not be able to argue for a change order. Mistakes do happen – sometimes they’re ours.
2. Requirements

In order to build anything, we have to know what we are trying to accomplish. This is what is meant by requirements. A project is kicked off to deliver on certain goals and objectives, and it is now our responsibility to drive these out in sufficient detail so we can build something useful.
This can be a short and sweet exercise or a long, drawn-out one. It all depends on the complexity of the project and the degree of uncertainty associated with it. The rule of thumb is that the more complexity we face, the longer it will take (and must take) to lock down requirements. But at some point we have to say stop, this is it for now, so we can get on with the next phase.
We need formal agreement on the requirements so we have a baseline for future evaluation of the solution. Without the formal agreement we are at risk of endless discussions about what was or was not said, mentioned, emailed and so on at various steps along the way. Stop this nonsense by getting the requirements signed off.
I will offer a twist on this in a moment.
3. Design

Requirements say something about the expected function of the solution. They say little about the technical aspects of it. For example, a requirement may be to have a car with certain capabilities. The design is the engineering solution with all the technical specifications and parts lists.
I consider design to be more than the visual look and feel of something; it covers the documentation required to hand the proposed solution over to developers, manufacturers, builders or whoever will do the build. Ideally the documentation should allow the builders to work independently, but life is never quite so simple.
Already, we can see that if the requirements are hazy then the designers will be confused about what to build and how to build it. After going through a cycle like this, most of us learn a lesson or two about how to write requirements that preempt certain discussions in design to make the process more efficient.
Designers have lots of ideas of their own, and sometimes those ideas are so much to the point that requirements can and should be changed. This is where the process begins to feel inefficient, because we have to cycle back for more approvals on changed requirements.
For this reason (and here is the twist I mentioned) many projects try to combine the requirements and design cycles to a higher degree by having functional and technical people more involved in requirements gathering sessions with the client.
SAP’s ASAP methodology formally captures this with the “Blueprint” phase. While we may still end up with two documents, one for requirements and one for design, we could as easily combine them.
I favour combining them in a way that allows cross-referencing requirements to design. Again, this is a specific requirement of some methodologies, but I just think it makes perfect sense because it helps out with testing later on in the build phase. (I’ll get to that shortly.)
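As an illustrative sketch of this cross-referencing idea (the record fields and IDs below are hypothetical, not taken from any particular methodology), a combined requirements-and-design document can carry an explicit reference from each requirement to the design element that satisfies it, which makes gaps easy to spot before design sign-off:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical combined requirements/design records; all IDs are illustrative.
@dataclass
class Requirement:
    req_id: str
    description: str
    design_ref: Optional[str]  # design section that addresses this requirement

reqs = [
    Requirement("REQ-001", "Car must seat five adults", "DES-3.1"),
    Requirement("REQ-002", "Range of at least 500 km", "DES-4.2"),
    Requirement("REQ-003", "Tow capacity of 1,500 kg", None),  # not yet designed
]

# Any requirement without a design reference is a gap to resolve
# before signing off on the design checkpoint.
gaps = [r.req_id for r in reqs if r.design_ref is None]
print(gaps)
```

The same cross-references can then be carried forward into test design, which is where the traceability pays off.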
If we do not get formal approval on the design, we are at risk of endless discussions in the build phase when the inevitable questions arise about incredibly specific details that can have enormous consequences.
I will say though, that the time and effort on design should be aligned with the complexity and uncertainty of the project. The higher the complexity and all that, the longer it will take us to lock down the final design.
4. Build

The build phase is where the design gets turned into something we can use: a product, a service, a program, application, system or whatever.
The discussions we have in this phase are invariably very technical and detailed in nature. Most of the time we are simply getting confirmation on the build team’s interpretation of requirements and design, but not always.
Sometimes we find unintended consequences of the requirements or designs. Here is where we learn to appreciate the value of formal agreements on requirements and designs as this cuts down on the discussions; we have a baseline for what was agreed and what was left out. Of course, our judgment along the way is open to re-interpretation, but at least we can keep it fact based as much as possible.
Part of the build phase is testing the solution prior to release to the masses. Testing is often squeezed to recover lost schedule time, but it is a bad practice indeed and thankfully not a universal ailment.
The more common challenge is how to test efficiently and effectively in the first place. This is in itself a large subject, which I will not get into here. That said, I firmly believe that testing should be requirements-driven.
Requirements-driven testing simply means that you design your suite of test cases, scenarios and scripts based on the user requirements. Each specific test is then designed based on the solution component (or components) that covers the specific requirement being tested. Typically, one test can cover one or more requirements, so this is not just a one-to-one mapping exercise.
There is more to test design than this, but the requirements set the scope of testing for new functionality. Other testing activities then follow from this to plug any holes (e.g., regression and integration testing).
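As a minimal sketch of this many-to-one mapping (test names and requirement IDs are invented for illustration), the coverage relationship can be modelled as a simple lookup that flags requirements no test exercises:

```python
# Hypothetical requirements-driven test coverage check; IDs are illustrative.
# Each test lists the requirement(s) it exercises; one test may cover several
# requirements, so the mapping is many-to-one rather than one-to-one.
tests = {
    "test_login": ["REQ-001"],
    "test_checkout": ["REQ-002", "REQ-003"],
}

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

covered = {req for reqs in tests.values() for req in reqs}
uncovered = requirements - covered  # requirements with no test: holes in scope

print(sorted(uncovered))  # REQ-004 has no test yet
```

Anything left in the uncovered set is a hole in the test scope to plug before agreeing the acceptance criteria.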
Ultimately, the users will judge us on the usefulness of the solution based on what they are used to now and what they asked for in the requirements specification. Again, we see that preserving a clear trail from requirements to design to testing helps ensure that the users focus on the right things to prove out the solution and that we have a clear baseline to measure success against.
We need formal approval and sign-off on the outcome of testing in order to close out the build phase. If we don’t have this, we are open to the risk of endless discussions after commissioning about why we have certain production issues after roll-out.
If we can show a clear trail from requirements through testing, we can keep the discussion fact based and quickly sort out the responsibilities and next steps.
5. Commissioning

Commissioning is variously called roll-out, go-live and so on. It simply means making the solution available for the users to carry on their daily activities in support of their business.
Commissioning may be simple or complicated; it does not matter. Either the users will find they can do business with this or not and usually they can, but with issues. Dealing with the post commissioning issues can be time consuming and usually occurs when there is precious little budget left for anything.
At this stage, we’re at risk of second-guessing and finger pointing from people up and down the organization. Unless we have a formal record to point to, it is hard to keep this discussion fact based and free of unfair comments and blame from either side of the table.
Sometimes trouble at this stage occurs because we didn’t make it clear or get agreement on what it means to be done and who specifically gets to say whether we are or are not actually done.
This brings me to a refinement to the process for passing through the various project stages. It helps to specifically define the entry and exit criteria for each stage. I recommend doing this in the project charter right up front. Then manage the process for each stage. For example, I was the program test lead on a good sized technology implementation project where we went through two commissioning stages and several test cycles for each roll-out. We had some 170-180 testers spread out over Canada and the US and completed 1,500-1,600 tests all told.
For each test cycle we agreed the acceptance criteria up front with the business users. Then we sent an email to all the business owners (or process owners) on our list before testing started to get agreement that we would run a specific list of tests with specific testers from the business side, and that if these completed successfully the business owners would be able to approve the solution as fit for production. We collected the email approvals and included them as part of the project documentation.
This step flushed out any doubt on the part of the business owners and resulted in some adjustments to our tests. This was all good and helped reinforce the responsibility on the part of the business side.
We then repeated this after testing was complete and obtained approval that the results were acceptable to put the solution into production. We collected the approvals as part of the project documentation.
The formality around the approvals ensured that we could have a fact-based discussion when dealing with incidents reported after commissioning. It became pretty clear pretty quickly for each incident what kind of discussion would be most useful. The blame game was pretty much pre-empted from the start. It was too evident who was responsible for what, so scapegoating wasn’t really possible.
All of this did not protect us from the real underlying risk that things can go wrong even when we do our best. That’s what risk means in practice: we can mitigate risk, but never eliminate it entirely. From a contractual point of view, though, it helps to be able to show evidence of due diligence in project planning and execution.
I don’t think it is as fully appreciated as it should be that when you get formal approvals along the way, the approval is part of your risk management. The approval says that we all agree we have been as diligent as we can reasonably expect to be to reduce risk so that we now collectively agree to accept the residual risk that we may not have covered something important and that unknown things can occur in the future.
Getting acceptance for the fact that Murphy can strike at will, that bad things can happen to good projects, means that everyone shares responsibility and that we will take the future as it comes and deal with it in good faith.
Preben Ormen has over 35 years experience with a wide range of businesses, teams and cultures from around the world. He has experience in SAP, IT Governance, procurement, system selection and integration, and performance and process improvements. You can read more from Preben on his blog, you can follow him on Twitter, and you can contact him via LinkedIn.