How do you measure the success and failure of projects?


There are quantitative and qualitative ways to describe the success or failure of a project.

In general, a project is considered successful when it meets its objectives while staying within an agreed-upon budget and timeline. This sounds simple enough, except that it is not!

First, who decides if a project is successful? And if a project passes a series of tests and goes live, is it considered successful? The common model is to celebrate success at go-live. But going live is just the beginning of the journey. It takes a fair amount of time to judge whether the business got sufficient value out of the project. Also, at what cost? Did they have to hire more people to support the implementation? Can they make enhancements to the functionality as requirements change, without it costing an arm and a leg? Do we attribute all of this to the original project?

Since I am more familiar with SAP projects than non-SAP projects, let me quote examples from the SAP world. In the nineties, SAP was not as rich in functionality as it is today, and many implementations resorted to dirty modifications to the source code to make it work according to requirements. These were called successful projects at the time. Fifteen years later, those same modifications have caused customers a lot of trouble in upgrading their systems. So do we still call them successful?

Second, all stakeholders have to agree on a set of objectives that the project has to meet. Sounds simple enough, but it is hard to accomplish too. First, there is the legal issue: if you cannot express an objective in language that lawyers agree to, it will not be present in the contract. And if it is not in the contract, it is hard to ensure it will be met. Then there is the question of who counts as a stakeholder. In large projects, many people are affected, and it is impossible to enlist them all to create a list of objectives. So a subset of people is assigned as leads or representatives, and they may or may not know how the new project is going to affect the work of the whole population.

In one of my first successful SAP projects, I remember going to the office the day after go-live and finding a large number of trucks backed up from the loading bay all the way into the street. Guess what? No one remembered to capture the requirements of the picking department, so they tried to do it the old way, like they had done all their lives, and then could not update the system with the results. Thankfully, we could solve it with a quick workaround, but the point is that you cannot figure out everyone's requirements all the time in big projects. The voice of the majority does not mean something is right; there are plenty of exceptions in every business process, and they all need to be handled. Consultants swear by "Mutually Exclusive and Collectively Exhaustive", but as scope increases, the chance of achieving that ideal decreases.

Third, how do you agree upon a project timeline and budget? By estimation, which is an inexact science by definition. Based on prior experience, someone thinks that a certain task will take a certain amount of time to finish. Just as the stock market has taught us that past performance is not a good predictor of future results (at least in the short term), there is no guarantee that these estimates will hold true across the life of the project. Techniques like PERT try to reduce bias by using formulae that blend pessimistic, most likely, and optimistic estimates. But as all experienced project managers will attest, estimates are best guesses, irrespective of technique.
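For the curious, here is a minimal sketch of the classic PERT three-point formula. The task and the numbers are purely hypothetical; the point is that the "expected" duration is just a weighted average of three guesses, which is exactly why it is still a guess.

```python
# PERT three-point estimate: a weighted average of three guesses.
# The task name and numbers below are hypothetical, for illustration only.

def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Return (expected duration, standard deviation) per the classic PERT formula."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # rough measure of uncertainty
    return expected, std_dev

# A hypothetical task: build an interface, guessed at 5 / 8 / 20 days.
expected, sigma = pert_estimate(5, 8, 20)
print(f"Expected: {expected:.1f} days, +/- {sigma:.1f} days")
# Expected: 9.5 days, +/- 2.5 days -- still a guess, just a disciplined one.
```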

Now, this means estimates have to be revised as projects progress and we have more data. But the question remains: if I estimated $2M today, and then re-estimated after 3 months to find that it will actually cost $3M, is the project already a failure? The world outside the project typically looks at it with this perspective. Once an estimate is made for time and money, the world expects it to be set in stone, come what may. There is no room for re-estimation. So most projects are set up to fail from the beginning.

Finally, no project has unlimited resources, so scope and timeline are usually subservient to available money. This means someone has to prioritize scope, and in that process some requirements will be dropped. Despite all the due diligence in this exercise, it is hard to avoid a decrease in value to some part of the customer organization. But when the project finishes and these gaps get exposed, it is usually chalked up as a failure of the project.

Now, I am not the only one with this knowledge; most customers, consultants and software vendors know this. So why doesn't the situation change? Just a rhetorical question, of course 🙂

Do bigger projects really fail more often than smaller projects?


Common sense and Michael Krigsman tell me that bigger projects fail more often than smaller projects. However, this does not match what I have experienced. From what I have seen, project size does not have a significant impact on a project's odds of failing.

First, how do we define size? Size can generally be expressed in terms of effort, which in turn drives duration and headcount. Occasionally we do run into situations akin to asking 9 women to deliver a baby in 1 month, but assuming that doesn't happen, there is a logical way of arriving at duration and headcount from a WBS. There are always compromises made between the triple constraints of scope, schedule and budget. Once all the stakeholders agree to this, the project is good to go.
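As a rough illustration of that "logical way", here is a minimal sketch, with entirely hypothetical work packages and numbers, of rolling up WBS effort into a duration for a given headcount. It is a sketch of the arithmetic, not a prescription.

```python
# Rolling up a WBS into duration for a given headcount -- hypothetical numbers only.

# Effort per work package, in person-days.
wbs = {"blueprint": 40, "build": 120, "test": 60, "cutover": 20}

total_effort = sum(wbs.values())   # 240 person-days
headcount = 4                      # agreed team size
availability = 0.8                 # realistic utilization, not 100%

duration_days = total_effort / (headcount * availability)
print(f"Total effort: {total_effort} person-days")
print(f"Estimated duration with {headcount} people: {duration_days:.0f} working days")
# Doubling headcount does not halve this in practice; coordination overhead grows too.
```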

Now, assuming this exercise is done, size is generally not a reason for failure anymore. The reason is that in this planning exercise you should have covered the effort required to mitigate the risks around duration and headcount. After this planning exercise, irrespective of size, all projects start on the same footing.

It could be argued that bigger projects are harder to plan and hence should fail more often. However, I am not sure that is entirely true. Planning for a bigger project is done on a larger scale, with a more intense process, and gets more scrutiny than planning for a smaller project. Since the planning effort is proportionate, size should not be an unmitigated risk after that. For example, if headcount increases, there is more overhead on communication; but once you factor that increase into the schedule, the risk has a valid mitigation. And so on.
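One common rule of thumb for quantifying that communication overhead (not part of the original argument, just a standard illustration) is that the number of communication paths grows roughly as n(n-1)/2 with team size, so the allowance you budget for has to grow faster than the headcount itself:

```python
# Communication paths grow quadratically with team size: n * (n - 1) / 2.
# Illustrative only; how much schedule each path costs is an assumption you make in planning.

def communication_paths(team_size: int) -> int:
    return team_size * (team_size - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:>3} people -> {communication_paths(n):>4} communication paths")
#   5 people ->   10 communication paths
#  10 people ->   45 communication paths
#  20 people ->  190 communication paths
#  40 people ->  780 communication paths
```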

Then there is the risk that we fail on execution. This is not rare, but the question is: does it depend on size? As an example, let's say a small project was commissioned to build a UI in 5 days. Planning was done in an hour on a whiteboard, and the stakeholders were 2 people. It needed only one developer to write the code, test it and move it to production. Total cost was calculated as, say, $2,000. In execution, it took 6 days to finish because one of the stakeholders was out sick for a day and could not clarify a requirement in time. The cost overrun is $800. What is the chance that this failure gets highlighted in the blogosphere? Zero, or near zero. As a percentage, cost slipped by 40% and schedule slipped by 20%. But since the materiality was so low, it is not worth spending time to analyze it. And guess what: in most cases, people won't even say this was a failure.

But on a project that is $10 million in size, a 40% miss is enough to get some serious blogosphere attention. Then we need to find out what went wrong, and point fingers at the SI, the customer, the product vendor, the weather and the macroeconomic factors.
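To make the materiality point concrete, here is a trivial sketch comparing the two hypothetical projects above: the percentage overrun can be identical while the absolute numbers differ by orders of magnitude, and it is the absolute number that attracts the attention.

```python
# Same relative cost overrun, wildly different materiality -- hypothetical figures.
projects = {
    "small UI build": (2_000, 2_800),          # budget, actual
    "large program":  (10_000_000, 14_000_000),
}

for name, (budget, actual) in projects.items():
    overrun = actual - budget
    print(f"{name}: {overrun / budget:.0%} over budget, i.e. ${overrun:,} in absolute terms")
# small UI build: 40% over budget, i.e. $800 in absolute terms
# large program:  40% over budget, i.e. $4,000,000 in absolute terms
```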

My point is that as long as we compare projects and their risks apples to apples, I have not seen big projects fail any more often than smaller projects. The difference is that when big projects fail, they fail SPECTACULARLY, and hence they overshadow the similar failure rates of smaller projects. Several decades later, we still talk about the sinking of the Titanic. Since then, more people have probably died in smaller accidents, but do we talk about them?

Looking forward to hearing your perspective on this topic…

Too much analysis + too little synthesis = sub-optimal decisions


I remember a high school lesson on analysis and synthesis, with the teacher emphasizing why they should always go hand in hand in a complementary manner. It apparently did not register very deeply in my mind, and for a number of years I was a bigger fan of analysis than of synthesis. Higher education in engineering and management pretty much firmed up my belief that analysis is the big deal and the area I should master. Engineering taught me how to break a problem into smaller parts and solve each; it did not teach me with the same vigor how to put things back together to better solve the problem. Same deal with my MBA: I became pretty good at analyzing issues, but looking back, I don't think I had the same zeal for putting things together to aim at a better solution.

This craze for analysis must have somehow played into my decision to take an active interest in the world of Business Intelligence too. Over time, I got exposed to more and more of the challenges my clients face. While I had a decent ability to figure out why they were having a problem, and to advise them on how to analyze the issues, I was not equipped with the tools or training to use synthesis and put it all back together into a solution worth more than the sum of the fixes my analysis pointed to.

In the real world, the best business brains have the ability to use analysis and synthesis together, not analysis alone. These are people who use tools and other people to do the analysis and bring them the required information, and then, like a master chef, they mix the parts to create an extraordinary dish. However, the fact that such people are few in number makes me believe we have a fundamental issue with how our education, tools and thinking prepare us to take on grand challenges.

A primary reason for this is our simplistic view of solving problems. Here are three examples that come to mind:

1. Not all problems have exactly one root cause, but we have been taught to think there is one. Even if our analysis comes up with 3 causes, we try hard to rank them, often artificially, until we can define "the" root cause. In the process, we lose the ability to reach a better solution by understanding the relationships between all the causes. When analysts scream that a CEO has to be replaced, or when the opposition screams that the President is ineffective, we lose sight of the fact that many things cause issues, and they cannot all be attributed to one person. But since we are tuned to think about the world in a hierarchical fashion, and the CEO or President is visually the top node, we attribute too much to them, whether good or bad.

2. Overuse of the 80-20 rule can be counterproductive. Analysis almost always finds something along the lines of "80% of revenue comes from 20% of customers", and hence we think that if we spend most of our time and resources making those 20% of customers happy, we are in good shape. Well… think again. If you have a large customer base, the other 80% of your customers is a large enough number to drag you down in a variety of ways, using the various channels available to them (a minimal sketch of this kind of Pareto analysis follows this list).

3. Analysis is always done under certain boundary conditions and assumptions. However, we do not always factor these in when we interpret the results. Just by asking the same question in a different way, you can get a different answer. Here is a recent example. As part of a 10-question survey, we asked a set of stakeholders, "How important is dashboarding/graphical representation of data to you?" When we compiled the results, it came out as one of the least important items. Around the same time, someone else had run a similar survey that asked, "How often do you use charting and other graphical representations of the data you analyze?" And guess what: the answers indicated that many of them used it quite regularly. Eventually, after many more discussions with the people who answered the survey, we figured out that the other questions in the survey influenced how users answered each one.
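As a simple illustration of the 80-20 point above, here is a minimal sketch with made-up revenue figures. A Pareto check like this is exactly the kind of analysis that produces the 80/20 headline; the synthesis step is remembering that the remaining customers still exist and can still hurt you.

```python
# A toy Pareto (80/20) check over hypothetical customer revenue figures.
revenues = [500, 450, 380, 120, 90, 70, 55, 40, 30, 25, 20, 15, 10, 8, 5]

revenues.sort(reverse=True)
total = sum(revenues)
top_count = max(1, len(revenues) // 5)                 # the top 20% of customers
top_share = sum(revenues[:top_count]) / total

print(f"Top {top_count} of {len(revenues)} customers bring in {top_share:.0%} of revenue")
print(f"...which still leaves {len(revenues) - top_count} customers you cannot ignore")
```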

I still think that analysis is crucial to decision-making; all I want to add is that people should not stop there. They should use the principle of synthesis as well, and make better decisions.