I remember a vacation in Scotland when I visited Loch Ness and the adjacent Loch Ness Centre, which chronicles the many research efforts to prove or disprove the existence of the Loch Ness Monster. What interests me about this research is the tremendous risk involved in funding a study that could potentially prove the monster does not exist – Loch Ness is a major tourism center for Scotland, and the monster myth is the reason it is arguably the most famous lake in the world.
What I’m describing is a research effort where a successful outcome (conclusively proving or disproving the myth) could have a significant negative effect: proving the monster does not exist would almost certainly hurt tourism. Even though every other visitor I spoke with agreed that the existence of a monster in the lake was anywhere from highly unlikely to borderline impossible, most seemed to agree that the lack of certainty was precisely what held the appeal.
There is a similarity here to a position I’ve seen many program and project managers take. They don’t want their projects or organizations evaluated because they believe a successful outcome (identifying and measuring a problem so that it can be fixed) would have a significant negative effect (someone would find out that their program isn’t perfect). The big difference is that while lack of certainty is good for Loch Ness, it’s terrible for an IT project.
The problem is that nothing is perfect, and very few professionals actually hold a literal expectation of perfection – many (if not most) expect improvement, which cannot be achieved, demonstrated, or measured without first finding and understanding problems. The program manager who doesn’t want their projects evaluated for fear of what will be discovered is simply sitting in a dark room, refusing to turn the lights on because they’re afraid there might be a monster. While ignorance can be very comforting, it is rarely (if ever) advantageous – with the possible exception of the Loch Ness Monster, of course.