This paper examines how people understand and evaluate uncertainty intervals produced by experts as part of quality assurance procedures for large public projects.
Three samples of educated participants (employees in a large construction company, students attending courses in project management and judgment and decision making, and judges of district and appeal courts) answered questionnaires about cost estimates of a highway construction project, presented as a probability distribution.
The studies demonstrated additivity neglect for graphically displayed probabilities. Participants’ evaluations of the accuracy of interval estimates revealed a boundary (a “cliff”) effect: a sharp drop in accuracy ratings for outcomes above an arbitrary maximum. Several common verbal phrases (what “can” happen, is “entirely possible”, or is “not surprising”), which might seem to indicate expected outcomes, were regularly used to describe unlikely values near or at the top of the distribution (an extremity effect).
All judgments concerned a single case and were made by participants who were not stakeholders in this specific project. Further studies should compare graph-aided judgments with conditions in which the graph is altered or absent.
Experts and project managers cannot assume that readers of cost estimates understand a well-defined uncertainty interval as intended, and they should be aware of the effects created by describing uncertain estimates in words.
The studies show how inconsistencies in judgment affect the understanding and evaluation of uncertainty intervals, even among well-informed and educated samples tested in a maximally transparent situation. Readers of cost estimates seem to believe both that precise estimates are feasible and that costs are usually underestimated.