To enable properly sized software project budgets and plans, it is important to be able to assess the uncertainty of the estimates of most likely effort required to complete a project. Previous studies show that people in general, as well as software professionals, tend to be overconfident when assessing the uncertainty of effort estimates. This thesis explores the possibility of learning more realistic uncertainty assessment through outcome feedback. Two experiments, with favorable learning environments, were set up to investigate the issue. The first study focused on whether people in general possess the ability to learn more realistic uncertainty assessment; the second on how much, and how, software developers learn to improve their uncertainty assessments. The results indicate that people in general are well calibrated initially and highly capable of adjusting towards realism given favorable learning conditions, i.e., frequent and relevant feedback on performance. In the software engineering setting, using experienced software developers, a comparatively lower degree of learning of realism in effort uncertainty assessment was observed. It was found that a necessary condition for improvement of uncertainty assessments of effort estimates may be the use of explicitly formulated uncertainty assessment strategies. In contrast, intuition-based uncertainty assessment strategies may lead to little or no learning.

The implications found for industry and further research were: (I) For learning to occur, the learning process may need to be aided by explicitly stated learning strategies and frequent reminders of the goal of the learning session. (II) Special attention must be given to the framing of the probability measures used to state uncertainty over effort.
Check for adequate understanding of the concepts of probability and uncertainty, give proper explanations of these terms, and issue reminders of the agreed-upon definitions at regular intervals during the learning period. It seems beneficial to support mathematical probability definitions with natural-language descriptions and to reach oral consensus on these definitions through debate. (III) Feedback should be given in such a way that: (1) several kinds of feedback are used and issued frequently, and at a minimum at naturally occurring places; (2) the possibility of subjective interpretations of performance is avoided as much as possible; (3) it can be directly transferred as input to future uncertainty assessments, i.e., its presentation should visually match the uncertainty assessment process to come, and history-based tendencies should be pointed out. (IV) Different qualities and learning strategies are effective for learning the skill of “know how” versus learning “how uncertain is”. The design and framing of the learning environment and feedback should therefore reflect how learning of uncertainty assessment is best achieved when this is the purpose.
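To make the notion of calibration used above concrete, the following is a minimal illustrative sketch (not taken from the thesis; all data and names are hypothetical). If an assessor states 90% prediction intervals for effort, they are well calibrated when roughly 90% of actual outcomes fall inside the stated intervals; a hit rate well below the stated confidence is the overconfidence described above.

```python
# Illustrative sketch (hypothetical data, not from the thesis):
# calibration of effort prediction intervals.

def hit_rate(intervals, actuals):
    """Fraction of actual efforts falling inside the stated [low, high] intervals."""
    hits = sum(1 for (low, high), actual in zip(intervals, actuals)
               if low <= actual <= high)
    return hits / len(actuals)

# Hypothetical data: stated 90% confidence intervals (person-hours)
# and the actual efforts later observed.
stated_confidence = 0.90
intervals = [(40, 80), (10, 25), (100, 160), (30, 45), (55, 90)]
actuals = [75, 30, 150, 44, 120]

# Overconfidence appears as a hit rate well below the stated confidence.
rate = hit_rate(intervals, actuals)
print(f"hit rate: {rate:.0%} vs stated {stated_confidence:.0%}")
```

Outcome feedback of the kind studied in the thesis amounts to showing the assessor this hit-rate-versus-stated-confidence comparison repeatedly, so that subsequent intervals can be widened or narrowed towards realism.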