
Quality of Design, Analysis and Reporting of Software Engineering Experiments: A Systematic Review

By Kampenes, Vigdis
Doctoral thesis
View/Open
DUO_671_Kampenes_17x24.pdf (274.8Kb)
Year
2007
Permanent link
http://urn.nb.no/URN:NBN:no-18244

Appears in the following Collection
  • Institutt for informatikk [3581]
Abstract
Background: Like any research discipline, software engineering research must be of a certain quality to be valuable. High quality research in software engineering ensures that knowledge is accumulated and helpful advice is given to the industry. One way of assessing research quality is to conduct systematic reviews of the published research literature.

Objective: The purpose of this work was to assess the quality of published experiments in software engineering with respect to the validity of inference and the quality of reporting. More specifically, the aim was to investigate the level of statistical power, the analysis of effect size, the handling of selection bias in quasi-experiments, and the completeness and consistency of the reporting of information regarding subjects, experimental settings, design, analysis, and validity. Furthermore, the work aimed at providing suggestions for improvements, using the potential deficiencies detected as a basis.

Method: The quality was assessed by conducting a systematic review of the 113 experiments published in nine major software engineering journals and three conference proceedings in the decade 1993-2002.

Results: The review revealed that software engineering experiments were generally designed with unacceptably low power and that inadequate attention was paid to issues of statistical power. Effect sizes were sparsely reported and not interpreted with respect to their practical importance for the particular context. There seemed to be little awareness of the importance of controlling for selection bias in quasi-experiments. Moreover, the review revealed a need for more complete and standardized reporting of information, which is crucial for understanding software engineering experiments and judging their results.
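
As an illustrative aside (not part of the thesis), the two quantities the review found most often missing, effect size and statistical power, can be computed in a few lines. The sketch below uses Python with NumPy and SciPy; the effect size and sample sizes are assumed values chosen only to show how quickly power drops for small groups.

import numpy as np
from scipy import stats

def cohens_d(a, b):
    # Standardized mean difference (Cohen's d) using the pooled standard deviation.
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def ttest_power(d, n1, n2, alpha=0.05):
    # Power of a two-sided independent-samples t-test when the true effect size is d.
    df = n1 + n2 - 2
    nc = d * np.sqrt(n1 * n2 / (n1 + n2))      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Assumed example: a medium effect (d = 0.5) studied with 15 subjects per group
# yields power of roughly 0.26, far below the conventional 0.8 target.
print(round(ttest_power(0.5, 15, 15), 2))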

Implications: The consequence of low power is that the actual effects of software engineering technologies will not be detected to an acceptable extent. The lack of reporting of effect sizes and the improper interpretation of effect sizes result in ignorance of the practical importance, and thereby the relevance to industry, of experimental results. The lack of control for selection bias in quasi-experiments may make these experiments less credible than randomized experiments. This is an unsatisfactory situation, because quasi-experiments serve an important role in investigating cause-effect relationships in software engineering, for example, in industrial settings. Finally, the incomplete and unstandardized reporting makes it difficult for the reader to understand an experiment and judge its results.
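
As a second aside (again not from the thesis), one common way to control for selection bias in a quasi-experiment is to include a pretest measure as a covariate, so that pre-existing group differences are adjusted for. The minimal sketch below uses statsmodels; the data and the variable names (pretest, posttest, group) are invented for the example, with a true treatment effect of 2.0.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
# Non-random assignment: subjects with higher pretest scores tend to end up in group 1.
pretest = rng.normal(50, 10, n)
group = (pretest + rng.normal(0, 5, n) > 50).astype(int)
posttest = pretest + 2.0 * group + rng.normal(0, 5, n)   # true treatment effect = 2.0
data = pd.DataFrame({"pretest": pretest, "group": group, "posttest": posttest})

naive = smf.ols("posttest ~ group", data=data).fit()               # ignores selection bias
adjusted = smf.ols("posttest ~ group + pretest", data=data).fit()  # adjusts for the pretest
print(naive.params["group"], adjusted.params["group"])  # the adjusted estimate is much closer to 2.0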

Conclusions: Insufficient quality was revealed in the reviewed experiments. This has implications for inferences drawn from the experiments and might in turn lead to the accumulation of erroneous information and the offering of misleading advice to the industry. Ways to improve this situation are suggested.
List of papers
Paper 1: Dag I.K. Sjøberg, Jo E. Hannay, Ove Hansen, Vigdis By Kampenes, Amela Karahasanovic, Nils-Kristian Liborg, and Anette C. Rekdal, A survey of controlled experiments in software engineering, IEEE Transactions on Software Engineering Vol. 31, No. 9, pp. 733-753, 2005. The paper is not available in DUO. The published version is available at: https://doi.org/10.1109/TSE.2005.97
Paper 2: Tore Dybå, Vigdis By Kampenes, and Dag I.K. Sjøberg, A systematic review of statistical power in software engineering experiments. Information and Software Technology Vol. 48, No. 8, pp. 745-755, 2006. The paper is not available in DUO. The published version is available at: https://doi.org/10.1016/j.infsof.2005.08.009
Paper 3: Vigdis By Kampenes, Tore Dybå, Jo E. Hannay, and Dag I.K. Sjøberg, A systematic review of effect size in software engineering experiments. Information and Software Technology Vol. 49, No. 11-12, pp. 1073-1086, 2007. The paper is not available in DUO. The published version is available at: https://doi.org/10.1016/j.infsof.2007.02.015
Paper 4: Vigdis By Kampenes, Tore Dybå, Jo E. Hannay, and Dag I.K. Sjøberg, A systematic review of quasi-experiments in software engineering. Information and Software Technology, In Press 2008. The paper is not available in DUO. The published version is available at: https://doi.org/10.1016/j.infsof.2008.04.006
 