Social websites have grown enormously over the last couple of years. One of the success factors has been user contributions and effective use of metadata on these contributions. While the developers can concentrate on functionality, the amount of content grows by itself. The fact that the content is unfiltered raises some interesting challenges. With large amounts of content of variable quality, it is hard to find what is most useful to users. This is where quality measurement comes in, especially on review sites where users are looking for comments and opinions on various items (e.g. consumer products like books, music, gadgets etc.).
In this thesis this theory is used to build an example system for user-contributed course evaluations, using the Python-based web framework Django. An algorithm for measuring the quality of evaluations is proposed and tested on 71 evaluations from kurskritikk.no. The algorithm combines two main variables: the first is metadata (thumbs up/down on usefulness) and the second is text properties of the evaluation. The thesis has resulted in a fully functional website for open online course evaluations, and the algorithm performs well in measuring the quality of the evaluations. It is not complete, but it is built in a way that makes future extensions easy to implement.
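The two-variable structure described above can be sketched as follows. This is a minimal illustration only: the function names, the text feature (length), and the weights are assumptions for the sketch, not the exact formula used in the thesis.

```python
# Hypothetical sketch of a two-component quality score combining vote
# metadata with a simple text property. All names and weights here are
# illustrative assumptions, not the thesis's actual algorithm.

def metadata_score(thumbs_up: int, thumbs_down: int) -> float:
    """Score from usefulness votes, in [0, 1]; neutral 0.5 with no votes."""
    total = thumbs_up + thumbs_down
    if total == 0:
        return 0.5
    return thumbs_up / total

def text_score(text: str, ideal_length: int = 500) -> float:
    """Score from a simple text property: length relative to an assumed
    ideal, capped at 1.0."""
    return min(len(text) / ideal_length, 1.0)

def quality(thumbs_up: int, thumbs_down: int, text: str,
            w_meta: float = 0.6, w_text: float = 0.4) -> float:
    """Weighted combination of the two components (weights are assumed)."""
    return w_meta * metadata_score(thumbs_up, thumbs_down) + w_text * text_score(text)
```

An evaluation with many "useful" votes and a reasonably long text would then rank above a short, unvoted one, which matches the intuition behind combining the two variables.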