H. Gruber, A. Hentschel, R. Plösch: Experiment for Comparing the Automatically Assessed Source Code Quality with Experts' Opinions, Proceedings of SQMB 2009 Workshop, held in conjunction with SE 2009 conference, March 3rd 2009, Kaiserslautern, Germany, published as Technical Report TUM-I0917 of the Technical University of Munich, July 2009.

The evaluation of software quality is supported by numerous tools but remains an extensive task that has to be carried out manually by an expert. We present a method for automatically assessing source code quality that uses a benchmarking-oriented approach to rate the results of static code analysis tools. In an experiment, we compared these results with the evaluations of several experts, who ranked the software projects according to their quality. We found that the experts' ranks correlate strongly with the ranking produced by our automatic assessment method. The approach is promising, with the restriction that we only make statements about a quality ranking of the software projects and draw no conclusions about their absolute quality.
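Agreement between an expert ranking and an automated ranking is commonly quantified with a rank correlation coefficient. The abstract does not state which measure was used; the sketch below uses Spearman's rho (for rankings without ties) with hypothetical rank values purely for illustration.

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman's rho for two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    assert len(ranks_a) == len(ranks_b)
    n = len(ranks_a)
    # Sum of squared rank differences between the two rankings.
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical example: five projects ranked by an expert and by the tool.
expert_ranks = [1, 2, 3, 4, 5]
tool_ranks = [2, 1, 3, 4, 5]
print(round(spearman_rho(expert_ranks, tool_ranks), 2))  # 0.9
```

A rho near 1 indicates strong agreement between the two rankings, which is the kind of result the experiment reports.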