One of the problems with graphic rating scales that became apparent soon after their introduction is the so-called "halo effect." When examining graphic performance ratings, Ford (2001) found a tendency for raters to give a ratee similar scores on all dimensions of performance. Evaluating a worker in this way is equivalent to evaluating the worker on a single scale rather than on many different scales measuring different aspects of job performance. Other researchers soon documented the same problem (Parrill, 1999), and a large body of literature accumulated on halo in graphic rating scales. More recent work has likewise documented halo, noting that it continues to be a pervasive problem with graphic rating scales (Landy and Farr, 2000).

For a time, it was thought that halo could be eliminated, or at least attenuated, through training: by alerting raters to this error associated with graphic rating scales, ratings would contain less halo and assessments would be more accurate. Research has shown, however, that this is not the case (Ryan, 2008). Others have proposed statistical correction as an alternative means of compensating for halo.

Halo has traditionally been considered a serious threat to the effectiveness of an evaluation system. As Cleveland, Murphy, and Williams (2009) note, organizations generally use performance evaluations to make some sort of decision about a worker and his or her job. When evaluating a person, the organization attempts to measure the worker on several criteria; in this way the worker, with the help of the organization, can become aware of his or her own strengths and identify areas for improvement. Halo eliminates this variance across dimensions. More recently, however, it has been argued that halo simply reflected measurement of the wrong kind of error: a more plausible conceptualization is that these "rating errors" actually contain some true-score variance, not just error (Hedge & Kavanagh, 2008). Regardless, the traditional criticism of the graphic rating scale's susceptibility to these "errors" no longer raises the same concern it once did.

There are other problems associated with graphic rating scales beyond the traditional halo and leniency problems. Graphic rating scales have also been cited as suffering from problems with validity, poor inter-rater agreement, and raters' personal biases (Kane and Bernardin, 2002). While important, these other issues are not as prevalent in the research literature and have not traditionally been accorded the same level of importance and influence as halo and leniency.