Presentation Title
Rasch Rating Scale Model and Traditional Psychometric Methods: A Comparative Study
Presentation Type
Poster Presentation
College
College of Social and Behavioral Sciences
Major
Psychology
Location
Event Center A&B
Faculty Mentor
Dr. Matt Riggs
Start Date
5-27-2014 1:00 PM
End Date
5-27-2014 2:30 PM
Abstract
The use of Likert scales in applied psychological research often rests on the assumption that they provide measurement at the interval level. Item scores are typically summed or averaged, and the resulting scale scores are used in parametric statistical analyses. However, Likert scales often fail to meet the requirements of true interval-level measurement. That is, the distance between “Strongly Agree” and “Agree” cannot be assumed to equal the distance between “Agree” and “Neither Agree nor Disagree.” In addition, a “Strongly Agree” on one item is not equivalent to a “Strongly Agree” on another item. The Rasch Rating Scale Model addresses the unequal-interval problem by converting item and person responses into log-odds units (logits). The purpose of this study is to provide an initial test of the differences that result from using summed or averaged scale scores versus Rasch scores when testing a set of hypotheses. Likert scales were used to measure two psychological constructs, and both averaged scores and Rasch scores were calculated for each scale. Each set of scores will then be used in two moderated linear regression analyses, and differences in the results will be evaluated. The results are not expected to reveal large differences; however, given the stricter measurement requirements of the Rasch Rating Scale Model, Rasch scores are expected to carry less measurement error, ultimately yielding a better estimate of true effects.
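For readers unfamiliar with the model, the Rasch Rating Scale Model expresses the log-odds of a person responding in category k rather than category k-1 of an item as a function of a person measure, an item difficulty, and a category threshold. The notation below is the conventional statement of the model and of a moderated regression, not wording taken from the study materials:

\[
\ln\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = B_n - D_i - F_k
\]

where \(B_n\) is the measure of person n, \(D_i\) is the difficulty of item i, and \(F_k\) is the threshold between categories k-1 and k, all expressed in logits. The moderated linear regression analyses mentioned above take the standard form

\[
Y = b_0 + b_1 X + b_2 Z + b_3 (X \times Z) + e
\]

where the interaction coefficient \(b_3\) carries the moderation effect; X, Z, and Y stand in generically for the predictor, moderator, and outcome, since the specific constructs are not named in the abstract.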