Which statistic is used to represent the internal reliability of a multiple-item self-report scale?

In statistics and research, internal consistency is typically a measure based on the correlations between different items on the same test (or the same subscale on a larger test). It measures whether several items that propose to measure the same general construct produce similar scores. For example, if a respondent expressed agreement with the statements "I like to ride bicycles" and "I've enjoyed riding bicycles in the past", and disagreement with the statement "I hate bicycles", this would be indicative of good internal consistency of the test.

Cronbach's alpha

Internal consistency is usually measured with Cronbach's alpha, a statistic calculated from the pairwise correlations between items. Cronbach's alpha ranges between negative infinity and one. Coefficient alpha will be negative whenever there is greater within-subject variability than between-subject variability.[1]
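To make the calculation concrete, here is a minimal Python/NumPy sketch of the usual variance-based formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The function name and the Likert-style responses are invented for illustration, not taken from any of the cited sources.

```python
# Minimal sketch: Cronbach's alpha from raw item scores (made-up data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 5 people to 3 Likert-type "bicycle" items
scores = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [2, 3, 2],
    [1, 2, 1],
    [3, 3, 4],
])
print(round(cronbach_alpha(scores), 3))  # about 0.95 for these invented data
```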

A commonly accepted rule of thumb for describing internal consistency is as follows:[2]

Cronbach's alpha    Internal consistency
0.9 ≤ α             Excellent
0.8 ≤ α < 0.9       Good
0.7 ≤ α < 0.8       Acceptable
0.6 ≤ α < 0.7       Questionable
0.5 ≤ α < 0.6       Poor
α < 0.5             Unacceptable

Very high reliabilities (0.95 or higher) are not necessarily desirable, as they indicate that the items may be redundant.[3] The goal in designing a reliable instrument is for scores on similar items to be related (internally consistent), but for each item to contribute some unique information as well. Note further that Cronbach's alpha is necessarily higher for tests measuring narrower constructs and lower when more generic, broad constructs are measured. This phenomenon, along with a number of other reasons, argues against using objective cut-off values for internal consistency measures.[4] Alpha is also a function of the number of items, so shorter scales will often have lower reliability estimates yet still be preferable in many situations because they place less burden on respondents.

An alternative way of thinking about internal consistency is that it is the extent to which all of the items of a test measure the same latent variable. The advantage of this perspective over the notion of a high average correlation among the items of a test – the perspective underlying Cronbach's alpha – is that the average item correlation is affected by skewness (in the distribution of item correlations) just as any other average is. Thus, whereas the modal item correlation is zero when the items of a test measure several unrelated latent variables, the average item correlation in such cases will be greater than zero. Thus, whereas the ideal of measurement is for all items of a test to measure the same latent variable, alpha has been demonstrated many times to attain quite high values even when the set of items measures several unrelated latent variables.[5][6][7][8][9][10][11] The hierarchical "coefficient omega" may be a more appropriate index of the extent to which all of the items in a test measure the same latent variable.[12][13] Several different measures of internal consistency are reviewed by Revelle & Zinbarg (2009).[14][15]
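As an illustration of the omega idea, the sketch below fits a single-factor model (the case in which hierarchical omega and total omega coincide) and applies omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of unique variances). The use of scikit-learn's FactorAnalysis and the simulated data are assumptions of this example only; dedicated tools such as the R psych package are the usual choice in practice.

```python
# Rough sketch of coefficient omega under a one-factor model (illustration only).
import numpy as np
from sklearn.decomposition import FactorAnalysis

def omega_one_factor(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) array of item scores."""
    fa = FactorAnalysis(n_components=1).fit(items)
    loadings = fa.components_.ravel()   # factor loadings (data scale)
    uniquenesses = fa.noise_variance_   # item-specific (unique) variances
    common = loadings.sum() ** 2
    return common / (common + uniquenesses.sum())

# Simulated data: one latent trait driving four items (purely hypothetical)
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent @ np.ones((1, 4)) + rng.normal(scale=0.8, size=(200, 4))
print(round(omega_one_factor(items), 3))  # close to the true value of ~0.86
```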

See also

  • Cronbach's alpha
  • Consistency (statistics)
  • Reliability (statistics)

References

  1. ^ Knapp, T. R. (1991). Coefficient alpha: Conceptualizations and anomalies. Research in Nursing & Health, 14, 457-480.
  2. ^ George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference. 11.0 update (4th ed.). Boston: Allyn & Bacon.
  3. ^ Streiner, D. L. (2003) Starting at the beginning: an introduction to coefficient alpha and internal consistency, Journal of Personality Assessment, 80, 99-103
  4. ^ Peters, G.-J. Y (2014) The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach’s alpha and the route towards more comprehensive assessment of scale quality. European Health Psychologist, 16 (2). URL: //ehps.net/ehp/index.php/contents/article/download/ehp.v16.i2.p56/1
  5. ^ Cortina. J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
  6. ^ Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
  7. ^ Green, S. B., Lissitz, R.W., & Mulaik, S. A. (1977). Limitations of coefficient alpha as an index of test unidimensionality. Educational and Psychological Measurement, 37, 827–838.
  8. ^ Revelle, W. (1979). Hierarchical cluster analysis and the internal structure of tests. Multivariate Behavioral Research, 14, 57–74.
  9. ^ Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8, 350–353.
  10. ^ Zinbarg, R., Yovel, I., Revelle, W. & McDonald, R. (2006). Estimating generalizability to a universe of indicators that all have an attribute in common: A comparison of estimators for ωh. Applied Psychological Measurement, 30, 121–144.
  11. ^ Trippi, R. & Settle, R. (1976). A Nonparametric Coefficient of Internal Consistency. Multivariate Behavioral Research, 4, 419-424. URL: //www.sigma-research.com/misc/Nonparametric%20Coefficient%20of%20Internal%20Consistency.htm
  12. ^ McDonald, R. P. (1999). Test theory: A unified treatment. Psychology Press. ISBN 0-8058-3075-8
  13. ^ Zinbarg, R., Revelle, W., Yovel, I. & Li, W. (2005). Cronbach’s α, Revelle’s β, and McDonald’s ωH: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123–133.
  14. ^ Revelle, W., Zinbarg, R. (2009) "Coefficients Alpha, Beta, Omega, and the glb: Comments on Sijtsma", Psychometrika, 74(1), 145–154.
  15. ^ Dunn, T. J., Baguley, T. and Brunsden, V. (2013), From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology. doi: 10.1111/bjop.12046

External links

  • //web.archive.org/web/20090805095348///wilderdom.com/personality/L3-2EssentialsGoodPsychologicalTest.html

Which statistic is used to represent the internal reliability of a multiple-item self-report scale?

Cronbach's alpha. It is a statistic based on the average of the inter-item correlations and is the standard index of the internal reliability of a multiple-item self-report scale.
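The "average of inter-item correlations" view corresponds to the standardized form of alpha: for k items with mean inter-item correlation r, alpha_std = k*r / (1 + (k - 1)*r). A tiny illustrative sketch with hypothetical numbers:

```python
# Standardized alpha from the number of items and the mean inter-item correlation.
def standardized_alpha(k: int, mean_inter_item_r: float) -> float:
    return k * mean_inter_item_r / (1 + (k - 1) * mean_inter_item_r)

# e.g. 10 items with an average inter-item correlation of 0.3 (invented values)
print(round(standardized_alpha(10, 0.3), 3))  # ~0.811
```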

Which type of measure operationalizes a variable?

A physiological measure operationalizes a variable by recording biological data. The problem with physiological measurement is that not everything can be measured with biological data (at least not yet). All variables must have at least two levels.

Which measurement would a researcher use to test for reliability when the data are in Likert-scale response format?

Cronbach's alpha measures the homogeneity of an instrument with a Likert-type response format, whereas the KR-20 coefficient is used to estimate the homogeneity of instruments with a dichotomous response format.
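For the dichotomous case, KR-20 replaces each item variance with p*(1 - p), where p is the proportion answering the item correctly. Below is a hedged NumPy sketch of one common form of the formula; the function name and the 0/1 responses are invented for illustration.

```python
# Sketch of the KR-20 coefficient for dichotomous (0/1) items (made-up data).
import numpy as np

def kr20(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) array of 0/1 scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion answering 1 per item
    pq_sum = (p * (1 - p)).sum()               # sum of item variances
    total_var = items.sum(axis=1).var(ddof=0)  # population-style total-score variance,
                                               # for consistency with p*(1-p)
    return (k / (k - 1)) * (1 - pq_sum / total_var)

# Hypothetical pass/fail responses of 6 people to 4 items
resp = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
print(round(kr20(resp), 3))  # ~0.833 for these invented data
```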

How are correlation coefficients used to evaluate reliability?

A correlation coefficient measures the relationship between two variables rather than the agreement between them, and is therefore commonly used to assess relative reliability or validity. A correlation coefficient closer to 1 is interpreted as indicating greater reliability or validity.
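As a minimal example of this use of correlation, the sketch below computes a Pearson r between scores from two measurement occasions (for example, test-retest); the scores are invented for illustration.

```python
# Pearson correlation as an index of relative (e.g. test-retest) reliability.
import numpy as np

time1 = np.array([12, 18, 25, 30, 22, 15, 28])  # hypothetical scores, occasion 1
time2 = np.array([14, 17, 24, 29, 20, 16, 27])  # hypothetical scores, occasion 2

r = np.corrcoef(time1, time2)[0, 1]  # correlation between the two occasions
print(round(r, 3))                   # values closer to 1 suggest higher reliability
```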
