Talk:Construct validity

From Wikipedia, the free encyclopedia

Example

I removed "Example: Intelligence" not because it isn't a good example, but because it needs some context. -Nicktalk 03:08, 21 February 2006 (UTC)

Some problems

1. I am not familiar with the rebellion against the "operationist past," and this does not sound neutral to me.

2. Given that I'm not familiar with the arguments behind this view, I'm reluctant to change the article, but it makes more sense to me to say that "construct validity refers to whether an OPERATIONALIZATION CAPTURES the unobservable (THAT IS, THEORETICAL) social construct ..." Certainly construct validity is not specifically about SCALES.

3. The phrase "unobservable idea of a unidimensional easier-to-harder dimension" confuses conceptualization and measurement. The "unobservable idea" is not the "unidimensional easier-to-harder" response scale that's used to measure it.

4. I have only a vague idea what this means: "A construct is not restricted to one set of observable indicators or attributes. It is common to a number of sets of indicators." Does this actually mean the reverse--that indicators are not unique to a specific construct? What's the point of bringing this up unless you're also going to talk about convergent and discriminant validity?

5. This is, I think, incorrect: 'Thus, "construct validity" can be evaluated by statistical methods that show whether or not a common factor can be shown to exist underlying several measurements using different observable indicators.' All the factor analysis shows is that the indicators "go together". It's an indicator of convergent validity without a test for discriminant validity, predictive validity, or concurrent validity.

6. This is both underdeveloped and contentious: "This view of a construct rejects the operationist past that a construct is neither more nor less than the operations used to measure it." I don't know anyone who thinks that the construct IS the operationalization, but I also don't see how using factor analysis to measure construct validity buys you anything interesting.

Somewhat Agree 08:15, 6 March 2006 (UTC)

All of your comments are correct. I've been meaning to re-write the article, but haven't gotten around to it. -Nicktalk 16:28, 6 March 2006 (UTC)
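Point 5 above can be made concrete with a minimal sketch (simulated data; assumes NumPy and scikit-learn are available): even when a one-factor model fits several indicators well, that only shows the indicators converge on a common factor. It says nothing about discriminant, predictive, or concurrent validity.

```python
# Sketch: fit a one-factor model to three simulated indicators that all
# reflect the same latent variable plus noise. The strong loadings show
# only that the indicators "go together" (convergent evidence).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)  # the unobserved construct (hypothetical)

# Three observable indicators, each = latent + independent noise
X = np.column_stack(
    [latent + rng.normal(scale=0.5, size=n) for _ in range(3)]
)

fa = FactorAnalysis(n_components=1).fit(X)
loadings = fa.components_[0]
print(loadings)  # all three indicators load heavily on one common factor
```

Note that a fourth, theoretically unrelated indicator could also load on this factor (e.g. through shared method variance), which is exactly why convergent evidence alone does not establish construct validity.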

Bring the article up to modern validity theory

I'm slowly working through WP's validity articles adding the last 40 years of work on the issue. Construct validity is probably the most important of the classical validities because it most approaches what current theory describes as "validity." Therefore I want to be very deliberate with my contributions and add more than I did to, say, predictive validity.

I would like to add a section citing Angoff, 1988, that modern models of validity only consider construct validity to be "true" validity, while the other types (criterion, content, predictive, etc.) are subsets, aspects, facets, or simply "types of [construct] validity-supporting evidence." Then cite the 1999 Standards, Chapter 1.

Jmbrowne (talk) 21:03, 18 February 2009 (UTC)

Here is a thought: since the modern unified view of validity is basically denoted as "construct validity" anyhow, would it be an idea to merge this article with Jmbrowne's article on Test validity? Personally I find the notion of test validity to be somewhat in conflict with the unified concept of validity, where, according to the Standards, validity is a property of the interpretation and use of test results rather than of the test itself. The classical notion of construct validity could be denoted "construct validity (classical)".

Awesomeannay (talk) 01:18, 1 September 2013 (UTC)

This article needs some Borsboom in here. — Preceding unsigned comment added by 161.76.22.8 (talk) 13:49, 11 March 2016 (UTC)

Bafflegabbish

How about rewriting this article so it doesn't require a doctorate in Postmodernism to understand? Just a thought. --75.5.66.58 (talk) 06:43, 29 March 2009 (UTC)

I agree; the article needs fixing. I had a lot of trouble understanding it, even with a background in psychology. —Entropy (T/C) 18:30, 18 February 2011 (UTC)

Measurement Theory

In "Introduction to Measurement Theory" (Mary J. Allen & Wendy M. Yen, 1979) page 109 section 5.9 Construct Validity they note: "Construct validity is the most recently developed form of validity (Cronbach & Meehl, 1955)."

Cronbach, L. J. & Meehl, P. E. Construct validity in psychological tests. Psychological Bulletin, 1955, 52, 281-302.

Cronbach is the guy Cronbach's alpha is named for. One of the *dudes* in measurement theory.

This appears to be an older reference than the one currently cited: "described in Campbell and Fiske's landmark paper (1959).[citation needed]"

Allen & Yen note in the next section, 5.10, page 109: "Multitrait-multimethod validity is an aspect of construct validity that was developed by Campbell and Fiske (1959)."

The citation in Allen & Yen is: Campbell, D. T. & Fiske, D. W. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 1959, 56, 81-105.
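Since Cronbach's alpha came up, here is a minimal sketch of how it is computed (simulated item data; the formula is the standard one, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)):

```python
# Sketch: Cronbach's alpha for a set of k test items,
#   alpha = k/(k-1) * (1 - sum(var_item) / var(total_score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated data: four items reflecting one latent trait plus noise
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
X = np.column_stack(
    [latent + rng.normal(scale=0.7, size=500) for _ in range(4)]
)
print(round(cronbach_alpha(X), 2))  # high alpha: items are internally consistent
```

Worth stressing for this talk page's theme: a high alpha indicates internal consistency (reliability), not construct validity; a scale can be highly consistent while measuring the wrong construct.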

Latest additions

This article has been expanded as part of my class's participation in the APS Wikipedia initiative. Class is History and Systems of Psychology, North Dakota State University, Spring 2013. They had been working off-line, and found that others had been expanding it in the meantime. To their credit, they recognized the high quality of the previous additions and worked around them. I agree with the high importance ratings, and am happy to say that this article is no longer a stub. James Council (talk) 04:37, 3 May 2013 (UTC)

Distressing revisions

This opening, "Construct validity is 'the degree to which a test measures what it claims, or purports, to be measuring,'" and a similar opening to the entry on Test validity are distressing because they ignore the work by Angoff, Messick, Cronbach, and others to refute this conception of measurement validity, and also misinterpret the citations used as support. This quote is also the summation pulled up by Google and threatens my respect for Wikipedia as an authoritative voice. This Wikipedia entry was much better in the past. Can anyone tell me why this has happened? — Preceding unsigned comment added by 100.35.195.240 (talk) 18:40, 19 May 2017 (UTC)

I am also confused: the definition "quoted" here (it actually only appears in one of the three sources as far as I can tell, and is misused here) seems to be the one for validity in general. Not being an expert on this topic at all, I am hesitant to make such a prominent edit in the introduction, but I believe the definition needs to be replaced by a more fitting one. These are the actual definitions I could find in the sources I was able to access:
  • "Construct validity has traditionally been defined as the experimental demonstration that a test is measuring the construct it claims to be measuring" (from Brown 2000; still leaves the question what a "construct" is supposed to be) --LukasFreeze (talk) 14:55, 21 December 2017 (UTC)

Entry opens with a misguided definition

I echo the above concern (Distressing revisions). The first sentence of the entry is a definition of classical validity, but is simply wrong when it comes to construct validity. Construct validity has its roots in the classical article by Cronbach & Meehl (1955), and it applies to inferences based on tests, not to tests themselves (contra how the entry begins now). The perhaps most notable current proponent of this view is Michael Kane, see e.g. Kane, 2013. Factor analyses and analyses of criterion validity are current counterparts of Cronbach & Meehl's idea of validating tests and instruments by examining their nomological networks (lawful connections) 1) between theoretical concepts, and 2) between theoretical concepts and associated observed concepts. The philosophical background of the concept of "construct" is discussed in Michell, 2013, and certain logical inconsistencies of construct validity practice in Borsboom et al., 2009. I think the opening should be edited to reflect the basic idea that in construct validity theorizing, what is validated are inferences based on tests and the use of tests, not the tests themselves. Droste effect (talk) 15:23, 29 January 2020 (UTC)

Hawthorne effect as an example of hypothesis guessing

This article currently gives the Hawthorne effect as an example of hypothesis guessing. I believe this must be a misinterpretation, because the Hawthorne effect is simply about the presence of an observer, without specifying that the participants are guessing the hypothesis as in the "please-you" and "screw-you" effects and other demand characteristics. Anditres (talk) 19:46, 17 January 2022 (UTC)

Entry opens with a definition of construct validation, not construct validity

The term “accumulation” in the introductory sentence, “Construct validity is the accumulation of evidence to support the interpretation of what a measure reflects”, describes the process of construct validation, not the property construct validity, cf. Borsboom et al. 2004, p. 1063: “… validity is a property, whereas validation is an activity.” Moreover, this introductory sentence is complicated and hardly understandable to non-experts: “… accumulation of evidence to support the interpretation …”.

I propose a simpler and more intuitive introductory sentence: "Construct validity concerns how well a set of indicators represents or reflects a concept that is not directly measurable." And then the existing introductory sentence could follow, given the change of "construct validity" to "construct validation".

Our description of CV is consistent with the classical work of Cronbach and Meehl (1955) and Cook and Campbell (1979), and with what we have recently published.[1] Our article also reports a literature review that revealed a great lack of understanding of construct validity.

  1. ^ Sjøberg, D. I. K.; Bergersen, G. R. (2022). "Construct validity in software engineering". IEEE Transactions on Software Engineering. doi:10.1109/TSE.2022.3176725.

Dagsj (talk) 13:09, 30 September 2022 (UTC)