The degree to which a test's results agree with those of other tests measuring the same construct.
The extent to which the assessment results positively correlate with the results of other measures designed to assess the same or similar constructs.
The general agreement among ratings, gathered independently of one another, for measures that should be theoretically related.
Evidence showing that a measure tends to correlate with other measures that assess similar constructs. Convergent validity evidence is often used to support the construct validity of a measure.
According to Campbell and Fiske (1959), convergent validity is demonstrated when, in the presence of scale items for other constructs, the scale items for a given construct move in the same direction (for reflective measures) and thus correlate highly. In a factor analysis, we would expect such items to load together on one factor (and not cross-load on another construct altogether). Convergent validity differs from reliability in that tests of reliability include only the scale items for a single construct; those items are not compared with other constructs. See also discriminant validity, which is the complement of convergent validity; together they form the construct validity of an instrument.
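The correlation pattern Campbell and Fiske describe can be illustrated with a minimal simulation sketch in Python. All data here are hypothetical: two latent constructs are generated, each measured by three noisy reflective items, and we check that items within a construct correlate highly (convergent) while items across constructs do not (discriminant).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two hypothetical latent constructs (e.g., two unrelated traits)
latent_a = rng.normal(size=n)
latent_b = rng.normal(size=n)

# Three reflective items per construct: latent signal plus measurement noise
items_a = [latent_a + 0.4 * rng.normal(size=n) for _ in range(3)]
items_b = [latent_b + 0.4 * rng.normal(size=n) for _ in range(3)]

# Full 6 x 6 inter-item correlation matrix
corr = np.corrcoef(np.vstack(items_a + items_b))

# Within-construct correlation (convergent) should be high;
# cross-construct correlation (discriminant) should be near zero.
within = corr[0, 1]   # item 1 of A vs item 2 of A
across = corr[0, 3]   # item 1 of A vs item 1 of B
print(f"within-construct r = {within:.2f}, cross-construct r = {across:.2f}")
```

In a factor analysis of such data, the three items of each construct would load together on one factor, mirroring the high within-construct correlations shown here.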
Convergent validity examines the degree to which an operationalization is similar to (converges on) other operationalizations that it theoretically should resemble. For instance, to show the convergent validity of a test of math skills, its scores can be correlated with scores on other tests that purport to measure basic math ability; high correlations would be evidence of convergent validity.
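The math-test example above can be sketched directly. The scores below are hypothetical, invented only for illustration: two tests that both purport to measure basic math ability are administered to the same students, and a high Pearson correlation between the two score vectors would count as convergent validity evidence.

```python
import numpy as np

# Hypothetical scores for 8 students on two tests that both
# purport to measure basic math ability
test_a = np.array([55, 62, 70, 48, 90, 75, 66, 81], dtype=float)
test_b = np.array([58, 60, 73, 50, 88, 70, 64, 85], dtype=float)

# Pearson correlation between the two measures; a high positive
# value is evidence of convergent validity
r = np.corrcoef(test_a, test_b)[0, 1]
print(f"convergent correlation r = {r:.2f}")
```

With real assessment data one would also report the sample size and a confidence interval for r, since small samples produce unstable correlation estimates.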