In assessment instruments, validity refers to how well a test measures what it purports to measure. The term may apply to the test items themselves, to interpretations of the scores derived from the assessment, or to the application of the test results to educational decisions. Judging an instrument's validity is often subjective, resting on experience and observation.
If an assessment has face validity, the instrument appears to measure what it is supposed to measure. Face validity is strictly a matter of appearance, not of demonstrated measurement. Potential users would reject an instrument that did not at least possess face validity, and no professional assessment instrument would pass the research and design stage without it. Informal assessment tools, however, may lack face validity. For example, an online survey that is obviously meant to sell something rather than to elicit consumer data lacks face validity: a glance at the survey makes it clear that its intention differs from its stated purpose. This lack of face validity would also likely reduce the number of subjects willing to participate.
Content validity concerns whether the content assessed by an instrument is representative of the content area itself. For example, a math assessment designed to test algebra skills would contain relevant test items for algebra rather than trigonometry. Content validity is usually determined by experts in the content area to be assessed.
Construct validity refers to whether an assessment actually measures the theoretical construct, or skill, it is intended to measure. Two types of construct validity are convergent and discriminant. If an assessment yields results similar to those of another assessment intended to measure the same skill, it has convergent validity. If an assessment shows little relationship to a measure of a construct it should be unrelated to, it has discriminant validity; that is, discriminant validity is the extent to which a test does not measure what it should not.
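In practice, convergent and discriminant validity are often examined with correlation coefficients. The sketch below uses invented scores for five students on a hypothetical new algebra test, an established algebra test, and an unrelated reading-speed measure; the data and instrument names are illustrative assumptions, not from the article.

```python
# Hypothetical illustration of convergent vs. discriminant validity
# using Pearson correlations. All scores below are invented.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

algebra_test_a = [55, 62, 70, 78, 90]       # new algebra test
algebra_test_b = [58, 60, 72, 80, 88]       # established algebra test
reading_speed  = [610, 320, 450, 500, 380]  # unrelated skill

# A high correlation with the same-construct test is convergent evidence;
# a low correlation with the unrelated measure is discriminant evidence.
print(round(pearson(algebra_test_a, algebra_test_b), 2))
print(round(pearson(algebra_test_a, reading_speed), 2))
```

Here the new test correlates strongly with the established algebra test but only weakly with reading speed, the pattern one would hope to see when claiming construct validity.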
Criterion validity concerns how well performance on a test relates to an outside criterion of the skill being measured. Two types of criterion validity are predictive and concurrent validity. Predictive validity concerns how well an individual's performance on an assessment forecasts success on some future measure; the SAT, for example, is intended to predict how well a student will perform in college. Concurrent validity refers to how the test compares with similar instruments, administered at about the same time, that measure the same criterion.
Validity of Results
Three types of validity relate primarily to the results of an assessment: internal, conclusion and external validity. If a study has internal validity, the observed relationship between its variables is genuinely causal rather than a product of confounding factors. Conclusion validity means the data support the existence of some relationship between the variables, whether positive or negative. External validity means the causal relationships drawn from the study can be generalized to other situations and populations.