Accurate measurement is pivotal for psychological research, educational and clinical decision making (Cohen & Swerdlik, 2002), personnel selection (Hough & Oswald, 2000), and managerial practices.
Measuring individual differences has a history of thousands of years. It began in China about 3,000 years ago, when an emperor decided to assess the competency of his officials. This government-developed method gradually evolved into a sophisticated multistage system for selecting candidates for various administrative positions, covering a wide range of topics including music, horsemanship, civil law, writing, Confucian principles, and knowledge of public and private ceremonies (see Du Bois, 1970, for a detailed overview of this ancient measurement system). The system remained in use until 1905, by which time Britain and the United States had already begun developing civil-service examinations as a fair way of selecting applicants for government jobs.
Modern psychometrics has developed two important theories for assessing cognitive ability and scoring tests: classical test theory (CTT) and item response theory (IRT). Our lab focuses on IRT research and its applications, including scale development, measurement equivalence assessment, and computerized adaptive testing.
Estimation of Item Response Theory Models
We are committed to improving IRT model estimation and developing high-quality software by employing innovative estimation methods, including Markov chain Monte Carlo (MCMC) and adaptive Markov chain Monte Carlo (aMCMC). These methods are expected to yield more accurate parameter estimates and smaller standard errors than traditional estimation methods such as expected a posteriori (EAP) and maximum likelihood estimation (MLE). We have developed our first program (MCMC GGUM) in C++; it is free to download for academic use from the Resources page.
We are also developing an R package for IRT model estimation that will be more user-friendly and versatile. Estimation of multidimensional IRT models is also on the agenda of this research line.
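To illustrate the general idea behind MCMC estimation of IRT models, the sketch below runs a random-walk Metropolis sampler to estimate a single examinee's latent trait. For brevity it uses the two-parameter logistic (2PL) model with hypothetical item parameters and a standard normal prior, not the GGUM itself; all names and values here are illustrative assumptions, not part of the lab's software.

```python
import math
import random

def p_correct(theta, a, b):
    """2PL probability of a correct (positive) response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_posterior(theta, responses, items):
    """Log prior (standard normal on theta) plus Bernoulli log-likelihood."""
    lp = -0.5 * theta * theta
    for u, (a, b) in zip(responses, items):
        p = p_correct(theta, a, b)
        lp += math.log(p) if u == 1 else math.log(1.0 - p)
    return lp

def mh_theta(responses, items, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for one examinee's theta.

    Returns the posterior mean and posterior standard deviation
    computed from the post-burn-in draws.
    """
    random.seed(seed)
    theta = 0.0
    cur = log_posterior(theta, responses, items)
    draws = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)   # propose a nearby value
        new = log_posterior(prop, responses, items)
        if math.log(random.random()) < new - cur:  # accept/reject step
            theta, cur = prop, new
        draws.append(theta)
    kept = draws[n_iter // 2:]                    # discard burn-in half
    mean = sum(kept) / len(kept)
    sd = (sum((x - mean) ** 2 for x in kept) / len(kept)) ** 0.5
    return mean, sd

# Hypothetical item parameters (a, b) and one response pattern.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.0), (0.9, -0.5)]
responses = [1, 1, 1, 0, 1]
est, se = mh_theta(responses, items)
```

The posterior mean serves as the trait estimate and the posterior standard deviation as its standard error, which is the sense in which MCMC-based estimation can report uncertainty directly from the sampled draws.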
Measurement Equivalence and Testing Fairness
The latest version of the Standards for Educational and Psychological Testing (AERA, APA, & NCME) highlights fairness in testing as one of the three foundations of psychological measurement, alongside reliability and validity. Yet fairness has historically been understudied, especially in non-cognitive psychological measurement. Our laboratory's MCMC approach to examining measurement equivalence has generated results that shed new light on this question.
Cognitive Processes Underlying Psychological Testing
We empirically examine the cognitive processes underlying psychological testing and investigate factors that influence the ideal-point response process.
Computerized Adaptive Testing
We apply ideal-point IRT models for measuring non-cognitive constructs (e.g., personality, vocational interests, and attitudes) to computerized adaptive testing (CAT). We seek to establish a CAT item bank and a CAT delivery system for each test and to use them in applied psychological measurement, including personnel selection, college admission, management, and clinical assessment.
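The core of a CAT engine is the item-selection rule. A common rule is to administer, at each step, the unadministered item with maximum Fisher information at the examinee's current trait estimate. The sketch below shows this rule for a 2PL dominance model with a hypothetical five-item bank; the lab's work uses ideal-point models such as the GGUM, where the information function differs but the maximum-information selection logic is the same.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a positive response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at trait level theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    """Return the index of the unadministered item with maximum
    information at the current theta estimate."""
    best_idx, best_info = None, -1.0
    for idx, (a, b) in enumerate(bank):
        if idx in administered:
            continue
        info = fisher_info(theta, a, b)
        if info > best_info:
            best_idx, best_info = idx, info
    return best_idx

# Hypothetical item bank of (a, b) parameters.
bank = [(1.0, -2.0), (1.2, -1.0), (1.5, 0.0), (1.1, 1.0), (0.9, 2.0)]

# Suppose item 2 was already administered and theta is currently 0.0:
# the most informative remaining item is item 1 (b = -1.0, a = 1.2).
next_item = select_next_item(0.0, bank, administered={2})  # -> 1
```

In a full CAT loop, the trait estimate is updated after each response (e.g., by the MCMC or EAP methods described above) and this selection step repeats until a stopping rule, such as a target standard error, is met.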