Many people do not automatically think of experimental design and statistics when they think of Education. Yet major departments of Educational Psychology throughout the country are strongly committed to teaching and systematically exploring these topics, and they have consistently led the development of knowledge on how behavioral scientists can use the best possible practices in their research. It is in this spirit that I devote almost all of my teaching and research efforts to issues of statistics and experimental design.
All of the classes that I (and others in the Measurement, Research Design and Statistics area) teach draw students from a wide variety of academic departments and colleges. This makes the courses more interesting for me and exposes our students to an array of issues not found in more narrowly defined programs. Our students learn about issues in Nursing, Speech, Social Welfare, and many other disciplines, making them more knowledgeable statistical consultants as they progress through the program.
My research usually asks how researchers in the behavioral sciences can best conduct their statistical analyses. I devote most of my time to finding best practices for answering specific questions within an analysis of variance. Using Monte Carlo simulation techniques, I construct a simulated world in which I introduce differences between treatment groups. I then use each of the competing strategies to analyze the scores I have generated and record which methods correctly detect the difference. This process is repeated half a million times to determine which strategy works best in the long run. A secondary research interest involves rating scales. Numerous factors influence how we attach meaning to the potential responses on a rating scale, including how many points there are and what anchors are attached to the positions. I am interested in how these characteristics can be manipulated to increase the information captured by the scale.
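The simulation loop described above can be sketched in a few lines of Python. This is only an illustrative sketch: the group sizes, effect size, number of replications, and the two competing strategies compared here (unadjusted per-comparison testing versus a Bonferroni-adjusted alpha) are assumptions chosen for the example, not the actual designs or procedures studied in the research.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def run_study(n_reps=2_000, n=15, effect=0.8, alpha=0.05):
    """Monte Carlo power study (illustrative): three simulated groups,
    where only group C truly differs from A and B. Two strategies are
    compared on the real A-vs-C difference: an unadjusted t-test and a
    Bonferroni-adjusted test that divides alpha by the number of
    pairwise comparisons."""
    k = 3  # number of pairwise comparisons among three groups
    hits = {"unadjusted": 0, "bonferroni": 0}
    for _ in range(n_reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        c = rng.normal(effect, 1.0, n)  # the true treatment effect
        # p-value for the A-vs-C comparison (a genuine difference)
        _, p = stats.ttest_ind(a, c)
        if p < alpha:
            hits["unadjusted"] += 1
        if p < alpha / k:  # Bonferroni-adjusted criterion
            hits["bonferroni"] += 1
    # proportion of replications in which each strategy detected it
    return {name: count / n_reps for name, count in hits.items()}
```

Repeating the loop many thousands of times estimates each strategy's long-run power; the Bonferroni-adjusted strategy necessarily detects the difference no more often than the unadjusted one, which is exactly the power trade-off the real studies quantify.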
When I am not teaching or working on research I greatly enjoy writing musicals for my church’s youth groups. Music has a wonderful structure and intricacy that in many ways resembles the structure and intricacies of statistics.
Ph.D., University of Washington, 1967
Klockars, A.J. & Yamagishi, M. (1988) “The influence of labels and position on rating scales.” Journal of Educational Measurement, 25, 85-96.
Klockars, A.J. & Hancock, G.R. (1992) “Power of recent sequential Bonferroni procedures as applied to a complete set of planned orthogonal contrasts.” Psychological Bulletin, 111, 505-510.
Klockars, A.J., Hancock, G.R., & McAweeney, M.J. (1995) “Power of unweighted and weighted versions of simultaneous and sequential multiple comparison procedures.” Psychological Bulletin, 118, 300-307.
Hancock, G.R. & Klockars, A.J. (1996) “The quest for alpha: Developments in multiple comparisons procedures in the quarter century since Games (1971).” Review of Educational Research, 66, 269-306.
Hancock, G.R. & Klockars, A.J. (1997) “Finite intersection tests: A paradigm for optimizing simultaneous and sequential inference.” Journal of Educational and Behavioral Statistics, 22, 291-307.
McAweeney, M.J. & Klockars, A.J. (1998) “Maximizing power in skewed distributions: Analysis and assignment.” Psychological Methods, 3(1), 117-122.
Klockars, A.J., Potter, N.S., & Beretvas, S.N. (1999) “Power to detect additive treatment effects with randomized block and analysis of covariance designs.” The Journal of Experimental Education, 67(2), 190-191.
Klockars, A.J. & Hancock, G.R. (in press) “Scheffé's more powerful F-protected post-hoc procedure.” Journal of Educational and Behavioral Statistics.
Winter Quarter 2007 ANOVA article (PDF).