A Threshold for a Q-Sorting Methodology for Computer-Adaptive Surveys


Computer-Adaptive Surveys (CAS) are multi-dimensional instruments in which the questions asked of a respondent depend on the questions asked previously. Because of this complexity, little work has been done on developing methods for validating the content and construct validity of CAS. We have created a new q-sorting technique in which the hierarchies that independent raters develop are transformed into a quantitative form, and that quantitative form is tested to determine the inter-rater reliability of the individual branches in the hierarchy. The hierarchies are then successively transformed to test whether they branch in the same way. The objective of this paper is to identify suitable measures, and a "good enough" threshold, for demonstrating the similarity of two CAS trees. To find suitable measures, we perform a set of bootstrap simulations that track how various statistics change as a hypothetical CAS deviates from a "true" version. We find that three measures of association (Goodman and Kruskal's lambda, Cohen's kappa, and Goodman and Kruskal's gamma) together provide information useful for assessing construct validity in CAS. In future work we are interested both in finding a "good enough" threshold for assessing the overall similarity between tree hierarchies and in diagnosing the causes of disagreements between them.
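The three association measures named in the abstract can be sketched in standard-library Python. The encoding below is an assumption for illustration only: each rater's sorting is coded as a list of integer branch labels, one per item, so that two raters' lists can be compared item by item. The paper's actual tree-transformation procedure is not reproduced here.

```python
from collections import Counter
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters' nominal labels."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

def gk_gamma(a, b):
    """Goodman and Kruskal's gamma: concordance of two ordinal codings (ties ignored)."""
    conc = disc = 0
    for (x1, y1), (x2, y2) in combinations(zip(a, b), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (conc + disc)

def gk_lambda(x, y):
    """Goodman and Kruskal's lambda: proportional reduction in error predicting y from x."""
    n = len(y)
    mode_y = max(Counter(y).values())          # errors using only y's modal category
    by_x = {}
    for xi, yi in zip(x, y):
        by_x.setdefault(xi, Counter())[yi] += 1
    within = sum(max(c.values()) for c in by_x.values())  # modal hits within each x category
    return (within - mode_y) / (n - mode_y)

# Hypothetical data: two raters' branch codes for six items
r1 = [1, 1, 2, 2, 3, 3]
r2 = [1, 1, 2, 3, 3, 3]
print(cohens_kappa(r1, r2))  # ≈ 0.75
```

Each measure captures a different facet of agreement: kappa treats branch labels as nominal, gamma exploits any ordering among branches, and lambda asks how well one rater's sorting predicts the other's.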

Meeting Name

25th European Conference on Information Systems, ECIS 2017 (2017: Jun. 5-10, Guimaraes, Portugal)


Business and Information Technology

Keywords and Phrases

Computer-adaptive; Construct validity; Q-sorting; Survey; Threshold

Document Type

Article - Conference proceedings

© 2017 Association for Information Systems (AIS), All rights reserved.

Publication Date

01 Jun 2017
