We broadly consider the topic of ranking entities on the basis of surveys or opinions. Often, numerous ranks from different respondents are available for the same entity, e.g., a candidate from a pool, and yet simply averaging those ranks may fail to identify a consensus candidate. We first consider a risk-adjusted paradigm for ranking, in which the rank is defined as the average (mean) rank plus a scalar multiple of the risk in the rank, where standard deviation serves as the risk metric. When a candidate is ranked on the basis of the opinions of a selection committee's members, or of social interactions in a social network such as Facebook, risk-adjusted ranking can select a consensus candidate who/which does not secure the best average rank but is acceptable to a large number of the opinion providers. Second, we present an approach for developing the margin of error in Likert surveys, which are increasingly used in data analytics; responses are on a five-point scale, but one is interested in a binary response, e.g., yes-no or agree-disagree. Computing the margin of error in Likert surveys is an open problem.
A. Gosavi, "Analyzing Responses from Likert Surveys and Risk-Adjusted Ranking: A Data Analytics Perspective," Procedia Computer Science, vol. 61, pp. 24-31, Elsevier, Nov 2015.
The definitive version is available at https://doi.org/10.1016/j.procs.2015.09.139
Complex Adaptive Systems (2015: Nov. 2-4, San Jose, CA)
Engineering Management and Systems Engineering
Keywords and Phrases
Adaptive systems; Complex networks; Errors; Risks; Social networking (online); Surveying; Binary response; Likert scale; Margin of error; Ranking; Selection Committee; Social interactions; Social media; Standard deviation; Surveys
International Standard Serial Number (ISSN)
Article - Journal
© 2015 The Authors. All rights reserved.
Creative Commons Licensing
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.