Autocompletion interfaces make crowd workers slower, but their use promotes response diversity

Authors

  • Xipei Liu
  • James Bagrow

DOI:

https://doi.org/10.15346/hc.v6i1.3

Keywords:

Crowdsourcing, Creativity and Ideation, Natural Language Processing

Abstract

Creative tasks such as ideation or question proposal are powerful applications of crowdsourcing, yet the number of workers available for addressing practical problems is often insufficient. Scalable crowdsourcing therefore requires extracting as much efficiency and information as possible from the available workers. One option for text-focused tasks is to provide assistive technology, such as an autocompletion user interface (AUI), to help workers input text responses. But evidence for the efficacy of AUIs is mixed. Here we designed and conducted a randomized experiment in which workers were asked to provide short text responses to given questions. Our experimental goal was to determine whether an AUI helps workers respond more quickly and more consistently by mitigating typos and misspellings. Surprisingly, we found that neither occurred: workers assigned to the AUI treatment were slower than those assigned to the non-AUI control, and their responses were more diverse, not less diverse, than those of the control. Both the lexical and semantic diversities of responses were higher, with the latter measured using word2vec. A crowdsourcer interested in worker speed may want to avoid using an AUI, but an AUI's boost to response diversity may be valuable to crowdsourcers who want to elicit as much novel information from workers as possible.
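
The abstract reports that semantic diversity was measured with word2vec but does not detail the computation. Below is a minimal Python sketch of one common approach, assuming gensim's Word2Vec, per-response embeddings formed by averaging word vectors, and mean pairwise cosine distance as the diversity score; the response list, hyperparameters, and helper functions are illustrative assumptions, not the authors' actual pipeline.

    import itertools

    import numpy as np
    from gensim.models import Word2Vec

    # Hypothetical worker responses to a single question; in the actual
    # experiment these would come from the AUI and non-AUI groups.
    responses = [
        "a red apple",
        "a green pear",
        "fresh orange juice",
        "an old bicycle",
    ]

    # Train a small word2vec model on the tokenized responses. The corpus
    # and hyperparameters here are illustrative assumptions only; a large
    # pretrained embedding could be substituted.
    tokenized = [r.lower().split() for r in responses]
    model = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, seed=1)

    def embed(tokens):
        """Represent a response as the mean of its word vectors."""
        return np.mean([model.wv[t] for t in tokens], axis=0)

    def cosine_distance(u, v):
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    # Semantic diversity as the mean pairwise cosine distance between
    # response embeddings: larger values mean the responses are more
    # spread out in the embedding space.
    embeddings = [embed(t) for t in tokenized]
    diversity = np.mean([cosine_distance(u, v)
                         for u, v in itertools.combinations(embeddings, 2)])
    print(f"semantic diversity: {diversity:.3f}")

Under a scheme like this, the higher semantic diversity reported for the AUI treatment corresponds to a larger mean pairwise distance among that group's response embeddings.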

References

Allahbakhsh, M, Benatallah, B, Ignjatovic, A, Motahari-Nezhad, H. R, Bertino, E, and Dustdar, S. (2013). Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing 17, 2 (2013), 76–81.

Anson, D, Moist, P, Przywara, M, Wells, H, Saylor, H, and Maxime, H. (2006). The effects of word completion and word prediction on typing rates using on-screen keyboards. Assistive Technology 18, 2 (2006), 146–154.

Bast, H and Weber, I. (2006). Type less, find more: fast autocompletion search with a succinct index. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 364–371.

Cheng, J, Teevan, J, Iqbal, S. T, and Bernstein, M. S. (2015). Break It Down: A Comparison of Macro- and Microtasks. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, 4061–4064. DOI:http://dx.doi.org/10.1145/2702123.2702146

Demartini, G. (2016). Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge. In Proceedings of the Conference on Human Computation and Crowdsourcing (HCOMP 2016). Sheffield.

Karger, D. R, Oh, S, and Shah, D. (2014). Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems. Operations Research 62, 1 (2014), 1–24. DOI:http://dx.doi.org/10.1287/opre.2013.1235

Kittur, A, Chi, E. H, and Suh, B. (2008). Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 453–456.

Koester, H. H and Levine, S. P. (1994). Modeling the speed of text entry with a word prediction interface. IEEE Transactions on Rehabilitation Engineering 2, 3 (Sep 1994), 177–187. DOI:http://dx.doi.org/10.1109/86.331567

Lasecki, W. S, Rzeszotarski, J. M, Marcus, A, and Bigham, J. P. (2015). The Effects of Sequence and Delay on Crowd Work. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, 1375–1378. DOI:http://dx.doi.org/10.1145/2702123.2702594

Li, Q, Ma, F, Gao, J, Su, L, and Quinn, C. J. (2016). Crowdsourcing high quality labels with a tight budget. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining. ACM, 237–246.

Little, G, Chilton, L. B, Goldman, M, and Miller, R. C. (2010). Exploring iterative and parallel human computation processes. In Proceedings of the ACM SIGKDD workshop on human computation. ACM, 68–76.

Magnuson, T and Hunnicutt, S. (2002). Measuring the effectiveness of word prediction: The advantage of long-term use. TMH-QPSR 43, 1 (2002), 57–67.

McAndrew, T. C and Bagrow, J. P. (2016). Reply & Supply: Efficient crowdsourcing when workers do more than answer questions. arXiv preprint arXiv:1611.00954 (2016).

Mikolov, T, Chen, K, Corrado, G, and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).

Mikolov, T, Sutskever, I, Chen, K, Corrado, G. S, and Dean, J. (2013b). Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 3111–3119.

Sevenster, M, van Ommering, R, and Qian, Y. (2012). Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary. Journal of Biomedical Informatics 45, 1 (2012), 107–119.

Tran-Thanh, L, Venanzi, M, Rogers, A, and Jennings, N. R. (2013). Efficient budget allocation with accuracy guarantees for crowdsourcing classification tasks. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, 901–908.

Wang, Z, Wang, H, Wen, J.-R, and Xiao, Y. (2015). An Inference Approach to Basic Level of Categorization. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, 653–662.

Welinder, P and Perona, P. (2010). Online crowdsourcing: rating annotators and obtaining cost-effective labels. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 25–32.

Wu, W, Li, H, Wang, H, and Zhu, K. Q. (2012). Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. ACM, 481–492.

Published

2019-06-02

How to Cite

Liu, X., & Bagrow, J. (2019). Autocompletion interfaces make crowd workers slower, but their use promotes response diversity. Human Computation, 6(1), 42–55. https://doi.org/10.15346/hc.v6i1.3

Issue

Vol. 6 No. 1 (2019)

Section

Research