Local Crowdsourcing for Annotating Audio: the Elevator Annotator platform

Authors

  • Themistoklis Karavellas, Netherlands Institute for Sound and Vision
  • Anggarda Prameswari, Netherlands Institute for Sound and Vision
  • Oana Inel, Vrije Universiteit Amsterdam
  • Victor de Boer, Vrije Universiteit Amsterdam, http://orcid.org/0000-0001-9079-039X

DOI:

https://doi.org/10.15346/hc.v6i1.1

Keywords:

Crowdsourcing, local crowdsourcing, audio annotation

Abstract

Crowdsourcing and other human computation techniques have proven useful for collecting large numbers of annotations for various datasets. In the majority of cases, crowdsourcing campaigns are run on online platforms. Local crowdsourcing is a variant in which annotation is done at specific physical locations. This paper describes a local crowdsourcing concept, platform and experiment. The case setting concerns eliciting annotations for an audio archive. For the experiment, we developed a hardware platform designed to be deployed in building elevators. To evaluate the effectiveness of the platform and to test the influence of location on the annotation results, an experiment was set up in two different locations, with two different user interaction modalities used in each location. The results show that our simple local crowdsourcing setup achieves acceptable accuracy levels at up to 4 annotations per hour, and that location has a significant effect on accuracy.
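To make the setup concrete, the sketch below shows what a minimal annotation-kiosk loop of this kind could look like. It is an illustration only, not the authors' implementation: the clips/ folder, the example yes/no questions, the command-line player aplay, the keyboard keys standing in for the physical interaction modalities, and the annotations.csv log file are all assumptions made for the sake of the example.

# Illustrative sketch of an elevator-style audio annotation kiosk loop.
# Assumptions (not from the paper): clips live in a local "clips/" folder,
# each clip has a yes/no question attached, the rider answers via the "y"/"n"
# keys standing in for physical controls, and answers are appended to a CSV log.
import csv
import random
import subprocess
import time
from pathlib import Path

CLIPS = [
    # (audio file, question posed to the rider) -- hypothetical examples
    (Path("clips/fragment_01.wav"), "Do you hear a musical instrument? (y/n)"),
    (Path("clips/fragment_02.wav"), "Do you hear human speech? (y/n)"),
]
LOG_FILE = Path("annotations.csv")

def play_clip(path: Path) -> None:
    # Hand playback to an external player; 'aplay' is common on Linux boards,
    # but any command-line audio player could be substituted here.
    subprocess.run(["aplay", str(path)], check=False)

def ask(question: str) -> str:
    # Keep prompting until a valid yes/no answer is given.
    answer = ""
    while answer not in {"y", "n"}:
        answer = input(question + " ").strip().lower()
    return answer

def main() -> None:
    while True:
        clip, question = random.choice(CLIPS)
        play_clip(clip)
        answer = ask(question)
        # Append timestamp, clip name and answer to the annotation log.
        with LOG_FILE.open("a", newline="") as f:
            csv.writer(f).writerow([time.time(), clip.name, answer])
        print("Thank you for annotating!")

if __name__ == "__main__":
    main()

Annotations logged this way could later be aggregated per clip (for example by majority vote) and compared against a ground truth to obtain accuracy figures such as those reported above.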

Published

2019-06-02

How to Cite

Karavellas, T., Prameswari, A., Inel, O., & de Boer, V. (2019). Local Crowdsourcing for Annotating Audio: the Elevator Annotator platform. Human Computation, 6(1), 1-11. https://doi.org/10.15346/hc.v6i1.1

Section

Briefs