Wednesday February 9, 8:30am-5pm, Hong Kong (see overview)
The advent of crowdsourcing is revolutionizing data annotation, evaluation, and other traditionally labor-intensive processes by dramatically reducing the time, cost, and effort involved. This in turn is driving a disruptive shift in search and data mining methodology in areas such as:
- Evaluation: The Cranfield paradigm for search evaluation requires manually assessing the relevance of documents to search queries. Recent work on stochastic evaluation has reduced, but not removed, this need for manual assessment.
- Supervised Learning: While the traditional costs of data annotation have pushed recent machine learning work (e.g., Learning to Rank) toward greater use of unsupervised and semi-supervised methods, crowdsourcing has made labeled data far easier to acquire, driving a potential resurgence of fully supervised methods.
- Applications: Crowdsourcing has introduced exciting new opportunities to integrate human labor into automated systems: handling difficult cases where automation fails, and exploiting the breadth of worker backgrounds, geographic dispersion, and real-time responsiveness.
Audience and Scope
This workshop will bring together researchers and practitioners of crowdsourcing techniques. It is intended to serve as a catalyst for future collaborations and as one of the main forums for sharing the latest developments in this area. The workshop will give participants an opportunity to hear about and discuss key issues such as:
- Advantages and disadvantages of crowdsourcing vs. traditional methods
- When and how to use crowdsourcing for an experiment
- How to increase quality and throughput of crowdsourcing
- How to detect cheating and handle noise in crowdsourcing (a minimal illustration of noise handling follows this list)
- General guidelines and best practices of crowdsourcing experiments
- The latest improvements and the current state of the art in crowdsourcing systems and methods
- The reach and potential of recent innovative applications in the area
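As a minimal illustration of the noise-handling issue above, the sketch below aggregates redundant worker judgments by simple majority vote. The data layout, labels, and function name are hypothetical, and real deployments typically combine such aggregation with worker-quality modeling.

    from collections import Counter

    def majority_vote(judgments):
        """Aggregate redundant worker labels for one item by simple majority.

        judgments: list of labels (e.g. 'relevant' / 'not relevant') collected
        from different workers for the same item. Returns the most common label
        and its agreement ratio, a rough confidence signal for the consensus.
        """
        counts = Counter(judgments)
        label, votes = counts.most_common(1)[0]
        return label, votes / len(judgments)

    # Hypothetical example: three workers judged each query-document pair.
    crowd_data = {
        ("q1", "doc7"): ["relevant", "relevant", "not relevant"],
        ("q1", "doc9"): ["not relevant", "not relevant", "not relevant"],
    }

    for item, labels in crowd_data.items():
        consensus, agreement = majority_vote(labels)
        print(item, consensus, f"agreement={agreement:.2f}")

Agreement ratios from a scheme like this are often used to decide which items to re-judge or to route to expert assessors.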