This thesis addresses challenges in the elicitation and aggregation of crowd information in settings where an information collector, called the center, has limited knowledge about the information providers, called agents. Each agent is assumed to hold noisy private information that yields a high information gain for the center when aggregated with the private information of other agents. We address two particular issues in eliciting crowd information: 1) how to incentivize agents to participate and provide accurate data; 2) how to aggregate crowd information so that the negative impact of agents who provide low-quality information is bounded. We examine three different information elicitation settings.

In the first elicitation setting, agents report their observations of a single phenomenon that represents an abstraction of a crowdsourcing task. The center itself does not observe the phenomenon, so it rewards agents by comparing their reports. Clearly, a rational agent bases her reporting strategy on what she believes about the other agents, called peers. We prove that, in general, no payment mechanism can achieve strict properness (i.e., make truthful reporting a strict equilibrium strategy) if agents only report their observations, even if they share a common belief system. This motivates the use of payment mechanisms that are based on an additional report. We show that a general payment mechanism cannot have a simple structure, often adopted by prior work, and that in the limit case, when observations can take real values, agents are constrained to share a common belief system. Furthermore, we develop several payment mechanisms for the elicitation of non-binary observations (the first sketch below illustrates the additional-report approach).

In the second elicitation setting, a group of agents observes multiple a priori similar phenomena. Owing to the a priori similarity condition, this setting is a refinement of the former one and makes it possible to achieve stronger incentive properties without requiring additional reports or constraining agents to share a common belief system. We extend the existing mechanisms to allow non-binary observations by constructing strongly truthful mechanisms (i.e., mechanisms in which truthful reporting is the highest-paying equilibrium) for different types of agent populations (second sketch below).

In the third elicitation setting, agents observe a time-evolving phenomenon, and a few of them, whose identity is known, are trusted to report truthful observations. The existence of trusted agents makes this setting much more stringent than the previous ones. We show that, in the context of online information aggregation, one can not only incentivize agents to provide informative reports, but also limit the effectiveness of malicious agents who deliberately misreport. To do so, we construct a reputation system that bounds the negative impact that any misreporting strategy can have on the learned aggregate (third sketch below).

Finally, we experimentally verify the effectiveness of the novel elicitation mechanisms in community sensing simulation testbeds and in a peer grading experiment.
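To make the additional-report idea of the first setting concrete, the following is a minimal sketch of a classic payment mechanism from the literature, Prelec's Bayesian Truth Serum, in which each agent submits an observation plus a prediction of how the population will report. It is offered for illustration only, not as one of the mechanisms developed in the thesis; the function name, the smoothing constants, and the `alpha` weight are our assumptions.

```python
import numpy as np

def bts_payments(answers, predictions, alpha=1.0):
    """Bayesian-Truth-Serum-style payments (illustrative sketch).

    answers:     length-n int array of reports in {0, ..., K-1}
    predictions: n x K array; row i is agent i's predicted distribution
                 of the population's answers (the additional report)
    alpha:       weight of the prediction score (hypothetical parameter)
    """
    n, k = predictions.shape
    # Empirical answer frequencies x_bar (smoothed to avoid log(0)).
    x_bar = (np.bincount(answers, minlength=k) + 1e-9) / (n + k * 1e-9)
    # Geometric mean of the predicted frequencies, p_bar.
    p_bar = np.exp(np.log(np.clip(predictions, 1e-9, 1.0)).mean(axis=0))
    payments = np.empty(n)
    for i in range(n):
        # Information score: rewards answers that turn out to be more
        # common than collectively predicted ("surprisingly common").
        info = np.log(x_bar[answers[i]] / p_bar[answers[i]])
        # Prediction score: a logarithmic proper scoring rule evaluated
        # against the empirical answer distribution.
        pred = np.sum(x_bar * np.log(np.clip(predictions[i], 1e-9, 1.0) / x_bar))
        payments[i] = info + alpha * pred
    return payments
```

For simplicity the population statistics include the scored agent herself; Bayesian Truth Serum's incentive guarantees are asymptotic in the population size, which is one reason the thesis studies mechanisms with stronger finite-population properties.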
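For the second setting, with multiple a priori similar tasks, a sketch in the spirit of a peer-truth-serum payment shows how the empirical answer statistics across tasks can replace both the additional report and the common-belief assumption. The variable names and Laplace smoothing below are our assumptions, and the batch statistics would ideally be computed excluding the scored agent and her tasks.

```python
import numpy as np

def ptsc_payment(my_answers, peer_answers, all_answers, num_categories):
    """Peer-truth-serum-style payment for the multi-task setting (sketch).

    my_answers:   the agent's reports on her tasks, values in {0, ..., K-1}
    peer_answers: a matched peer's reports on the same tasks
    all_answers:  reports of all agents on all tasks, used to estimate
                  the empirical answer frequencies R
    """
    k = num_categories
    # Empirical frequency R(x) of each answer across the batch (smoothed).
    r = (np.bincount(all_answers, minlength=k) + 1.0) / (len(all_answers) + k)
    # Agreeing with the peer on answer x pays 1/R(x), so matching on rare
    # answers pays more; the constant -1 shift makes uninformed reporting
    # strategies unprofitable in expectation.
    scores = [(1.0 / r[a]) * (a == p) - 1.0
              for a, p in zip(my_answers, peer_answers)]
    return float(np.mean(scores))
```

Scaling the reward for agreement by the inverse answer frequency is what removes the profitability of herding on the a priori most likely answer, which a plain output-agreement payment would otherwise invite.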
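Finally, for the third setting, the following toy sketch illustrates the general shape of influence-bounded online aggregation: reports are combined with reputation weights, and reputations shrink multiplicatively when a report deviates from the trusted agents' reports. The update rule, the parameters `eta` and `init_rep`, and the class interface are our assumptions, not the thesis's construction.

```python
import numpy as np

class ReputationAggregator:
    """Toy online aggregator with multiplicative reputation updates (sketch).

    Trusted agents (known identities) keep a fixed reputation of 1; every
    other agent starts with a small reputation that shrinks whenever her
    reports deviate from the trusted ones, so the total influence any
    misreporting strategy can exert on the aggregate stays bounded.
    """

    def __init__(self, num_agents, trusted_ids, eta=0.5, init_rep=0.1):
        self.rep = np.full(num_agents, init_rep)
        self.trusted = np.zeros(num_agents, dtype=bool)
        self.trusted[list(trusted_ids)] = True
        self.rep[self.trusted] = 1.0
        self.eta = eta  # learning rate (hypothetical parameter)

    def step(self, reports):
        """Aggregate one round of real-valued reports, update reputations."""
        reports = np.asarray(reports, dtype=float)
        weights = self.rep / self.rep.sum()
        aggregate = float(weights @ reports)
        # Ground-truth proxy: the mean of the trusted agents' reports.
        truth = reports[self.trusted].mean()
        # Multiplicative update: the loss is the squared deviation from
        # the trusted reports, so persistent misreporting drives an
        # agent's reputation, and hence her weight, toward zero.
        loss = (reports - truth) ** 2
        self.rep = np.where(self.trusted, 1.0,
                            self.rep * np.exp(-self.eta * loss))
        return aggregate
```

Because untrusted reputations start small and can only decrease, the cumulative distortion a malicious agent can inject over all rounds is capped by her initial weight, mirroring in miniature the kind of bound the reputation system in the thesis provides.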