Leveraging Cognitive Models for the Wisdom of Crowds in Sequential Decision Tasks


Many decisions we face in life are sequential: alternatives appear over time, and we must decide whether to take the current opportunity and stop searching or to continue evaluating potentially better future alternatives. Research suggests that humans are notoriously poor at stopping optimally in sequential decision-making tasks. These decisions are difficult because they require considering how past, present, and future choices affect the outcome. Recent research suggests that the wisdom of the crowd (WoC) — that is, aggregated decisions of many people that outperform most individuals — can be applied to sequential decision tasks and may help improve stopping decisions. However, current models rely on fitting human data, making it difficult to predict how those individuals would behave on new problems. Furthermore, these models do not account for the learning process that humans undergo while making these decisions. In this work, we demonstrate that simulated agents using a cognitive model derived from Instance-Based Learning Theory (IBLT) can produce WoC similar to the WoC of human participants in two sequential decision tasks. We demonstrate that the WoC performance of simulated groups of agents is better than the performance of most individual agents, and that the Instance-Based Learning (IBL) crowd behaves similarly to the human crowd. Thus, cognitive models that account for learning and experience can be used to inductively predict the behavior of human crowds in sequential decision tasks.
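To give a feel for the modeling approach, the following is a minimal sketch of an IBL agent, not the paper's implementation. It uses the standard IBLT mechanisms from ACT-R: power-law memory decay with activation noise, and a blended value computed as a softmax-weighted average of remembered outcomes. Parameter names (`decay`, `noise`, `temperature`) and the optimistic `default_utility` that drives early exploration are conventional choices assumed here for illustration.

```python
import math
import random


class IBLAgent:
    """Toy Instance-Based Learning agent for a stop/continue choice.

    Assumed parameterization (illustrative, not the authors' exact model):
    decay d, activation noise sigma, and softmax temperature tau.
    """

    def __init__(self, decay=0.5, noise=0.25, temperature=1.0,
                 default_utility=5.0, seed=0):
        self.d = decay
        self.sigma = noise
        self.tau = temperature
        self.default_utility = default_utility
        self.rng = random.Random(seed)
        self.memory = {}  # option -> list of (timestamp, observed outcome)
        self.t = 0        # internal clock, advanced once per decision

    def _activation(self, timestamp):
        # Activation of one stored instance: ln((t - t_j)^(-d)) + noise.
        return -self.d * math.log(self.t - timestamp) + self.rng.gauss(0, self.sigma)

    def blended_value(self, option):
        instances = self.memory.get(option)
        if not instances:
            # Unseen options get an optimistic default, encouraging exploration.
            return self.default_utility
        acts = [self._activation(ts) for ts, _ in instances]
        # Retrieval probabilities via a Boltzmann softmax over activations.
        exps = [math.exp(a / self.tau) for a in acts]
        z = sum(exps)
        return sum((e / z) * outcome
                   for e, (_, outcome) in zip(exps, instances))

    def choose(self, options):
        self.t += 1
        return max(options, key=self.blended_value)

    def learn(self, option, outcome):
        # Store the experienced outcome as a new instance in memory.
        self.memory.setdefault(option, []).append((self.t, outcome))
```

A crowd of such agents can be simulated by running many instances with different seeds on the same task and aggregating their stopping decisions, which is the kind of simulated WoC the abstract describes.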

Aug 29, 2022
Erin H. Bugbee
Cognitive Decision Science PhD Student at Carnegie Mellon

I study how humans learn and make sequential decisions from experience, and I do so by building computational cognitive models of human and artificial decision making and through behavioral experimentation.