In the optimal stopping problem, a decision maker aims to select the option that maximizes reward in a sequence, under the constraint that each option must be accepted or rejected at the time it is presented. Past literature suggests that people use a series of thresholds to make decisions (Lee, 2006), and researchers have developed a hierarchical Bayesian model, Bias-From-Optimal (BFO), to characterize these thresholds (Guan et al., 2015, 2020). BFO relies on optimal thresholds and the idea that people’s thresholds are characterized by how far they deviate from optimal and by how this bias increases or decreases over the course of the sequence. In this work, we challenge the assumption that people use thresholds to make decisions. We develop a cognitive model based on Instance-Based Learning (IBL) Theory (Gonzalez et al., 2003) to demonstrate an inductive process by which individual thresholds can emerge, without assuming that people use thresholds or relying on optimal thresholds. The IBL model makes decisions by considering the current option’s value and the distance of its position from the end of the sequence, and it learns through feedback on past decisions. Using this model, we simulate the choices of 56 individuals and compare these simulations with the empirical data of Guan et al. (2020). Our results demonstrate that the IBL model replicates human behavior and reproduces the BFO model’s thresholds without assuming any thresholds. Overall, our approach improves upon previous methods by producing cognitively plausible, human-like choices. The IBL model can therefore be used to predict human risk tendencies in sequential choice tasks.
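The decision mechanism summarized above can be illustrated with a minimal sketch of standard IBL Theory machinery: instances of (state, action, outcome) are stored in memory, each group of instances receives an activation that decays with time and includes noise, and the agent chooses the action with the higher activation-weighted (blended) value of past outcomes. All parameter values, the state coding (option value paired with distance from the end of the sequence), and the class interface here are illustrative assumptions, not the paper's exact specification.

```python
import math
import random


class IBLAgent:
    """Illustrative Instance-Based Learning agent (a sketch, not the
    authors' exact model). States could encode the current option's
    value and its distance from the end of the sequence."""

    def __init__(self, decay=0.5, noise=0.25, seed=1):
        self.d = decay                       # memory decay (assumed value)
        self.sigma = noise                   # activation noise (assumed value)
        self.tau = noise * math.sqrt(2)      # blending temperature (common choice)
        self.rng = random.Random(seed)
        self.t = 1                           # global event counter
        self.memory = {}                     # (state, action, outcome) -> [timesteps]

    def store(self, state, action, outcome):
        """Record one experienced instance and advance time."""
        self.memory.setdefault((state, action, outcome), []).append(self.t)
        self.t += 1

    def _activation(self, times):
        # Base-level activation: power-law decay over past occurrences,
        # plus logistic noise (ACT-R style).
        base = math.log(sum((self.t - tj) ** (-self.d) for tj in times))
        eps = self.rng.uniform(1e-6, 1 - 1e-6)
        return base + self.sigma * math.log((1 - eps) / eps)

    def blended_value(self, state, action):
        """Softmax-weighted average of stored outcomes for (state, action);
        returns None when there is no relevant experience yet."""
        acts, outs = [], []
        for (s, a, o), times in self.memory.items():
            if s == state and a == action:
                acts.append(self._activation(times))
                outs.append(o)
        if not acts:
            return None
        weights = [math.exp(a / self.tau) for a in acts]
        z = sum(weights)
        return sum(w / z * o for w, o in zip(weights, outs))

    def choose(self, state, default=0.5):
        """Accept the current option if its blended value beats rejecting;
        `default` stands in for unexperienced alternatives."""
        v_acc = self.blended_value(state, "accept")
        v_rej = self.blended_value(state, "reject")
        v_acc = default if v_acc is None else v_acc
        v_rej = default if v_rej is None else v_rej
        return "accept" if v_acc >= v_rej else "reject"
```

In a sequential choice task, the agent would call `choose` at each position, observe the eventual reward, and `store` the experienced instance, so that decision tendencies (and any apparent thresholds) arise from accumulated feedback rather than being built in.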