That product you’ve seen three times this week? The one you suddenly “need”? It’s not a sign from the universe. It’s your algorithm, working overtime to bring you closer to checkout. It’s not that your phone is listening to you. You’re just not noticing how much you’re telling it. Every silent scroll is a secret spoken to the algorithm, and the algorithm remembers everything.
Every time you pause on a video, rewatch a reel, or scroll just a little slower on a website, you’re revealing something that may be more valuable than the product you end up buying: your attention and your data. These aren’t random clicks; they’re behavioral data points, and behind every one of them is an algorithm learning how to influence and sell to the next consumer. Modern consumerism is no longer built around needs, but around wants. Consumer decisions are increasingly optimized by prediction, driven by two fields working in tandem: behavioral economics and machine learning. Behavioral economics studies how psychological patterns shape human decision-making, often showing that people deviate from rational economic models. Machine learning is the study of computer systems that learn from data without being explicitly programmed, using algorithms and statistical models to detect patterns and make predictions. Together, these fields are reshaping how we behave, what we desire, and even how we think. But to see how, it helps to look more closely at how machine learning actually works.
Machine learning’s predictive power is probably most familiar through the recommendation algorithms that curate our online experiences. When people say “the algorithm,” they usually mean a recommendation engine built on two main models: collaborative filtering and content-based filtering. Collaborative filtering links users who behave similarly, whether they work in parallel industries, live in the same area, or spend a similar amount of time on a website. From these related data points, the model builds an audience profile and infers that what one user liked, a similar user will like too. Content-based filtering, on the other hand, focuses on the attributes of the item itself, its brand, price point, and category, then matches it to things you’ve liked or interacted with before, personalizing the product for you. Combining the two models yields even higher recommendation accuracy (Geetha et al., 2018). But these models go beyond recommendations (Jannach & Adomavicius, 2016). When you repeatedly see the same trend or headline, or hear the same song, the subtle repetition of content slowly shapes your desires. These models deliberately reshape your digital environment, deciding what you see, when you see it, and how often you see it, creating a filter bubble of goods that follows you across websites and apps (Pariser, 2011).
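To make the two models concrete, here is a minimal, illustrative sketch in Python. Everything in it, the toy ratings matrix, the item feature vectors, and the blending weight, is invented for illustration; real recommendation engines operate on billions of interactions, but the underlying logic is the same, and blending the two signals is a simple stand-in for the kind of hybrid approach cited above.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

# Toy item feature vectors (e.g., brand, price tier, category signals).
item_features = np.array([
    [1, 0, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def collaborative_score(user, item):
    """Predict a rating from users whose behavior looks similar."""
    others = [u for u in range(len(ratings)) if u != user]
    sims = np.array([cosine(ratings[user], ratings[u]) for u in others])
    vals = np.array([ratings[u][item] for u in others])
    mask = vals > 0  # only count users who actually rated this item
    if not mask.any():
        return 0.0
    return (sims[mask] @ vals[mask]) / (np.abs(sims[mask]).sum() + 1e-9)

def content_score(user, item):
    """Score an item by how closely its features match what the user liked."""
    liked = ratings[user] > 3
    if not liked.any():
        return 0.0
    profile = item_features[liked].mean(axis=0)  # the user's taste profile
    return cosine(profile, item_features[item])

def hybrid_score(user, item, alpha=0.5):
    """Blend both signals. Note: the two scores live on different scales;
    a real system would normalize them before combining."""
    return alpha * collaborative_score(user, item) + \
           (1 - alpha) * content_score(user, item)

print(hybrid_score(user=0, item=2))
```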
More complex systems use deep learning and reinforcement learning, adapting their recommendations based on how successfully they influence your actions (Chen et al., 2020). Deep learning and reinforcement learning are types of machine learning that let computer systems adapt on their own. Deep learning loosely mimics how the human brain processes information, while reinforcement learning trains a system to make decisions by rewarding actions that lead to better outcomes (think of an AI learning to win at rock-paper-scissors through repeated practice and feedback). Every action, measured through human–computer interaction (HCI), feeds continuous data back into these systems, training them to anticipate a user’s preferences and behavior. The systems then subtly guide the user toward the brand’s objectives, such as maximizing engagement, profit, or data acquisition. The goal isn’t just to guess your next decision. It’s to steer you toward it without you noticing.
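As a concrete (and heavily simplified) illustration of that feedback loop, here is a toy epsilon-greedy bandit, one of the simplest reinforcement-learning techniques, learning which of three recommendations earns the most clicks. The item names and click-through rates are invented, and production systems are far more elaborate, but the reward-driven loop is the same in spirit: act, observe the user, update, repeat.

```python
import random

# Hypothetical true click-through rates the system does NOT know in advance.
TRUE_CTR = {"sneakers": 0.05, "headphones": 0.12, "keychain": 0.02}

estimates = {item: 0.0 for item in TRUE_CTR}  # learned value of each action
counts = {item: 0 for item in TRUE_CTR}
EPSILON = 0.1  # how often to explore instead of exploiting the best guess

for _ in range(10_000):
    # Choose an action: mostly the best-known item, sometimes a random one.
    if random.random() < EPSILON:
        item = random.choice(list(TRUE_CTR))
    else:
        item = max(estimates, key=estimates.get)

    # Environment feedback: did the simulated user click?
    reward = 1.0 if random.random() < TRUE_CTR[item] else 0.0

    # Update the running-average estimate for that action.
    counts[item] += 1
    estimates[item] += (reward - estimates[item]) / counts[item]

print(estimates)  # converges toward the true click-through rates
```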
Beyond the technical complexity of these systems, psychology and behavioral economics also play a role, reminding us that consumers are not purely rational but emotional, impulsive, and easily influenced by context. One concept is reference dependence: we evaluate a price not in absolute terms but against a reference point set by the other prices around it (Kahneman & Tversky, 1979). When all the jeans on your feed are between $100 and $150, an $80 pair seems like a steal, even if it initially seemed out of budget. The algorithm sets your reference point.
Another concept is loss aversion: we fear losses more than we value equivalent gains (Tversky & Kahneman, 1991). When Columbia’s bookstore offers a limited-time flash sale on plush lion keychains and shows “only 3 left in stock,” students are more inclined to purchase even if they don’t have school spirit. It’s not about the item; it’s about avoiding the regret of missing out. Coupons, free trials, and store-opening deals play on the same bias, nudging the customer into feeling there is more to lose by not taking the offer.
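Prospect theory makes both ideas precise. In Kahneman and Tversky’s (1979) model, outcomes are valued relative to a reference point, and the value function is steeper for losses than for gains. A standard form is shown below (the specific parameter values are common empirical estimates from the later prospect-theory literature, not figures from the sources cited here):

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha, \beta \approx 0.88, \quad \lambda \approx 2.25
$$

Here x is the gain or loss relative to the reference point, and λ > 1 captures loss aversion: with λ around 2, a loss of a given size looms roughly twice as large as an equal gain, which is why “only 3 left in stock” is such effective copy.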
Then there’s mental accounting, the way we irrationally bucket money (Prelec & Loewenstein, 1998). Impulse buying with a gift card feels justified. Paying with a credit card saved on your phone doesn’t feel like real spending. Digital payment systems are designed to exploit mental accounting, reducing the “pain of paying” until the purchase happens faster than you can fully think it through (Soman, 2001). When algorithms curate, repeat, and gradually narrow your choices this way, the result is called “hypernudging” (Yeung, 2017). Unlike traditional nudges, small design choices meant to steer people toward better decisions, like suggested tip percentages (Thaler & Sunstein, 2008; Weinmann et al., 2016), hypernudges adapt in real time to influence behavior moment by moment (Mills, 2022), as the sketch below illustrates. TikTok’s “For You” page changes as you scroll, and navigation apps adjust the route after you miss a highway exit. These systems aren’t designed for your well-being; they’re made to change your mind (Yeung, 2017).
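To see what “adapting in real time” means mechanically, here is a hypothetical sketch of a feed that re-ranks itself after every scroll event based on dwell time. All names and numbers are invented, and it compresses what platforms do at enormous scale into a few lines, but it shows how your own pauses narrow what you are shown next.

```python
from collections import defaultdict

# Hypothetical catalog: each item is tagged with a single topic.
FEED_ITEMS = {
    "video_a": "sneakers", "video_b": "cooking",
    "video_c": "sneakers", "video_d": "travel",
}

topic_scores = defaultdict(float)  # the system's running model of your taste

def record_dwell(item, seconds):
    """Each pause is a behavioral signal: longer dwell, stronger interest."""
    topic_scores[FEED_ITEMS[item]] += seconds

def next_feed():
    """Re-rank the remaining items around whatever you lingered on."""
    return sorted(FEED_ITEMS, key=lambda i: topic_scores[FEED_ITEMS[i]],
                  reverse=True)

# You linger three seconds longer on one sneaker video...
record_dwell("video_a", 3.0)
print(next_feed())  # ...and sneaker content now leads the feed.
```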
Ultimately, these algorithms reach far beyond shopping carts and media feeds. Under the illusion of freedom, consumers’ sense of choice is being quietly reprogrammed by systems we don’t fully understand. Pause the next time you make a purchase and ask where the idea really came from. Was it yours, or was it engineered? You’re not just the user; you’re the product, and the proof that the persuasion works.
References

Chen, L., Yang, Z., Zhang, M., Zhang, Y., & Ma, S. (2020). Reinforcement learning for user response modeling. ACM Transactions on Information Systems, 38(4), 1–38. https://doi.org/10.1145/3402388

Geetha, G., Safa, M., Fancy, C., & Saranya, D. (2018). A hybrid approach using collaborative filtering and content-based filtering for recommender system. Journal of Physics: Conference Series, 1000, 012101. https://doi.org/10.1088/1742-6596/1000/1/012101

Jannach, D., & Adomavicius, G. (2016). Recommendation systems: Challenges, insights and research opportunities. ACM Transactions on Management Information Systems, 6(4), 1–31. https://doi.org/10.1145/2835508

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

Mills, S. (2022). Finding the ‘nudge’ in hypernudge. Technology in Society, 71, Article 102117. https://doi.org/10.1016/j.techsoc.2022.102117

Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.

Prelec, D., & Loewenstein, G. (1998). The red and the black: Mental accounting of savings and debt. Marketing Science, 17(1), 4–28. https://doi.org/10.1287/mksc.17.1.4

Soman, D. (2001). Effects of payment mechanism on spending behavior: The role of rehearsal and immediacy of payments. Journal of Consumer Research, 27(4), 460–474. https://doi.org/10.1086/319621

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

Tversky, A., & Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. The Quarterly Journal of Economics, 106(4), 1039–1061.

Weinmann, M., Schneider, C., & vom Brocke, J. (2016). Digital nudging. Business & Information Systems Engineering, 58(6), 433–436. https://doi.org/10.1007/s12599-016-0453-1

Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713
