Freedom of Choice is not Freedom of Desire: The Problem with Media Recommender Systems Today
- William Seaward
- Nov 29, 2022
- 4 min read
Updated: Dec 3, 2022

It’s well-established that having free will depends not just on freedom of choice, but also on the freedom of one’s desires. If our desires are not aligned with our choices, or if some external force is constraining our desires, then simply having the freedom to act may not be sufficient for our freedom. This insight about how our desires relate to our freedom is relevant today for the lives we inhabit inside media recommendation systems, because these systems ignore our deepest desires.
Our digital lives are immersed in recommender systems. Most of them work by modeling user choices and using the model to predict how content will cause us to act. On YouTube, for instance, the recommendations are selected based on what the model predicts will keep us on the platform the longest.
To build these models of users, existing platforms rely almost entirely on the implicit actions we take, like clicks and watch time, rather than explicit inputs, like a user-reported preference or feeling. This is a problem because our internal subjective emotions matter. And if these systems are not considering our internal emotional experience when modeling and promoting our behaviors, then our emotions and actions may actually start to diverge. For example, we might find that despite spending considerable time on a platform, we also quite dislike it.
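The divergence between implicit and explicit signals can be sketched in a few lines. This is a toy illustration, not any real platform’s code; the item fields (`watch_seconds`, `user_rating`) are hypothetical stand-ins for an implicit engagement signal and an explicit satisfaction report.

```python
# Hypothetical catalog: field names are illustrative, not a real API.
items = [
    {"id": "a", "watch_seconds": 900, "user_rating": 1},  # binge-watched, but disliked
    {"id": "b", "watch_seconds": 120, "user_rating": 5},  # brief, but loved
]

def rank_by_engagement(items):
    # Implicit-only ranking: watch time stands in for preference.
    return sorted(items, key=lambda x: -x["watch_seconds"])

def rank_by_satisfaction(items):
    # Explicit ranking: what the user actually reports feeling.
    return sorted(items, key=lambda x: -x["user_rating"])
```

The two signals can disagree: an engagement-only model promotes item "a" even though the user's own report says "b" is what they value.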
This disconnection between our behaviors and desires is possible because behavior doesn't mirror emotion in a perfect one-to-one way. Desires are more like an ocean current: the net result of competing and conflicting desires in a constant state of experiment and negotiation. We actually have desires about desires, which reflect our changing personal identities and goals. We act, we get feedback from the world, we weigh our competing desires, make adjustments, and act again. And it’s in the continuous unfolding of this process that our freedom of choice emerges.
But current recommendation systems don't consider our internal mental states in modeling our experience, so as we engage with these platforms we’re training the models on a limited version of ourselves. These models are then used to narrow the range of our future recommendations. This smaller range of future choices increases the probability that we’ll click on more of what we clicked earlier, simply because there's less variety. And when we do click on more of the same, it in turn strengthens the previous model, our future recommendations become even narrower, and so on.
This is a feedback loop in which our initial choices become unnaturally magnified focal points of interest over time. So the probability that a fleeting interest will become an entrenched habit can increase in a very unnatural way. This kind of mental feedback is new. We don’t get it in the natural world, and we have not experienced it in any previous media technology.
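The feedback loop described above can be made concrete with a small simulation. This is a deliberately simplified sketch under stated assumptions, not any platform’s actual algorithm: the model only shows its top-scored topics, the user clicks within that narrowed set, and each click reinforces the score that produced it.

```python
def simulate(rounds=20, topics=10, top_k=3, boost=1.5):
    """Toy feedback loop: a tiny initial edge becomes an entrenched habit."""
    # Model scores start nearly uniform; topic 0 has a tiny edge
    # (a "fleeting interest").
    scores = [1.01] + [1.0] * (topics - 1)
    for _ in range(rounds):
        # The platform only shows the top_k topics by current score.
        shown = sorted(range(topics), key=lambda t: -scores[t])[:top_k]
        # Simplification: the user clicks the highest-scored item shown.
        clicked = shown[0]
        # The click feeds back into the model, boosting that topic.
        scores[clicked] *= boost
    # Return topic 0's share of total model weight.
    return scores[0] / sum(scores)
```

Run the loop and topic 0's share of the model climbs from roughly 10% (one topic in ten) to nearly all of it, purely because a 1% starting edge was amplified round after round.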
Without the ability to report back to the system about our explicit mental experience, our emotions, personal outcomes, and decision landscape become dissociated from our own internal reflection. So our deeper desires are less actualized and unnaturally constrained. As noted, an unnatural constraint on our desires is a constraint on our freedom of choice.
Failing to appreciate this relationship between deep desires and content recommendation systems is part of why social media, while fun at times, makes some people miserable and many of us dissatisfied. How can we feel right with the world when our goals and identities are a product of this outside amplification?
There are many contexts where recommender systems are less concerning, such as in the curation of specific media, like music or movie recommendations. But our online environments are becoming more than just entertainment delivery systems and are increasingly proxies for the real world. As more of our analog life moves online, these concerns get more significant. And now with the onset of the “metaverse” the implications for these recommendation feedback loops are more acute than ever.
So what’s the solution? At LiveVybe our aim is to create a digital social experience that better reflects the real you, where you can mold your content and experience based on your deeper desires and changing preferences. First, our recommendation system is designed to better account for user outcomes and satisfaction. Second, we create a new layer of content tagging and connecting that captures the associated moods, goals, and other more meaningful attributes of people and content. And third, we let users filter their overall digital experience based on the most meaningful parts of life: the moods and goals they want to achieve.
As governments around the world pursue algorithmic legislation, changes to the way platforms push recommendations are inevitable. But we need to be making improvements in our digital spaces now, based on a deeper and more nuanced understanding of human experience. This is our goal at LiveVybe.
Algorithms may define our digital experience, but our deepest desires ought to be what ultimately defines the algorithms, and in doing so we’ll build an online world that is more fulfilling and still enormously fun.
Bill Seaward