
White Papers

This section is about detail. While a lot of effort has gone into making LiveVybe intuitive and easy to use, it is also the product of a great deal of passion, reflection, and research. For anyone curious about the details of that work, this is the place to find them. Fair warning: this is where we go deep.

We love conversation and debate, so if you'd like to challenge an idea or share an opinion, feel free to email us or join our forum.

The Effects of Automated Recommendation Algorithms on the Self
How the current paradigm of social media algorithms constrains our free will, and how to fix it.

Bill Seaward

LiveVybe, 2022

Intro: The-One-Thing-Cafe

Imagine you sit down at a cafe and order a cocktail. Many of us enjoy cocktails, maybe even a bit too much. But imagine that you decide you'd like to have something different. Maybe you remember you have to be up in the morning, or maybe you've just had enough and decide a meal is in order. Sure, more cocktails might make you feel even better, but your higher thoughts turn to what you should do, not what you want more of in the short term. However, when you consult the menu, the only items listed are more cocktails, and when you try to tell the host that you'd like the check, she finds your statements unintelligible, unless you order another cocktail. So you do. Cocktails are on offer at the one-thing-cafe because that's what they know you like. And it keeps you there.

This thought experiment describes your experience on most social media platforms, whether you know it or not. The use of software to model who you are and predict what to show you in order to keep you on the platform is like limiting the menu at the cafe to the one thing they know you like. And the inability to communicate your wishes to the host is analogous to being unable to tell social platforms how content mentally affects you. The content menu served on existing platforms may offer more than just one item you like, but those items, no matter how well matched to your momentary interests, do not accurately reflect who you really are. Rather, they represent what the AI guesses is going to keep you on the platform based on its model of you, despite whatever other desires you may have. And when you are fed choices designed to keep you there, you can easily start to think that the menu you've been served is all there is, and that you should have more of the same.

Twitter, Facebook, YouTube, and most other existing social platforms have decided that if you like cocktails, that's all you're going to be served, whatever other plans or desires you may have. And it's a bigger problem for you than you may think.

Recommendation Algorithms in Social Media 

With the exception of direct messaging services, social media systems typically use algorithms (automated decision procedures implemented in software) to manage, in part or in full, how content is selected and recommended to users.

Outside of the social media setting, other digital content publishers, as well as stand-alone content aggregators such as news and video aggregators, also use algorithms to curate and recommend content to users.

Users typically view content recommendations in the form of a feed: a list or dynamically scrolling menu containing the individual items of content selected for and delivered to the user by the recommendation algorithm. Recommendations can appear elsewhere on a platform as well, where they depend on the same underlying recommendation system.

Social media platforms implement a variety of approaches to determining which items of content are selected for presentation to the user from a larger corpus of content. These approaches may involve a combination of user-defined preferences and algorithmic functions that make these decisions automatically. This selection, or filtering, of the larger corpus may either constrain the search results presented to the user after a specific search or dictate the automated recommendations served while the user simply browses the platform.

The recommendation decisions made by an algorithm may be informed by any number of programmed rules and/or inputs, including user-defined preferences, patterns of historical content consumption, or other data associated with the user's activity, either inside the same social media platform or in other digital media environments. Such inputs are typically combined with additional selection protocols premised on the business objectives of the platform's management, such as optimizing ad revenue. [1]
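
To make this blend of inputs concrete, the sketch below shows a scoring function of the general shape described: user-derived signals mixed with a business-objective term. Every signal name and weight here is a hypothetical assumption for illustration, not any platform's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One item of content under consideration for a user's feed."""
    topic_match: float         # fit with user-defined preferences (explicit input)
    history_similarity: float  # similarity to past consumption (implicit input)
    predicted_ad_value: float  # expected ad revenue if shown (business objective)

def score(item: Candidate, w_explicit=0.2, w_implicit=0.5, w_business=0.3) -> float:
    """Blend user-derived signals with the platform's business objective."""
    return (w_explicit * item.topic_match
            + w_implicit * item.history_similarity
            + w_business * item.predicted_ad_value)

# Rank a candidate pool; the highest-scoring items become the feed.
pool = [Candidate(0.9, 0.2, 0.1), Candidate(0.3, 0.8, 0.7)]
feed = sorted(pool, key=score, reverse=True)
```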


Machine-learning algorithms are a particular sort of automated curation tool that builds a predictive model of user behavior, capable of automatically inferring how the user will respond to different content or other features of a social media environment. These models are generally built using two types of user-generated data inputs: explicit user inputs, like user-defined preferences, searches, and likes; and implicit user inputs, derived from simply observing and logging user content consumption. [2]
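
The distinction between the two input types can be pictured as data. This is a hypothetical sketch of the kinds of records such a model might consume; the field names are illustrative assumptions, not any platform's real schema.

```python
# Explicit inputs: deliberately reported by the user.
explicit_inputs = {
    "subscriptions": ["jazz", "woodworking"],
    "search_queries": ["beginner chisel technique"],
    "likes": ["video_123"],
}

# Implicit inputs: logged by observing behavior, never reported by the user.
implicit_inputs = {
    "watch_seconds": {"video_123": 480, "video_456": 37},
    "late_night_sessions": 12,
    "autoplay_continuations": 9,
}

def feature_vector(explicit: dict, implicit: dict) -> list[float]:
    """Flatten both signal types into model features. In implicit-heavy
    systems the explicit features may be down-weighted or dropped, which
    is the pattern the surrounding text describes."""
    return [
        float(len(explicit["likes"])),
        float(sum(implicit["watch_seconds"].values())),
        float(implicit["autoplay_continuations"]),
    ]
```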


Importantly, while user-defined preferences, user search histories, and other explicit user inputs can offer users direct control over how their content is selected, implicit inputs do not. And this choice by platforms to favor implicit inputs over users' explicit, self-reported preferences makes a profound difference to user experiences and to the impact these social platforms have on our lives.

Public concerns have recently been expressed regarding the ways in which these recommendation algorithms work, the content selections they make, and their overall effect on users. [3]

Implicit Inputs in Algorithmic Content Curation Ignore the User’s Mental State 

Approaches to content curation across major social media systems have become increasingly focused on implicit inputs. Explicit user preferences or inputs, where they are offered at all, are often subsumed or entirely obviated by implicit inputs. [2]


As implicit inputs derive entirely from the external behavior of users, content recommendation algorithms that depend significantly on such inputs can be understood as making content selection choices that are independent of the psychological state of the user. In content curation frameworks of this sort, there is a greatly diminished capacity for the user to report their subjective conscious experiences or desires to the algorithm. As such, the user's conscious experiences may play no role in determining what content is recommended to them. For instance, users generally cannot report to existing media platforms how an item of content makes them feel, nor do users have complete control over whether the content to which they are exposed reflects the full range of their interests or preferences. This is evidenced by YouTube diminishing the importance of user subscriptions in its recommendation framework. [2] Other explicit input mechanisms, like indicating to the platform that you want less of something or more of something else, are not always offered either, nor are explicit inputs that distinguish more specific attributes of content. For example, while many platforms offer topical or subject searches, they do not offer searches that distinguish whether an item is a debate or a presentation, journalism or satire.

The algorithmic recommendation frameworks used across many social media systems are generally designed to promote the business interests of the platform. As many social media systems derive a large part of their revenue from advertising, this usually means maximizing the amount of time users spend on the system as a means of maximizing ad impressions and, therefore, ad revenue. [3] Fundamentally, all other user effects and outcomes are irrelevant to the current platforms' objective.

A recommendation model derived from implicit inputs alone considers only external behavior, and it is tuned to a single behavioral outcome: maximizing user engagement. Why is this a serious problem? Because a person's external behaviors do not always reflect their mental state or subjective experience.

Because curation algorithms optimize a user's engagement time rather than their enjoyment or satisfaction, it is possible to produce outcomes in which users simultaneously spend significant time on a platform while deriving considerable displeasure from it. The internal conscious experience driving the engagement may be either positive or negative, yet this is irrelevant to the algorithm. All that matters is the external behavior of optimized engagement.
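
The problem is easiest to see when the objective is written down. The pair of functions below is a deliberately simplified, hypothetical illustration; the names and the satisfaction term are assumptions, not drawn from any deployed system.

```python
def engagement_objective(predicted_watch_time: float) -> float:
    """Engagement-only scoring: the model is rewarded purely for
    predicted time-on-platform. No satisfaction term appears."""
    return predicted_watch_time

def satisfaction_aware_objective(predicted_watch_time: float,
                                 reported_satisfaction: float,
                                 alpha: float = 0.5) -> float:
    """A contrasting sketch: engagement discounted or boosted by how
    the user says the content made them feel (value in [-1, 1])."""
    return predicted_watch_time * (1 + alpha * reported_satisfaction)

# Under the first objective, ten minutes spent in misery scores exactly
# as high as ten minutes spent in delight.
assert engagement_objective(10.0) == 10.0
assert satisfaction_aware_objective(10.0, -1.0) == 5.0
assert satisfaction_aware_objective(10.0, +1.0) == 15.0
```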


This coercion of users, and the perverse dissociation between engagement rate and conscious experience, is exemplified in the Facebook leaks regarding the company's own assessment of its negative effect on the mental health of teenage girls. [1]

First Order and Second Order Desires and the Alienation of the User's Self-Determination

The deleterious effects of the current mainstream recommendation paradigm go beyond concerns about engagement rates and user satisfaction. The predictive modeling of users, combined with a focus on behavioral outcomes rather than the subjective mental state of the user, effectively alienates the user's internal volitional self from their digital social experience. Add to this the lack of transparency provided to users regarding the rules and/or premises on which algorithmic recommendation decisions are based, and the overall result is that users have greatly diminished autonomy in determining their social and content experience, or how they wish to use these experiences in the context of their broader desires and goals.

Human beings are complex psychological entities in a sustained cycle of acting, collecting feedback from acting, and editing their actions and attitudes based on their interests, the outcomes they wish to promote in the future, and the attitudes, mental states, and psychological attributes they wish to attain.

It is well established that humans are capable of experiencing first order and second order desires. Second order desires can be thought of as desires about desires. [4] As an example, an individual who has a first order desire to smoke cigarettes can also possess a second order desire to quit smoking. Or, put another way, they may have a desire to no longer have the desire to smoke. The struggle between first order and second order desires in determining a person's behavior forms a unique dynamic in each individual that is central to defining our personality and enabling our free will. It is important to note that while content curation algorithms may be able to predict, to some degree, our first order desires, it is highly unlikely that algorithms are presently capable of predicting second order desires. This deficiency can be understood as a subjectivity latency, or desire latency, in these sorts of human-modeling programs.

Understanding the desire latency of algorithmic content recommendation based on implicit inputs is critical to grasping how toxic these algorithms are to users in a digital social environment.

Maintaining the ability to adequately identify and act on both our first and second order desires is central to preserving the free will and self-determination of any individual. It is fundamental to our freedom to edit ourselves and become the people we want to be. Therefore, if content curation algorithms predict and decide for us what we want to see, or even what we will want to see in the future, without our self-reported inputs, they hinder or entirely obviate our ability to edit ourselves according to our deepest desires or our concern for our future selves. As such, social media that relies on implicit-input-focused recommendations can be thought of as a volitional toxin (a constraint on our free will).

As a rebuttal to this analysis, it might be suggested that users engage because they find engagement pleasurable, and that if they are engaging more, it is because their pleasure is also increasing. On this view, by optimizing for engagement time, the algorithm is merely acting as a proxy for what is subjectively pleasurable to the user. One might conclude that we wouldn't engage more if we didn't find more engagement pleasurable, and therefore that it is unlikely a user's pleasure would increase while their desires are being denied.

But as with the addict chained to their neurochemical response to a substance, the dopamine response induced by a wide variety of social media engagement does not entirely define what it means for that user, or addict, to have their desires satisfied, even if the experience is pleasurable in the moment. And if we are not able to express our deepest desires and goals through our social media content choices, then this possible disjunction between what is pleasurable and what we desire becomes crucial. The social media environment today demonstrates that we can at once be in a state of pleasure and in a state of agony. To embellish the point, the mind can be in a state of pleasure while the soul is in agony, a turn of phrase as characteristic of the typical social media user under the current paradigm as it is of the substance addict.

This brings us back to the one-thing-cafe, the thought experiment that illustrates how artificially limiting choices based on a narrow model of a person's mental state, then feeding those limitations back to the person in the form of their previous choices, effectively magnifies first order desires over second order desires. The user is coerced in that they may increasingly think and act other than they would have without the artificially constrained menu and the feedback of previous choices.

More cocktails are going to have the same neurological effect as they did earlier, but second order desires and other concerns alter one's perception of satisfaction, and create a competing set of states between what is immediately pleasurable and what is ultimately desirable and satisfying given a wider set of concerns. For all of us, what we find pleasurable in the present is determined in part by what we want to achieve in the future, and by our ability to act in ways that promote the full range of those ends. The customer's satisfaction at the one-thing-cafe is hindered by the denial of the freedom to pursue the full scope of their desires. While we do have the freedom to act in this scenario, we can do so only through the very narrow choices offered by the menu and the host, and we can be happy in such a scenario only if we narrow our desires accordingly. When one's choices about one's inner desires and future self are constrained, immediate pleasure may still be possible, but happiness or satisfaction becomes commensurately narrowed along with the narrowing choices. Satisfaction may become possible only insofar as we surrender the full range of our desires to the choices offered.

The coercive effects that the one-thing-cafe has on its customers are analogous to the effects that the current algorithmic curation of content has on the social media user. The platform optimizes for engagement time, models the user on that basis, curates what the user can experience according to what they will engage with the longest, and denies the user the ability to explicitly report to the system what they actually want. The user's choices are thereby constrained: the user is served, through their feed, only what will optimize engagement, because what does or does not optimize engagement is the only signal the system is capable of receiving from the user. A quite unnatural environment is thus created in which the user is free to act, but only as the algorithm sees fit, and the user's happiness, satisfaction, and pleasure are obtainable only insofar as the user narrows the full scope of their volition to the constrained decision landscape of their content feed.

The focus on implicit inputs in recommendation systems not only denies the user self-determination over their social media experience but may deny the user's happiness, satisfaction, and ultimately their future. It may also create dangerous feedback loops of the user's most impulsive and unexamined self, in which vice gains strength over virtue and our base self is unnaturally enabled over our higher self.

No Freedom of Self-Determination and No Free Market

An argument in favor of engagement-optimized recommendation is that it manifests what users want, and that giving users what they want is an expression of the free market. However, the feedback loop created by engagement-optimized choices constrains the decision matrix of any one individual such that they become an optimized engager in a manner that is agnostic to their internal subjective state. A market in which the user cannot supply signals about their subjective experience back into the market is not a free market. Rather, it is a highly constrained, command-style market in which the feedback loop created by modeling the user quietly coerces the user into thinking and behaving other than they would in the absence of the feedback loop. As illustrated, the current recommendation paradigm constrains free will, and there cannot be a free market if one's will is constrained.

A free market for algorithmic recommendation would consist of curation systems that optimize multiple vectors associated with user outcomes in addition to engagement: for instance, optimizing for a broader range of engagement styles in conjunction with the self-reported subjective experiences of users. As long as platforms optimize for engagement time and intensity alone, the user's satisfaction may be impeded, life goals may be undermined, knowledge may be compromised, values may be perverted, and the self-determination of the user's future is made uncertain.
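
One way to picture such a multi-vector objective is sketched below. This is a hypothetical illustration of the idea, not a description of any deployed system; every weight and signal name is an assumption.

```python
def multi_objective_score(predicted_engagement: float,   # implicit signal, [0, 1]
                          reported_satisfaction: float,  # explicit report, [-1, 1]
                          goal_alignment: float,         # match to stated goals, [0, 1]
                          weights=(0.4, 0.35, 0.25)) -> float:
    """Engagement is one signal among several; explicit user reports can
    demote items that engage the user but leave them feeling worse."""
    w_eng, w_sat, w_goal = weights
    return (w_eng * predicted_engagement
            + w_sat * reported_satisfaction
            + w_goal * goal_alignment)

# A compulsively watched item the user reports feeling bad about now
# ranks below a moderately engaging item aligned with their goals.
compulsive = multi_objective_score(0.9, -0.8, 0.1)  # 0.105
aligned = multi_objective_score(0.5, 0.6, 0.9)      # 0.635
assert aligned > compulsive
```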



LiveVybe Restores the Free Will of Social Media Users

LiveVybe was designed to fix algorithmic recommendation by giving users the ability to self-author their identity, moods, desires, goals, and futures, with the assistance of machine learning in selecting and displaying content based on these preferences. Unlike other social media platforms, LiveVybe does not rely on an opaque model of the user's personality, built without the user's input, to determine what the user experiences. The automated recommendations generated by LiveVybe are in the explicit service of the user's reported preferences and desires.

LiveVybe restores self-determination and the accurate fulfillment of second order desires in a social media environment by allowing users to curate their social media content according to their self-reported subjective experience and desires. Users can also provide feedback regarding content outcomes and effects. The LiveVybe approach gives the user greater autonomy and control over how social media experiences affect their life, mind, and future self.
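
To make the contrast concrete, here is a minimal sketch of what preference-led curation of this kind could look like in principle. It is purely illustrative and is not LiveVybe's actual code; every name and structure is a hypothetical assumption.

```python
# Hypothetical example: content filtered by a user's self-authored profile
# and re-ranked by their own reported outcomes, not by engagement alone.

user_profile = {
    "stated_goals": {"learn guitar", "sleep earlier"},
    "wants_less_of": {"outrage"},
}

def curate(items: list[dict], profile: dict, reported_outcomes: dict) -> list[dict]:
    """Drop what the user asked to see less of, keep what matches their
    stated goals, and rank by their own 'this was good for me' scores."""
    kept = [item for item in items
            if item["topic"] not in profile["wants_less_of"]
            and item["topic"] in profile["stated_goals"]]
    return sorted(kept, key=lambda item: reported_outcomes.get(item["id"], 0),
                  reverse=True)

items = [
    {"id": "a", "topic": "learn guitar"},
    {"id": "b", "topic": "outrage"},
    {"id": "c", "topic": "learn guitar"},
]
reported = {"a": 1, "c": 2}  # the user's feedback on past content outcomes
assert [i["id"] for i in curate(items, user_profile, reported)] == ["c", "a"]
```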

References

1. Hao, Karen. "The Facebook whistleblower says its algorithms are dangerous. Here's why." MIT Technology Review, Oct 5, 2021. https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/. See also "The Facebook Files," The Wall Street Journal, Oct 1, 2021.

2. Covington, Paul, Jay Adams, and Emre Sargin. "Deep Neural Networks for YouTube Recommendations." Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16), 2016. Google, Mountain View, CA.

3. "Facebook 'is tearing our societies apart,' whistleblower says in interview." Oct 4, 2021. ("Facebook, over and over again, chose to optimize for its own interests.")

4. Frankfurt, Harry G. "Freedom of the Will and the Concept of a Person." The Journal of Philosophy, Vol. 68, No. 1 (Jan. 14, 1971), pp. 5-20.

