Is it possible to “like” without negativity?
We have previously hinted at the existence of certain subjects who operate in conditions very different from those enjoyed by the majority of online actors. Such persons enjoy what Goffman refers to as “incongruent roles”.
In his book The Filter Bubble, Eli Pariser provides a good description of what is currently happening in the virtual world. He argues that our computer monitors, smartphones, tablets and smart TVs are mirrors reflecting our interests and what is going on in our lives behind the scenes. In this regard, it is worth remembering that the web was initially welcomed as a means that would give all of us unlimited access to information, and would thereby help the process of democratization across the world. Soon, however, the Internet was to give rise to an entirely new situation:
The information flows within the online world soon became so huge that no human would ever be able to make sense of them all, or even go through them to identify what was useful. Overwhelming quantities of online information arrive to us like a river in full flood, moving so quickly and with such force that the vast majority must simply pass us by, unknown and unknowable.
Algorithms could help the intrepid “internaut” to deal with this inundation, sifting through new online information and providing personalized results based on the user’s duly archived browsing history.
The algorithms quickly evolved to the point where they began removing content outright if they predicted it would not receive positive feedback, on the assumption that content receiving no such feedback was uninteresting or irrelevant to the user.
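The filtering logic described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the crude topic-overlap scoring rule, and the threshold are invented for this example and bear no relation to any platform’s actual algorithm.

```python
# Hypothetical sketch of feedback-based feed filtering. The scoring rule
# (fraction of an item's topics the user has previously "liked") and the
# threshold are toy assumptions, not any real platform's method.

def predicted_score(item_topics, history):
    """Predict positive feedback as the share of an item's topics
    that appear among the user's archived 'likes'."""
    liked = {topic for topic, liked_it in history if liked_it}
    if not item_topics:
        return 0.0
    return len(item_topics & liked) / len(item_topics)

def filter_feed(items, history, threshold=0.5):
    """Keep only items predicted to receive positive feedback;
    everything below the threshold is silently removed."""
    return [title for title, topics in items
            if predicted_score(topics, history) >= threshold]

# A toy browsing history: (topic, did the user 'like' it?)
history = [("cats", True), ("politics", False), ("cooking", True)]

items = [("Cat video", {"cats"}),
         ("Debate recap", {"politics"}),
         ("Cat recipes", {"cats", "cooking"})]

print(filter_feed(items, history))  # → ['Cat video', 'Cat recipes']
```

Note what the sketch makes visible: the “Debate recap” item is not ranked lower, it simply never reaches the user, which is precisely the disappearance of unconfirmed content discussed in what follows.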
According to Byung-Chul Han, this filtering process leads to the disappearance of negative content and the normalization of a browsing experience built entirely on positive content: this, according to your browsing history, is what you like; and it’s all I’m going to show you.
Filtering content in this way ensures that the user is never confronted with anything entirely new. New things simply do not exist in the archive of material in which algorithms search for things the user will like. As with any other machine, an algorithm’s actions are limited to the replication of what has been done in the past. This process of content selection results in a kind of limited, safe, uncreative two-dimensional movement across a comfortable plane of the online world; it will never produce the upward or downward movement that would free the user from the constraints of that surface.
Such endless repetition will eventually bring a human being to the breaking point. Boredom, however, is a sentiment unknown to machines; indeed, the essence of a machine is the perfect, infinite repetition of a single action.
This “positive bubble” within the Web is growing ever larger and is particularly evident on social networks. The algorithms controlling the newsfeeds presented to us by these networks increasingly lead users to interact only with those with whom they share common interests. This process leads to the question of how online social interaction will develop in the future.
Will it be possible for us to maintain a level of interaction that merits being described as “human communication”, or will our “communication” be reduced to the use of repetitive routines as a means of resolving banal, everyday problems without the need for any mental effort?
This question makes sense if we reflect on the process by which we create the messages that we wish to transmit to others. When we do so, it is our intention not merely that those messages arrive at their destination, but also that they be understood upon their arrival. Such is the goal of any professional communicator.
The aim of all communicators is to bring about a change of some kind in those who receive their message. As such, communicators today find themselves in the embarrassing situation in which their message will only be transmitted if it corresponds to one of the potential receiver’s already archived positive experiences; that is to say, only if it is a message with which the receiving party is already well acquainted.
In this situation, how is it possible for me to send a new message or inform others about a new product that didn’t exist yesterday? How is it possible for me to transmit a political message to those who belong to a grouping to which that message has never before been conveyed?
The pages and groups present on social networks are labelled in such a way as to maximise homogeneity and uniformity of thought among participants, leading to a situation in which similarity, rather than difference, is always produced.
Facebook pages and groups
Facebook pages and groups are characterized by a dominant worldview. As such, only attempts at communication with those groups that confirm and consolidate their institutionalized way of thinking will have a chance of being successful. Any attempts at communication that go against their worldview will immediately be ejected as unwanted, foreign elements. This will be the case even if some of those who frequent the page or group see an interesting side to novel content. No-one, apart from an occasional hero, will choose to support an argument on a page in which the majority are opposed to it. To do so would be akin to volunteering to walk into the arena of a Colosseum full of lions.
Cost-benefit ratio: zero.
From this we can conclude that, heroes aside, those who have decided to invest in communication in order to promote a product on Facebook will find themselves dealing with a substantial limitation. To ensure that their product receives positive feedback, they will have to be careful to stick to well-worn, well-“like”d paths, lest their message be lost in the shit-storm that awaits alien content.
One potential solution to this dilemma would be to surf the Internet by selecting sites at random. Doing so would certainly introduce variability into the browsing experience, but it would be of no use to communicators: by its very nature, such a random process lacks the structure and rules that would make it monetizable.
Returning to the topic of how it is possible to direct a message in such a way that it reaches its desired receiver and brings about its desired effect, we can see how important it is to be well acquainted both with the means being used to transmit the message and with how that means will influence the perceptions of the receiver.
It is already the case that no two people will encounter the same information when surfing the Internet. The need to prevent users from drowning in a sea of data requires that algorithms select only a tiny subsection of content for them to view.
The problem of dealing with a quantity of data too vast for any human being to process is resolved by reducing our experience of the online world to a sphere of known “like”s. To introduce a new variety of content to a user, a communicator has to find an information channel to which that user is already well connected, and use it as a kind of Trojan Horse.
Like a virus, our novel message must take the form of a “like” in order to overcome the filter barriers and then, once inside, find a way of disseminating itself by stimulating the curiosity of the receiving party.
The action cannot be direct, and neither can it be transparent. Nor can it be too public, as a subject who forms part of a group will be unlikely to show curiosity about anything that opposes what unites the group’s members with one another.