The concept of othering distortion refers to false, unscientific claims, especially about a specific group, that circulate so widely they come to be accepted as fact. You can see this in everyday life, such as when people treat their Facebook feed as a factual news source. I, too, get information from social media. The problem is not getting information from social media; the problem is taking false information as fact.
One thing social media does unintentionally is increase division among groups. This is not an effect the designers intended; it is a side effect of how current recommendation algorithms work. These algorithms are built to hold our attention, and one way they do so is by showing us upsetting content. Upset users keep looking, which generates engagement.
By showing upsetting content, these algorithms create division among groups. The content served to a user often portrays an opposing side in a negative light, which upsets the user and keeps their attention. What is harmful from a social standpoint is that it also paints the opposing side as unworthy of being understood. This deepens the division, because the two sides become unwilling to hear each other out.
An example of othering distortion would be someone becoming radicalized by watching YouTube videos. On YouTube, as you reach the end of a video, another comes up as a suggestion. These suggested videos tend to be chosen to up the ante from the video you just finished.
That being said, according to a study from the Annenberg School for Communication, user preference is a factor in the radicalization attributed to YouTube's algorithm. In the study, researchers had bots watch YouTube so they could analyze the suggested videos. They found that bots watching partisan content received increasingly radical suggestions, whether the content was far-right or far-left.
Interestingly, when the bots were made to switch from far-right partisan content to far-left partisan content, the suggestions adapted to the new preferences and became less radical. This means that while the algorithm may suggest videos that otherize and distort other groups, it is still up to the user to accept that way of thinking.
While YouTube's algorithm certainly does not help against othering distortion, it is still the user who chooses to accept the videos that promote it. In my own experience on YouTube, I stay away from partisan content for the most part; I mostly watch silly and video game related videos. The most radical thing I've seen lately is a diss track aimed at a video game character that most fans hate, which is admittedly a far cry from actually radical content.
The fact that human preference plays a part in radicalization means that, to avoid othering distortion, the user needs to make a conscious effort to ignore radicalizing content.
