I posted a short article on LinkedIn at the end of last week that divided opinion. I won’t link to it here, but it’s on my LinkedIn profile if you want to see it.
I say it divided opinion; the division was in no way equal. As I write this, the post has had c. 140,000 views, with over 1,300 “reactions” (Likes and so on, though in the LinkedIn model all positive) and over 200 comments. Most of the comments appear supportive of the article.
However, there are a few that misinterpret the article and use it to attack other subjects: related subjects, but not the thing the article was actually about. Those comments have caused further debate.
Underneath an article on LinkedIn, comments are sorted in one of two ways: reverse chronologically, or by an algorithm that determines “Relevance”. And here’s where we start to see how algorithms on social networks sow division. The algorithm appears to use primarily the number of subcomments, and how recent those subcomments are, to decide what it brings to the top of the list.
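LinkedIn doesn’t publish its ranking formula, but the behaviour described above can be sketched with a toy scoring function. Everything here is an assumption for illustration: the `Comment` class, the `relevance_score` function, the 24-hour half-life, and the exact weighting are all hypothetical, chosen only to show how “many recent replies” can dominate the sort.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    replies: list = field(default_factory=list)   # subcomments under this comment
    last_reply_at: float = 0.0                    # unix timestamp of newest reply

def relevance_score(c: Comment, now: float, half_life_hours: float = 24.0) -> float:
    """Toy 'Relevance' score: more replies raise the score,
    and the score decays as the latest reply gets older."""
    hours_since_reply = (now - c.last_reply_at) / 3600
    recency = 0.5 ** (hours_since_reply / half_life_hours)
    return len(c.replies) * recency

now = time.time()
comments = [
    Comment("supportive", replies=["r1"], last_reply_at=now - 72 * 3600),
    Comment("antagonistic", replies=["r1", "r2", "r3", "r4"],
            last_reply_at=now - 1 * 3600),
]
# Sort the way a 'Relevance' default view might: highest score first.
comments.sort(key=lambda c: relevance_score(c, now), reverse=True)
# The antagonistic comment, with many recent replies, sorts to the top.
```

Note the feedback loop this creates: whatever sits at the top gets seen most, so it gathers the most new replies, which in turn keeps its score high.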
So one comment in particular, which is extremely antagonistic to the subject matter of the original (and misinterprets it significantly), has ended up sitting at the top of the “Relevance” list. There it becomes a magnet for the people who share its view (a significant minority, by the look of it), attracting more comments, which in turn keep it at the top.
This kind of polarising effect is really pernicious. As the author of the piece I could simply delete all the comments I disagree with, but I don’t believe that’s a healthy approach. Yet the default view amplifies something that stokes misinterpretation of my original writing.
These kinds of effects are all around us. If you want to explore them more, Charles Arthur’s book Social Warming is a great place to start.