Social media won't get any better even without algorithms
Echo chambers and outrage: no adjustments can make social media less toxic, according to a new study.
(Image: AlpakaVideo/Shutterstock.com)
Social media fosters outrage, enables radicalization, and generates filter bubbles. Spending too much time on social networks not only tends to put people in a bad mood; it can be downright toxic. A study from Amsterdam now suggests that very little can be done to change this.
Algorithms are often blamed, but the study indicates that they are not the root cause. Researchers at the University of Amsterdam populated a simulated social network with AI agents representing people with different opinions and behaviors. Even without ranking algorithms, the familiar dynamics emerged: echo chambers, and attention for whatever provokes the most outrage. The same held across various platform designs; even in a minimal network offering nothing but posts, reposts, and follows, toxic patterns developed.
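To make that setup concrete, here is a toy Python sketch of the kind of minimal network the study describes: agents with fixed opinions, no ranking algorithm, and a simple follow rule. All names, numbers, and the behavioral rule are illustrative assumptions of ours, not taken from the paper, whose agents are LLM-driven.

```python
import random

random.seed(42)

class Agent:
    """Toy stand-in for the study's LLM agents: one fixed opinion in [-1, 1]."""
    def __init__(self, uid):
        self.uid = uid
        self.opinion = random.uniform(-1, 1)
        self.follows = set()

agents = [Agent(i) for i in range(100)]

for _ in range(20000):
    author, reader = random.sample(agents, 2)
    gap = abs(reader.opinion - author.opinion)   # 0 = identical views, 2 = opposite
    agreement = 1 - gap / 2
    outrage = abs(author.opinion)                # charged posts read as engaging
    # Toy behavioral rule: follow like-minded authors, but emotionally
    # charged content attracts engagement regardless of agreement.
    if random.random() < 0.4 * agreement + 0.2 * outrage:
        reader.follows.add(author.uid)

gaps = [abs(a.opinion - agents[f].opinion) for a in agents for f in a.follows]
print(f"mean opinion gap to followed accounts: {sum(gaps) / len(gaps):.2f}")
print("mean gap between two random users:     0.67 (uniform baseline)")
```

Even this crude rule pulls the average opinion gap among followed accounts below the 0.67 random baseline: a mild echo-chamber signal, with no ranking algorithm involved.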
Every improvement brought a deterioration elsewhere
The researchers tested six measures intended to improve interaction: chronological or randomized feeds; reversing the ranking algorithm to reduce the visibility of sensational content; increasing diversity of opinion by surfacing opposing political views; bridging algorithms that promote content fostering mutual understanding; hiding like and follower counts; and hiding user biographies.
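For a feel of what some of these ranking variants mean in practice, here is a minimal sketch of three of them. The post fields and scoring rules are invented for illustration and do not reproduce the study's implementations.

```python
# Illustrative feed-ranking variants; fields and scores are made up.
posts = [
    {"id": 1, "ts": 100, "likes": 900, "bridge_score": 0.1},
    {"id": 2, "ts": 200, "likes":  40, "bridge_score": 0.8},
    {"id": 3, "ts": 300, "likes": 300, "bridge_score": 0.4},
]

def chronological(feed):
    # Newest first, ignoring engagement entirely.
    return sorted(feed, key=lambda p: p["ts"], reverse=True)

def engagement_reversed(feed):
    # "Reversed" ranking: demote whatever attracts the most engagement.
    return sorted(feed, key=lambda p: p["likes"])

def bridging(feed):
    # Bridging ranking: promote posts that resonate across opposing camps.
    return sorted(feed, key=lambda p: p["bridge_score"], reverse=True)

for rank in (chronological, engagement_reversed, bridging):
    print(f"{rank.__name__:20s}", [p["id"] for p in rank(posts)])
```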
The result was sobering: there were hardly any improvements, and gains in one area usually came with losses in another. Chronological feeds, for example, reduced the inequality of attention but increased the spread of extreme content. Other measures had no effect at all.
Ars Technica spoke to the study's authors about the reasons. In their view, the structure of the networks themselves is the problem, and to some extent so are people. Attention breeds attention. Rather than a filter bubble, one author prefers to speak of a trigger bubble, because certain posts trigger people into interacting with them. Moreover, about one percent of users dominate the networks, while the other 99 percent are barely heard. Emotional content provokes reactions even when no algorithm highlights it, and those posts then get shared.
One of the authors points out that AI agents do not perfectly replicate human behavior. The agents drew their personalities from the American National Election Studies, a survey of voters conducted regularly in the USA.
For example, one generated persona was Bob, who likes fishing and comes from Massachusetts. Bob and the other personas were then set to read the news and decide whether and when to post, like, or repost.
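Roughly, such a persona-driven decision loop could look like the hypothetical sketch below. The persona fields, prompt wording, and the query_llm() hook are our own assumptions; the study's actual prompts are not published in this article.

```python
# Hypothetical persona-to-decision loop; nothing here is from the paper.

def build_prompt(persona: dict, headline: str) -> str:
    """Turn a survey-derived persona plus a news item into a decision prompt."""
    return (
        f"You are {persona['name']} from {persona['state']}; "
        f"hobbies: {persona['hobbies']}; political lean: {persona['lean']}.\n"
        f'You just read this headline: "{headline}"\n'
        "Answer with exactly one word: POST, LIKE, REPOST, or IGNORE."
    )

bob = {"name": "Bob", "state": "Massachusetts",
       "hobbies": "fishing", "lean": "moderate"}

print(build_prompt(bob, "New fishing regulations announced in New England."))
# action = query_llm(build_prompt(bob, headline))  # plug in any LLM client
```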
(emw)