Revealing The Secret Power: How Algorithms Can Influence Content Visibility on Twitter/X

Alessandro Galeazzi

Network and Distributed System Security (NDSS) Symposium 2026 · Day 3 · Web Security

Social media algorithms operate as opaque gatekeepers, deciding what content appears in users' timelines without transparent disclosure of their ranking criteria. This talk presents an empirical investigation into **shadow banning** on **Twitter/X** -- the practice of reducing the visibility of specific content or users without explicit notification. Using two large-scale datasets (17 million tweets on the **Ukraine-Russia war** and 35 million tweets on the **2024 US presidential election**), the researchers develop a novel metric called the **P-score** that normalizes content visibility by author popularity to enable fair comparisons. The findings reveal that Twitter/X systematically penalizes **all content containing URLs** regardless of the link's destination, reliability, or political leaning -- even links pointing back to Twitter itself. At the user level, no systematic bias was found across political ideologies, but specific high-profile accounts showed significant visibility differences, with **Donald Trump** receiving notably higher visibility than **Kamala Harris** despite lower audience engagement.
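The talk does not spell out the P-score formula here; as an illustration only, a popularity-normalized visibility metric in the spirit described (view count scaled by a proxy for author popularity, such as follower count) might look like the following sketch. The ratio form and the choice of followers as the popularity proxy are assumptions, not the authors' definition.

```python
# Illustrative sketch of a popularity-normalized visibility score.
# ASSUMPTION: the talk's actual P-score definition is not reproduced here;
# this uses a simple views-per-follower ratio as a stand-in.

def p_score(views: int, followers: int) -> float:
    """Visibility normalized by author popularity (hypothetical ratio form)."""
    if followers <= 0:
        raise ValueError("followers must be positive")
    return views / followers

# Under this normalization, a tweet with 5,000 views from a
# 10,000-follower account scores the same as a tweet with 500 views
# from a 1,000-follower account.
print(p_score(5_000, 10_000))  # 0.5
print(p_score(500, 1_000))     # 0.5
```

A normalization like this lets small and large accounts be compared on one scale, which is what enables the content-level comparisons (e.g., tweets with vs. without URLs) described above.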

AI review

An empirical study of algorithmic visibility manipulation on Twitter/X using view count data. Confirms that URLs are universally penalized and finds differential visibility for specific accounts. Methodologically sound but far from offensive security -- this is social science with statistical tools, not a security talk.
