February 9, 2026
Why AI Makes Believing the Truth Harder

TL;DR
- Social media has entered a third phase, in which AI-generated content joins posts from friends, family, and creators.
- Meta reportedly earned $16 billion in 2024 from ads for scams and banned goods, with scammers often using AI to appear legitimate.
- AI-generated images and videos are increasingly difficult to distinguish from authentic content.
- The rise of 'slop,' or low-quality AI-generated content, is a significant issue on platforms like YouTube.
- Experts fear AI-generated content will lead to widespread skepticism, making people doubt even true information.
- High-visibility deepfake incidents (e.g., Zelensky, Biden impersonation) may obscure the more pervasive, infrastructural harm of AI media.
- AI-generated content is being weaponized in geopolitical contexts, for example during unrest in Iran, and by Russian and Iranian authorities to dismiss verified videos as fake.
- Malicious actors are using AI to spread disinformation and scam victims, with reports rising significantly.
- AI agents could be used to manipulate beliefs and behaviors on a population-wide scale, threatening democracy.
- Advancements in AI video generation, like OpenAI's Sora, have dramatically increased the realism of synthetic media.
- Trustworthy journalism is expected to become more valuable as a way to discern what is true.