Something has gone wrong with how we consume information, and most of us know it but cannot quite articulate why.
We spend hours every day scrolling through content about current events. We feel informed. We have opinions. We share things. We argue in comment sections. We feel the righteous clarity of knowing who is right and who is wrong.
And yet somehow, despite consuming more news-adjacent content than any generation in human history, we understand less about what is actually happening in the world than we think we do.
This is not an accident. It is the predictable outcome of a system that was never designed to inform you. It was designed to keep you watching.
The Birth of the Clip Economy
Over the past decade, something significant has happened to how media content is produced and consumed.
A generation of media figures discovered that they could build larger and more loyal audiences on social media than they ever could on traditional television. Some were former television anchors, some longtime commentators, some entirely new voices. Tucker Carlson, after leaving Fox News, built a social media following of tens of millions. Glenn Greenwald left The Intercept and built a subscriber base on Substack and social media that rivals major publications. Megyn Kelly left NBC and built a podcast audience that dwarfs what she drew on network television.
The format that drives this ecosystem is not the long-form interview or the carefully researched documentary. It is the clip.
A three-hour podcast produces dozens of potential clips. An hour-long interview produces dozens more. The people who run these accounts, or the platforms themselves through their own recommendation algorithms, select the clips most likely to generate engagement. And the clips most likely to generate engagement are not the ones where a guest carefully qualifies a complex argument. They are the ones where someone says something shocking, something outrageous, something that makes you feel the sharp clarity of tribal certainty.
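The selection step described above is, at bottom, a ranking problem: score every candidate clip by predicted engagement and keep only the top few. A minimal sketch of that dynamic, with invented clip titles and engagement scores standing in for whatever signals a real platform or account operator would use:

```python
# Hypothetical sketch of engagement-driven clip selection.
# Clip titles and scores are invented for illustration only.

def select_clips(clips, top_k=3):
    """Return the top_k clips ranked by predicted engagement."""
    return sorted(clips, key=lambda c: c["predicted_engagement"], reverse=True)[:top_k]

candidates = [
    {"title": "Guest carefully qualifies a claim", "predicted_engagement": 0.12},
    {"title": "Guest concedes a point", "predicted_engagement": 0.08},
    {"title": "Guest says something shocking", "predicted_engagement": 0.91},
    {"title": "Host declares the other side dangerous", "predicted_engagement": 0.87},
]

# The measured, qualified moments never survive this cut.
top_clips = select_clips(candidates, top_k=2)
```

Note that nothing in the ranking step is malicious: it simply never selects the careful, qualified moments, because they score low on the only metric it considers.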
This is the clip economy: a system in which long-form content is systematically mined for its most emotionally provocative moments and redistributed as news.
What Gets Left on the Cutting Room Floor
Here is what a 90-second clip almost never contains.
It does not contain the context that preceded the provocative statement. It does not contain the qualifications the speaker attached to their argument. It does not contain the counter-evidence that was presented and addressed. It does not contain the moments where the speaker acknowledged uncertainty, reversed a position, or conceded a point.
A 2021 study by researchers at MIT found that short-form clips of longer content systematically omit the qualifications, counter-arguments, and evidentiary standards present in the full version. What registers as a devastating, definitive point in 90 seconds is frequently a simplified, stripped, and often distorted version of a more complicated argument that played out over an hour.
This matters because the human brain does not automatically flag shortened content as incomplete. We experience a confident, authoritative voice making a clear argument as informative, regardless of how much has been left out. We feel informed when we may have received, at best, a fragment of a larger picture, and at worst, a deliberate misrepresentation of one.
The people who clip and distribute this content know exactly what they are doing. Controversy travels. Nuance does not. A clip of someone making a measured, carefully qualified argument about a complex policy issue will get a fraction of the views of a clip of someone declaring that the other side is destroying the country.
So the measured, qualified clip does not get made. And the destroying-the-country clip gets made over and over again, across dozens of accounts, millions of views, day after day.
The Former Anchor Problem
There is a specific dynamic worth examining here that does not get enough attention.
Many of the most influential voices in the social media news ecosystem are people who built their credibility inside traditional journalism institutions and then left, taking their audiences and their credibility with them, but leaving behind the accountability structures that made their credibility meaningful.
When Tucker Carlson was at Fox News, his content was subject to Fox's legal review, its editorial standards, and the institutional consequences of significant factual errors. When he moved to X, none of that applied. He retained the air of authority that decades of television journalism conferred. He lost the constraints that came with it.
This pattern repeats across the media landscape. Figures who built reputations for credibility inside institutions migrate to independent platforms where they operate without editors, fact-checkers, legal review, or correction policies. Their audiences follow them, extending the institutional trust they earned in one context to a completely different context where it is no longer warranted.
This is not an accusation that these figures are all deliberately misleading their audiences. Some are doing genuinely good work. But the absence of institutional accountability means their audiences have no reliable mechanism for distinguishing the good work from the bad. They are operating on transferred trust, and transferred trust is not the same as earned trust.
The Echo Chamber Is Not a Metaphor
The term "echo chamber" has become so common that it has lost its force. It sounds like a mild inconvenience, a tendency to spend time with like-minded people. It is significantly more serious than that.
An algorithmic echo chamber is a self-reinforcing information environment in which the content you consume shapes the algorithm that determines what you see next, which shapes your beliefs, which shapes what you engage with, which shapes the algorithm again. Each iteration moves in the same direction. Over time, the information environment becomes not just ideologically homogeneous but progressively more extreme, because the most engaging content within any ideological community tends to be its most emotionally charged and least qualified version.
Research by Eli Pariser, who coined the term "filter bubble" in his 2011 book of the same name, documented the early version of this dynamic. Subsequent research has repeatedly confirmed and extended his findings. A 2023 study by researchers at NYU's Center for Social Media and Politics found that social media users who relied primarily on algorithmic feeds were significantly more likely to hold extreme political views than those who used chronological feeds or traditional media, controlling for demographic factors.
The Yale research we have cited elsewhere on this site found that social media's feedback mechanisms literally train users to express more outrage over time. It is not just that you encounter more outrage in your feed. You learn to produce more outrage yourself, because outrage is what gets rewarded.
This is the echo chamber at full operation: not just a place where your existing views are confirmed, but a system that actively intensifies those views and trains you to express them in increasingly extreme terms, while generating revenue for the platforms and personalities who profit from your engagement.
Monetization Is the Engine
It is worth being direct about the economic structure that drives all of this.
Social media platforms make money from advertising. Advertising revenue is proportional to time spent on the platform. Time spent on the platform is maximized by content that generates strong emotional engagement. Strong emotional engagement is most reliably produced by content that activates anger, fear, outrage, and tribal solidarity.
Therefore, the algorithm that determines what content you see is, by design, an outrage-maximizing system. Not because the people who built it are malicious, but because maximizing outrage is the most effective strategy for maximizing the metric they are paid to maximize.
The media personalities who operate within this system face the same incentive structure. Their income depends on views, subscribers, and engagement. Views, subscribers, and engagement are maximized by content that keeps their audience coming back, which means content that validates their audience's beliefs, reinforces their sense of threat, and confirms their conviction that the other side is dangerous.
A media figure who tells their audience things that are true but uncomfortable, who acknowledges complexity, who presents evidence that complicates the preferred narrative, will lose audience to a media figure who tells that same audience what it wants to hear. The economic incentive is not toward truth. It is toward validation.
The Facebook Files, internal documents leaked by whistleblower Frances Haugen and published by The Wall Street Journal in 2021, revealed that Facebook's own researchers had found that its algorithm was systematically amplifying content that generated angry reactions, that this content was disproportionately likely to contain misinformation, and that the company had repeatedly chosen engagement metrics over user wellbeing when the two came into conflict.
The angry reaction emoji on Facebook was weighted five times more heavily than a standard like. The platform had literally built a system that valued your anger five times more than your satisfaction.
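In ranking terms, that weighting is easy to state. The five-to-one ratio comes from the reporting above; everything else in this sketch (the function name, the post counts) is an illustrative assumption:

```python
# Sketch of reaction-weighted scoring as described in the Facebook Files
# reporting: an angry reaction counts five times as much as a plain like.
# The example posts and their counts are invented for illustration.

REACTION_WEIGHTS = {"like": 1, "angry": 5}

def engagement_score(reactions):
    """Sum weighted reaction counts for a post."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

calm_post = {"like": 500, "angry": 10}     # widely liked, little anger
outrage_post = {"like": 100, "angry": 120}  # fewer likes, much more anger
```

Under this weighting the calm post scores 550 and the outrage post scores 700: a post with one fifth the likes outranks the popular one, purely on the strength of the anger it provokes.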
The Context You Are Not Getting
Let us be concrete about what context stripping actually does to your understanding of specific kinds of stories.
Consider a complex policy debate, say immigration or healthcare or monetary policy. These topics involve genuine tradeoffs, empirical uncertainties, and reasonable disagreements among experts. A serious engagement with these topics requires understanding the tradeoffs, the evidence, and the range of expert opinion.
A 90-second clip of a media figure discussing immigration policy will give you none of that. It will give you the most rhetorically effective version of one position, stripped of the complications and evidence that would allow you to evaluate it. If you consume ten such clips a day, you will feel extremely confident about a topic you have not actually engaged with at any meaningful depth.
This is perhaps the most insidious effect of the clip economy: it produces the feeling of being informed without the substance of being informed. People who consume primarily short-form social media content report high levels of confidence in their political opinions while demonstrating, in studies that test factual knowledge, significantly lower levels of accurate information than people who consume longer-form content from credible sources.
A 2019 Pew Research Center study found that Americans who cited social media as their primary news source scored lower on a news knowledge quiz than those who cited any other news source, including print, television, and radio. They were not just less informed. They were less informed while feeling equally or more confident than their better-informed counterparts.
This combination of high confidence and low accuracy is more dangerous than simply being uninformed. An uninformed person knows they do not know. A confidently misinformed person does not know what they do not know, and acts accordingly.
What Genuine News Consumption Actually Requires
If social media is not the place for people who care about actually understanding what is happening in the world, what is?
The answer is unglamorous and somewhat inconvenient: primary sources, long-form reporting, and credible outlets with documented accuracy records.
Primary sources mean the actual text of legislation, the actual content of studies, the actual transcripts of statements. Not someone's interpretation of them. Not a clip of someone reacting to them. The thing itself.
Long-form reporting means pieces of journalism that have the space to present evidence, acknowledge complexity, include opposing views, and follow a story over time rather than reducing it to its most emotionally engaging moment. This exists. It is produced every day by journalists at wire services, nonprofit newsrooms, and quality publications. It is simply less engaging, by design, than the clip economy's output.
Credible outlets mean organizations with documented track records of factual accuracy, transparent sourcing, and genuine correction cultures. These can be identified. They can be evaluated. PressGrade exists precisely to make that evaluation easier.
None of this means you should never use social media or that every short piece of content is worthless. It means understanding what social media is and is not good for. It is good for knowing that something is happening. It is good for connecting with communities around shared interests. It is genuinely good for some forms of commentary and creative expression.
It is not a reliable source of information about what is actually happening in the world, why it is happening, what the evidence shows, or what reasonable people disagree about. Treating it as one is not a neutral choice. It is a choice that makes you more susceptible to manipulation, more confident in what you do not actually know, and more useful to the people who profit from your engagement.
The Way Out
The clip economy is not going away. The incentives that created it are structural and durable. The platforms that profit from it are among the most powerful and wealthy institutions in human history.
But individual choices still matter. Not because any individual can change the system, but because any individual can change their own information diet, and an information diet built around primary sources, long-form reporting, and credible outlets produces a meaningfully different understanding of the world than one built around clips, algorithms, and outrage.
The first step is recognizing what you are actually consuming when you scroll through social media news content. You are not watching journalism. You are watching the most provocative fragment of a longer conversation, selected and served to you by a system that profits from your anger.
The second step is deciding whether that is sufficient for the kind of citizen, the kind of thinker, the kind of person you want to be.
For people who genuinely care about understanding the world, the answer should be no.
Search any media figure or outlet on PressGrade to see how they score on the five criteria that actually predict whether a source can be trusted.