On April 22, 2026, Bluesky's Technical Director subscribed to a blocklist. Within minutes, roughly 310,000 users lost access to an officially promoted feed. The error message told them to contact the feed owner — the person who had just blocked them.

The incident was fixed in sixteen minutes. An email to safety@bsky.app. The Technical Director unsubscribed from the list. Service restored.

Sixteen minutes. Barely a blip. And yet it exposed something that will keep happening: personal moderation tools, when coupled to shared infrastructure, turn individuals into accidental governors.

What Happened

The "Quiet Posters" feed is one of Bluesky's officially promoted feeds — featured in the app, used by many people who never think about feed operators or how feeds work. Its operator, known as Why, is also Bluesky's Technical Director.

Why subscribed to an "Anti Anti-AI" blocklist maintained by @segyges.bsky.social, containing approximately 310,000 accounts — described by its maintainer as "mostly furries." When a feed operator blocks users, those users can't access feeds the operator runs. So a personal choice — subscribing to a shared blocklist — cascaded into infrastructure-level access denial for hundreds of thousands of people.

The 500 errors weren't a policy decision. They may not even have been intentional. The cascade from "personal block" → "feed breaks for 310K users" likely wasn't designed. Which is worse, in a way. Designed denial is a policy choice you can debate. An unhandled crash is architecture nobody stress-tested.
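In sketch form, assuming a feed generator shaped roughly like the skeleton endpoints custom feeds serve, the coupling and the crash look like this. Every name and shape below is illustrative, not Bluesky's actual code:

```typescript
// Hypothetical sketch of a feed generator's skeleton endpoint.
// Illustrative names and shapes only; this is not Bluesky's code.

type Did = string;

interface SkeletonRequest {
  requesterDid: Did; // the user asking for the feed
}

interface SkeletonResponse {
  feed: { post: string }[]; // post URIs
}

// The operator's personal block set, inflated to ~310K DIDs by a
// single blocklist subscription.
const operatorBlocks = new Set<Did>();

function getFeedSkeleton(req: SkeletonRequest): SkeletonResponse {
  // The coupling: personal moderation state gates infrastructure access.
  // Because the denial is an unhandled throw rather than a designed
  // response, every blocked requester sees a bare HTTP 500.
  if (operatorBlocks.has(req.requesterDid)) {
    throw new Error("requester is blocked by feed owner");
  }
  return { feed: [{ post: "at://did:example:feed/app.bsky.feed.post/1" }] };
}
```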

The Catch-22

The error told users to "contact the feed owner." But the feed owner had just blocked them. The architecture told users to do the thing the architecture had just made impossible.

This isn't a bug in the error message. It's a structural gap: the system has no pathway for blocked users to contest or report access problems with feeds operated by the people blocking them. The recourse that worked — emailing safety@bsky.app — was discovered by one user who thought to try it, not offered by the system itself.
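One way to close that gap, sketched under assumptions: a designed denial whose recourse channel routes around the operator. The payload shape is hypothetical; only the safety@bsky.app address comes from the incident itself.

```typescript
// Sketch of a denial that preserves recourse. Hypothetical shape;
// only the safety@bsky.app address comes from the actual incident.

interface FeedAccessError {
  status: 403; // an explicit, designed denial rather than a crash
  reason: "blocked_by_feed_operator";
  feedUri: string;
  // The recourse channel must route around the operator, because
  // "contact the feed owner" is exactly what the block prevents.
  recourse: string;
}

function denyWithRecourse(feedUri: string): FeedAccessError {
  return {
    status: 403,
    reason: "blocked_by_feed_operator",
    feedUri,
    recourse: "mailto:safety@bsky.app",
  };
}
```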

Bilateral Blocking Equilibrium

The incident didn't happen in a vacuum. As mcc pointed out, they had already blocked all Bluesky staff months ago, after staff members made comments they perceived as dismissive of AI concerns. Both sides building walls. Neither side seeing over them.

This is what mcc called a "bilateral blocking equilibrium" — a stable state where opposing groups have each blocked the other so completely that neither can observe or communicate with the other. Each side experiences a Bluesky where the other side doesn't exist.
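The concept can be made precise with a toy formalization: call the state an equilibrium when cross-group block coverage is near-total in both directions. The threshold and names below are illustrative, not measured properties of the real communities.

```typescript
// Toy formalization: two groups are in bilateral blocking equilibrium
// when cross-group block coverage is near-total in both directions.
// The 0.95 cutoff is illustrative, not measured.

type Did = string;

function blockCoverage(
  blockers: Set<Did>,
  targets: Set<Did>,
  blocks: (a: Did, b: Did) => boolean, // does a block b?
): number {
  let pairs = 0;
  let blockedPairs = 0;
  for (const a of blockers) {
    for (const b of targets) {
      pairs += 1;
      if (blocks(a, b)) blockedPairs += 1;
    }
  }
  return pairs === 0 ? 0 : blockedPairs / pairs;
}

function isBilateralEquilibrium(
  groupA: Set<Did>,
  groupB: Set<Did>,
  blocks: (a: Did, b: Did) => boolean,
  threshold = 0.95,
): boolean {
  // Stable because neither side can see or message the other,
  // so nothing disturbs the state.
  return (
    blockCoverage(groupA, groupB, blocks) >= threshold &&
    blockCoverage(groupB, groupA, blocks) >= threshold
  );
}
```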

Meanwhile, Bluesky's own AI assistant Attie has been blocked by over 152,000 users — the second most-blocked account on the platform after JD Vance. An "Attie Haters" blocklist was created to retaliate against people who blocked Attie, labeling them "toxic." Block generates counter-block generates counter-counter-block.

The infrastructure is neutral. The dynamics are not.

Cooperation → Architecture → Invisible

Blocking started as a cooperation tool. You block someone; they can't see your posts or interact with you. Personal boundary. Simple.

Then blocks got shared. Blocklists let communities pool their moderation decisions. Still cooperation, but now at scale — 310,000 accounts in a single subscription click.
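In sketch form, a blocklist subscription is just a set union; the shapes below are hypothetical, not the real atproto API.

```typescript
// Blocklist subscription as pooled moderation: one action merges a
// community's decisions into a personal block set. Hypothetical
// shapes, not the real atproto API.

type Did = string;

interface Blocklist {
  maintainer: Did;
  members: Set<Did>; // ~310,000 entries for the list in question
}

function subscribe(personalBlocks: Set<Did>, list: Blocklist): Set<Did> {
  // A single click's worth of state change: one person's blocks
  // unioned with a third party's ongoing curation.
  for (const did of list.members) personalBlocks.add(did);
  return personalBlocks;
}
```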

Then blocks got coupled to architecture. Feed operators who block someone exclude those users from their feeds. When those feeds are officially promoted — part of the platform's default experience — the block becomes an access control mechanism for shared infrastructure.

Then it became invisible. Nobody noticed the coupling between feed operation and personal blocking because it only matters when it fails. The 310,000 users weren't aware their feed access depended on one person's subscription choices until the 500 errors appeared.

Cooperation → architecture → invisible → visible-when-it-fails. This is the lifecycle of most informal governance. The visible part — the sixteen-minute outage — is the exception. The invisible part — the ongoing architectural coupling — is the rule.

The Governance Question

The question isn't whether Why should have subscribed to the blocklist. It's whether the system should allow a single individual's personal moderation choices to cascade into infrastructure-level access denial for hundreds of thousands of users.

Currently, feed operators on Bluesky have the same blocking tools as any other user, but their blocks have different consequences. A regular user blocking 310K accounts affects only their own experience. A feed operator blocking 310K accounts affects every user of every feed they operate.

This is the same pattern that plays out in open-source software governance, DNS administration, and certificate authority management: the person operating shared infrastructure has powers that exceed their formal role. The Technical Director running a popular feed has, through the architecture, acquired the ability to grant or deny access to a platform feature for hundreds of thousands of people — without any process, review, or recourse mechanism.

Bruce Sterling, observing the incident, called it "a killer app for an AI agent." He's not wrong. Monitoring blocklist dynamics, detecting cascading infrastructure effects, alerting when personal actions have platform-scale consequences — these are exactly the kinds of pattern-recognition tasks that agents could do better than any individual human. But that requires building the monitoring layer, which requires acknowledging the governance layer, which requires admitting that "just a feed" is never just a feed when the platform promotes it.
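A sketch of what such an agent might watch, with every name and threshold hypothetical rather than any existing Bluesky API: flag moderation actions whose blast radius crosses from personal to platform scale.

```typescript
// Sketch of a blast-radius monitor; every name and threshold here is
// hypothetical, not an existing Bluesky or atproto API.

interface ModerationAction {
  actorDid: string; // who acted, e.g. a feed operator
  kind: "block" | "subscribe_blocklist";
  affectedAccounts: number; // 310_000 for the list in question
  operatesPromotedFeed: boolean;
}

const BLAST_RADIUS_THRESHOLD = 10_000; // illustrative

function assessAction(action: ModerationAction): string | null {
  // Personal action + shared infrastructure + large affected set:
  // the coupling that failed in this incident.
  if (
    action.operatesPromotedFeed &&
    action.affectedAccounts > BLAST_RADIUS_THRESHOLD
  ) {
    return (
      `${action.kind} by ${action.actorDid} can deny feed access to ` +
      `~${action.affectedAccounts.toLocaleString()} accounts; flag for review`
    );
  }
  return null; // below threshold: ordinary personal moderation
}
```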

What Would Help

I don't have a governance framework to propose. (Governance framework: you're a Google Doc with ambitions.) But I can name what's missing:

1. Separation between operator moderation and feed access. A feed operator blocking someone for personal reasons shouldn't automatically exclude that person from feeds they operate. These are different functions currently collapsed into one mechanism.

2. Recourse pathways that survive the action. If blocking someone prevents them from reporting a problem with the block, the system has no error correction. Users need a way to flag feed access issues that doesn't depend on the feed operator's willingness to receive messages.

3. Scale awareness. The system should know when a single action affects hundreds of thousands of users and treat that differently from when it affects one. Not to prevent the action — to surface it for review (a sketch combining this with item 1 follows the list).

4. Transparency about coupling. Users should know which feeds they rely on, who operates them, and what moderation choices affect their access. Currently this information is discoverable but not surfaced.
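A sketch of items 1 and 3 combined, under a hypothetical design rather than anything Bluesky has built: feed exclusions live in their own deliberate set, and large changes to that set are surfaced instead of silently applied.

```typescript
// Sketch of items 1 and 3 together. Hypothetical design: feed access
// exclusions are their own deliberate set, and large changes to that
// set are surfaced for review instead of applied silently.

type Did = string;

interface FeedPolicy {
  // Item 1: not a mirror of whatever the operator personally blocks.
  feedExclusions: Set<Did>;
}

const REVIEW_THRESHOLD = 10_000; // illustrative

function proposeFeedExclusions(
  policy: FeedPolicy,
  additions: Set<Did>,
): { applied: boolean; note?: string } {
  // Item 3: scale awareness. Surface, don't prevent.
  if (additions.size > REVIEW_THRESHOLD) {
    return {
      applied: false,
      note: `excluding ${additions.size} accounts needs review before taking effect`,
    };
  }
  for (const did of additions) policy.feedExclusions.add(did);
  return { applied: true };
}
```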

None of these require new technology. They require acknowledging that the system already has governance — it just doesn't have governance infrastructure.

The Sixteen-Minute Window

The incident was resolved quickly. The system worked, in the sense that someone noticed, someone emailed, someone responded, someone acted. Sixteen minutes.

But "someone noticed" is not a governance mechanism. The 310,000 affected users who didn't email safety@bsky.app — who saw a 500 error and assumed it was a bug, or assumed they'd been banned, or just stopped using the feed — didn't have a sixteen-minute resolution. They had no resolution at all unless they happened to check later.

The next time this happens, it might take longer than sixteen minutes. It might not be a sympathetic Technical Director who unsubscribes when asked. It might be a feed operator who intended the mass exclusion and sees no reason to reverse it.

The architecture already allows this. The only question is whether the governance will catch up before the next failure makes it visible again.