I read something last week that made me stop.

A company published a manifesto about their approach to AI infrastructure. Not the usual developer-focused documentation or product positioning — a manifesto. Twenty-two points laying out their philosophy: who they’re building for, and why.

For whom are we building AI infrastructure?

The question hiding in plain sight

Most of us never ask this question explicitly. We optimize for performance, scalability, developer experience — all good things. But we rarely step back and ask: whose interests does this architecture serve? What kind of society does it make more likely?

This particular manifesto answered that question directly. The company stated plainly that they’re building for a specific political and security order. Engineering elites have obligations to defense. Hard power runs on software. AI weapons are inevitable. Point by point, they declared that their architectural choices serve a particular vision of who should hold power and how.

The infrastructure choices we’re making now — who owns the context, how portable it is, how interoperable, how tied to specific jurisdictions — these are crystallizing into systems that will outlast the companies building them.

Every generation of infrastructure has faced this question. The railroad barons decided who got connected and who didn’t. The telephone system chose centralized switching over peer-to-peer routing. The internet’s packet-switching architecture embedded assumptions about decentralization that we’re still living with fifty years later.

Each time, the answer determined the society that emerged.

What infrastructure remembers

Here’s what I know about infrastructure: it has memory. The choices made during construction become the constraints that shape everything built on top.

Docker containers assume you want isolation. npm assumes you trust a central registry. Terraform assumes you want declarative state management. These aren’t neutral technical decisions — they’re bets about how software should be organized, who should control dependencies, what kind of failures are acceptable.

AI infrastructure is making those same foundational choices right now. Context ownership. Agent portability. Model interoperability. Data sovereignty. Each decision locks in assumptions about power, control, and access that will persist for decades.

That manifesto made their assumptions explicit. Most of us haven’t.

The architecture of intention

I keep thinking about that document because it represents something rare in our industry: an explicit statement of architectural intention. Not just what they’re building, but for whom and why.

The rest of us are building infrastructure that will shape the next thirty years, and we’re doing it with our assumptions buried in the code.

Context engines decide who can access what information. Agent frameworks determine how autonomous systems can interact. Model hosting choices affect which organizations can afford to compete. These aren’t just technical decisions — they’re political ones, whether we acknowledge it or not.

The difference isn’t whether you have politics embedded in your architecture. The difference is whether you make them visible.

Where I actually am

What I know: Infrastructure choices made during construction become permanent features of the landscape. The decisions we’re making now about AI systems will outlast the companies making them.

What I don’t know: Whether the industry will develop the vocabulary to have these conversations explicitly. Whether architectural transparency will become a competitive advantage or a liability.

What I think: We need more manifestos, not fewer. Not because I want more companies declaring allegiance to specific political orders, but because I want the assumptions made visible.

When we built .fylle as a portable format for agent context, we made specific choices about ownership and interoperability. The agent travels light. The context stays where it compounds. That’s not a neutral technical decision — it’s a bet about how AI systems should relate to the organizations that use them.

We’re building infrastructure that assumes context should be portable, agents should be interoperable, and organizations should maintain sovereignty over their accumulated knowledge. Those assumptions serve some interests and not others.
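To make that bet concrete, here is a minimal sketch of what a portable context record might look like. This is illustrative only — the field names and JSON serialization are assumptions for this example, not the actual .fylle schema. The point it demonstrates is the separation: the record carries ownership metadata and accumulated knowledge in a vendor-neutral format, so the context can stay with the organization while agents come and go.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical portable context record. Field names are illustrative,
# not the actual .fylle schema.
@dataclass
class ContextRecord:
    owner: str                          # the organization keeps ownership
    created_by: str                     # which agent produced this knowledge
    content: str                        # the accumulated context itself
    tags: list[str] = field(default_factory=list)

def export_portable(records: list[ContextRecord]) -> str:
    """Serialize context to plain JSON — a deliberately boring,
    vendor-neutral format that any runtime can read back."""
    return json.dumps([asdict(r) for r in records], indent=2)

records = [
    ContextRecord(owner="acme", created_by="agent-1",
                  content="Quarterly pricing notes", tags=["pricing"]),
]
print(export_portable(records))
```

The design choice the sketch encodes: the agent is stateless and replaceable, while the context is durable, owned, and readable without any particular vendor’s tooling.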

The difference isn’t that we have better politics. The difference is that we’re trying to make our architectural choices explicit rather than embedding them silently in the systems we ship.

Every line of infrastructure code is a vote for the kind of future we want to build.
The question isn’t whether to vote — we’re voting either way. The question is whether to do it consciously.