I kept rewriting agents that worked perfectly.
Not because they were wrong. Because the framework changed. CrewAI updated its API. A better tool came out. A client wanted to switch providers. And every time, same story: rebuild from scratch.
The prompts, the model configs, the tools, the guardrails — all of it coupled to one runtime. The agent was a prisoner of the platform that ran it.
After the third time this happened, I stopped fixing the symptom and started looking at the structure.
The gap
Every other layer of the stack solved this problem years ago.
Apps have Docker — a container that separates what an app is from where it runs. Packages have npm. Infrastructure has Terraform. AI agents have nothing.
There is no standard way to say “this is my agent” — identity, instructions, capabilities, constraints — in a format that any framework can understand.
CrewAI uses YAML. OpenAI uses JSON. LangChain uses Python. Claude Code uses markdown files. Switch framework, rewrite everything.
This gap exists because most frameworks treat the agent definition and the agent runtime as the same thing. They're not.
Why this matters to us specifically
If you've read the first post on this blog, you know our thesis: agents are commodity. Context is the product.
The intelligence doesn't live in the model or the agent. It lives in the structured context that feeds them. Models improve every quarter. Frameworks come and go. But the knowledge about your client, your domain, your compliance rules — that compounds.
If you believe this, then a locked-in agent format is architecturally wrong.
Think about it. If the agent is commodity, it must be portable. Locking it inside a framework is like building a shipping container that only fits one truck. The container should go anywhere. The cargo — your context — is what has value.
That's the reasoning behind .fylle.
What it is
A .fylle file is a ZIP. Inside it, everything that defines an agent:
my-agent.fylle/
├── manifest.json
├── identity/
│   ├── system_prompt.md
│   └── persona.json
├── model/
│   └── preferences.json
├── tools/
│   └── tool_definitions.json
├── guardrails/
│   └── rules.json
└── io/
    ├── inputs.json
    └── outputs.json

Human-readable. Versionable in Git. Framework-agnostic.
What it deliberately does not contain: the context itself. The cards, the knowledge bases, the client intelligence — those stay in your context infrastructure. A .fylle file can reference context sources, but it doesn't embed them.
The agent travels light. The context stays where it compounds.
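Because the container is a plain ZIP, nothing beyond the standard library is needed to write or read one. A minimal sketch of packing and unpacking, assuming hypothetical manifest fields (`name`, `version`) that may not match the actual v0.1.0 spec:

```python
import json
import zipfile

def pack_fylle(path: str, manifest: dict, system_prompt: str) -> None:
    """Write a minimal .fylle archive: a plain ZIP with the layout shown above."""
    with zipfile.ZipFile(path, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        zf.writestr("identity/system_prompt.md", system_prompt)

def read_manifest(path: str) -> dict:
    """Any ZIP-aware tool can read the agent definition back."""
    with zipfile.ZipFile(path) as zf:
        return json.loads(zf.read("manifest.json"))

# Manifest fields here are illustrative, not the spec.
pack_fylle("my-agent.fylle", {"name": "my-agent", "version": "0.1.0"},
           "You are a helpful assistant.")
print(read_manifest("my-agent.fylle")["name"])  # prints "my-agent"
```

The point of the sketch: there is no runtime in the loop. Git, `unzip`, and a text editor are all valid consumers of the format.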
What it does today
v0.1.0. Python SDK. Four framework adapters.
Import from CrewAI (YAML), OpenAI Assistants (JSON), OpenClaw/SOUL.md, or Claude Code workspaces. Export to any of them.
Moving from OpenClaw to Claude Code? The export generates a complete workspace — CLAUDE.md, settings, rules. Three lines of Python. Switching from CrewAI to OpenAI Assistants? Same agent, new runtime, no rewrite.
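The adapter idea can be sketched without the SDK itself. As a rough illustration (this is not the .fylle API, and the CLAUDE.md layout below is invented for the example), an OpenAI-Assistants-to-Claude-Code conversion is just a mapping between representations:

```python
import json

def assistant_to_claude_md(assistant_json: str) -> str:
    """Convert an OpenAI Assistants-style definition into a CLAUDE.md body.

    Input field names (name, instructions, model) follow the Assistants API;
    the output layout is illustrative, not the real adapter's output.
    """
    a = json.loads(assistant_json)
    lines = [
        f"# {a.get('name', 'agent')}",
        "",
        a.get("instructions", ""),
        "",
        f"<!-- preferred model: {a.get('model', 'unspecified')} -->",
    ]
    return "\n".join(lines)

source = json.dumps({
    "name": "support-bot",
    "instructions": "Answer billing questions politely.",
    "model": "gpt-4o",
})
print(assistant_to_claude_md(source))
```

Each adapter only has to know its own framework's dialect and the common format in the middle. That's what turns N-to-N rewrites into N adapters.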
72 tests passing. Early, but functional.
Why open source
This is the part people ask about. Why give it away?
Because keeping it closed would contradict everything we believe.
If the agent is commodity, the container that holds it is infrastructure. Like a file format, not a product. The value isn't in the ZIP structure. It's in what you put inside it — and more importantly, in the context infrastructure that makes those agents effective.
We open-sourced .fylle because we think a portable agent format benefits everyone building with AI agents. And we think the ecosystem needs it more than any single company needs a proprietary lock.
Whether that's the right strategic call is something I can't prove yet. But it's consistent with the thesis, and consistency matters more than cleverness when you're building something that's supposed to last.
What's next
More framework adapters. Better support for multi-agent compositions. A richer capability model for tool definitions.
We're also exploring how .fylle files can reference external context sources in a standardized way — keeping the separation between portable agents and persistent context clean and explicit. This is the hard part, and we haven't solved it yet.
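One plausible shape, and this is an assumption rather than anything in the spec, is a manifest entry that names context sources by URI and leaves resolution entirely to the runtime:

```python
import json

# Hypothetical manifest fragment: the agent references context by URI
# instead of embedding it. None of these field names are in the v0.1.0 spec.
manifest = {
    "name": "my-agent",
    "context_sources": [
        {"uri": "ctx://client-acme/compliance", "required": True},
        {"uri": "ctx://domain/fintech-glossary", "required": False},
    ],
}

def required_sources(m: dict) -> list[str]:
    """The runtime, not the archive, decides how each URI is resolved."""
    return [s["uri"] for s in m.get("context_sources", []) if s["required"]]

print(required_sources(manifest))  # prints ['ctx://client-acme/compliance']
```

The open questions are exactly the hard ones: URI schemes, versioning of the referenced context, and what a runtime should do when a required source is unavailable.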
If you build with AI agents and you've felt this problem, we'd like to hear from you.
Spec is open. SDK is MIT. Contributions welcome.