60-80% fewer tokens than JSON. Human-debuggable. Stream-native.
Inspired by 40 years of battle-tested EDI density. Built for LLM agents.
No structural punctuation tax. Field positions defined by schema, not repeated on the wire.
ASCII-only, line-oriented, debuggable with cat. If you can read EDI, you can read AXF.
Segment-terminated. Parsers emit events as they go. No need to buffer the whole message.
Envelope names a schema ID. Receivers already know the layout. Zero key overhead.
Same tool call. Left: JSON. Right: AXF. Token counts are measured with the cl100k_base tokenizer (GPT-4 family); other tokenizers differ, but the direction holds.
{
  "formatica_version": "1.0",
  "schema": "tc:weather.v1",
  "request_id": "8a3f",
  "header": {
    "role": "assistant",
    "timestamp": 1745515200,
    "conversation_id": "c_9b2e"
  },
  "tool_call": {
    "name": "get_weather",
    "arguments": {
      "location": "Austin, TX",
      "units": "imperial",
      "when": "now"
    }
  },
  "metadata": {
    "priority": "normal",
    "timeout_ms": 5000
  }
}
FX*1.0*tc:weather.v1*req:8a3f~
H*role:assistant*ts:1745515200*conv:c_9b2e~
TC*get_weather*loc:Austin,TX*units:imperial*when:now~
M*priority:normal*timeout:5000~
E~
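The streaming claim can be sketched in a few lines. This is a minimal parser under assumed conventions read off the example above (`*` between fields, `~` terminating each segment); the draft spec may define delimiters and escaping differently.

```python
# Minimal streaming AXF parser sketch. Delimiter choices (* between
# fields, ~ as segment terminator) are inferred from the example
# above and are assumptions, not the normative spec.
def parse_axf(chunks):
    """Yield one (segment_tag, fields) event per terminated segment,
    without ever buffering the whole message."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        while "~" in buf:
            segment, buf = buf.split("~", 1)
            segment = segment.strip()  # tolerate line-oriented layout
            if segment:
                tag, *fields = segment.split("*")
                yield tag, fields

# Feed the example message in two arbitrary chunks to show that
# events stream out as soon as each terminator arrives.
wire = (
    "FX*1.0*tc:weather.v1*req:8a3f~\n"
    "H*role:assistant*ts:1745515200*conv:c_9b2e~\n"
    "TC*get_weather*loc:Austin,TX*units:imperial*when:now~\n"
    "M*priority:normal*timeout:5000~\n"
    "E~\n"
)
events = list(parse_axf([wire[:37], wire[37:]]))
```

Because each segment ends in `~`, the parser needs no lookahead and no end-of-message marker before it starts emitting events.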
Why the savings? JSON spends a large share of its tokens on braces, quotes, commas, and whitespace, repeated on every message. AXF uses single ASCII delimiters that tokenizers treat as cheap separators, plus schema-by-reference, so keys don't ride along on every message.
AXF is an open standard. The spec is a draft — feedback, test vectors, and reference implementations are welcome.
github.com/axf-dev (coming soon)