Show HN: ZON Format – 35–60% fewer LLM tokens using zero-overhead notation https://ift.tt/Ae9XRbp

Hey HN! Roni from India here, ex-Google Summer of Code (GSoC) at the Internet Archive, full-stack dev. I got frustrated watching JSON bloat my OpenAI/Claude bills by 50%+ on redundant syntax, so I built ZON over a few weekends: a zero-overhead notation that compresses payloads ~50% vs JSON (692 tokens vs 1,300 on gpt-5-nano benchmarks) while staying 100% human-readable and lossless.

Playground: https://ift.tt/e9fn1g5
ROI calculator: https://ift.tt/t4fhSd8
Benchmarks: https://ift.tt/bHn97Au

It's a <2 kB TypeScript library with 100% test coverage. It drops into the OpenAI SDK, LangChain JS/TS, Claude, and llama.cpp, supports streaming, and works with Zod schemas to validate LLM outputs at runtime with zero extra cost.

Try it: npm i zon-format or uv add zon-format, then encode/decode in under 10 seconds (code in the README).

Full site with benchmarks: https://zonformat.org
GitHub: https://github.com/ZON-Format

Harsh feedback on performance, edge cases, or the API is very welcome. If it saves you a coffee's worth of tokens, a star would be awesome. Let's make LLM prompts efficient again!

December 8, 2025 at 10:55PM
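To make the token-savings claim concrete: most of JSON's overhead in LLM payloads comes from repeating every key on every record. ZON's actual wire format is specified at zonformat.org and is not reproduced here; the sketch below is only a toy illustration of the general zero-overhead idea (state keys once, then emit bare rows), and `encodeTabular` is a hypothetical helper written for this example, not part of the zon-format API.

```typescript
// Toy illustration of why zero-overhead notations shrink LLM payloads:
// JSON repeats every key per record; a tabular encoding states keys once.
// NOTE: this is NOT ZON's real format (see zonformat.org for the spec);
// `encodeTabular` is a hypothetical helper for this example only.

type Row = Record<string, string | number | boolean>;

function encodeTabular(rows: Row[]): string {
  if (rows.length === 0) return "";
  const keys = Object.keys(rows[0]);     // header: each key appears once
  const header = keys.join(",");
  const body = rows
    .map((row) => keys.map((k) => String(row[k])).join(","))
    .join("\n");                          // bare values, no braces or quotes
  return header + "\n" + body;
}

const users: Row[] = [
  { id: 1, name: "ada", active: true },
  { id: 2, name: "bob", active: false },
  { id: 3, name: "eve", active: true },
];

const asJson = JSON.stringify(users);
const asTabular = encodeTabular(users);

// The tabular form carries the same data in far fewer characters,
// and the savings grow with the number of rows.
console.log(asJson.length, asTabular.length);
```

Fewer characters usually means fewer tokens under BPE tokenizers, which is where the API-bill savings come from, though the exact ratio depends on the tokenizer and the data.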
