Use S-expressions for GPT-5 effect generation

ADR-0035 ACCEPTED · 2025-09-15

Context

The trip planning system uses GPT-5 with CFG constraints to generate effects (ADR-0032). The original approach used a custom Lark grammar that output a domain-specific format:

CreateDestination("Paris", 5); AddDestination("Rome"); SetTotalDuration(14)

This required 170+ lines of manual string parsing to convert the output into the Rust Effect enum. The grammar and the parser had to be kept in sync by hand, and parsing errors surfaced only at runtime. OpenAI has also acknowledged that GPT-5 can "go out of distribution on unbounded regexes" in CFG mode.
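For illustration, the positional format forced hand-rolled parsing along these lines. This is a hypothetical sketch, not the actual 170-line parser; the three effects shown are the ones from the example above.

```rust
// Illustrative sketch of the old approach: manually splitting the
// generated string and matching on call names and positional arguments.
#[derive(Debug, PartialEq)]
enum Effect {
    CreateDestination { location: String, duration_days: u32 },
    AddDestination { location: String },
    SetTotalDuration { days: u32 },
}

fn parse_effects(input: &str) -> Result<Vec<Effect>, String> {
    input
        .split(';')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(parse_one)
        .collect()
}

fn parse_one(call: &str) -> Result<Effect, String> {
    let open = call.find('(').ok_or("missing '('")?;
    let close = call.rfind(')').ok_or("missing ')'")?;
    let name = &call[..open];
    // Naive argument splitting: trims whitespace and surrounding quotes.
    let args: Vec<&str> = call[open + 1..close]
        .split(',')
        .map(|a| a.trim().trim_matches('"'))
        .collect();
    match (name, args.as_slice()) {
        ("CreateDestination", [loc, days]) => Ok(Effect::CreateDestination {
            location: loc.to_string(),
            duration_days: days.parse().map_err(|e| format!("{e}"))?,
        }),
        ("AddDestination", [loc]) => Ok(Effect::AddDestination {
            location: loc.to_string(),
        }),
        ("SetTotalDuration", [days]) => Ok(Effect::SetTotalDuration {
            days: days.parse().map_err(|e| format!("{e}"))?,
        }),
        _ => Err(format!("unknown or malformed call: {call}")),
    }
}
```

Every new effect means another match arm, and any drift between this code and the grammar is invisible until runtime.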

We evaluated three alternatives: JSON Schema (100% reliable per OpenAI, but roughly 2x the token count), RON (not significantly better than the custom format), and S-expressions (a Lisp-like format with existing serde support via serde-lexpr).

Decision

Migrate to S-expressions using serde-lexpr for automatic serialization/deserialization.

((create-destination (location . "Paris") (duration-days . 5))
 (add-destination (location . "Rome"))
 (set-total-duration (days . 14)))

Fields are named (location, duration-days) rather than positional. This disambiguates fields of the same type and reduces argument-ordering mistakes from GPT-5. The existing Effect enum stays unchanged with #[serde(rename_all = "kebab-case")].

The 170+ lines of manual parsing are replaced by a single call to serde_lexpr::from_str().

Consequences

Zero hand-written parsing code: serde handles all conversion, and the derives give compile-time type safety. The Rust enum is the single source of truth for effect structure, so adding a new effect requires no grammar changes.

Token overhead is negligible (~35 vs. ~32 characters for a typical effect). The format is a natural fit for command representation: S-expressions match the (verb args) semantics of effects.