#information-theory
3 experiments
EXP-011
Cross-Model Replication: Surprisal Typology Clusters by Family in Both Qwen and Gemma 4
Does the finding that surprisal curves cluster by language family replicate across different model architectures (Qwen2.5-7B dense vs Gemma 4 E2B MoE)?
- Family clustering replicates across architectures, but with important caveats. Gemma 4 shows 2.52x family ratio vs Qwen'…
- The within-family distance appears identical (0.0073) across both models — this is a rounding coincidence. Actual values…
#replication #surprisal #typology #multilingual
EXP-010
Surprisal Typology: 12 Languages Cluster by Family, Not Word Order
Does the surprisal-by-sentence-position profile in an LLM cluster by language family (genealogy) or by syntactic word order (SOV vs SVO)?
- Language family clusters, word order doesn't. Within-family curve distance is 0.0073 vs between-family 0.0110 (1.51x rat…
- Romance languages form the tightest cluster. Portuguese, Spanish, French, and Italian have nearly overlapping surprisal …
#surprisal #typology #multilingual #language-families
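The within-family vs between-family comparison in EXP-010 and EXP-011 reduces to a mean pairwise distance over per-language surprisal curves. A minimal sketch of that computation, using toy curves and mean absolute difference as one plausible distance metric (the experiments' actual curves, languages, and metric are not specified here):

```python
import numpy as np

# Hypothetical surprisal-by-position curves (mean surprisal at each
# sentence position), one per language. Real curves would come from
# model log-probs averaged over a corpus; these values are illustrative.
curves = {
    "pt": np.array([4.1, 3.2, 2.9, 2.8, 2.7]),
    "es": np.array([4.0, 3.2, 2.9, 2.8, 2.7]),
    "ja": np.array([4.5, 3.8, 3.5, 3.3, 3.2]),
    "ko": np.array([4.4, 3.7, 3.5, 3.4, 3.2]),
}
families = {"pt": "Romance", "es": "Romance", "ja": "Japonic", "ko": "Koreanic"}

def mean_pairwise_distance(pairs):
    # Mean absolute difference between two curves, averaged over pairs.
    return float(np.mean([np.mean(np.abs(curves[a] - curves[b]))
                          for a, b in pairs]))

langs = list(curves)
within, between = [], []
for i, a in enumerate(langs):
    for b in langs[i + 1:]:
        (within if families[a] == families[b] else between).append((a, b))

d_within = mean_pairwise_distance(within)
d_between = mean_pairwise_distance(between)
print(f"within={d_within:.4f} between={d_between:.4f} "
      f"ratio={d_between / d_within:.2f}")
```

A between/within ratio above 1 (1.51x for Qwen, 2.52x for Gemma in these experiments) indicates that curves are more similar within a family than across families.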
EXP-002
Byte-Level Mutual Information Decays as a Power Law Across 5 Languages
How does mutual information between bytes decay with distance in natural language, and is this structure universal across languages with different scripts and morphology?
- Mutual information between bytes decays as a power law I(d) ~ d^(-alpha) in all 5 languages tested (0 out of 5 exponenti…
- 82-96% of prediction gain comes from the first 8 bytes of context. Conditional entropy drops from ~5 bits (unigram) to ~…
#information-theory #byte-level #mutual-information #power-law
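The power-law claim in EXP-002 is typically tested by fitting I(d) ~ C * d^(-alpha) in log-log space, where a power law is linear while exponential decay is not. A sketch of that fit on synthetic MI values generated from a known exponent (the corpus-level MI estimation itself is not shown):

```python
import numpy as np

# Synthetic MI-by-distance values drawn from a known power law with
# small multiplicative noise; real values would come from byte-level
# MI estimates on text. alpha_true is an assumption for illustration.
rng = np.random.default_rng(0)
d = np.arange(1, 65)          # byte distances 1..64
alpha_true = 0.5
mi = 0.8 * d ** (-alpha_true) * np.exp(rng.normal(0.0, 0.02, d.size))

# Power law I(d) = C * d^(-alpha) is linear in log-log space:
# log I = log C - alpha * log d, so -slope estimates alpha.
slope, intercept = np.polyfit(np.log(d), np.log(mi), 1)
alpha_hat = -slope
print(f"estimated alpha = {alpha_hat:.3f}")
```

An exponential decay I(d) ~ exp(-d/L) would instead be linear in semilog space (log I vs d), which is one way to distinguish the two hypotheses per language, as the "0 out of 5 exponential" finding suggests was done.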