Rotary Position Embeddings (RoPE) changed how LLMs handle positional information by encoding position as a rotation of the query and key vectors rather than as an additive embedding. Because attention scores then depend only on relative offsets between tokens, RoPE-based models can be extended to longer sequences with little or no retraining, and it has become the standard in Llama, Gemini, and Claude. But it comes with tradeoffs in memory, implementation complexity, and edge cases.
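
To make the "rotation instead of addition" idea concrete, here is a minimal NumPy sketch of applying rotary embeddings to a batch of vectors. It assumes the half-split channel pairing and the common base of 10000; the function name `rope` and the shapes are illustrative, not any particular library's API.

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, dim).

    Channel pair (i, i + dim/2) at position p is rotated by the angle
    p * base**(-2i/dim). Because a dot product between two rotated
    vectors depends only on the difference of their rotation angles,
    attention scores end up depending only on relative position.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies and per-position angles.
    freqs = base ** (-np.arange(half) / half)         # (half,)
    angles = np.outer(np.arange(seq_len), freqs)      # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]                 # split channels into pairs
    # Standard 2D rotation applied to each channel pair.
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Usage: rotate query and key vectors before the attention dot product.
q = np.random.randn(8, 64)
q_rot = rope(q)
```

Note that unlike learned absolute embeddings, nothing here is added to the token representation: position enters only through the rotation, which is why the same function works unchanged at positions the model never saw during training.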