Operations
Latency
LLM latency is the time from API request to completed response. Frontier models typically take 2-10 seconds, longer for long outputs since tokens are generated sequentially. Latency dominates user experience in chat interfaces; it matters far less for batch workflows.
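The quickest way to get real numbers for your own stack is to time the call directly. A minimal sketch, assuming a `call_model(prompt)` helper that wraps whatever provider SDK is in use (the helper name is hypothetical):

```python
import time

def timed_call(call_model, prompt: str):
    """Measure end-to-end latency of a single LLM request.

    `call_model` is a hypothetical wrapper around the provider SDK
    that takes a prompt string and returns the completed response.
    """
    start = time.perf_counter()
    response = call_model(prompt)
    elapsed = time.perf_counter() - start
    print(f"latency: {elapsed:.2f}s")
    return response
```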
More detail
Strategies to reduce latency (sketches for (a), (c), and (d) follow this list):
(a) Stream the response so the user sees tokens as they are generated.
(b) Use smaller/faster models where quality allows (Llama 3.1 8B Instant via Groq responds in ~200ms vs ~3-5s for Claude Sonnet 4.5).
(c) Cache responses to common queries.
(d) Call multiple models in parallel and take the first response.
Aiprosol's chat widget streams tokens; agent workflows tolerate 5-10s per call.
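For (a), most provider SDKs expose a streaming mode that yields tokens as they are generated, so the first words reach the user well before the full completion finishes. A minimal sketch with the OpenAI Python SDK (the model name is illustrative, not what the widget necessarily calls):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# stream=True returns an iterator of chunks instead of one response,
# so tokens can be displayed as they arrive.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain LLM latency briefly."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```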
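For (c), even a process-local cache keyed on the normalized prompt skips repeat calls for common queries. A sketch, reusing the same hypothetical `call_model` wrapper as above:

```python
import hashlib

_cache: dict[str, str] = {}

def cached_call(call_model, prompt: str) -> str:
    """Return the cached answer for a repeat prompt, calling the model at most once."""
    # Normalize before hashing so trivially different phrasings share an entry.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]
```

In production this would more likely be a shared store like Redis with a TTL, but the shape is the same.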
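For (d), racing is a few lines with asyncio: fire both requests, take whichever finishes first, cancel the loser. A sketch assuming async `call_fast` / `call_frontier` wrappers (hypothetical names):

```python
import asyncio

async def race(call_fast, call_frontier, prompt: str) -> str:
    """Return the first response to arrive; cancel the slower call."""
    tasks = [
        asyncio.create_task(call_fast(prompt)),
        asyncio.create_task(call_frontier(prompt)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # don't pay for tokens nobody will read
    return done.pop().result()
```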
