Benchmark Overview
Average throughput: 12.00 tokens/sec
Average time to first token: 2080.00 ms
Benchmark runs: 25
Last updated: Feb 8, 2026, 12:01 PM
Key Insights
gpt-5.2-codex streams at 12.00 tokens/second on average across the last 25 benchmark runs.
Throughput varied with a standard deviation of 10.59 tokens/second (88.3% coefficient of variation), indicating highly variable behavior across benchmark runs.
Average time to first token is 2080.00 ms, which is high; consider alternatives for latency-sensitive workloads.
The latest measurement run completed on Feb 8, 2026, 12:01 PM; the figures above are based on 25 total samples.
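These aggregate figures can be reproduced from the raw per-run samples with standard descriptive statistics. Below is a minimal sketch, assuming each run is stored as a (tokens_per_second, ttft_ms) pair; the sample values and field layout are illustrative, not taken from the benchmark's actual data store.

```python
from statistics import mean, stdev

# Illustrative raw samples: (tokens_per_second, ttft_ms) per benchmark run.
# The real window holds the 25 most recent runs.
runs = [
    (6.61, 2410.0),
    (17.20, 1750.0),
    (11.85, 2085.0),
    # ... remaining runs in the 25-sample window
]

toks = [tps for tps, _ in runs]
ttfts = [ttft for _, ttft in runs]

avg_toks = mean(toks)                 # reported above as 12.00 tok/s
spread = stdev(toks)                  # reported above as 10.59 tok/s
cv_pct = 100.0 * spread / avg_toks    # reported above as 88.3%
avg_ttft = mean(ttfts)                # reported above as 2080.00 ms

print(f"avg={avg_toks:.2f} tok/s  stdev={spread:.2f}  CV={cv_pct:.1f}%  TTFT={avg_ttft:.2f} ms")
```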
Benchmark Samples
| Provider | Model | Avg Toks/Sec | Min Toks/Sec | Max Toks/Sec | Avg TTFT (ms) |
|---|---|---|---|---|---|
| openai | gpt-5.2-codex | 12.00 | 6.61 | 17.20 | 2080.00 |
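Each table row is an aggregate over that provider and model's individual samples. Here is a sketch of the grouping step, assuming flat sample records with hypothetical key names:

```python
from collections import defaultdict

# Hypothetical flat sample records; key names are illustrative only.
samples = [
    {"provider": "openai", "model": "gpt-5.2-codex", "toks_per_sec": 6.61, "ttft_ms": 2410.0},
    {"provider": "openai", "model": "gpt-5.2-codex", "toks_per_sec": 17.20, "ttft_ms": 1750.0},
    # ... more samples
]

# Group samples by (provider, model), then emit one markdown row per group.
groups = defaultdict(list)
for s in samples:
    groups[(s["provider"], s["model"])].append(s)

for (provider, model), rows in groups.items():
    toks = [r["toks_per_sec"] for r in rows]
    ttfts = [r["ttft_ms"] for r in rows]
    print(
        f"| {provider} | {model} | {sum(toks) / len(toks):.2f} "
        f"| {min(toks):.2f} | {max(toks):.2f} | {sum(ttfts) / len(ttfts):.2f} |"
    )
```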
Frequently Asked Questions
How fast is gpt-5.2-codex right now?
The latest rolling average throughput is 12.00 tokens per second, with an average time to first token of 2080.00 ms across the 25 most recent runs.
How often do the benchmarks update?
Benchmarks refresh automatically whenever the monitoring cron runs. The most recent run completed on Feb 8, 2026, 12:01 PM.
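The refresh described above is driven by a scheduled job. The document does not specify the schedule or the entry point, so the crontab line below is purely illustrative (a hypothetical benchmark_runner.py run hourly over a 25-sample window):

```
# Hypothetical crontab entry: rerun the benchmark suite at the top of every hour.
0 * * * * /usr/bin/python3 /opt/benchmarks/benchmark_runner.py --window 25
```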