Benchmark Overview
- Average throughput: 86.00 tokens/sec
- Average time to first token (TTFT): 420.00 ms
- Benchmark samples: 238
- Last updated: Feb 8, 2026, 09:02 PM
Key Insights
gpt-3.5-turbo streams at 86.00 tokens/second on average across the last 238 benchmark runs.
Throughput varied by as much as 100.60 tokens/second between the slowest (24.40) and fastest (125.00) runs, a spread equal to 117.0% of the mean, indicating highly variable behavior across benchmark runs (see the sketch after these insights).
Average time to first token is 420.00 ms (excellent latency), suitable for latency-sensitive workloads.
The latest measurement completed on Feb 8, 2026, at 09:02 PM; statistics are based on 238 total samples.
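The spread figures above can be reproduced directly from the raw per-run samples. The snippet below is a minimal sketch assuming a hypothetical list of throughput readings; only the arithmetic (spread = max − min, variation = spread ÷ mean) mirrors the report.

```python
# Minimal sketch: deriving the headline spread figures from raw per-run
# throughput samples. The sample values are hypothetical; only the arithmetic
# mirrors the report (spread = max - min, variation = spread / mean).
import statistics

throughput_samples = [24.4, 92.1, 110.5, 125.0, 78.3]  # hypothetical tokens/sec per run

mean_tps = statistics.fmean(throughput_samples)
spread = max(throughput_samples) - min(throughput_samples)
variation_pct = 100.0 * spread / mean_tps

print(f"avg: {mean_tps:.2f} tok/s, spread: {spread:.2f} tok/s, variation: {variation_pct:.1f}%")
```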
Performance Distribution
Distribution of throughput measurements showing performance consistency across benchmark runs.
Performance Over Time
Historical throughput trend for gpt-3.5-turbo, showing how performance has changed over the benchmarking period.
Benchmark Samples
| Provider | Model | Avg Tokens/Sec | Min Tokens/Sec | Max Tokens/Sec | Avg TTFT (ms) |
|---|---|---|---|---|---|
| openai | gpt-3.5-turbo | 86.00 | 24.40 | 125.00 | 420.00 |
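For context, both table metrics can be approximated with a single streaming request. The sketch below is not the harness behind this report; it assumes the official `openai` Python client (v1+) and counts streamed chunks as a rough proxy for tokens.

```python
# Minimal sketch: measuring time-to-first-token (TTFT) and streaming throughput
# for one request. Not the harness behind this report; chunk count is used as a
# rough proxy for token count, and at least one content chunk is assumed.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short paragraph about benchmarks."}],
    stream=True,
)

first_token_at = None
chunks = 0
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        chunks += 1
end = time.perf_counter()

ttft_ms = (first_token_at - start) * 1000
tokens_per_sec = chunks / (end - first_token_at)  # approximate: chunks ~ tokens
print(f"TTFT: {ttft_ms:.0f} ms, throughput: {tokens_per_sec:.1f} tok/s (approx)")
```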
Frequently Asked Questions
What is the latest measured throughput for gpt-3.5-turbo?
The latest rolling average throughput is 86.00 tokens per second, with an average time to first token of 420.00 ms, across the 238 most recent runs.
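A minimal sketch of how such a rolling average can be maintained is shown below; the window size and sample values are illustrative, not the monitor's actual configuration.

```python
# Minimal sketch: a fixed-window rolling average of throughput samples.
# Window size and sample values are illustrative, not the monitor's configuration.
from collections import deque

class RollingAverage:
    def __init__(self, window: int = 238):
        self.samples = deque(maxlen=window)  # oldest samples drop out automatically

    def add(self, tokens_per_sec: float) -> float:
        self.samples.append(tokens_per_sec)
        return sum(self.samples) / len(self.samples)

avg = RollingAverage(window=238)
for sample in (81.2, 90.5, 86.3):  # hypothetical per-run throughput readings
    latest = avg.add(sample)
print(f"rolling average: {latest:.2f} tok/s")
```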
How often are the benchmarks updated?
Benchmarks refresh automatically whenever the monitoring cron job runs; the most recent run completed on Feb 8, 2026, at 09:02 PM.