Provider Snapshot

- Models tracked: 5
- Avg throughput: 213.80 tok/s
- Avg time to first token: 562.00 ms
- Last updated: Feb 8, 2026
Key Takeaways
5 cerebras models are actively benchmarked with 1130 total measurements across 593 benchmark runs.
qwen-3-32b leads the fleet at 253.00 tok/s, while the slowest model, qwen-3-235b-instruct, delivers 163.00 tok/s.
Average throughput spans a 55.2% range across the cerebras lineup (fastest relative to slowest), reflecting different model sizes and optimization targets.
Avg time to first token across the fleet is 562.00 ms, showing good responsiveness for interactive applications.
Relative to the fleet mean, throughput variation is moderate (13.6% coefficient of variation), suggesting broadly consistent underlying infrastructure.
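The fleet-level figures above can be reproduced from the per-model averages in the tables below. A minimal sketch; the model names and throughput values come from this page, while the exact spread formula (range relative to the slowest model) is an assumption inferred from the published 55.2% figure:

```python
# Per-model average throughput in tok/s, taken from the tables on this page.
speeds = {
    "qwen-3-32b": 253.0,
    "llama-3.1-8b": 225.0,
    "llama-3.3-70b": 215.0,
    "gpt-oss-120b": 213.0,
    "qwen-3-235b-instruct": 163.0,
}

vals = list(speeds.values())
mean = sum(vals) / len(vals)                 # fleet average throughput
spread = (max(vals) - min(vals)) / min(vals)  # range relative to slowest (assumed formula)
std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5  # population std dev
cv = std / mean                              # coefficient of variation

print(f"avg = {mean:.2f} tok/s")  # 213.80
print(f"spread = {spread:.1%}")   # 55.2%
print(f"cv = {cv:.1%}")           # 13.6%
```

With these inputs the three printed values match the snapshot and takeaways exactly, which confirms the summary numbers are simple aggregates of the table rows.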
Fastest Models
| Provider | Model | Avg Tok/s | Min (tok/s) | Max (tok/s) | Avg TTFT (ms) |
|---|---|---|---|---|---|
| cerebras | qwen-3-32b | 253.00 | 4.77 | 417.00 | 370.00 |
| cerebras | llama-3.1-8b | 225.00 | 6.52 | 365.00 | 530.00 |
| cerebras | llama-3.3-70b | 215.00 | 14.60 | 338.00 | 370.00 |
| cerebras | gpt-oss-120b | 213.00 | 4.60 | 346.00 | 750.00 |
| cerebras | qwen-3-235b-instruct | 163.00 | 2.74 | 264.00 | 790.00 |
All Models
Complete list of all cerebras models tracked in the benchmark system.
| Provider | Model | Avg Tok/s | Min (tok/s) | Max (tok/s) | Avg TTFT (ms) |
|---|---|---|---|---|---|
| cerebras | qwen-3-235b-instruct | 163.00 | 2.74 | 264.00 | 790.00 |
| cerebras | qwen-3-32b | 253.00 | 4.77 | 417.00 | 370.00 |
| cerebras | gpt-oss-120b | 213.00 | 4.60 | 346.00 | 750.00 |
| cerebras | llama-3.3-70b | 215.00 | 14.60 | 338.00 | 370.00 |
| cerebras | llama-3.1-8b | 225.00 | 6.52 | 365.00 | 530.00 |
Frequently Asked Questions

Which cerebras model has the highest throughput?
Based on recent tests, qwen-3-32b shows the highest average throughput among tracked cerebras models.

How much data does this summary cover?
This provider summary aggregates 1,130 individual prompts measured across 593 monitoring runs over the past month.