Provider Snapshot
Models tracked: 2 · Fleet avg throughput: 183.50 tok/s · Fleet avg TTFT: 1365.00 ms · Last updated: Apr 3, 2026
Key Takeaways
Two Cerebras models are actively benchmarked, with 292 total measurements collected across 268 benchmark runs.
llama-3.1-8b leads the fleet at 190.00 tokens/second, while gpt-oss-120b delivers 177.00 tok/s.
Average throughput varies by only 7.3% across the Cerebras lineup, a narrow spread for a two-model fleet.
Average time to first token across the fleet is 1365.00 ms, indicating moderate responsiveness for interactive applications.
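The fleet-level figures above are simple aggregates of the per-model numbers in the tables below. A minimal sketch of how they can be recomputed (values copied from this page, not pulled live; the dictionary layout is illustrative, not the benchmark system's actual data format):

```python
# Per-model averages as reported in the tables on this page.
models = {
    "llama-3.1-8b": {"avg_tps": 190.00, "avg_ttft_ms": 1060.00},
    "gpt-oss-120b": {"avg_tps": 177.00, "avg_ttft_ms": 1670.00},
}

tps = [m["avg_tps"] for m in models.values()]
ttft = [m["avg_ttft_ms"] for m in models.values()]

# Fleet averages: unweighted means across models.
fleet_avg_tps = sum(tps) / len(tps)        # 183.50 tok/s
fleet_avg_ttft = sum(ttft) / len(ttft)     # 1365.00 ms

# Spread: difference between fastest and slowest, relative to the slowest.
spread_pct = (max(tps) - min(tps)) / min(tps) * 100  # ~7.3%

print(f"avg throughput: {fleet_avg_tps:.2f} tok/s")
print(f"avg TTFT:       {fleet_avg_ttft:.2f} ms")
print(f"spread:         {spread_pct:.1f}%")
```

Note these are unweighted per-model means; a run-weighted average could differ if the two models were not benchmarked equally often.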
Fastest Models
| Provider | Model | Avg Tok/s | Min (tok/s) | Max (tok/s) | Avg TTFT (ms) |
|---|---|---|---|---|---|
| cerebras | llama-3.1-8b | 190.00 | 1.54 | 353.00 | 1060.00 |
| cerebras | gpt-oss-120b | 177.00 | 1.51 | 348.00 | 1670.00 |
All Models
Complete list of all Cerebras models tracked in the benchmark system.
| Provider | Model | Avg Tok/s | Min (tok/s) | Max (tok/s) | Avg TTFT (ms) |
|---|---|---|---|---|---|
| cerebras | gpt-oss-120b | 177.00 | 1.51 | 348.00 | 1670.00 |
| cerebras | llama-3.1-8b | 190.00 | 1.54 | 353.00 | 1060.00 |
Frequently Asked Questions

**Which Cerebras model is fastest?** Based on recent tests, llama-3.1-8b shows the highest average throughput among tracked Cerebras models.

**How is this data collected?** This provider summary aggregates 292 individual prompt measurements across 268 monitoring runs over the past month.