Provider Snapshot
Models tracked: 11
Fleet avg throughput: 68.17 tok/s
Fleet avg time to first token: 890.00 ms
As of: Feb 8, 2026
Key Takeaways
11 together models are actively benchmarked with 2681 total measurements across 2460 benchmark runs.
llama-3.1-8b leads the fleet at 155.00 tok/s, roughly three times the 51.90 tok/s of llama-3.3-70b.
Average throughput spans 21.40 to 155.00 tok/s across the together lineup, a 198.7% spread that reflects models tuned for very different size/speed trade-offs.
Avg time to first token across the fleet is 890.00 ms, showing good responsiveness for interactive applications.
Throughput across the fleet has a coefficient of variation of 55.6%, reflecting the wide range of model sizes and architectures on offer.
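The fleet-level figures above can be reproduced from the per-model averages in the All Models table. A minimal sketch, assuming the snapshot average and the variation coefficient are unweighted statistics over the 11 per-model throughput averages (using the population standard deviation):

```python
import math

# Per-model average throughput (tok/s) from the All Models table.
avg_tok_s = {
    "llama-3.1-8b": 155.00, "mistral-7b": 109.00, "qwen-2.5-7b": 100.00,
    "llama-3.2-3b": 72.90, "llama-3.1-70b": 72.70, "llama-3.3-70b": 51.90,
    "qwen-2.5-72b": 50.90, "mixtral-8x7b": 49.10, "deepseek-r1": 38.00,
    "deepseek-v3": 29.00, "llama-3.1-405b": 21.40,
}

values = list(avg_tok_s.values())
mean = sum(values) / len(values)                           # fleet average
var = sum((v - mean) ** 2 for v in values) / len(values)   # population variance
cv = math.sqrt(var) / mean                                 # coefficient of variation

print(f"fleet avg: {mean:.2f} tok/s")       # 68.17 tok/s, matching the snapshot
print(f"variation coefficient: {cv:.1%}")   # 55.6%, matching the takeaway
```

Under these assumptions the computed values match the reported 68.17 tok/s fleet average and 55.6% variation coefficient exactly.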
Fastest Models
| Provider | Model | Avg Tok/s | Min Tok/s | Max Tok/s | Avg TTFT (ms) |
|---|---|---|---|---|---|
| together | llama-3.1-8b | 155.00 | 2.74 | 230.00 | 580.00 |
| together | mistral-7b | 109.00 | 3.68 | 166.00 | 410.00 |
| together | qwen-2.5-7b | 100.00 | 3.60 | 146.00 | 270.00 |
| together | llama-3.2-3b | 72.90 | 5.00 | 145.00 | 1160.00 |
| together | llama-3.1-70b | 72.70 | 4.09 | 147.00 | 490.00 |
| together | llama-3.3-70b | 51.90 | 2.70 | 136.00 | 1480.00 |
All Models
Complete list of all together models tracked in the benchmark system.
| Provider | Model | Avg Tok/s | Min Tok/s | Max Tok/s | Avg TTFT (ms) |
|---|---|---|---|---|---|
| together | llama-3.1-8b | 155.00 | 2.74 | 230.00 | 580.00 |
| together | mistral-7b | 109.00 | 3.68 | 166.00 | 410.00 |
| together | qwen-2.5-7b | 100.00 | 3.60 | 146.00 | 270.00 |
| together | llama-3.2-3b | 72.90 | 5.00 | 145.00 | 1160.00 |
| together | llama-3.1-70b | 72.70 | 4.09 | 147.00 | 490.00 |
| together | llama-3.3-70b | 51.90 | 2.70 | 136.00 | 1480.00 |
| together | qwen-2.5-72b | 50.90 | 3.28 | 70.40 | 490.00 |
| together | mixtral-8x7b | 49.10 | 6.55 | 111.00 | 330.00 |
| together | deepseek-r1 | 38.00 | 1.09 | 67.50 | 1770.00 |
| together | deepseek-v3 | 29.00 | 1.17 | 66.20 | 1390.00 |
| together | llama-3.1-405b | 21.40 | 1.78 | 29.60 | 1420.00 |
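The Fastest Models shortlist is simply the full model list ranked by average throughput. A minimal sketch of that ranking, using hypothetical (model, avg tok/s) pairs taken from the All Models table:

```python
# (model, avg tok/s) pairs from the All Models table.
models = [
    ("llama-3.3-70b", 51.90), ("deepseek-r1", 38.00), ("deepseek-v3", 29.00),
    ("mistral-7b", 109.00), ("qwen-2.5-72b", 50.90), ("qwen-2.5-7b", 100.00),
    ("mixtral-8x7b", 49.10), ("llama-3.2-3b", 72.90), ("llama-3.1-405b", 21.40),
    ("llama-3.1-70b", 72.70), ("llama-3.1-8b", 155.00),
]

# Rank descending by average throughput and keep the top six,
# which matches the Fastest Models table above.
fastest = sorted(models, key=lambda m: m[1], reverse=True)[:6]
for name, tps in fastest:
    print(f"{name}: {tps:.2f} tok/s")
```

Sorting the full table this way yields llama-3.1-8b first and llama-3.3-70b sixth, the same ordering shown in the Fastest Models section.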
Frequently Asked Questions
Which together model is fastest?
Based on recent tests, llama-3.1-8b shows the highest average throughput among tracked together models.
How much data does this summary cover?
This provider summary aggregates 2681 individual prompt measurements across 2460 monitoring runs over the past month.