groq Provider Benchmarks

Comprehensive performance summary covering 6 models.

This provider hub highlights throughput and latency trends across every groq model monitored by LLM Benchmarks. Use it to compare hosting tiers, track regressions, and discover the fastest variants in the catalogue.

Provider Snapshot

  • Models Tracked: 6
  • Avg Tokens / Second: 204.50
  • Avg Time to First Token (ms): 235.00
  • Last Updated: Feb 8, 2026

Key Takeaways

  • 6 groq models are actively benchmarked with 1143 total measurements across 626 benchmark runs.

  • llama-3.1-8b leads the fleet at 281.00 tokens/second, while kimi-k2 is the slowest tracked model at 140.00 tok/s.

  • Throughput varies by 100.7% between the fastest and slowest groq models, indicating diverse optimization strategies for different use cases.

  • Avg time to first token across the fleet is 235.00 ms, showing excellent responsiveness for interactive applications.

  • The groq model fleet shows consistent performance characteristics (a coefficient of variation of 22.7% in throughput), indicating standardized infrastructure; the fleet-level figures are reproduced in the sketch after the Fastest Models table below.

Fastest Models

Provider | Model            | Avg Toks/Sec | Min (tok/s) | Max (tok/s) | Avg TTF (ms)
groq     | llama-3.1-8b     | 281.00       | 95.20       | 447.00      | 130.00
groq     | qwen-3-32b       | 240.00       | 46.20       | 391.00      | 150.00
groq     | llama-3.3-70b    | 206.00       | 79.50       | 280.00      | 120.00
groq     | llama-4-scout    | 195.00       | 23.10       | 316.00      | 250.00
groq     | llama-4-maverick | 165.00       | 19.70       | 310.00      | 510.00
groq     | kimi-k2          | 140.00       | 21.90       | 203.00      | 250.00
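
The fleet-level figures quoted in the snapshot and key takeaways follow directly from the per-model averages above. Below is a minimal Python sketch, assuming a simple mean across models, a population standard deviation for the coefficient of variation, and a fastest-versus-slowest spread measured relative to the slowest model; the exact aggregation the benchmark system uses is not specified on this page.

```python
from statistics import mean, pstdev

# (model, avg tokens/sec, avg time-to-first-token in ms), copied from the Fastest Models table
models = [
    ("llama-3.1-8b",     281.00, 130.00),
    ("qwen-3-32b",       240.00, 150.00),
    ("llama-3.3-70b",    206.00, 120.00),
    ("llama-4-scout",    195.00, 250.00),
    ("llama-4-maverick", 165.00, 510.00),
    ("kimi-k2",          140.00, 250.00),
]

tps  = [t for _, t, _ in models]
ttft = [f for _, _, f in models]

avg_tps  = mean(tps)                         # 204.50 tok/s (Provider Snapshot)
avg_ttft = mean(ttft)                        # 235.00 ms   (Provider Snapshot)
spread   = (max(tps) - min(tps)) / min(tps)  # (281 - 140) / 140 ≈ 100.7%
cv       = pstdev(tps) / avg_tps             # ≈ 22.7% coefficient of variation

print(f"Avg tokens/sec:            {avg_tps:.2f}")
print(f"Avg time to first token:   {avg_ttft:.2f} ms")
print(f"Fastest vs slowest spread: {spread:.1%}")
print(f"Coefficient of variation:  {cv:.1%}")
```

Running the sketch reproduces the published values (204.50 tok/s, 235.00 ms, 100.7%, 22.7%) under those assumptions.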

All Models

Complete list of all groq models tracked in the benchmark system. Click any model name to view detailed performance data.

Provider | Model            | Avg Toks/Sec | Min (tok/s) | Max (tok/s) | Avg TTF (ms)
groq     | qwen-3-32b       | 240.00       | 46.20       | 391.00      | 150.00
groq     | llama-4-maverick | 165.00       | 19.70       | 310.00      | 510.00
groq     | llama-3.3-70b    | 206.00       | 79.50       | 280.00      | 120.00
groq     | kimi-k2          | 140.00       | 21.90       | 203.00      | 250.00
groq     | llama-4-scout    | 195.00       | 23.10       | 316.00      | 250.00
groq     | llama-3.1-8b     | 281.00       | 95.20       | 447.00      | 130.00

Frequently Asked Questions

Which groq model is fastest?

Based on recent tests, llama-3.1-8b shows the highest average throughput among tracked groq models.

How much data backs this summary?

This provider summary aggregates 1143 individual prompts measured across 626 monitoring runs over the past month.