About Groq
Groq develops inference-optimized AI accelerators, called Language Processing Units (LPUs), and provides hardware-backed inference services and cloud offerings for high-throughput, low-latency LLM inference in production.
Key Features
- Custom inference hardware, the Language Processing Unit (LPU), designed for LLM workloads.
- Hardware-backed inference services and cloud deployment options for production inference (see the sketch after this list).
- Engineering and tooling to adapt and optimize models for Groq’s execution environment.
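Groq's cloud offering exposes an HTTP chat-completions API that is compatible with the OpenAI client libraries. Below is a minimal sketch of calling it from Python; the model name and the GROQ_API_KEY environment variable are assumptions, so check Groq's current documentation for available models and authentication details.

```python
# Minimal sketch: calling Groq's cloud inference API through its
# OpenAI-compatible endpoint. Assumes the `openai` package (v1+) is
# installed and GROQ_API_KEY is set in the environment.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed model name; check Groq's model list
    messages=[
        {"role": "user", "content": "Summarize what an LPU is in one sentence."}
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI API shape, existing OpenAI-based tooling can typically be pointed at Groq by swapping the base URL and API key.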
Use Cases & Best For
Production applications that need high-throughput, low-latency LLM inference, such as real-time chat, interactive assistants, and latency-sensitive pipelines.