AMD MI300X vs. Nvidia H100 SXM: Performance Comparison on Mixtral 8x7B Inference
Runpod benchmarks AMD’s MI300X against Nvidia’s H100 SXM using Mistral’s Mixtral 8x7B model. The results highlight performance and cost trade-offs across batch sizes, showing where the MI300X’s larger VRAM shines.
Hardware & Trends

Run Larger LLMs on Runpod Serverless Than Ever Before – Llama-3 70B (and beyond!)
Runpod Serverless now supports multi-GPU workers, enabling full-precision deployment of large models like Llama-3 70B. With optimized vLLM support, FlashBoot, and network volumes, it has never been easier to run massive LLMs at scale.
Product Updates