Brendan McKeag

Introducing the A40 GPUs: Revolutionize Machine Learning with Unmatched Efficiency

Discover how NVIDIA A40 GPUs on Runpod offer unmatched value for machine learning—high performance, low cost, and excellent availability for fine-tuning LLMs.
Read article
Hardware & Trends

New Navigational Changes To Runpod UI

The Runpod dashboard just got a streamlined upgrade. Here's a quick look at what’s moved, what’s merged, and how new UI changes will make managing your pods and templates easier.
Read article
Product Updates

Use alpha_value To Blast Through Context Limits in LLaMa-2 Models

Learn how to extend the context length of LLaMa-2 models beyond their defaults using alpha_value and NTK-aware RoPE scaling—all without sacrificing coherency.
Read article
AI Workloads
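The alpha_value trick referenced above can be sketched with a few lines of arithmetic. This is a minimal illustration of NTK-aware RoPE scaling as commonly implemented in tools like ExLlama and text-generation-webui; the head dimension of 128 and RoPE base of 10000 are the usual LLaMa-2 defaults, assumed here rather than taken from the article.

```python
# Minimal sketch of NTK-aware RoPE scaling (the mechanism behind
# alpha_value). Rescaling the RoPE frequency base stretches the
# positional encoding so the model tolerates longer contexts
# without retraining.

def ntk_scaled_base(alpha: float, dim: int = 128, base: float = 10000.0) -> float:
    """Return the rescaled RoPE base for a given alpha_value.

    dim is the per-head embedding dimension (128 for LLaMa-2 by
    default); base is the standard RoPE frequency base.
    """
    return base * alpha ** (dim / (dim - 2))

# alpha_value = 1.0 leaves the base untouched; larger values grow
# it superlinearly, trading a little coherency for more context.
for alpha in (1.0, 2.0, 4.0):
    print(f"alpha={alpha}: base ≈ {ntk_scaled_base(alpha):.0f}")
```

As a rough rule of thumb, each doubling of alpha_value extends the usable context window, at the cost of some quality at the original lengths.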

Save the Date, October 11 at 2:00 PM EST: Fireside Chat With Runpod CEO Zhen Lu and Data Science Dojo CEO Raja Iqbal on GPU-Powered AI Transformation

Join Runpod CEO Zhen Lu and Data Science Dojo CEO Raja Iqbal on October 11 for a live fireside chat about GPU-powered AI transformation and the future of scalable machine learning infrastructure.
Read article
Hardware & Trends

Runpod Partners With RandomSeed to Provide Accessible, User-Friendly Stable Diffusion API Access

Runpod partners with RandomSeed to power easy-to-use API access for Stable Diffusion through AUTOMATIC1111, making generative art more accessible to developers.
Read article
Product Updates

Runpod Partners with Data Science Dojo To Provide Compute For LLM Bootcamps

Runpod has partnered with Data Science Dojo to power their Large Language Model bootcamps, providing scalable GPU infrastructure to support hands-on learning in generative AI, embeddings, orchestration frameworks, and deployment.
Read article
Product Updates

What You'll Need to Run Falcon 180B In a Pod

Falcon-180B is the largest open-source LLM to date, requiring 400GB of VRAM to run unquantized. This post explores how to deploy it on Runpod with A100s, L40s, and quantized alternatives like GGUF for more accessible use.
Read article
AI Infrastructure
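The ~400 GB figure for Falcon-180B follows from simple parameter arithmetic, sketched below. The 4-bit figure and the overhead multiplier for activations and KV cache are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope VRAM estimate for an LLM's weights:
# billions of parameters * bytes per parameter = gigabytes.

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory footprint of model weights, in GB."""
    return params_billions * bytes_per_param

fp16 = weights_gb(180, 2)    # Falcon-180B in fp16: 360 GB of weights alone
q4 = weights_gb(180, 0.5)    # ~90 GB with 4-bit quantization (e.g. GGUF Q4)

# Runtime overhead (activations, KV cache) pushes the unquantized
# requirement toward the ~400 GB cited in the post.
print(f"fp16 weights: {fp16:.0f} GB")
print(f"4-bit weights: {q4:.0f} GB")
```

This is why unquantized Falcon-180B needs multiple 80 GB A100s, while a 4-bit GGUF build fits in a far smaller pod.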

You’ve unlocked a referral bonus!

Sign up today and you’ll get a random credit bonus between $5 and $500 when you spend your first $10 on Runpod.