Brendan McKeag

Runpod Achieves SOC 2 Type II Certification: Continuing Our Compliance Journey

Runpod has officially achieved SOC 2 Type II certification, validating that its enterprise-grade security controls not only meet strict design standards but also operate effectively over time. The milestone underscores Runpod's ongoing commitment to protecting customer data and maintaining trusted, compliant AI infrastructure for enterprises and developers alike.
Read article
Product Updates

Setting up Slurm on Runpod Instant Clusters: A Technical Guide

Slurm on Runpod Instant Clusters makes it simple to scale distributed AI and scientific computing across multiple GPU nodes. With pre-configured setup, advanced job scheduling, and built-in monitoring, users can efficiently manage training, batch processing, and HPC workloads while testing connectivity, CUDA availability, and multi-node PyTorch performance.
Read article
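As a taste of what the guide covers, the kind of per-node connectivity and CUDA sanity check it describes can be sketched as a small Slurm batch script. This is a minimal illustration only; the node count, GPU count, and commands below are placeholder assumptions, not the guide's exact configuration:

```shell
#!/bin/bash
#SBATCH --job-name=gpu-sanity-check
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=1

# On every allocated node: print the hostname, list visible GPUs,
# and confirm PyTorch can see CUDA.
srun bash -c 'hostname; nvidia-smi -L; python -c "import torch; print(torch.cuda.is_available())"'
```

Submitted with `sbatch gpu-sanity-check.sh`, each node should report `True` from the PyTorch check before moving on to multi-node training jobs.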
AI Infrastructure

DeepSeek V3.1: A Technical Analysis of Key Changes from V3-0324

DeepSeek V3.1 introduces a breakthrough hybrid reasoning architecture that dynamically toggles between fast inference and deep chain-of-thought logic using token-controlled templates, enhancing performance, flexibility, and hardware efficiency over its predecessor V3-0324. This update positions V3.1 as a powerful foundation for real-world AI applications, with benchmark gains across math, code, and agent tasks, now fully deployable on Runpod Instant Clusters.
Read article
AI Workloads

Wan 2.2 Releases With a Plethora of New Features

Deploy Wan 2.2 on Runpod to unlock next-gen video generation with Mixture-of-Experts architecture, TI2V-5B support, and 83% more training data—run text-to-video and image-to-video models at scale using A100–H200 GPUs and customizable ComfyUI workflows.
Read article
AI Infrastructure

Deep Cogito Releases Suite of LLMs Trained with Iterative Policy Improvement

Deploy Deep Cogito’s Cogito v2 models on Runpod to experience frontier-level reasoning at lower inference costs—choose from 70B to 671B parameter variants and leverage Runpod’s optimized templates and Instant Clusters for scalable, efficient AI deployment.
Read article
AI Infrastructure

Comparing the 5090 to the 4090 and B200: How Does It Stack Up?

Benchmark Qwen2.5-Coder-7B-Instruct across NVIDIA’s B200, RTX 5090, and RTX 4090 to identify the optimal GPU for LLM inference—compare token throughput, cost per token, and memory efficiency to match your workload with the right performance tier.
Read article
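The cost-per-token comparison in that benchmark reduces to simple arithmetic over hourly rate and sustained throughput. A minimal sketch follows; the rates and throughput figures are illustrative placeholders, not the article's measured numbers:

```python
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Convert an hourly GPU rate and sustained throughput into $ per 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Placeholder numbers for illustration only -- see the article for measured values.
for gpu, rate, tps in [("RTX 4090", 0.69, 80.0), ("RTX 5090", 0.89, 120.0), ("B200", 5.99, 400.0)]:
    print(f"{gpu}: ${cost_per_million_tokens(rate, tps):.2f} per 1M tokens")
```

The takeaway from this kind of calculation is that a faster, pricier GPU can still win on cost per token if its throughput gain outpaces its rate premium.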
Hardware & Trends

How to Run MoonshotAI’s Kimi-K2-Instruct on Runpod Instant Clusters

Run MoonshotAI’s Kimi-K2-Instruct on Runpod Instant Clusters using H200 SXM GPUs and a 2TB shared network volume for seamless multi-node training. This guide shows how to deploy with PyTorch templates, optimize Docker environments, and accelerate LLM inference with scalable, low-latency infrastructure.
Read article
AI Workloads
