
Running a 1-Trillion-Parameter AI Model in a Single Pod: A Guide to MoonshotAI’s Kimi-K2 on Runpod
Moonshot AI’s Kimi-K2-Instruct is a trillion-parameter, mixture-of-experts open-source LLM optimized for autonomous agentic tasks. With 32 billion active parameters and Muon-trained performance that rivals proprietary models (89.5% MMLU, 97.4% MATH-500, 65.8% pass@1), it can run inference on as little as 1 TB of VRAM using 8-bit quantization.
AI Workloads
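Once a Kimi-K2 pod is up, most serving stacks (such as vLLM or SGLang) expose an OpenAI-compatible API, so a standard client can talk to it. The sketch below assumes such an endpoint; the base URL, API key, and exact model identifier are placeholders, not values from this post.

```python
# Minimal sketch: querying a Kimi-K2-Instruct deployment through an
# OpenAI-compatible endpoint (e.g. one exposed by vLLM or SGLang on a pod).
# The base_url, api_key, and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-pod-endpoint>/v1",  # replace with your pod's inference URL
    api_key="EMPTY",                            # many self-hosted servers accept any token
)

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a mixture-of-experts model is."},
    ],
    temperature=0.6,
    max_tokens=256,
)

print(response.choices[0].message.content)
```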

Streamline Your AI Workflows with RunPod’s New S3-Compatible API
RunPod’s new S3-compatible API lets you manage files on your network volumes without launching a Pod. With support for standard tools like the AWS CLI and Boto3, you can upload, sync, and automate data flows directly from your terminal — simplifying storage operations and saving on compute costs. Whether you’re prepping datasets or archiving model outputs, this update makes your AI workflows faster, cleaner, and more flexible.
Product Updates
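Because the API speaks standard S3, ordinary Boto3 calls work against it. The sketch below shows one way that could look; the endpoint URL, credentials, region, and the use of the network-volume ID as the bucket name are assumptions to be replaced with the values and conventions from your Runpod console and docs.

```python
# Minimal sketch: uploading a file to a Runpod network volume through the
# S3-compatible API with Boto3. Endpoint, keys, and region are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<runpod-s3-endpoint>",  # S3-compatible endpoint for your datacenter
    aws_access_key_id="<runpod-access-key>",
    aws_secret_access_key="<runpod-secret-key>",
    region_name="<datacenter-region>",
)

# Assumption: the network volume is addressed like a bucket, with its ID as the bucket name.
volume_id = "<network-volume-id>"

# Upload a local dataset shard, then list what is currently on the volume.
s3.upload_file("train-00001.parquet", volume_id, "datasets/train-00001.parquet")

for obj in s3.list_objects_v2(Bucket=volume_id).get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The same client also supports `download_file`, `delete_object`, and the other standard S3 operations, which is what makes scripted syncing and archiving possible without ever launching a Pod.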

The Dos and Don’ts of VACE: What It Does Well, What It Doesn’t
VACE introduces a powerful all-in-one framework for AI video generation and editing, combining text-to-video, reference-based creation, and precise editing in a single open-source model. It outperforms alternatives like AnimateDiff and SVD in resolution, flexibility, and controllability — though character consistency and memory usage remain key challenges.
AI Workloads

Deep Dive Into Creating and Listing on the Runpod Hub
A deep technical dive into how the Runpod Hub streamlines serverless AI deployment with a GitHub-native, release-triggered model. Learn how hub.json and tests.json files define infrastructure, deployment presets, and validation tests for reproducible AI workloads.
Product Updates