Emmett Fear

Emmett runs Growth at Runpod. He lives in Utah with his wife and dog, and loves to spend time hiking and paddleboarding. He has worked across many facets of tech, from marketing and operations to product and, most recently, growth.

AI on a Schedule: Using Runpod’s API to Run Jobs Only When Needed

Explains how to use Runpod’s API to run AI jobs on a schedule or on-demand, so GPUs are active only when needed. Demonstrates how scheduling GPU tasks can reduce costs by avoiding idle time while ensuring resources are available for peak workloads. A minimal scheduling sketch follows below.
Guides
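As a companion to the scheduling guide above, here is a minimal sketch of the pattern, assuming a Runpod serverless endpoint reachable at the /v2/{endpoint_id}/run route; the endpoint ID, API key, payload, and hourly interval are placeholders to replace with your own values.

```python
# Minimal sketch: queue work on a Runpod serverless endpoint on a fixed schedule,
# so GPU workers spin up only while jobs are pending and scale back down afterwards.
import os
import time

import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]  # placeholder: your serverless endpoint ID
API_KEY = os.environ["RUNPOD_API_KEY"]          # placeholder: your Runpod API key
RUN_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def submit_job(payload: dict) -> str:
    """Queue one asynchronous job and return its ID."""
    resp = requests.post(RUN_URL, headers=HEADERS, json={"input": payload}, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    # Hourly loop for illustration; a cron job or CI scheduler invoking this script
    # once per run is the more robust choice in practice.
    while True:
        job_id = submit_job({"task": "nightly-batch-inference"})
        print(f"Submitted job {job_id}")
        time.sleep(3600)
```

Because jobs are queued asynchronously, the script can exit right after submission and poll the endpoint's status route later, so GPUs are billed only for the time the work actually runs.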

Integrating Runpod with CI/CD Pipelines: Automating AI Model Deployments

Shows how to integrate Runpod into CI/CD pipelines to automate AI model deployments. Details setting up continuous integration workflows that push machine learning models to Runpod, enabling seamless updates and scaling without manual intervention. A sketch of such a deploy step follows below.
Guides
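To make that workflow concrete, here is a hedged sketch of a deploy step a CI pipeline could run after tests pass. The container registry, the rest.runpod.io route, and the imageName field are illustrative assumptions rather than a documented contract (in Runpod's model the image may live on a template the endpoint references), so substitute the API or SDK call your own setup uses.

```python
# Hypothetical CI deploy step: build and push the model image from the CI checkout,
# then point an existing Runpod serverless endpoint at the new tag.
# The registry name, REST route, and payload field below are illustrative assumptions.
import os
import subprocess

import requests

IMAGE = f"registry.example.com/my-model:{os.environ['GIT_SHA']}"  # placeholder registry/tag
ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]
API_KEY = os.environ["RUNPOD_API_KEY"]

# 1. Build and push the container image produced by this pipeline run.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
subprocess.run(["docker", "push", IMAGE], check=True)

# 2. Roll the endpoint over to the new image (assumed route and field; adapt to your setup).
resp = requests.patch(
    f"https://rest.runpod.io/v1/endpoints/{ENDPOINT_ID}",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"imageName": IMAGE},
    timeout=60,
)
resp.raise_for_status()
print(f"Endpoint {ENDPOINT_ID} now serving {IMAGE}")
```

Wiring a script like this into the pipeline's deploy stage means every merged model change ships without anyone clicking through a console.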

Secure AI Deployments with Runpod's SOC 2 Compliance

Discusses how Runpod’s SOC 2 compliance and security measures ensure safe AI model deployments. Covers what SOC 2 entails for protecting data and how Runpod’s infrastructure keeps machine learning workloads secure and compliant.
Guides

GPU Survival Guide: Avoid OOM Crashes for Large Models

Offers a survival guide for using GPUs to train large AI models without running into out-of-memory (OOM) errors. Provides memory optimization techniques like gradient checkpointing to help you avoid crashes when scaling model sizes. A short checkpointing sketch follows below.
Guides
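For a concrete picture of the gradient checkpointing technique mentioned above, here is a minimal PyTorch sketch; the layer width, depth, and batch size are arbitrary placeholders.

```python
# Minimal sketch of gradient checkpointing in PyTorch: activations inside each wrapped
# block are dropped during the forward pass and recomputed during backward,
# trading extra compute for a much lower peak memory footprint.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedMLP(nn.Module):
    def __init__(self, dim: int = 4096, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            # use_reentrant=False is the recommended mode in recent PyTorch releases.
            x = checkpoint(block, x, use_reentrant=False)
        return x


device = "cuda" if torch.cuda.is_available() else "cpu"
model = CheckpointedMLP().to(device)
x = torch.randn(8, 4096, device=device, requires_grad=True)
model(x).sum().backward()  # activations are recomputed block by block during backward
```

The trade-off is roughly one extra forward pass of compute per step in exchange for storing only block-boundary activations, which is often the difference between an OOM crash and a completed run on large models.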

Top Serverless GPU Clouds for 2026: Comparing Runpod, Modal, and More

Comparative overview of leading serverless GPU cloud providers in 2026, including Runpod, Modal, and more. Highlights each platform’s key features, pricing, and performance.
Guides

Runpod Secrets: Affordable A100/H100 Instances

Uncovers how to obtain affordable access to NVIDIA A100 and H100 GPU instances on Runpod. Shares tips for cutting costs while leveraging these top-tier GPUs for heavy AI training tasks.
Guides

Runpod’s Prebuilt Templates for LLM Inference

Highlights Runpod’s ready-to-use templates for LLM inference, which let you deploy large language models in the cloud quickly. Covers how these templates simplify setup and ensure optimal performance for serving LLMs. An example request against such an endpoint follows below.
Guides
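As an illustration of what serving from one of these templates can look like, here is a hedged sketch that queries an endpoint deployed from a vLLM-style template through an OpenAI-compatible route; the /openai/v1 base URL pattern and the model name are assumptions to replace with the values shown on your own endpoint.

```python
# Sketch: call an LLM endpoint deployed from a Runpod inference template via its
# OpenAI-compatible API. The base URL pattern and model name are assumptions;
# substitute the values your endpoint actually exposes.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["RUNPOD_API_KEY"],
    base_url=f"https://api.runpod.ai/v2/{os.environ['RUNPOD_ENDPOINT_ID']}/openai/v1",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder: whatever model the template serves
    messages=[{"role": "user", "content": "Give a one-line definition of gradient checkpointing."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Because the route is OpenAI-compatible, existing client code can usually be pointed at the endpoint by changing only the base URL and API key.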

Scale AI Models Without Vendor Lock-In (Runpod)

Explains how Runpod enables you to scale AI models without being locked into a single cloud vendor. Highlights the platform’s flexibility for multi-cloud deployments, ensuring you avoid lock-in while expanding machine learning workloads.
Guides

Top 12 Cloud GPU Providers for AI and Machine Learning in 2026

Overview of the top 12 cloud GPU providers in 2026. Reviews each platform’s features, performance, and pricing to help you identify the best choice for your AI/ML workloads.
Guides
