Emmett Fear

Emmett runs Growth at Runpod. He lives in Utah with his wife and dog, and loves to spend time hiking and paddleboarding. He has worked across many facets of tech, from marketing and operations to product and, most recently, growth.

How to Expose an AI Model as a REST API from a Docker Container

Explains how to turn an AI model into a REST API served straight from a Docker container. Guides you through setting up the model server inside the container and exposing its endpoints so the model can be integrated into applications.
Guides

How to Deploy a Custom LLM in the Cloud Using Docker

Provides a walkthrough for deploying a custom large language model (LLM) in the cloud using Docker. Covers containerizing your model, enabling GPU support, and deploying it on Runpod so you can serve or fine-tune it with ease.
Guides

The Best Way to Access B200 GPUs for AI Research in the Cloud

Explains the most efficient way to access NVIDIA B200 GPUs for AI research via the cloud. Outlines how to obtain B200 instances on platforms like Runpod, with tips on setup and on getting the most out of these high-end GPUs for intensive experiments.
Guides

Cloud GPU Pricing Explained: How to Find the Best Value

Breaks down the nuances of cloud GPU pricing and how to get the best value for your needs. Discusses on-demand vs. spot instances, reserved contracts, and tips for minimizing costs when running AI workloads.
Guides

How ML Engineers Can Train and Deploy Models Faster Using Dedicated Cloud GPUs

Explains how machine learning engineers can speed up model training and deployment by using dedicated cloud GPUs to reduce setup overhead and boost efficiency.
Guides

Security Measures to Expect from AI Cloud Deployment Providers

Discusses the key security measures that leading AI cloud providers should offer. Highlights expectations like data encryption, SOC2 compliance, robust access controls, and monitoring to help you choose a secure platform for your models.
Guides

What to Look for in Secure Cloud Platforms for Hosting AI Models

Provides guidance on evaluating secure cloud platforms for hosting AI models. Covers key factors such as data encryption, network security, compliance standards, and access controls to ensure your machine learning deployments are well-protected.
Guides

Get Started with PyTorch 2.4 and CUDA 12.4 on Runpod: Maximum Speed, Zero Setup

Explains how to quickly get started with PyTorch 2.4 and CUDA 12.4 on Runpod. Covers setting up a high-speed training environment with zero configuration, so you can begin training models on the latest GPU software stack immediately.
Guides

How to Serve Gemma Models on L40S GPUs with Docker

Details how to deploy and serve Gemma language models on NVIDIA L40S GPUs using Docker and vLLM. Covers environment setup and how to use FastAPI to expose the model via a scalable REST API.
Guides

