You've unlocked a referral bonus! Sign up today and you'll get a random credit bonus between $5 and $500
Emmett Fear

Emmett runs Growth at Runpod. He lives in Utah with his wife and dog, and loves spending time hiking and paddleboarding. He has worked across many facets of tech, from marketing and operations to product and, most recently, growth.

AI Docker Containers: Deploying Generative AI Models on Runpod

Covers how to deploy generative AI models in Docker containers on Runpod’s platform. Details container configuration, GPU optimization, and best practices.
Guides

Deploy AI Models with Instant Clusters for Optimized Fine-Tuning

Discusses how Runpod’s Instant Clusters streamline the deployment of AI models for fine-tuning. Explains how on-demand GPU clusters enable optimized training and scaling with minimal overhead.
Guides

An AI Engineer’s Guide to Deploying RVC (Retrieval-Based Voice Conversion) Models in the Cloud

Walks through how AI engineers can deploy Retrieval-Based Voice Conversion (RVC) models in the cloud. Covers setting up the environment with GPU acceleration and scaling voice conversion applications on Runpod.
Guides

How to Deploy a Hugging Face Model on a GPU-Powered Docker Container

Learn how to deploy a Hugging Face model in a GPU-powered Docker container for fast, scalable inference. This step-by-step guide covers container setup and deployment to streamline running NLP models in the cloud.
Guides

Using Runpod’s Serverless GPUs to Deploy Generative AI Models

Highlights how Runpod’s serverless GPUs enable quick deployment of generative AI models with minimal setup. Discusses on-demand GPU allocation, cost savings during idle periods, and easy scaling of generative workloads without managing servers.
Guides

Nvidia RTX 5090 Review: Specs, VRAM, Benchmarks, and AI Performance

The complete guide to the Nvidia RTX 5090: full specs, 32 GB GDDR7 VRAM, benchmark performance, AI workload capabilities, and how it compares to the H100 and RTX 4090 for cloud GPU workloads.
Guides

Beginner’s Guide to AI for Students Using GPU-Enabled Cloud Tools

Introduces students to the basics of AI using GPU-enabled cloud tools. Covers fundamental concepts and how cloud-based GPU resources make it easy to start building and training AI models.
Guides

Training LLMs on H100 PCIe GPUs in the Cloud: Setup and Optimization

Guides you through setting up and optimizing LLM training on Nvidia H100 PCIe GPUs in the cloud. Covers environment configuration, parallelization techniques, and performance tuning for large language models.
Guides

Optimizing Docker Setup for PyTorch Training with CUDA 12.8 and Python 3.11

Offers tips to optimize Docker setup for PyTorch training with CUDA 12.8 and Python 3.11. Discusses configuring containers and environment variables to ensure efficient GPU utilization and compatibility.
Guides

