Emmett Fear

Emmett runs Growth at Runpod. He lives in Utah with his wife and dog, and loves to spend time hiking and paddleboarding. He has worked across many facets of tech, from marketing and operations to product and, most recently, growth.

Everything You Need to Know About the Nvidia RTX 5090 GPU

Comprehensive overview of the Nvidia RTX 5090 GPU, including its release details, performance, AI and compute capabilities, and key features.
Guides

Beginner's Guide to AI for Students Using GPU-Enabled Cloud Tools

Introduces students to the basics of AI using GPU-enabled cloud tools. Covers fundamental concepts and how cloud-based GPU resources make it easy to start building and training AI models.
Guides

Training LLMs on H100 PCIe GPUs in the Cloud: Setup and Optimization

Guides you through setting up and optimizing LLM training on Nvidia H100 PCIe GPUs in the cloud. Covers environment configuration, parallelization techniques, and performance tuning for large language models.
Guides

Optimizing Docker Setup for PyTorch Training with CUDA 12.8 and Python 3.11

Offers tips to optimize Docker setup for PyTorch training with CUDA 12.8 and Python 3.11. Discusses configuring containers and environment variables to ensure efficient GPU utilization and compatibility.
Guides

Train Cutting-Edge AI Models with PyTorch 2.8 + CUDA 12.8 on Runpod

Shows how to leverage PyTorch 2.8 with CUDA 12.8 on Runpod to train cutting-edge AI models, using a cloud GPU environment that eliminates the usual hardware setup hassles.
Guides

The GPU Infrastructure Playbook for AI Startups: Scale Smarter, Not Harder

Provides a strategic playbook for AI startups to scale smarter, not harder. Covers how to leverage GPU infrastructure effectively—balancing cost, performance, and security—to accelerate AI development.
Guides

How to Deploy Hugging Face Models on A100 SXM GPUs in the Cloud

Provides step-by-step instructions to deploy Hugging Face models on A100 SXM GPUs in the cloud. Covers environment setup, model optimization, and best practices to utilize high-performance GPUs for NLP or vision tasks.
Guides

Runpod Secrets: Scaling LLM Inference to Zero Cost During Downtime

Reveals techniques to scale LLM inference on Runpod to zero cost during downtime by leveraging serverless GPUs and auto-scaling, eliminating idle resource expenses for NLP model deployments.
Guides

Exploring Pricing Models of Cloud Platforms for AI Deployment

Examines various cloud platform pricing models for AI deployment, helping you understand and compare cost structures for hosting machine learning workflows.
Guides
