
Emmett Fear

Emmett runs Growth at Runpod. He lives in Utah with his wife and dog, and loves to spend time hiking and paddleboarding. He has worked across many facets of tech, including marketing, operations, product, and most recently, growth.

GPU Memory Management for Large Language Models: Optimization Strategies for Production Deployment

Deploy larger language models on existing hardware with advanced GPU memory optimization on Runpod—use gradient checkpointing, model sharding, and quantization to reduce memory by up to 80% while maintaining performance at scale.
Guides

AI Model Quantization: Reducing Memory Usage Without Sacrificing Performance

Optimize AI models for production with quantization on Runpod—reduce memory usage by up to 80% and boost inference speed using 8-bit or 4-bit precision on A100/H100 GPUs, with Dockerized workflows and serverless deployment at scale.
Guides
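The idea behind the quantization guide above can be sketched in a few lines: map float weights to 8-bit integers with a per-tensor scale, cutting storage from 4 bytes to 1 byte per value. This is an illustrative, framework-free sketch of symmetric int8 quantization, not the specific recipe from the guide.

```python
# Minimal sketch of symmetric 8-bit quantization (illustrative only):
# store int8 values plus one float scale instead of full float32 weights.

def quantize_int8(weights):
    """Quantize a list of floats to int8 values plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid div-by-zero
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.5, 0.03]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Each restored value lands within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Real deployments typically quantize per channel or per group and calibrate the scale on sample data, which is why accuracy loss stays small even at 4-bit precision.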

Edge AI Deployment: Running GPU-Accelerated Models at the Network Edge

Deploy low-latency, privacy-first AI models at the edge using Runpod—prototype and optimize GPU-accelerated inference on RTX and Jetson-class hardware, then scale with Dockerized workflows, secure containers, and serverless endpoints.
Guides

The Complete Guide to Multi-GPU Training: Scaling AI Models Beyond Single-Card Limitations

Train trillion-scale models efficiently with multi-GPU infrastructure on Runpod—use A100/H100 clusters, advanced parallelism strategies (data, model, pipeline), and pay-per-second pricing to accelerate training from months to days.
Guides
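Of the parallelism strategies named above, data parallelism is the simplest to picture: each worker computes gradients on its own shard of the batch, the gradients are averaged (an all-reduce), and every worker applies the same update. A toy sketch, with illustrative names rather than any specific framework's API:

```python
# Toy data-parallel training step for the scalar model y = w * x.
# In practice each shard lives on its own GPU and the averaging is a
# collective all-reduce; here everything runs in one process.

def local_gradient(w, shard):
    """d/dw of mean squared error on one worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Average per-worker gradients, as an all-reduce would."""
    return sum(grads) / len(grads)

def train_step(w, shards, lr=0.05):
    grads = [local_gradient(w, s) for s in shards]  # parallel in practice
    return w - lr * all_reduce_mean(grads)

# Two "GPUs", each holding half of a batch drawn from y = 3 * x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(50):
    w = train_step(w, shards)
# w converges toward the true slope of 3.0.
```

Model and pipeline parallelism instead split the model itself across devices, which is what makes training beyond a single card's memory possible.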

Creating High-Quality Videos with CogVideoX on RunPod's GPU Cloud

Generate high-quality 10-second AI videos with CogVideoX on Runpod—leverage L40S GPUs, Dockerized PyTorch workflows, and scalable serverless infrastructure to produce compelling motion-accurate content for marketing, animation, and prototyping.
Guides

Synthesizing Natural Speech with Parler-TTS Using Docker

Create lifelike speech with Parler-TTS on Runpod—generate expressive, multi-speaker audio using RTX 4090 GPUs, Dockerized TTS environments, and real-time API endpoints for accessibility, education, and virtual assistants.
Guides

Fine-Tuning DeepSeek-Coder V2 for Specialized Coding AI on RunPod

Fine-tune DeepSeek-Coder V2 on Runpod’s A100 GPUs to accelerate code generation and debugging—customize multilingual coding models using Dockerized environments, scalable training, and secure serverless deployment.
Guides

Deploying Yi-1.5 for Vision-Language AI Tasks on RunPod with Docker

Deploy 01.AI’s Yi-1.5 on Runpod to power vision-language AI—run image-text fusion tasks like captioning and VQA using A100 GPUs, Dockerized PyTorch environments, and scalable serverless endpoints with per-second billing.
Guides

Generating 3D Models with TripoSR on RunPod's Scalable GPU Platform

Generate high-fidelity 3D models in seconds with TripoSR on Runpod—leverage L40S GPUs, Dockerized PyTorch workflows, and scalable infrastructure for fast, texture-accurate mesh creation in design, AR, and gaming pipelines.
Guides
