Emmett runs Growth at Runpod. He lives in Utah with his wife and dog, and loves to spend time hiking and paddleboarding. He has worked in many different facets of tech, from marketing and operations to product and, most recently, growth.
May 9, 2025
Running Stable Diffusion on L4 GPUs in the Cloud: A How-To Guide
Provides a how-to guide for running Stable Diffusion on NVIDIA L4 GPUs in the cloud. Details environment setup, model optimization, and the steps to generate images with these efficient GPUs.
Guides
April 27, 2025
Achieving Faster, Smarter AI Inference with Docker Containers
Discusses methods to achieve faster and smarter AI inference using Docker containers. Highlights optimization techniques and orchestration strategies to maximize throughput and efficiency when serving models.
Guides
May 16, 2025
The Fastest Way to Run Mixtral in a Docker Container with GPU Support
Describes the quickest method to run Mixtral with GPU acceleration in a Docker container. Covers setting up Mixtral’s environment with GPU support to ensure fast inference performance.
Guides
April 26, 2025
Serverless GPUs for API Hosting: How They Power AI APIs – A Runpod Guide
Explores how serverless GPUs power AI-driven APIs on platforms like Runpod. Demonstrates how on-demand GPU instances efficiently handle inference requests and auto-scale, making them ideal for serving AI models as APIs.
Guides
April 28, 2025
Unpacking Serverless GPU Pricing for AI Deployments
Breaks down how serverless GPU pricing works for AI deployments. Explains the pay-as-you-go cost model and shares tips to optimize usage and minimize expenses for cloud-based ML tasks.
Guides
April 26, 2025
Unlock Efficient Model Fine-Tuning With Pod GPUs Built for AI Workloads
Shows how Runpod’s specialized Pod GPUs enable efficient model fine-tuning for AI workloads. Explains how these GPUs accelerate training while reducing resource costs for intensive machine learning tasks.
Guides
May 16, 2025
How to Deploy LLaMA.cpp on a Cloud GPU Without Hosting Headaches
Shows how to deploy LLaMA.cpp on a cloud GPU without the usual hosting headaches. Covers setting up the model in a Docker container and running it for efficient inference, all while avoiding complex server management.
Guides
May 8, 2025
Everything You Need to Know About the Nvidia DGX B200 GPU
Comprehensive overview of the Nvidia DGX B200 GPU, including its architecture, performance, AI and compute capabilities, key features, and use cases.
Guides
May 2, 2025
Run Automatic1111 on Runpod: The Easiest Way to Use Stable Diffusion A1111 in the Cloud
Explains the easiest way to use Stable Diffusion’s Automatic1111 web UI on Runpod. Walks through launching the A1111 interface on cloud GPUs, enabling quick AI image generation without local installation.