
Brendan McKeag
Skip the front ends—learn how to use Jupyter Notebook on RunPod to run Stable Diffusion directly in Python. Great for devs who want full control.
AI Workloads

Brendan McKeag
Large language models can write poetry and solve logic puzzles—but fail at tasks like counting letters or doing math. Here’s why, and what it tells us about their design.
Learn AI

Brendan McKeag
Lower VRAM usage and improve inference speed using GGUF quantized models in KoboldCPP with just a few environment variables.
AI Workloads

Brendan McKeag
GGUF quantization makes large language models faster to run and lighter on memory. This guide walks you through using KoboldCPP to load, run, and manage quantized LLMs on Runpod.
Learn AI

Brendan McKeag
Better Forge is a new Runpod template that lets you launch Stable Diffusion pods in less time and with less hassle. Here's how it improves your workflow.
AI Infrastructure

Brendan McKeag
Deploy large language models like LLaMA or Mixtral on RunPod Serverless with strong privacy controls and no infrastructure headaches. Here’s how.
AI Infrastructure

Brendan McKeag
Use Ollama to compare multiple LLMs side-by-side on a single GPU pod—perfect for fast, realistic model evaluation with shared prompts.
AI Workloads