Brendan McKeag

SuperHOT 8k Token Context Models Are Here For Text Generation

New 8k-context SuperHOT models from TheBloke, including WizardLM, Vicuna, and Manticore, allow longer, more immersive text generation in Oobabooga. With more room for character memory and story progression, these models enhance AI storytelling.
Read article
Hardware & Trends

Savings Plans Are Here For Secure Cloud Pods – How To Purchase a Monthly Plan And Save Big

Learn how to use Runpod's new Savings Plans to save up to 20% on Secure Cloud pods with monthly or quarterly commitments—ideal for users with high GPU workloads.
Read article
Product Updates

Runpod is Proud to Sponsor the StockDory Chess Engine

Runpod is now an official sponsor of StockDory, a rapidly evolving open-source chess engine whose playing strength is improving at a faster rate than Stockfish's. StockDory offers deep positional insight, lightning-fast calculation, and full customization, making it ideal for anyone looking to explore AI-driven chess analysis.
Read article
Product Updates

KoboldAI – The Other Roleplay Front End, And Why You May Want to Use It

While Oobabooga is a popular choice for text-based AI roleplay, KoboldAI offers a powerful alternative with smart context handling, more flexible editing, and better long-term memory retention. This guide compares the two frontends and walks through deploying KoboldAI on Runpod for writers and roleplayers looking for a deeper, more persistent AI interaction experience.
Read article
Learn AI

Breaking Out of the 2048 Token Context Limit in Oobabooga

Oobabooga now supports up to 8192 tokens of context, up from the previous 2048-token limit. Learn how to upgrade your install, download compatible models, and optimize your setup to take full advantage of expanded memory capacity in longform text generation.
Read article
Learn AI

Groundbreaking NVIDIA H100 GPUs Now Available On Runpod

Runpod now offers access to NVIDIA’s powerful H100 GPUs, designed for generative AI workloads at scale. These next-gen GPUs deliver 7–12x performance gains over the A100, making them ideal for training massive models like GPT-4 or deploying demanding inference tasks.
Read article
Hardware & Trends

How to Work With Long Term Memory In Oobabooga and Text Generation

Oobabooga has a 2048-token context limit, but with the Long Term Memory extension, you can store and retrieve relevant memories across conversations. This guide shows how to install the plugin, use the Character panel for persistent memory, and work around current context limitations.
Read article
Learn AI
