
16k Context LLM Models Now Available On Runpod
Runpod now supports Panchovix’s 16k-token context models, enabling much deeper context retention in long-form generation. These models require more VRAM and may trade off some generation performance, but they are well suited to extended sessions such as roleplay or complex Q&A.
Product Updates

SuperHot 8k Token Context Models Are Here For Text Generation
New 8k-token context models from TheBloke, including WizardLM, Vicuna, and Manticore variants, enable longer, more immersive text generation in Oobabooga. With more room for character memory and story progression, these models enhance AI storytelling.
Hardware & Trends