Mixture of Experts (MoE): A Scalable AI Training Architecture
Alyssa Mazzina
23 April 2025
Alyssa is Runpod's Content Marketing Manager. She lives in California with her kids and dogs.
MoE models scale efficiently by activating only a subset of parameters. Learn how this architecture works, why it’s gaining traction, and how Runpod supports MoE training and inference.
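For a concrete picture of what "activating only a subset of parameters" means before reading the full post, here is a minimal sketch of top-k expert routing, assuming PyTorch. The layer sizes, expert count, and k=2 are illustrative assumptions, not details of any model or implementation described on this blog.

```python
# Minimal sketch of sparse top-k MoE routing (assumes PyTorch); illustrative only,
# not the routing used by any specific model discussed here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Each token is sent to only k of n expert MLPs, so most parameters stay idle per token."""
    def __init__(self, d_model=64, d_hidden=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # gating network scores every expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                    # x: (tokens, d_model)
        scores = self.router(x)                              # (tokens, n_experts)
        top_vals, top_idx = torch.topk(scores, self.k, -1)   # keep the k best experts per token
        weights = F.softmax(top_vals, dim=-1)                # normalize over the chosen k only
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                 # tokens whose slot-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TopKMoE()(tokens).shape)  # torch.Size([16, 64]); only 2 of 8 experts ran per token
```

Because each token touches just two of the eight experts, compute per token stays close to that of a dense layer one quarter the size while total capacity grows with the number of experts, which is the scaling property the post explores.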
Built on RunPod: How Cogito Trained Models Toward ASI
San Francisco-based Deep Cogito used RunPod infrastructure to train Cogito v1, a high-performance open model family aiming at artificial superintelligence. Here’s how they did it.
GPUs still dominate AI training in 2025, but emerging hardware and hybrid infrastructure are reshaping what's possible. Here’s what GTC revealed—and what it means for you.
The RTX 5090 Is Here: Serve 65,000+ Tokens Per Second on RunPod
The new NVIDIA RTX 5090 is now live on RunPod. With blazing-fast inference speeds and large memory capacity, it’s ideal for real-time LLM workloads and AI scaling.
How to Choose a Cloud GPU for Deep Learning (Ultimate Guide)
Choosing a cloud GPU isn’t just about power—it’s about efficiency, memory, compatibility, and budget. This guide helps you select the right GPU for your deep learning projects.
Bare Metal vs. Instant Clusters: What’s Best for Your AI Workload?
Runpod now offers Instant Clusters alongside Bare Metal. This post compares the two deployment options and explains when to choose one over the other for your compute needs.