
Runpod Blog

Our team’s insights on building better and scaling smarter.

LTXVideo by Lightricks: Sleeper Hit in Open-Source Video Gen

LTXVideo may have flown under the radar, but it’s one of the most exciting open-source video generation models of the year. Learn what makes it special and how to try it.
Read article
Hardware & Trends

How to Run LTXVideo in ComfyUI on Runpod

LTXVideo by Lightricks is a high-performance open-source video generation package supporting text, image, and video prompting. This guide walks you through installing it in a ComfyUI pod on Runpod, including repo setup, required models, and workflow usage.
Read article
AI Workloads

Building an OCR System Using Runpod Serverless

Learn how to automate receipt and invoice processing by building an OCR system with Runpod Serverless and pre-trained Hugging Face models. The guide walks through deployment, image conversion, API inference, and structured PDF generation; a minimal sketch of the endpoint call pattern appears below this entry.
Read article
Learn AI
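
The OCR guide above centers on sending images to a Runpod Serverless endpoint for inference. As a rough illustration of that call pattern only, here is a minimal Python sketch that posts a base64-encoded receipt to a hypothetical endpoint through the synchronous /runsync route. The endpoint ID, the "image" field name, and the response shape are assumptions; the real input and output schema is defined by whichever handler you deploy.

```python
import base64
import os

import requests

# Hypothetical endpoint ID and input schema -- the handler you deploy
# defines the real field names; this only illustrates the call pattern.
ENDPOINT_ID = "your-ocr-endpoint-id"
API_KEY = os.environ["RUNPOD_API_KEY"]

# Encode a local receipt image as base64 so it can travel in the JSON payload.
with open("receipt.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Runpod Serverless exposes a synchronous run endpoint at /runsync;
# the request body wraps handler arguments under an "input" key.
resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"image": image_b64}},
    timeout=120,
)
resp.raise_for_status()

# The structure of "output" depends entirely on the deployed handler.
print(resp.json().get("output"))
```

For longer-running jobs, the asynchronous /run route with status polling is the usual alternative to /runsync.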

Build an OCR System Using RunPod Serverless

Learn how to build an OCR pipeline using RunPod Serverless and Hugging Face models. Great for processing receipts, invoices, and scanned documents at scale.
Read article
AI Infrastructure

Community Spotlight: How AnonAI Scaled Its Private Chatbot Platform with Runpod

AnonAI used Runpod to scale its decentralized chatbot platform with 40K+ users and zero data collection. Learn how they power private AI at scale.
Read article
Learn AI

Announcing Global Networking for Secure Pod-to-Pod Communication Across Data Centers

Runpod now supports secure internal communication between pods across data centers. With Global Networking enabled, your pods can talk to each other privately over .runpod.internal hostnames, with no open ports required; a short sketch of what that looks like appears below this entry.
Read article
Product Updates
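
To make the announcement above concrete, here is a minimal Python sketch of one pod calling a service running on another pod over the private network. The pod ID, port, and /health route are placeholders, and the `<pod-id>.runpod.internal` hostname pattern is assumed from the blurb's description; the only requirement taken from the post is that Global Networking is enabled and no public port is exposed.

```python
import requests

# Hypothetical setup: a second pod in your account runs an HTTP service on
# port 8000. With Global Networking enabled, it should be reachable from this
# pod at <pod-id>.runpod.internal without exposing any public port.
PEER_POD_ID = "abc123xyz"  # placeholder pod ID
PEER_PORT = 8000           # whatever port the peer service listens on

url = f"http://{PEER_POD_ID}.runpod.internal:{PEER_PORT}/health"

# Plain HTTP over the private internal network; no public ingress involved.
resp = requests.get(url, timeout=5)
print(resp.status_code, resp.text)
```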

How Much Can a GPU Cloud Save You? A Cost Breakdown vs On-Prem Clusters

We crunched the numbers: deploying 4x A100s on Runpod’s GPU cloud can save over $124,000 versus an on-prem cluster across 3 years. Learn why cloud beats on-prem for flexibility, cost, and scale.
Read article
Cost Optimization
