We're excited to announce the launch of AP-JP-1, RunPod's first data center in Japan—now live in Fukushima. This marks a major step forward in our global infrastructure strategy and opens the door to dramatically better performance for users across the Asia-Pacific region.
Until now, developers and organizations in Asia had to rely on RunPod's US- or EU-based regions, with round-trip latencies of 150–200 ms. With AP-JP-1, users in Japan, South Korea, and nearby countries can expect latencies as low as 8–50 ms, enabling faster real-time inference and smoother interactive workflows across the board.
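If you want to verify the improvement from your own location, a quick round-trip check against a service deployed in each region is enough. The sketch below is a minimal example using Python's requests library; the endpoint URL is a placeholder for whatever health-check route your own deployment exposes.

```python
import time
import statistics
import requests

# Placeholder URL: point this at a health-check endpoint for a service
# you have deployed in the region you want to measure (e.g. AP-JP-1).
ENDPOINT = "https://your-service.example.com/health"

def measure_latency(url: str, samples: int = 10) -> float:
    """Return the median round-trip time in milliseconds over several requests."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=5)
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

print(f"median round-trip: {measure_latency(ENDPOINT):.1f} ms")
```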
For Japanese organizations operating under strict regulatory requirements, AP-JP-1 supports compliance with national data residency rules, a critical consideration for sectors that handle sensitive data, such as finance, healthcare, and government.
We’re launching AP-JP-1 with NVIDIA H200s, our most powerful GPU offering, ideal for large model training, fine-tuning, and high-throughput inference.
AP-JP-1 is purpose-built for:
- Real-time inference that needs low latency for users across the Asia-Pacific region
- Training and fine-tuning large models on H200 GPUs
- Workloads that must keep data in Japan to meet local regulatory requirements

Whether you're running real-time inference, fine-tuning large models, or ensuring your infrastructure complies with local regulations, AP-JP-1 brings the power of RunPod closer to you.
Deploy your next workload in AP-JP-1 now via the RunPod console.
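If you prefer to script deployments, something like the sketch below works with the RunPod Python SDK. Treat the gpu_type_id, data_center_id, and image tag values as assumptions; confirm the exact identifiers exposed for AP-JP-1 and the H200 in the console or API docs.

```python
import runpod

runpod.api_key = "YOUR_RUNPOD_API_KEY"

# Minimal sketch: launch a pod pinned to the new Japan data center.
# "AP-JP-1" and "NVIDIA H200" are assumed identifiers, and the image tag is
# illustrative; check the RunPod console for the exact strings to use.
pod = runpod.create_pod(
    name="h200-jp-test",
    image_name="runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04",
    gpu_type_id="NVIDIA H200",
    data_center_id="AP-JP-1",
    gpu_count=1,
)

print(pod)
```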
For questions or feedback, reach out to our team or join the conversation on Discord.
This expansion advances RunPod's mission to democratize access to high-performance AI infrastructure on a truly global scale.