RunPod is excited to announce a major expansion of our Global Networking feature, which is now available in 14 additional data centers. Since its launch in December 2024, this capability, which enables seamless cross-data-center communication between pods, has seen tremendous adoption. The expansion significantly increases our global coverage, so more users can take advantage of our virtual internal network regardless of geographic location.
Global Networking is now available in the following additional data centers:
These join our originally supported locations:
For those who might have missed our initial announcement, Global Networking allows pods to communicate with each other over a secure virtual internal network facilitated by RunPod. Your pods can talk to each other without opening TCP or HTTP ports to the internet, creating a private and secure environment for your applications. You can share data and run client-server applications across multiple pods in real time while utilizing distributed computing resources across different geographic regions. All communication takes place over the private .runpod.internal network.
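To make that concrete, here is a minimal sketch of two pods talking over the internal network using nothing but Python's standard library. The hostname server-pod.runpod.internal, the POD_ROLE environment variable, and port 8000 are hypothetical placeholders rather than part of RunPod's API; substitute your own pod's internal address.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Which side of the conversation this pod plays; set POD_ROLE=server on
# one pod and POD_ROLE=client on the other. The hostname below is a
# hypothetical placeholder for the serving pod's internal address.
ROLE = os.environ.get("POD_ROLE", "client")
SERVER_HOST = "server-pod.runpod.internal"
PORT = 8000

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the server pod\n")

if ROLE == "server":
    # Bind on all interfaces; requests arrive over the private network,
    # so no public TCP or HTTP port has to be exposed.
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
else:
    # Reach the other pod by its internal hostname.
    with urlopen(f"http://{SERVER_HOST}:{PORT}") as resp:
        print(resp.read().decode())
```

Because the request never leaves the virtual internal network, neither pod needs a public port exposed.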
Enabling Global Networking for your pods remains simple:
With our expanded Global Networking infrastructure, here are a few hypothetical architectures that show what becomes possible for AI workloads:
AI research teams could construct sophisticated training pipelines that segment workloads across geographic regions. For example, a team might distribute their data preprocessing across pods in US-TX-3 and US-TX-4, while running their primary model training in EU-FR-1 to take advantage of specific GPU availability. Training pods could communicate model gradients and parameter updates seamlessly over the internal network, with intermediate checkpoints flowing between pods without ever touching the public internet. Data scientists could orchestrate the entire pipeline from a central management pod, monitoring training progress and adjusting hyperparameters in real time regardless of where the actual computation occurs.
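As a rough sketch of how such training pods might rendezvous, here is what gradient averaging over the internal network could look like with PyTorch's distributed package. The hostname trainer-0.runpod.internal, the port, the RANK and WORLD_SIZE environment variables, and the gloo backend are illustrative assumptions, not anything prescribed by RunPod.

```python
import os
import torch
import torch.distributed as dist

# Hypothetical rendezvous address: the coordinating pod's internal hostname.
# Every participating pod runs this script with its own RANK (0..WORLD_SIZE-1).
MASTER = "tcp://trainer-0.runpod.internal:29500"
RANK = int(os.environ.get("RANK", "0"))
WORLD_SIZE = int(os.environ.get("WORLD_SIZE", "2"))

# "gloo" keeps the sketch CPU-friendly; NCCL would be typical for GPU training.
dist.init_process_group(backend="gloo", init_method=MASTER,
                        rank=RANK, world_size=WORLD_SIZE)

# Stand-in for gradients computed locally on this pod.
local_grads = torch.randn(4)

# Average gradients across all pods; the collective travels over the
# private .runpod.internal network rather than the public internet.
dist.all_reduce(local_grads, op=dist.ReduceOp.SUM)
local_grads /= WORLD_SIZE

print(f"rank {RANK}: averaged grads {local_grads.tolist()}")
dist.destroy_process_group()
```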
Global Networking could enable powerful federated learning architectures where model training happens across geographically distributed pods while raw data remains in its original location. A pharmaceutical company might deploy model training pods in US-GA-1 and EU-CZ-1 to process regional datasets, with a coordinator pod in US-IL-1 aggregating model updates without ever seeing the raw data. This approach would satisfy data residency requirements while still leveraging the combined knowledge from multiple regions to create more robust models.
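Here is a minimal sketch of the aggregation step such a coordinator pod might run. It assumes each regional pod ships only its model's state_dict over the internal network (the transport itself is omitted), and the tiny model and the federated_average helper are purely illustrative.

```python
import copy
import torch
import torch.nn as nn

def federated_average(state_dicts):
    """Average a list of model state_dicts parameter-by-parameter (FedAvg).

    Only model weights are exchanged between pods; the raw regional data
    never leaves the pod that holds it.
    """
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0)
    return avg

def make_model():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Illustrative stand-ins for updates received from the US-GA-1 and EU-CZ-1 pods.
regional_updates = [make_model().state_dict(), make_model().state_dict()]

global_model = make_model()
global_model.load_state_dict(federated_average(regional_updates))
print("aggregated a new global model from", len(regional_updates), "regions")
```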
AI applications requiring low-latency inference could deploy model serving pods across multiple regions (US-WA-1, EU-NL-1, OC-AU-1) to ensure users worldwide receive fast responses. A centralized pod in US-DE-1 could handle continuous model updates, automatically propagating the latest versions to edge serving pods over the secure internal network. This architecture would provide both the performance benefits of edge deployment and the management simplicity of centralized operations.
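A simple way an edge serving pod could pick up those updates is to poll the central pod over the internal network. The sketch below uses Python's standard library; the model-updates.runpod.internal hostname, the /version and /weights endpoints, and the polling interval are all hypothetical, and the actual model reload is left to whatever serving stack you run.

```python
import time
from urllib.request import urlopen

# Hypothetical central update pod, reachable only over the internal network.
UPDATE_POD = "http://model-updates.runpod.internal:8080"
POLL_SECONDS = 60

current_version = None

while True:
    # Ask the central pod which model version is latest.
    with urlopen(f"{UPDATE_POD}/version") as resp:
        latest = resp.read().decode().strip()

    if latest != current_version:
        # Pull the new weights over the private network and hot-swap them.
        with urlopen(f"{UPDATE_POD}/weights/{latest}") as resp, \
                open("/tmp/model.bin", "wb") as f:
            f.write(resp.read())
        # reload_model("/tmp/model.bin")  # hand off to your serving stack here
        current_version = latest
        print(f"switched to model version {latest}")

    time.sleep(POLL_SECONDS)
```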
Reinforcement learning projects requiring massive parallel simulations could distribute simulation pods across US-GA-2, US-TX-4, and EUR-IS-2 to take advantage of available computing resources. A central controller pod in US-CA-2 would aggregate experiences and update policies, which would then be distributed back to the simulation pods. This approach could scale to thousands of simultaneous simulations while maintaining efficient policy updates through the secure, high-speed internal network.
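Sketching the simulation side of that loop: a simulation pod could push experience batches to the controller and pull back the latest policy over a request/reply socket. This example uses ZeroMQ (pyzmq); the controller-pod.runpod.internal address, the port, and the message format are assumptions for illustration, and the controller pod would run a matching reply socket bound to the same port.

```python
# Simulation-pod side of a hypothetical experience/policy exchange.
# Requires pyzmq (pip install pyzmq).
import random
import zmq

CONTROLLER = "tcp://controller-pod.runpod.internal:5555"  # hypothetical address

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect(CONTROLLER)  # traffic stays on the private internal network

policy_version = 0
for step in range(3):
    # Stand-in for a batch of (state, action, reward) transitions produced
    # by the local simulator.
    experiences = [(random.random(), random.randint(0, 3), random.random())
                   for _ in range(32)]

    # Send experiences; receive the latest policy parameters in return.
    sock.send_pyobj({"policy_version": policy_version, "batch": experiences})
    reply = sock.recv_pyobj()
    policy_version = reply["policy_version"]
    print(f"step {step}: now running policy v{policy_version}")
```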
Get Started with Global Networking Today
This expansion represents our ongoing commitment to providing flexible and powerful networking capabilities for our users. If you have questions about how to best utilize Global Networking in your specific use case, please reach out to our support team or join the discussion on our Discord server.
Give it a try today and experience the power of borderless pod communication!