Cloud GPUs
Rent NVIDIA H200 SXM GPUs from $3.99/hr
High-performance data center GPU based on Hopper architecture with 141GB HBM3e memory and 4.8TB/s bandwidth for accelerating generative AI and HPC workloads.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

Built on NVIDIA Hopper Architecture

Breakthrough architecture designed for transformer models, delivering unprecedented AI training and inference performance.

Fourth-Generation Tensor Cores

Advanced AI acceleration with FP8 precision delivers up to 5X faster training for large language models.

141GB HBM3e Memory

First GPU with HBM3e at 4.8TB/s bandwidth - nearly double the H100's capacity with 1.4X more memory bandwidth.

Multi-Instance GPU (MIG)

Partition a single H200 into up to 7 secure GPU instances, maximizing utilization for different workloads.
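To make the 7-instance limit concrete, here is a small sketch of the slicing arithmetic behind MIG. The 1g/2g/3g/4g/7g profile names follow NVIDIA's convention, but the exact profiles and per-instance memory for an H200 come from `nvidia-smi mig -lgip` on the machine itself; this helper only checks that a proposed layout fits in the 7 compute slices.

```python
# Illustrative sketch: MIG splits one H200 into up to 7 GPU instances.
# Real partitioning is done with `nvidia-smi mig`; this only models the
# slice budget, not actual profile availability on a given driver.
TOTAL_SLICES = 7

PROFILE_SLICES = {"1g": 1, "2g": 2, "3g": 3, "4g": 4, "7g": 7}

def fits_on_one_gpu(profiles):
    """Return True if the listed MIG profiles fit in one GPU's 7 slices."""
    used = sum(PROFILE_SLICES[p] for p in profiles)
    return used <= TOTAL_SLICES

print(fits_on_one_gpu(["3g", "3g", "1g"]))  # True: 3+3+1 = 7 slices
print(fits_on_one_gpu(["4g", "4g"]))        # False: 8 slices > 7
```

A layout like two 3g instances plus one 1g instance uses the whole GPU, which is how mixed training and inference workloads can share a single card.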
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 4.8 TB/s

FP16 Tensor Performance: 1,979 TFLOPS

NVLink Bandwidth: 900 GB/s
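The NVLink figure matters once a model no longer fits in one GPU's 141GB of memory. A back-of-envelope sketch of how many H200s a given model needs (the 20GB overhead allowance for KV cache and activations is an illustrative assumption, not a vendor figure):

```python
import math

H200_MEMORY_GB = 141  # HBM3e capacity per GPU, from the specs above

def gpus_needed(params_billion, bytes_per_param, overhead_gb=20):
    """Minimum H200s to hold the weights plus a rough allowance for
    KV cache and activations (overhead_gb is an illustrative guess)."""
    weights_gb = params_billion * bytes_per_param
    return math.ceil((weights_gb + overhead_gb) / H200_MEMORY_GB)

print(gpus_needed(70, 2))   # 70B params in FP16: 140 + 20 GB -> 2 GPUs
print(gpus_needed(405, 1))  # 405B params in FP8: 405 + 20 GB -> 4 GPUs
```

When the count is above 1, the 900 GB/s NVLink interconnect is what keeps tensor-parallel communication from becoming the bottleneck.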
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Specification | Details | Great for...
Memory Bandwidth | 4.8 TB/s | Faster token generation and reduced latency during LLM inference.
FP16 Tensor Performance | 1,979 TFLOPS | Accelerating transformer model computations and neural network operations.
NVLink Bandwidth | 900 GB/s | Seamless multi-GPU scaling when models exceed single GPU memory capacity.
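The "faster token generation" claim for memory bandwidth follows from a common roofline estimate: single-stream LLM decode is bandwidth-bound, since every generated token streams the full set of weights through memory once, so peak tokens/s is roughly bandwidth divided by model size in bytes. A rough sketch (the 4.8 TB/s figure is from the table; the model sizes are illustrative):

```python
def max_decode_tokens_per_s(bandwidth_tb_s, params_billion, bytes_per_param):
    """Bandwidth-bound ceiling on single-stream decode: each token
    requires reading all model weights from HBM once."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# 70B-parameter model in FP8 (1 byte/param) on one H200 at 4.8 TB/s:
print(round(max_decode_tokens_per_s(4.8, 70, 1), 1))  # ~68.6 tokens/s ceiling
```

Real throughput lands below this ceiling (KV-cache reads, kernel overheads), but the estimate shows why the 1.4X bandwidth gain over the H100 translates fairly directly into inference speed.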
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Community Cloud: from $3.59/hr

Secure Cloud: from $3.99/hr
Feature | Community Cloud | Secure Cloud
Unique GPU Models | 25 | 19
Global Regions | 17 | 14
Network Storage
Enterprise-Grade Reliability
Savings Plans
24/7 Support
Delightful Dev Experience

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
