Cloud GPUs
Rent NVIDIA H100 PCIe GPUs from $2.19/hr
High-performance data center GPU built on the Hopper architecture, with 80GB of HBM2e memory and 14,592 CUDA cores for AI training, machine learning, and enterprise workloads.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Hopper Architecture

Breakthrough architecture with fourth-generation Tensor Cores delivering up to 9X faster training for large language models.

Fourth-Generation Tensor Cores

Enhanced AI acceleration with FP8 precision and Transformer Engine delivering up to 30X faster inference.

80GB HBM2e Memory

High-bandwidth HBM2e delivering 2TB/s keeps large AI models fed during training and inference.

PCIe Gen5 Interface

Standard PCIe form factor with 350W power consumption provides flexible deployment in existing servers.
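As a rough sense of what 80GB of on-board memory buys you, here is a back-of-envelope sketch of how many model parameters fit at each precision. The figures are illustrative assumptions (weights only, ignoring activations and KV cache), not measurements:

```python
# Back-of-envelope capacity check: how many model parameters fit in
# 80 GB of GPU memory at different precisions (weights only).

MEMORY_GB = 80  # H100 PCIe on-board memory

BYTES_PER_PARAM = {
    "fp32": 4,
    "fp16/bf16": 2,
    "fp8": 1,  # enabled by the Transformer Engine on Hopper
}

def max_params_billions(memory_gb: float, bytes_per_param: int) -> float:
    """Upper bound on parameter count, ignoring activations and KV cache."""
    return memory_gb * 1e9 / bytes_per_param / 1e9

for fmt, nbytes in BYTES_PER_PARAM.items():
    # fp32 ~20B, fp16/bf16 ~40B, fp8 ~80B parameters
    print(f"{fmt}: ~{max_params_billions(MEMORY_GB, nbytes):.0f}B params")
```

This is why FP8 matters on this card: dropping from FP16 to FP8 roughly doubles the model size that fits on a single GPU.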
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 2.04 TB/s

FP16 Tensor Performance: 1.513 PFLOPS

PCIe Gen5 ×16 Bandwidth: 128 GB/s
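For small-batch LLM decoding, the 2.04 TB/s figure sets a hard throughput ceiling: every generated token has to stream the full weight set out of HBM at least once. A minimal sketch, assuming a hypothetical 40B-parameter model held in FP16 (which just fits in 80GB):

```python
# Upper bound on single-GPU decode speed for a bandwidth-bound LLM:
# each token streams all weights from HBM once. The 40B/FP16 model
# is a hypothetical example, not a benchmark result.

HBM_BANDWIDTH_BYTES_PER_S = 2.04e12   # 2.04 TB/s from the spec sheet
MODEL_BYTES = 40e9 * 2                # 40B params x 2 bytes (FP16)

seconds_per_token = MODEL_BYTES / HBM_BANDWIDTH_BYTES_PER_S
tokens_per_second = 1 / seconds_per_token

# roughly 39 ms/token, i.e. a ceiling in the mid-20s of tokens/s
print(f"~{seconds_per_token * 1e3:.1f} ms/token -> ~{tokens_per_second:.0f} tok/s ceiling")
```

Real decode rates land below this bound once attention, KV-cache reads, and kernel overheads are counted; the point is that memory bandwidth, not FLOPS, governs small-batch inference.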
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most
demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Specification | Details | Great for...
Memory Bandwidth | 2.04 TB/s | Feeding massive model weights and datasets into HBM2e without stalls—essential for large-scale AI training and inference.
FP16 Tensor Performance | 1.513 PFLOPS | Accelerating mixed-precision transformer training and inference, cutting fine-tuning time and boosting throughput.
PCIe Gen5 ×16 Bandwidth | 128 GB/s | Enabling high-speed host-to-GPU and GPU-to-GPU transfers in multi-card training and inference when NVLink isn’t available.
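The PCIe figure also implies how long it takes just to stage a checkpoint onto the card. A quick sketch: the quoted 128 GB/s counts both directions, so host-to-device traffic gets roughly half (~64 GB/s); the 80 GB checkpoint size is an assumption for illustration:

```python
# Time to load model weights over PCIe Gen5 x16 vs. one sweep of HBM.
# 64 GB/s one-way is derived from the 128 GB/s bidirectional figure;
# 80 GB is a hypothetical checkpoint size, not a specific model.

WEIGHTS_GB = 80
PCIE_ONE_WAY_GBS = 64    # host-to-device direction of Gen5 x16
HBM_GBS = 2040           # on-package memory bandwidth, for comparison

pcie_load_s = WEIGHTS_GB / PCIE_ONE_WAY_GBS
hbm_sweep_s = WEIGHTS_GB / HBM_GBS

# PCIe load is on the order of a second; an HBM sweep takes tens of ms
print(f"PCIe load: ~{pcie_load_s:.2f} s; HBM sweep: ~{hbm_sweep_s * 1e3:.0f} ms")
```

That ~30x gap is why cold-start latency is dominated by the PCIe hop, and why keeping weights resident in HBM between requests matters.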
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Plan | Price | Unique GPU Models | Global Regions
Community Cloud | $1.99/hr | 25 | 17
Secure Cloud | $2.19/hr | 19 | 14

Network Storage
Enterprise-Grade Reliability
Savings Plans
24/7 Support
Delightful Dev Experience

7,035,265,000

Requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
