Cloud GPUs
Rent NVIDIA A100 PCIe GPUs from $1.64/hr
High-performance data center GPU based on Ampere architecture with 80GB HBM2e memory and 6,912 CUDA cores for AI training, machine learning, and high-performance computing workloads.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Ampere Architecture

Breakthrough architecture delivering up to 20X higher performance than the previous generation, with exceptional power efficiency.

Third-Generation Tensor Cores

Enhanced AI acceleration supporting multiple precision formats delivering significant performance gains for training and inference.

80GB HBM2e Memory

Class-leading memory bandwidth of 1.555 TB/s enables handling of the largest AI models and datasets.

Multi-Instance GPU (MIG)

Partitioning capability creates up to 7 isolated GPU instances, optimizing resource utilization for multiple workloads.
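As a rough sketch of the arithmetic behind MIG partitioning (the numbers follow NVIDIA's published smallest profile for the 80GB A100, 1g.10gb, which divides card memory into eight slices backing up to seven usable instances; this is an illustration, not an API):

```python
# Illustrative arithmetic for MIG on an 80GB A100 (assumption: NVIDIA's
# smallest profile, 1g.10gb, which carves card memory into eight slices,
# seven of which back isolated GPU instances).
TOTAL_MEMORY_GB = 80
MEMORY_SLICES = 8          # MIG divides A100 memory into 8 slices
MAX_INSTANCES = 7          # at most 7 isolated GPU instances

per_slice_gb = TOTAL_MEMORY_GB / MEMORY_SLICES
print(f"Each 1g.10gb instance gets ~{per_slice_gb:.0f} GB of HBM2e")
print(f"Seven instances together use {MAX_INSTANCES * per_slice_gb:.0f} GB")
```

In practice you would create and list instances with `nvidia-smi` on the host; the point here is simply why "up to 7" is the ceiling.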
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 1.555 TB/s

FP16 Tensor Performance: 312 TFLOPS

PCIe Gen4 ×16 Bandwidth: 63 GB/s
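To put these figures in perspective, a back-of-the-envelope calculation using the spec numbers above (the 40 GB checkpoint size is a hypothetical example):

```python
# Back-of-the-envelope timings from the spec-sheet numbers above.
# The 40 GB "model checkpoint" is a hypothetical example size.
MEM_BW_TBS = 1.555         # on-card memory bandwidth, TB/s
PCIE_BW_GBS = 63           # PCIe Gen4 x16 bandwidth, GB/s
checkpoint_gb = 40         # hypothetical model checkpoint

sweep_full_vram_s = 80 / (MEM_BW_TBS * 1000)    # read all 80 GB of HBM2e once
host_to_device_s = checkpoint_gb / PCIE_BW_GBS  # copy checkpoint over PCIe

print(f"One full pass over 80 GB of HBM2e: ~{sweep_full_vram_s * 1000:.0f} ms")
print(f"40 GB host-to-device copy over PCIe: ~{host_to_device_s:.2f} s")
```

The roughly 25:1 gap between on-card and PCIe bandwidth is why keeping working data resident in VRAM matters so much for throughput.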
Use Cases

Popular use cases.

Designed for demanding workloads. Learn whether this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Specification | Details | Great for...
Memory Bandwidth | 1.555 TB/s | Feeding massive high-res datasets and model weights into VRAM without stalls for large-scale AI training and inference.
FP16 Tensor Performance | 312 TFLOPS | Accelerating mixed-precision neural network training and inference, cutting fine-tuning time and boosting throughput.
PCIe Gen4 ×16 Bandwidth | 63 GB/s | Providing high-speed host-to-device and GPU-to-GPU transfers for smooth multi-card scaling when NVLink isn’t available.
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet: we’ve got you.

Feature | Community Cloud | Secure Cloud
Price | $1.19/hr | $1.64/hr
Unique GPU Models | 25 | 19
Global Regions | 17 | 14
Network Storage | ✖️ | ✔️
Enterprise-Grade Reliability | ✖️ | ✔️
Savings Plans | ✖️ | ✔️
24/7 Support | ✔️ | ✔️
Delightful Dev Experience | ✔️ | ✔️

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models, ready when you are.