Cloud GPUs
Rent NVIDIA A40 GPUs from $0.40/hr
A data center GPU built on the NVIDIA Ampere architecture, with 48GB of GDDR6 memory and 10,752 CUDA cores for AI workloads, professional visualization, and virtual workstation applications.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Ampere Architecture

Advanced data center architecture delivering up to 2X power efficiency for visual computing and AI workloads.

Third-Generation Tensor Cores

Enhanced AI acceleration with TF32 precision delivering up to 5X faster training throughput.
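In practice, TF32 needs no model changes beyond opting in. A minimal sketch, assuming a PyTorch training stack (PyTorch is an assumption here, not part of the spec):

```python
# Minimal sketch: opting matmuls and convolutions into TF32 on an
# Ampere-class GPU such as the A40 (assumes PyTorch with CUDA available).
import torch

torch.backends.cuda.matmul.allow_tf32 = True  # TF32 for matrix multiplies
torch.backends.cudnn.allow_tf32 = True        # TF32 for cuDNN convolutions

x = torch.randn(4096, 4096, device="cuda")
y = torch.randn(4096, 4096, device="cuda")
z = x @ y  # executed on Tensor Cores in TF32 precision
```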

48GB GDDR6 Memory

Large memory capacity with 696GB/s bandwidth for complex AI models and datasets.
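As a rough guide to how far 48GB goes, a back-of-the-envelope sketch of weight memory at a few precisions (weights only; optimizer state, activations, and KV caches add to the total):

```python
# Back-of-the-envelope estimate of model weight memory vs. the A40's 48 GB.
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    return n_params * bytes_per_param / 1024**3

for billions, dtype, nbytes in [(7, "FP16", 2), (13, "FP16", 2), (7, "FP32", 4)]:
    gb = weight_memory_gb(billions * 1e9, nbytes)
    print(f"{billions}B params in {dtype}: ~{gb:.1f} GB of 48 GB")  # ~13.0, ~24.2, ~26.1
```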

Third-Generation RT Cores

Advanced ray tracing acceleration with 2X throughput for photorealistic rendering and visual computing.
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 696 GB/s
FP16 Tensor Performance: 149.7 TFLOPS
NVLink Bandwidth: 112.5 GB/s
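To get near the FP16 Tensor figure above, mixed-precision training is the usual route. A minimal sketch, assuming PyTorch and a toy linear model (both assumptions, not part of the spec):

```python
# Minimal mixed-precision training loop sketch (PyTorch assumed; the model
# and data are placeholders, not a real workload).
import torch
from torch import nn

model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow
loss_fn = nn.MSELoss()

for _ in range(10):
    x = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(x), target)  # forward pass runs in FP16 where safe
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```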
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Memory Bandwidth: 696 GB/s
Great for: Feeding large, high-resolution image and data batches into VRAM without stalls for visualization and inference.

FP16 Tensor Performance: 149.7 TFLOPS
Great for: Accelerating mixed-precision deep learning tasks like image generation, classification, and model training.

NVLink Bandwidth: 112.5 GB/s
Great for: Enabling high-bandwidth, low-latency multi-GPU data transfers to scale workloads seamlessly across cards.
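The NVLink row matters most when a job spans more than one card. A minimal data-parallel sketch over NCCL, which routes gradient all-reduces across NVLink when the GPUs are bridged (PyTorch, torchrun, and a two-GPU pod are assumptions here):

```python
# Sketch of multi-GPU data parallelism over NCCL (uses NVLink when available).
# Launch with: torchrun --nproc_per_node=2 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # NCCL picks NVLink/PCIe paths itself
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients sync across GPUs

    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    model(x).sum().backward()                    # all-reduce happens here
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```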
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

A40 pricing: Community Cloud N/A / Secure Cloud $0.40/hr
Unique GPU Models: 25 (Community Cloud) / 19 (Secure Cloud)
Global Regions: 17 (Community Cloud) / 14 (Secure Cloud)

Platform features: Network Storage, Enterprise-Grade Reliability, Savings Plans, 24/7 Support, Delightful Dev Experience.
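Budgeting is simple arithmetic; a small sketch assuming the $0.40/hr Secure Cloud rate above (actual pricing varies by region, availability, and savings plans):

```python
# Rough cost estimate at an assumed $0.40/hr per A40 on Secure Cloud.
RATE_PER_HOUR = 0.40

def job_cost(gpus: int, hours: float, rate: float = RATE_PER_HOUR) -> float:
    return gpus * hours * rate

print(f"${job_cost(gpus=4, hours=72):.2f}")  # 4x A40 for 72 hours -> $115.20
```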

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
