Cloud GPUs
Rent NVIDIA RTX A5000 GPUs from $0.26/hr
Professional workstation GPU based on Ampere architecture with 24GB GDDR6 memory and 8,192 CUDA cores for balanced performance in AI workloads.

Powering the next generation of AI & high-performance computing.

Engineered for AI training, deep learning, and high-performance workloads, delivering a strong balance of compute power and efficiency.

NVIDIA Ampere Architecture

Advanced workstation architecture delivering up to 2.5X the FP32 performance of the previous generation for AI workflows.

Third-Generation Tensor Cores

Enhanced AI acceleration with structural sparsity support delivering up to 10X faster training performance for machine learning tasks.

24GB GDDR6 Memory

Generous memory capacity with 768 GB/s of bandwidth provides the workspace needed for large AI models and datasets.

Second-Generation RT Cores

Hardware-accelerated ray tracing with 2X faster performance enables real-time photorealistic rendering and motion blur effects.
Performance

Key specs at a glance.

Performance benchmarks that push AI, ML, and HPC workloads further.

Memory Bandwidth: 768 GB/s

FP16 Tensor Performance: 222.2 TFLOPS

NVLink Bandwidth: 112.5 GB/s
Use Cases

Popular use cases.

Designed for demanding workloads. Learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Memory Bandwidth: 768 GB/s
Great for: Feeding massive 4K/8K image tiles and high-res datasets into VRAM for real-time rendering, visualization, and high-throughput image model inference.

FP16 Tensor Performance: 222.2 TFLOPS
Great for: Speeding mixed-precision deep learning tasks like image generation, neural style transfer, and super-resolution model training (a minimal training-loop sketch follows below).

NVLink Bandwidth: 112.5 GB/s
Great for: Pooling memory and compute across two A5000s to train or run inference on extremely large image and graphics workloads without PCIe bottlenecks (a two-GPU sketch also follows below).
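
As a rough illustration of the mixed-precision workflow mentioned above, here is a minimal PyTorch training-loop sketch that uses autocast and gradient scaling so the matrix math runs in FP16 on the Tensor Cores. It assumes a CUDA-capable GPU (such as an RTX A5000) and a recent PyTorch install; the model, data shapes, and hyperparameters are placeholders, not anything prescribed by this page.

```python
# Minimal mixed-precision (FP16) training sketch in PyTorch.
# Assumes one CUDA GPU; the model and synthetic data below are placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda")  # e.g. a single RTX A5000

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # guards against FP16 gradient underflow

for step in range(100):
    # Synthetic batch standing in for a real DataLoader.
    x = torch.randn(256, 1024, device=device)
    y = torch.randint(0, 10, (256,), device=device)

    optimizer.zero_grad(set_to_none=True)
    # autocast runs matmuls/convolutions in FP16 on Tensor Cores where it is
    # numerically safe, and keeps sensitive ops in FP32.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(x), y)

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The same pattern scales to larger models; the 24GB of VRAM mainly determines how large a model and batch fit before you need to shard the model or accumulate gradients.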
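The NVLink row is about spanning a single job across two cards. Below is a minimal, illustrative model-parallel sketch in PyTorch that places half of a network on each GPU so their combined memory holds what one card cannot; the class name and layer sizes are made up for the example. When an NVLink bridge is installed, the activation copy between devices rides the 112.5 GB/s link instead of PCIe.

```python
# Illustrative two-GPU model-parallel sketch in PyTorch.
# Assumes at least two CUDA GPUs (e.g. a pair of RTX A5000s); sizes are placeholders.
import torch
import torch.nn as nn

assert torch.cuda.device_count() >= 2, "this sketch expects two GPUs"
# Optional sanity check: can the two devices access each other's memory directly?
print("peer access 0<->1:", torch.cuda.can_device_access_peer(0, 1))

class TwoStageModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the network lives on GPU 0, second half on GPU 1.
        self.stage0 = nn.Sequential(nn.Linear(4096, 8192), nn.ReLU()).to("cuda:0")
        self.stage1 = nn.Sequential(nn.Linear(8192, 4096), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        x = self.stage0(x.to("cuda:0"))
        # This device-to-device copy is the hop an NVLink bridge accelerates.
        return self.stage1(x.to("cuda:1"))

model = TwoStageModel()
out = model(torch.randn(64, 4096))
print(out.shape, out.device)  # torch.Size([64, 4096]) cuda:1
```

For data-parallel training across the two cards, the usual route is torch.nn.parallel.DistributedDataParallel launched with torchrun, which likewise benefits from the faster inter-GPU link during gradient all-reduce.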
Comparison

Powerful GPUs. Globally available. Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Community Cloud: $0.16/hr
Secure Cloud: $0.26/hr

Unique GPU Models: 25 (Community Cloud) vs. 19 (Secure Cloud)
Global Regions: 17 (Community Cloud) vs. 14 (Secure Cloud)

Other comparison points across both tiers: Network Storage, Enterprise-Grade Reliability, Savings Plans, 24/7 Support, and Delightful Dev Experience.

7,035,265,000 requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
