Cloud GPUs
Rent NVIDIA RTX A4000 GPUs from $0.24/hr
Professional single-slot GPU based on Ampere architecture with 16GB GDDR6 memory and 6,144 CUDA cores for AI workloads, machine learning, and compact workstation builds.

Powering the next generation of AI & high-performance computing.

Engineered for large-scale AI training, deep learning, and high-performance workloads, delivering unprecedented compute power and efficiency.

NVIDIA Ampere Architecture

Power-efficient workstation architecture delivering up to 2.7X the FP32 performance of the previous generation in a single-slot form factor.

Third-Generation Tensor Cores

Enhanced AI acceleration with structural sparsity support delivering up to 11X faster training performance for machine learning tasks.

16GB GDDR6 Memory

Substantial memory capacity with 448GB/s bandwidth enables AI model training and inference on medium to large datasets.

Single-Slot Design

Most powerful single-slot professional GPU with 140W power consumption, fitting into compact workstations and space-constrained systems.
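
The Tensor Core and 16GB memory features above translate directly into standard mixed-precision training. Below is a minimal PyTorch sketch, not a benchmark or an official recipe; the toy model, batch size, and hyperparameters are placeholders you would replace with your own workload.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy model and synthetic data, purely illustrative; swap in your own architecture and loader.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()              # keeps FP16 gradients from underflowing

inputs = torch.randn(256, 1024, device="cuda")
targets = torch.randint(0, 10, (256,), device="cuda")

for step in range(100):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():               # matmuls run in FP16 on the Tensor Cores
        loss = F.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```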
Performance

Key specs at a glance.

Performance figures that show how far you can push AI, ML, and HPC workloads.

Memory Bandwidth: 448 GB/s

FP16 Performance: 19.17 TFLOPS

PCIe Gen4 ×16 Bandwidth: 63 GB/s
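
For context, the 63 GB/s PCIe figure is the bidirectional theoretical peak of a Gen4 ×16 link (roughly 31.5 GB/s per direction). A rough, hedged way to see what a single card actually delivers is to time a pinned-memory host-to-device copy in PyTorch; the 1 GiB buffer size below is arbitrary.

```python
import time
import torch

# Time a single 1 GiB pinned-memory host-to-device copy; measured one-way throughput
# will sit well below the 63 GB/s bidirectional peak of PCIe Gen4 x16.
size_bytes = 1 << 30
host_buf = torch.empty(size_bytes, dtype=torch.uint8, pin_memory=True)

torch.cuda.synchronize()
start = time.perf_counter()
device_buf = host_buf.to("cuda", non_blocking=True)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"Host-to-device throughput: {size_bytes / elapsed / 1e9:.1f} GB/s")
```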
Use Cases

Popular use cases.

Designed for demanding workloads—learn if this GPU fits your needs.
Technical Specs

Ready for your most demanding workloads.

Essential technical specifications to help you choose the right GPU for your workload.

Specification | Details | Great for...
Memory Bandwidth | 448 GB/s | Feeding large, high-resolution image and 3D datasets into GPU memory for real-time rendering and AI image-model inference.
FP16 Performance | 19.17 TFLOPS | Accelerating convolution and transformer operations in image-generation and classification workloads.
PCIe Gen4 ×16 Bandwidth | 63 GB/s | High-speed GPU-to-GPU and host-to-device transfers in multi-card image-model training when NVLink isn’t supported.
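
Once an instance is provisioned, a quick sanity check with PyTorch confirms the numbers in the table above; device index 0 is assumed to be the rented A4000.

```python
import torch

# Confirm the card matches the advertised A4000 specs.
props = torch.cuda.get_device_properties(0)
print(props.name)                                          # e.g. "NVIDIA RTX A4000"
print(f"Memory: {props.total_memory / 1024**3:.1f} GiB")   # ~16 GiB GDDR6
print(f"SMs: {props.multi_processor_count}")               # 48 SMs x 128 = 6,144 CUDA cores
print(f"Compute capability: {props.major}.{props.minor}")  # 8.6 (Ampere)
```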
Comparison

Powerful GPUs. Globally available.
Reliability you can trust.

30+ GPUs, 31 regions, instant scale. Fine-tune or go full Skynet—we’ve got you.

Community Cloud
$0.17/hr

Secure Cloud
$0.24/hr

Unique GPU Models: 25 / 19
Global Regions: 17 / 14

Network Storage
Enterprise-Grade Reliability
Savings Plans
24/7 Support
Delightful Dev Experience

7,035,265,000

Requests since launch & 400k developers worldwide

Build what’s next.

The most cost-effective platform for building, training, and scaling machine learning models—ready when you are.
