
GPU Cloud Pricing

Powerful & cost-effective GPUs built to support any workload.
GPUs are billed by the minute. No fees for ingress/egress.
Thousands of GPUs across 30+ Regions
Deploy any container on Secure Cloud. Public and private image repos are supported. Configure your environment the way you want.
Zero fees for ingress/egress
Global interoperability
99.99% Uptime
$0.05/GB/month Network Storage
Multi-region Support
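Because GPUs are billed by the minute, estimating a pod's cost is a one-line calculation: the hourly rate divided by 60, times minutes used. A minimal sketch in Python (the helper name and cent rounding are illustrative assumptions, not part of any Runpod API):

```python
def pod_cost(hourly_rate: float, minutes: int) -> float:
    """Estimate per-minute billing: hourly rate / 60, times minutes used.

    Rounding to the nearest cent is an assumption for readability;
    actual invoice rounding may differ.
    """
    return round(hourly_rate / 60 * minutes, 2)

# e.g. a $2.79/hr H100 NVL pod running for 95 minutes:
print(pod_cost(2.79, 95))  # ≈ $4.42
```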

80GB+ VRAM

| GPU | VRAM | RAM | vCPUs | Secure Cloud | Community Cloud |
|-----|------|-----|-------|--------------|-----------------|
| H100 NVL | 94GB | 180GB | 16 | $2.79/hr | $2.79/hr |
| H200 SXM | 143GB | — | — | $3.99/hr | — |
| MI300X (AMD) | 192GB | 283GB | 24 | $3.49/hr | — |

80GB VRAM

| GPU | VRAM | RAM | vCPUs | Secure Cloud | Community Cloud |
|-----|------|-----|-------|--------------|-----------------|
| H100 PCIe | 80GB | 188GB | 32 | $2.69/hr | $2.69/hr |
| H100 SXM | 80GB | 125GB | 24 | $2.99/hr | $2.99/hr |
| A100 PCIe | 80GB | 83GB | 8 | $1.64/hr | $1.19/hr |
| A100 SXM | 80GB | 125GB | 16 | $1.89/hr | — |

48GB VRAM

| GPU | VRAM | RAM | vCPUs | Secure Cloud | Community Cloud |
|-----|------|-----|-------|--------------|-----------------|
| A40 | 48GB | 50GB | 9 | $0.39/hr | $0.47/hr |
| L40 | 48GB | 125GB | 16 | $0.99/hr | — |
| L40S | 48GB | 62GB | 12 | $1.03/hr | $0.79/hr |
| RTX A6000 | 48GB | 50GB | 8 | $0.76/hr | $0.49/hr |
| RTX 6000 Ada | 48GB | 62GB | 14 | $0.99/hr | $0.74/hr |

24GB VRAM and Under

| GPU | VRAM | RAM | vCPUs | Secure Cloud | Community Cloud |
|-----|------|-----|-------|--------------|-----------------|
| RTX A5000 | 24GB | 24GB | 4 | $0.43/hr | $0.22/hr |
| RTX 4090 | 24GB | 26GB | 4 | $0.69/hr | $0.44/hr |
| RTX 3090 | 24GB | 24GB | 4 | $0.43/hr | $0.22/hr |
| RTX 3090 Ti | 24GB | — | — | — | $0.27/hr |
| A30 | 24GB | 31GB | 8 | — | $0.22/hr |
| L4 | 24GB | 50GB | 12 | $0.43/hr | — |
| RTX A4500 | 20GB | 29GB | 4 | $0.35/hr | $0.19/hr |
| RTX 4000 Ada | 20GB | 47GB | 9 | $0.38/hr | $0.20/hr |
| RTX A4000 | 16GB | 17GB | 4 | $0.32/hr | $0.17/hr |
| Tesla V100 | 16GB | 39GB | 4 | — | $0.19/hr |
| RTX 2000 Ada | 16GB | 31GB | 6 | $0.28/hr | — |
| RTX 3080 | 10GB | 15GB | 8 | — | $0.17/hr |

Storage Pricing

Flexible and cost-effective storage for every workload. No fees for ingress/egress.
Persistent and temporary storage available.
Over 100PB of storage available across North America and Europe.
Customize your pod volume and container disk in a few clicks, and access additional persistent storage with network volumes.
Zero fees for ingress/egress
Global interoperability
NVMe SSD
Multi-Region Support

Pod Storage

| Storage Type | Running Pods | Idle Pods |
|--------------|--------------|-----------|
| Volume | $0.10/GB/month | $0.20/GB/month |
| Container Disk | $0.10/GB/month | $0.20/GB/month |

Persistent Network Storage

| Storage Type | Under 1TB | Over 1TB |
|--------------|-----------|----------|
| Network Volume | $0.07/GB/month | $0.05/GB/month |
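A worked example for the network volume rates above. This sketch assumes the whole volume is billed at the single rate its total size falls into (whether the lower rate applies to the entire volume or only to the portion above 1TB isn't stated here), and treats 1TB as 1000GB:

```python
def network_volume_monthly_cost(size_gb: float) -> float:
    """Monthly network volume cost.

    Assumptions (not stated on this page): the whole volume is billed
    at the rate its total size falls into, and 1TB = 1000GB.
    """
    rate = 0.05 if size_gb >= 1000 else 0.07  # $/GB/month
    return round(size_gb * rate, 2)

print(network_volume_monthly_cost(500))   # 500GB at $0.07 → $35.00
print(network_volume_monthly_cost(2000))  # 2TB at $0.05 → $100.00
```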

Serverless Pricing

Save 15% over other Serverless cloud providers on flex workers alone.
Create active workers and configure queue delay for even more savings.
| VRAM | GPU Models | Flex ($/sec) | Active ($/sec) | Notes |
|------|------------|--------------|----------------|-------|
| 80 GB | A100 | $0.00076 | $0.00060 | High-throughput GPU, yet still very cost-effective. |
| 80 GB | H100 (Pro) | $0.00155 | $0.00124 | Extreme throughput for big models. |
| 48 GB | A6000, A40 | $0.00034 | $0.00024 | A cost-effective option for running big models. |
| 48 GB | L40, L40S, 6000 Ada (Pro) | $0.00053 | $0.00037 | Extreme inference throughput on LLMs like Llama 3 8B. |
| 24 GB | L4, A5000, 3090 | $0.00019 | $0.00013 | Great for small-to-medium inference workloads. |
| 24 GB | 4090 (Pro) | $0.00031 | $0.00021 | Extreme throughput for small-to-medium models. |
| 16 GB | A4000, A4500, RTX 4000 | $0.00016 | $0.00011 | The most cost-effective option for small models. |
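The flex/active split lends itself to a quick break-even calculation: flex workers bill only while processing requests, while an active worker bills around the clock at the lower rate. A rough sketch, assuming the flex/active prices above are per GPU-second and a 30-day month (the utilization figure is illustrative):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month assumed

def flex_monthly_cost(flex_rate: float, utilization: float) -> float:
    """Flex workers bill only for the seconds spent processing requests."""
    return flex_rate * utilization * SECONDS_PER_MONTH

def active_monthly_cost(active_rate: float) -> float:
    """An always-on active worker bills every second at the discounted rate."""
    return active_rate * SECONDS_PER_MONTH

# 24 GB tier (L4/A5000/3090): flex $0.00019/s, active $0.00013/s.
# At 40% utilization flex is still cheaper; an active worker wins
# once utilization exceeds 0.00013 / 0.00019 ≈ 68%.
print(f"flex:   ${flex_monthly_cost(0.00019, 0.40):,.2f}")
print(f"active: ${active_monthly_cost(0.00013):,.2f}")
```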
Are you an early-stage startup or ML researcher?
Get up to $25K in free compute credits with Runpod. These can be used towards on-demand GPUs and Serverless endpoints.
Apply
We're with you from seed to scale
Book a call with our sales team to learn more.
Gain Additional Savings with Reservations
Save more by committing to longer-term usage. Reserve discounted active and flex workers by speaking with our team.
Book a call