RunPod
GPU Cloud Pricing
Powerful & cost-effective GPUs built to support any workload.
GPUs are billed by the minute. No fees for ingress/egress.
Thousands of GPUs across 30+ Regions
Deploy any container on Secure Cloud. Public and private image repos are supported. Configure your environment the way you want.
Zero fees for ingress/egress
Global interoperability
99.99% Uptime
$0.05/GB/month Network Storage
Multi-region Support
192GB VRAM

GPU     VRAM   RAM    vCPUs  Secure Cloud  Community Cloud
MI300X  192GB  283GB  24     $3.99/hr      -
80GB VRAM

GPU        VRAM  RAM    vCPUs  Secure Cloud  Community Cloud
H100 PCIe  80GB  176GB  16     $3.29/hr      $2.69/hr
H100 SXM   80GB  125GB  16     $3.99/hr      $2.99/hr
A100 PCIe  80GB  83GB   8      $1.69/hr      $1.19/hr
A100 SXM   80GB  125GB  16     $1.94/hr      -
48GB VRAM

GPU           VRAM  RAM    vCPUs  Secure Cloud  Community Cloud
A40           48GB  50GB   9      $0.35/hr      $0.47/hr
L40           48GB  250GB  16     $0.99/hr      -
L40S          48GB  62GB   12     $1.19/hr      $0.89/hr
RTX A6000     48GB  50GB   8      $0.76/hr      $0.49/hr
RTX 6000 Ada  48GB  62GB   14     $1.03/hr      $0.79/hr
24GB VRAM and under

GPU            VRAM  RAM   vCPUs  Secure Cloud  Community Cloud
RTX A5000      24GB  24GB  8      $0.43/hr      $0.22/hr
RTX 4090       24GB  24GB  6      $0.69/hr      $0.44/hr
RTX 3090       24GB  24GB  4      $0.43/hr      $0.22/hr
RTX 3090 Ti    24GB  30GB  14     -             $0.27/hr
A30            24GB  31GB  8      -             $0.22/hr
RTX A4500      20GB  31GB  12     $0.35/hr      $0.19/hr
RTX A4000 Ada  20GB  31GB  4      $0.38/hr      $0.20/hr
RTX A4000      16GB  17GB  4      $0.32/hr      $0.17/hr
RTX 3080       10GB  30GB  6      -             $0.17/hr
RTX 3070       8GB   26GB  4      -             $0.13/hr
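Because pods are billed by the minute, the cost of a short job follows directly from the hourly rate. A minimal sketch in Python (the `pod_cost` helper is illustrative, not part of any RunPod API; the example rate comes from the table above):

```python
def pod_cost(hourly_rate_usd: float, minutes: int) -> float:
    """Estimate pod cost under per-minute billing at a given hourly rate."""
    per_minute = hourly_rate_usd / 60
    return round(per_minute * minutes, 4)

# 90 minutes on an RTX 4090 Community Cloud pod at $0.44/hr
print(pod_cost(0.44, 90))  # 0.66
```

Per-minute billing means a 90-minute run costs exactly 1.5x the hourly rate, with no rounding up to the next full hour.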
Storage Pricing
Flexible and cost-effective storage for every workload. No fees for ingress/egress.
Persistent and temporary storage available.
Over 100PB of storage available across North America and Europe.
Customize your pod volume and container disk in a few clicks, and access additional persistent storage with network volumes.
Zero fees for ingress/egress
Global interoperability
NVMe SSD
Multi-Region Support
Pod Storage

Storage Type    Running Pods    Idle Pods
Volume          $0.10/GB/Month  $0.20/GB/Month
Container Disk  $0.10/GB/Month  $0.20/GB/Month

Persistent Network Storage

Storage Type    Under 1TB       Over 1TB
Network Volume  $0.07/GB/Month  $0.05/GB/Month
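The network volume tiers translate into a quick monthly-cost estimate. A minimal sketch, assuming the per-GB rate is selected by the volume's total size (the helper and the 1 TB = 1000 GB cutoff are illustrative assumptions, not a RunPod API):

```python
def network_volume_monthly_cost(size_gb: int) -> float:
    """Monthly cost of a network volume, assuming the whole volume bills
    at one rate chosen by size: under 1 TB at $0.07/GB, otherwise $0.05/GB."""
    rate = 0.07 if size_gb < 1000 else 0.05
    return round(size_gb * rate, 2)

print(network_volume_monthly_cost(500))   # 35.0
print(network_volume_monthly_cost(2000))  # 100.0
```

Note the tier break: a 2 TB volume costs less per GB than a 500 GB one, so consolidating data into one large volume can be cheaper than several small ones.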
Serverless Pricing
Save 15% over other Serverless cloud providers on flex workers alone.
Create active workers and configure queue delay for even more savings.
All prices are per second of GPU time; "Pro" marks premium-tier workers.

GPU         VRAM   Flex        Active      Notes
A100        80 GB  $0.00076/s  $0.00060/s  High-throughput GPU, yet still very cost-effective.
H100 (Pro)  80 GB  $0.00155/s  $0.00124/s  Extreme throughput for big models.
A6000       48 GB  $0.00034/s  $0.00024/s  A cost-effective option for running big models.
L40 (Pro)   48 GB  $0.00053/s  $0.00037/s  Extreme inference throughput on LLMs like Llama 3 8B.
A5000       24 GB  $0.00019/s  $0.00013/s  Great for small-to-medium inference workloads.
4090 (Pro)  24 GB  $0.00031/s  $0.00021/s  Extreme throughput for small-to-medium models.
A4000       16 GB  $0.00016/s  $0.00011/s  The most cost-effective for small models.
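Combining the flex and active per-second rates gives a rough spend estimate for a mixed deployment. A minimal sketch, assuming billing is simply rate x billed seconds for each worker type (the helper is illustrative, not a RunPod API):

```python
def serverless_cost(flex_rate: float, active_rate: float,
                    flex_seconds: float, active_seconds: float) -> float:
    """Estimate serverless spend: always-on active workers bill at the
    lower active rate; on-demand flex workers at the flex rate.
    Rates are USD per second."""
    return round(flex_rate * flex_seconds + active_rate * active_seconds, 2)

# One day on A100 workers: 2 hours of flex bursts plus one always-on active worker
flex_s = 2 * 3600
active_s = 24 * 3600
print(serverless_cost(0.00076, 0.00060, flex_s, active_s))  # 57.31
```

The active rate only pays off for workers that stay busy; traffic that arrives in bursts is usually cheaper on flex workers despite the higher per-second price.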
Are you an early-stage startup or ML researcher?
Get up to $25K in free compute credits with RunPod. These can be used towards on-demand GPUs and Serverless endpoints.
We're with you from seed to scale
Book a call with our sales team to learn more.
Gain Additional Savings with Reservations
Save more by committing to longer-term usage. Reserve discounted active and flex workers by speaking with our team.
Contact Us
Discord
help@runpod.io
referrals@runpod.io
press@runpod.io
Copyright © 2024 RunPod. All rights reserved.