September 25th, 2025

Deploy ComfyUI as a Serverless API Endpoint

Eliot Cowley

In a previous blog post, we explored Runpod Serverless, a pay-as-you-go cloud computing solution that doesn’t require managing servers to scale and maintain your applications. We deployed some basic code from templates that just printed some text to the console, but now let’s do something more performance-intensive.

ComfyUI is an open-source, node-based application for generative AI workflows. You can deploy ComfyUI as an API endpoint on Runpod Serverless, send workflows via API calls, and receive AI-generated images in response.

What you’ll learn

In this blog post you’ll learn how to:

  • Deploy ComfyUI to a serverless endpoint using the Runpod Hub and Docker images
  • Call the endpoint in Python and generate images based on ComfyUI workflows
  • Use different models with ComfyUI

Requirements

Deploy ComfyUI from Runpod Hub with FLUX.1-dev model

Runpod Hub provides convenient repositories that you can quickly deploy to Runpod Serverless without much setup. Let’s deploy the ComfyUI repo from Runpod Hub to a serverless endpoint, which will allow us to make requests to it from code.

  1. Sign in to the Runpod Console.
  2. Select Serverless from the left menu.
  3. Under Ready-to-Deploy Repos, select ComfyUI.

This is a ready-to-deploy template from the Runpod Hub. It uses the FLUX.1-dev-fp8 model and only works with this model. Later in this post, we will deploy this template with other models using Docker.

  4. Select Deploy to deploy the latest version of the template.
  5. In the Configure ComfyUI dialog, check Refresh Worker. This will ensure that the worker stops after each finished job. Then, select Next.
  6. In the Deploy ComfyUI dialog, select Create Endpoint.
  7. On the endpoint overview page, wait for the status to say Ready.
  1. Let’s call our endpoint using Python. Create a folder on your computer for this project and open it in your preferred code editor (I’ll be using VSCodium).
  2. Set up the development environment by following Prerequisites.
  3. In your virtual environment, create a Python file (name it whatever you like).
  4. Add the following import statements at the top of the file:
import base64
import requests

Requests to ComfyUI return images in the form of base-64 strings by default, so we need the base64 library to decode them.

The requests library helps us send requests to our API endpoint.
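As a quick illustration of the decoding step, `base64.b64decode` converts a base-64 string back into the raw bytes it encodes, and the round trip is lossless:

```python
import base64

# Encode some bytes to a base-64 string, then decode them back.
original = b"\x89PNG example bytes"
encoded = base64.b64encode(original).decode("ascii")
decoded = base64.b64decode(encoded)

print(decoded == original)  # True: the round trip is lossless
```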

  5. Add the following headers, replacing <YOUR API KEY> with your Runpod API key:
headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer <YOUR API KEY>'
}
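Hardcoding the key is fine for a quick test, but you may prefer to read it from an environment variable so it never lands in source control. The variable name `RUNPOD_API_KEY` here is just a convention, not something Runpod requires:

```python
import os

# Read the API key from the environment, falling back to a placeholder
# so the snippet still runs before the variable is set.
api_key = os.environ.get('RUNPOD_API_KEY', '<YOUR API KEY>')

headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {api_key}',
}
```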
  6. Next, let's add the ComfyUI workflow, which defines the parameters ComfyUI should use to generate images. You can create your own using ComfyUI, or use an example like this one:
data = {
  "input": {
    "workflow": {
      "6": {
        "inputs": {
          "text": "anime cat with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a construction outfit placing a fancy black forest cake with candles on top of a dinner table of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere there are paintings on the walls",
          "clip": ["30", 1]
        },
        "class_type": "CLIPTextEncode",
        "_meta": {
          "title": "CLIP Text Encode (Positive Prompt)"
        }
      },
      "8": {
        "inputs": {
          "samples": ["31", 0],
          "vae": ["30", 2]
        },
        "class_type": "VAEDecode",
        "_meta": {
          "title": "VAE Decode"
        }
      },
      "9": {
        "inputs": {
          "filename_prefix": "ComfyUI",
          "images": ["8", 0]
        },
        "class_type": "SaveImage",
        "_meta": {
          "title": "Save Image"
        }
      },
      "27": {
        "inputs": {
          "width": 512,
          "height": 512,
          "batch_size": 1
        },
        "class_type": "EmptySD3LatentImage",
        "_meta": {
          "title": "EmptySD3LatentImage"
        }
      },
      "30": {
        "inputs": {
          "ckpt_name": "flux1-dev-fp8.safetensors"
        },
        "class_type": "CheckpointLoaderSimple",
        "_meta": {
          "title": "Load Checkpoint"
        }
      },
      "31": {
        "inputs": {
          "seed": 243057879077961,
          "steps": 10,
          "cfg": 1,
          "sampler_name": "euler",
          "scheduler": "simple",
          "denoise": 1,
          "model": ["30", 0],
          "positive": ["35", 0],
          "negative": ["33", 0],
          "latent_image": ["27", 0]
        },
        "class_type": "KSampler",
        "_meta": {
          "title": "KSampler"
        }
      },
      "33": {
        "inputs": {
          "text": "",
          "clip": ["30", 1]
        },
        "class_type": "CLIPTextEncode",
        "_meta": {
          "title": "CLIP Text Encode (Negative Prompt)"
        }
      },
      "35": {
        "inputs": {
          "guidance": 3.5,
          "conditioning": ["6", 0]
        },
        "class_type": "FluxGuidance",
        "_meta": {
          "title": "FluxGuidance"
        }
      },
      "38": {
        "inputs": {
          "images": ["8", 0]
        },
        "class_type": "PreviewImage",
        "_meta": {
          "title": "Preview Image"
        }
      },
      "40": {
        "inputs": {
          "filename_prefix": "ComfyUI",
          "images": ["8", 0]
        },
        "class_type": "SaveImage",
        "_meta": {
          "title": "Save Image"
        }
      }
    }
  }
}
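If you build your own workflow in ComfyUI and export it as API-format JSON, you can load it from a file instead of inlining the dictionary. A small sketch (the filename `workflow_api.json` is just an example of what an export might be called):

```python
import json

# Load an API-format workflow exported from ComfyUI and wrap it in the
# payload shape the serverless worker expects.
def load_workflow(path):
    with open(path, 'r', encoding='utf-8') as f:
        workflow = json.load(f)
    return {'input': {'workflow': workflow}}

# data = load_workflow('workflow_api.json')  # hypothetical filename
```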
  7. Back in the Runpod console, on the page for your endpoint, select the Requests tab. Next to the Run button, select the drop-down arrow and select RunSync. The /runsync endpoint is for synchronous requests, which wait for the job to complete and return the result directly. Copy the URL in the text box; this is your endpoint URL. Send the request to your endpoint and store the first image from the response in a variable:
response = requests.post(
    '<YOUR ENDPOINT URL>',
    headers=headers,
    json=data,
)
response_json = response.json()
base64_string = response_json['output']['images'][0]['data']
  8. Convert the base-64 string into an image file:
imgdata = base64.b64decode(base64_string)
filename = 'image.png'
with open(filename, 'wb') as f:
    f.write(imgdata)
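The snippet above saves only the first image. If your workflow produces several outputs (the example workflow has two SaveImage nodes), you can loop over the whole images list instead; this helper assumes the same response shape used above:

```python
import base64

def save_images(response_json, prefix='image'):
    """Decode every base-64 image in the response and write it to disk."""
    filenames = []
    for i, image in enumerate(response_json['output']['images']):
        filename = f'{prefix}_{i}.png'
        with open(filename, 'wb') as f:
            f.write(base64.b64decode(image['data']))
        filenames.append(filename)
    return filenames
```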
  9. Run the program and open the image file it outputs. It should be an image that the FLUX.1-dev-fp8 model generated based on the description in input.workflow.6.inputs.text (in the case of the example, a cat placing a birthday cake on a table).
    Here is the full code example:
import base64
import requests

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer <YOUR API KEY>'
}

data = {
  "input": {
    "workflow": {
      "6": {
        "inputs": {
          "text": "anime cat with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a construction outfit placing a fancy black forest cake with candles on top of a dinner table of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere there are paintings on the walls",
          "clip": ["30", 1]
        },
        "class_type": "CLIPTextEncode",
        "_meta": {
          "title": "CLIP Text Encode (Positive Prompt)"
        }
      },
      "8": {
        "inputs": {
          "samples": ["31", 0],
          "vae": ["30", 2]
        },
        "class_type": "VAEDecode",
        "_meta": {
          "title": "VAE Decode"
        }
      },
      "9": {
        "inputs": {
          "filename_prefix": "ComfyUI",
          "images": ["8", 0]
        },
        "class_type": "SaveImage",
        "_meta": {
          "title": "Save Image"
        }
      },
      "27": {
        "inputs": {
          "width": 512,
          "height": 512,
          "batch_size": 1
        },
        "class_type": "EmptySD3LatentImage",
        "_meta": {
          "title": "EmptySD3LatentImage"
        }
      },
      "30": {
        "inputs": {
          "ckpt_name": "flux1-dev-fp8.safetensors"
        },
        "class_type": "CheckpointLoaderSimple",
        "_meta": {
          "title": "Load Checkpoint"
        }
      },
      "31": {
        "inputs": {
          "seed": 243057879077961,
          "steps": 10,
          "cfg": 1,
          "sampler_name": "euler",
          "scheduler": "simple",
          "denoise": 1,
          "model": ["30", 0],
          "positive": ["35", 0],
          "negative": ["33", 0],
          "latent_image": ["27", 0]
        },
        "class_type": "KSampler",
        "_meta": {
          "title": "KSampler"
        }
      },
      "33": {
        "inputs": {
          "text": "",
          "clip": ["30", 1]
        },
        "class_type": "CLIPTextEncode",
        "_meta": {
          "title": "CLIP Text Encode (Negative Prompt)"
        }
      },
      "35": {
        "inputs": {
          "guidance": 3.5,
          "conditioning": ["6", 0]
        },
        "class_type": "FluxGuidance",
        "_meta": {
          "title": "FluxGuidance"
        }
      },
      "38": {
        "inputs": {
          "images": ["8", 0]
        },
        "class_type": "PreviewImage",
        "_meta": {
          "title": "Preview Image"
        }
      },
      "40": {
        "inputs": {
          "filename_prefix": "ComfyUI",
          "images": ["8", 0]
        },
        "class_type": "SaveImage",
        "_meta": {
          "title": "Save Image"
        }
      }
    }
  }
}

response = requests.post('<YOUR ENDPOINT URL>', headers=headers, json=data)
response_json = response.json()
base64_string = response_json['output']['images'][0]['data']

imgdata = base64.b64decode(base64_string)
filename = 'image.png'
with open(filename, 'wb') as f:
    f.write(imgdata)
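The /runsync route blocks until the job finishes, which can be inconvenient for long-running generations. Runpod serverless endpoints also expose an asynchronous /run route that returns a job ID you can poll via /status/{id}. A sketch of that pattern, where the endpoint ID in the base URL is a placeholder you fill in from your endpoint's Requests tab:

```python
import time
import requests

BASE_URL = 'https://api.runpod.ai/v2/<YOUR ENDPOINT ID>'

def run_async(data, headers, poll_seconds=2):
    """Submit a job to /run, then poll /status/{id} until it finishes."""
    job = requests.post(f'{BASE_URL}/run', headers=headers, json=data).json()
    job_id = job['id']
    while True:
        status = requests.get(
            f'{BASE_URL}/status/{job_id}', headers=headers).json()
        if status['status'] in ('COMPLETED', 'FAILED'):
            return status
        time.sleep(poll_seconds)
```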

Deploy ComfyUI with a different model

The ComfyUI template on the Runpod Hub makes it easy to deploy as a serverless endpoint, but it is restricted to the FLUX.1-dev-fp8 model. If you want to use a different model, you can use the worker-comfyui repository on GitHub.

Runpod provides official container images on Docker Hub that deploy ComfyUI with various models. In this tutorial, we will use one of these images, but if you want to use a model that Runpod does not have an image for, you can use the latest base image and supply your own model.

  1. Log in to the Runpod Console. Open the Serverless page and select New Endpoint.
  2. On the Deploy a New Serverless Endpoint page, select Import from Docker Registry.
  3. On the Container Configuration page, paste the name of the container image from Docker Hub, then select Next. In my case, I am using the Stable Diffusion 3 Medium model.
  4. Enter an Endpoint Name, and select the GPU Configuration for your model based on the minimum VRAM required in GPU recommendations. For Stable Diffusion 3 Medium, I chose 16 GB.
  5. Open Container Configuration and set the Container Disk to the recommended container size for your model in GPU recommendations. For Stable Diffusion 3 Medium, I used 20 GB.
  6. Select Deploy Endpoint.
  7. Wait for the status to be Ready. Then, select the Requests tab. Select the drop-down arrow next to Run and select RunSync. Copy the new endpoint URL and paste it into the Python program we wrote earlier (in the requests.post() call).
  8. Create a workflow for your model in ComfyUI and paste the JSON into the data variable. Runpod has some example workflows in the worker-comfyui repository on GitHub. For Stable Diffusion 3 Medium, I used this workflow.
  9. Run the program and check the output. It should generate an image based on the new workflow.
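Switching models often comes down to changing the checkpoint node in the workflow. A small helper like this rewrites every CheckpointLoaderSimple node; the checkpoint filename in the commented usage is illustrative, so check which model files your container image actually ships:

```python
def set_checkpoint(workflow, ckpt_name):
    """Point every checkpoint-loader node in a workflow at a new model file."""
    for node in workflow.values():
        if node.get('class_type') == 'CheckpointLoaderSimple':
            node['inputs']['ckpt_name'] = ckpt_name
    return workflow

# Hypothetical filename; use the one bundled in your container image.
# workflow = set_checkpoint(data['input']['workflow'],
#                           'sd3_medium_incl_clips.safetensors')
```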

Next steps

Congratulations, you successfully deployed ComfyUI to a serverless endpoint from both a Runpod Hub repository and a Docker image! Runpod provides many ways to quickly start running common AI workloads without much setup.

Is there a particular model that you want to use with ComfyUI, but that isn't in any of Runpod's Docker images? Try customizing your setup by creating your own Dockerfile starting from one of the base images and baking the model you want into your image. Then deploy it to Runpod either from Docker or your own GitHub repository.
