
ComfyUI Upscaler TensorRT ⚡


This project provides a TensorRT implementation for fast image upscaling using models inside ComfyUI (2-4x faster).

⭐ Support

If you like my projects and wish to see updates and new features, please consider supporting me. It helps a lot!

ComfyUI-Depth-Anything-Tensorrt ComfyUI-Upscaler-Tensorrt ComfyUI-Dwpose-Tensorrt ComfyUI-Rife-Tensorrt

ComfyUI-Whisper ComfyUI_InvSR ComfyUI-Thera ComfyUI-Video-Depth-Anything ComfyUI-PiperTTS


⏱️ Performance

Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 100 identical frames

| Device  | Model         | Input Resolution (WxH) | Output Resolution (WxH) | FPS  |
|---------|---------------|------------------------|-------------------------|------|
| RTX5090 | 4x-UltraSharp | 512 x 512              | 2048 x 2048             | 12.7 |
| RTX5090 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120             | 2.0  |
| RTX4090 | 4x-UltraSharp | 512 x 512              | 2048 x 2048             | 6.7  |
| RTX4090 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120             | 1.1  |
| RTX3060 | 4x-UltraSharp | 512 x 512              | 2048 x 2048             | 2.2  |
| RTX3060 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120             | 0.35 |

🚀 Installation

  • Install via the ComfyUI manager
  • Or, clone manually into the /ComfyUI/custom_nodes directory:

    git clone https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt.git
    cd ./ComfyUI-Upscaler-Tensorrt
    pip install -r requirements.txt

🛠️ Supported Models

☀️ Usage

  • Load the example workflow
  • Choose the appropriate model from the dropdown
  • The TensorRT engine will be built automatically
  • Load an image with a resolution between 256 px and 1280 px
  • Set resize_to to resize the upscaled images to a fixed resolution
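The 256-1280 px input range above can be checked up front before a frame ever reaches the engine. A minimal sketch (the helper name and error message are illustrative assumptions; only the 256-1280 px bounds come from this README):

```python
# Minimal sketch: validate that an image fits the 256-1280 px input range
# supported by the TensorRT engines (per this README). The function name
# and error wording are illustrative, not part of the node's actual API.
MIN_SIDE, MAX_SIDE = 256, 1280

def validate_input_resolution(width: int, height: int) -> None:
    """Raise ValueError if either dimension is outside the supported range."""
    for name, side in (("width", width), ("height", height)):
        if not (MIN_SIDE <= side <= MAX_SIDE):
            raise ValueError(
                f"Unsupported {name} {side}px: must be between "
                f"{MIN_SIDE}px and {MAX_SIDE}px"
            )

validate_input_resolution(512, 512)  # within range, no error
```

Failing fast like this mirrors the soft-lock fix mentioned in the Updates section, where unsupported dimensions now raise an error instead of hanging the workflow.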

🔧 Custom Models

  • To export other ESRGAN models, you'll first have to build the ONNX model using export_onnx.py
  • Place the ONNX model in /ComfyUI/models/onnx/YOUR_MODEL.onnx
  • Then, add your model to this list, as shown:
    "model": (["4x-AnimeSharp", "4x-UltraSharp", "4x-WTP-UDS-Esrgan", "4x_NMKD-Siax_200k", "4x_RealisticRescaler_100000_G", "4x_foolhardy_Remacri", "RealESRGAN_x4"], {"default": "4x-UltraSharp", "tooltip": "These models have been tested with tensorrt"}),
  • Finally, run the same workflow and choose your model
  • If you've tested another working TensorRT model, let me know so it can be added officially to this node

🚨 Updates

30 April 2025

  • Merged #48 by @BiiirdPrograms, fixing a soft-lock by raising an error when the input image dimensions are unsupported

4 March 2025 (breaking)

  • TensorRT engines are now built automatically from the workflow itself, to simplify the process for non-technical users
  • Separated model loading and TensorRT processing into different nodes
  • Optimised post-processing
  • Updated the ONNX export script

⚠️ Known issues

  • If you upgrade the TensorRT version, you'll have to rebuild the engines
  • Only models with the ESRGAN architecture currently work
  • High RAM usage when exporting .pth to .onnx

🤖 Environment tested

  • Ubuntu 22.04 LTS, CUDA 12.4, TensorRT 10.8, Python 3.10, H100 GPU
  • Windows 11

👏 Credits

License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)