So any good interfaces with robust TensorRT support?
Tried https://github.com/comfyanonymous/ComfyUI_TensorRT.git, but it hasn't been maintained in over a year and doesn't seem to work:
[TRT] [W] Unable to determine GPU memory usage: In getGpuMemStatsInBytes at common/extended/resources.cpp:1097
[TRT] [W] Unable to determine GPU memory usage: In getGpuMemStatsInBytes at common/extended/resources.cpp:1097
[TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 3187, GPU 0 (MiB)
terminate called after throwing an instance of 'nvinfer1::APIUsageError'
what(): CUDA initialization failure with error: 35. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html In checkCudaInstalledAndPrintMemoryUsage at optimizer/api/builder.cpp:1238
Aborted (core dumped) python main.py
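FWIW, error 35 in the CUDA runtime's `cudaError_t` enum is `cudaErrorInsufficientDriver`, i.e. the installed NVIDIA driver is older than what that TensorRT build's CUDA runtime requires, so updating the driver (not CUDA itself) is usually the fix. A quick lookup sketch, with a few enum values hand-copied from `driver_types.h` (verify against your own CUDA headers):

```python
# A few entries from the CUDA runtime's cudaError_t enum (hand-copied from
# driver_types.h; double-check against the headers of your CUDA version):
CUDA_ERRORS = {
    0: "cudaSuccess",
    2: "cudaErrorMemoryAllocation",
    35: "cudaErrorInsufficientDriver",  # driver too old for this CUDA runtime
    100: "cudaErrorNoDevice",
}

# The code from the log above:
print(CUDA_ERRORS[35])  # cudaErrorInsufficientDriver
```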

I also wanna try this:
https://huggingface.co/stabilityai/stable-diffusion-xl-1.0-tensorrt
>>106366860
You can use multiple different text encoders together (provided they're compatible with the model).
Anyway, iirc StabilityAI actually tried training on just one of them for SDXL, but it didn't work out, so they used both, or something like that.
Both are ancient piece-of-shit models with limitations like next to no NLP, but you need them for SDXL.
Be aware that some finetunes (like Illustrious) also train the text encoders, so you'll get deformed nightmares if you pair the wrong CLIP with the wrong model.
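For reference, this is roughly how SDXL combines the two encoders (a sketch of the idea, not ComfyUI's actual code): the prompt is run through both CLIP-L (768-dim) and OpenCLIP bigG (1280-dim), and the per-token hidden states are concatenated along the feature axis into the 2048-dim conditioning the UNet was trained on. That's why swapping in a CLIP from a different finetune poisons the whole conditioning tensor:

```python
import numpy as np

tokens = 77  # CLIP context length

# Dummy data standing in for the penultimate hidden states of each encoder:
clip_l = np.random.randn(1, tokens, 768).astype(np.float32)   # OpenAI CLIP ViT-L/14
clip_g = np.random.randn(1, tokens, 1280).astype(np.float32)  # OpenCLIP ViT-bigG/14

# SDXL's UNet cross-attention sees the channel-wise concatenation of both:
cond = np.concatenate([clip_l, clip_g], axis=-1)
print(cond.shape)  # (1, 77, 2048)
```

Since every feature channel of `cond` comes from one encoder or the other, a mismatched CLIP corrupts its whole slice of the conditioning rather than degrading gracefully.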