Search Results

Found 1 result for "1d62380f2c3c34bf34cd30c28df6860a" across all boards, searching by md5.

Anonymous /g/105578091#105582780
6/13/2025, 5:42:51 PM
>>105582326
not launched yet, soon though
i could support the unet/transformer, but i think it's more useful to have the text encoders and vae remote and run the generation locally, rather than running the text encoders and vae locally and the generation remotely
plus i don't have the resources atm; generation is time consuming, and from a service perspective it's like gpu arbitrage
text encoders and vae, by comparison, are heavy on resources but very fast. that's the benefit for local users: you don't need the ram/vram to load them, and you save the offload time
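to make the split concrete, here's a rough sketch of what it looks like from the local side. the endpoint url, the /text_encoder and /vae_decode routes, and the torch-serialized payloads are made up for illustration (the post doesn't describe the actual API); the only point is the division of work: text encoders and vae remote, unet/transformer local.

```python
# illustrative only: a hypothetical client for a remote text-encoder/vae service.
# the REMOTE url, the routes, and the serialization format are assumptions,
# not the real (unpublished) API.
import io

import requests
import torch

REMOTE = "https://remote-inference.example"  # hypothetical endpoint

def remote_encode_prompt(prompt: str) -> torch.Tensor:
    """ask the remote service for prompt embeddings instead of loading the
    text encoders locally (saves the ram/vram and the offload time)."""
    r = requests.post(f"{REMOTE}/text_encoder", json={"prompt": prompt}, timeout=30)
    r.raise_for_status()
    return torch.load(io.BytesIO(r.content))  # embeddings tensor

def remote_vae_decode(latents: torch.Tensor) -> bytes:
    """send the final latents to the remote vae, get image bytes back."""
    buf = io.BytesIO()
    torch.save(latents.cpu(), buf)
    r = requests.post(f"{REMOTE}/vae_decode", data=buf.getvalue(), timeout=60)
    r.raise_for_status()
    return r.content  # e.g. a png

# the slow part (the denoise loop over the unet/transformer) stays on the local gpu:
#   embeds = remote_encode_prompt("a photo of a cat")
#   latents = run_local_denoise_loop(embeds)  # your own sampler/unet, hypothetical name
#   png = remote_vae_decode(latents)
```

the shape of the tradeoff: the remote side only ever handles short, memory-heavy calls, while the long-running generation never leaves the local machine.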
but there's another use case: internal deployments, for like art studios or something. for that it would make sense to support generation itself. the benefit for internal deployments is cost saving: instead of buying a gpu for every workstation you can have a few gpus on a server and everyone shares them
other models i plan on supporting for the public service are clip/siglip image encoders for ip adapter, controlnet (depth, pose, etc.), and maybe upscaling like esrgan, because that's quite heavy in pytorch but a lot lighter with honeyml
honeyml is what i'm calling my aitemplate fork now; it includes the diffusers_ait model implementations too. it's public already, but i'm not actively supporting it for other people to use, if that makes sense. i will eventually
the zeromq system is not public yet. i'm not sure it will be, because it's unlikely any individual will want/need to deploy it themselves except to run their own version of my service, and i wouldn't mind getting paid by companies for the internal deployment use case
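(for reference, a zeromq request/reply service generally has the shape sketched below. this is not the unpublished system, just a minimal pyzmq REQ/REP pattern; the addresses and the encode_fn hook are placeholders.)

```python
# minimal pyzmq REQ/REP sketch: a gpu worker serves prompt -> embeddings over a socket.
# addresses, message framing, and encode_fn are placeholders, not the real system.
import zmq

def serve_text_encoder(encode_fn, addr: str = "tcp://*:5555") -> None:
    """blocking worker loop: receive a prompt string, reply with embedding bytes.
    encode_fn(prompt) -> bytes is whatever produces the embeddings
    (e.g. a compiled text encoder); hypothetical hook."""
    sock = zmq.Context.instance().socket(zmq.REP)
    sock.bind(addr)
    while True:
        prompt = sock.recv_string()
        sock.send(encode_fn(prompt))

def request_embeddings(prompt: str, addr: str = "tcp://localhost:5555") -> bytes:
    """client side: one request, one reply."""
    sock = zmq.Context.instance().socket(zmq.REQ)
    try:
        sock.connect(addr)
        sock.send_string(prompt)
        return sock.recv()
    finally:
        sock.close()
```

a real deployment would presumably put a ROUTER/DEALER proxy in front so many clients can share a small pool of gpu workers, which is the cost-saving angle of the internal-deployment case.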