>>281682506
I have been banned for off-topic in the past for explaining this here, and there is no doubt this guy
>>281682456 is reporting every post, so in the future I would recommend asking in >>>/g/ldg and/or >>>/g/adt for a better answer.
The problem with 4GB VRAM isn't just that generation will take a long time; you will also hit OOM (out-of-memory) errors. If the GPU's VRAM cannot hold the model, it simply cannot generate at all, and the A14B model has ~14B active parameters, which is roughly 28GB of weights at fp16 alone.
Because of this, low-VRAM GPUs need a quantized model. For 4GB VRAM you'd need the most aggressive quant plus DRAM swap (offloading weights to system RAM):
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main
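If you want to see what "quant + dram swap" means outside of comfy, here is a rough python sketch using diffusers' GGUF loading and sequential CPU offload. The repo id, the gguf filename and the input image are placeholders (check the model card for the real ones), and note Wan2.2 A14B is actually two experts (high noise / low noise), so a real run would load both; this only shows one to keep it short.
[code]
import torch
from diffusers import GGUFQuantizationConfig, WanImageToVideoPipeline, WanTransformer3DModel
from diffusers.utils import export_to_video, load_image

# Placeholder filename -- grab the smallest quant actually present in the linked repo.
gguf_file = "wan2.2_i2v_high_noise_14B_Q3_K_S.gguf"

# Load the quantized DiT from the single GGUF file; weights stay in the GGUF
# quant format and are dequantized to bf16 on the fly for compute.
transformer = WanTransformer3DModel.from_single_file(
    gguf_file,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Assumed repo id for the rest of the pipeline (VAE, text encoder, scheduler).
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# The "dram swap" part: keep everything in system RAM and stream layers to the
# GPU one at a time. Very slow, but it is what keeps a 4GB card from OOMing.
pipe.enable_sequential_cpu_offload()

image = load_image("input.png").resize((832, 480))  # placeholder start frame
frames = pipe(
    image=image,
    prompt="a short test clip",
    height=480,
    width=832,
    num_frames=33,  # (num_frames - 1) must be divisible by 4 for Wan
).frames[0]
export_to_video(frames, "out.mp4", fps=16)
[/code]
Expect it to crawl even with the smallest quant, since every layer gets shuffled over PCIe each step; that's the price of running a 14B video model on 4GB.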