
Thread 105579741

18 posts 4 images /g/
Anonymous No.105579741 >>105579759 >>105580109
Nvidia on suicide watch lmao
Anonymous No.105579759 >>105579950
>>105579741 (OP)
Meanwhile at Nvidia
Anonymous No.105579771 >>105579880 >>105579950
Why are there so many AMD threads all of a sudden? Did Indians wake up or some shit?
Anonymous No.105579880
>>105579771
because there was an amd keynote recently
Anonymous No.105579950
>>105579759
true, huawei lives rent free in their heads.
nvidia is going to get gaped by them in AI, only to come crawling back to gaymen and find that their drivers have been irrevocably shit since 2024 and there's no turning back the tide towards amd.
>>105579771
>indians
only shit streeters refuse to call these browns pajeet at the very least. even redditors don't give them the time of day anymore. keep bagholding that stock pookesh
Anonymous No.105580022 >>105580048 >>105580078
wut is this?
Anonymous No.105580048 >>105580314
>>105580022
ROCm, the AMD equivalent of CUDA
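For anyone wondering what "equivalent of CUDA" means in practice: ROCm's HIP layer mirrors the CUDA API nearly call-for-call (hipMalloc ~ cudaMalloc, hipMemcpy ~ cudaMemcpy, same `<<<grid, block>>>` launch syntax). A minimal vector-add sketch, assuming a working ROCm install and a supported AMD GPU — this is illustrative, not a tuned implementation:

```cpp
// Minimal HIP vector add. Each HIP call below maps 1:1 onto its CUDA
// counterpart, which is the whole point of HIP: CUDA code often ports
// with little more than a rename pass (hipify).
// Build with: hipcc vadd.cpp -o vadd   (requires ROCm + a supported GPU)
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 256;
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = float(i); hb[i] = 2.0f * i; }

    // Allocate device buffers and copy inputs over, CUDA-style.
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha, n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb, n * sizeof(float), hipMemcpyHostToDevice);

    // Same triple-chevron launch syntax as CUDA: 1 block of 256 threads.
    vadd<<<1, 256>>>(da, db, dc, n);

    hipMemcpy(hc, dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("hc[10] = %f\n", hc[10]);  // a[10] + b[10] = 10 + 20

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

The catch, as the rest of the thread points out, isn't the API surface — it's which GPUs and OSes each ROCm release actually supports.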
Anonymous No.105580066 >>105580095 >>105580301 >>105580575 >>105580621
>AI doesn't work on AMD REEEE
I never understood where this came from. LM Studio with any model that can be loaded into my VRAM just works on my RX 7700 XT.
Anonymous No.105580078
>>105580022
7th time's the charm. Trust the plan. This time for real. Lmao
Anonymous No.105580095
>>105580066
Your use case is the most basic one. Inference, one card, local so you don't care about performance that much. Things break down if you need more.
Anonymous No.105580109 >>105580575
>>105579741 (OP)
I wouldn't touch another amd card until amd launches UDNA, because I suspect they are going to abandon rdna very quickly
Anonymous No.105580301
>>105580066
>12GB VRAM
You're using models that could almost run on a smartphone
Anonymous No.105580314 >>105580376
>>105580048
Why would this finish nvidia? They've had cuda for more than a decade
Anonymous No.105580376 >>105580389
>>105580314
your mom had u for more than a decade
Anonymous No.105580389
>>105580376
My mom on suicide watch confirmed
Anonymous No.105580408
Lmao. amd has been unable to fix its poor performance in blender3d for decades; don't expect anything from them.
Anonymous No.105580575
>>105580109
It most likely will. RDNA 4 buyers will wait 3 months for support, then 3 months for 25% of the software suites to support ROCm, then get dropped like a brick after a year and a half.
AMD's software team pours all its energy into initially supporting something, does a decent enough job, stalls for a year, then drops it entirely, and they're getting more and more hostile towards others too, as we saw with RDNA3 & 4 bios editing.

>>105580066
>LM studio
Shoulda picked something that had support for current ROCm with RDNA4 desu.
Anonymous No.105580621
>>105580066
it's windows users afaik. windows doesn't have rocm, it's linux only, or was until recently. i've had no problems with ai stuff on my 5700xt, 6950xt or 7900xtx. the 5700xt was a bit of a pain initially in the early stable diffusion days, but once set up it was all good. i actually upgraded to a 3090ti from the 5700xt, but it was so awful outside of ai stuff that it made me seethe and sell it, i had crashes in normal desktop use and gaming all the time kek