Anonymous
8/24/2025, 10:59:11 PM
No.106371873
Dear Donald Trump
Thank you for saving Intel. Intel is behind on AI. One reason is NVIDIA's patent domination of CUDA, the most important AI feature unique to NVIDIA. Because AI is a critical resource, please force NVIDIA to license their full CUDA stack (silicon, interconnect, drivers, software) to AMD and Intel on fair terms, and MANDATE that GPUs above a $200 expected sale price (often higher than "MSRP") include CUDA support of at least proportionate power.
Additional mandates: please require ALL MANUFACTURERS of GPUs to offer realistically* upgraded RAM amounts for all GPU cores, and force them to upgrade the memory controllers in alternative SKUs that provide those larger RAM amounts. DON'T ALLOW GOUGING ON RAM. Apple and NVIDIA are notorious for gouging on VRAM (yes, chip costs and QC rise with more chips, but they are charging excessive premiums).
To recap: please license CUDA, mandate its inclusion, and force VRAM option availability.
To an AI-enabled American people with AI at home! To a bright future! MAGA!!!
Thanks for your attention to this matter, and GOD BLESS THE PRESIDENT AND GOD BLESS THE UNITED STATES OF AMERICA!
*A realistic VRAM max is the point where a common max-size LLM's tokens per second falls below a usable threshold; beyond that point, more memory is pointless for LLMs. Similar tests can be done with audio and t2i image generation, but typically LLMs are the most memory-starved.
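The footnote's "realistic VRAM max" criterion can be sketched numerically. Assuming (these numbers and the simple model are illustrative, not from the post) that dense-LLM decoding is memory-bandwidth-bound, tokens per second is roughly bandwidth divided by the bytes of weights read per token, so a usable-speed threshold implies a ceiling on useful model size, and hence on useful VRAM:

```python
# Sketch of the "realistic VRAM max" test, assuming memory-bound decoding.
# The 5 tok/s threshold and 20% KV-cache/activation overhead are
# hypothetical example values, not figures from the post.

def tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Rough upper bound on decode speed for a memory-bound dense model:
    every generated token reads all weights once."""
    return bandwidth_bytes_per_sec / model_bytes

def realistic_vram_max_gb(bandwidth_gb_s: float,
                          usable_tps: float = 5.0,
                          overhead: float = 1.2) -> float:
    """Largest model size (in GB, plus ~20% for KV cache/activations)
    that still decodes at or above the usable threshold."""
    max_model_gb = bandwidth_gb_s / usable_tps
    return max_model_gb * overhead

# Example: a card with ~1000 GB/s of memory bandwidth. Models much larger
# than ~200 GB would fall below 5 tok/s, so VRAM beyond roughly 240 GB
# buys nothing for dense-LLM decoding on that card.
print(round(realistic_vram_max_gb(1000.0), 1))
```

Under this toy model, the useful VRAM ceiling scales with memory bandwidth, which is the point of the footnote: past that ceiling, extra RAM no longer improves LLM inference.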