Thread 106181775 - /g/ [Archived: 296 hours ago]

Anonymous
8/7/2025, 10:36:04 PM No.106181775
berry
my prediction: they are gonna be the blackberry of the AI race. had the early lead, squandered it on trying to push a stale format and coast on brand recognition
Replies: >>106181823 >>106181894 >>106182102 >>106182150
Anonymous
8/7/2025, 10:38:01 PM No.106181800
It wasn't obvious after Meta and Google poached all of their talent?
Anonymous
8/7/2025, 10:39:55 PM No.106181823
>>106181775 (OP)
Scam Altman tanked it. He got his though.
Anonymous
8/7/2025, 10:45:46 PM No.106181894
>>106181775 (OP)
Machine learning gets dismissed entirely if OpenAI ever falls apart.
It's either they can actually turn datacenters into God or it's completely worthless.
Replies: >>106182102
Anonymous
8/7/2025, 10:59:38 PM No.106182102
soulless
>>106181775 (OP)
all of the innovations for better scaling are here, waiting to be implemented:
>1-bit quantization https://arxiv.org/abs/2310.11453v1
>Kolmogorov-Arnold Networks https://arxiv.org/abs/2404.19756
>diffusion llms https://arxiv.org/abs/2502.09992
>ASI-ARCH/autonomous architecture discovery https://github.com/GAIR-NLP/ASI-Arch?tab=readme-ov-file
>jepa https://arxiv.org/abs/2301.08243
>sparsity https://arxiv.org/abs/2412.12178
>synthetic data https://arxiv.org/abs/2503.14023
>transformers squared https://arxiv.org/abs/2501.06252
>titans https://arxiv.org/abs/2501.00663
>context engineering https://arxiv.org/abs/2501.00663
>MLE-STAR https://arxiv.org/abs/2506.15692
>hierarchical reasoning model https://arxiv.org/abs/2506.21734
>graphRAG https://github.com/LHRLAB/Graph-R1
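for anyone wondering what the first link is actually about: 1-bit quantization binarizes the weights to {-1, +1} and keeps a single floating-point scale per tensor, so the matmul becomes adds/subtracts plus one rescale. a toy pure-python sketch of the absmean scheme from that paper (function names and numbers are mine, not from the paper):

```python
def quantize_weights_1bit(w):
    """Binarize a weight matrix to {-1, +1} plus one absmean scale,
    roughly the scheme from the 1-bit paper linked above."""
    flat = [abs(v) for row in w for v in row]
    scale = sum(flat) / len(flat)           # single fp scale per tensor
    w_bin = [[1.0 if v >= 0 else -1.0 for v in row] for row in w]
    return w_bin, scale

def linear_1bit(x, w_bin, scale):
    """Forward pass of a linear layer: x @ w_bin^T, then rescale."""
    return [[scale * sum(xi * wi for xi, wi in zip(xrow, wrow))
             for wrow in w_bin] for xrow in x]

w = [[0.5, -0.3], [-0.2, 0.8]]              # toy fp weights
w_bin, scale = quantize_weights_1bit(w)     # w_bin is all +/-1, scale = 0.45
y = linear_1bit([[1.0, 2.0]], w_bin, scale)
```

the point is you pay almost nothing in memory per weight and recover most of the dynamic range through the scale factor.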
The issue is that OpenAI has mistakenly positioned themselves as the new-big-model-every-year company (like they're releasing smartphones or something), so they're afraid to take risks on training (*and releasing*) smaller research models.
>>106181894
maybe this is true for normal people and for like, 1-2 years, but we already have models that are undeniably useful. remember that google became a multi-trillion dollar company from a simple algorithm for ranking web pages. even the current models, with their hallucinations and obvious limitations, are basically like a search engine on steroids (and search engines also fucking suck nowadays).
i see massive divestment if openai continues to over-promise and under-deliver, but machine learning will be as relevant as computing in general forever now. it has already become a basically unavoidable part of life.
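the "simple algorithm for ranking web pages" being PageRank: a page is important if important pages link to it, computed by power iteration over the link graph. a toy sketch (made-up three-page graph, no handling of dangling pages):

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank over a dict of page -> outgoing links.
    Assumes every page has at least one outgoing link (no dangling pages)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # teleportation term, then redistribute each page's rank to its targets
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)   # "a" ends up on top: both b and c point at it
```

that's the whole seed of a multi-trillion dollar company: a fixed point of a link matrix.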
Replies: >>106182188 >>106182262
Anonymous
8/7/2025, 11:03:08 PM No.106182150
>>106181775 (OP)
>AI race
First crab out of the bucket goes in the pot.
Anonymous
8/7/2025, 11:05:36 PM No.106182188
>>106182102
I assume it's less about the risk of the model training not working (since these are papers that haven't been implemented at scale), and more about the promises they made to investors about LLMs. If LLMs turn out to be a dead end, a lot of investors are going to want their money back ASAP
Replies: >>106182457
Anonymous
8/7/2025, 11:10:48 PM No.106182262
>>106182102
>maybe this is true for normal people and for like,
No, it's true for investors. They don't give a fuck about copy/paste toys, they want a pathway to invincible defense systems and other utopian bullshit that all these companies keep Todd Howarding.
>we already have models that are undeniably useful.
For spammers and scammers yes
>remember that google became a multi-trillion dollar company from a simple algorithm for ranking web pages.
They became a multi-trillion dollar company from ridiculous amounts of investor funding (partially from the government)
>even the current models, with their hallucinations and obvious limitations, are basically like a search engine on steroids (and search engines also fucking suck nowadays).
Search engines had to become far worse than they used to be for LLMs to be a competitive alternative.
>i see massive divestment if openai continues to over-promise and under-deliver, but machine learning will be as relevant as computing in general forever now. it has already become a basically unavoidable part of life.
Speak for yourself.
Replies: >>106182457
Anonymous
8/7/2025, 11:24:10 PM No.106182457
ocelot3
>>106182188
you're absolutely right, but what i was trying to get at is this:
the point isn't that the model training runs the risk of not working, it's that training smaller models runs the risk of being underwhelming.
There's a "lump sum" bias to how openai operates: a new large model that delivers a 10% improvement all at once will seem more impressive than a handful of smaller models trained in less time, each 2 or 3% better than the last using the new research. they are trapped in the private equity spiral of promising perpetual quantum leaps in growth, which isn't possible without experimentation and diversification.
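the irony of the lump-sum bias is that the small releases win on pure arithmetic: improvements compound multiplicatively, so a few 3% steps overtake one 10% jump (numbers here are illustrative, not openai's actual gains):

```python
# four successive 3% improvements vs one 10% jump, compounded
small_steps = 1.0
for _ in range(4):
    small_steps *= 1.03   # each small model is 3% better than the last

big_jump = 1.10           # one big release, 10% better all at once

# small_steps = 1.03**4 ~ 1.1255, already past the single 10% release
```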
I have no doubt that LLMs could scale to what we call AGI, the problem is that could mean we need either a thousand OR A BILLION times more compute...
>>106182262
you just sound like you have some weird personal baggage surrounding the technology
Anonymous
8/7/2025, 11:54:38 PM No.106182846
Blackberry? they're more like the windows phone lmao.