
Thread 106328294

54 posts 8 images /g/
Anonymous No.106328294 >>106328302 >>106328314 >>106328322 >>106328337 >>106328419 >>106328483 >>106328626 >>106328687 >>106328924 >>106329211 >>106331163 >>106332091 >>106332297 >>106334401 >>106334413 >>106335117
just another 500$ billion dollars bro
Anonymous No.106328302
>>106328294 (OP)
>THE NEXT MODEL WILL BE AGI, TRUST THE PLAN GOLEM
Anonymous No.106328314
>>106328294 (OP)
The hype train furnace needs bigger capex to burn so the slop gets zestier
Anonymous No.106328322
>>106328294 (OP)
Umm ludditebros, I think AIchuds won.
Anonymous No.106328337
>>106328294 (OP)
>bro just $50 bn more bro, trust me bro in 2 weeks we'll have agi bro, bro it's gonna happen, you don't wanna get left behind bro, like please bro on god bro just $50 bn more bro you won't regret it in 2 weeks bro
Anonymous No.106328349 >>106328469 >>106328805 >>106333325
the tech has plateaued, we have reached the end of Moore's law of AI
it's time to admit that AI won't get much better and it's futile to burn billions more on this
Anonymous No.106328354 >>106328382
Anonymous No.106328382 >>106328646
>>106328354
laughing like a retard at this cause instead of a face swap its just the badly cropped subtitles
Anonymous No.106328419 >>106328440 >>106328465 >>106328498 >>106328587 >>106328621 >>106328743 >>106328850 >>106328921 >>106331347
>>106328294 (OP)
I genuinely don't know if the models are getting worse or if I'm getting better at spotting mistakes and hallucinations in the outputs. Feels like GPT-5 is wrong something like 20% of the time I ask it a question, maybe more.
Anonymous No.106328440 >>106328452
>>106328419
My guess is that they ran out of training data so now they're using AIslop as training material.
Anonymous No.106328452
>>106328440
>hallucinating the hallucinations
Kek, we'll see if 2 wrongs make 1 right with AI.
Anonymous No.106328465 >>106328498
>>106328419
GPT 5 is better at programming but otherwise feels worse
Anonymous No.106328469
>>106328349
Elon already said the training data has pretty much plateaued.
Anonymous No.106328483
>>106328294 (OP)
>Just one more version iteration, and we can start making a profit.
Anonymous No.106328498 >>106328671 >>106328892
>>106328419
I feel like GPT-5 is worse at programming somehow.
>>106328465
Maybe I use a different language or different kinds of projects.
Anonymous No.106328511 >>106328540 >>106328558
it's over for luddites this time
Anonymous No.106328540
>>106328511
Why do you keep posting the same thing over and over?
Anonymous No.106328558
>>106328511
Hey! That's my line!
Anonymous No.106328587
>>106328419
>the models are getting worse
I have this exact same impression, I use a prompt that attempt to instruct the LLM to output concise and direct answers instead of the usual syncophantic garbage, it works very well with Mistral 3.0 mini in the Duckduckgo platform (https://duck.ai), If I ask for a "foreach in C" it returns just the code, usually in an ok way. Recently they added GPT 5 mini as an option and I tried it with same prompt and the model sometimes insists about explaining things or worse, guiding me in a "pair programming" way.
Anonymous No.106328621 >>106328921
>>106328419
To make running ChatGPT cheaper, they need to either make it dumber or make it more efficient. They weren't able to make it more efficient
Anonymous No.106328626 >>106328687
>>106328294 (OP)
what's only 1 trillion dollars between friends?
Anonymous No.106328646
>>106328382
I fucking hate/love this site.
Anonymous No.106328671
>>106328498
Probably depends on your prompting style, generally it's better at one-shots and way better with my base instruction of being concise and copy-pasteable, I also keep features at one per conversation. In general GPT5-Thinking is much better than GPT4-Thinking.
Anonymous No.106328687
>>106328294 (OP)
>>106328626
Nothing a bailout can't fix
Anonymous No.106328743 >>106328844 >>106328961 >>106335010
>>106328419
The way they created the models is just retarded
>I don't want to think how logic works so let's make it all percentages
>Slam a bunch of info and maybe one day we'll get clear answers
>gets an average of everything
>ok cool, try harder
>same result
>uh oh, give it more data!
>There is no more data
>Give it the data we produced
>model vomits nonsense
>How can this be!?
>Make more empty promises because its a quadrillion dollar bubble
Anonymous No.106328805
>>106328349
Tracks on a super exponential curve, yup it plateaued!
I guess that's how it feels when it surpasses you, 1.5 times better than you is no different than x1000.
Anonymous No.106328844
>>106328743
There are logic based models and they suck.
Anonymous No.106328850
>>106328419
I think it's the censorship and ""safety training"" that dumbs it down, combined with an additional layer of dumbing down to make it cheaper to run.
Anonymous No.106328892
>>106328498
Other than the prompt, programming language is a big factor too.

I am originally a GPT user but tried Claude because of the release of GPT5.

What I noticed is that Claude Sonnet 4.0 often uses outdated shaders and DOTS/ECS (0.51) in Unity. It's just a very minor thing, but even with a standing instruction to avoid it, it still defaults to the same DOTS/ECS pattern.

GPT5 meanwhile sometimes just fucks itself up: it doesn't answer my questions but rather answers what has already been answered, or worse, goes completely out of context, especially during debugging. Not to mention GPT5 is effing slow.

I haven't played with Opus 4.1 much because that thing burns tokens fast even though I tried to be conservative with the prompts. I just use Opus mostly for detailed planning or debugging.
Anonymous No.106328921 >>106335151
>>106328419
It's this >>106328621 right? Wasn't gpt 5's whole thing that it was cheaper? They have to make concessions somewhere. This is definitely something of a honeymoon phase in AI, with investors footing the bill. It will change.
Anonymous No.106328924 >>106328956 >>106331074
>>106328294 (OP)
Is there a word similar to embezzlement but for investor money instead of tax money?
Is that even illegal?
Anonymous No.106328956
>>106328924
defrauding i think
Anonymous No.106328961 >>106331141
>>106328743
There are some mathematical and technical details that you have to keep in mind. You need efficient optimisation, and stochastic gradient descent gives you that; you need parallelism, so matrix multiplications are good. For SGD your loss function also has to be differentiable, like cross entropy.
It's not as simple as looking at how your own thinking appears to you and then implementing it; that will probably not scale. Many approaches have already been tried, and transformers are by far the best so far.
Anonymous No.106329211
>>106328294 (OP)
it was a PhD level model as claimed. Pile higher and deeper.
Anonymous No.106331074
>>106328924
it's always legal when the government's involved
Anonymous No.106331141
>>106328961
The only thing the transformer is good at is keeping track of context. Perhaps it's time to admit that true general cognition requires a bit more than that, which we haven't figured out yet.
Anonymous No.106331163
>>106328294 (OP)
The goyim have to finally be coming around to this by now, mustn't they? Fool me once...
Anonymous No.106331347
>>106328419
not an issue with my chinese and based and libre models
Anonymous No.106332091
>>106328294 (OP)
Ask Intel how this strategy played out for them.
Anonymous No.106332297
>>106328294 (OP)
https://www.youtube.com/watch?v=C65oaIHsdYM
Anonymous No.106332311 >>106333024 >>106333294 >>106333498 >>106335144
>company is obsessed with LLMs
>copilot usage is tracked as part of our KPIs
>if you don't use it enough, you won't get as good a raise
>all the normies are raving about gpt 5
i don't get it. why do normies love gpt5 so much?
Anonymous No.106333024
>>106332311
That's what happens when you have retarded women or jeets in directive positions, they make stupid bullshit up about how a company should be handled. They love to add meme tech solutions and spend hundreds of thousands on implementing it to justify their "forward thinking"
Anonymous No.106333294
>>106332311
One rarely discussed reason is that normies aren't very good at what they're doing. Same reason why junior devs overrate LLMs while they hardly save me any time.
Anonymous No.106333325 >>106335106
>>106328349
>the tech has plateaued, we have reached the end of Moore's law of AI
>it's time to admit that AI won't get much better and it's futile to burn billions more on this
t. seething devcuck who is 18 months away from being obsoleted by his own laptop
Anonymous No.106333498
>>106332311
>why do normies love gpt5 so much?
it's all they know and the tech is pretty much magic for the average joe
Anonymous No.106334401 >>106334629
>>106328294 (OP)
But he was tweeting for weeks that he was scared of GPT-5 because it was so good. Is he a retard or a liar?
STATLER + (or) Waldorf & Company. No.106334413
>>106328294 (OP)
BEAGHAGHAHAHA
Anonymous No.106334629
>>106334401
>Is he a retard or a liar?
why not both?
Anonymous No.106335010
>>106328743
That's basically true. People just aren't that smart and we really don't know what we're doing.
Anonymous No.106335106
>>106333325
Two more weeks Rajesh
Anonymous No.106335117
>>106328294 (OP)
>Moores law 2.0
Lol. People never learn.
Anonymous No.106335144
>>106332311
i genuinely think it's a google-ism where normgroids are so retarded they think chatgpt IS AI because it's the one they heard about on the news. chatGPT hit boomers like crack hit blacks in the 80s
Anonymous No.106335151
>>106328921
This shows AI is a major bubble as far as financial markets are concerned - they already pulled the plug on the growth phase and moved into "maturity" phase despite it not delivering any serious results.
Anonymous No.106335252
2 more weeks sirs