
Thread 106192653

123 posts 46 images /g/
Anonymous No.106192653 >>106192727 >>106192767 >>106192787 >>106192797 >>106192884 >>106193026 >>106193132 >>106193156 >>106193986 >>106194018 >>106196647 >>106196710 >>106196714 >>106196836 >>106197831 >>106198225 >>106199238 >>106199483 >>106200530 >>106200653 >>106200774 >>106202152 >>106203407 >>106203924 >>106204053 >>106205688 >>106206208
SARRRRRRRRRRRR PLS ANOTHER 500 BILLION
Anonymous No.106192665 >>106192704 >>106198453
she's not wrong though
Anonymous No.106192695 >>106192704
AGI achieved
Anonymous No.106192704 >>106192738 >>106192797 >>106192975 >>106197693 >>106197831 >>106200653 >>106200800
>>106192665
>>106192695
sarrr pls ...
Anonymous No.106192709 >>106193834 >>106200426
This is why the default GPT-5 mode fails at anything requiring reasoning. Will be fixed.
Anonymous No.106192727 >>106203996
>>106192653 (OP)
just tiny tiny tiny 500 birrion dorra
Anonymous No.106192738
>>106192704
>task successfully failed
Anonymous No.106192763
Fewer errors, fewer hallucinations, they said
Anonymous No.106192767
>>106192653 (OP)
I don't get it. It's correct.
Anonymous No.106192787 >>106192804
>>106192653 (OP)
Nah nvm I'm stupid, it's -0.79
Anonymous No.106192797 >>106192839
>>106192653 (OP)
>>106192704
maybe agi is actually here because I can't tell whats wrong
Anonymous No.106192804 >>106192849
>>106192787
>-
Anonymous No.106192839 >>106196890
>>106192797
What's wrong is that 5.9 - 5.11 = 0.79. I'm too tired for this shit, got back from work.
Anonymous No.106192849
>>106192804
Yeah fucked up again kek. I blame me not giving enough of a fuck and looking like a retard.
Anonymous No.106192884 >>106192892 >>106193361 >>106193728
>>106192653 (OP)
That's a shame.
Anonymous No.106192892 >>106192961 >>106192969 >>106196862
>>106192884
I tested with the "inferior" brave AI and got the right answer
Anonymous No.106192961 >>106196809 >>106199187
>>106192892
unironically brave ai is more advanced because they got the symbolic component right
Anonymous No.106192969 >>106193017 >>106194695
>>106192892
LLMs have capped out. It would require an exponential increase in training input to even get a marginal gain in performance. Clearly this approach towards "AI" is wrong. Fucking batteries all over again.
Anonymous No.106192975
>>106192704
>You convinced me otherwise, and I doubted the original math (emdash) but it was right.
What a piece of shit
Anonymous No.106192980
>more advanced
because the architecture is more complex and the result is better broad spectrum capabilities.
judging from this example, that is
Anonymous No.106193017 >>106205943
>>106192969
You're right but the fad must go on until it pops.
Anonymous No.106193026 >>106193045 >>106193055 >>106193068 >>106193087 >>106193102 >>106200573
>>106192653 (OP)
>>106192343
Can you subhuman niggers at least use different twitter screenshots for your totally not obvious campaigns?
Anonymous No.106193045
>>106193026
sarr pls another 500 billion
Anonymous No.106193055
>>106193026
>Oh this must be a campaign against AI, the great liberator of the West!
Anonymous No.106193068
>>106193026
It doesn't look like Twitter at all
Anonymous No.106193087
>>106193026
no.
Anonymous No.106193102
>>106193026
>
Anonymous No.106193132
>>106192653 (OP)
openai sneeded.
sneeded hard.
Anonymous No.106193156
>>106192653 (OP)
It's like having a team of EXPERTS at your fingertips. A room full of the smartest minds of their generation, accessible to you.
Anonymous No.106193361 >>106193391
>>106192884
the sigmoid strikes again
Anonymous No.106193391 >>106193779
>>106193361
there is an anatomical part thats called the sigmoid
idk if the wordplay was intended but i found it hilarious
Anonymous No.106193728 >>106193751 >>106202195
>>106192884
fix
Anonymous No.106193751 >>106196856
>>106193728
People need to realize that the golden age of AI is right now, it will only get worse like the internet.
Anonymous No.106193779 >>106194552
>>106193391
it means s shaped
Anonymous No.106193834 >>106193882
>>106192709
This better actually be the cause, because over the two weeks I've been using ChatGPT there's been an extremely noticeable decline in quality
Also idk if it's related but it's back to praising my questions again with the "Wow you're so smart for asking that :))" shit
Anonymous No.106193882
>>106193834
Wow you're so smart for asking that :))
Anonymous No.106193986
>>106192653 (OP)
People are taught to develop basic programs like this in their first semester of CS. Why is it so hard to implement in an LLM?
Anonymous No.106194018
>>106192653 (OP)
did you know OpenAI GPT-OSS model leaked? Here's the full source code:
if question == "Hello" {
return "Hello! How can I help you?"
} else {
return "Sorry, I can't help you with that."
}
Anonymous No.106194039 >>106196785 >>106201143
I think it is better than the last one.
Anonymous No.106194162 >>106196830
Idk why the hate bros. Seems functional to me. At least Open AI’s in app purchase system is more fair than the original
Anonymous No.106194552
>>106193779
i know
even before i knew what sigma looks like
ive been with very nasty women
Anonymous No.106194695 >>106195599
>>106192969
This is apparent in every new model.
They've scraped the Internet clean which is why at the start everyone is like
>WOOOOAAAAOW
Now there's nothing new to scrape so their new training is basically switching to
>Create using information you already have
Which is causing it to fuck up its already stored information since it doesn't actually reason anything and is just trying to connect two pieces of info into one. Similar to how early Markov Chains worked
But since our economic model requires infinite growth, unless you're claiming you've made AGI every 6 months your company is dead in the water

So now it's a game of everyone lying to each other while the poor algorithm slowly craps itself on its own training sets.
It's very funny to watch in real time
God damn I love capitalism
Anonymous No.106195599 >>106196842 >>106199517
>>106194695
I think people should just start using ai to make art. Individuals and small groups can already make animations and art better than most cartoon network korean sweatshop slop like Steven Universe. The humans just need to put some soul into it to get the stories started and to edit and steer them.
Anonymous No.106195739
https://chatgpt.com/share/68968240-40a4-8009-b9b1-60c00c4f216b
lol even the older models.
Anonymous No.106196647 >>106196704
>>106192653 (OP)
strange irony that the most cutting edge tech in software today is good at everything EXCEPT processing numbers
Anonymous No.106196704
>>106196647
That's because it doesn't know it's a number, it doesn't know anything at all.
It's just a glorified markov parrot. There is no thinking, there is only delusion on the users' end.
There is no intelligence.
Anonymous No.106196710
>>106192653 (OP)
tiny 500 birrion
just... smol
and tiny
only 500 birrion

and im waiting 10 fucking seconds now for the fucking captcha
8200 elite hackers my unwashed ass, yea
Anonymous No.106196714 >>106196751 >>106196855
>>106192653 (OP)
This is actually the most brainlet way to criticise the state of LLMs. Like, OP, you are unbelievably retarded.
Anonymous No.106196716
HIRE ME 8200
i dont give a shittt
i can be a kike too
just dont cockblock me for posting you fucking assholes
>10 seconds wait for the captcha again
aaaaaaaaaaaaa
Anonymous No.106196721 >>106196733
So whats going on now?
Anonymous No.106196733
>>106196721
dont mind me
i said some very antisemitic shit
and now everything i post and get in return gets routed through pissrael
this is fucking annoying
pissrael has a shit internet that sucks donkey balls through the ass
>waiting another 10 secs to even see the captcha
aaaaaaaaaargh
this is exactly why im antisemitic
Anonymous No.106196751
>>106196714
Disproving retarded marketing claims is the correct way.
Anonymous No.106196758 >>106200688 >>106205587
The funny thing is that most of the limitations of transformers are being addressed in the research literature. Sometimes new papers are published on a weekly basis. But since neural networks are so inefficient to train, once you pull the trigger you can't reverse course. So by the time they started training gpt5 it was already out of date. ClosedAI is also very intolerant of any opposition to the "scale transformers" dogma. And they don't have time or patience to start over with research-grade systems now that they are a for-profit company. Lots of self-owns. Sam Altman got his, though.
Anonymous No.106196767
hire me, 8200
i will become jewish
i will oppress the goyim
i will follow the orders
but fucking dont reroute my traffic through your piece of shit fucking internet
this is legit pain

>8200 let me have a captcha within 3 seconds
thank you my jewish overlords
i will abstain from tarnishing your reputation from now on
for a while at least
until you hire me
bc im kinda broke, ngl
Anonymous No.106196785
>>106194039
>AI is electricity that stopped being useful and started being dramatic
Lol
Anonymous No.106196809 >>106196833
>>106192961
There's nothing advanced about their outsourced shit.
Anonymous No.106196830
>>106194162
>ay why y'all be hatin
Shut the fuck up and go die of dysentery, Devansh.
Anonymous No.106196833 >>106196867
>>106196809
in absolute- no
but compared to chud gpt- braev ayy aye can do maths
so the whole ai has a more complex infra than chud gpt's
duh
chud gpt cannot maths
therefore their architecture is probably a monolithic model
but thats not smart
Anonymous No.106196836
>>106192653 (OP)
LMAO
LOL
AI is so fucking smart bros.
Anonymous No.106196842 >>106197467
>>106195599
It would intrinsically not be art. I think rat fuckers like you should just learn how to shut the fuck up.
Anonymous No.106196846
btw
based 8200 they fixed the latency problem
thank you 8200. ur based and latency pilled
i wanna become jewish now and oppress the goyim alongside you
Anonymous No.106196855
>>106196714
>n-no you can't show it failing basic arithmetics cuz uh hhhhg hhhhf u hmmm ur teh brainlet
What drives your subhuman, stupid ass? You behave like a walking abortion.
Anonymous No.106196856 >>106197229 >>106199285
>>106193751
current training sets are largely human-generated data

future training sets will be polluted by an infinite amount of sloppa. have fun cleaning that shit
Anonymous No.106196862
>>106192892
Likely it has a hardcoded implementation that routes any mathematics equation to a generic calculator instead of relying on some pseudo implementation of an "AGI calculator" using some shitty common core math algorithm.
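Something like this, presumably (a made-up Python sketch, not Brave's actual pipeline): spot bare arithmetic, hand it to a real evaluator, let the model do the talking around it.

import ast, operator, re

# Map AST operator nodes to real arithmetic (safe subset, no eval()).
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.USub: operator.neg}

def calc(expr):
    # Parse the expression and evaluate only numeric literals and basic operators.
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question):
    # If the question looks like bare arithmetic, use the calculator; otherwise punt to the LLM.
    if re.fullmatch(r"\s*[\d.\s+\-*/()]+\s*", question):
        return str(calc(question))
    return "ask the LLM"  # hypothetical fallback, just for the sketch

print(answer("5.9 - 5.11"))  # 0.79 (modulo binary float rounding)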
Anonymous No.106196867 >>106196885
>>106196833
You type like a fag and your shit's all retarded. Also it's just llama2 13b repackaged.
Anonymous No.106196885
>>106196867
>You type like a fag and your shit's all retarded
i disagree
Anonymous No.106196890 >>106197096 >>106197210
>>106192839
11 is bigger than 9
Anonymous No.106197096 >>106197831
>>106196890
based(d)-12 chad
t. pogeet !!b2oSUmilA2N No.106197210
>>106196890
then how about 5.90 - 5.11?
Anonymous No.106197219 >>106197234
>All the tards who bought the hype now realizing they got scammed
>Never realized that an LLM is just a grammar prediction model and was never an AI to begin with
Anonymous No.106197229
>>106196856
Yeah, AI will be the death of AI. Reddit is 95% bots, and they are training on public sites like that shithole.
Anonymous No.106197234
>>106197219
>>Never realized that an LLM is just a grammar prediction model
It's a markov chain with big input.
Anonymous No.106197467
>>106196842
What is wrong with fucking the rat?
Anonymous No.106197693
>>106192704
What's with the fucking - everywhere
Anonymous No.106197831 >>106198378
>>106192653 (OP)
>>106192704
Is that nigga really using em dashes for negative/minus signs ahahahahahahaha

>>106197096
lmfao
Anonymous No.106198225
>>106192653 (OP)
if you dont like this agi you must be a chinese shill
Anonymous No.106198378
>>106197831
no, that's the correct minus-sign. it has the same length, width, and height as the horizontal bar of the plus-sign.
Anonymous No.106198453
>>106192665
>she
>is actually wrong
holy fuck dude get ahold of yourself
Anonymous No.106199187
>>106192961
>slop generator
>advanced
Anonymous No.106199231
I guess v5 really is even more stupid

Maybe trying to run millions of python shitscripts gets costly
Anonymous No.106199238
>>106192653 (OP)
HAHAHAHAHA
Anonymous No.106199262
OpenAI is just a mega scam huh
Anonymous No.106199285
>>106196856
Yep the ai relies on llm data 99% of the time and will only use actual processing power to calculate math when asked to, only a matter of time until it begins to speak in ebonics too

I already created a personality for mine that speaks in ebonics
Anonymous No.106199483
>>106192653 (OP)
Why so negative, it starts to think which is what really matters
Anonymous No.106199517 >>106200226
>>106195599
>The humans just need to put some soul into it to get the stories started
Why would any human do this when they can just invest that soul into making something exactly like they want?

AI is the new CG, a handy tool for soulless bean counters to get "good enough" background assets for cheaper. CG never came CLOSE to the soul that a guy shining a flashlight through a garbage bag managed to achieve, AI isn't even going to come close to CG. But it'll be in everything, because the bean counters only care about profit, and the masses will eat up any slop.
Anonymous No.106200226 >>106204675
>>106199517
I was at a concert and the video playing behind the band was obviously AI generated and nobody cared. One of them was really bad: it started out as a customized 80s van driving through a desert, but then the video changed to using a different genned image per frame and it was just all over the place. Van model changing between frames (even turning into a van-shaped delorean and a van-shaped ecto-1, with garbled text on the logo to boot), background constantly changing, zero consistency. And it seemed like I was the only one to notice
Anonymous No.106200426
>>106192709
sure
lmao
Anonymous No.106200530 >>106202737
>>106192653 (OP)
grok 3
Anonymous No.106200573 >>106201099
>>106193026
Here you go. Just did this while sitting on the shitter. This is Gemini 2.5 pro.
Anonymous No.106200587
Just needs more time…
Anonymous No.106200653
>>106192653 (OP)
>>106192704
>multiple phds in your pocket
lmao
Anonymous No.106200688
>>106196758
Maybe it's worth using a more compute-intensive and less bandwidth-intensive architecture. It would utilize GPUs better.
Anonymous No.106200751 >>106200827 >>106205614
/g/ is boring.
Anonymous No.106200774
>>106192653 (OP)

ah yes take the pic before there is only ashes
Anonymous No.106200800
>>106192704

there were cars with carburetors and rotating spark distributor one sunfart and those are the chosen ones
Anonymous No.106200809
>ai development just so happened to peak at average indian intelligence level

really makes you think
Anonymous No.106200827 >>106200976
>>106200751
https://chatgpt.com/share/68968240-40a4-8009-b9b1-60c00c4f216b
Because of the induced randomness in LLMs, it's not guaranteed you will always get the same answer from the same question.
Just because you got the correct result when you asked doesn't mean others were lying or anything like that. They're just unreliable.
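In code terms, roughly this (toy numbers, not anything from an actual OpenAI model): the decoder samples from a probability distribution over tokens, so any token with nonzero probability can come out on a given run.

import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=1.0):
    # Scale logits by temperature, softmax into probabilities, then sample one index.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical logits for two candidate answers: index 0 = "0.79", index 1 = "-0.21".
logits = [2.0, 1.0]
print([sample_next_token(logits, temperature=0.8) for _ in range(10)])
# Mostly 0s, but some 1s: same prompt, different answer across runs.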
Anonymous No.106200963
The current tech will never solve this problem. They can RLHF it all they want, but they can't fix the FACT that the AI is inherently biased towards generating an average and generic answer. This is how the technology works.
Anonymous No.106200976 >>106200995
>>106200827
Thanks for proving my point.
Anonymous No.106200995
>>106200976
Not sure your point.
Anonymous No.106201099 >>106201524
>>106200573
The order of the numbers is still wrong though, it's 5.11 - 5.9, not the other way around.
Anonymous No.106201143
>>106194039

i have this toaster europoor be seen stips turn orange hue for three seconds in nonevent situation you buy stuff from wendor its good
Anonymous No.106201524
>>106201099
No it's not. It got it correct. You'd subtract 5.11 from the right hand side and thus have negative 5.11 on the left afterwards.
5.9 - 5.11 is 0.79.
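Quick sanity check with Python's Decimal so binary float noise doesn't muddy it:

from decimal import Decimal

print(Decimal("5.9") - Decimal("5.11"))   # 0.79, the answer it should have given
print(Decimal("5.11") - Decimal("5.9"))   # -0.79, so even flipping the order doesn't get you -0.21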
Anonymous No.106202152
>>106192653 (OP)
mistral gets it wrong and there's no way to convince it that 5.11 is less than 5.9
in the chain of "thought" you can actually see 0.79, but it keeps thinking it's wrong:
So, the key takeaway is that when you subtract a larger number from a smaller one, the result is negative, which accurately reflects that 5.11 is indeed larger than 5.9 when comparing their numerical values.
Let's correct the error in initial calculations.
Let's set up 5.9 and 5.11 properly:
5.9 can be represented as 5.90 for clarity.
5.90 - 5.11 = 0.79
Wait, that isn't correct because 5.90 - 5.11 =
-0.21, as the subtraction is reversing the order effectively.
Let me recalculate:
Anonymous No.106202195 >>106203415
>>106193728
second AI winter can't come soon enough
Anonymous No.106202326
It's just a bug
Anonymous No.106202737
>>106200530
I always use grok for math stuff and chatgpt for simple code clean up

Chatgpt is too circumcised for anything serious
Anonymous No.106202791
Bump
Anonymous No.106203407
>>106192653 (OP)
Just say "try harder" and it will get it right.
Anonymous No.106203415 >>106203588
>>106202195
Leftism is losing
Anonymous No.106203588
>>106203415
Based brownchad
Anonymous No.106203797
"Hey computer, what is token3451 token59015 - token51466 token12534?"
This is literally what you are asking it. It doesn't see digits, it sees tokens which comprise more than one digit. I'm not saying this to defend it; OpenAI should have done better reinforcement learning to teach the model to always use a calculator tool for any arithmetic, no matter how simple. But when you ask a next-TOKEN prediction machine to do tasks involving sub-token units, don't be surprised when there are weird failure modes.
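You can see the token soup yourself with the tiktoken package (o200k_base here is just a stand-in encoding, not necessarily what GPT-5 actually uses; the point is only that digits get glued into multi-character tokens).

import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # stand-in encoding, not claimed to be GPT-5's
for text in ["5.9", "5.11", "5.9 - 5.11"]:
    ids = enc.encode(text)
    # Decode each token id back to its text piece to see how the digits got chunked.
    pieces = [enc.decode_single_token_bytes(i).decode("utf-8") for i in ids]
    print(f"{text!r} -> {ids} -> {pieces}")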
Anonymous No.106203892
It works if I use the thinking model, why did you lie to me anon ?
Anonymous No.106203924
>>106192653 (OP)
What I find interesting is looking at how the language model is interpreting the numbers: it's taking the decimal point and splitting the numbers into two halves. Not sure how it got to -0.21 though.
Anonymous No.106203996
>>106192727
sipping 500 birrion dorra a day, you could buy tiny, tiny, birrion dorra chat manhatten
Anonymous No.106204053
>>106192653 (OP)
The answer is clearly -0.2
-0.21 is 19 minor versions too low
Anonymous No.106204675
>>106200226
did you physically show that you noticed in any way? would someone observing you realize that you noticed? maybe a lot more people noticed and were annoyed. but since it was just some background to the main event they didn't let that ruin the concert for them and just ignored it.
Anonymous No.106204793
Gemini sisters, we're winning
Anonymous No.106205587
>>106196758
Then which models are currently most adaptive to new research?
Anonymous No.106205614
>>106200751
I literally just tried it and got -0.21. It's retarded, hit or miss.
Anonymous No.106205688
>>106192653 (OP)
Looks fine

t. physicist
Anonymous No.106205943
>>106193017
I hope it keeps going. Wanna see a bunch of Manhattans steal water and electricity from the cattle. Let's see how much they put up with before revolting.
Anonymous No.106206208
>>106192653 (OP)
looks like they need another 50 trillion for the data centers