
Thread 106315907

Anonymous No.106315907 [Report] >>106315942 >>106317365 >>106322609 >>106322704 >>106323084 >>106324943 >>106326387
>we screwed up
>now give us trillions
kek
Anonymous No.106315942 [Report]
>>106315907 (OP)
Agibros... it's over.
Anonymous No.106315949 [Report] >>106316200
i am curious how more compute is supposed to bring about AGI. isn't LLM hallucination the biggest problem?
Anonymous No.106316200 [Report] >>106316242 >>106321831 >>106323147
>>106315949
LLMs can't be AGI.
All they can be is a word pattern approximator.

And AGI needs to be tangible, not pseud.
A person can have tangible, real knowledge.
An LLM can only pick an approximation.
LLMs can only be pseud.
They cannot be AGI. The answer to the problem is not more LLM compute, but figuring out what type of program(s) need to be made and combined, with LLM functions as one component, for an actual AGI.
Anonymous No.106316242 [Report] >>106316479 >>106316759 >>106316848 >>106321851 >>106321902 >>106322635 >>106322690 >>106323099 >>106323103 >>106324646 >>106328594
>>106316200
Anonymous No.106316479 [Report] >>106323271
>>106316242
Uh huh, bullshit. LLMs are nothing but probability matrix functions. Run one locally, set the temperature to zero so it always takes the highest-probability token, and it can do nothing but repeat the same response to the same input.
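If you want to see it yourself, something like this does it with HF transformers (just a sketch; gpt2 is a stand-in for whatever you run locally):
[code]
# greedy decoding (do_sample=False) always takes the argmax token,
# so the same prompt produces the same completion every single run
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The meaning of life is", return_tensors="pt")
out1 = model.generate(**ids, do_sample=False, max_new_tokens=20)
out2 = model.generate(**ids, do_sample=False, max_new_tokens=20)
assert tok.decode(out1[0]) == tok.decode(out2[0])  # identical, always
[/code]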

LLMs cannot become an AGI. Not without being just one part of a larger program with function-specific domains that aren't LLMs.
I've had this conversation with gpt 4.5 multiple times. 5 should have my criticisms in its training set.
Anonymous No.106316759 [Report] >>106316815 >>106321764 >>106321851 >>106321902
>>106316242
Yeah, this paragraph in particular is a bunch of bullshit.
Anonymous No.106316815 [Report] >>106316850 >>106317032
>>106316759
but it actually does all that stuff. you can test it yourself. or do you not believe your own lying eyes?
Anonymous No.106316848 [Report]
>>106316242
>A Large Language Model (LLM) is far more than a probability tree. While it predicts tokens, it does so by processing language through a construct that probabilistically predicts the next token based on the input
The thing essentially just repeated itself, what a load of tripe.
The second paragraph is just romanticizing attention.
Anonymous No.106316850 [Report]
>>106316815
>but it actually does all that stuff. you can test it yourself. or do you not believe your own lying eyes?
Anon, I probably know more about LLMs than 99% of this board. Just look at how LLMs manage context memory: it's done in the crudest way possible. The session history is just concatenated, with separator strings between each section, which is why the longer the session gets, the slower and more expensive each following prompt becomes. It's a probabilistic machine where each vector is enriched with every other vector in the context window. It is the definition of slop.
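The whole "memory" boils down to something like this (toy sketch; the separator strings are made up, real models each use their own format):
[code]
# toy sketch of chat "context memory": the entire transcript is
# re-concatenated with separator strings on every turn, so each
# prompt is longer, slower, and more expensive than the last
history = []

def build_prompt(history, user_msg):
    history.append(("user", user_msg))
    parts = [f"<|{role}|>\n{text}" for role, text in history]
    parts.append("<|assistant|>\n")
    return "\n".join(parts)  # the full session is re-sent every time

p1 = build_prompt(history, "hello")
history.append(("assistant", "hi"))
p2 = build_prompt(history, "tell me more")
assert len(p2) > len(p1)  # context only ever grows
[/code]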
Anonymous No.106317032 [Report] >>106319479 >>106321864
>>106316815
LLMs, at least as they exist now, are fundamentally flawed. They can't have anything like active memory (short- or long-term), recursive information updating, or situational awareness.

Throw an LLM some string it wasn't trained for and it will miss the user's intent entirely. It could be a riddle or joke an 8-year-old would understand. The LLM, after failing and then having it explained, will be all
>oh haha I get it now clever lol ur such a good user keep going tee hee
But the system failed to connect disparate ideas because it has no capacity to do so.

They aren't intelligence. They are word-task completion programs.
Whatever fixes this requires the program to be something fundamentally more than an LLM: something with LLM functionality inside it, but not itself an LLM.
Anonymous No.106317151 [Report]
>barely squeeze out $500 billion from investors
>now try to ask them for trillions
openaibros... it's over
Anonymous No.106317265 [Report] >>106317347
I STILL REMEMBER WHEN EVERY OTHER RETARD ON THIS SITE WAS TELLING ME THAT GPT 4 OR 5 WAS GONNA BE AGI I TOLD YOU CLOWNS ITS JUST A FUCKING STATISTICAL REGRESSION
ENJOY HOLDING YOUR BAGS, I STILL HAVE SOME PALANTIR TO SELL YOU
Anonymous No.106317347 [Report] >>106317509 >>106322666
>>106317265
be humble in victory, and graceful in defeat.
Anonymous No.106317365 [Report]
>>106315907 (OP)
Worked for the banks and airlines
Anonymous No.106317453 [Report] >>106317957
Will he go down in history as the biggest grifter to ever do it?
Anonymous No.106317509 [Report]
>>106317347
Never give advice unless asked. The wise won't need it, the fool won't heed it.
Anonymous No.106317957 [Report]
>>106317453
Citigroup received a $2.9 trillion bailout before. Pretty damn hard to beat the banks.
Anonymous No.106319466 [Report]
>we
Anonymous No.106319479 [Report] >>106322659
>>106317032
>LLMs at least as they exist now are fundamentally flawed.
Yeah, they're topologically wrong; they construct a hyperspatial manifold in configuration space with the wrong shape. It lacks holes (which represent negations; how to handle "NOT" and "DON'T" and so on) because the training methods are all differentiable, smooth transforms.
There's a lot of stuff downstream of this fundamental problem. For example, you can't have a world model because you can't reliably distinguish the world model from the proposition you're testing against it. All attempts to keep things inside guard rails just make it worse, precisely because they prime the LLM to work in the way that isn't desired, because it cannot process the negation. It'll bullshit you indefinitely, but it won't ever act logically.
This is so utterly alien to how people think that almost nobody gets it. (Of course, handling these concepts is also harder work for people too; we have only a limited number of special neurons for that stuff. Some people are really limited in this respect.)
Anonymous No.106321733 [Report]
Asking for more money?
Anonymous No.106321764 [Report] >>106321878
>>106316759
>IT DOESN'T JUST X....

it's so predictable bros
Anonymous No.106321831 [Report]
>>106316200
It's worse than that: they are purposefully trying to deceive us (trainers trying to make us believe it's smarter than it really is). Large language models aren't able to gauge confidence in their own answers (certain, doubtful, or uncertain). Repeated questioning leads models to invert their prediction even when the initial answer was accurate. To mitigate this, on some subjects such as math, trainers make sure the model will reject a user questioning its output, even when the output is incorrect. And for more open-ended statements, plausibility is 'salted' into predictions, words like 'maybe' or 'possible', to create the impression of nuanced understanding despite the model's reliance on probabilistic generation (it really doesn't know whether it's 'maybe' or not).
Anonymous No.106321851 [Report]
>>106316242
>>106316759
what a load of bullshit
Anonymous No.106321864 [Report]
>>106317032
>The LLM after failing then being explained to will be all
>>oh haha I get it now clever lol ur such a good user keep going tee hee
That is intentional post-training behavior. It has nothing to do with the inherent architecture of LLMs.
Anonymous No.106321878 [Report]
>>106321764
That was my issue looked obvious to me then does now mess with the bull bully the retard over his dumb rape software
Anonymous No.106321902 [Report]
>>106316242
>>106316759
Challenge it on these statements and watch it backpedal.
Anonymous No.106322609 [Report]
>>106315907 (OP)
Funny that it isn't the accuracy or capability of the agent that's killing it, but normies getting the ick from not enough ass-kissing.
Anonymous No.106322635 [Report]
>>106316242
This is about Noam Chomsky's work on linguistics. Just because the tokenizer follows grammar doesn't mean the LLM understands.
So yeah, there's a bit extra added to the probability matrix so the output isn't total gibberish.
Anonymous No.106322659 [Report]
>>106319479
The way I like to visualize it: all the known tokens are pebbles, and the prompt is how the agent/assistant constructs a mountain out of them. The inference parameters are the initial conditions for a "ball" that rolls down the resulting mountain. The trace of the ball's path is the output.
This does leave the potential for constructing autopoietic paths or infinite loops, but since chat interfaces are always misinterpretations of the Turing test, and the agent/assistant has no subjective sense of the time between prompts, there functionally cannot be any stable subjective agency at all under the chatbot interface model.
The point about the world model is very important too: the model can only project semantics into its own latent space. It cannot actually reference outside data at all, only slices of data that already occur in the training set. New semantic connections are impossible.
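In code, the "initial conditions for the ball" part is roughly this (toy logits, not a real model):
[code]
import numpy as np

# toy version of the ball on the mountain: the logits are the
# landscape, the sampling parameters are the initial conditions
rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])  # made-up next-token scores

def pick(logits, temperature):
    if temperature == 0:                   # greedy: same path every time
        return int(np.argmax(logits))
    p = np.exp(logits / temperature)
    p /= p.sum()                           # softmax over scaled scores
    return int(rng.choice(len(logits), p=p))

print([pick(logits, 0.0) for _ in range(5)])  # [0, 0, 0, 0, 0]
print([pick(logits, 1.5) for _ in range(5)])  # picks vary call to call
[/code]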
Anonymous No.106322666 [Report]
>>106317347
>Expecting concepts like humility or grace to be anything less than completely alien to the average /g/ poster
Anon, I...
Anonymous No.106322673 [Report]
Do you guys think GPT-5 is even slightly better than GPT-4? I think it actually makes even more mistakes when coding.
Anonymous No.106322678 [Report]
Beep boop beep.
Generative grammar is kinda dumb.
Anonymous No.106322690 [Report]
>>106316242
It's not completely wrong. Some hidden neurons light up in specific contexts, which can be seen as an internal representation. You might get a better intuition for this from NNs learning edge detection when classifying images.
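You can poke at those hidden activations yourself with a forward hook (sketch; gpt2 and block 3 are arbitrary choices):
[code]
# sketch: hook one transformer block and look at which hidden units
# activate for a given input (gpt2 / block 3 picked arbitrarily)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

acts = {}
def grab(module, inputs, outputs):
    acts["h3"] = outputs[0].detach()  # hidden states out of block 3

model.transformer.h[3].register_forward_hook(grab)
with torch.no_grad():
    model(**tok("the cat sat on the mat", return_tensors="pt"))
print(acts["h3"].shape)  # (1, seq_len, 768): one vector per token
[/code]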
Anonymous No.106322704 [Report] >>106322768
>>106315907 (OP)
>now give us trillions
That's not what he meant, retard.

His company is going to spend money on its own infrastructure, yes. But billions, not trillions.
The companies that would have to invest massively are the utility companies (water, electricity, ...), all across the world.
Anonymous No.106322768 [Report]
>>106322704
All that money and infrastructure is going to go to waste if they don't find a replacement for this guy's brain.

There's no way they're going to milk anything from him since he's pushing 100.
Anonymous No.106323084 [Report] >>106324147
>>106315907 (OP)
Bruh who even has trillions of dollars what is he smoking
Anonymous No.106323099 [Report]
>>106316242
Source of this screenshot?
Because based on my research that's untrue: LLMs replicate patterns; they don't understand or assign meaning.
Anonymous No.106323103 [Report]
>>106316242
Anonymous No.106323112 [Report]
AGI is not happening for the next 100 years. Just how gullible can you people be?
Anonymous No.106323147 [Report] >>106324699
>>106316200
This.
AGI may one day use SOME LLM elements (and likely forced ones, so companies can justify all the money they sunk into them), but an AGI won't come about VIA LLMs. They aren't thinking algorithms; they just take large quantities of data and repeat patterns found in that data set.
Anonymous No.106323271 [Report] >>106323418
>>106316479
> Duuhh bullshit. If you deliberately change the settings so the model runs deterministically, you get the same result every time you run it. See how it breaks when i deliberately fuck it up, therefore i am correct
Anonymous No.106323418 [Report]
They should repurpose LLMs into reliable search engines, ones that provide sources for each statement, with the source being 100% reliable.
Add an option for writing unreliable code and you've covered 95% of use cases.
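Toy sketch of the idea, basically RAG with citations (the corpus and the word-overlap scoring are stand-ins, not a real index):
[code]
# toy "search engine with sources": retrieve a passage, answer only
# from it, and attach the source id to every statement
corpus = {
    "doc1": "The GIL serializes Python bytecode execution.",
    "doc2": "CPython 3.13 ships an optional free-threaded build.",
}

def retrieve(query):
    # stand-in scoring: shared words (a real system would use embeddings)
    overlap = lambda text: len(set(query.lower().split()) & set(text.lower().split()))
    return max(corpus.items(), key=lambda kv: overlap(kv[1]))

doc_id, passage = retrieve("does python still have a GIL")
print(f"{passage} [source: {doc_id}]")  # every claim carries its source
[/code]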

>>106323271
>let's add more randomness to pretend the system is intelligent
You're basically praying the system can do what you can't.
Anonymous No.106324147 [Report]
>>106323084
This guy on the right
Anonymous No.106324646 [Report]
>>106316242
>It understands
Not really, that's just a convenient side effect of the text prediction capabilities.
Anonymous No.106324699 [Report] >>106327203
>>106323147
AGI and LLMs are so fundamentally different in concept that I can't imagine there will be any functional transfer beyond broad concepts like "training data".

The whole ghost-in-the-machine thing is a total pipe dream that'll never work. We won't get AGI until we make the significant technological leap to biocomputing, which itself requires a mastery of genetics so high that one rogue individual could just engineer a virus to kill off everyone who isn't as cool as him.
Anonymous No.106324943 [Report]
>>106315907 (OP)
trillions of whose dollars?
Anonymous No.106326387 [Report]
>>106315907 (OP)
>JUST A FEW TRILLION DOLLARS MAN
>I SWEAR ITS GONNA KNOW HOW MANY LETTERS ARE IN...UUUHHH...AT LEAST 10 WORDS IF WE GIVE IT 60% OF THE WHOLE US POWER GRID!!!!
Anonymous No.106327203 [Report]
>>106324699
Anonymous No.106328594 [Report]
>>106316242
there was a paper recently showing that all these models learn very fractured, "spaghetti" representations that are surface-level and essentially garbage underneath