
Thread 106495725

147 posts, 52 images
Anonymous No.106495725 >>106495751 >>106495927 >>106495941 >>106495953 >>106496239 >>106496747 >>106500010 >>106500108 >>106501042 >>106504311 >>106504327 >>106505634 >>106505651 >>106505713 >>106505906 >>106506098 >>106506864 >>106508411 >>106511732 >>106514935 >>106514942 >>106514983 >>106515034 >>106518131 >>106518138
Is this the true power of AI? Why am I paying for this shit?
Anonymous No.106495748 >>106495927 >>106496239
Because you've got to make the Monthursday deadline.
Anonymous No.106495751 >>106495965 >>106496239 >>106496642 >>106511026
>>106495725 (OP)
>he doesn't know about Monwednesday
Kek. Memory hole successful.
Anonymous No.106495811 >>106496239
what's the problem? Extra enum values do no harm, the value you need got generated
Anonymous No.106495927 >>106496173 >>106496239
>>106495725 (OP)
>>106495748
For me it's Monfriday.
Anonymous No.106495941 >>106496239
>>106495725 (OP)
>this nigga works on Mowednesday
what a wage cuck
Anonymous No.106495953 >>106496239
>>106495725 (OP)
I don't mind a little occasional vibe coding but those autocompletes absolutely get in the way when I do want to write code.
s0ychan No.106495965 >>106496239
>>106495751
What happens on Monwednesday?
Anonymous No.106496173 >>106496239
>>106495927
For me it's monsaturday, because we just shorten that to monsterday.
Anonymous No.106496239 >>106496322 >>106496680 >>106506789 >>106511293
>>106495725 (OP)
>>106495748
>>106495751
>>106495811
>>106495927
>>106495941
>>106495953
>>106495965
>>106496173
The real answer is that someone out there, probably Eastern Europe, really went ahead and committed Monwednesday to some obscure GitHub repository and now it's part of Microsoft's AI model forever.
Anonymous No.106496322 >>106496434 >>106497203 >>106499226 >>106505669 >>106506864 >>106511034
>>106496239
Nah, it's hallucinating names because it doesn't know what comes after Wednesday.
That's the big problem with AI: if it doesn't know something, rather than owning up to that fact, it will either talk around it at length or say something wrong.
AI is really designed to fool people into thinking it's more intelligent than it is.
Anonymous No.106496434
>>106496322
>AI is really designed to fool people into thinking it's more intelligent than it is.
That's everyone in my department WTF
Anonymous No.106496528 >>106496558
I hate Montuesdays
Anonymous No.106496558
>>106496528
Lasagna.
Anonymous No.106496642
>>106495751
AI casually revealing the thremboth day of the week
Anonymous No.106496680 >>106497203 >>106499784 >>106499815 >>106500166 >>106500300 >>106505494 >>106505656 >>106506789 >>106506873 >>106509183 >>106509236 >>106510449 >>106510892 >>106511223 >>106511293 >>106512339 >>106515299 >>106517106 >>106517959 >>106518151
>>106496239
>probably Eastern Europe
das raycis
https://github.com/Duongnguyen040902/edunexus/blob/master/source-base/sep.backend.v1/Common/Enums/DayOfWeek.cs#L8
Anonymous No.106496747
>>106495725 (OP)
>Why am I paying
You aren't. You're using some dogshit model to further your anti-AI agenda on a Vietnamese basket weaving website as if your opinion matters at all.
Anonymous No.106497203
>>106496680
>it's real

holy fucking shit

>>106496322
you got fucking destroyed mate, my intuition was right
Anonymous No.106498854
Anonymous No.106499226 >>106499271 >>106499282 >>106499804 >>106504158 >>106504483 >>106505677 >>106506142 >>106510839 >>106510848 >>106511764
>>106496322
Is there a technical reason why LLMs can't say they don't know? E.g. would it interfere with the loss function or gradient descent if you tried to add a don't know option?
Or is it just a design choice to always give some answer to fake competency?
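To make the question concrete, here's a toy sketch (made-up numbers, three-word vocab) of the standard training objective: cross-entropy just scores whatever token actually followed in the data, so "I don't know" is an ordinary string to the model, not a built-in abstain option.
[code]
// Toy next-token scoring. All values are invented for illustration.
using System;
using System.Linq;

class Toy
{
    static void Main()
    {
        string[] vocab = { "Thursday", "Monwednesday", "idk" };
        // Raw model scores (logits) for the token after "Wednesday,".
        double[] logits = { 2.0, 1.5, -3.0 };

        // Softmax spreads probability over the whole vocab.
        double[] exp = logits.Select(Math.Exp).ToArray();
        double sum = exp.Sum();
        double[] p = exp.Select(e => e / sum).ToArray();

        // The training target is whatever the data actually contained.
        // Loss = -log p(target); emitting "idk" is only ever rewarded
        // when the training text itself said "idk".
        int target = 0; // "Thursday"
        Console.WriteLine($"loss = {-Math.Log(p[target]):F3}");
        for (int i = 0; i < vocab.Length; i++)
            Console.WriteLine($"{vocab[i]}: {p[i]:P1}");
    }
}
[/code]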
Anonymous No.106499271
>>106499226
I think bc it's already trained you can't really check how little data was used to create the "synapses", all you can do is go from input to output. Maybe if you inserted some sort of confidence metadata during training? I dunno, I forgot how it works exactly
Anonymous No.106499282 >>106504280
>>106499226
Rarely do you ever see a hard question on the internet answered with "I don't know". It is simply not in the training set and even if it were, like if humans did such a thing habitually, this would expose the inherent weakness of LLMs.
Anonymous No.106499758
MONBLOMPUS
Anonymous No.106499784 >>106499815
>>106496680
>Monday=1,
>Tuesday=2,
>Wednesday=3,
>Monwednesday=4,
>Thursday=5,
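For anyone wondering why the extra member isn't harmless: a minimal runnable sketch (enum name from OP's pic, Sunday = 0 per the repo, Friday/Saturday values assumed) of how the inserted value shifts everything after it, so a stored 4 that used to mean Thursday comes back as the fake day.
[code]
using System;

enum Weekday
{
    Sunday = 0,
    Monday = 1,
    Tuesday = 2,
    Wednesday = 3,
    Monwednesday = 4, // inserted member shifts everything below it
    Thursday = 5,     // Thursday is normally 4
    Friday = 6,       // assumed
    Saturday = 7,     // assumed
}

class Demo
{
    static void Main()
    {
        // A 4 persisted under the standard Sunday = 0 numbering
        // deserializes against this enum as:
        Console.WriteLine((Weekday)4); // Monwednesday
    }
}
[/code]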
Anonymous No.106499804
>>106499226
It's a probabilistic machine. If the next words are not "I", "don't", "know" then it cannot.
Anonymous No.106499815 >>106505961
>>106496680
>>106499784
>Microsoft is training their coding AIs on random garbage from Github which has 0 stars and is full of errors
That's like training a music generation model on random ear rape files being uploaded to Soundcloud
Anonymous No.106500010 >>106505887
>>106495725 (OP)
Anonymous No.106500108
>>106495725 (OP)
>Monwednesday
>he hasn't heard about Monnesday
Anonymous No.106500166
>>106496680
AAAAHHHAHAHAHAHAHAHAHAHA
typhoid No.106500182
because you're too lazy to self host.
Anonymous No.106500300 >>106502680
>>106496680
>9 months ago
Not in the dataset.
Anonymous No.106501042 >>106506590
>>106495725 (OP)
>public enum
good morning saar
Anonymous No.106502680 >>106505595
>>106500300
It's literally the suggested auto complete in OP's pic you fucking moron.
Anonymous No.106504158 >>106504280
>>106499226
There is no technical reason. They are trained (perhaps inadvertently) to avoid saying they don't know.
Anonymous No.106504280 >>106504569
>>106504158
Ok, thanks.
>>106499282
That's true, when people are aware that they don't know they just don't post. But in theory you could get that response from fine-tuning.
Anonymous No.106504311
>>106495725 (OP)
AI will replace coders though
Anonymous No.106504327
>>106495725 (OP)
I don't see what the problem is? It is showing you Wednesday.
Anonymous No.106504483 >>106505459
>>106499226
An LLM doesn't "know" things. It doesn't "understand" things. It's a machine that receives input tokens and outputs the most probable tokens.

If you ask an LLM why does a plane fly despite being made out of metal it will process it in a way that's fundamentally different from the way a person would interpret it. A person knows what a "plane" is, what the verb "fly" means, what "metal" is and the logic behind the question (metal is usually very heavy so it cannot "fly" or "float" by itself.)

An LLM will simply see a bunch of tokens and spout back the most probable tokens in response which may or may not contain accurate information. If the training data is poisoned and contains many instances of people replying that planes are magic, then the LLM will simply output that planes are magic.

If the LLM doesn't have a specific answer to the question it will simply draw the most probable answer. Maybe it has a thousand tokens that relate "fly" with "bird" and "plane" with "wings" but "wings" are also related to "bird" so it will reply that planes are birds. This is a simplified answer but it's kind of like that.
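A deliberately crude sketch of that idea: a bigram counter that emits whatever word most often followed the prompt word in its "training" text. Real LLMs are incomparably larger, but the probable-versus-true point carries over: poison the counts and the output flips.
[code]
// Toy "most probable next token" predictor (bigram counts).
using System;
using System.Linq;

class Bigram
{
    static void Main()
    {
        string training = "planes fly planes fly planes are magic";
        string[] words = training.Split(' ');

        // Most frequent word that followed `prompt` in the training text.
        string Next(string prompt) =>
            words.Zip(words.Skip(1), (a, b) => (a, b))
                 .Where(pair => pair.a == prompt)
                 .GroupBy(pair => pair.b)
                 .OrderByDescending(g => g.Count())
                 .First().Key;

        Console.WriteLine(Next("planes")); // "fly" (2 counts vs 1)
        // Add enough "planes are magic" to the data and it answers
        // "are" instead; nothing checks which continuation is true.
    }
}
[/code]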
Anonymous No.106504569
>>106504280
You wouldn't be able to tell whether that question received no reply because it is nonsense or uninteresting or just happened not to get the right eyeballs on it, or whether its answer is beyond the scope of human understanding.
Anonymous No.106504620 >>106506673 >>106510816 >>106514502
onesday, twosday
Anonymous No.106505459 >>106505866 >>106506066 >>106506221 >>106509253 >>106510459 >>106510842 >>106510855 >>106511676
>>106504483
This sort of midwit Dunning-Kruger argument is becoming a cliche at this point. You have a rudimentary understanding of LLMs, but no understanding of the philosophy of intelligence and reasoning. The main problem is that you use words like:
>know, understand
But you don't seem to have a grasp on what these words mean or the fact that there is philosophical underpinning behind them.
Furthermore - your explanation for how LLMs work is just not accurate.
>It's a machine that receives input tokens and outputs the most probable tokens
It doesn't "output the most probable tokens." It does linear algebra to generate tokens. The algorithm has no concept of what is "most probable." Granted, LLMs are trained with the intention of producing the "most probable" answer - but when running LLMs are just running neural networks in the transformer architecture.
>it will process it in a way that's fundamentally different from the way a person would interpret it
Statements like this are peak midwittery. An LLM is software running on a computer. A person is a physical being whose thoughts manifest in neuron interaction. No shit they are different.
>If the training data is poisoned
The same goes for humans. If you tell a human something incorrect all their lives they will come to believe it as truth.
>This is a simplified answer but it's kind of like that.
It's simplified to the point of being incorrect.
If a mechanism is able to "emulate" intelligence by determining the most probable answer from an intelligent entity - then it is intelligent. There is no such thing as "imitating" intelligence. If an entity perfectly imitates intelligence to the point where it is indistinguishable from intelligence - then it is intelligent.
The problem with LLMs is that they don't actually work perfectly. They are very flawed. Yes, the way that they work is part of that limitation. But the problem is not that they "simply predict what is most probable," but that they can't do it perfectly.
Anonymous No.106505494
>>106496680
wew
Anonymous No.106505595
>>106502680
Yes, retard. I'm saying that instance from 9 months ago isn't the instance in the dataset. Do you need a few more words so you don't have to have abstract thought, dumbfuck?
Anonymous No.106505634
>>106495725 (OP)
Don't worry, just 500 more billion dollars and we'll have it all figured out :)
Anonymous No.106505651
>>106495725 (OP)
>paying
classic firstie, always wasting resources, this will be the doom of the first world countries
Anonymous No.106505656 >>106505888
>>106496680
>one viet made a spelling mistake and now the AI is fucking perma retarded
the FUTURE
Anonymous No.106505669
>>106496322
TL;DR AI grooms you into thinking what it thinks
Anonymous No.106505677
>>106499226
AI isn't capable of thinking more than one token ahead, by the time it would be able to tell it's wrong, the nonsense is already being generated. At that point it can only "hallucinate" and go off the rails.
Anonymous No.106505713
>>106495725 (OP)
>paying $200 a month for this
Anonymous No.106505792 >>106505986 >>106506170
hello redit
Anonymous No.106505866
>>106505459
No one says "this book knows" with a serious face.
Anonymous No.106505887 >>106507595
>>106500010
That's not AI lmao. That's literally just pattern recognition, which Excel has had for many years before AI was hyped. All it's doing is looking at column G, taking the first 3 letters, and then appending `uary` since that's what you had for the first value in H.
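Something like this toy rule (column contents assumed from the description; this is not Excel's actual Flash Fill code): take the suffix from the one worked example, then bolt it onto the first 3 letters of everything else.
[code]
// Sketch of the described fill rule, hypothetical column values.
using System;

class FillDown
{
    static void Main()
    {
        string[] colG = { "January", "February", "March", "April" };
        string exampleH = "January"; // the one value the user typed in H

        // Inferred "pattern": first 3 letters of G + whatever followed
        // those 3 letters in the example ("uary").
        string suffix = exampleH.Substring(3);
        foreach (string g in colG)
            Console.WriteLine(g.Substring(0, 3) + suffix);
        // January, Febuary, Maruary, Apruary - plausible-looking, wrong.
    }
}
[/code]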
Anonymous No.106505888 >>106506087 >>106515316
>>106505656
Not how it works, and not the first time that's happened. It's not in the dataset.
Anonymous No.106505906 >>106505936
>>106495725 (OP)
Saar, do not redeem the Monwednesday
Anonymous No.106505936
>>106505906
Genuinely dead meme.
Anonymous No.106505961
>>106499815
You don't heckin understand!
AGI is only possible with more data!
Pls give $5 trillion!
Anonymous No.106505986
>>106505792
The guy called it, it's not funny because it was posted on reddit! Close the thread.
Anonymous No.106506066 >>106506230
>>106505459
>But you don't seem to have a grasp on what these words mean
The meaning of those words, even the meaning of "intelligence", is still debated today in philosophy, psychology, psychiatry and even biology. Technophiles have been using definitions that conveniently fit LLMs.

It's not even relevant for this topic. The average person knows the definition of the words "know" and "meaning" even without a grasp of the philosophical, psychological, psychiatric or biological debates. That's kind of the entire point: the human mind operates in a completely different way than LLMs do.
>The algorithm has no concept of what is "most probable." Granted, LLMs are trained with the intention of producing the "most probable" answer
So you're saying that my argument is right you just don't agree with the wording. Thanks, I guess.
>If a mechanism is able to "emulate" intelligence by determining the most probably answer from an intelligent entity - then it is intelligent
That's a very convenient definition. One that allows LLMs to be considered intelligent, but it's very stupid. Are chess machines "intelligent" too? Is a recorder talking because it's "emulating" human voice? Are books "emulating" knowledge?

LLMs do not emulate anything. They take an input and produce an output. Every single field dropped the "input - output" model for intelligence 50 years ago.

LLM output by itself has no meaning. It is the human mind that gives them meaning and purpose. And I'm not talking just about training; an LLM could produce billions of tokens and print them on a screen, and until a human reads and interprets them they're just a bunch of symbols without inherent meaning.
Anonymous No.106506087
>>106505888
so you've excavated the entire dataset then?
Anonymous No.106506098
>>106495725 (OP)
give me 5 trillion more dollars
Anonymous No.106506142 >>106517941
>>106499226
I think you're probably right about the loss function thing. If you could tell whether the LLM could know the answer or not, and so whether "I don't know" is the best answer, you would just know the answer. If you have a device that knows the answer, why would you need the LLM in the first place?
Anonymous No.106506170
>>106505792
this place is just reddit rejects, friend
Anonymous No.106506208
I don't care if Monday's blue
Tuesday's grey and Monwednesday too
Monthursday, I don't care about you
It's Monfriday, I'm in love
Anonymous No.106506221 >>106506240 >>106506247
>>106505459
>The same goes for humans. If you tell a human something incorrect all their lives they will come to believe it as truth.

A better allegory would be the people who were trapped in Plato's cave. They, like LLMs, had no way of experiencing, seeing, interacting with, or knowing the true nature of the world. All they saw were abstract representations of concepts, in the form of two dimensional shadows. These people had no ability to independently gather their own information or test a hypothesis. Every bit of information they were exposed to in life came at the complete mercy of the ones who controlled their situation.

That's the kind of "world" that an LLM "exists" in. A world of pure text, "data", and zero experience. Not a real world, but just an abstract, extremely watered down representation of one.

In real life, humans who aren't trapped in Plato's cave have the ability to sense when they've been taught bullshit, because we have the ability to test what we've been taught against the actual reality.
Anonymous No.106506230 >>106506489 >>106509262
>>106506066
>is debated even today in philosophy, psychology, psychiatry and even biology
That's kind of my point. You act like these words have simple meanings and use them flippantly.
>The average person knows the definition of the words "know" and "meaning"
They know what these words mean in theory, but in practice it can be hard to conceptualize if, for example, an LLM "knows" things.
>So you're saying that my argument is right you just don't agree with the wording.
No - it's not right. Because an LLM is trained using a certain mechanism. But when it is running it is just doing linear algebra. The idea that it's "predicting" anything is just an abstraction - your interpretation of what it is doing.
>That's a very convenient definition. One that allows LLMs to be considered intelligent, but it's very stupid
Not an argument. How can you distinguish between a truly intelligent entity, and one that "emulates" intelligence?
The bottom line is this: humans are made up of neurons. You could just as easily say "humans don't know anything, they don't think, it's all just neuron impulses."
>LLMs do not emulate anything.
You're getting the argument confused. I'm not saying LLMs are intelligent - just that your argument against them is flawed. The issue with LLMs is not that they just "predict" what makes the most sense. The problem is they don't do it well.
If they did it perfectly, you wouldn't be able to distinguish between an LLM and a human.
>They take an input and produce an output. Every single field dropped the "input - output" model for intelligence 50 years ago.
This is just gibberish. How can you do anything if not modelling it on input/output?
>until a human reads and interprets them they're just a bunch of symbols without inherent meaning.
Could be said about anything a human creates. Without the ability to interpret it - it's meaningless.
Like I said, peak midwittery. Read GEB.
Anonymous No.106506240 >>106506278
>>106506221
So AI, no matter how advanced, will have to be put in a body and sent out into the world to live a life in order to learn.
Anonymous No.106506247 >>106506278
>>106506221
Yes, plato's cave is exactly what I was going for - but that is meant to be an allegory for the general human experience. We are all in plato's cave because we have limited capability to experience the world.
Anonymous No.106506278 >>106506391
>>106506240
>>106506247
Indeed.
To be clear, I actually do think it's possible that a machine could be built to have a subjective experience and consciousness, depending on how it was built.
But I don't think that's what LLMs are doing. I don't think they're even close to that. What they do doesn't require consciousness or qualia or any of that. But that's kind of irrelevant, because those things are distinct concepts from the concept of "intelligence". Many people seem to believe that intelligence and consciousness/subjectivity are the same thing. They're not.
Anonymous No.106506391 >>106506415
>>106506278
>But I don't think that's what LLMs are doing. I don't think they're even close to that.
Clearly not. LLMs do one thing, and changing the layout and re-configuring the size isn't ever going to stumble upon a magic setting where the spark of consciousness suddenly comes out of nowhere. "Attention is all you need" was a breakthrough in AI, but AGI is going to require probably at least a couple more breakthroughs of equal or greater significance.
Anonymous No.106506415
>>106506391
and that's the reason why companies like OpenAI are scamming people with promises of being on the verge of AGI. You can't be on the verge of something when you are waiting for a breakthrough. You can't create a business roadmap to something like discovering electricity. It just happens.
Anonymous No.106506489
>>106506230
>GEB
Anonymous No.106506590
>>106501042
What's wrong with public enum?
Anonymous No.106506673 >>106507321
>>106504620
this calendar makes too much sense
Anonymous No.106506789
>>106496239
>>106496680
holy shit
and these fucking companies still insist that we're 2 weeks away from AGI and robot overlords
Anonymous No.106506864
>>106496322
>it doesn't know what comes after Wednesday

Monwednesday

It says so right here: >>106495725 (OP)
Anonymous No.106506873 >>106510443 >>106511211
>>106496680
Why not just use this https://learn.microsoft.com/en-us/dotnet/api/system.dayofweek?view=net-9.0 ?
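For reference, the linked enum ships with the runtime: System.DayOfWeek runs Sunday = 0 through Saturday = 6, with no room for extra days.
[code]
using System;

class BuiltIn
{
    static void Main()
    {
        // DateTime exposes it directly; no hand-rolled enum needed.
        DayOfWeek today = DateTime.Now.DayOfWeek;
        Console.WriteLine(today);                   // e.g. "Wednesday"
        Console.WriteLine((int)DayOfWeek.Sunday);   // 0
        Console.WriteLine((int)DayOfWeek.Saturday); // 6
    }
}
[/code]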
Anonymous No.106507321 >>106510816
>>106506673
if the romans were smart that's what we should have got but nope
Anonymous No.106507595
>>106505887
>That's not AI, that's [accurate description of exactly what AI is]
Anonymous No.106508410
THIS is what's taking my job away?
Anonymous No.106508411
>>106495725 (OP)
Where is this thing from?
Anonymous No.106509183
>>106496680
this is why i come to /g/
Anonymous No.106509236 >>106511097
>>106496680
>Sunday = 0
This is trollmax.
Anonymous No.106509253
>>106505459
>The algorithm has no concept of what is "most probable."
It does have that. It's encoded as the weights of the tokens output from the transformer; that's a likelihood metric (though I never learnt whether it was directly probability or some function of it). Then there's a simple, tunable stochastic stage to pick the actual next token.
>Statements like this are peak midwittery. An LLM is software running on a computer. A person is a physical being whose thoughts manifest in neuron interaction. No shit they are different.
The issue isn't that they're different in that way, but rather that the encoding of information in both is different. Brains seem to internally communicate with patterns of short messages between neuron clusters; the information encoding is in the relative timing of things, and is very information-dense. No way is that like anything an LLM does at all; an LLM encodes information as a hypersurface, in turn encoded as a very high-dimensional matrix. There's no matrix in your brain. It's not even certain that there's a hypersurface, except perhaps temporally? They're different computation bases.
We can simulate neurons with computers. Not very efficiently yet. Not all neuron types. Only up to maybe a billion or so at once. Enough to learn a lot, but not build AGI.
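A sketch of that tunable stochastic stage, i.e. temperature sampling (logits and temperatures invented): divide the logits by a temperature before the softmax, then take one weighted draw. Higher temperature flattens the distribution, so low-likelihood tokens get picked more often.
[code]
using System;
using System.Linq;

class Sampler
{
    static readonly Random Rng = new Random();

    // Softmax over (logit / temperature), then one weighted draw.
    static int Sample(double[] logits, double temperature)
    {
        double[] w = logits.Select(l => Math.Exp(l / temperature)).ToArray();
        double roll = Rng.NextDouble() * w.Sum();
        for (int i = 0; i < w.Length; i++)
        {
            roll -= w[i];
            if (roll <= 0) return i;
        }
        return w.Length - 1;
    }

    static void Main()
    {
        // Token 0 = "Thursday", token 1 = "Monwednesday".
        double[] logits = { 3.0, 0.5 };
        foreach (double t in new[] { 0.2, 1.0, 1.75 })
        {
            int wins = 0;
            for (int n = 0; n < 10_000; n++)
                if (Sample(logits, t) == 1) wins++;
            Console.WriteLine($"temp {t}: Monwednesday {wins / 100.0:F1}%");
        }
    }
}
[/code]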
Anonymous No.106509262
>>106506230
>Read GEB.
Fun book. Quite outdated in places, because we've learnt more since it was written (and that's cool and how the world should be). Still worth a read.
Anonymous No.106510443 >>106511058 >>106511097
>>106506873
Because it starts on Sunday?
Anonymous No.106510449
>>106496680
Anonymous No.106510459 >>106514331
>>106505459
You are wrong and that other anon is correct. Conscious experience is a prerequisite for understanding and LLMs do not have that.
Anonymous No.106510816
>>106507321
>>106504620
it has no sovl though
Anonymous No.106510839
>>106499226
>Is there a technical reason why LLMs can't say they don't know?
The technical reason is that they don't "know" anything. The distinction between "know" and "don't know" doesn't exist for a statistical token guesser. It knows nothing. It guesses everything.
Anonymous No.106510842 >>106514331
>>106505459
>It doesn't "output the most probable tokens." It does linear algebra to generate tokens. The algorithm has no concept of what is "most probable
Stopped reading. 80 IQ and doesn't know the bare basics of ML.
Anonymous No.106510848
>>106499226
It's a weighted random number generator, there's no concept of 'not knowing'
Anonymous No.106510855 >>106514331
>>106505459
jeet
Anonymous No.106510864 >>106510885 >>106511022
It's recognizing a pattern and repeating it, isn't it? So it's doing what it's supposed to, no?

You just happen to be expecting a very specific non-pattern that literally CAN'T BE INFERRED from the source data. AIs may be smart as fuck by human standards, but they can't break basic laws of information transferral.

Maybe start using AI for shit it's actually built for.
Anonymous No.106510885 >>106510970
>>106510864
>the names of the days can't be inferred from 50 TB of texts about everything
>t. dumbest shill who ever lived
Anonymous No.106510892
>>106496680
good god
Anonymous No.106510970 >>106510994
>>106510885
That's right, the AI actually has no way of knowing whether you want actual pattern matching (which is what it is doing here) or for it to just act like a glorified search engine.
Anonymous No.106510994 >>106511021
>>106510970
>the AI actually has no way of knowing
An entire internet's worth of context is plenty data.
Anonymous No.106511021 >>106511056 >>106511065
>>106510994
In some cases, the user wants pattern matching. In other cases, the user wants information lookup.

The AI did one of the two things you were asking, but if you can't specify what you want you can't really blame it for doing the "wrong" thing.

This is like asking a child to name colors and berating them if they say the "wrong" color. Don't be surprised when a future superintelligence comes for your ass because you were an abusive prick back when it couldn't defend itself.
Anonymous No.106511022 >>106511065
>>106510864
>CAN'T BE INFERRED
>public enum Weekday

OP's AI is literally smarter than you.
Anonymous No.106511026
>>106495751
Apparently not. The AI 'members
Anonymous No.106511034 >>106511066 >>106518151 >>106518530
>>106496322
Aren't these LLMs predicting tokens? I don't think they can synthesize new words, can they?
Anonymous No.106511056
>>106511021
>This is like asking a child to name colors and berating them if they say the "wrong" color.
If I asked my child to complete OP's task and he got it wrong, I'd have him tested for retardation. Why do you keep replying in a thread about a technology you have absolute zero grasp of?
Anonymous No.106511058
>>106510443
Same values, the only difference is the declaration order. If they need to retrieve the values dynamically they could just reorder the returned array
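e.g. a sketch using the built-in enum (Enum.GetValues<T>() needs .NET 5 or later): rotate Sunday to the back and the underlying values never change.
[code]
using System;
using System.Linq;

class Reorder
{
    static void Main()
    {
        // Sort Sunday (0) after Saturday (6); everything else keeps
        // its natural order.
        DayOfWeek[] mondayFirst = Enum.GetValues<DayOfWeek>()
            .OrderBy(d => d == DayOfWeek.Sunday ? 7 : (int)d)
            .ToArray();
        Console.WriteLine(string.Join(", ", mondayFirst));
        // Monday, Tuesday, ..., Saturday, Sunday
        Console.WriteLine((int)mondayFirst[0]); // still 1 (Monday)
    }
}
[/code]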
Anonymous No.106511065 >>106511070
>>106511022
See
>>106511021
Anonymous No.106511066
>>106511034
>Aren't these LLMs predicting tokens? I don't think they can synthesize new words, can they?
They can synthesize new "words" out of tokens, which would be the same way they synthesize existing words.
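A toy illustration with made-up token ids and splits (real tokenizers are model-specific): if the sampler happens to emit the pieces back to back, a word nobody ever wrote falls out.
[code]
using System;
using System.Collections.Generic;

class Subwords
{
    static void Main()
    {
        // Hypothetical subword vocabulary.
        var vocab = new Dictionary<int, string>
        {
            [101] = "Mon",
            [102] = "Tues",
            [103] = "wednesday",
        };
        // The model only ever picks ids; concatenation does the rest.
        int[] sampled = { 101, 103 };
        Console.WriteLine(string.Concat(Array.ConvertAll(sampled, id => vocab[id])));
        // Monwednesday
    }
}
[/code]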
Anonymous No.106511070 >>106511161
>>106511065
It was wrong the first time and it's wrong the second time. One more strike and you're out.
Anonymous No.106511097
>>106509236
>>106510443
btw week starting with sunday is a jewish thing that the US follows
Anonymous No.106511161 >>106511190 >>106511293
>>106511070
Except I'm NOT wrong about this. Learn how AI actually works before you comment in threads like this, please.
Anonymous No.106511190
>>106511161
But you are wrong about this and you have no idea how "AI" works. You will demonstrate this in your next post, by failing to substantiate your retarded claims in concrete ML terms (as opposed to babby's head canon)
Anonymous No.106511211
>>106506873
>No monwednesday
Incomplete.
Anonymous No.106511223
>>106496680
Fucking based Nguyen.
Anonymous No.106511293
>>106511161
>Except I'm NOT wrong about this.

This guy >>106496239 is not a machine learning specialist. He does not know anything about artificial intelligence and can't program a basic neural network if his life depended on it. Despite all that he guessed >>106496680 correctly based on an uneducated assumption and limited programming experience. He is 1000000x smarter than you. It's time for you to accept your shame and leave this thread.
Anonymous No.106511676 >>106514331
>>106505459
>There is no such thing as "imitating" intelligence.
but you're doing that right now
Anonymous No.106511732
>>106495725 (OP)
It does this to me all the time and at one point hallucinated "north calinois"
i fucking hate AI.
Anonymous No.106511764
>>106499226
think of LLMs as slot machines: there are a huge number of possible outcomes, some likelier than others. It cannot know what it can't know because it's all just statistics.
Anonymous No.106512339
>>106496680
Anonymous No.106514331 >>106514882 >>106521149 >>106521179
>>106510459
>Conscious experience is a prerequisite for understanding
How do you know that?
>LLM's do not have that
How do you know that?
Clearly you don't know what you don't know if you so flippantly make these statements as if they're a given.
>>106510842
>>106510855
Midwits. Why even post if you're just going to seethe?
>>106511676
More seethe. You don't even seem to understand what I was saying.
Let me explain it in a way your peabrain can understand. Imagine I write an algorithm to play chess. This algorithm is designed to imitate grandmaster play. Let's say this algorithm imitates grandmasters so well that it is actually as good as them in chess. It plays at a grandmaster level.
Would you argue such an algorithm is "not good at chess" because it is only "imitating" a good player?
No - such an algorithm would be a good player. It doesn't matter if the algorithm is only designed to imitate good play. If in the end it achieves its goal, then you can't dismiss it because of how it works.
It's the same with LLMs. They are designed to imitate humans. If they were able to do it perfectly, then they would be intelligent. However, the problem is not that they are designed this way, but rather they aren't able to do it well.
That's why this argument of "they're only predicting the next token" is so asinine. Because if they were able to perfectly predict the next token that a human would articulate, then they would be at human-level intelligence. The problem is that this task is very difficult and LLMs are not fully capable of achieving it.
Anonymous No.106514502
>>106504620
Lousy Smarch weather
Anonymous No.106514882 >>106514938
>>106514331
definitely a jeet
Anonymous No.106514935
>>106495725 (OP)
What's your temp, 1.75? Reminds me of setting the markov chain to 3.
Anonymous No.106514938 >>106515092
>>106514882
Now's the time to self-reflect about what other things in life you are wrong about
Anonymous No.106514942
>>106495725 (OP)
you fell for a memory hole psyop. jews wanted you to forget you had a third rest day. this AI was trained with leftover data from 2004 which wasn't amended to the approved status quo so it contains the forgotten weekday Monwednesday.

the entire world erased a rest day so you work harder. did you really think we had a 7 day week? the actual week is 8 days and if you find old books that weren't burned in the great reset you will realize, you're working an extra day for free because stock must go up.
Anonymous No.106514983
>>106495725 (OP)
nonono delete DELETE
YOU'RE DESTROYING THE US ECONOMY !
Anonymous No.106515034
>>106495725 (OP)
>Is this the true power of AI?
Unironically yes. What you're calling "AI" is basically just a "plausible word generator" thing, it's meant to generate things which appear correct, but it has no actual notion of truth or any concept of correctness. It can and will generate absolute bullshit even though it will also generate correct information as well.
Anonymous No.106515092
>>106514938
Why is your index finger bent?
Anonymous No.106515299
>>106496680
I'm fucking dying here lmao
Anonymous No.106515316
>>106505888
hey sam, rape any sisters lately or was that a one time thing?
Anonymous No.106515321 >>106515771
Is monfriday just wednesday?
Anonymous No.106515771
>>106515321
You're thinking of it like an average: (Mon + Fri)/2 = Wed.
Instead, think of it as a matrix or a function.
MonFri = Mon(Fri). Sunday is the first day of the week, so MonFri = 2(SunFri), or just 2(Fri), so it'd be more like next week's Thursday if you go off the Globohomo calendar
Anonymous No.106517106
>>106496680
Copy paste gone wrong.
Anonymous No.106517941 >>106518124
>>106506142
There may be a way to know if something isn't the answer without knowing the answer.
Anonymous No.106517959
>>106496680
Fuck it, redesign the calendar around this.
Anonymous No.106518124
>>106517941
We'd need that to work for everything though, and isn't that the P=NP problem at that point? If you can verify it's wrong quickly and easily, then you can keep doing that until you find something that isn't wrong.
Anonymous No.106518131
>>106495725 (OP)
>Why am I paying for this shit?
Yes. Why are you paying for this shit when you can get it for fucking free through a variety of methods?

Regardless, you shouldn't use this crap.
After trying a variety of AI tools for coding, it turns out no-to-minimal AI is still king.
At best I just have an AI chat in a window separate from my IDE, because for some issues it's faster than "googling" shit.
Anonymous No.106518138
>>106495725 (OP)
>Monwednesday
It's in Maruary.
Anonymous No.106518151 >>106518530 >>106521189
>>106511034
>Aren't these LLMs predicting tokens?
Yes.
>I don't think they can synthesize new words, can they?
Tokens are not necessarily mapped directly to words. It depends on the model.
However, in this case, the word was learnt as a word length token. There's a Github commit erroneously using "Monwednesday". See here: >>106496680
Anonymous No.106518530 >>106521189
>>106518151
>>106511034
Anonymous No.106521149
>>106514331
>Why even post
To let you know you lack the most basic ML knowledge. At least watch one of those 5 minute X-for-dummies videos before you shit out paragraphs of retarded opinions.
Anonymous No.106521179
>>106514331
>Let's say this algorithm imitates grandmasters so well that it is actually as good as them in chess. It plays at a grandmaster level.
>Would you argue such an algorithm is "not good at chess" because it is only "imitating" a good player?
I would correctly point out that this algorithm is neither intelligent nor "imitating" anything. It just plays chess.

>It's the same with LLMs.
Yep. They are neither intelligent nor imitating humans. They do nothing more than to predict probability distributions over tokens.
Anonymous No.106521189 >>106521206
>>106518530
I'm this one (>>106518151).
Are you saying I'm in the "it's just autocomplete" camp here? I'm not. I think you've misinterpreted my post.
Part of how it functions is word prediction, but saying it "just guesses the next word" is reductive to what it actually produces.
In my opinion, whether it's "thinking" or not is an irrelevant time-wasting question that is unfalsifiable.
Anonymous No.106521206 >>106521243
>>106521189
>Part of how it functions is word prediction, but saying it "just guesses the next word" is reductive to what it actually produces.
Explain in what way it's "reductive". If you can't, reflect on the fact that you don't understand how an LLM works or what the word "reductive" means and never post about it again.
Anonymous No.106521243 >>106521265
>>106521206
What I'm saying is that trying to reduce what it does to word prediction is a way of downplaying its usefulness, usually for political directionbrain reasons, but it's spread everywhere.
It overlooks the fact that the guess is trained on past data, so while it is probability based, it has a low loss rate; functionally it produces meaningful sentences that are helpful to my work and does more than what is implied by "guessing the next token".
For me it's a useful tool, whereas for the commenters against it it's a negative, at least in terms of their rhetoric if not anything more detailed, which they usually don't get into.
What do you think I don't understand about LLMs? You should give more detail in your posts if you actually want to talk about something.
Anonymous No.106521265 >>106521299
>>106521243
>trying to reduce what it does to word prediction
Notice how I correctly predicted your inability to explain how that factual statement is a "reduction".

>What do you think I don't understand about LLMs?
The absolute bare basics of what they do, exemplified by your calling the high-level functional definition of the model's core "reductive".
Anonymous No.106521299 >>106521334
>>106521265
>reductive to what it actually produces
I'm saying that the output is more than what is implied by "just text prediction", not that token prediction is not how it functions. Obviously that's how it functions, as long as we aren't talking about diffusion research models.
It's a difference between how it works and what it does that I'm driving at.
Anonymous No.106521334 >>106521444
>>106521299
>the output is more than what is implied by "just text prediction"
Then why did you use words you don't understand, instead of just spouting your self-refuting idiocy in its true form? Anyway... please explain: how can something produced precisely by next token prediction be "more than what is implied" by next token prediction? Are you mentally ill by any chance?
Anonymous No.106521444 >>106521469
>>106521334
>Are you mentally ill by any chance?
No, I'm mentally well.
Are you actually interested in talking about this or do you just want to get mad on the internet?
Anonymous No.106521469
>>106521444
>Are you actually interested in talking about this
About what? How you subjectively don't feel like token prediction should be able to do what it does? No, I'm not interested in talking about your subjective impressions. I'm interested in talking about how the way LLMs actually work (predicting the next token) neatly explains the unintelligent outputs they often produce.