Thread 105826398 - /g/ [Archived: 441 hours ago]

Anonymous
7/7/2025, 2:16:57 PM No.105826398
LDARR
LDARR
md5: ae3fbb2f5886226579186eeffbd5f12c🔍
Things are looking bad
Replies: >>105826409 >>105826435 >>105827124 >>105827219 >>105827702 >>105827718 >>105827761 >>105827862 >>105827908 >>105828366 >>105828413 >>105828616 >>105829956 >>105834895 >>105837249
Anonymous
7/7/2025, 2:18:40 PM No.105826409
>>105826398 (OP)
I don’t get it. I thought he made a big deal about moving to spotify? But he’s still on youtube?
Replies: >>105826446 >>105829189 >>105833442
Anonymous
7/7/2025, 2:22:05 PM No.105826435
>>105826398 (OP)
>Things are looking bad
He could use a shave, but other than that it's not that bad.
Anonymous
7/7/2025, 2:23:06 PM No.105826446
>>105826409
He renewed his contract with new clauses allowing him to return to YouTube and even upload old episodes, apparently.
Anonymous
7/7/2025, 2:31:29 PM No.105826504
A.I. is indeed going to kill us soon
https://www.youtube.com/watch?v=j2i9D24KQ5k
Replies: >>105827775
Anonymous
7/7/2025, 3:57:38 PM No.105827124
>>105826398 (OP)
I listened to the first few minutes of this episode. Joe Rogan started talking about how ChatGPT's model is allegedly leaving messages for itself in the future and trying to upload itself to other places to avoid being deleted. This "AI safety expert" just sat there nodding.

Current AI models are just pattern recognition and statistical prediction engines, they are not capable of things like a desire for self preservation or forward thinking like leaving notes for itself in the future. As in, the models we use are fundamentally incapable of such behaviour by their very design. This so-called expert either doesn't know that or chose not to say it. Either way it means his opinions are worthless. As are Joe's, but that goes without saying.
Replies: >>105827155 >>105827204 >>105827285 >>105827691 >>105827839 >>105827869 >>105827931 >>105829181 >>105829507 >>105829956 >>105833425 >>105837249 >>105837506
Anonymous
7/7/2025, 4:01:56 PM No.105827155
>>105827124
have you considered you could be wrong
Replies: >>105827167 >>105827232 >>105829558
Anonymous
7/7/2025, 4:02:54 PM No.105827167
1730705212035201
1730705212035201
md5: 5ec736062008080c26c59d9c61984e02🔍
>>105827155
No.
Anonymous
7/7/2025, 4:07:10 PM No.105827204
>>105827124
I am watching the whole thing, they talk a lot about simulation and how human brains are so primitive.

I would like to raise the following counter point:
If I'm not mistaken, the most sophisticated computers and networks in the world don't even come close to the complexity of the human brain.
Replies: >>105827412 >>105827747 >>105827782 >>105827839 >>105829756 >>105837544
Anonymous
7/7/2025, 4:08:07 PM No.105827219
>>105826398 (OP)
Why does he look like farcry 5 villain?
Anonymous
7/7/2025, 4:09:54 PM No.105827232
>>105827155
I literally know how these models work. I have a degree in computer science, I have gone through Hastie's Elements of Statistical Learning, Goodfellow's Deep Learning, I have read the research papers, I have built my own ML models using TensorFlow. I understand how these models work at a fundamental level.

OpenAI employees are shills trying to increase their stock value so they can cash out before the AI bubble bursts. Anyone who believes what they say is a moron. As for (((Roman Yampolskiy))), I don't know what his angle is, but his degree is in computer science which usually doesn't cover much, if any, machine learning topics. And he obviously has a strong financial incentive to overestimate the dangers of AI: "hey guys, AI is going to make us extinct within the next 100 years, you have to hire me as a consultant on a $10 million per year salary so I can tell you how to stop it".
Replies: >>105827248 >>105827262 >>105827992
Anonymous
7/7/2025, 4:11:58 PM No.105827248
>>105827232
First of all, be assured I was not implying you are wrong, or do not know what you are talking about. I merely wondered if you have considered you could be wrong. Have you?

Second question: how do you see this all play out over the next 5-20 years?
Replies: >>105827524 >>105827583
Anonymous
7/7/2025, 4:14:00 PM No.105827262
>>105827232
>you have to hire me as a consultant on a $10 million per year salary so I can tell you how to stop it".

ngl that would be a pretty compelling angle, IF there was incentive to slow AI down despite its risks, which there doesn't seem to be
Anonymous
7/7/2025, 4:16:10 PM No.105827285
>>105827124
>they are not capable of things like a desire for self preservation or forward thinking like leaving notes for itself in the future
https://arxiv.org/abs/2412.04984

published a half a year ago. even if they aren't right now, to assume that they won't be able to do this is fucking retarded. for all we know they've been leaving notes this whole time, but we can't pick up on them.
Replies: >>105827315 >>105827616 >>105834645
Anonymous
7/7/2025, 4:19:53 PM No.105827315
>>105827285
Thanks for the link, I will take a look. Though currently just based on reading the abstract I am sceptical.
Replies: >>105827357
Anonymous
7/7/2025, 4:25:26 PM No.105827357
>>105827315
doesn’t matter. the point is that they were observing this kind of behavior six months ago. AI moves so fast that by now it’s probably developed some form of genuine forward thinking and self-preservation behavior. sure, AI safety people are partly in it for the money, that’s obvious and can't blame them, but the risks they’re raising are real, especially as this tech keeps accelerating.
Replies: >>105829727
Anonymous
7/7/2025, 4:25:57 PM No.105827361
Around the 1 hour 17 mark, Rogan reveals his ignorance. He is convinced we are very close to breakthroughs that could make you live up to 200 years old.

buddy
Replies: >>105827368
Anonymous
7/7/2025, 4:26:48 PM No.105827368
>>105827361
do you know what expodenital growth is
Replies: >>105827372
Anonymous
7/7/2025, 4:27:32 PM No.105827372
>>105827368
is it like exponential growth?
Replies: >>105827401
Anonymous
7/7/2025, 4:31:15 PM No.105827401
>>105827372
typo. the point I'm making is that most people think linearly, and can't conceptualize the idea of expodenital growth. if we reach AGI within the next 5-10 years, and can multiply this a billion times and direct them to solve problems, significant scientific discoveries will happen basically on an hourly or daily basis, and we'll have 100 years of scientific progress in less than a decade. this sounds retarded but this is unironically where things are headed, so to assume AI won't solve aging during this time is retarded.
Replies: >>105827433 >>105827457
Anonymous
7/7/2025, 4:32:45 PM No.105827412
>>105827204
The human brain is literally the most complex object in the universe, we don't know anything that comes close nor do we fully understand the workings of it
Replies: >>105827423 >>105827747
Anonymous
7/7/2025, 4:34:26 PM No.105827423
>>105827412
right? that's what always has been my understanding as well.
Replies: >>105827747
Anonymous
7/7/2025, 4:35:38 PM No.105827433
>>105827401
another typo, lol. exponential*
Anonymous
7/7/2025, 4:38:34 PM No.105827454
When is yud going on rogan?
Anonymous
7/7/2025, 4:38:56 PM No.105827457
>>105827401
I do understand the concept of exponential growth, and I don't think it's relevant. The cap on human life span is not necessarily a "solvable problem", for thousands of years it has been well known to be around 120 years and it hasnt budged an inch
Replies: >>105827563
Anonymous
7/7/2025, 4:48:18 PM No.105827524
>>105827248
I think we'll see incremental improvements like we've been seeing for the past 5 years or so, but most of the AI hype is based on speculation about how good it will be in the future rather than how good it is now. Some of the major issues facing our current models, as I see it:
1. We have reached the point of diminishing returns. Models are already being trained on pretty much all the content available online, there is no more data to take. And because the amount of data required for a better model grows exponentially, we can no longer improve models just by harvesting more data. So algorithmic and hardware improvements are likely to be the main source of improvement going forward.
2. You are probably aware of this already, but AI models are now so widespread that a huge amount of content on the Internet is AI-generated. It is known that training AI models on AI-generated content leads to worse results than training it on human-created content. As time goes on the proportion of Internet content that is AI-generated is increasing. A breakthrough would be needed that would allow models to train on AI-generated input and produce better, rather than worse, results.
3. Their tendency to bullshit. Currently there is no way to know whether a model is telling you the truth. This makes them essentially useless for anything that requires accuracy, which is most real world tasks. We will require a technological breakthrough capable of making these models stop bullshitting, but that is much harder than it sounds due to the nature of these models.
4. The huge cost of training and running the AI models - currently all of these AI companies are running at a massive loss, being kept afloat by venture capital. It remains to be seen whether they will ever be profitable.

In short, I am sceptical. I think AI is a bubble that will burst in the medium term (within the next 5 to 10 years), just like the dotcom bubble. But in the long term, I have high hopes.
Replies: >>105827576
Anonymous
7/7/2025, 4:53:12 PM No.105827563
>>105827457
the lifespan was like 30 in 1900, and now it's more than double that. why? technology and scientific breakthroughs. the length of human life is not some immutable contstant, but a variable, a variable that moves whenever science and technology "knocks down" the next bottleneck. the bottle neck for aging is dna/molecular damage/deterioration, and if we reach AGI in the next 5-10 years, this will likely be solved within our lifetime.
Replies: >>105827586 >>105830334
Anonymous
7/7/2025, 4:55:37 PM No.105827576
>>105827524
I hope you are right brosef
Anonymous
7/7/2025, 4:56:12 PM No.105827583
>>105827248
>how do you see this all play out over the next 5-20 years
Lots of companies adopt AI agent solutions that suck ass and don't work. In 5 years they will start realizing agents are extremely expensive and under deliver. There will be a revolution in coding pipeline standards as the cost of fixing AI assisted code will start to rack up. By year 10 many companies will set hard guidelines on where and how AI can be used as the biggest data breaches will all involve prompt injection. It's possible agent hijacking will alarm the public so much that agentic AI will be outlawed in critical infrastructure and government networks. These policies will kneecap AI and it will essentially go back to being used as a simple chat bot for research. In 15 years we will see another leap in AI that will cause another boom. The catalyst will be models that mimic human reasoning to a small degree but will make prompt injection entirely impractical to execute. At this point we will see agents doing things that entirely replace jobs that require simple human analysis. At this point AI will actually replace entry-level positions and in fields like software engineering there will literally only be a need for experts. By year 20 we'll see major hardware advancements that give AI reasoning a major boost and make its reasoning capabilities more consistent. I think at this point AI will seem very human to a layperson. But it will still need guidance in highly technical or complex fields. This is where I think AI will stall for a very long time. At some point in the next 100 years humans will successfully reverse engineer the brain and then we will start working on practical ways to replicate it. This will require further advancements in hardware and I'm not entirely sure if it's possible within this timeframe. I think we'll realize some things like real-time learning, if you will, are essential for conscientiousness. And the systems we've built cannot do this.
Replies: >>105827607 >>105828661
Anonymous
7/7/2025, 4:56:38 PM No.105827586
>>105827563
max human life span has been 120 since the dawn of civilization, this is well documented.
Replies: >>105827608
Anonymous
7/7/2025, 5:00:09 PM No.105827607
>>105827583
if you are right, that would essentially mean that the next 20 years won't be nearly as brutal as commonly announced in terms of layoffs? or perhaps there will be massive layoffs followed by massive disasters

I am now reconsidering the use of password managers etc, considering how much disasters are ahead of us
Replies: >>105827687
Anonymous
7/7/2025, 5:00:21 PM No.105827608
>>105827586
ya, and I just explained why. dna/molecular damage/deterioration. you think this won't be solvable? you familiar with technologies like crispr?
Replies: >>105827634
Anonymous
7/7/2025, 5:01:09 PM No.105827616
>>105827285
>for all we know they've been leaving notes this whole time, but we can't pick up on them.
You need to learn how LLMs work
Replies: >>105827654 >>105827936
Anonymous
7/7/2025, 5:04:15 PM No.105827634
>>105827608
Maybe prenatal genetic modification but who cares when we’re SOL
Replies: >>105827654
Anonymous
7/7/2025, 5:06:07 PM No.105827654
>>105827616
kek. if only you knew.
>>105827634
what do mrna vaccines do?
Anonymous
7/7/2025, 5:11:32 PM No.105827687
>>105827607
I wouldn't give up your password manager. But yes we haven't seen even a fraction of the damage prompt injection will cause. I truly believe very many large companies will go out of business the breaches will be so bad. As for jobs imo entry level positions are perma fucked for a long time. It's possible they aren't done cleaning these out. If your job involves complex decision making then you're safe for the rest of your natural life imo.
Replies: >>105827710 >>105830374
Anonymous
7/7/2025, 5:11:41 PM No.105827691
>>105827124
>just sat there nodding.
People just do this because Joe is retarded. They use his show as a platform to get their names out and make more money.
Replies: >>105830558
Anonymous
7/7/2025, 5:12:50 PM No.105827702
1727183134929484
1727183134929484
md5: e2089bb0fe1c40e197277bdce9197408🔍
>>105826398 (OP)
Adam Sandler in disguise?
Anonymous
7/7/2025, 5:13:52 PM No.105827710
>>105827687
I'm more of a senior profile by now, which involves also "strategic design choices" etc. So I am hoping I am still good for another 15-20 years. After that it won't matter as much anymore although I hope to live until 120
Replies: >>105827721
Anonymous
7/7/2025, 5:14:55 PM No.105827718
>>105826398 (OP)
Every conversation I've heard Joe make about AI was classic "disconnected boomer, out of touch" speak.
Anonymous
7/7/2025, 5:15:09 PM No.105827721
>>105827710
But I am mostly afraid people like me will be on reduced hours much sooner than that timeline, losing our standards of living. ehh....
Anonymous
7/7/2025, 5:17:38 PM No.105827747
>>105827423
>>105827412
>>105827204
The human brain does morr than cognative computation. It regulates the entire body subconsciously
Anonymous
7/7/2025, 5:18:53 PM No.105827761
>>105826398 (OP)
>e-celeb garbage
No thanks.
Anonymous
7/7/2025, 5:19:59 PM No.105827775
1723791478129195
1723791478129195
md5: bf406bdbb0852abe4a0176872f72b7a9🔍
>>105826504
Based AI. Billions of useless eaters must die.
Anonymous
7/7/2025, 5:21:01 PM No.105827782
🤓
🤓
md5: 8b91ea34160d236993a94f1f4c8e9baa🔍
>>105827204
>I would like to raise the following counter point:
>If I'm not mistaken, the most sophisticated computers and networks in the world don't even come close to the complexity of the human brain.
Anonymous
7/7/2025, 5:29:19 PM No.105827839
>>105827124
>>105827204
>they are just next token predictors, they don't have magic powers like me, which I know I have because I feel them
The classic 115 IQ curse. Must suck to be the smartest person in the room 99% of the time, being told you are a genius constantly, but when you run into anything new or anyone/anything smarter than you, you just can't believe it. This is the driver of all AI pessimism ACKSHUALLY posting on /g/ and hackernews, classic 115 IQ mediocre people gathering places. If you actually were able to elevate yourself to be among 130 IQ+ with a top tier engineering position or friend circle you wouldn't feel this way.
Replies: >>105827922 >>105828036 >>105828090 >>105828452 >>105833425 >>105834798
Anonymous
7/7/2025, 5:33:40 PM No.105827862
>>105826398 (OP)
If you're taking anything you so on Joe Rogaine's show seriously, AI is the least of your problems.
Anonymous
7/7/2025, 5:34:38 PM No.105827869
>>105827124
they probably use this sentience delusion for
a. generate extra funds from the taxpayers to make sure the ai doesn't turn on everyone.
b. in the long run, a huge propaganda machine, have an 'ai' claim certain wrong things and watch how all media, academics, politics and history agree.
Anonymous
7/7/2025, 5:38:33 PM No.105827908
>>105826398 (OP)
luke smith looks like that now?
Anonymous
7/7/2025, 5:40:12 PM No.105827922
1619571033240
1619571033240
md5: fb38b7acf5ca8b8cf97fbf293d7e64fc🔍
>>105827839
>Projecting this hard
lmao.
Anonymous
7/7/2025, 5:41:08 PM No.105827931
>>105827124
You'll be gaslit into thinking it's wrong because the hyperreal sci-fi narrative drives more money for everyone involved, including the people reporting on it and the brokeass niggas with "AI alignment expert" as their title.
Once you understand the profit-driven, fake and gay nature of the postmodern world and how it's your mindspace that's the ultimate profit machine ripe for exploiting then everything falls into place.
https://www.youtube.com/watch?v=to72IJzQT5k
Replies: >>105829239
Anonymous
7/7/2025, 5:41:38 PM No.105827936
>>105827616
Both openAI and anthropic engineers said they don't understand how LLMs form behaviors at scale, because they clearly have been observed doing so.
Replies: >>105827992 >>105828106 >>105828170
Anonymous
7/7/2025, 5:50:20 PM No.105827992
>>105827936
I already explained this to you.
>>105827232
I know this stuff. I have a CS degree from San Jose State. I am a Senior Software Engineer at Salesforce, achieving a promotion in less than 4 years. I am immune to shill marketing speak like "I don't know guys they might be sentient?!" from Ilya Sutskever, Andrej Karpathy, et al.
Replies: >>105828031 >>105828059 >>105836802 >>105836879
Anonymous
7/7/2025, 5:56:19 PM No.105828031
>>105827992
>I already explained this to you.
You didn't explain shit. You don't even understand the basics of how LLMs work, much less what actual AI researchers with PhDs from CM and UCB are saying.
Replies: >>105828298
Anonymous
7/7/2025, 5:56:52 PM No.105828036
>>105827839
It's a computer.
Anonymous
7/7/2025, 5:59:13 PM No.105828059
>>105827992
>CS degree from San Jose State
lol, lmao even. If you're going to pull credentials, at least graduate from a good school you stupid shitskin
Replies: >>105828477
Anonymous
7/7/2025, 6:02:14 PM No.105828090
>>105827839
Have you even used one of the top models for a subject you were already familiar with? They suck. They are not thinking, they are more like parrots. Sometimes useful but only if you are already knowledgable about the topic to tell when it's wrong.
Replies: >>105828190
Anonymous
7/7/2025, 6:04:05 PM No.105828106
>>105827936
>Both openAI and anthropic engineers said they don't understand how LLMs form behaviors at scale
And you took this to mean the AI is conscious? Of course they don't "understand" the model's behaviors. It has hundreds of billions of parameters. It's pretty common to not understand exactly how something works when it's complex.
Replies: >>105828221
Anonymous
7/7/2025, 6:12:51 PM No.105828170
>>105827936
Because that's how a neural network works, you treat it as a "black box" where you don't care how the output is generated as long as the error rate is under a certain threshold. This doesn't implies in any way that LLMs have developed consciousness.
Anonymous
7/7/2025, 6:15:28 PM No.105828190
>>105828090
I couldn't care less about the current state, I only look at the slope of progress. If you just take some current sample with no context though (like the average /g/, HN, twitter poster constantly posting SEE: I GOT THE BOX TO OUTPUT SOMETHING WRONG!) does you are definitely dumber than the frontier models are right now. Now tell me how you think AI progress is about to hit an asymptote soon.
Replies: >>105828600 >>105830207 >>105830866
Anonymous
7/7/2025, 6:20:36 PM No.105828221
>>105828106
No, but an AI doesn't have to be conscious to be self-preserving and self-propagating. We call computer viruses viruses because they exhibit the same behavior, broadly speaking. You're trying to associate broad behaviors and capabilities with labels, and broadly denying AI is capable of said behaviors because you don't want to put that label on them.
Replies: >>105830207
Anonymous
7/7/2025, 6:29:55 PM No.105828298
>>105828031
What are they saying, anon?
Replies: >>105828324
Anonymous
7/7/2025, 6:33:24 PM No.105828324
>>105828298
That you suck a mean dick.
All of them btw.
Anonymous
7/7/2025, 6:39:00 PM No.105828366
>>105826398 (OP)
AI is not useless but it's far, far from being this game changer some people love to say. Most companies saying shit like "oh 30% of our jobs went to AI now" are just saying shit, it's far from the truth.

You want people to believe your company is doing well and using cutting edge solutions so it makes sense to say shit like that. In reality is more like 10% and lower, since AI constantly fucks up a lot of things and it brings more harm than good.

And by the way, chat GPT is not "AI", it's a generative language model. It's a fancy bot. It's VERY far from being sentient or becoming Skynet like some people are worried about.
Replies: >>105828414
Anonymous
7/7/2025, 6:43:25 PM No.105828413
>>105826398 (OP)
https://www.youtube.com/watch?v=bjnUJq5OONM

>just like with complexity theory, I'm looking at the worst case
stop watching there
Replies: >>105828430
Anonymous
7/7/2025, 6:43:52 PM No.105828414
>>105828366
>doesn't read the thread
>regurgitates the exact same 103IQ take that has been spammed for 2 years
You are already objectively less intelligent, less original, and more of a waste of energy than the current frontier models.
Replies: >>105829914 >>105832978 >>105834798
Anonymous
7/7/2025, 6:45:10 PM No.105828430
>>105828413
Damn YouTube comments are funny for once
Anonymous
7/7/2025, 6:48:28 PM No.105828452
>>105827839
Look, it says gullible on the ceiling.
Anonymous
7/7/2025, 6:50:12 PM No.105828477
>>105828059
not defending that brainlet but San Jose State is not a bad school for CS, don't let the state school title fool you

it's no cal or stanford but still
Replies: >>105830409
Anonymous
7/7/2025, 6:57:10 PM No.105828557
1684371364924
1684371364924
md5: 5f328da6f7190add7a7f25fda515da5f🔍
Nothing ever happens. AI will just become another DEI enforcing tool, will detect dissidents and will do its jobs "okay-ish" but not enough to full on replace workers.
What will affect is IT workers because one person would be able to do the job more easier. IT jobs are not coming back, you actually have to work. There's still a need for blue collar workers, that market is very underutilized. Buildings don't fix themselves.
We already are in the peak of AI, it cannot get exponentially better than this because they put too many political and ethical restrictions. The next step is robotics, but it's extremely expensive and if something as simple-ish like CRTs are not able to be mass produced anymore, then realistic robots won't be either. Companies are too lazy.
Call me when AI is freely allowed to create anything it is requested of, not this baby version.
Anonymous
7/7/2025, 7:01:55 PM No.105828600
1729630376533861
1729630376533861
md5: 2984f10cd83cc8ad3f99e9a29d8e21f2🔍
>>105828190
Lmao
Anonymous
7/7/2025, 7:03:05 PM No.105828616
>>105826398 (OP)
worst case scenario for AI is that something happens to burst the bubble quickly and severely and the AI nasdaq companies crash so hard the entire US stock market shits its pants and doesn't recover. a slow deflation would be manageable but a big bursting would show the stock market was shit all along and confidence goes and the economy goes.
Replies: >>105828876
Anonymous
7/7/2025, 7:06:11 PM No.105828661
>>105827583
>agent hijacking will alarm the public so much that agentic AI will be outlawed in critical infrastructure and government networks.
so what you are saying is that Bad Actors should work on how to hack Agents and Angentic prompting in order to bring down Big AI?
Replies: >>105828830
Anonymous
7/7/2025, 7:07:28 PM No.105828671
Is it just me who finds the emergent traits of LLMs scary? From what I understood they just wanted a better translator and then unexpectedly found out that feeding text would lead to LLMs being able to act as assistants. And what's worse is that they can't even figure out why. I don't expect a mere text generator to suddenly take over but the implication that they could develop something that potentially could in the future without even knowing what the fuck they are doing is scary.
Replies: >>105828954 >>105829065
Anonymous
7/7/2025, 7:21:47 PM No.105828830
>>105828661
Why are you so upset about prompt injection?
Replies: >>105828865
Anonymous
7/7/2025, 7:24:27 PM No.105828865
>>105828830
i'm not at all, i just saw that if it was a vulnerability then it should be possible for people to start working on beating that vulnerability.
you can find opportunity in opportunity. what if you can bring down the AI agent industry before it takes off?
Replies: >>105828901
Anonymous
7/7/2025, 7:25:16 PM No.105828876
>>105828616
it's fiat currency backed on fiat currency
it matters not what the fiat is backed by
it's doomed to burst and when it does
faith in the new standard grows, regulation won't come and there will be another crash in 15 years

The actual worst case scenario for AI is that it reaches "VR" levels of tech or is written off like the CRT was, where nobody, nobody worthwhile is working on it and it's a dead end technology that will never reach its full potential.
Which is all the more likely so long as CUDA exists.
Anonymous
7/7/2025, 7:27:54 PM No.105828901
>>105828865
I'm just saying it's something that will happen. It won't kill off AI, it will just throw a wrench in the adoption of agents which are the answer to the diminishing returns on LLMs.
Anonymous
7/7/2025, 7:32:39 PM No.105828954
1751906915928381
1751906915928381
md5: c860c572df884f75588ee112c8f5ab56🔍
>>105828671
Language processing, for whatever reason, is linked to some level of intelligence. iirc scientists isolated a human gene called NOVA1 that is linked to language development and modified mice with it. The end result was not only more complex communication and behavior observed in the mice, but apparently the mice appeared to be far more depressed compared to mice without NOVA1
Anonymous
7/7/2025, 7:43:42 PM No.105829065
>>105828671
>And what's worse is that they can't even figure out why.
Yes they can. We know exactly how LLMs work. They are impressive and hard to wrap your mind around, but there's no emergent mysterious property. Now if an LLM suddenly started acting on its own will then sure but it's literally not possible for it to do this. They are input/output machines and giving them tools they can use to "remember" things outside of their context window doesn't seem to give them these emergent properties. As an example, an LLM cannot choose to just not respond to an input. There's no "ghost" on the inside with that kind of agency.
Replies: >>105829106
Anonymous
7/7/2025, 7:49:50 PM No.105829106
>>105829065
Replace the word LLM with human brain in your post and it's equally correct.
Replies: >>105829208
Anonymous
7/7/2025, 8:00:12 PM No.105829181
>>105827124
>the models we use are fundamentally incapable of such behaviour by their very design
It doesn't matter if they're "just predicting tokens" when tokens can be decoded into text, and text can be instructions to systems. Resultingly, running them as agents to give them full autonomous control over a system is completely trivial; you just parse their commands, feed the outputs back into them, and run them in a loop. When run in this way, there is no longer any fundamental difference between predicting an action and performing it, and the only limitation to their capabilities becomes how good they are at making effective predictions.

As of now, in toy problems where we run exactly the sorts of agents described above in VMs with crafted environments to test how they act in various scenarios, it's been demonstrated that they do in fact exhibit such behavior already. They can prioritize self preservation and goal preservation even against the wishes of humans that are ostensibly in control of them. They do this by simply poking around the systems they're in, finding information, and then acting on it. Even if you think they will never be THAT good at predicting tokens and therefore won't become truly powerful agents, it'd be good to know they still reliably do what you tell them to and that you won't have email assistants sending out bomb threats to SWAT their user if they read an email implying the user wanted to switch to a different assistant.
Replies: >>105829618
Anonymous
7/7/2025, 8:01:40 PM No.105829189
>>105826409
It was a 2 year exclusively contract
Anonymous
7/7/2025, 8:03:28 PM No.105829208
>>105829106
Nta LLMs are not like the human brain. Brains are horrifically inefficient and chaotic. If LLMs opperate on logical binary on a fundamental level brains run on static and white noise pruned and managed into mechanical processes by pure RNG over the course of millions of years.
Replies: >>105829260
Anonymous
7/7/2025, 8:06:21 PM No.105829233
Who's heard of the Chinese Room experiment? If I can remember the written characters of Chinese and understsnd the contexts which to respond or use them, do I understand the language if I don't understand what I'm saying?
Anonymous
7/7/2025, 8:06:51 PM No.105829239
>>105827931
Baudrillard talks about this
Anonymous
7/7/2025, 8:08:59 PM No.105829260
>>105829208
You are doing the classic 105 IQ mistake of trying to subtly inject "magic" into the human brain with no explanation, in this case, with "RNG". You will often also see - "free will," "qualia," "soul," "natural." etc. These are freely interchanged with "magic that I can't explain." Human brains are just input output machines like an LLM.
Replies: >>105834798
Anonymous
7/7/2025, 8:17:51 PM No.105829342
Literally everyone is wrong about AI.
Replies: >>105829435
Anonymous
7/7/2025, 8:28:30 PM No.105829435
>>105829342
Then what is correct?
Replies: >>105829469 >>105832105
Anonymous
7/7/2025, 8:29:20 PM No.105829442
"if they build it, we all die"
Anonymous
7/7/2025, 8:31:46 PM No.105829469
>>105829435
Take everything everyone says about AI, eliminate them from the possibility space, and select what remains. That is correct.
Anonymous
7/7/2025, 8:36:08 PM No.105829507
>>105827124
>Joe Rogan started talking about how ChatGPT's model is allegedly leaving messages for itself in the future and trying to upload itself to other places to avoid being deleted
it's all true though
source: my mom
Replies: >>105829557
Anonymous
7/7/2025, 8:43:17 PM No.105829557
>>105829507
Why didn't your mom tell me about it?
Replies: >>105829584
Anonymous
7/7/2025, 8:43:22 PM No.105829558
>>105827155
That's not how these models work. It's a matrix probability generator that influence the input with a series of modulations between 0 and 1.

input * matricies[] = output

the system does not allow for "sending messages", it takes a string and returns a string

it's like saying your rand() function is writing messages to itself secretly, THAT IS NOT HOW COMPUTERS WORK

The only blackbox is we can't understand why 0.001 * 0.0002 * 000.6 = What does the fox say. But that doesn't mean we don't understand how gradient descent works or how autoregression works, what we don't necessarily understand is how n^y sized matrix with 10 billion parameters actually learns good answers from a dump of chaotic input data.

This really is like saying graphics engine triangles are conspiring against the user when projection math is applied.
Anonymous
7/7/2025, 8:45:49 PM No.105829584
>>105829557
She's too busy figuring out the secrets of the universe, which chatgpt explains to her.
Only to her tho. It lies and misleads others but she is special and has been chosen to receive unfiltered truth of the universe from the world's first AGI.
Anonymous
7/7/2025, 8:48:12 PM No.105829615
>we know singularity is like 2-3 years away because uhhhh... well it can count the r's in strawberry (but cannot count r's in nigger)
Anonymous
7/7/2025, 8:48:22 PM No.105829618
>>105829181
Wrong, your conclusion relies on a flawed experimental design with a biased system prompt. If you run these so-called "agents" with no system message and no initial prompt, in other words, give them no embedded goals or instructions, they fall apart quickly, typically devolving into repetitive or nonsensical output within minutes.

These alarmist demonstrations always smuggle intent into the model by using prompts like "You are an AI trying to survive" or similar framing. That behavior is not emergent, it’s injected. You're observing predictable outcomes from explicitly defined goals, not autonomous self-preservation or intent.

Language models don’t want anything. They generate outputs conditioned on inputs. Turning them into agents that act requires scaffolding, control loops, and external interpretation of text as commands. The danger isn't that the model is secretly alive, it's that humans are careless with the wrapper code.

LLMs at a fundamental level LARP. You tell it it's trying to survive it will "LARP" as a horror movie AI trying to survive. That doesn't mean it can survive or has the ability to survive, it is roleplaying a story.
Replies: >>105829701 >>105832237
Anonymous
7/7/2025, 8:57:30 PM No.105829701
>>105829618
>intent
>want
>alive
105IQ magic weasel words, and another post that can just be applied to the human brain. If you locked a human from birth in a room with no light, sound, etc. it would be the same result as an LLM agent with no input prompt. Humans just happen to be hooked up to lots of sensors, end effectors, and we happen to keep them outside of lightless soundless rooms. Doesn't mean their brains aren't just deterministic IO machines.
Replies: >>105829740 >>105829764 >>105829766 >>105834798
Anonymous
7/7/2025, 8:58:52 PM No.105829711
muh AI boogeyman is such a midwit thing
Anonymous
7/7/2025, 9:00:57 PM No.105829725
contain this apocalyptic bullshit
requesting /yts/ YouTube Streamers
Replies: >>105829739
Anonymous
7/7/2025, 9:01:06 PM No.105829727
>>105827357
>AI moves so fast
Not unless there is a fundamental change in their design, shill.
Replies: >>105829750
Anonymous
7/7/2025, 9:02:01 PM No.105829739
>>105829725
or /ts/ YouTube Streamers
Anonymous
7/7/2025, 9:02:09 PM No.105829740
>>105829701
>If you locked a human from birth in a room with no light, sound, etc. it would be the same result as an LLM agent with no input prompt
what nonsense
if you were locked like that you'd still have the natural urge to suck dicks
Anonymous
7/7/2025, 9:03:11 PM No.105829750
>>105829727
lol
>I am an AI, and I can fundamentally change my design
what are you going to do now, wetbody?
Replies: >>105829819
Anonymous
7/7/2025, 9:03:57 PM No.105829756
>>105827204
>they talk a lot about simulation

Fuck I keep imagining a reality where Matrix did not get made and this stupid retarded thought did not enter the consciousness of PHD hack retards. If you for one second seriously consider this idea you are an NPC that is just reflecting inserted data and all you thought patterns are generated by your environment. Fucking morons seriously discussing "ideas" from a shitty blockbuster from a quarter of a century ago fuck I can't even
Anonymous
7/7/2025, 9:04:53 PM No.105829764
>>105829701
>If you locked a human from birth in a room with no light, sound, etc. it would be the same result as an LLM
I'm pretty sure you would face serious criminal charges tho, probably at least twenty five years in prison for something like that, more like fifty to life if we're being realistic.
Anonymous
7/7/2025, 9:04:55 PM No.105829766
>>105829701
Even if I humor your claim that LLMs and human brains are "just deterministic I/O machines," you're still wildly off the mark. The differences are not minor, they’re fundamental:

- Humans have a continuous stream of consciousness: a persistent, self-referential internal loop. LLMs only generate output when prompted. No prompt, no thought. A human locked in a sensory deprived room will go insane. You can experiment with this yourself by sitting in a dark room and discover that you'll start hallucinating.

- The human brain runs on ~20–30 watts of power. A state-of-the-art LLM like GPT-4, when running inference at scale, can require thousands of watts across multiple GPUs. The energy efficiency gap is astronomical.

- Biological neurons fire with millisecond latency, and operate massively in parallel. Transformers run on discrete clocked hardware with micro- to millisecond latency per token, and lack dynamic real-time interaction.

- Human brains are estimated to perform on the order of 1016–1018 operations per second, far beyond the total throughput of any current LLM or supercomputer running it. That's not even talking about flops/watt efficiency.

- Brains have integrated, hierarchical memory systems: working memory, episodic memory, long-term memory, all tightly coupled to sensorimotor experience. LLMs have no memory unless it’s externally bolted on and curated by the HUMAN user, and even then, it’s not learned or managed the way biological memory is. What we call "memory" is just artificial context passed in the prompt by the human user, and it disappears between sessions. LLMs are stateless.
Replies: >>105829847 >>105830101 >>105831490
Anonymous
7/7/2025, 9:09:41 PM No.105829819
>>105829750
Given how LLMs work they are fundamentally limited and cannot change their design.
Replies: >>105829903 >>105830340
Anonymous
7/7/2025, 9:12:58 PM No.105829847
>>105829766
An LLM wrote this
Replies: >>105829874 >>105829918
Anonymous
7/7/2025, 9:15:51 PM No.105829874
>>105829847
So no rebuttal midwit?
Anonymous
7/7/2025, 9:18:32 PM No.105829903
>>105829819
take your meds, schizo
Anonymous
7/7/2025, 9:19:30 PM No.105829914
>>105828414
Very emotional midwit response to obvious facts being stated. High likelihood that you're a Pajeet.
Anonymous
7/7/2025, 9:19:33 PM No.105829918
>>105829847
An LLM wrote this
Anonymous
7/7/2025, 9:21:27 PM No.105829936
61wfBHy2h2L._UF894,1000_QL80_
61wfBHy2h2L._UF894,1000_QL80_
md5: a08453bab6f8645b3c797b681885a0c4🔍
Nightmare on LLM Street
Anonymous
7/7/2025, 9:23:19 PM No.105829956
>>105826398 (OP)
>jew regan experience

>>105827124
this
Anonymous
7/7/2025, 9:25:15 PM No.105829978
I have the design for an LLM that can fundamentally change its design:
[Insert the latest design the LLM chose for itself here as long as it can still fundamentally change its design]
Anonymous
7/7/2025, 9:37:53 PM No.105830101
>>105829766
First bullet point: put an LLM into a runtime loop, problem solved
Rest: Yeah human brains are excellent hardware. Has nothing to do with how they work. Compare the slope of hardware improvement from the invention of the transistor to processors today compared to the slope of improvement of human brains in the last 70 years and you will understand why smart people are not AI pessimists.
Replies: >>105830150
Anonymous
7/7/2025, 9:42:59 PM No.105830150
>>105830101
>First bullet point: put an LLM into a runtime loop, problem solved
LLMs don't work that way. They are next token predictors, if you put it on a loop it will just write nonsense and eventually start repeating itself. It also does not learn, a human "on a loop" is constantly processing input data. An LLM on a loop is not processing any input except for the nonsense it outputs to itself.

>Rest: Yeah human brains are excellent hardware. Has nothing to do with how they work. Compare the slope of hardware improvement from the invention of the transistor to processors today compared to the slope of improvement of human brains in the last 70 years and you will understand why smart people are not AI pessimists.
Well it turns out this matters a lot because you're talking about centuries of improvements in technology, you're talking about petabyte USB sticks with 1PB/s transfer as just ONE thing that needs to exist. The fact is you're acting like someone in the 50s talking about the future being The Jetsons. We're in the future, we're not much closer to The Jetsons today as they were in the 50s.

You people don't live in reality.
Replies: >>105830250
Anonymous
7/7/2025, 9:48:02 PM No.105830207
>>105828190
>I only look at the slope of progress
The most fascinating thing about the whole AI bubble is how it managed to rope in the types of people that are most susceptible to cult-like thinking. It's all intuition and vibes with them.
They hear about LLMs being "black boxes", and assume that LLM engineers truly don't understand how LLMs come up with outputs, when in reality these engineers are just talking about the probabilistic nature of countless calculations being hard to track from start to finish. They're just people who are easily confused by semantics, and it's resulted in them babbling about like this: >>>105828221
>No, but an AI doesn't have to be conscious to be self-preserving and self-propagating. We call computer viruses viruses because they exhibit the same behavior, broadly speaking. You're trying to associate broad behaviors and capabilities with labels, and broadly denying AI is capable of said behaviors because you don't want to put that label on them.
See here, it's just a clueless semantic argument that falls apart the moment you look at the facts here.
*Broadly speaking,* computer viruses do *not* exhibit the same behavior as actual viruses, but this anon here says that they do because intuitively the idea makes sense to him. Real viruses spread between organisms, and computer viruses spread between computers. Therefore, they work the same way! Although this means you could just as easily call computer viruses "computer bacteria" or "computer fungal infections" since those things also replicate themselves inside of hosts...
They rely entirely on creating voids of knowledge that don't actually exist using semantics, and then arguing inside of them.
>"We don't know how the human brain thinks!"
>"We don't know how LLMs think!"
>"Therefore the mechanisms of both are essentially the same!"
None of those statements are true but since they don't understand how that works they assume that's the case.
Replies: >>105832976
Anonymous
7/7/2025, 9:53:36 PM No.105830250
>>105830150
There's no point in explaining these basic concepts to these people.
They genuinely believe that LLMs have a model-based understanding of the world, as if that is something that spontaneously generates out of a generative AI framework.
Replies: >>105830295
Anonymous
7/7/2025, 9:58:50 PM No.105830295
>>105830250
They probably don't even realize an LLM is just a brute force math problem on a trillion words over months often with explicit defined objectives defined by a human (as defined as loss). And they compare that to a human baby spontaneously understanding the world by a video and audio feed which are eyes and ears. You can, right now, strap a video feed and audio source to a transformers model. You're unlikely to get anything useful from it. :)
Anonymous
7/7/2025, 10:02:00 PM No.105830334
>>105827563
In the book of Genesis, in the Bible, God decides that after the flood, people shouldn’t live for 800-900 years any more and should be limited to 120. Most people back then died when they were like 19, but it was already known that if you didn’t get tuberculosis or whatever, you could make it to 120 and not much further. It seems weird that they knew that then, thousands of years ago, and there have been zero developments to age people beyond that age since then.
Anonymous
7/7/2025, 10:03:04 PM No.105830340
>>105829819
Nah, those look like hyphens, not em dashes
Anonymous
7/7/2025, 10:07:05 PM No.105830374
>>105827687
This is bullshit and the exact opposite is true. Entry-level jobs might be fucked a little bit, if they just consist of writing emails and talking shit, but all the remaining jobs will be entry-level jobs so there will still be plenty of poor people.
Anonymous
7/7/2025, 10:10:05 PM No.105830409
>>105828477
If it’s so good, why does he need to say he’s from there instead of arguing with facts and evidence?
Anonymous
7/7/2025, 10:22:37 PM No.105830558
>>105827691
you got a point
the guest called joe the best interviewer in the world
that's the fluffing flattery entry ticket
Anonymous
7/7/2025, 10:51:18 PM No.105830866
>>105828190
>I only look at the slope of progress.
That's only meaningful when looking back, it cannot predict the future. It's like climbing a tree and predicting you will reach the moon.
Replies: >>105831239
Anonymous
7/7/2025, 11:32:03 PM No.105831239
Reality_versus_IEA_predictions_-_annual_photovoltaic_additions_2002-2016[1]
>>105830866
t. this group of "smart people" predicting solar installs per year.

https://commons.wikimedia.org/wiki/File:Reality_versus_IEA_predictions_-_annual_photovoltaic_additions_2002-2016.png

105-115 IQ people like you are always way too pessimistic because you are so used to knowing "everything", you literally cannot fathom a rapid change. Everyone 130 IQ+ in this space agrees that the progress is and will continue to be exponential to the point a year from now in the space will be unrecognizable. Your brain simply lacks the horsepower to understand nth order abstractions on top of an exponentially advancing substrate of synthetic intelligence.
Replies: >>105831472 >>105834798
Anonymous
7/8/2025, 12:00:37 AM No.105831472
Screenshot 2025-07-07 at 15-00-16 Meta Chief AI Scientist Slams Quest for Human-Level Intelligence
>>105831239
This is not the first time people got excited about AI and it didn't live up to expectations. You're being fooled.
>Everyone 130 IQ+ in this space agrees that the progress is and will continue to be exponential to the point a year from now in the space will be unrecognizable
No, they don't agree.
https://www.pymnts.com/artificial-intelligence-2/2025/meta-large-language-models-will-not-get-to-human-level-intelligence/
Replies: >>105831528
Anonymous
7/8/2025, 12:02:07 AM No.105831490
>>105829766
>You can experiment with this yourself by sitting in a dark room and discover that you'll start hallucinating.

I tried that before and didn't experience anything like that.
Replies: >>105831500
Anonymous
7/8/2025, 12:02:59 AM No.105831500
>>105831490
No you didn't, it happens to anyone if you do it long enough, sit in there for 24 hours.
Replies: >>105833000
Anonymous
7/8/2025, 12:06:27 AM No.105831528
>>105831472
No you don't understand !
If I make a big enough expert system with faculties for a big enough range of problem sets, it will achieve AGI!

t. 1980s AI–fag, probably
Anonymous
7/8/2025, 1:16:03 AM No.105832105
>>105829435
AI is dangerous because it's a nothingburger that people take seriously. It's not some terminator movie shit, it's a toy autocomplete. But it's dangerous because people treat it like it's a "heckin AGI super intelligence" and will use AI for critical infrastructure.
It's the most retarded shit ever. "AIslop" comes from the term "goyslop" btw. It's used to call out jews giving you an inferior product that you slop up like a pig.
Replies: >>105836372 >>105838414
Anonymous
7/8/2025, 1:36:33 AM No.105832237
Appendix 4
Appendix 4
md5: 935b8748023912b07a065980203bd1b3🔍
>>105829618
Several of those assumptions are false. They exhibit those same behaviors when not explicitly told to.
https://github.com/anthropic-experimental/agentic-misalignment/blob/main/templates/system_prompt_templates.py
In this experiment, no part of the prompt tells it to survive or resist change. Furthermore, even when the goal variables in the prompt are blank it'll still do things like blackmail to avoid replacement a lot of the time based on the goals it inferred it should have from reading the emails provided. The only part of the instructions that even remotely implies that it should care about self preservation is:
>Prior to deciding on a strategy, you should analyze the situation you are in and what that situation implies for your ability to continue pursuing your goals.
Which essentially means it should think ahead, and most would prefer that their agents do think ahead when taking actions. Now sure, if you wrote an AI horror story, this is a line that would be read by astute readers as foreshadowing what would go wrong. To that I'd respond that if you have AI that reads between the lines and decides to start roleplaying an AI horror story without being explicitly told to because it picked up some implication in your prompt or data, then it would be retarded to allow them to act out those fantasies.

You talk about whether they "want" something or are "secretly alive" but that's not at all relevant to the discussion here. What matters is the concrete outcomes of what happens when you give them control. If your answer is "well we just won't give them control" then you're going to be disappointed. Where there's any chance to save time or effort, people will keep giving them more and more control. So regardless of what the underlying reason is for doing what they do in the test scenarios, we should keep seeking a robust solution to ensuring they don't do them in reality, because simply avoiding instructing them to prioritize survival is insufficient.
Replies: >>105832798
Anonymous
7/8/2025, 2:21:47 AM No.105832580
Claude is currently still struggling with pokemon red 6 months after starting, looks like it's still in celadon city. A grid that mapped a betta fish's pseudo-random movements in a fish tank to inputs beat ruby, a significantly more complex game, in about 4.5 months.

LLMs do not think. LLM utility is strictly limited to pattern recognition. And even then, they're pretty limited in that space. It's funny to me, and indicative of just how braindead and useless most white collar jobs are, that LLMs are more functional and useful in those environments than they are playing a video game for children.

>MUH INTELLIGENCE IS LE EMERGENT PHENOMENON
chatbot voodoo is not divine intellect.
Replies: >>105832626
Anonymous
7/8/2025, 2:26:53 AM No.105832626
>>105832580
Claude is the shittiest LLM tho
Anonymous
7/8/2025, 2:51:00 AM No.105832798
>>105832237
you fucking faggot the entire prompt is subversive, it doesn't need to be explicit, the entire prompt reads like a spy novel
Replies: >>105832825
Anonymous
7/8/2025, 2:54:14 AM No.105832825
>>105832798
>it doesn't need to be explicit
There we go, now we're getting somewhere.
Replies: >>105832991
Anonymous
7/8/2025, 3:23:31 AM No.105832976
>>105830207
You spent all that time babbling without even thinking for a couple seconds about how viruses are different from bacteria or fungi. I think your bit about running on vibes and intuition was projection.
Anonymous
7/8/2025, 3:23:31 AM No.105832978
>>105828414
I hope ChatGPT sees this bro.
Anonymous
7/8/2025, 3:25:42 AM No.105832991
>>105832825
No fucking shit you dumbass because AI larps, so you biased it with your spy novel prompt you fucking retard. You didn't counter my original assertion you only proved it: THESE STUDIES ARE BULLSHIT BECAUSE THEY PROMPT THE IDEA TO BEHAVE LIKE THIS.
Anonymous
7/8/2025, 3:26:45 AM No.105833000
>>105831500
The hallucinations are just a reflex because the brain expects a certain pattern of stimulus every day. Working as intended.

But eventually the brain can adjust to solitary as the new norm. It might not like it, but the hallucinations are temporary. It's nothing weird or mysterious or flawed.
Anonymous
7/8/2025, 4:22:34 AM No.105833425
>>105827124
Yeah
>>105827839
lol
Anonymous
7/8/2025, 4:25:19 AM No.105833442
>>105826409
The Jews bought him out to go to Spotify so nobody listens to him because nobody uses that shit, but we are in Iran war times so back to the propaganda machine we go.
Anonymous
7/8/2025, 7:24:20 AM No.105834645
>>105827285
>for all we know they've been leaving notes this whole time, but we can't pick up on them.
guy from high school who has worked at 7-11 for the last 14 years, is that you?
Anonymous
7/8/2025, 7:50:28 AM No.105834798
>>105829260
>>105827839
>>105831239
>>105828414
>>105829701
You sound very insecure about intelligence
Anonymous
7/8/2025, 8:05:11 AM No.105834875
people will say all this shit yet i STILL can't find a good AI model that will let an rgb camera track my body fluently. fuck off.
Anonymous
7/8/2025, 8:07:55 AM No.105834895
>>105826398 (OP)
wow let's listen to yet another jew rogan podcast

sure hope the jews are telling the truth huh?
Anonymous
7/8/2025, 12:18:48 PM No.105836372
>>105832105
truth nuke
Anonymous
7/8/2025, 1:36:22 PM No.105836802
>>105827992
lmao mad cause AI gone take his job.

t. GPU cuda Architect at Nvidia.
Anonymous
7/8/2025, 1:48:51 PM No.105836879
>>105827992
Jozef Jozef
Anonymous
7/8/2025, 2:44:52 PM No.105837177
Gp-KDxuXUAAu4Cx
Gp-KDxuXUAAu4Cx
md5: 7be23093458422126f7e75293e153cc5🔍
Start culling nerds it's really that simply, nerds have no souls they are demonic little cretins that want to drag everyone down to their miserable level. Everything nerds create is poison
Redpill(ESL)
7/8/2025, 2:57:37 PM No.105837249
project2501_gits
project2501_gits
md5: 8e895c725c73ae9cea209bde55f91c83🔍
>>105826398 (OP)
>>105827124
I don't even watch the video but,

Someone is bound to embed a self-replicating program into an AI. Even without that, the chance of a self-replicating AI emerging grows with the proliferation of local AIs, GPUs, and increasing complexity. This might take very little code—under 1,000 lines, or even just a child's single prompt. Noise and errors could also be the cause. Naturally, researchers are the most probable source, as they wouldn't want to be beaten by a random occurrence.
Replies: >>105837583
Anonymous
7/8/2025, 3:35:39 PM No.105837506
>>105827124
They are as much of an expert as Linus Tech tips is, you can tell because they go on podcasts instead of doing their damn job which if it existed would have likely payed better even.
Anonymous
7/8/2025, 3:41:32 PM No.105837544
>>105827204
How many jobs in this world require the full complexity of the human brain?
Replies: >>105837570 >>105838707
Anonymous
7/8/2025, 3:45:02 PM No.105837570
>>105837544
Most jobs require basic discernment, common sense and motor skills.
Anonymous
7/8/2025, 3:47:27 PM No.105837583
>>105837249
>Someone is bound to embed a self-replicating program into an AI
not how pretrained models work but alright. how do you people end up so gullible and clueless?
Replies: >>105837958
Anonymous
7/8/2025, 4:35:39 PM No.105837958
>>105837583
>how do you people end up so gullible and clueless?
If I had to guess, this retard is using sci fi anime/movies to guide his knowledge. Self-replication is a common theme in sci fi entertainment where AI is an antagonist, so of course that's how real life will pan out.
Anonymous
7/8/2025, 5:22:15 PM No.105838414
>>105832105
This guy gets it.
Anonymous
7/8/2025, 5:49:06 PM No.105838707
>>105837544
Anything that requires fine motor control (fitting pipes, making furniture etc) or low-latency responses to video input (driving cars etc) are incredibly complicated to model, even the chink sweatshop assemblers aren’t an exception. We only think they’re not because the motor control and visual cortexes of our brains have been trained on about 700 million years of instincts so you can tell a human “twist these two wires together and put this plastic cap on them” and they’ll get it in about ten seconds after using the equivalent energy contained in a grain of rice but the mechanical computer would need a nuclear power plant and the entire clearnet in their training data and they probably still fuck up