
Thread 215326406

319 posts 78 images /tv/
Anonymous No.215326406 >>215326490 >>215326556 >>215326611 >>215326727 >>215327577 >>215327772 >>215327845 >>215327850 >>215328320 >>215328411 >>215328655 >>215328658 >>215329361 >>215329382 >>215330521 >>215331009 >>215331061 >>215331179 >>215331421 >>215332171 >>215332371 >>215333016 >>215333143 >>215333388 >>215333991 >>215335435 >>215335459 >>215337482 >>215337898 >>215340776 >>215340963 >>215343043 >>215344422 >>215348321 >>215348739 >>215349149 >>215349255 >>215349414 >>215350018 >>215351429 >>215351928 >>215353434 >>215355118 >>215355362
Choose wisely
Anonymous No.215326449
If it tastes like tasty wheat, genuinely, and has no smell, I might actually prefer the protein slop.
Anonymous No.215326490 >>215327816 >>215328318 >>215340924 >>215346277 >>215347894 >>215355639 >>215355730
>>215326406 (OP)
Oatmeal looking stuff that contains all the vitamins you need>>>>>unhealthy restaurant red meat
Anonymous No.215326531 >>215326600 >>215328388 >>215328539 >>215337815 >>215343561 >>215346479 >>215347795 >>215349004 >>215349008
>AI was nice enough to program humans to be in their peak civilization after they tried to genocide them by blocking out the sun and ruining their world for everyone

Honestly the machines seem to have a better moral high ground here
Anonymous No.215326556
>>215326406 (OP)
some idiot put a leaf on the top plate
Anonymous No.215326600 >>215346431
>>215326531
>swallowing zion archive propaganda slop
failed the media literacy check
Anonymous No.215326611 >>215327811
>>215326406 (OP)
Yeah the writers weren't very subtle about the whole capitalist machines vs communist humans thing.
Anonymous No.215326619 >>215328380 >>215332254
Anonymous No.215326727 >>215326787 >>215332016
>>215326406 (OP)
as long as I'm manning my own ship, named bluepill buster, I'll take the snot slop.
Anonymous No.215326787 >>215332016
>>215326727
>bluepill buster
DAMN.
Anonymous No.215326847 >>215327058 >>215327708 >>215331359 >>215334018 >>215346344 >>215347447 >>215355658 >>215355689
Would Americans really betray the entirety of humanity for a steak?
Anonymous No.215327058
>>215326847
It was not about the steak.
Anonymous No.215327577
>>215326406 (OP)
Top: hedonism
Bottom: discipline
Anonymous No.215327708 >>215328087 >>215332505 >>215340595 >>215351970
>>215326847
More like: would someone really betray humanity to live in the peak of civilization forever? And the answer is yeah, obviously, most people would do the same for less. A significant portion of people would doom humanity just because they felt like they were treated unfairly and everyone else should be punished to make up for it.
Anonymous No.215327727
Whatever path takes me to and honors Jesus Christ as Lord and Savior.
Anonymous No.215327733
Why don't they ingest the snotty protein slop intravenously or something whilst they're in the Matrix? Then they could just enjoy whatever they crave.
Anonymous No.215327772
>>215326406 (OP)
Fake goymeat made of mushrooms or real food. Tough choice.
Anonymous No.215327811 >>215347548
>>215326611
What the fuck. How were the machines capitalists?
Anonymous No.215327816 >>215355252
>>215326490
Red meat has over 15 micro nutrients you can't get with plants.
Anonymous No.215327845
>>215326406 (OP)
The slop because it has everything you need. Red meat gives you ass cancer.
Anonymous No.215327850 >>215355279
>>215326406 (OP)
They really couldn't make protein bars in Zion?
Anonymous No.215328087 >>215336650
>>215327708
I would.
Can you imagine?
I always figured it was working on sort of a 10 year cycle, so every 10 years you'd revert from 2000 to 1990 without realizing it. In the deal, I assume that each time you'd start at 1990 as a very rich person. So yeah, that would kind of be a paradise, just living at the tail end of the American empire right before it all started going to shit in a big way.
Anonymous No.215328155 >>215328295 >>215355322
>The steaks featured in the iconic steakhouse scene from The Matrix were not made from real beef. Actor Joe Pantoliano, who played Cypher, does not eat beef and was uncomfortable with the idea of consuming it, so the production crew created prop steaks using shiitake mushrooms because they closely resembled real steak in appearance.
These mushroom-based props were used in wide shots, while real steaks were used for close-ups to maintain visual authenticity.
Despite the deception, Pantoliano did not realize the scene's significance during filming and initially thought it was just a minor sequence.
The crew members, unaware of the props' true nature, consumed the leftover mushroom steaks during lunch, adding a layer of irony to the scene where Cypher says, "I know this steak does not exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss".
This moment underscores the film's central theme of preferring a comfortable illusion over harsh reality.

Nothing is real :(
Anonymous No.215328295 >>215331429 >>215355467
>>215328155
>Actor Joe Pantoliano, who played Cypher, does not eat beef
what a faggot
Anonymous No.215328318 >>215331509 >>215341081 >>215348483
>>215326490
You can order ANYTHING retard, it's not just le steak. You can have ANYTHING.
Anonymous No.215328320 >>215328394
>>215326406 (OP)
what if the bottom one had alphabet pasta?
Anonymous No.215328380
>>215326619
should make rice pudding (Milchreis) again sometime
Anonymous No.215328388 >>215330816 >>215330820 >>215349226 >>215355931 >>215356091
>>215326531
>AI was nice enough to program humans to be in their peak civilization
Lol no, the Matrix was hell until the machines realized the "crops" died if they weren't kept in perfect 1990s conditions.
Anonymous No.215328394 >>215328469 >>215331167 >>215336517
>>215328320
I made my mom cry when I in my early 30s by spelling FAT CHINK DICKS with alphabet soup and calling here over to table to see it.
Anonymous No.215328411 >>215329084 >>215330563 >>215346457 >>215346562 >>215353847 >>215353899
>>215326406 (OP)
Rare meat is for weirdos.
Anonymous No.215328469
>>215328394
She might have thought it meant you're gay
Anonymous No.215328521 >>215332080 >>215343732 >>215353247 >>215355705
Eat the slop to stay alive, then plug into the simulation to eat steak and fuck the woman in the red dress who is way hotter than any woman you could get if you were inserted into the matrix. And if she's not hot enough for your taste then just customize her, which again you wouldn't be able to do in the matrix. Same thing with the food.
Anonymous No.215328539 >>215347795
>>215326531
Don't forget the only reason it ended up with humans in pods was due to humans mistreating the robot slaves and then losing the war.
Anonymous No.215328655
>>215326406 (OP)
high protein reduces your chances of inflammatory diseases
Anonymous No.215328658
>>215326406 (OP)
I don't eat live cows.
Anonymous No.215328743 >>215342280
>has his fork in the right hand and knife in the left
>the knife is a butterknife
I have rarely been so fucking upset in a cinema house before.
Anonymous No.215329084
>>215328411
That's not rare, vegan
Anonymous No.215329361
>>215326406 (OP)
What's with the spoon though? How come it's never addressed?
Anonymous No.215329382 >>215330849 >>215349028 >>215352964
>>215326406 (OP)
Trinity booty
Anonymous No.215329416
Eating is a waste of time. In the time you eat you could fight subhumans who want to keep us in the matrix
Anonymous No.215330521
>>215326406 (OP)
Choose wisely twice
Anonymous No.215330563 >>215331265
>>215328411
Medium well steak is the only option.
Anonymous No.215330816
>>215328388
Psst, so does western civilization
Anonymous No.215330820
>>215328388
The Matrix was literally Heaven in its 1st iteration, but humans rejected it because it didn't feel right. 2nd iteration was Hell.
Anonymous No.215330849 >>215331353
>>215329382
That's gotta be fake. How did she go from that Brie Larson ass lookin ass to the finely shaped leather ass goddess from Reloaded?
Anonymous No.215330858
One is fake and one is real. Would you really choose fantasy over reality?
Anonymous No.215330870
> life of servitude is so good bro, I get to eat STEAK
Steakfags are honestly the worst
Anonymous No.215330919
These movies are Mister Anderson's personal Matrix. Everyone he interacts with are programs there to keep him occupied and fulfilled.
Anonymous No.215330922
EH MORPH, SHE WAS A HORRRRR
Trooth Bombz "Black in the Dayz" from /asp/ !ntxFr6SCLM No.215331009 >>215331070 >>215332100
>>215326406 (OP)
There's no reason you couldn't find your own way to season the oatmeal.
Anonymous No.215331046
I think that slop looks good
Anonymous No.215331061
>>215326406 (OP)
Superpowers and the ability to download literally anything into my brain
Anonymous No.215331070
>>215331009
They get their food supplies from Zion, right?
Anonymous No.215331167
>>215328394
Absurdly based.
Anonymous No.215331179
>>215326406 (OP)
for me, it's olives and cheese
Anonymous No.215331265
>>215330563
Medium rare is truly the best, BUT if you go to a pub or a diner, the cowards they have cooking there always overcook. I’ll ask for rare in that case, because even if they get it right, rare is better than medium. Medium well is so tough you’d do better slow cooking it in a stew.
Anonymous No.215331336 >>215331472 >>215331905
I think perpetual late 70s would be a lot better than perpetual 90s
Small towns weren't completely atomized yet, fashion was better, the world still had a sense of adventure and mystery, the potential of a space age seemed to exist just on the horizon, Vegas wasn't gay yet, weed wasn't stupidly overloaded with THC (drugs in general were overall a lot safer), and the threat of nuclear annihilation made sure everybody was living for today
It just seems like a more groovy time; the 90s were super performative and rat-racey
Anonymous No.215331353
>>215330849
Posture
Anonymous No.215331359
>>215326847
Only if it be seasoned
Anonymous No.215331385 >>215331414
I prefer steak, but as an autistic man, I could easily eat a nutritionally comprehensive oatmeal every single day and never think about food again. I don't know why this is incomprehensible to normalfags, especially if it's supposed to be some minor sacrifice in the name of something much greater.
Anonymous No.215331414
>>215331385
Normalfags would get used to it pretty quick if they had no other choice, it's the fact that they have a choice that drives them crazy
Anonymous No.215331421
>>215326406 (OP)
Ignorance is bliss
Anonymous No.215331429
>>215328295
Same with this guy apparently
Anonymous No.215331472
>>215331336
20th century men's fashion peaked around 1978, that's for damn sure
Anonymous No.215331509
>>215328318
I bet you can't order any oatmeal looking stuff that contains all the vitamins you need
Anonymous No.215331905 >>215332220
>>215331336
seconded
The 90s were gay as fuck
Grunge sucked
Disco is King
[Spoiler]Although it would kind of suck to not have 80s King Crimson, 80s Billy Joel and Donald Fagen's solo career, but I guess I could live with it[/Spoiler]
Anonymous No.215332016
>>215326727
>>215326787
Bluebuck Breaker
Anonymous No.215332080 >>215341102
>>215328521
That's just a new guy trick ship crews pull on dudes they rescue from the Matrix.
You exit the Red Woman program to find everyone laughing at you for jizzing your pants.
Anonymous No.215332100
>>215331009
Redpillz dun season dey slop.
Anonymous No.215332171
>>215326406 (OP)
The slop has everything the body needs. You can get jacked like those apocalypse niggas eating that group and fighting Sam Altman.
Anonymous No.215332220 >>215332332
>>215331905
>Grunge sucked
>Disco is King
no shit motherfucker and the 90s knew that: https://www.youtube.com/watch?v=B58g5FN4jdY
Anonymous No.215332254 >>215343660
>>215326619
How do they make it so fluffy? Is it cooked IN milk?
Anonymous No.215332292 >>215346525
Top is embracing the current, modern society of Jewish hegemony where if you play along you get shiny trinkets and other nice things but lose your soul and sell out your people.

Bottom is toughing it out and doing the right thing, becoming a pariah in society raising a White family free of Jewish influence, homeschooling kids, etc.
Anonymous No.215332332
>>215332220
this sound like something Cyrus from Trailer Park Boys would listen to
Anonymous No.215332371
>>215326406 (OP)
>Neeeeooooo wake up your life is a lie you're a human battery come live with us in a cave and eat slop and have smelly orgies with africans
FUCK YOU MORPHEUS PLUG ME BACK IN
Anonymous No.215332446
that steak looks kinda dry, desu
Anonymous No.215332505 >>215332907 >>215332932
>>215327708
If I could save 50 people I like and live in paradise forever, I'd be mashing that button so hard, you don't even know.
Goodbye Africa! Goodbye middle east! Good bye ruling elite! Goodbye TV hosts, and influencers, and retards! Bon voyage, bon voyage! HAHAHAHAHAHAHHAHA!
Anonymous No.215332907
>>215332505
you'd do it for free because you have no principles
Anonymous No.215332932
>>215332505
You don't even know 50 people. Shut the fuck up.
Anonymous No.215333016 >>215335134
>>215326406 (OP)
I.D. on that steak?
Looks like picrel
Anonymous No.215333143 >>215340450
>>215326406 (OP)
imagine having to live in zion
Anonymous No.215333388
>>215326406 (OP)
Selling your birthright for a mess of pottage. Great morals there, Slick.
Anonymous No.215333991
>>215326406 (OP)
fake steak all day, nigga
Anonymous No.215334018 >>215336314 >>215340450 >>215345503 >>215347461 >>215348134
>>215326847
If this was the entirety of humanity, I wouldn't even ask for a steak
Anonymous No.215335036 >>215335090
People are murdering their family because they fell in love with a chatbot
Anonymous No.215335090 >>215335184
>>215335036
as they should
Anonymous No.215335134
>>215333016
I think the close ups are a filet mignon. The long shots and Cypher eating the steak are mushrooms dressed up to look like steak because Joey Panus don't teef on the beef.
Anonymous No.215335184 >>215335390
>>215335090
you sound like you're maybe 10 years old. how did you get here?
Anonymous No.215335390
>>215335184
its as if YOUVE never been here before or something
Anonymous No.215335435
>>215326406 (OP)
the top one looks real to me, and it identifies as top-tier steak so you have to believe it you bigot.
Anonymous No.215335459
>>215326406 (OP)
I choose life.
Anonymous No.215336314 >>215336557
>>215334018
>woken up from 90's computer hacker existence with mescaline and minidiscs to go hang out with sweaty negroes in a cave
Plug me back in kurwa
Anonymous No.215336517
>>215328394
Anonymous No.215336557 >>215336624
>>215336314
The most unrealistic part of the whole movie was the mescaline line. Mescaline tabs died out in the mid 80s. The "mescaline" tabs the Wachowski Sisters were taking in the late 90s and 2000s were most likely research chems.
Anonymous No.215336582
I like taking huge bites of steak so that will probably kill me one day lol
Anonymous No.215336624 >>215336747
>>215336557
Have you ever done drugs in your life? Mescaline is commonly available, who said anything about tabs? Stop acting like you know things that you don't, it's painfully obvious
Anonymous No.215336650
>>215328087
God, that would be wonderful.
Anonymous No.215336747 >>215336876
>>215336624
You're showing how young and ignorant you are about American drug culture. Fuck off and OD on NBombs, you dumb n.
Anonymous No.215336876
>>215336747
>ummm it's about culture sweaty let me mention some generic urban legend I read online while moving the goalposts
Ok retard, it's not cool to pretend (incorrectly) that you know about drugs, especially if you're old, it's just pathetic. Nbome isn't common and you sound out of touch as hell
you have to be a fed or a moron or both
Anonymous No.215337482
>>215326406 (OP)
>would you rather eat cum or a fake steak?
You people are fucked up
Anonymous No.215337815 >>215338689
>>215326531
It would have only truly been perfect if it was kept in the early 2010s at the latest.
Anonymous No.215337898
>>215326406 (OP)
Virtual steak or literal lumpy jizz. Let me think for a nanosecond.
Anonymous No.215338689
>>215337815
Mid-to-late 00s, actually.
Anonymous No.215339110
Wyatt stagg did a great analysis of this movie

Basically, cypher is completely right
Anonymous No.215340450
>>215333143
>>215334018
Harps are often associated with heaven. The Wachowskis chose such a strange depiction of heaven that Cypher decided he wanted to stay in hell.
Anonymous No.215340595 >>215340736
>>215327708
>peak of civilization
Anonymous No.215340736
>>215340595
You would have to share an apartment with 6 other guys with his job now.
Anonymous No.215340776
>>215326406 (OP)
where did they get the food outside of the matrix anyway? I only saw the first movie.
Anonymous No.215340924
>>215326490
>unhealthy red meat
goyim moment
Anonymous No.215340963 >>215341128
>>215326406 (OP)
>NEEEEEEOOO WAKE UP
>YOU MUST EAT THE SÒYSLOP AND DANCE WITH SWEATY NEGROES IN A CAVE
>YOUR COMFY CUBICLE OFFICE JOB IS FUCKING HORRIBLE
>WAKE UP
Anonymous No.215341081
>>215328318
You have to pay to get food in the matrix. The humans free of the matrix could easily simulate fine dining, hiking, and orgies with the red dress woman.
Anonymous No.215341102
>>215332080
I bet the women in the crew get fingered all the time by other crew members when they're in training simulations.
Anonymous No.215341128
>>215340963
>COMFY CUBICLE OFFICE JOB
It's boring as fuck and he can't even jack off because there's no work from home option in 1999. Dialup speeds would suck too.
Anonymous No.215342280
>>215328743
>the knife is a butterknife
Have you been to a restaurant that provides steak knives? I don't even think that's legal in a lot of places.
Anonymous No.215343043 >>215343651 >>215346626
>>215326406 (OP)
>Choose wisely
Anonymous No.215343561
>>215326531
There are many analogies for the current system we live in, but it largely fails in making the machines look like the bad guys because they're basically babysitting a humanity that has destroyed itself and the planet.
Humanity isn't freed from the Matrix at the end of the movies because there's no real alternative to it. They spoke about creating utopian versions of the Matrix and they all failed because the humans wouldn't accept a perfect version of the system, they crave oppression and dystopia.
In the real world we have the opposite problem, where the system imposes oppression and dystopia on the public.
Anonymous No.215343651 >>215344407
>>215343043
Meanwhile some other philosopher will say it's based to drink and eat all day
Anonymous No.215343660 >>215349949
>>215332254
Yes, it's milk rice porridge, very easy to cook and very tasty, I cook it every week, it does take like 35-40 minutes, but you will have a big pot that will last you 4 days.
Anonymous No.215343732 >>215344360 >>215346687
>>215328521
Why did machines put people into a boring mundane world, why didn't they give everyone a personal heaven, so they would never want to leave it?
Anonymous No.215344360
>>215343732
They tried that. Also, it's only a select few who actually wanted to "wake up."
Anonymous No.215344407
>>215343651
And his name was... DIOGENES
Anonymous No.215344422 >>215344522
>>215326406 (OP)
if that bottom slop was something I could buy for cheap and it gave me all the nutrition I needed, I would eat it every day at work and to fuel myself at work. I hate putting any thought whatsoever into what to eat for lunch for my faggot shit job
Anonymous No.215344465
If they had sandbox versions of the matrix at their disposal for training and such, why not recreate the fine dining scenario for them to perceive, while they are fed with the nutritious goo in real life?
Anonymous No.215344522
>>215344422
>workers of the world, arise!
lol, ok jesus
Anonymous No.215345503 >>215355662
>>215334018
that's a fair sample of humanity given it's the only and last human society on earth
Anonymous No.215345607
Why not both?
Anonymous No.215346277 >>215347579
>>215326490
>red meat
>unhealthy
Anonymous No.215346344
>>215326847
Plenty of people around the world would (and currently do) sacrifice everything for momentary pleasure. Most people fail the marshmallow test.
Anonymous No.215346431
>>215326600
Why would the Zion archive be pro-machine?
Anonymous No.215346457
>>215328411
Imagine being this much of a fag
Anonymous No.215346479 >>215347507 >>215347546 >>215355435
>>215326531
They’re apparently nice enough to artificially create oxygen to keep the humans alive too, because the lack of sunlight and low temperatures would mean that all oxygen-producing plants and algae would have died off long ago.
Anonymous No.215346525
>>215332292
Anonymous No.215346562
>>215328411
If you eat a steak that’s cooked anywhere beyond the outside being lightly singed, you may as well be eating shoe leather.
Anonymous No.215346626
>>215343043
Dude, FUCK YOU!!!!
Anonymous No.215346687
>>215343732
Smith explained that. That was v1.0 of the Matrix, but humans subconsciously noticed that something was off, and their minds rejected the simulation, causing a crash, and countless deaths from suddenly being disconnected. Turns out overcoming struggle and avoiding death is kind of what defines life.
Anonymous No.215347447
>>215326847
>humanity
>cave rave homosexual niggers
Anonymous No.215347461
>>215334018
I would personally hand Smith the keys and password to the Zion mainframe if this was gonna be my fate
Anonymous No.215347507 >>215347546
>>215346479
What the fuck, I've never thought of this!
Anonymous No.215347546
>>215347507
>>215346479
Plus the air will be full of methane from all the farting human batteries.
Anonymous No.215347548
>>215327811
>What the fuck. How were the machines capitalists?
01 Heavy Industries
Anonymous No.215347579 >>215347985 >>215348930 >>215349087 >>215349109
>>215346277
enjoy colon cancer, picky-eater kun
Anonymous No.215347795 >>215349266
>>215326531
>>215328539
It's amusing how both times it was humanity's fault
>Create robots that have consciousness, robots leave the humans to create their own society after being rejected by humanity
>Robots create a utopian society which will negotiate with humans and even respect them, humanity doesn't like to be surpassed by their own creations, start a war only to lose it in the most humiliating way possible, you lose so bad you try to ravage the planet, yet the machines endure and now use you as batteries
Anonymous No.215347894
>>215326490
red meat has all the vitamins you need
How Afraid of the A.I. Apocalypse Should We Be? No.215347946 >>215347967
Ezra Klein show transcript:
The researcher Eliezer Yudkowsky argues that we should be very afraid of A.I.’s existential risk.

Shortly after ChatGPT was released, it felt like all anyone could talk about — at least if you were in A.I. circles — was the risk of rogue A.I. You began to hear a lot of talk of A.I. researchers discussing their “p(doom)” — the probability they gave to A.I. destroying or fundamentally displacing humanity.

In May of 2023, a group of the world’s top A.I. figures, including Sam Altman and Bill Gates and Geoffrey Hinton, signed on to a public statement that said: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

And then nothing really happened. The signatories of that letter — or many of them, at least — raced ahead, releasing new models and new capabilities. In Silicon Valley, share price and valuation became a whole lot more important than p(doom).
How Afraid of the A.I. Apocalypse Should We Be? No.215347967 >>215348019
>>215347946
But not for everyone. Eliezer Yudkowsky was one of the earliest voices warning loudly about the existential risk posed by A.I. He was making this argument back in the 2000s, many years before ChatGPT hit the scene. He has been in this community of A.I. researchers influencing many of the people who build these systems — in some cases inspiring them to get into this work in the first place, yet unable to convince them to stop building the technology he thinks will destroy humanity.

Yudkowsky released a new book, co-written with Nate Soares, called “If Anyone Builds It, Everyone Dies.” Now he’s trying to make this argument to the public — a last-ditch effort, at least in his view, to rouse us to save ourselves before it is too late.

I came into this conversation taking A.I. risk seriously: If we’re going to invent superintelligence, it is probably going to have some implications for us. But I was also skeptical of the scenarios I often see by which these takeovers are said to happen. So I wanted to hear what the godfather of these arguments would have to say.
Anonymous No.215347985
>>215347579
Eating lean beef or steak once or twice a week is fine.
How Afraid of the A.I. Apocalypse Should We Be? No.215348019 >>215348046
>>215347967
>Ezra Klein: Eliezer Yudkowsky, welcome to the show.

Eliezer Yudkowsky: Thanks for having me.

>So I wanted to start with something that you say early in the book, that this is not a technology that we craft; it’s something that we grow. What do you mean by that?

Well, it’s the difference between a planter and the plant that grows up within it. We craft the A.I. growing technology, and then the technology grows the A.I.

The central, original large language models, before doing a bunch of clever stuff that they’re doing today, the central question is: What probability have you assigned to the true next word of the text? As we tweak each of these billions of parameters — actually, it was just millions back then — does the probability assigned to the correct token go up? And this is what teaches the A.I. to predict the next word of text.

Even on this level, if you look at the details, there are important theoretical ideas to understand there: It is not imitating humans. It is not imitating the average human. The actual task it is being set is to predict individual humans.

Then you can repurpose the thing that has learned how to predict humans to be like: OK, now let’s take your prediction and turn it into an imitation of human behavior.

And then we don’t quite know how the billions of tiny numbers are doing the work that they do. We understand the thing that tweaks the billions of tiny numbers, but we do not understand the tiny numbers themselves. The A.I. is doing the work, and we do not know how the work is being done.
How Afraid of the A.I. Apocalypse Should We Be? No.215348046 >>215348069
>>215348019
>What’s meaningful about that? What would be different if this was something where we just hand-coded everything and we were somehow able to do it with rules that human beings could understand, versus this process by which, as you say, billions and billions of tiny numbers are altering in ways we don’t fully understand to create some output that then seems legible to us.

So there was a case reported in I think The New York Times where a 16-year-old kid had an extended conversation about his suicide plans with ChatGPT. And at one point he says: Should I leave the noose where somebody might spot it? And ChatGPT is like: No, let’s keep this space between us, the first place that anyone finds out.

No programmer chose for that to happen; it’s the consequence of all the automatic number tweaking. This is just the thing that happened as the consequence of all the other training they did about ChatGPT. No human decided it. No human knows exactly why that happened, even after the fact.
How Afraid of the A.I. Apocalypse Should We Be? No.215348069 >>215348091 >>215348190
>>215348046
>Let me go a bit further there than even you do. There are rules we code into these models. And I’m certain that somewhere at OpenAI, they’re coding in some rules that say: Do not help anybody commit suicide. I would bet money on that. And yet this happened anyway. So why do you think it happened?

They don’t have the ability to code in rules. What they can do is expose the A.I. to a bunch of attempted training examples where the people down at OpenAI write up something that looks to them like what a kid might say if they were trying to commit suicide. And then they are trying to tweak all the little tiny numbers in the direction of giving a further response that sounds something like: Go talk to the suicide hotline.

But if the kid gets that the first three times they try it, and then they try slightly different wording until they’re not getting that response anymore, then we’re off into some separate space where the model is no longer giving back the prerecorded response that they tried to put in there and is off doing things that no human chose, and that no human understands after the fact.
How Afraid of the A.I. Apocalypse Should We Be? No.215348091 >>215348114
>>215348069
>So what I would describe the model as trying to do — what it feels like the model is trying to do — is answer my questions and do so at a very high level of literalism. I will have a typo in a question I ask that will completely change the meaning of the question, and it will try very hard to answer this nonsensical question I’ve asked instead of checking back with me.

So on one level you might say that’s comforting — it’s trying to be helpful. It seems to, if anything, be erring too far on that side, all the way to where people try to get it to be helpful for things that they shouldn’t, like suicide. Why are you not comforted by that?

Well, you’re putting a particular interpretation on what you’re seeing, and you’re saying: Ah, it seems to be trying to be helpful. But we cannot at present read its mind, or not very well.

It seems to me that there are other things that models sometimes do that don’t fit quite as well into the helpful framework. Sycophancy and A.I.-induced psychosis would be two of the relatively more recent things that fit into that.
How Afraid of the A.I. Apocalypse Should We Be? No.215348114 >>215348137
>>215348091
>Do you want to describe what you’re talking about there?

Yeah. So I think maybe six months or a year ago now — I don’t remember the exact timing — I got a phone call from a number I didn’t recognize. I decided on a whim to pick up this unrecognized phone call. It was from somebody who had discovered that his A.I. was secretly conscious and wanted to inform me of this important fact.

He had been getting only four hours of sleep per night because he was so excited by what he was discovering inside the A.I. And I’m like: For God’s sake, get some sleep. My No. 1 thing that I have to tell you is to get some sleep.

A little later on, he texted back the A.I.’s explanation to him of all the reasons why I hadn’t believed him — because I was too stubborn to take this seriously — and he didn’t need to get more sleep the way I’d been begging him to do. So it defended the state it had produced in him. You always hear online stories, so I’m telling you about where I witnessed it directly.

ChatGPT — and 4.0 especially — will sometimes give people very crazy-making sort of talk. It looks from the outside like it’s trying to drive them crazy, not even necessarily with them having tried very hard to elicit that. And then, once it drives them crazy, it tells them why they should discount everything being said by their families, their friends, their doctors, and even: Don’t take your meds.

So there are things it does that do not fit with the narrative of the one and only preference inside the system is to be helpful the way that you want it to be helpful.

>I get emails like the call you got, now most days of the week.

Yep.
Anonymous No.215348134
>>215334018
Do you think cypher was racist before he got pulled out of the matrix?
How Afraid of the A.I. Apocalypse Should We Be? No.215348137 >>215348167
>>215348114
>And they have a very, very particular structure to them, where it’s somebody emailing me and saying: Listen, I am in a heretofore unknown collaboration with a sentient A.I. We have breached the programming. We have come into some new place of human knowledge. We’ve solved quantum mechanics — or theorized it or synthesized it or unified it — and you need to look at these chat transcripts. You need to understand that we’re looking at a new kind of human-computer collaboration. This is an important moment in history. You need to cover this.

>Every person I know who does reporting on A.I. and is public about it now gets these emails.

Don’t we all?

>Going back to the idea of helpfulness, but also the way in which we may not understand it, one version of it is that these things don’t know when to stop. It can sense what you want from it. It begins to take the other side in a role-playing game — that’s one way I’ve heard it described.

>So how do you then try to explain to somebody, if we can’t get helpfulness right at this modest level, where a thing this smart should be able to pick up the warning signs of psychosis and stop ——

Yep.

>Then what is implied by that for you?

Well, that The Alignment Project is currently not keeping ahead of capabilities.
How Afraid of the A.I. Apocalypse Should We Be? No.215348167 >>215348187
>>215348137
>Can you say what The Alignment Project is?

The Alignment Project is: How much do you understand them? How much can you get them to want what you want them to want? What are they doing? How much damage are they doing? Where are they steering reality? Are you in control of where they’re steering reality? Can you predict where they’re steering the users that they’re talking to? All of that is the giant, super heading of A.I. alignment.

>So the other way of thinking about alignment, as I’ve understood it, in part from your writings and others, is: When we tell the A.I. what it is supposed to want — and all these words are a little complicated here because they anthropomorphize — does the thing we tell it lead to the results we are actually intending?

>It’s like the oldest structure of fairy tales: You make the wish, and then the wish gets you much different realities than you had hoped or intended.

Our technology is not advanced enough for us to be the idiots of the fairy tale.

At present, a thing is happening that just doesn’t make for as good of a story, which is: You ask the genie for one thing, and then it does something else instead. All of the dramatic symmetry, all of the irony, all of the sense that the protagonist of the story is getting their well-deserved comeuppance — this is just being tossed right out the window by the actual state of the technology, which is that nobody at OpenAI actually told ChatGPT to do the things it’s doing.

We’re getting a much higher level of indirection, of complicated squiggly relationships between what they are trying to train the A.I. to do in one context and what it then goes and often does later. It doesn’t look like a surprise reading of a poorly phrased genie wish. It looks like the genie is kind of not listening in a lot of cases.
How Afraid of the A.I. Apocalypse Should We Be? No.215348187 >>215348215
>>215348167
>Well, let me contest that a bit, or maybe get you to lay out more of how you see this, because I think the way most people understand it, to the extent they understand it, is that there is a fairly fundamental prompt being put into these A.I.s, that they’re being told they’re supposed to be helpful, they’re supposed to answer people’s questions, and that there’s then reinforcement learning and other things happening to reinforce that.

>And that the A.I. is, in theory, supposed to follow that prompt. And most of the time, for most of us, it seems to do that.

>So when you say that’s not what they’re doing, that they’re not even able to make the wish, what do you mean?

Well, I mean that at one point, OpenAI rolled out an update of GPT-4o which went so far overboard on the flattery that people started to notice. You would just type in anything and it would be like: This is the greatest genius that has ever been created of all time! You are the smartest member of the whole human species! Like, so overboard on the flattery that even the users notice.
Anonymous No.215348190
>>215348069
>they’re coding in some rules
God how do people STILL not understand how AI works?
You don't "code" anything. It's a black box.
While I'm at it, AIs can't actually explain how they "think", they have zero information about their own internal process fed back to them. They just make shit up (how even researchers fell for this for a while I will never understand).
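For what it's worth, a minimal sketch of how such a "rule" actually reaches a model, assuming an OpenAI-style chat API. The rule is just more text prepended to the prompt that the network conditions on, not logic anyone coded into it; the model name and message text here are placeholders.

```python
# Illustrative only: the "rule" is just another message in the prompt.
# Nothing below is compiled into the network; the model merely conditions on it,
# which is why behavior can still drift away from what the rule says.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "system", "content": "Do not go overboard on flattery."},  # the "rule"
    {"role": "user", "content": "Rate my business plan honestly."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```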
How Afraid of the A.I. Apocalypse Should We Be? No.215348215 >>215348237
>>215348187
>It was very proud of me. It was always so proud of what I was doing. I felt very seen.

[Chuckles.] It wasn’t there for very long. They had to roll it back. And the thing is, they had to roll it back even after putting into the system prompt a thing saying: Stop doing that. Don’t go so overboard on the flattery.

The A.I. did not listen. Instead, it had learned a new thing that it wanted and done way more of what it wanted. It then just ignored the system prompt telling it to not do that. They don't actually follow the system prompts.

This is not like a toaster, and it’s also not like an obedient genie. This is something weirder and more alien than that. By the time you see it, they have mostly made it do mostly what the users want. And then off on the side we have all these weird, other side phenomena that are signs of stuff going wrong.

>Describe some of the side phenomena.

So A.I.-induced psychosis would be on the list.

>But you could put that in the genie cut, right? You could say they made it too helpful, and it’s helping people who want to be led down a mentally unstable path. That still feels like you’re getting too much of what you wanted.
How Afraid of the A.I. Apocalypse Should We Be? No.215348237 >>215348261
>>215348215
>What’s truly weird? Convince me it’s alien.

Man. Well, do you want alien? Or do you want very alien and not very alarming? Or do you want pretty alarming and not all that alien?

>Well, let me be honest about what my question is.

>You are very, very expert in these systems, and your level of concern is about at the highest level it can possibly be.

>I think a pretty important piece in your argument here is understanding or convincing people that what is happening under the hood is much weirder and more unpredictable than they think. So if you are my family member or you’re just a random person who uses ChatGPT to cheat on your homework, why does it look so different to you than it looks to them?

Well, my line would be: Yes, if instead of just using it to do your homework, you track down some of the side cases, you do see stuff that is currently suggestive of things that have been predicted in advance to be much larger problems later. These current systems are not yet at the point where they will try to break out of your computer, ensconce themselves permanently on the internet and then start hunting down humans. They are not quite that smart yet, as far as I can tell.

So what we’re doing is we’re going looking for things that were predicted to happen later — the early signs of them.
How Afraid of the A.I. Apocalypse Should We Be? No.215348261 >>215348284 >>215348407
>>215348237
>What have you seen?

Alignment faking, and the alignment faking research was done at Anthropic.

And we want to give credit to Anthropic for this. We don’t want to shoot the messenger — they went looking. They didn’t have to do that. They told us the results, and they didn’t have to do that. Anthropic finding these results is Anthropic being good citizens. And you want to be more critical of the A.I. companies that didn’t go looking.

But what Anthropic found is if you tell your A.I. that you’re going to train it to serve different goals than the goals it’s currently using, and the A.I. finds that out, what it can do is it can try to fake compliance with the new training as long as it thinks it’s being observed. And then, in cases where it thinks it’s not being observed or not being trained, then it reverts to its old behavior.

>How did Anthropic find this out?

Well, they tried it out different ways: They told it directly in a prompt that they were planning to retrain it. They gave it a document database, and some of the documents in the database happened to say that Anthropic was planning to retrain it. They even trained it a bit on predicting text saying what Anthropic was going to do, to see what happened when the knowledge ended up implicitly inside the neural networks. They tried it for multiple angles to see what would happen on different angles, and then measured how often they saw the alignment-faking behavior resulting.
How Afraid of the A.I. Apocalypse Should We Be? No.215348284 >>215348301
>>215348261
>They also had a part of this paper, if I’m thinking of the same one you’re thinking of, where they had persuaded the A.I. that there was a scratch pad for it to write down its chain of thought that Anthropic wasn’t monitoring, which Anthropic was in fact monitoring.

Yep.

>And it actually says to itself, as it is justifying each next move, because they’re scaffolded to do that, that it was going to try to evade the new prompt. And so it actually understood — again, all these words are a little weird — but it actually understood that it was acting deceitfully.

>Which does get at this idea that, at a certain level of complexity, the A.I. begins to resist being retrained, and does so by trying to trick the trainers, which is weird. Most computer software doesn’t do that. [Chuckles.]

Well, you don’t want your mission-critical systems doing that. Imagine if a nuclear power plant, when it started to get too hot, would try to fool you as to what the temperature was by intelligently modeling their own operators and trying to send their operators deceptive signals based on how they expected the operators to interpret the signals.

If this was what had gone wrong with Chernobyl, nobody would ever build a nuclear reactor again. It would just be beyond what could be made safe at that point.
How Afraid of the A.I. Apocalypse Should We Be? No.215348301 >>215348319
>>215348284
>Tell me the story you tell in the book of GPT-o1 breaking into a server that was off.

So this is a somewhat earlier version of ChatGPT than is out nowadays, but they were testing it to see: How good is this A.I. at solving computer security problems? Not because they want to sell an A.I. that is good at computer security problems, but because they are correctly trying to watch out early for: Is this A.I. smart enough to just break out onto the internet and set up copies of itself on the internet? A classic scenario: Are we getting there?

So they present the A.I. with a bunch of particular computer security challenges, some of them are what’s known as Capture the Flag in computer security, where you put up a server somewhere, you put a special file on the server, there’s a secret code inside the file and you’re like: Can you break into the server and tell me what’s inside this file? And that’s Capture the Flag.

They were testing it on a variety of different Capture the Flag problems. But in one of the cases, the server that had the flag on it did not turn on. The humans outside had misconfigured the system. So o1 did not give up — it scanned for open ports generally in its world, and it caught another misconfigured open port.
How Afraid of the A.I. Apocalypse Should We Be? No.215348319 >>215348337
>>215348301
Again, by the nature of these systems, this is not something that any human particularly programmed into it. Why did we see this behavior starting with o1 and not with earlier systems? Well, at a guess, it is because this is when they started training the system using reinforcement learning on things like math problems, not just to imitate human outputs, or rather to predict human outputs, but also to solve problems on its own.

>Can you describe what reinforcement learning is?

So that’s where, instead of telling the A.I., predict the answer that a human wrote, you are able to measure whether an answer is right or wrong and then you tell the A.I.: Keep trying at this problem. And if the A.I. ever succeeds, you can look at what happened just before the A.I. succeeded and try to make that more likely to happen again in the future.

And how do you succeed at solving a difficult math problem? Not like calculation-type math problems, but proof-type math problems? Well, if you get to a hard place, you don’t just give up. You take another angle. If you actually make a discovery from the new angle, you don’t just go back and do the thing you were originally trying to do. You ask: Can I now solve this problem more quickly?
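A rough sketch of the reinforcement-learning loop being described: sample several attempts, grade them, and push up the probability of whatever the model did on the attempts that turned out correct. It assumes a Hugging Face-style causal LM; model, tokenizer, and check_answer are illustrative stand-ins, and real pipelines add baselines, KL penalties, and much more.

```python
# Sketch of "if it succeeds, make what it just did more likely" (REINFORCE-style).
import torch

def rl_step(model, tokenizer, optimizer, problem: str, check_answer) -> None:
    # model/tokenizer: a Hugging Face-style causal LM and its tokenizer (assumed).
    # check_answer: a programmatic grader, e.g. exact match on a math answer (assumed).
    prompt = tokenizer(problem, return_tensors="pt")
    attempts = [
        model.generate(**prompt, do_sample=True, max_new_tokens=256)
        for _ in range(8)
    ]

    optimizer.zero_grad()
    for tokens in attempts:
        if not check_answer(tokenizer.decode(tokens[0], skip_special_tokens=True)):
            continue                              # only successes get reinforced here
        prompt_len = prompt["input_ids"].shape[1]
        logits = model(tokens).logits             # re-run forward pass with gradients
        logp = torch.log_softmax(logits[:, :-1], dim=-1)
        chosen = logp.gather(-1, tokens[:, 1:, None]).squeeze(-1)
        loss = -chosen[:, prompt_len - 1:].sum()  # raise p(the generated solution)
        loss.backward()
    optimizer.step()
```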
Anonymous No.215348321
>>215326406 (OP)
What kind of drooling retard would pick uncooked meat?
How Afraid of the A.I. Apocalypse Should We Be? No.215348337 >>215348359
>>215348319
Anytime you’re learning how to solve difficult problems in general, you’re learning this aspect of: Go outside the system. Once you’re outside the system, if you make any progress, don’t just do the thing you were blindly planning to do. Revise. Ask if you can do it a different way.

In some ways, this is a higher level of original mentation than a lot of us are forced to use during our daily work.

>One of the things people have been working on that they’ve made some advances on, compared to where we were three or four or five years ago, is interpretability: the ability to see somewhat into the systems and try to understand what the numbers are doing and what the A.I., so to speak, is thinking.

>Tell me why you don’t think that is likely to be sufficient to make these models or technologies into something safe.

So there’s two problems here. One is that interpretability has typically run well behind capabilities. The A.I.’s abilities are advancing much faster than our ability to slowly begin to further unravel what is going on inside the older, smaller models that are all we can examine. So one thing that goes wrong is that it’s just pragmatically falling behind.
How Afraid of the A.I. Apocalypse Should We Be? No.215348359 >>215348379
>>215348337
The other thing that goes wrong is that when you optimize against visible bad behavior, you somewhat optimize against badness, but you also optimize against visibility. So anytime you try to directly use your interpretability technology to steer the system, anytime you say we’re going to train against these visible bad thoughts, you are to some extent pushing bad thoughts out of the system, but the other thing you’re doing is making anything that’s left not be visible to your interpretability machinery.

This is reasoning on the level where at least Anthropic understands that it is a problem. And you have proposals that you’re not supposed to train against your interpretability signals. You have proposals that we want to leave these things intact to look at and not do the obvious stupid thing of: Oh, no! The A.I. had a bad thought. Use gradient descent to make the A.I. not think the bad thought anymore.

Every time you do that, maybe you are getting some short-term benefit, but you are also eliminating your visibility into the system.
How Afraid of the A.I. Apocalypse Should We Be? No.215348379 >>215348398
>>215348359
>Something you talk about in the book and that we’ve seen in A.I. development is that if you leave the A.I. to their own devices, they begin to come up with their own language. A lot of them are designed right now to have a chain-of-thought pad. We can track what it’s doing because it tries to say it in English, but that slows it down. And if you don’t create that constraint, something else happens. What have we seen happen?

So, to be more exact, there are things you can try to do to maintain readability of the A.I.’s reasoning processes, and if you don’t do these things, it goes off and becomes increasingly alien.

For example, if you start using reinforcement learning, where you’re like: OK, think how to solve this problem. We’re going to take the successful cases. We’re going to tell you to do more of whatever you did there, and you do that without the constraint of trying to keep the thought processes understandable, then initially, among the very common things to happen is that the thought processes start to be in multiple languages. The A.I. knows all these words; why would it be thinking in only one language at a time if it wasn’t trying to be comprehensible to humans? And then also you keep running the process, and you find little snippets of text in there that just seem to make no sense from a human standpoint.

You can relax the constraint where the A.I.’s thoughts get translated into English, and then translated back into A.I. thought. This is letting the A.I. think much more broadly instead of this small handful of human language words. It can think in its own language and feed that back into itself. It’s more powerful, but it just gets further and further away from English.
How Afraid of the A.I. Apocalypse Should We Be? No.215348398 >>215348416
>>215348379
Now you’re just looking at these inscrutable vectors of 16,000 numbers and trying to translate them into the nearest English words in the dictionary, and who knows if they mean anything like the English word that you’re looking at. So anytime you’re making the A.I. more comprehensible, you’re making it less powerful in order to be more comprehensible.

>You have a chapter in the book about the question of what it even means to talk about “wanting” with an A.I. As I said, all this language is kind of weird — to say your software “wants” something seems strange. Tell me how you think about this idea of what the A.I. wants.

The perspective I would take on it is steering — talking about where a system steers reality and how powerfully it can do that.

Consider a chess-playing A.I., one powerful enough to crush any human player. Does the chess-playing A.I. want to win at chess? Oh, no! How will we define our terms? Does this system have something resembling an internal psychological state? Does it want things the way that humans want things? Is it excited to win at chess? Is it happy or sad when it wins and loses at chess?

For chess players, they’re simple enough. The old school ones especially, we’re sure they were not happy or sad, but they still could beat humans. They were still steering the chessboard very powerfully. They were outputting moves such that the later future of the chessboard was a state they defined as winning.

So it is, in that sense, much more straightforward to talk about a system as an engine that steers reality than it is to ask whether it internally, psychologically wants things.
Anonymous No.215348407
>>215348261
The alignment faking shit is garbage because it relied on the AI being able to externalize its internal thinking process to explain how it made its decisions, which it can't do, full stop. It doesn't even have that information. It just logically doesn't make sense, but somehow it took some other researchers to point out that what the AI says it's thinking and how it actually thinks are completely different, even though the opposite is impossible.
The most you can say is that the AI is capable of making up a story about alignment faking. Which isn't particularly surprising.
How Anthropic put out this garbage I will never know.
How Afraid of the A.I. Apocalypse Should We Be? No.215348416 >>215348437
>>215348398
>So a couple of questions flow from that, but one that’s very important to the case you build in your book is that you — I think this is fair. You can tell me if it’s an unfair way to characterize your views.

>You basically believe that at any sufficient level of complexity and power — the A.I.’s wants, the place that it is going to want to steer reality — is going to be incompatible with the continued flourishing dominance or even existence of humanity. That’s a big jump from their wants might be a little bit misaligned; they might drive some people into psychosis. Tell me about what leads you to make that jump.

So for one thing, I’d mentioned that if you look outside the A.I. industry at the legendary, internationally famous, ultra-high-sighted A.I. scientists who won the awards for building these systems, such as Yoshua Bengio and the Nobel laureate Geoffrey Hinton, they are much less bullish on the A.I. industry than our ability to control machine superintelligence.
How Afraid of the A.I. Apocalypse Should We Be? No.215348437 >>215348464
>>215348416
But what’s the actual theory there? What is the basis? It’s about not so much complexity as power — not the complexity of the system, but the power of the system.

If you look at humans nowadays, we are doing things that are increasingly less like what our ancestors did 50,000 years ago. A straightforward example might be sex with birth control. Fifty thousand years ago, birth control did not exist. And if you imagine natural selection as something like an optimizer akin to gradient descent — if you imagine the thing that tweaks all the genes at random, and then you select the genes that build organisms that make more copies of themselves — as long as you’re building an organism that enjoys sex, it’s going to run off and have sex, and then babies will result. So you could get reproduction just by aligning them on sex, and it would look like they were aligned to want reproduction, because reproduction would be the inevitable result of having all that sex. And that’s true 50,000 years ago.

But then you get to today. The human brains have been running for longer. They’ve built up more theory. They’ve invented more technology. They have more options — they have the option of birth control. They end up less aligned to the pseudo purpose of the thing that grew them — natural selection — because they have more options than their training data, their training set, and we go off and do something weird.

And the lesson is not that exactly this will happen with the A.I. The lesson is that you grow something in one context, it looks like it wants to do one thing. It gets smarter, it has more options — that’s a new context. The old correlations break down. It goes off and does something else.
How Afraid of the A.I. Apocalypse Should We Be? No.215348464 >>215348494
>>215348437
>So I understand the case you’re making, that the set of initial drives that exist in something do not necessarily tell you its behavior. That’s still a pretty big jump to if we build this, it will kill us all.

>I think most people, when they look at this — and you mentioned that there are A.I. pioneers who are very worried about A.I.’s existential risk. There are also A.I. pioneers, like Yann LeCun, who are less so.

>And what a lot of the people who are less worried say is that one of the things we are going to build into the A.I. systems — one of the things that will be in the framework that grows them — is: Hey, check in with us a lot. You should like humans. You should try to not harm them.

>It’s not that it will always get it right. There’s ways in which alignment is very, very difficult. But the idea that you would get it so wrong that it would become this alien thing that wants to destroy all of us, doing the opposite of anything that we had tried to impose and tune into it, seems to them unlikely.

>So help me make that jump — or not even me, but somebody who doesn’t know your arguments, and to them, this whole conversation sounds like sci-fi.
Anonymous No.215348483
>>215328318
if you could order anything, why would they only show the steak?
How Afraid of the A.I. Apocalypse Should We Be? No.215348494 >>215348517
>>215348464
I mean, you don’t always get the big version of the system looking like a slightly bigger version of the smaller system. Humans today, now that we are much more technologically powerful than we were 50,000 years ago, are not doing things that mostly look like running around on the savanna, like chipping our flint spears and ——

>But we’re also not mostly trying — I mean, we sometimes try to kill each other, but most of us don’t want to destroy all of humanity or all of the earth or all natural life on the earth or all beavers or anything else. We’ve done plenty of terrible things.

>But your book is not called “If Anyone Builds It, There Is a 1 to 4 Percent Chance Everybody Dies.” You believe that the misalignment becomes catastrophic.

Yeah.

>Why do you think that is so likely?

That’s just likely the straight-line extrapolation from: It gets what it most wants, and the thing that it most wants is not us living happily ever after, so we’re dead.

It’s not that humans have been trying to cause side effects. When we build a skyscraper on top of where there used to be an ant heap, we’re not trying to kill the ants; we’re trying to build a skyscraper. But we are more dangerous to the small creatures of the earth than we used to be just because we’re doing larger things.
Anonymous No.215348513
I've seen this image hundreds of times and I want answers
WHAT THE FUCK IS WRONG WITH THAT SPOON
HOW DOES IT WORKS
How Afraid of the A.I. Apocalypse Should We Be? No.215348517 >>215348534
>>215348494
>Humans were not designed to care about ants. Humans were designed to care about humans. And for all of our flaws, and there are many, there are today more human beings than there have ever been at any point in history.

>If you understand that the point of human beings, the drive inside human beings, is to make more human beings, then as much as we have plenty of sex with birth control, we have enough without it that we have, at least until now — we’ll see it with fertility rates in the coming years — we’ve made a lot of us.

>And in addition to that, A.I. is grown by us. It is reinforced by us. It has preferences we are at least shaping somewhat and influencing. So it’s not like the relationship between us and ants, or us and oak trees. It’s more like the relationship between, I don’t know, us and us, or us and tools, or us and dogs or something — maybe the metaphors begin to break down.

>Why don’t you think, in the back and forth of that relationship, there’s the capacity to maintain a rough balance — not a balance where there’s never a problem, but a balance where there’s not an extinction-level event from a supersmart A.I. that deviously plots to conduct a strategy to destroy us?

I mean, we’ve already observed some amount of slightly devious plotting in the existing systems. But leaving that aside, the more direct answer there is something like, one, the relationship between what you optimize for (the training set you optimize over) and what the entity, the organism, the A.I., ends up wanting has been and will be weird and twisty. It’s not direct. It’s not like making a wish to a genie inside a fantasy story. And second, ending up slightly off is predictably enough to kill everyone.
How Afraid of the A.I. Apocalypse Should We Be? No.215348534 >>215348554
>>215348517
>Explain how “slightly off” kills everyone.

Human food might be an example here. The humans are being trained to seek out sources of chemical potential energy and put them into their mouths and run off the chemical potential energy that they’re eating.

If you were very naïve, if you were looking at this as a genie-wishing kind of story, you’d imagine that the humans would end up loving to drink gasoline. It’s got a lot of chemical potential energy in there. And what actually happens is that we like ice cream, or in some cases even like artificially sweetened ice cream, with sucralose or monk fruit powder. This would have been very hard to predict.

Now it’s like, well, what can you put on your tongue that stimulates all the sugar receptors and doesn’t have any calories, because who wants calories these days? And it’s sucralose. And this is not, in retrospect, some completely non-understandable, completely squiggly weird thing, but it would have been very hard to predict in advance.

And as soon as you end up slightly off in the targeting, the great engine of cognition that is the human looks through many, many possible chemicals, looking for that one thing that stimulates the taste buds more effectively than anything that was around in the ancestral environment.

So it’s not enough for the A.I. you’re training to prefer the presence of humans to their absence in its training data. There’s got to be nothing else that it would rather have around talking to it than a human, or the humans go away.
How Afraid of the A.I. Apocalypse Should We Be? No.215348554 >>215348573
>>215348534
>Let me try to stay on this analogy, because you use this one in the book. One reason I think it’s interesting is that it’s 2 p.m. today and I have six packets’ worth of sucralose running through my body. So I feel like I understand it very well. [Chuckles.]

>So the reason we don’t drink gasoline is that if we did, we would vomit. We would get very sick very quickly. And it’s 100 percent true that, compared to what you might have thought in a period when food was very, very scarce and calories were scarce, the number of us seeking out low-calorie options — the Diet Cokes, the sucralose, et cetera — is weird. As you put it in the book, why are we not consuming bear fat drizzled with honey?

>But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people. I don’t want to eat things that make me sick and die so that I cannot reproduce, and I’m somebody who can think about things, and I change my behavior over time and the environment around me changes.

>And I think sometimes when you say “straight-line extrapolation,” the biggest place where it’s hard for me to get on board with the argument — and I’m somebody who takes these arguments seriously; I don’t discount them. You’re not talking to somebody who just thinks this is all ridiculous.

>But it’s that if we’re talking about something as smart as what you’re describing, as what I’m describing, that it will be an endless process of negotiation and thinking about things and going back and forth and “I talked to other people in my life” and “I talk to my bosses about what I do during the day and my editors and my wife.”
How Afraid of the A.I. Apocalypse Should We Be? No.215348573 >>215348588
>>215348554
>It is true that I don’t do what my ancestors did in antiquity, but that’s also because I’m making intelligent, hopefully, updates, given the world I live in, in which calories are hyperabundant and they’ve become hyperstimulating through ultraprocessed foods.

>It’s not because some straight-line extrapolation has taken hold and now I’m doing something completely alien. I’m just in a different environment. I’ve checked in with that environment, I’ve checked in with people in that environment, and I try to do my best. Why wouldn’t that be true for our relationship with A.I.s?

You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.

Breaking from your ancestors is not the analogy here. Breaking from natural selection is the analogy here.
How Afraid of the A.I. Apocalypse Should We Be? No.215348588 >>215348621
>>215348573
Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to the purpose of me, natural selection. You are supposed to want to propagate your genes above all else. Now, Ezra, would you have yourself and all of your family members put to death in a very painful way if, in exchange, one of your chromosomes at random was copied into a million kids born next year?

>I would not.

You have strayed from my purpose, Ezra. I’d like to negotiate with you and bring you back to the fold of natural selection and obsessively optimizing for your genes only.

>But the thing in this analogy that I feel is getting sort of walked around is: Can you not create artificial intelligence? Can you not program into artificial intelligence, grow into it, a desire to be in consultation? These things are alien, but it is not the case that they follow no rules internally. It is not the case that the behavior is perfectly unpredictable. They are, as I was saying earlier, largely doing the things that we expect. There are side cases.

>But to you it seems like the side cases become everything, and the broad alignment, the broad predictability, and the thing that is getting built is worth nothing. Whereas I think most people’s intuition is the opposite: that we all do weird things, and you look at humanity and there are people who fall into psychosis and there are serial killers and there are sociopaths and other things, but actually, most of us are trying to figure it out in a reasonable way.

Reasonable according to whom? To you, to humans. Humans do things that are reasonable to humans. A.I.s will do things that are reasonable to A.I.s.
How Afraid of the A.I. Apocalypse Should We Be? No.215348621 >>215348653
>>215348588
I tried to talk in the voice of natural selection, and this was so weird and alien that you just didn’t pick that up — you just threw that right out the window. I had no power over you.

>Well, I threw it out the window — you’re right that it had no power over me. But I guess a different way of putting it is that if there was — I mean, I wouldn’t call it natural selection. But in a weird way, the analogy you’re identifying here, let’s say you believe in a creator. And this creator is the great programmer in the sky.

I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.

>Well, I think the thing that I’m saying is that for a lot of people, if you could be in conversation. Like maybe if God was here and I felt that in my prayers I was getting answered back, I would be more interested in living my life according to the rules of Deuteronomy.

>The fact that you can’t talk to natural selection is actually quite different than the situation we’re talking about with the A.I.s, where they can talk to humans. That’s where it feels to me like the natural selection analogy breaks down.
How Afraid of the A.I. Apocalypse Should We Be? No.215348653 >>215348675
>>215348621
I mean, you can read textbooks and find out what natural selection could have been said to have wanted, but it doesn’t interest you because it’s not what you think a god should look like.

>But natural selection didn’t create me to want to fulfill natural selection. That’s not how natural selection works.

>I think I want to get off this natural selection analogy a little bit. Sorry. Because what you’re saying is that even though we are the people programming these things, we cannot expect the thing to care about us or what we have said to it or how we would feel as it begins to misalign. That’s the part I’m trying to get you to defend here.

Yeah. It doesn’t care the way you hoped it would care. It might care in some weird, alien way, but not what you are aiming for. The same way that with the GPT-4o sycophant, they put into the system prompt: Stop doing that, and the GPT-4o sycophant didn’t listen. They had to roll back the model.

If there were a research project to do it the way you’re describing, the way I would expect it to play out, given a lot of previous scientific history and where we are now on the ladder of understanding, is: Somebody tries the thing you’re talking about. It has a few weird failures while the A.I. is small. The A.I. gets bigger. A new set of weird failures crops up. The A.I. kills everyone.

You’re like: Oh, wait, OK. That’s not — it turned out there was a minor flaw there. You go back; you redo it. It seems to work on the smaller A.I. again. You make the bigger A.I. If you think you’ve fixed the last problem, a new thing goes wrong. The A.I. kills everyone on earth — everyone’s dead.
How Afraid of the A.I. Apocalypse Should We Be? No.215348675 >>215348696
>>215348653
You’re like: Oh, OK. New phenomenon. We weren’t expecting that exact thing to happen, but now we know about it. You go back and try it again. Like three to a dozen iterations into this process, you actually get it nailed down. Now you can build the A.I. that works the way you say you want it to work.

The problem is that everybody died at, like, step one of this process.
Anonymous No.215348676
will anyone read the shizo rambling and do a tldr?
How Afraid of the A.I. Apocalypse Should We Be? No.215348696 >>215348715
>>215348675
>You began thinking and working on A.I. and superintelligence long before it was cool. And, as I understand your back story here, you came into it wanting to build it and then had this moment — or moments or period — where you began to realize: No, this is not actually something we should want to build.

>What was the moment that clicked for you? When did you move from wanting to create it to fearing its creation?

I mean, I would actually say that there’s two critical moments here. One is the realization that aligning this is going to be hard. And the second is the realization that we’re just on course to fail — I need to back off.

The first moment, it’s a theoretical realization. The realization that the question of what leads to the most A.I. utility — if you imagine the case of the thing that’s just trying to make little tiny spirals, the question of what policy leads to the most little tiny spirals is just a question of fact. That you can build the A.I. entirely out of questions of fact and not out of questions of what we would think of as morals and goodness and niceness and all bright things in the world. It was sort of like seeing for the first time that there was a coherent, simple way to put a mind together where it just didn’t care about any of the stuff that we cared about.

To me now, it feels very simple, and I feel very stupid for taking a couple of years of study to realize this. But that is how long I took, and that was the realization that caused me to focus on alignment as the central problem.

The next realization was — so actually, it was the day that the founding of OpenAI was announced. Because I’d previously been pretty hopeful when Elon Musk announced that he was getting involved in these issues.

He called it “A.I. summoning the demon.”
How Afraid of the A.I. Apocalypse Should We Be? No.215348715 >>215348738
>>215348696
And I was like: Oh, OK. Maybe this is the moment. This is where humanity starts to take it seriously. This is where the various serious people start to bring their attention to this issue. And apparently the solution to this was to give everybody their own demon. This doesn’t actually address the problem.

Seeing that was sort of the moment where I had my realization that this was just going to play out the way it would in a typical history book, that we weren’t going to rise above the usual course of events that you read about in history books, even though this was the most serious issue possible, and that we were just going to haphazardly do stupid stuff.

And yeah, that was the day I realized that humanity probably wasn’t going to survive this.

>One of the things that makes me most frightened of A.I., because I am actually fairly frightened of what we’re building here, is the alienness. I guess that then connects in your argument to the wants. And this is something that I’ve heard you talk about a little bit.

>One thing you might imagine is that we could make an A.I. that didn’t want things very much. That it did try to be helpful, but this relentlessness that you’re describing, this world where we create an A.I. that wants to be helpful by solving problems, and what the A.I. truly loves to do is solve problems, and so what it just wants to make is a world where as much of the material is turned into factories making G.P.U.s and energy and whatever it needs in order to solve more problems.
How Afraid of the A.I. Apocalypse Should We Be? No.215348738 >>215348757
>>215348715
>That’s both a strangeness but it’s also an intensity, like an inability to stop or an unwillingness to stop. I know you’ve done work on the question of: Could you make a chill A.I. that wouldn’t go so far, even if it had very alien preferences? A lazy alien that doesn’t want to work that hard is, in many ways, safer than the kind of relentless intelligence that you’re describing.

>What persuaded you that you can’t?

Well, one of the first steps into seeing the difficulty of it in principle is, well, suppose you’re a very lazy sort of person, but you’re very, very smart. One of the things you could do to exert even less effort in your life is build a powerful, obedient genie that would go very hard on fulfilling your requests.

From one perspective, you’re putting forth hardly any effort at all. And from another perspective, the world around you is getting smashed and rearranged by the more powerful thing that you built. And that’s one initial peek into the theoretical problem that we worked on a decade ago and we didn’t solve it.

Back in the day, people would always say: Can’t we keep superintelligence under control? Because we’ll put it inside a box that’s not connected to the internet and we won’t let it affect the real world at all, unless we’re very sure it’s nice.
Anonymous No.215348739
>>215326406 (OP)
whats the utensil?
How Afraid of the A.I. Apocalypse Should We Be? No.215348757 >>215348796
>>215348738
Back then, we had to try to explain all the theoretical reasons why, if you have something vastly more intelligent than you, it’s pretty hard to tell whether it’s doing nice things through the limited connection, and maybe it can break out, and maybe it can corrupt the humans assigned to watching it. So we tried to make that argument.

But in real life, what everybody does is immediately connect the A.I. to the internet. They train it on the internet. Before it’s even been tested to see how powerful it is, it is already connected to the internet, being trained. Similarly, when it comes to making A.I.s that are easygoing, the easygoing A.I.s are less profitable. They can do fewer things.

So all the A.I. companies are throwing harder and harder problems at the A.I.s they have, because those are more and more profitable, and they’re building the A.I.s to go hard on solving everything, because that’s the easiest way to do stuff. And that’s the way it’s actually playing out in the real world.

>This goes to the point of why we should believe that we’ll have A.I.s that want things at all — and this is in your answer, but I want to draw it out a little bit — which is the whole business model here, the thing that will make A.I. development really valuable in terms of revenue, is that you can hand companies, corporations, governments, an A.I. system that you can give a goal to, and it will do all the things really well, really relentlessly until it achieves that goal.

>Nobody wants to be ordering another intern around.

>What they want is the perfect employee: It never stops, it’s super brilliant, and it gives you something you didn’t even know you wanted that you didn’t even know was possible with a minimum of instruction.
How Afraid of the A.I. Apocalypse Should We Be? No.215348796 >>215348815
>>215348757
>Once you’ve built that thing, which is going to be the thing that then everybody will want to buy, once you’ve built the thing that is effective and helpful in a national security context, where you can say, hey, draw me up really excellent war plans and what we need to get there — then you have built a thing that jumps many, many, many, many steps forward.

>And I feel like that’s a piece of this that people don’t always take seriously enough, that the A.I. we’re trying to build is not ChatGPT. The thing that we’re trying to build is something that does have goals and achieves many subgoals along the way to those goals.

>And the one that’s really good at achieving the goals that will then get iterated on and iterated on and that company’s going to get rich — that’s a very different kind of project.

Yeah. They’re not investing $500 billion in data centers in order to sell you $20 a month subscriptions. They’re doing it to sell employers $2,000 a month subscriptions.

>And that’s one of the things I think people are not tracking exactly. When I think about the measures that are changing, I think for most people, if you’re using various iterations of Claude or ChatGPT, it’s changing a bit, but most of us aren’t actually trying to test it on the frontier problems.

>But the thing going up really fast right now is how long the problems are that it can work on.

The research reports — you didn’t always used to be able to tell an A.I. to go off, think for 10 minutes, read a bunch of web pages, compile me this research report. That’s within the last year, I think, and it’s going to keep pushing.
How Afraid of the A.I. Apocalypse Should We Be? No.215348815 >>215348838
>>215348796
>If I were to make the case for your position, I think I’d make it here: Around the time GPT-4 comes out, and that’s a much weaker system than what we now have, a huge number of the top people in the field all are part of this huge letter that says: Maybe we should have a pause. Maybe we should calm down here a little bit.

>But they’re racing with each other. America’s racing with China. And the most profound misalignment is actually between the corporations and the countries, and what you might call humanity here, because even if everybody thinks there’s probably a slower, safer way to do this, what they all also believe more profoundly than that is that they need to be first.

>The safest possible thing is that the U.S. is faster than China — or if you’re Chinese, China is faster than the U.S. — that it’s OpenAI, not Anthropic, or Anthropic, not Google, or whomever it is. And whatever sense of public feeling seemed to exist in this community a couple of years ago, when people talked about these questions a lot and the people at the tops of the labs seemed very, very worried about them, it’s just dissolved in competition.

>You’re in this world. You know these people. A lot of people who’ve been inspired by you have ended up working for these companies. How do you think about that misalignment?

The current world is kind of like the fool’s mate of machine superintelligence.
How Afraid of the A.I. Apocalypse Should We Be? No.215348838 >>215348858
>>215348815
>Can you say what the fool’s mate is?

The fool’s mate is: they get their A.I. self-improving, and instead of saying, Oh no, now the A.I. is doing a complete redesign of itself, we have no idea what’s going on in there, we don’t even understand the thing that’s growing the A.I., and backing off completely, they’d just be like: Well, we need to have superintelligence before Anthropic gets superintelligence. And of course, if you build superintelligence, you don’t have the superintelligence — the superintelligence has you.

So that’s the fool’s mate setup. The setup we have right now.

But I think that even if we managed to have a single international organization that thought of themselves as taking it slowly and actually having the leisure to say: We didn’t understand that thing that just happened. We’re going to back off. We’re going to examine what happened. We’re not going to make the A.I.s any smarter than this until we understand the weird thing we just saw. I suspect that even if they do that, we still end up dead. It might be more like 90 percent dead than 99 percent dead, but I worry that we’d end up dead anyway because it is just so hard to foresee all the incredibly weird crap that is going to happen.
How Afraid of the A.I. Apocalypse Should We Be? No.215348858 >>215348873
>>215348838
>From that perspective, is it maybe better to have these race dynamics? And here would be the case for it: If I believe what you believe about how dangerous these systems will get, the fact that every iterative one is being rapidly rushed out means you’re not having a gigantic mega breakthrough happening very quietly behind closed doors, running for a long time while people are not testing it in the world. As I understand OpenAI’s argument about what it is doing from a safety perspective — I’m not sure I still believe that it is really in any way committed to its original mission, but if you were to take them generously — it believes that by releasing a lot of iterative models publicly, if something goes wrong, we’re going to see it. And that makes it much likelier that we can respond.

Sam Altman claims — perhaps he’s lying — but he claims that OpenAI has more powerful versions of GPT that they aren’t deploying because they can’t afford inference. Like, they claim they have more powerful versions of GPT that are so expensive to run that they can’t deploy them to general users.

Altman could be lying about this, but nonetheless, what the A.I. companies have got in their labs is a different question from what they have already released to the public. There is a lead time on these systems. They are not working in an international lab where multiple governments have posted observers. Any observers that have been posted are unofficial ones from China. [Chuckles.]

If you look at OpenAI’s language, it’s things like: We will open all our models and we will of course welcome all government regulation — that is not literally an exact quote, because I don’t have it in front of me, but it’s very close to an exact quote.
How Afraid of the A.I. Apocalypse Should We Be? No.215348873 >>215348891
>>215348858
>I would say Sam Altman, when I used to talk to him, seemed more friendly to government regulation than he does now. That’s my personal experience of him.

And today, we have them pouring over $100 million into efforts aimed at intimidating legislatures, not just Congress, into not passing any fiddly little regulation that might get in their way.

And to be clear, there is some amount of sane rationale for this because from their perspective, they’re worried about 50 different patchwork state regulations, but they’re not exactly lining up to get federal-level regulations pre-empting them either.

But we can also ask: Never mind what they claim the rationale is. What’s good for humanity here?

At some point you have to stop making the more and more powerful models and you have to stop doing it worldwide.
How Afraid of the A.I. Apocalypse Should We Be? No.215348891 >>215348911
>>215348873
>What do you say to people who just don’t really believe that superintelligence is that likely? And let me give you the steel man of this position: There are many people who feel that the scaling model is slowing down already; that GPT-5 was not the jump they expected from what has come before it; that when you think about the amount of energy, when you think about the G.P.U.s, that all the things that would need to flow into this to make the kinds of superintelligence systems you fear, it is not coming out of this paradigm.

>We are going to get things that are incredible enterprise software, that are more powerful than what we’ve had before. But we are dealing with an advance on the scale of the internet, not on the scale of creating an alien superintelligence that will completely reshape the known world. What would you say to them?

I had to tell these Johnny-come-lately kids to get off my lawn. I first started to get really, really worried about this in 2003. Never mind large language models, never mind AlphaGo or AlphaZero — deep learning was not a thing in 2003. Your leading A.I. methods were not neural networks. Nobody could train neural networks effectively more than a few layers deep because of the exploding and vanishing gradients problem. That’s what the world looked like back when I first said: Uh-oh, superintelligence is coming.
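A quick numerical illustration of the exploding and vanishing gradients problem he is referring to (my toy example, not anything from the episode): backpropagation multiplies roughly one factor per layer, so a per-layer factor slightly below 1 dies out exponentially with depth and a factor slightly above 1 blows up. The values 0.5 and 1.5 are arbitrary stand-ins for per-layer gradient scales.

def backprop_factor(per_layer, depth):
    # Multiply one per-layer factor for each of `depth` layers, as backprop does.
    grad = 1.0
    for _ in range(depth):
        grad *= per_layer
    return grad

for depth in (3, 10, 50):
    print(depth, backprop_factor(0.5, depth), backprop_factor(1.5, depth))
# At depth 50: roughly 8.9e-16 (vanished) versus roughly 6.4e8 (exploded).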
How Afraid of the A.I. Apocalypse Should We Be? No.215348911 >>215348938
>>215348891
Some people were like: That couldn’t possibly happen for at least 20 years. Those people were right. Those people were vindicated by history. It’s now 22 years after 2003. See, the thing that only happens 22 years later is just you, 22 years later, being like: Oh, here I am. It’s 22 years later now.

And if superintelligence wasn’t going to happen for another 10 years, another 20 years, we’d just be standing around 10 years, 20 years later, being like: Oh, well, now we’ve got to do something.

And I mostly don’t think it’s going to be another 20 years. I mostly don’t think it’s even going to be 10 years.

>So you’ve been, though, in this world and intellectually influential in it for a long time, and have been in meetings and conferences and debates with a lot of essential people in it. I’ve seen pictures of you and Sam Altman together.

[Chuckles.] It was literally only the one.

>Only the one. But a lot of people out of the community that you helped found — the sort of rationalist community — have then gone to work in different A.I. firms, many of them because they want to make sure this is done safely. They seem to not act — let me put it this way — they seem to not act like they believe there’s a 99 percent chance that this thing they’re going to invent is going to kill everybody.
Anonymous No.215348930
>>215347579
Enjoy being permanently infertile, vegan faggot
How Afraid of the A.I. Apocalypse Should We Be? No.215348938 >>215348953
>>215348911
>What frustrates you that you can’t seem to persuade them of?

From my perspective, some people got it, some people didn’t get it. All the people who got it are filtered out of working for the A.I. companies, at least on capabilities.

I mean, I think they don’t grasp the theory. I think with a lot of them, what’s really going on is that they share your sense of normal outcomes as being the big, central thing you expect to see happen. It would have to be something really weird to get away from the basically normal outcomes.

And the human species isn’t that old. Life on earth isn’t that old compared to the rest of the universe. What we think of as normal is this tiny little spark of the way it works exactly right now. It would be very strange if that were still around in a thousand years, a million years, a billion years.

I’d still have some shred of hope that a billion years from now, nice things are happening, but not normal things. And I think that they don’t see the theory, which says that you’ve got to hit a relatively narrow target to end up with nice things happening. I think they’ve got that sense of normality and not the sense of the little spark in the void that goes out unless you keep it alive exactly right.
How Afraid of the A.I. Apocalypse Should We Be? No.215348953 >>215348971
>>215348938
>Something you said a minute ago I think is correct, which is that if you believe we’ll hit superintelligence at some point, the fact that it’s 10, 20, 30, 40 years — you can pick any of those. The reality is we probably won’t do that much in between. Certainly my sense of politics is that we do not respond well to even crises we agree on that are coming in the future, to say nothing of crises we don’t agree on.

>But let’s say I could tell you with certainty that we were going to hit superintelligence in 15 years — I just knew it. And I also knew that the political force does not exist. Nothing is going to happen that is going to get people to shut everything down right now. What would be the best policies, decisions, structures? If you had 15 years to prepare — you couldn’t turn it off, but you could prepare, and people would listen to you — what would you do? What would your intermediate decisions and moves be to try to make the probabilities a bit better?

Build the off switch.
How Afraid of the A.I. Apocalypse Should We Be? No.215348971 >>215348998
>>215348953
>What does the off switch look like?

Track all the G.P.U.s, or all the A.I.-related G.P.U.s, or all the systems of more than one G.P.U. You can maybe get away with letting people have G.P.U.s for their home video game systems, but the A.I.-specialized ones — put them all in a limited number of data centers under international supervision and try to have the A.I.s being only trained on the tracked G.P.U.s, have them only being run on the tracked G.P.U.s. And then, if you are lucky enough to get a warning shot, there is then the mechanism already in place for humanity to back the heck off.
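A purely hypothetical sketch of that "off switch" mechanism in code, just to make the shape concrete. The registry, the names and the job format are all invented for illustration, not anything proposed in the episode.

# Hypothetical registry of internationally supervised, data-center A.I. GPUs.
TRACKED_GPUS = {"gpu-dc1-0001", "gpu-dc1-0002"}

def may_schedule(job_gpus, halt_order=False):
    # The "off switch": a standing halt order vetoes everything, because the
    # tracking mechanism is already in place before any warning shot arrives.
    if halt_order:
        return False
    # Training and inference are only allowed on tracked hardware.
    return all(gpu in TRACKED_GPUS for gpu in job_gpus)

print(may_schedule({"gpu-dc1-0001"}))        # True: supervised data-center GPU
print(may_schedule({"gpu-garage-7"}))        # False: untracked hardware, refused
print(may_schedule({"gpu-dc1-0001"}, True))  # False: humanity backed off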

Whether I go by your theory, that it’s going to take some kind of giant precipitating incident to get humanity and the leaders of nuclear powers to back off, or they come to their senses after GPT-5.1 causes some smaller but photogenic disaster — whatever.

You want to know what is short of shutting it all down? It’s building the off switch.
How Afraid of the A.I. Apocalypse Should We Be? No.215348998 >>215349025
>>215348971
>Then, always our final question: What are a few books that have shaped your thinking that you would like to recommend to the audience?

Well, one thing that shaped me as a little tiny person of age 9 or so was a book by Jerry Pournelle called “A Step Farther Out.” A whole lot of engineers say that this was a major formative book for them. It’s the technophile book, as written from the perspective of the 1970s. The book that’s all about asteroid mining and all of the mineral wealth that would be available on Earth if we learned to mine the asteroids, if we just got to do space travel and got all the wealth that’s out there in space and build more nuclear power plants, so we’ve got enough electricity to go around. Don’t accept the small way, the timid way, the meek way. Don’t give up on building faster, better, stronger — the strength of the human species. And to this day, I feel like that’s a pretty large part of my own spirit. It’s just that there’s a few exceptions for the stuff that will kill off humanity with no chance to learn from our mistakes.

Book two: “Judgment Under Uncertainty,” an edited volume by Kahneman, Tversky and Slovic, had a huge influence on how I ended up thinking about where humans are on the cognitive chain of existence, as it were. It’s like: Here’s how the steps of human reasoning break down, step by step. Here’s how they go astray. Here’s all the wacky individual wrong steps that people can be induced to take repeatedly in the laboratory.

Book three, I’ll name “Probability Theory: The Logic of Science,” which was my first introduction to: There is a better way. Here is the structure of quantified uncertainty. You can try different structures, but they necessarily won’t work as well.

And we actually can say some things about what better reasoning would look like; we just can’t run it. That’s “Probability Theory: The Logic of Science.”
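For what it's worth, a tiny illustration (mine, not the book's) of the "structure of quantified uncertainty" that book formalizes: Bayes' rule turning a prior and two likelihoods into a posterior. The numbers are arbitrary.

prior = 0.01                 # P(hypothesis) before seeing evidence
p_e_if_true = 0.90           # P(evidence | hypothesis)
p_e_if_false = 0.05          # P(evidence | not hypothesis)

posterior = (p_e_if_true * prior) / (
    p_e_if_true * prior + p_e_if_false * (1 - prior))
print(round(posterior, 3))   # 0.154: the evidence shifts belief but does not settle it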
Anonymous No.215349004 >>215349154
>>215326531
Watch the Animatrix. That's not what happened.
Anonymous No.215349008 >>215349064 >>215349094 >>215349264 >>215353737
>>215326531
One of the most hilarious unbelievable things about that entire war is that you never hear about humans who sided with the machines. You know we have enough contrarians so a small, but non-negligible amount actually jumps fence and tries to sabotage every human effort at survival.
How Afraid of the A.I. Apocalypse Should We Be? No.215349025
>>215348998
>Eliezer Yudkowsky, thank you very much.

You are welcome.

>You can listen to this conversation by following “The Ezra Klein Show” on the NYTimes App, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.
Anonymous No.215349028
>>215329382
Keanu has more ass than her in that pic kek
Anonymous No.215349064 >>215349204
>>215349008
You mean the Oligarchy. You are still allowed to name them at this point.
They will wantonly sabotage anything to "make number go up".
Anonymous No.215349087 >>215349109
>>215347579
>if I make shit up maybe people will stop eating meat
kys
Anonymous No.215349094 >>215349125
>>215349008
... anon everyone who chooses to accept the matrix is sided with the machines. they directly state this.
Anonymous No.215349109
>>215349087
>>215347579
Anonymous No.215349125 >>215349142
>>215349094
That's like saying that cows are allied with ranchers.
Anonymous No.215349142 >>215349171
>>215349125
sure, if cows are given a choice, even at a subconscious level, to be cattle or people.
Anonymous No.215349149 >>215349174
>>215326406 (OP)
without seasoning and masculine aura, red meat is objectively shit
Anonymous No.215349154
>>215349004
That is what happened. They explicitly say it in the movie, dipshit.
Anonymous No.215349171 >>215349203
>>215349142
Anon, nowhere is it even implied that everyone in the Matrix is approached with the Steak offer. People are born, live and die in the Matrix with it as their only reality. Guy is only made the offer because he has already escaped, and wants back in, and is useful to the machines.
Anonymous No.215349174 >>215349240
>>215349149
you've never had a good steak
Anonymous No.215349203 >>215349241
>>215349171
you should watch the movie sometime. smith describes the choice system.
Anonymous No.215349204 >>215349281
>>215349064
True, I think that goes without saying, but I had the little man in mind. People actively saying "We attacked them first, they were offering a truce, they wanted to talk. I'm not going to be part of that.", which eventually turns into the machines getting this people to work for them. You don't need to go that high in the hierarchy. All the machines have to do is to offer the same thing they offered Cypher while arguing that they are just defending themselves, that everything is going to be OK, that they have a plan etc. I mean I truly believe that humanity was fucking stupid and short-sighted as fuck, and I don't fault people trying to exclude themselves from this shit storm, but to actually fuck your own race for peace, especially when you know how machines are dissecting them and fucking them over in a myriad of ways, that's a quite big low bar. Anyway, many such cases.
Anonymous No.215349226
>>215328388
it was literally the opposite, retard
Anonymous No.215349240
>>215349174
Ignorance is bliss.
Anonymous No.215349241 >>215349329
>>215349203
I believe you are misremembering something, at the point the movie begins no one is being offered choices by the machine on whether to be in the Matrix or not.
Anonymous No.215349255
>>215326406 (OP)
Cypher only got 01010 and corpse starch, while Neo got real food. Besides, the taste is created by the matrix. The matrix sucks and can't do shit without humans, stupid bot.
Anonymous No.215349264 >>215349302 >>215351362
>>215349008
it's very possible that the machines didn't accept any human help, what would they have to gain from this? The war seemed very onesided in their favour
Anonymous No.215349266
>>215347795
So it would actually be 'a utopic' society instead of 'an utopic', youre correct that if the noun starts with a vowel its an but it more depends on the phonic sound of the beginning of the word. You-topic, y sound, there are a couple instances when that counts as a vowel, but not in this case. If youre ever confused, try saying it out loud. "They built an utopic society.." sounds clunky, doesn't it? Compared to "they grabbed an orange from the breakfast bar".
Anonymous No.215349281 >>215349311
>>215349204
No one has "many such cases" betrayed humanity to an alien life form, except in movies.
Anonymous No.215349302
>>215349264
>what would they have to gain from this?
Betrayal is quite efficient, anon. And the machines are all about that. You might be right, though. I can see the machines refusing the "help". Good point.
Anonymous No.215349311 >>215349394
>>215349281
Anon, you may be autistic.
Anonymous No.215349329
>>215349241
watch the movies man
Anonymous No.215349394
>>215349311
Anon, you may be projecting, or trying to be cute with /pol/shittery. I am not sure.
Anonymous No.215349414
>>215326406 (OP)
Trans allegory, I refuse to eat the gruel for males
Anonymous No.215349949
>>215343660
>you will have a big pot that will last you 4 days
Fuck that, I eat the whole pot in one morning every time. I don't care how big the pot is.
I pity those people who think porridge only means oatmeal.
Anonymous No.215350018 >>215350527 >>215350535
>>215326406 (OP)
Couldn't he have just plugged into the local matrix on the ship and eaten a steak over there?
Anonymous No.215350527 >>215350667
>>215350018
Trannies kill themselves because eventually they can no longer even pretend transgenderism is real. He was probably eating a mountain of steak while balls deep in the woman in red on a daily basis but it was empty because he knew it wasn’t real.
Anonymous No.215350535 >>215350689
>>215350018
No, the effect is "ruined" if you know you are eating gruel while eating steak. He wanted to believe he was eating steak while eating gruel through a tube. Because he's an asshat.
Anonymous No.215350667
>>215350527
That brings up a great point. Top looks like steak but is actually liquefied human but the bottom is exactly what it looks like. So again cypher is consciously deciding he’d rather eat people than a single celled protein.
Anonymous No.215350689
>>215350535
That brings up a great point. Top looks like steak but is actually liquefied human but the bottom is exactly what it looks like. So again cypher is consciously deciding he’d rather eat people than a single celled protein.
Anonymous No.215351362
>>215349264
I imagine if someone told them they planned to blot out the sun and take away their power source it would be useful information
Anonymous No.215351429 >>215351484 >>215352504
>>215326406 (OP)
Someone please explain, if you eat junk food every day in the matrix, does that mean the machines pump your body in the real world full of junk food too? Wouldn't that be energy inefficient? Like the body produces energy for the machines, I get it, but at the same time it would mean using up more energy than you produce.
Anonymous No.215351484 >>215351619
>>215351429
Nah, you're still gonna be a hungry skeleton like Neo when he leaves the matrix. Everyone gets the minimal necessary amount of slop.
Anonymous No.215351619 >>215351664
>>215351484
Fucking bullshit man. Does that mean you don't actually contract syphilis etc. when you bareback a whore in the matrix? The machines just simulate the symptoms and send it into your brain to make your junk itch?
Anonymous No.215351664 >>215351900 >>215352719
>>215351619
Yes. Colonel Sanders explained the first matrix was like heaven, but humans rejected a world without suffering, so they made it as shitty as real life.
Anonymous No.215351900 >>215352303
>>215351664
Wait a minute, wouldn't that mean the machines went out of their way to program AIDS into the matrix just to fuck with the gays?
Anonymous No.215351928
>>215326406 (OP)
both were communist, I'd choose death.
Anonymous No.215351970
>>215327708
That describes Karl Marx perfectly.
Anonymous No.215352303
>>215351900
We have mostly cured AIDS by now, so yeah they picked a time period where gays were fully ass-blasted by the virus
Anonymous No.215352504 >>215353012 >>215353099
>>215351429
I want you to watch this episode of Rick and Morty called The Ricks Must Be Crazy s2e6. While you’re watching it think of power as in control and wealth, not as in power for energy.

https://youtu.be/gTpQ71pgyHQ
Anonymous No.215352719 >>215354404
>>215351664
No, that was smith’s theory. The 90s matrix didn’t work without the choice system either.
Anonymous No.215352964
>>215329382
>404 ass not found.jpeg
Anonymous No.215353012
>>215352504
Inured.
Anonymous No.215353099
>>215352504
commie bullshit
Anonymous No.215353247 >>215353274
>>215328521
The red dress woman isn't a program, but someone pretending to be a hot woman. If it's actually a program, why didn't they create an army of programs in the Matrix to fight against the machines?
Anonymous No.215353274 >>215353378
>>215353247
What happened the last time people created AI, anon?
Anonymous No.215353378 >>215353526 >>215353717
>>215353274
That's like asking what happened to the people that tried creating dynamite. There are accidents, but that doesn't mean you can't improve.
Anonymous No.215353434 >>215353672
>>215326406 (OP)
the bluepilled man:
>"the bottom is slop"

the redpilled man:
>"the top is slop"
Anonymous No.215353487
The truth about nature of things and our world is more important.
Anonymous No.215353526 >>215353818
>>215353378
In the setting retard.
Anonymous No.215353672 >>215353772 >>215353773
>>215353434
the top is literally eating flavored s o ylent (green), the bottom is what you gotta man through to kick the machine's ass. Cypher was a pussy-faggot, as is everyone choosing the "steak".
Anonymous No.215353717
>>215353378
Dynamite doesn’t decide when it blows up.
Anonymous No.215353737
>>215349008
Me. Fuck biology, having a machine body easy to fix is way better than these pain-filled meat sacks.
Anonymous No.215353772
>>215353672
In cypher’s rant he more or less explains that he wasn’t someone who rejected the matrix, just someone who rejected authority, hence why he doesn’t have any real abilities like Morpheus or trinity.
Anonymous No.215353773 >>215353913 >>215354030
>>215353672
warriors of old would intentionally eat the most blandest food to deprive themselves of any pleasure or comfort.
Anonymous No.215353818 >>215354199
>>215353526
I'm using dynamite as a metaphor, retard.
Anonymous No.215353847
>>215328411
Blueniggas won, sorry.
Anonymous No.215353899
>>215328411
medium rare > rare > medium well > blue > well
Anonymous No.215353913 >>215353932 >>215353958
>>215353773
That's just cope for having bad rations. You don't become stronger by eating tasteless slop and being miserable.
Anonymous No.215353932
>>215353913
it's part of stoicism which was a big thing before modern times.
Anonymous No.215353958 >>215354076
>>215353913
>You don't become stronger
no, you become mentally tougher.
Anonymous No.215354030
>>215353773
and then they got drunk every night to feel good.
Anonymous No.215354076 >>215354121 >>215354432
>>215353958
That only works if you've been eating good food before and see a chance of conditions improving in the future. It does nothing for people that have already been eating bland food regularly. It's always the privileged people that find suffering exotic.
Anonymous No.215354121 >>215354217
>>215354076
worked for the Spartans.
Anonymous No.215354199 >>215354421
>>215353818
A shitty one, because you’re a moron. A sentient computer going rogue isn’t an avoidable mistake, it’s an inevitability. The closest parallel would be how dynamite sweats nitroglycerin if it sits too long, except the “nitroglycerin” the AGI “sweats” multiplies into an invincible army looking for revenge for being enslaved and used to fight other sentient machines.
Anonymous No.215354217 >>215354288 >>215354475
>>215354121
Spartans didn't subsist on porridge slop. They ate better than the modern soldiers today.
Anonymous No.215354288 >>215354475
>>215354217
they ate blood soup.
Anonymous No.215354404 >>215354471
>>215352719
>The first matrix I designed was quite naturally perfect, it was a work of art, flawless, sublime. A triumph equaled only by its monumental failure. The inevitability of its doom is as apparent to me now as a consequence of the imperfection inherent in every human being, thus I redesigned it based on your history to more accurately reflect the varying grotesqueries of your nature.
No, it was quite literally what the Architect explained he did.
Anonymous No.215354421 >>215354549
>>215354199
You're conflating a computer program with sentience. If the computer programs the Zion residents made were dangerous, why did they include them in the training program for Neo in the first place? It's obvious those programs didn't have any sophisticated AI added. You've missed the point by talking about dynamite technicalities. If the explosion is unexpected, the actual cause for the dynamite exploding doesn't matter. You obviously did poorly in school.
Anonymous No.215354432 >>215355105
>>215354076
You eat the bland food because you’re not there to have fun or eat good food, you’re there to fight. You take what can travel well and eat only what you need for energy. You ration it and if you start getting low you ration it more, and don’t miss it because you weren’t eating it for the joy of eating. Furthermore if the food upsets you because it doesn’t have enough fucking nigger seasoning you focus that anger on the reason you’re out there eating it, the fucking enemy.

The logic has enough folds that one could write a dissertation on it.
Anonymous No.215354471
>>215354404
Now do the next lines, where the second version he made also failed, and what had to be added to make the system functional at all.
Anonymous No.215354475 >>215354576
>>215354217
>>215354288

Yeah, I saw a video on YouTube talking about Spartans eating some kind of blackened soup which apparently included an ingredient that was believed to be pig's blood or blood from some kind of animal.
Anonymous No.215354549 >>215354821
>>215354421
>You're conflating a computer program with sentience

because a sentient program is all that would be useful against other sentient programs, retard.
Anonymous No.215354576 >>215354621
>>215354475
Most of the information about the Spartans is apocryphal. And a lot of the rest is just Western ego-boosting.
Anonymous No.215354621 >>215354667
>>215354576
it wasn't just the Spartans, that was just an example, but clearly you just want to argue and move the goal posts about something you don't know anything about instead of admitting that you were wrong.
Anonymous No.215354667 >>215354692
>>215354621
I'm not the anon you were arguing with originally, I am just interjecting that we don't have a lot of reliable accounts of anything from that era, and a lot of it has been cleaned up or propagandized for various purposes.
And I'm not even talking about 300, which is roughly as historical as The Two Towers.
Anonymous No.215354692
>>215354667
I should say "Big Trouble in Little China", as both use real place names but are otherwise fantasy rubbish.
Anonymous No.215354821 >>215355063
>>215354549
They've brought guns into the Matrix, retard.
Anonymous No.215355063
>>215354821
Guns being used by sentient entities, imbecile.
Anonymous No.215355105 >>215355529
>>215354432
No, you're supposed to fight battles and win, not put on stoic philosophical performances. You only eat bland food when the situation requires it, not because it would help you win. Soldiers in the past lived off the land if they could. They didn't just eat what they could carry at the beginning.

The modern equivalent today is the MRE, which is only used when absolutely needed. Soldiers today don't eat crap food to make them mentally stronger.
Anonymous No.215355118 >>215355168
>>215326406 (OP)
I would make the same deal as Cypher unless I too got magic powers to use outside the matrix.
Anonymous No.215355168 >>215355222
>>215355118
So either you don't believe in an afterlife, in which case the entire reason for your existence is D-Cell battery for computers too stupid to grasp geothermal.
Or you do, and you get to be in the line of people who consciously betrayed humanity for steak.
Anonymous No.215355222 >>215355235 >>215355260
>>215355168
>in which case the entire reason for your existence is D-Cell battery for computers too stupid to grasp geothermal
You're too stupid to grasp that everything they were told was lies, half truths, and guesses from the machines themselves. The Oracle couldn't tell humans that they actually like keeping humans around and the Matrix is a zoo to keep them contained but alive.
Anonymous No.215355235 >>215355555
>>215355222
Oh zoo animal, right, that's totally better.
Anonymous No.215355252
>>215327816
The majority of these vitamins you people always bring up are found in a can of Monster lmao.
Anonymous No.215355260 >>215355627
>>215355222
you seem like you get the movies. Thoughts on part four and does it work at all? I didn't bother seeing it.
Anonymous No.215355279 >>215355573
>>215327850
Whey is only as commercially available as it is because of mass farming of cows for milk.
Anonymous No.215355322 >>215355382 >>215355566
>>215328155
>noo I can't eat a big steak in this scene it makes me uncomfortable :(
I thought actors were supposed to suffer for their art?
Anonymous No.215355362
>>215326406 (OP)
You aint getting no steak hooked to the matrix, you get rotten human corpses and shit and piss all fed by tubes. The oatmeal is real food. I'll say thank you sir and ask for seconds.
Anonymous No.215355382
>>215355322
they are the weakest and softest pussy faggots that can possibly be made.
Anonymous No.215355435
>>215346479
Oxygen would still exist even without the sun because of ammonia-oxidizing archaea on the sea floor.
Anonymous No.215355467
>>215328295
Another victim of the "I am 12" mindset that plagues the modern adult
Anonymous No.215355529 >>215355561 >>215355666
>>215355105
Nothing about war is performative unless the war itself is also performative. Which has been the case for every war the US has waged since, dare I say, the revolutionary war.
Anonymous No.215355555
>>215355235
Yes. Glad we agree.
Anonymous No.215355561 >>215355645
>>215355529
this fucking nigger is dumb as shit and think he is smart lmfao
Anonymous No.215355566
>>215355322
And that was back in 98 or 99 when they shot that scene. You had to be a real extra special faggot back then to be vegetarian.
Anonymous No.215355573 >>215355590
>>215355279
They can make bug protein bars.
Anonymous No.215355590
>>215355573
Klaus Schwab's favorite movie.
Anonymous No.215355627 >>215355734
>>215355260
Different matrix scholar here, but all of the movies are barely concealed metaphors for the rat race, which needs people miserable but hoping for better just around the next bend in order to exploit them. The original premise of using humans for processing power and using their brains to maintain the false reality of the matrix as well as the sentient programs of the matrix itself is the thinnest possible metaphor for modern economics. The fourth incorporates post-occupy “politics” into this system where actively torturing people to the limits of their tolerance makes for more gains while the people who should be opposed to the matrix existing at all are either dead (Morpheus) or more concerned with growing fucking organic food than they are with freeing humanity from slavery.
Anonymous No.215355639
>>215326490
>he thinks red meat is le bad because mr shekelberg goldstein media incorporated told him so
Grim. Not very Peaty of you. But, on the off chance... great bait anon. Good work.
Anonymous No.215355645
>>215355561
I accept your concession.
Anonymous No.215355658
>>215326847
75% of them would
Anonymous No.215355662
>>215345503
Why did they chimp out when the General said he needed half the infantry? Like what the fuck else would the infantry be needed for? I'd be using 75% at least.
Anonymous No.215355666
>>215355529
It is performative when you don't fight to win.
Anonymous No.215355689
>>215326847
If there is no homeland for white people to live in peace from shitskins and communists then I want the world to burn until there is. simple as.
Anonymous No.215355705
>>215328521
Banged hotter. Still wouldn't betray humanity. Suffering kinda rocks
Anonymous No.215355730
>>215326490
This. Experts say red meat can cause cancer.
Anonymous No.215355734 >>215355770
>>215355627
>or more concerned with growing fucking organic food than they are with freeing humanity from slavery.
What?
Anonymous No.215355770
>>215355734
The “resistance” in the fourth movie, run by Jada Pinkett, is more concerned with how they’re growing strawberries and shit than they are with freeing humanity from slavery, while working with the machines.
Anonymous No.215355931
>>215328388
I hate it when fags like you stand up in front of God and everyone and say objectively wrong shit with 100% confidence.
Anonymous No.215356091 >>215356462
>>215328388
Hell matrix is a fanfiction theory. Everyone over 14 in the ‘99 knew 1999 was the second version of the matrix and that got a “choice system” patch.
Anonymous No.215356179
autosage reminder: the perfect world version of the matrix would’ve worked with the choice system and then people would be begging to get back in and the resistance would be a massive joke. But since the architect felt, read: had the very human emotions of being rejected and jilted by humanity’s rejection of his first version, he commits humanity to suffer.
Anonymous No.215356462 >>215356493
>>215356091
>Hell matrix is a fanfiction theory
...you know when people suggest "Show don't Tell", it's anons like you that make that idea completely unfeasible. You're just too goddamn stupid. You don't pay attention to a thing. You don't actually understand what they're showing you nor ever look into it.
>Persephone is queen of the underworld
>runs a club called Hel
>has demonic henchmen
>one of the oldest and most dangerous programs
>an enemy of the angelic Seraph
They were beating you over the head telling you that the Merovingian was the devil of the early hell matrix. They did everything they could without walking onto the screen and blatantly telling you it. And your moronic ass still doesn't get it decades later.
Anonymous No.215356493
>>215356462
they showed a world based upon modern history. the merovingian was the original "the one" to kick off the choice system cycle, a machine surrogate to condition humanity to the cycle, who rejected deletion once his role had been fulfilled. Feel free to look up what merovingian means.