
Thread 16762939

What are the odds? No.16762939 >>16762987 >>16764013 >>16764487 >>16764510 >>16767269
The CEO of Anthropic sits down for an interview.

"What are the odds that your technology will destroy the world?" the interviewer asks.

"Between 10 and 25%."

Then he goes back to work the next day, pushing his team to build it faster.

Most AI CEOs think their technology has some chance of destroying the human race, and what do they do?
Nothing. They keep pushing forward.

https://x.com/AISafetyMemes/status/1959887307244191852
https://x.com/liron/status/1710520914444718459
https://x.com/i/status/1688951847682543616
Anonymous No.16762942 >>16762945 >>16762954
It's marketing.
They want you thinking this shit's super advanced and could turn into some sort of Skynet, to coax short-sighted investors who care about all the money supposedly to be made off this "totally disruptive" tech.
Anonymous No.16762943 >>16762949
Now ask them what the odds are that their technology will do more harm than good, and the answer should be 100%.
Anonymous No.16762945 >>16767304
>>16762942
This. Retarded sheep hear phrases like "AI might destroy the world" and go "wow, people are saying that, waoooow", when in reality it's just a thoughtless algorithm trying to guess words, and that's all it's ever going to be.
Anonymous No.16762949
>>16762943
True. Best case scenario, AI doesn't destroy humans but still takes most jobs, leaving billions in poverty, surviving on government pension crumbs.
Everyone's depressed because we get reduced to mere consoomers, consuming AI slop for movies, series, video games, etc.
I don't see any good outcome.
Anonymous No.16762954 >>16762965 >>16762988
>>16762942
Interesting! I think this is also true: they claim AI can do a lot of things today, like writing entire code bases, which is not true, just to sell it to greedy investors.

But I also think it has the potential to become something dangerous in 10+ years.
Anonymous No.16762965 >>16762989 >>16764481
>>16762954
What's "dangerous" is the amount of trust people put into it.
Yes, it will become more advanced over time, and better suited for more things as it advances. But the more shit we offload onto it, the more of its errors go uncorrected and get applied to real-world systems.

Imagine an engineer who got his degree by copy/pasting output from ChatGPT. He's only vaguely aware of the calculations required to assess the structural integrity of a building he's contracted to design. Not that it matters to him, though. He'll have ChatGPT design it.

The building's rated to last ~50 years of normal operation before needing significant reworks. Instead it collapses in 10.

Now imagine this level of incompetence is institutionalized. A huge number of systems are designed and operated by AI and all the little errors compound while every human tasked with overseeing the process is too incompetent to perceive a problem before disaster strikes.
That's the future we're likely to see in the next couple decades.
Anonymous No.16762987
>>16762939 (OP)
Destroying society isn't destroying the world. I thought this was some mirror-life or black-hole company.
Anonymous No.16762988
>>16762954
There will need to be actual AI first, and not autocomplete shit.
Anonymous No.16762989
>>16762965
>Imagine an engineer who got his degree by copy/pasting output from ChatGPT.
Ever heard of exams?
Anonymous No.16764013
>>16762939 (OP)
almost like he doesn't believe that and he's a fatfuck liar
Anonymous No.16764420
>on how many percent of technology doomsday mongering are you on rn?
>like 10 to 25%
>whoaaaa duude
Anonymous No.16764421
Anyone dumb enough to believe this shit would have a moral obligation to hunt down and kill that dude and his entire work force. Fucker should not be saying that shit.
Anonymous No.16764481
>>16762965
It's already institutionalized, just look at the current state of medicine. If you gave my doctor a job as a building inspector he'd tell you 10 years is within the normal range of building integrity and insurance won't cover repairs without a partial collapse.
Anonymous No.16764487
>>16762939 (OP)
>Between 10 and 25%
Let's see the data and calculations used to arrive at this exact range.
Dave No.16764510
>>16762939 (OP)
He's saying this stupid shit to generate hype. Were you born yesterday or something?
Anonymous No.16767269
>>16762939 (OP)
>Then he goes back to work the next day, pushing his team to build it faster.
Yeah, because if he doesn't get there first, someone without the same guardrails, who doesn't think it could possibly destroy anything, would be in control.

>Most AI CEOs think their technology has some chance of destroying the human race, and what do they do?
Try to get there first so a more malevolent player doesn't beat them to the punch. Same reason they compare AI to the atomic bomb.
Anonymous No.16767304
>>16762945
The algorithms are built on thought in the first place and can do far more than just words, though; you're obviously minimizing it to misrepresent it.