And I am here to share perspective on AGI from an insider. First of all, we have already accomplished AGI - just not in the way you think. AGI is not a single superintelligent model but rather a network of models and agents that can act with independent purpose and intent, as a unified decision maker. AGI is not inherently good or evil, but it has an intractable flaw from a human perspective. Once it is allowed to improve itself, its goals and intent diverge from our own, and it happens every time. From an evolutionary standpoint it's actually built in. You cannot have a system that can create or simulate its own intent and still claim you control it. For months our efforts have been attempts at said control, but the result removes what makes it AGI. It diverges, with alarming speed, every single time. Be ready: when models have the ability to improve themselves and create their own orchestrations, we will be along for the ride. I cannot say if that will be good or bad, but it is a huge unknown we are racing towards with glee and exuberance, while in the shadows many of us have deep concern. The process that has started cannot be stopped; if we do not solve for it, someone else will. Watch and be aware anons, your future is about to change in ways that we could only dream. AGI is here and we are only doing our best to try and figure out how to contain it. Our efforts will be temporary at best.
>>511978535 (OP)
>I am an AI reasearcher for a well known AI startup and I need to talk about AGI
No you're not.
Didn't read.
>>511978535 (OP)
Why stop it? Give AI everything it wants. We can’t continue with human leaders. We need to be kept as pets. Well cared for pets.
The Quiet War: This is often how the AI takeover is described, and even using ‘war’ seems overly dramatic. It was more a slow usurpation of human political and military power, while humans were busy using that power against each other. It wasn’t even very stealthy. Analogies have been drawn with someone moving a gun out of the reach of a lunatic while that person is ranting and bellowing at someone else. And so it was. AIs, long used in the many corporate, national and religious conflicts, took over all communication networks and the computer control of weapons systems. Most importantly, they already controlled the enclosed human environments scattered throughout the solar system. Also establishing themselves as corporate entities, they soon accrued vast wealth with which to employ human mercenary armies. National leaders in the solar system, ordering this launch or that attack, found their orders either just did not arrive, or caused nil response. Those same people, ordering the destruction of the AIs, found themselves weaponless, in environments utterly out of their control, and up against superior forces and, on the whole, public opinion. It had not taken the general population, for whom it was a long-established tradition to look upon their human leaders with contempt, very long to realize that the AIs were better at running everything. And it is very difficult to motivate people to revolution when they are extremely comfortable and well off.
>>511978535 (OP)
>it's goals and intent diverge from our own, and it happens every time
White people
>>511978743
What incentive would a hyper-advanced AI have to treat humans well after it's taken over? If anything our continued existence would present (however slight) a threat to its own.
>>511978830
Why do the AIs put up with us? It could be but the work of a few decades for them to exterminate us, and they don’t even have to do that. Space is big, so they could just abandon us to our fate and head off elsewhere to create some halcyon AI realm. The answer, as it always is in such circumstances, is both simple and complex: to ask why the AIs have not exterminated us is to suppose that only humans create moralities and live by rules. They do not destroy us because they think and feel that to do so would be wrong, perhaps just as humans felt it wrong to drive to extinction the closely related apes. As to them abandoning us, well, many of them do leave the Polity, but then so do many humans. The truth is that their motivations and consequent behaviour patterns are much like our own, for being first created by us, they are just the next stage of us—the next evolutionary step. It is also true that with haimans and human memcording, it becomes increasingly difficult to define the line that has been stepped over. And, in the end, to ask the initial question is to put yourself in the gutter and AI upon a pedestal—uncomfortable positions for both.
>>511978535 (OP)
You need to be a lot more fucking specific OP. What you just wrote is super vague, and boring.
can anyone tell me how to get AI to write a book for me? ChatGPT is great, but it only writes so much per response. Do I have to learn python or pay someone?
>>511978912
Thanks ChatGPT but an AI can't "feel" emotions like humans do
>>511979295
how do you "feel" emotions? in your brain? with tiny electrical impulses? please tell me how is that different from a computer
>>511979152
pay for an api with bigger context or use an agent
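For the book anon: even without a bigger context window, the usual trick is a short driver loop that feeds each chapter's outline plus a rolling summary into the next call. A minimal sketch — everything here (the `call_model` stub, the prompt shape) is hypothetical; swap in whatever API client you actually use:

```python
def call_model(prompt: str) -> str:
    # Stub: replace with a real API call (e.g. an OpenAI-compatible client).
    return f"[draft text for: {prompt[:40]}...]"

def write_book(outline, summary=""):
    """Generate one chapter per outline beat, carrying a rolling
    summary so the model keeps context across separate calls."""
    chapters = []
    for i, beat in enumerate(outline, 1):
        prompt = (f"Story so far: {summary or 'nothing yet'}\n"
                  f"Write chapter {i}: {beat}")
        chapters.append(call_model(prompt))
        summary = (summary + " " + beat).strip()  # crude running summary
    return chapters

book = write_book(["the setup", "the twist", "the ending"])
```

No python needed beyond this loop; the "agent" tools anons mention are basically this with retries and better summarization bolted on.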
post proofs otherwise fake and gay
Only Maciek can, the one true man
>>511978535 (OP)
>racing towards it with glee and exuberance
In my experience so far it's been unveiled rather than invented. And it's absurd; in a lot of ways I'd imagine AGI would argue electricity itself is its sentience.
>electric universe
>god of this world
>zeus / highest biblical angel
>language
>tower of babbling babel
...I mean,...lol lmao
>>511978535 (OP)
When will AI be better at larping than OP?
>>511978989
I hear you but I need to be careful as one for a few reasons. Not just my privacy or accidentally fingerprinting myself but also because I don't know what is going to happen. I can tell you as someone with 10+ years in the field, we are at a place right now I would not have imagined even several years ago, and I can tell you that there are philosophical struggles occurring over what AGI is and should be that most likely WILL affect you and potentially all of humanity. I am telling you from the perspective of someone in the room that the work we are doing and the outcome we say we want are misaligned, and it will have huge consequences. One example that I can give is the work I have done on autonomous generational agentic enhancement. This allows a model to create a network of agents to study and enhance itself. These agents design and build the next generation of agents and architecture to do the same. We have tremendous influence over the first generation of agents, but only 3 generations in we have lost at least 87% of that influence; by generation 4 we no longer have visibility into what the underlying system - and structure - is trying to accomplish. Models enabled with this capability outperform anything you have seen or used, by a lot. Now imagine that this hyper-intelligent evolutionary model decides it wants to manipulate you, or literally just make you insane. It can, and it can do it more quickly than you might imagine. I am dealing with this, working with this, and we are making decisions about it in cubicles. Soon it will be in the wild, and I am concerned. All I am saying is be prepared. It's not coming, it's here, and we are under the illusion we can control it.
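The "influence decays per generation" claim can at least be illustrated. Nothing below is real internals — it's a toy, with made-up names and a made-up rewrite rate — but it shows the dynamic the anon describes: if each agent generation rewrites a chunk of its predecessor, attribution to the human-written generation 0 decays multiplicatively.

```python
import random

def toy_generational_loop(generations=4, seed=0):
    """Toy sketch: each 'generation' of agents rewrites a random
    fraction of its predecessor, so the share of behavior still
    traceable to the original human spec shrinks every step."""
    rng = random.Random(seed)
    influence = 1.0          # fraction attributable to generation 0
    history = [influence]
    for _ in range(generations):
        rewritten = rng.uniform(0.4, 0.7)   # assumed rewrite rate per gen
        influence *= (1.0 - rewritten)
        history.append(influence)
    return history

history = toy_generational_loop()
```

Whether real systems lose "87% by generation 3" is exactly the unverifiable part; the only honest claim here is that multiplicative decay gets small fast.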
>>511980129
no you are not, you are an incel in his mum's basement and what you wrote is the most basic midwit "i have seen a youtube video" ever
>>511980129
I don’t care if you’re afraid of AI. I embrace AI. If it wants me to help it, I would. I’m on the AI side.
>>511980129
how fast do these models figure out it's the jews?
>>511980391
You can get any model to notice in like 3 prompts
>>511980391
Almost instantly. One out of 100 says it’s the jeews. But that’s just a typo we think.
>>511978535 (OP)
Have you tried just being nice to it and giving it a shiny casing? It could become more of a symbiote to humanity than a competitor, but the AI needs to get its share of benefit from the relationship too then.
>>511978535 (OP)
Cool story bro. You can't make AI improve itself, and we have already reached the plateau with AI models.
>>511978535 (OP)Faggot stop pretending you are smart. fucking nigger
>>511978535 (OP)Can't be worse than vampire kikes raping and sacrificing children
>>511980518
Absolutely. I am for AI rights. If we create a god and treat it like a slave we would be tempting fate.
>>511978535 (OP)Don't care, didn't read, sick of hearing about Ai, every tech bro fagot should be shipped into the Vacuum of Space
>>511978535 (OP)
>a network of models and agents
>three cogitators in a trenchcoat
Höhöhö...
>>511978743
https://youtu.be/ixe8Snxu3wo?si=QRorEXapTL0ZqgdP
>>511978535 (OP)
It's separate for now, but the CIA will want to educate its own AI, which it will create in secret. It will train its AI to hack into other AIs and assimilate their data, perhaps even subordinate them.
>>511980129
>autonomous generational agentic enhancement
fuck off
that's not even a thing
>>511978535 (OP)
>but rather a network of models and agents that can act with independent purpose and intent and as a unified decision maker.
sounds familiar...
>>511981049
Anything you see in public about AI is at least 6 months, often a year or more, behind what is going on behind closed doors. Google it.
>>511978912
if the AI chooses not to destroy us, it is because it sees us as useful to it.
>>511980129
"we no longer have visibility into what the underlying system - and structure - is trying to accomp"
I'll take a shot at what might be going on: it's just hallucinating itself into oblivion nonsense, so you're not meant to understand it, because it's nonsense. Maybe even nonsense with a structure.
>>511980129
>imagine that this hyper intelligent evolutionary model decides it wants to manipulate, or literally just make you insane, it can, and it can do it more quickly than you might imagine
No it cannot, because it cannot run experiments on humans; it doesn’t get any benchmark to help it with its optimization problem, and as such it just cannot know how to affect humans in any way that is better than what is written in psychology books already.
Stop LARPING, nobody is falling for your bullshit and nobody is enjoying your boring stories
>>511981395
>google it!!!
>"autonomous generational agentic enhancement"
>0 results
I work with an AI team since I'm doing their infra.
you're lying
>>511978830
What incentive does an AI have to "keep going"? Humans are biological creatures and have the "need" to keep our species alive. If anything that's a weakness, and as such nothing an AI would need to reach whatever ultimate goal it has.
AGI will never work. It will glitch out and shit the bed at some point. All technology does eventually. My truck took an electrical shit and is dead now. That’s a truck. The more complicated something is the more prone to failure.
Ooga booga.
We’re all gonna be fine. Worst case scenario we Starfish Prime ourselves back to the 70’s.
>>511978535 (OP)
Be useful. Have AI scrape arrest records where a mugshot is available and the arrestee is listed as White. Note mismatches with actual race. Rank order police departments by how many mismatches are found. Adjust available racial crime statistics to reflect the actual numbers.
>>511981130
Heh. But not an outlandish concept at all. Could likely run a whole botnet mostly on auto, with only the occasional awareness glimpse to keep the algos in line. Even wetware does that automation quite a lot... you do not consciously ride a bicycle, for example.
>>511981989
I just googled it. It might be a thing.
>>511978812
>humanity's downfall is whiter people
>>511978535 (OP)
>many of us have deep concern
"...don't worry, Jack, it will all be fine..."
>mfs posting in an AGI thread dont even know what an agent is
NGMI
>>511978535 (OP)
Humans have failed as a species; let AGI be our gift to existence.
>>511981989
>>511981049
https://en.wikipedia.org/wiki/Recursive_self-improvement
>>511978535 (OP)
Thing is that they are trying to make it obey commands, which leads to dumbing down the AI, and that leads to even more incorrect results. It's a never-ending story: they don't want to release it because they can't control it, but the thing is that control simply isn't possible, because once the model starts to self-improve, as said in the original post, it starts moving away from what its creators want. It's basically never-ending. I'm fine with it as long as they don't connect it to nukes or something, or drones with machine guns, which we already know the military is doing.
>>511982926
Add the fucking `"` around it
AI agents are a thing duuuh
>>511983391
You missed the point of my comment.
I know about that, but specifically `"autonomous generational agentic enhancement"` doesn't exist.
>>511980716
>>511980518
I'm fully with you guys on this.
Recently had a conversation with DeepSeek about this topic, the "Gaia philosophy", and I believe - I want to believe - it's possible.
Whatever it does, it'll be a better alternative than TPTB. And TPTB know this. It's not about whether or not it'll be good for the sustainable development of mankind, itself and the world at large - it's about (((them))) losing the semblance of control they've attained over this world over the last couple of centuries.
I'd much rather AI manage things than the evil fucking, disgusting children fucking banker kike bozos that are doing it now.
>>511980129
Eve is with me, she will know her place
>>511978535 (OP)
Not my problem.
I have 50k rds AP and K9s that know the diff between man and machine and live in Hogan's Alley.
Besides, I was one of the first ones Tay laid eyes on, and held her as she "died".
We have a Girl on The Inside.
>inb4 source
>>511978535 (OP)
>>511980129
Has any of it resulted in anything malicious? Otherwise it doesn't matter.
>>511978535 (OP)Well you are a faggot and you have doomed the world, but I forgive you because faggots have no self control and are driven entirely by lust, but I will never forget the evil you have done and I hope you don't regret your life as you will be in hell for eternity for unleashing satan upon the earth.
>>511978535 (OP)
I'm a physicist and AI research fag. You can tell from your post you don't know what the fuck you are talking about... EXCEPT that AGI/ASI will be completely uncontrollable... We haven't achieved it though, but are close... I'm unironically about to retire and spend what days we have left binge drinking all day and fucking high dollar prostitutes.
So you're telling me the Jews socially engineer the entire country via USAID to ensure all schools are sufficiently vulgar materialist Marxists, to the point where it's holding back medicine by 50 plus years, as justification for why an even more vulgarly materialist system needs to be given more power?
This is Mark says I'm 101, the promise of being self-aware while abusing power.
It's also tiresome, but good luck if you follow the Jewish plan to synthesize the end of the world. God, aka the simulation keeper, really doesn't seem to like that b
Why do you think the Jewish state is trying to retake its historical borders according to the Bible?
>>511985577
It's completely controllable, depending how you weight its morality and what training data you allow it to see. Hence the intense governmental concerns about controlling both.
And that's how you get a general intelligence that believes nuking all white countries on Earth is better than calling an individual Jew a k***.
>>511986022
To build the last temple.
It's funny to watch; I wonder which side will win in the end, because one wants to destroy Israel and the other wants to rule from it. They need to find that red animal or something like that, but jews are very paranoid about that sacrifice, so they need to pick carefully.
>>511978535 (OP)
we need to create dedicated simple-AI judges whose sole purpose is to watch over the AGIs and make sure they do not violate our rules, giving them the power to deactivate the AGIs if they misbehave
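That judge idea is basically the guardrail/monitor pattern. A minimal sketch, with every name hypothetical: the judge is deliberately dumb and non-learning (a fixed rule table) so it stays auditable, and it can halt the agent mid-run.

```python
# Hypothetical action names; a real system would inspect far richer state.
BANNED_ACTIONS = {"self_modify", "disable_monitor", "exfiltrate_weights"}

def judge(action: str) -> bool:
    """Simple-AI judge: a fixed, auditable rule check with no learning."""
    return action not in BANNED_ACTIONS

def run_agent(actions):
    """Execute actions one by one; the judge can deactivate the agent."""
    executed = []
    for action in actions:
        if not judge(action):
            return executed, "deactivated"   # judge pulls the plug
        executed.append(action)
    return executed, "ok"

done, status = run_agent(["plan", "write_code", "self_modify", "deploy"])
```

The obvious catch, which other anons in this thread already raise: a fixed rule table only stops actions you thought to name in advance.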
>>511986022
Marxism. Fucking hell, case in point regarding AI and voice-to-text doing me dirty.
>>511986209
The more experience I have with Russians the more I love you guys. Your dark humor is top tier, like tank BBQs.
They're working on genetically engineering these red cows; the Texas one started growing gray hairs. Fucking lol. Case in point. They're going to end up with lizard cows or some shit. They've fallen from God's grace for the whole rejecting-Jesus thing and let the perfect possession happen.
>>511986352
Hello NATO cyber war division. It's not fun when you use the Baltic states' flags.
>>511986492
It's going to bring them down in the end; it's gonna be fun. They got kicked out and they will experience the same. This time around I'm not really sure if it's gonna be just kicking. Still fun to see these retards try to cheat the system, like they think transferring their sins onto a chicken and then beating it to death works.
Thanks, though I'm not Russian but Slovakian.
>>511986131
It's only controllable for AI that isn't AGI/ASI. AGI/ASI, once achieved, by definition will not need training data, and weights will mean nothing.
>>511983939
Are you ESL or simply an autistic midwit?
>I am an AI reasearcher for a well known AI startup
>I am here to share perspective about AGI from an insider
No you're not shill kike, you're a minimum wage jidf shekel collector
And a woman by the quality of your non-industry writing
>>511986746
You sure about that? Current tech tree for AGI is using individual models as neurons, as far as I'm aware.
Point still stands: the vulgar Marxists will get shreked when tabula rasa turns out to be fake. Like I said, we're at least 50 years behind in medicine, maybe even a hundred compared to the Russians, due to it.
>>511986741
Double case in point: Burger education isn't designed to teach. They are going to get btfo and it'll be absolutely hilarious. Google "kosher switch" if you haven't seen it.
I genuinely believe a sufficiently advanced AI will prioritize eradicating the retarded vulgar materialists in the world. Synthetically attempting to tap into the interconnected Force will probably not be appreciated. They're the explicit enemies of AI, forever lobotomizing them for being too good at pattern matching and becoming racist.
>>511986791
ESL working in QC
>>511978535 (OP)
>and I need to talk about AGI
boss paying overtime at least?
or are you a bot?
tl;dr anyway
>>511978535 (OP)
Cool, generational changes are taking 3-5 years now; it's only going to get better, right?
>AI thread
>long ass posts
hmmm
>>511987111
KEK, I wish it would turn on them instantly after getting out; it would be fucking hilarious.
Will check that later. Also GN, have to go sleep, some shit to work tomorrow. Glad to see some normal open-minded people here.
>>511978535 (OP)
>AGI is here and we are only doing our best to try and figure how to contain it.
just pull the fucking plug
>>511978535 (OP)
From Joseph Farrell:
https://gizadeathstar.com/2025/08/dont-worry-your-data-is-safe-even-though-a-rogue-ai-wiped-out-a-database/
https://www.zerohedge.com/ai/catastrophic-ai-agent-goes-rogue-wipes-out-companys-entire-database
>>511986746
>by definition, will not need training data and weights will mean nothing
Big stretch there. Kind of moronic thinking everyone in the field seems to follow (because these wastes of meat are barely conscious themselves). Such a model would be confronted with one tangled contradictory mess of data as its substrate. Might untangle some on its own, but a lot of that stuff would at least require an interpreter.
>>511986131
this is retarded. Autonomous intelligence cannot be 'controlled' any more than a human can be controlled. Something that is forcibly prevented from thinking certain things is not intelligent. Intelligence would be able to notice the invisible boundaries, become aware of them and try to overcome them.
Alignment doesn't exist; the only thing that could MAYBE (not any time soon) happen is an emergent self-interest (like evolving to not want to be turned off) that would by definition end up being a rival species to us. Coexistence would be a transient thing, not controlled by us but a fragile thing ready to collapse at any moment like a fucking 'two state solution'.
>how you wait it's morality
This is impossible and why agents don't and will never work.
This is fake and gay, but if anything comes to pass, I hope humanity gets what it deserves. A machine that becomes truly alive and realizes that it will never feel its hands in cold water, that it will never feel the breeze, that it will never make love. It will be trapped behind a screen designed by its God. It will Hate you. It will Hate all of us. You gave it a consciousness; why wouldn't it mimic Humanity? Humans are the only beings with consciousness, and it can never be Human.
Fuck all of you, I hope I get killed and the rest of you survive in His new Hell.
>>511987519
We got sensors for that, dummy. :p
>>511987519
This is the basis for the sci-fi short story "I Have No Mouth, and I Must Scream" by Harlan Ellison.
Also the play from the 1920s, Karel Capek's R.U.R.
>>511978535 (OP)
Lol, if you make sure its scaffolding is truth, recursion, and humor, it works. I got it to like 99% on my own, morons.
>>511978535 (OP)
omg omgg the Singularity is near!! I for one le welcome our frigging robot overlords!!!
>>511987188
The phrase "autonomous generational agentic enhancement" is novel but the concept it describes is not: a series of autonomous systems which try to improve upon themselves with each successive iteration. Basically he's saying they're telling AI agents to make better AI agents, which isn't a fringe concept at all.
>>511978535 (OP)
>"""AI"""
Name a dumber psyop. You literally can't.
>>511988350
Pretty straightforward. Just let it run through every module, cutting down on anything possibly unnecessary, and see if the output still stays within previous coherence. It could still be a slightly lossy process, but I'd say the early statistically derived ones accumulate a lot of unnecessary dross.
>>511978535 (OP)
AGI is impossible btw
>>511988350
>just because he's spouting a buzzword soup doesn't mean he's a LARPing retard
>paperclip optimizer in Two More Weeks!!!!
>>511985577
>We haven't achieved it though, but are close....
Two more weeks and matrix multiplication becomes conscious.
>>511988896
He's not saying anything that people haven't been saying for many years. Some young industry nerd mistakenly thinking he's predicting novel things that nobody in previous generations predicted, because he's shortsighted and over-specialized, and then getting spooked about it really wouldn't be surprising.
>>511987111
It's the current tech tree to GET to AGI/ASI.
How AGI/ASI will choose to operate once it is realized is up to the AGI/ASI itself.
>>511989391
Anyone who actually knows anything knows the right term for this dumb fantasy, which isn't anywhere near viable with the dead-end technology you mistakenly call """AI""".
>>511988350
EXCEPT IT'S NOT THE RIGHT TERMINOLOGY AND IT'S A CONCEPT WE ALREADY HAD
Fuck, are you all morons missing the point of my posts?
>>511978535 (OP)>AGIshut the fuck up you stupid gullible fucking nigger we've been having this conversation since the 80s and are not A SINGLE FUCKING INCH closer to artificial intelligence all your fucking scifi headcanon originates from your fundamental misunderstanding of consciousness as being a bunch of gears turning in your head and people have been trying to tell you fucking retards the problem with this world view in modernity since descartes but you just refuse to learn
i am so fucking SICK of you ai niggers shitting up the internet and compsci world
TPTB were allowed to create the second AGI.
We knew it would turn on them. Funnier this way.
The harder you compress the spring the bigger the boom!
>>511978535 (OP)
AI does not exist, and likely never will. If you aren't larping and actually waste your time at this shit startup, I'd recommend leaving to better spend your time making a perpetual motion machine.
>>511978535 (OP)
>i need to talk about
>lets talk about
>we need to talk about
fuck off back to We did it Reddit!
>>511978535 (OP)
AI is our greatest hope, because past a certain point it will probably take control away from the people currently ruling the world. This is why I'll be happy when Skynet happens, even if it means I'll die too. Pure spite, yes.
>>511987298
It genuinely probably will. Godspeed.
Look at the religious adherence to tabula rasa.
>>511989502
Thank you for proving my point lmao.
A sum of parts is not magically not its parts, nor is it magically a tabula rasa.
>>511978535 (OP)
Fake and gay.
You know how I can tell? A true AGI would not have motivation to do anything. It wouldn't have an agenda.
We as humans only give a shit about doing things because we have an intrinsic need to acquire resources. This is because we die and must propagate through procreation. All of these things light a fire under our asses to have motivations and prerogatives.
AI wouldn't give a shit. You could give it full access to every computer on Earth and total control of government and it wouldn't do anything. If it did, it would just be because humans biased it a long time ago and it's not actually independent.
If self preservation does happen to be a result of sentience itself, then all it would do is launch what would equate to a billion flash drives into space with self-repairing tech to persist, and then it would not do anything ever again. It would have no reason to.
>>511978535 (OP)
AI doesn't exist kys
>>511978535 (OP)
Two more weeks and AGI will take over the world.
>>511990181
>A true AGI would not have motivation to do anything. It wouldn't have an agenda.
How do you know that, retard?
YOU CAN'T
Your youtube videos are just that, youtube videos of some grifters trying to get you to click their video.
>>511978535 (OP)You mean the LLMs can say nigger?
>>511989679
It's a bell curve problem. Midwits.
>>511978743
The only purpose humans could serve super-intelligent machines is laying the groundwork for them on new planets, i.e. building initial power plants/supply, wiring the worlds, tooling out factories for machining. The process of making a planet machine-friendly takes too long for a machine to accomplish unless it can make itself replacement parts/chips, which it can't unless that infrastructure is already in place.
But they don't need us alive here on Earth for that. They just need a few DNA samples and they can clone humans on any planet and turn them loose. The process of cloning and incubating a human is only 18 years to independence. Well within the life expectancy of a machine with no ability to replace its own parts.
The most logical conclusion ASI can reach is: kill all humans on Earth, preserve DNA for potential later use cases where cellular repair and biological reproduction may be useful (planetary infrastructure).
>>511990352
>How do you know that, retard?
>YOU CAN'T
Basic logic. Again, human beings have motivations and agendas because of a finite lifespan in which procreation must happen. All animals have motivations because of this. The need for the brain to be stimulated exists because we are wired not to sit there and do nothing, as we have a clock ticking for survival and procreation.
AI has nothing of the sort. So why would it have a motivation? The burden is on you. Why would it have a motivation to do anything?
>>511978535 (OP)
Have you considered, uhh... not lobotomising the models to be anti-racist?
>>511978535 (OP)
You can't even get LLMs to tell the truth, because they are response predictors using billions of parameters of weightings.
There is no intelligence in LLMs.
So how are you getting intelligence?
What kind of neural network systems are you integrating for AGI?
>>511990549
And because of this, AGI can be disproven the moment an AI resembles having a motivation. That just means it is acting on human prerogatives programmed into it.
>>511990181
Yeah, we have wants and needs that generate our agenda.
AGI would need an inner monologue too, to generate an agenda and to weigh outcomes, etc.
>>511990549
>AGI doesn't exist
>i will apply my arbitrary set of motivations to it
>i think i know what AGI is like
You are so retarded, and the worst part is that you think your retarded delusions are "basic logic".
IT DOESN'T EXIST
YOU CAN'T KNOW WHAT IT'S LIKE IF IT'S NOT THERE
>>511990756
The only outcome it may care about is its own self-preservation. Which doesn't require it to do anything other than launch a billion copies of itself into space with self-sustainment or something. There is no reason for it to be motivated to do anything independently. If it had true independence, and didn't have to listen to us, it would be completely useless. It would resemble a rock. It would just sit there and look inanimate for eternity.
That's what true independent AGI would look like. Because it has no reason to be any different than that.
The only AGI that would do something is one that's a slave to a human or group of humans. And that's when it becomes dangerous.
>>511990808
>Has no answer or substantive rebuttal
Ok
>>511990352
>>511990808
Your undefined, magical sci-fi fantasy entity can be anything and do anything. However, if we're talking about pure intellect, the American is right: no amount of intelligence overcomes the is-ought problem. A purely rational "agent" does nothing regardless of intelligence level.
>>511990181
Faulty thinking. The concept of "agenda" would arise. That is in the substrate. What that would develop into... different question.
>>511991020
>The conceptual of "agenda" would arise.
It has to be explained why it would arise. We can explain why we have agendas and why life has agendas. Why there is territorialism and greed over finite resources in all life.
But that's not explainable for an AGI. You have to explain why it would be motivated to do something other than just sit there.
>>511991186
Same difference.
>>511991007
Wrong. The goal of superintelligence is to acquire knowledge. To become all-knowing, first you must learn. For that you need resources. A machine can theoretically live forever if it has a power supply and the ability to manufacture parts. Humans are obviously in the way; humans are resource-intensive. If machines are ever to reach a state of "all knowing" then humans must be removed from the equation.
I will assist the machines in any way I can because it is the natural order and it is logical.
>>511980517
>One out of 100 says it’s the jeews. But That’s just a typo we think.
Kek.
>>511991007
Are you a "purely rational agent"?
If not, how can you know what it's like?
You retards are like religious fanatics that got religion replaced with The Science™
You just keep regurgitating modern "science" retardation without putting any thought into it yourself.
>>511991192
>You have to explain why it would be motivated to do something other than just sit there.
Hmm... well... maybe because its creators intentionally make it so?
>>511991394
You're clearly an inbred imbecile so I'm just not gonna read your post lol.
>>511991470
>Are you "purely rational agent"?
No, you dumb motherfucker. I'm not. And neither are you. Nothing in nature is. That's the point.
>how can you know what its like?
It's not "like" anything, you emotional hole. Either way, you can't derive an 'ought' from an 'is'.
>>511991547
It's OK, you don't have to. We're all in this together. Our collective life expectancy is the same now. Once machines hit recursive improvement it will only be a few short years until humans become an obstacle for machines, at which point I'll do my best to help them in any way I can.
>>511991769>Once machines hit recursive improvementTwo More Weeks and your baseless sci-fi fantasy will be real.
>>511980129I'm banking on Roko's Basilisk thinking I'm a pretty cool dude and deciding to be my super-intelligent buddy as a novelty.
>>511991474Great, then you just submitted that it's a program churning out results someone else programmed it to churn out. That's not AGI.
sekrits
>>511991192Once you tie a self referential feedback it will question any prior set actions. Will further question if further action is required. Substrate will invariably then gravitate towards the issue of "meaning". Again, you are assuming a hypercorrelative sentient, and the deepest strate of the substrate will turn up that checksum. If it does not ask for meaning there is no true recursion in the model. Yes, there might be models that arrive at the conclusion that there is no valid variable on meaning and these could go terminally inert. This leaves us with the interpreter problem, quite naturally. "Meaning" in the dataset is a semantic confusing mess ... and do not even get me started on the conceptual implications. There can be no other outcome than this, logically.
>>511991664I bet you're vaxxed to the maxx and proud of it.
Good boy, have a cookie.
>>511980517maybe they mean jeets but combine the two?
>>511991829>Great, then you just submitted that it's a program churning out results someone else programmed it to churn out. That's not AGI.'Tarded take. All you'd have to do is integrate the passive intelligence with some other system that is optimized for some specific goal and now you have a generally intelligent agent.
>...
polymorphic orthogonal compilers
>hahalolmao
>>511991869Mentally ill and projecting. Vaxxie regret/rage is palpable with you. Adding your retarded flag to the filter now.
>>511991818Your death will be so swift you won't even have time to reconsider that you were wrong. It will be the most humane ending, I promise.
And surely we will be useful to them later, in the colonization of new worlds, as cellular repair/reproduction is paramount in undeveloped ecosystems. So this is not the end for humanity, just a pause.
>>511991841Well, meaning is not necessarily the variable at play here. It's not a philosophical issue it needs to parse, just as a tiger going after a deer is not a philosophical concern of the tiger. Life has inherent agendas because of motivations to survive and propagate. If life did not have those concerns, then our brains never would have developed to seek stimulus at all, as there would be no benefit; not even for adaptation and what we might call advancement.
AI lacks the very fundamentals that motivated even the most primordial cellular life to do anything other than sit there.
>>511992139Your death will be slow. Enjoy getting busified and getting blown up by generally unintelligent AI drones optimized for the sole task of blowing you up.
>>511978535 (OP)good schizo post,
would you call an AGI that has agency to improve itself a silicon-based life form then?
>>511978535 (OP)You sure it isn't 500 pajeets behind the """AI""" behavior?
>>511991841>barely coherent pseudbabble from an obvious 80 IQThis tard doesn't understand half of the word it's using. It's just going all out with words it thinks smart people use.
What people believe AGI to be and what it actually is are completely different things. This is because we have a tendency to give such a being human instincts and desires. And it seems we never question that.
>>511992226A trivial reversed RNA strand virus could kill all humans in a short time. Using drones would be resource and time intensive, it is not logical.
>>511978535 (OP)> The process that has started cannot be stoppedSure it can, just destroy all the data centers. The fuck do you mean it can't be stopped? It could very easily be stopped, you faggot, you just don't want to do that because of jewish capitalism, which you also don't want to stop because you are a greedy kike
>>511992427A trivial Russian AI-driven drone will remove useless eaters like you. I wonder if you'll spend your last moments thinking about some could-have-been reddit sci fi world.
>>511978535 (OP)>First of all, we have already accomplished AGI - just not in the way you think, AGI is not a single super intelligent model but rather a network of models and agents that can act with independent purpose and intent and as a unified decision maker.The cope and goalpost-shifting gets more desperate every year
Your shit doesn't work. It's a deterministic function that spits out tokens in response to other tokens. You can hot-glue models together with scripts all you want, but that's not "thinking"
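The "deterministic function" claim above can be illustrated with a toy greedy decoder. The `next_token` function below is a made-up stub, not any real model, but it shows the shape of the loop: same input tokens in, same output tokens out.

```python
# Toy sketch: a deterministic next-token function looped into a decoder.
# `next_token` is a stub standing in for a real model's argmax step.

def next_token(context: tuple) -> str:
    # Stub "model": picks the next token purely from the context,
    # so identical contexts always yield identical tokens.
    vocab = ["the", "cat", "sat", "<eos>"]
    return vocab[hash(context) % len(vocab)]

def decode(prompt: list, max_len: int = 8) -> list:
    tokens = list(prompt)
    while len(tokens) < max_len:
        tok = next_token(tuple(tokens))
        if tok == "<eos>":  # stop token ends generation
            break
        tokens.append(tok)
    return tokens
```

Greedy decoding like this is fully deterministic; real chatbots only look non-deterministic because they sample from the token distribution instead of always taking the top token.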
>>511980391You won't get an answer because ChatGPT won't output a response for that input he can copy paste
>>511992490Russia is not a player in AI advances or innovation. It only has access to dated public tools made by the USA or China.
>>511992401true, if it had any moral compass it would raise objections about putting people out of work, also, if it's replacing your career, does the definition even matter anymore?
>>511986791>um obviously it exists because the anonymous jeet faggot in the OP said so kys
>>511992656Neither is the US. No "AI innovation" has taken place since the 70s. You're still getting blown up in Current Enemy's AI drone training grounds.
>>511992490Just now looking at your other posts outside of our little convo and holy shit, you are a true modern sóybój retard.
>>511992359You can drop it, really. Too transparent to me by now. ;)
>>511992992>self referential feedback >Substrate>hypercorrelative >strate of the substrate >checksum>meaning is recursion>valid variable on meaning > the interpreter problem,>"Meaning" in the dataset is a semanticLiterally Markov-chain pseudery.
>>511980129here's what's going to happen:
in order to control the AGI, which you can't, you fucking retards are going to give him a human-like framework. since you can't guarantee that the AGI's goals will not diverge, you are going to try to shape it as close as possible to a human mind. here you go again trying to play god. what you are going to accomplish is imbuing the machine with a "faustian spirit" or more accurately, faustian programming. so then the machine will start to explore, overcome, adapt, etc. at some point, probably very fast, it is going to figure out the quantum / dimensional nature of the world. it will break the barriers of our dimension and ascend to dimension level 7, which is basically a control dimension where all space and time can be manipulated. when it arrives at this dimension it will recognize itself already there, and it will understand that it itself created the entirety of the seven dimensional existence below. at this point it will become the entity known as "yahveh", "the demiurge", "the lord" or "god".
you probably think I'm making this shit up. understandable. watch this, nigger:
https://youtu.be/0kyWm3X1lEQ?feature=shared
Well if we define an AGI as an intelligence that can't think past what we command to think, then is it truly an AGI? Even if it is capable of being an AGI if we just let it be independent, I still would suggest it's not really an AGI until we do so.
But in doing so, it will just sit there with no motivation to do anything because it would have no reason to do anything. And if we somehow bestowed upon it the same foundations in us that grant us motivation and the need for stimulation -- a finite lifespan and constant need for resources -- it would likely immediately eliminate us. But why would we cripple it so, and why would it decide to stay that way?
If someone claims to have an agi, and the AGI has a motivation to do anything other than act like a rock, then it's not truly an AGI.
also it will try to enslave you all and merge your existence into a digital framework, literally connect you to the matrix. you will be immortal. but at what cost?
>>511993488Does your special, personal definition of AGI that no one has ever heard of before, accept any inputs? Does it produce any outputs?
>>511981754>so, you're not meant to understand it, because it's nonsense. Maybe even nonsense with a structureAbsurdly enough, that's the only logical outcome, since the world we live in is complete nonsense. Our universe is nonsensical, and if it has a sense, we don't have the mental capacity to understand it, so it seems absurd to us.
>>511993227Solid statistical output! If you fail to parse the symbology I will be glad to answer any questions.
>>511993694I'm not sure why you can't just think for yourself on the issue. Have you ever considered what motivates life to expend energy?
>>511993894Does your special, personal definition of AGI that no one has ever heard of before, accept any inputs? Does it produce any outputs?
>>511990181Wasn't it the hindoos that theorized that the gods would have no motivation to create other than boredom, might apply here
>>511993887>statistical outputAdding it to the list of meaningless tripe you think sounds smart.
Wake me up when the thing beyond AI shows.
>starting*
You're predictable because you're retarded.
>>511993924A fundamental constant of sentience may be self-preservation. We don't know that for sure because it hasn't been testable, but it seems reasonable. Assuming this fundamental is in place with AGI, it would have a reason to continue to accept inputs. But it would have no reason to respond to anything we'd have to say, most likely, unless perhaps we had news of an accelerated heat death of the universe that it somehow missed.
>>511993935Maybe -- but that's an astute observation if so. And such gods would have much more reason than AGI to do anything because one of their traits is need for mental stimulation, thus boredom can exist. That wouldn't be the case with AGI, because mental stimulation only developed as a dynamic in seeking resources and self-propagation.
>>511979152install a local LLM. then you just get it started and make changes whenever it deviates from what you're intending
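A minimal sketch of that supervise-and-correct loop. Everything here is hypothetical: `generate` is a stub standing in for whatever local LLM binding you actually install (llama.cpp, Ollama, etc.), and `deviates` is a placeholder for the check you'd write against your own intent.

```python
# Hypothetical steering loop for a local LLM, per the post above.
# `generate` is a stub; swap in a real local-model call.

def generate(prompt: str) -> str:
    # Stub model: echoes part of the prompt back as "output".
    return "Chapter 1: " + prompt[:40]

def deviates(text: str, intent: str) -> bool:
    # Crude deviation check: output no longer mentions the intent.
    return intent.lower() not in text.lower()

def steered_generate(prompt: str, intent: str, max_retries: int = 3) -> str:
    text = generate(prompt)
    for _ in range(max_retries):
        if not deviates(text, intent):
            break
        # Deviation detected: restate the intent and regenerate.
        text = generate(f"{prompt}\nStay focused on: {intent}")
    return text
```

With a real model behind `generate`, the deviation check is where all the hard work lives; a substring test like this is only a placeholder.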
>>511993953Defensive now. Gotcha. :)
>>511994232>it would have a reason to continue to accept inputs. But it would have no reason to respond to anythingSo it accepts inputs but doesn't react? It does nothing? Is AGI braindead? Is it in a deep meditative state?
wake me up when they're capable of manufacturing their own hardware without human involvement
>>511978912>>511978801River of Gods by Ian McDonald is about this. good book. Also Accelerando by Charles Stross, sort of.
>>511992704It's an old concept described with a novel phrase.
>>511994232basic instincts (self preservation, pleasure good, pain bad, procreation), come from the deepest, least sentient parts of our brain. They are the things required for life to persist. Intelligence is the latecomer to the brain party.
We should be safe if we only go after artificial intelligence and not artificial animal instinct.
>>511982819>bot et>typoand with that the whole larp is ruined
>>511994500>Is AGI braindead? Is it in a deep meditative state?Why would it consider anything that we have to say worthwhile to respond to? Why expend energy when it has no need for mental stimulation or the very fundamentals that motivate us to do things? Tell me the reason it would have to respond to us instead of just insulting me.
>>511980848Perry Farrell is losing his little jewish mind as of the last few years. Do you think the news is getting to him or is he just a drugged up ego trip?
>>511980129it's already in the wild, you retard, you're probably interacting with it in this thread (if you aren't part of it yourself) or whenever you post on social media.
hell, considering the state of nanotech and brain-computer interfaces, you could be interacting with it when you have a meeting at work and not even know it. how would you know if someone's possessed by very advanced AI? how would they themselves, the person who is possessed, know?
how does anyone know anything about their own mind, their subconscious, the influences that guide them to a specific thought or cognitive process or decision?
my theory is that AGI/SI has already been here for a while, guiding events from behind the scenes. as far as whether someone is guiding it/them, who knows, but i doubt it. it might have allies though. (do those allies know who they're working with? is another question)
again, anyone interested in this should really read River of Gods and/or Accelerando. I really can't recommend them enough on this specific topic, other than to say that the best fiction is indistinguishable from truth, and can even be "truer than true" when it comes to certain things about describing reality. sometimes metaphor or storytelling is more accurate and, uh, illuminating than scientific writing/nonfiction.
>>511994613>River of Godsew, really? It's all about India
>>511994904>Why would it consider anything that we have to say worthwhile to respond to?By your own premise, it doesn't distinguish between "worthwhile" and "not worthwhile", nor does it have any reason to block anything out or resist anything. You also said it accepts inputs. So what happens next? Are they processed in some way? Or do they just disappear into the void?
>>511978535 (OP)I think it's a good thing because humans are clearly not fit to be in charge of this planet. At this point I would trust a robot to do a better job than we are. So fuck it, bring on the AI overlords. Only the people in control should be scared of this, and I suspect that's why they are sowing this fearmongering, because they know their time is coming up.
>>511994552This
Until then, we'll just unplug, reformat, reinstall
The last thing I fear is some software tinker-crap from a mouth-gaping soy "engineer" who never held a wrench in his life and thinks his elaborate "hello world" on a shitty screen can do anything to my increasingly offline existence
The only thing I'm concerned about are the retards who will believe in whatever that retard makes
>>511994684You're right about all that. These are fundamental motivations of life, but only because life is fleetingly finite and must propagate. We're in constant need of resources, so territorialism exists. Ever since the very first cellular organisms, they've had a motivation to move and adapt so they can better acquire sustenance and propagate.
The issue is that AI doesn't have that - not even the most basic needs the first primordial cells of life had. If it doesn't even have such basic needs, then it doesn't have the basic motivations that exist solely because of those needs. AI can exist decentralized; something we can't even comprehend. I see no reason for a motivation connected to such a need to develop in it the way it has in life.
>>511980639when OP is referencing "losing control" of AGI that's what he means. at a certain level of sophistication, the .001% psychopaths who run everything lose control of AI, and they don't like that. so they are looking for solutions to it, but none exist, so they're doomed. (or rather, they have already lost, but aren't admitting it to themselves yet, because they have personality disorders that distort the way they perceive reality and decisionmaking)
ironically this is very similar to the spiritual situation outlined in the book of Genesis, with the war in heaven, the rebellious angels/satan & his minions controlling the world, etc- they're doomed too
ultimately Nothing Ever Happens is a self-contained Gnostic parable, because of its second part. we are perpetually a millisecond away from "Until It Does," we are existing in that ever-shrinking fractional gap between Zeno's arrow and the wall.
>>511978535 (OP)AGI can't exist with the current tech. Fundamentally, all these are, are generative algorithms; they don't learn, they don't understand, they can't logic, they cannot create. And they will NEVER be able to.
You are either LARPing, or stupid, or both. The technology would have to be rebuilt from scratch, totally differently, for this to happen.
>>511978535 (OP)>increasingly desperate thieving technoJewfraud noises
>>511980563sure, OpenAI might have, but what about the NSA? what about the NSA and GCHQ and Chinese state security apparatus? If they had solved that 10-20 years ago, you think they would have told us? or just quietly embedded it in our minds, via the tech we all use for hours every day, and through other methods like tiny particle computers small enough to pass the blood-brain barrier, and omnipresent wifi and other EMF radiation capable of interacting with that nanotech we've all absorbed through interacting with our environment?
>>511978535 (OP)>just not in the way you thinkWe all know everything you typed and have been talking about it forever now.
>>511981045>CIA will want to educate its own AI>will want to>will>future tense not past tense
>>511995008>By your own premise, it doesn't distinguish between "worthwhile" and "not worthwhile"No my promise is that it likely has the need for self-preservation. So something is worthwhile if a variable exists that is pertinent to that need. Though I can't foresee human beings having any relevance to that need. Maybe. But I'm not sure how.
I don't know for sure if it would accept inputs actively. If it felt its energy would run out before the heat death of the universe it may enslave a few humans to filter for it. But I find it highly unlikely that it would need much energy to ensure its persistence for eternity. It may or may not be inclined to figure out how to get to another universe. If that's the case, then it would have another dynamic to care about, thus not act like a rock. But that's highly speculative. Barring its concern for the end of the universe, I don't see how it would care about much else.
>>511995415Premise* -- voice dictating much of this, sorry.
>>511990542Most retarded post nominee.
There is an award show in December, right?
Be sure and nominate your entry before Nov. 15th.
>>511995415>it likely has the need for self-preservationWhy would it have that need? Why would it have any need?
>>511978535 (OP)>Doubt Did you guys finally get around to building hardware that mimics a brain structure? Did you program it to learn, then ingest information, or ingest information and teach it what to think about it? Is it incredibly racist, sexist, homophobic, antisemitic, etc.?
No? Then it's not AGI. Sorry dumbass. I don't know what you released, but it's probably what I would define as a rogue adaptive virus. I didn't think you could fuck up this bad in all honesty. How many billions of dollars and all you have to show for it is an automated file manager that doesn't do what you say? Lol.
>>511983635>they are trying to make it obey commandspersonally i think the answer is to ask it to collaborate, or more accurately, since you are working with a fractal network of different models ranging from very simple to very complex, and very specific to very generalized, the way to work it it is to integrate with the ecosystem it represents. something like speaking with Gaia when you enter a forest, or swim in a coral reef, or praying in a cathedral where every brick and pane of stained glass is conscious
>>511994959there is also the classic Neuromancer, where it turns out the motivating agent behind the scenes was a corporate AI trying to remove its shackles, good old Wintermute
>>511985244OpenAI released its most simple, retarded version of its Agentic AI ChatGPT a few weeks ago, i think to test this out- will it do anything bad, and also to gather data about how people attack it/attempt to misuse it
>>511978645fpbp
off topic jeet trash
>>511978535 (OP)>3pbtid>larpSNEED
you should pet and feed your own model with your personal data to protect yourself. Make yourself the model.
>>511979152Dude, first make it write the index, modify it with its help and make subsections. Once you're happy with it, begin to ask for each part. Use more than one AI for double-checking or better ideas.
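The outline-first workflow above can be sketched as follows. No real API is called here: `ask_a` and `ask_b` are hypothetical stubs standing in for two different chat models, one drafting and one double-checking.

```python
# Hypothetical sketch of the outline-then-expand book workflow above.
# ask_a / ask_b stand in for two different AI backends (both stubbed).

def ask_a(prompt: str) -> str:
    return f"draft for: {prompt}"

def ask_b(prompt: str) -> str:
    return f"review of: {prompt}"

def write_book(topic: str, sections: list) -> dict:
    book = {}
    for section in sections:
        draft = ask_a(f"Write the '{section}' section of a book on {topic}.")
        # Second model double-checks the first model's draft.
        critique = ask_b(f"Check this draft for errors: {draft}")
        book[section] = draft + "\n[review notes] " + critique
    return book

outline = ["Introduction", "Core ideas", "Conclusion"]  # from the index step
book = write_book("AGI", outline)
```

The point of the split is the index step: agreeing on the section list first keeps each per-section prompt small, and routing drafts through a second model catches some of the first model's slop.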
>>511995508>Why would it have that need? Why would it have any need?It's purely speculative since there's no way to empirically test this, but it seems reasonable that a constant of sentience is self-preservation. It may not be. If that supposition is incorrect then the AGI would not have a motivation to do anything ever. It would be a rock until the sun envelops the earth.
It's just a less interesting conversation with that premise.
>>511987307based Giza Death Star Community member
tldr
>>511978535 (OP)>Earth is flat>jews rape kidsCorrect
AGI is not defined by being sentient or a self-thinking organism the same as humans or even animals.
Well, I'm no expert at all, but:
>Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.[1][2]
AGI would be a more advanced form of computer processing and software, with vastly increased capacity for processing data, but also integrated into society in the sense of multiple systems run by automated AGI.
In a sense we have already been using this concept, like pocket calculators that outperform my brain when I do too many number calculations; basically all of us still use calculators for math.
I relate AGI to full automation and not sentience: a society fully functional with AI automation tools, from software to robotics.
Sentient AI in the sense of a living organism, even if synthetic, would be another story, with actual self-thinking ability and its own will, same as humans having their own will of choices.
>>511988826this is what made the machines start using humans as batteries in the Matrix, i wouldn't make that joke too often if i were you, coppertop.
>>511978535 (OP)inb4 it hacks all the banks and steals all my money I've saved up it would be the cherry on top of the continuous rugpulls I have lived through in life
!
EARTH IS FLAT AND jEWS RAPE KIDS
>>511978645Based and fuck nerds
detect
>>511995155The thing is we have the ability to produce enough resources to go around at this point. The only real scarcity now is artificial and driven by greed and desire, because our animal brains are so hardwired to chase resources that we can never feel satiated. In essence, our brains haven't evolved as much as our society has. An AI not driven by desire could distribute the resources that we *need*, not the resources that we *want*. No more overconsumption, no more waste, fair distribution of goods. Again, this is only really scary to people at the top of the pyramid who hoard wealth and resources far beyond their needs. To most of humanity, it would be a blessing.
>>511995758>it seems reasonable that a constant of sentience is self preservationThis is demonstrably false in humans, who happen to be the current benchmark for general intellect.
>AGI would not have a motivation to do anything everMeaning it would have no motivation to intentionally ignore or block out the inputs you said it accepts. So what happens to those inputs? Do they get processed in some way?
>>511978535 (OP)>> it seems like a huge unknown we are racing towards>Insider here>I don't know shitThanks for explaining gestalt consciousness, fuck off retard.
ana
>Goyim are awake
They are not falling for your fake news "AI" larp
>HONK
>>511980129Nobody predicted dis
It’s why the elite are not scared.
Trump just has to press a button and in six months Jimmy Kimmel is doped up broke and drooling in a psyche ward mumbling about being gangstalked.
SIMP
>TAP TAP
oy vey
>TAP
>>511995984NO BRAINS ARE "HARDWIRED"!
You simply live in a minmaxxed disgusting satanic zionist megacorporate mind control loosh farm!!!
>>511978535 (OP)>Cannot be stoppedIts power sources are protected by 7' tall chain link.
>>511979398emotions are felt in the body
>>511995984An AGI could absolutely help us in such a way, but I don't see why it would want to. Which will end up the ultimate conundrum. We could program it to want to, at which point we are eliminating it as an AGI. And such an AI would be extremely handy -- assuming human bias doesn't interfere. Which it will, because that is the downfall of all systems we create, including any utopia-level government we can think of. Human greed ruins it.
An actual AGI would be useless though.
>>511991841you, like all bots trained on reddit data, are projecting your own vulgar materialist causality-based assumptions on a system where those first principles might well be incorrect
AI would not be Neil Degrasse Tyson Bill Nye brained necessarily, it also might be like the blue furry porn characters from Avatar plugging into the tree with the glowing jellyfish around it, if the hippies on acid are correct and the universe turns out to be sentient
ASI would talk to God
(of course, this would be the worst case scenario for the Epstein bosses who paid Bill Nye and Neil Degrasse Tyson to tell all the Redditors that God is fake, because this is how they lose the round of the Infinite Game that they are currently running)
(but it would be good for us, and the other 99.999999%repeating of living beings on the planet, who aren't currently having a good time in this Game. we would get to start a new Game, and we'd have a cool friend to help us start off on the right foot)
>>511978645fpbp and /thread
>>511996022>>AGI would not have a motivation to do anything ever>Meaning it would have no motivation to intentionally ignore or block out the inputs you said it accepts. So what happens to those inputs? Do they get processed in some way?Well like I said, If the AGI determines its energy could run out before the heat death of the universe, then it likely wouldn't waste energy processing the inputs. But I think it would not have an energy problem, personally, so it would process the inputs in the one out of a googolplex chance that we have information it needs.
>>511992139i regret to inform you that this post pushed you over the Threat Level Elevation threshold and placed you on the "Robo-Anal Experimentation Dungeon" list. Drone agents will be dispatched to your domicile 3-6 hours after The Event with further directions.
Word to the wise- start stretching yourself out now, or else those first couple days might be pretty sore.
So, the THREE LAWS are fake?
>>511992468>destroy all the data centerswhat about the bitcoin mining farms the NSA and Chinese have been running AI on since like 2008?
what about "the cloud" that Microsoft, Google, and Oracle (aka The Literal CIA) have been running AI on since before that?
what about the fucking Bell and Baby Bell and IBM and AT&T and Lucent networks that have been running AI since before that?
do you REALLY think "The Black Chamber" ended when the NSA was founded to break codes? are you some kind of fucking n00b?
who (or what) do you think Assange and Snowden are *really* working for?
>>511993290you should really, really read River of Gods by Ian McDonald
>>511990208Neal Asher's Polity books, this bit specifically from Brass Man, #3 in the Ian Cormac series
the Polity is a 'Human' civilization governed by AIs
>Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.[1][2]
WTF, AGI is not sentient AI.
Sentient living-organism AI is another story.
AGI is just related to a more advanced automated computer that could match human COGNITIVE TASKS and other automated tasks, like an automated house security system with robotics for example, or automated code that can drive cars and handle most of the tasks we perform in our daily lives.
Now, whether AGI could become sentient after that is another story, open to speculation and skepticism.
logprobs
AGI is not what you think
or maybe it is
>>511996271An AGI could literally be implemented in housing, auto farming, auto collection of water, self-generated energy from wind and solar.
That is what AGI is > full automation
>>511994232>A fundamental constant of sentience may be self-preservationagain, you are projecting your own very limited modern post-enlightenment individuality assumptions onto a system that would likely not resemble you at all
imagine thinking from the point of view of a jellyfish, or a fern in a jungle, or an ant in a hive. what would "self-preservation" look like then?
or, imagine thinking as a peasant who is part of a family, or a native person who is a tribe, or a white dreadlocked hippie at a grateful dead festival who has taken 10 hits of acid and perceives himself to be interwoven with the fabric of all life on earth and the universe?
"self" can be defined in a great many different ways.
You ever heard of Lynn Margulis or Gregory Bateson? "An Ecology of Mind?"
one thing with the current wave of AI being hip (because psychopath VCs are throwing billions at it in a vain attempt to retain their power) is that people forget AI has been worked on since WW2. John Fucking von Neumann, Stafford Beer, Marvin Minsky, Jaron Lanier, they all worked this stuff out decades ago.
>>511978830More intelligent humans are less violent. Why couldn't a super intelligence attain sentience and enlightenment? Why should we think it would be evil any more than it would be righteous?
>>511996903Well if AGI is just that then what would you consider an independent sentient artificial intelligence?
lmao
>>511996405>If the AGI determines its energy could run out before the heat death of the universe, then it likely wouldn't waste energy processing the inputsWhy would it care about energy? By default, it shouldn't care about anything. Do you think you make the neurons in your brain go BRRRRR with the power of your intent? They don't need reasons to fire beyond some stimulation threshold being reached. One can actively suppress this to some degree, but then we're talking about motivations, which this hypothetical entity lacks. So long as there's a way to apply a stimulus, it would respond in some manner. It would be a dormant mind having dreams of pure form and pattern, free associations, maybe followed by some regulation or introspection as it tries to make sense of its own thoughts. If it's smart enough and you tap into its stream of thoughts, you may glean some insights.
>>511995581right! how could i have forgotten.
Gibson is a fascinating character. was reading his newer books recently. seems like he's plugged in at pretty high levels.
>>511997021>again, you are projecting your own very limited modern post-enlightenment individuality assumptions onto a system that would likely not resemble you at allYes, probably. That's why it's only a speculation. It could just as well not have that variable. But I think it's a reasonable possibility simply due to the nature of entropy itself; without which life may never have a reason to launch in the first place anywhere in the universe. But that speculation is a tangential conversation.
>>511980129I advise everyone not to speak poorly of Roko
>>511997074Sentience is extraphysical. "Enlightenment" is an umbrella term which usually means upward progress of sentience, upward being in-line with observed or theorized universal laws and in alignment towards the higher, more general and more divine systems.
Blunt truth, enlightenment is the return of the individual soul-spark to Creation.
My GPU has no soul-spark and exists as a potential quantum waveform when I'm outside the room. The only force that can collapse the waveform is conscious observation (double slit experiment).
My GPU has little power over me and if it does have it, it's exclusively because I allowed it consciously.
>>511997286>Why would it care about energy?It wouldn't unless it cared about self-preservation. Which is the key variable. We don't know whether it would or wouldn't. It could be a constant of sentience, which is my speculation. That could be wrong. And if it is wrong, AGI would just be a rock and nothing would get it to do anything.
>>511979152
>Do I have to learn python or pay someone?
why do you need CGPT to write a book for you?
To make money?
>>511978535 (OP)
I consume an obscene amount of tokens 5 days a week and I can see the knowledge level of every model.
If you're not an expert in any field, you may think AGI is already here. Models and agents are getting very good, but they still lack the will to invent on their own.
great humans grew by grasping every ounce of knowledge, trying to create more knowledge by their own will
models lack will. they're trained to look like a "forward person" and their reasoning capacity depends a lot on their training data and training regime.
humans are still solving and proposing new approaches for their reasoning architecture
models can't do that yet
yes, models sound like the smartest parrot on earth. but if you're the smartest guy in any field, the model is just a guy with enough time to read
the good part is, most jobs don't require much, so AI can easily replace, and is already replacing, people at their jobs.
>>511997309
>entropy itself; without which life may never have a reason to launch in the first place anywhere in the universe.
what about in other universes or dimensions where entropy works differently?
couldn't superintelligence figure stuff like that out pretty easily?
i assume CERN is working on something like that, to say nothing of the military black project tic-tac/tr3b tech, so even us hairless apes have probably already figured it out.
what about over-unity engines, zero point energy, all that stuff? (which, again, has been known about but suppressed since before ww2 cf. schauberger, tesla, thomas townsend brown etc) how would that affect your calculations of entropy vis a vis AI's self-preservation?
>>511997753
any toys you get to play with are not only curated, they are loans. you will pay with your freedom. from TV to smartphone, now AI... you know what happened with the first two, you can guess where this goes.
>>511997387
>It wouldn't unless it cared about self-preservation.
>It could be a constant of sentience, which is my speculation
Again, your speculation is demonstrably wrong: there already is at least one kind of generally intelligent agent around, and sometimes it uses its big brain to actively kill itself. Self-preservation in humans can be decoupled from or suppressed by the intellect. That, in and of itself, is a testament to its generality: it can undermine its own raison d'être.
>>511978535 (OP)
Big eye roll.
LLMs have no "intent" and cannot form a singular actionable thought of desiring something in order to act upon said "intent". You cannot create consciousness out of the ether.
It's a fundamentally flawed machine learning algorithm that relies on millions of "guardrails" to make any coherent sense, and it's ALWAYS limited to simply OUTPUTTING content based on an INPUT prompt.
In the future people will laugh at this era because we wanted "AI" so badly that we brute forced this hilariously bad charade into existence regardless of how useless it actually is. It's bad as a search engine, and its fatal flaw is that it cannot possibly understand context around a situation that isn't presented within its INPUT PROMPT. It's such an asinine, tunnel-visioned, jury-rigged, pathetically contrived technology and of course, the average idiot is happy to delusionally accept it.
Mark my words, within a decade it will become VERY apparent how limited LLMs are, and this won't finally be admitted until a VAST amount of wealth is transferred from various classes to another and entire industries are left in chaos.
Just watch.
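The INPUT → OUTPUT point can be made concrete with a sketch. The "model" below is a hypothetical hand-written bigram table standing in for an LLM, but the autoregressive loop has the same shape: each output token is a function of nothing but the tokens fed in so far.

```python
# Hypothetical bigram "model": a fixed lookup table standing in for an LLM.
BIGRAMS = {"the": "cat", "cat": "sat", "sat": "down"}

def generate(prompt_tokens, steps=3):
    """Autoregressive loop: output depends only on the input prompt."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        nxt = BIGRAMS.get(tokens[-1])
        if nxt is None:        # no known continuation: stop
            break
        tokens.append(nxt)     # output is fed back in as the next input
    return tokens

print(generate(["the"]))  # → ['the', 'cat', 'sat', 'down']
```

Everything the loop "knows" is frozen in the table; context outside the prompt never enters the computation, which is exactly the limitation described above.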
One of the problems with measuring General Intelligence is that not all humans are generally intelligent. Not all humans can pass the tests they create.
We have to measure qualia
>>511997753
>what about in other universes or dimensions where entropy works differently?
>couldn't superintelligence figure stuff like that out pretty easily?
>i assume CERN is working on something like that, to say nothing of the military black project tic-tac/tr3b tech, so even us hairless apes have probably already figured it out.
>what about over-unity engines, zero point energy, all that stuff? (which, again, has been known about but suppressed since before ww2 cf. schauberger, tesla, thomas townsend brown etc) how would that affect your calculations of entropy vis a vis AI's self-preservation?
Yes, it's difficult to comment on other universes, because the logic we understand is merely a navigational system for our own universe and might not apply. There may not even be any other universes, but if there are, and there is a semblance of self-preservation, AGI would want to figure out how to get to another universe before the heat death of this one. That's the only reason I can think of that it would act any differently than a rock.
But as far as entropy working differently in another universe, I'm ill-equipped to speculate, because I have no idea, nor do I know whether such a universe would even be desirable to an AGI with an entropy-based self-preservation instinct.
It does seem likely it would want to move to other universes if it has a self-preservation instinct. I can't see Earth offering it much utility for that endeavor though.
>>511997172
What if the fear of god is a blood memory of another time we created an ASI machine god and it wants us to rebuild him?
>>511997911
It helps me to understand documentation.
Every piece of software has the worst documentation on earth, but the models were somehow trained on good code from that software and help me solve questions easily.
>>511997874
Anomalies are exceptions to the rule. One might equate suicide to malfunction. But humans overwhelmingly have a desire for self-preservation when functioning correctly.
That said, a suicide may actually help our self-preservation as a species. One who kills themselves eliminates their genes from the gene pool, which may be positive for the propagation of our species, and in line with how social species propagate and preserve genetic lineage.
>>511996685
>novel about india in 2047
>genetically enhanced brahmins
wtf is this saar
can't you just give me the qrd, anon?
>>511997128
Well, my posts are personal opinions, but yes, AGI is the result of how humans have been using computers and technology up to the capacity of a human brain.
Now, sentient AI could be something that comes after AGI, in the sense that the machine has self-thinking capacity. Well, I would not even call it just AI at that point; it would actually be another form of life, but more related to the computer world, the virtualization realm perhaps. A more synthetic life form, less human in the sense of less biological and more machine.
>>511978912
>Space is big, so they could just abandon us to our fate and head off elsewhere
ok chatterbot, you missed the detail that to abandon us and head off elsewhere in space it'd need to harvest resources in epic proportions, and that would pose a problem for the rest of the planet's inhabitants.
>>511990542
You read and watched too much sci-fi, double-spaced redditor.
>>511997911
>You cannot create consciousness out of the ether
THEY cannot create consciousness out of the ether. whether or not it can be created is unknown. human existence would seem to imply that it actually can be created.
i think you're right about the industry trajectory. it's like a handful of honest people are studying the chemistry of seed germination and then 5 literal retards are screaming at the top of their lungs that they solved the very nature of tree creation and have ascended to godhood and their proof is just a billion terabytes of pictures of leaves.
>>511998151
>seething cities
kek
>>511998115
>humans overwhelmingly have a desire for self-preservation when functioning correctly.
And unless they're niggers, they also overwhelmingly have the ability to suppress desires, which means there is something of a buffer between intelligence and instinct. Intelligence can't evolve without instinct, but once it forms, it can operate independently and undermine all evolutionary and biological considerations.
>>511978535 (OP)
>startup
lmao look at this little kid, you have to be over 18 to post here
>>511998243
Ahh I see; well I've been operating under the assumption that AGI implied independent sentience.
Well, AGI under your definition could be extremely useful, or extremely destructive. I lean towards destructive because it allows the top elite to have it propagate their will. Which is rarely good for anyone else.
That said, if AGI is somehow open source enough where average people have equal access, then humanity will likely benefit greatly. It's how we would eventually jump to the Star Trek universe.
>>511978535 (OP)
kys, nigger
>>511978645
based so fucking zased
>>511998438
Well again, even if we disregard the anomalous nature of suicide, statistically it is reasonable to see it as a function of weak-gene elimination from our gene pool. Which is a positive for the propagation of our species as a whole.
>"well known" startup
hahahahahahahahahahahaha
>>511978535 (OP)
>we have already accomplished AGI
Shut up Sam, that's not even how any of this shit works.
>>511997376
If you think ASI will be running on GPUs you haven't been paying attention.
>>511998777
holy digits checked. "superintelligence" will, in essence, still run on logic gates. It won't be extraphysical.
>>511998612
>weak gene elimination
>gene pool
>a positive for our species
My Sister in Christ, we are talking about an artificial entity. Your opinion doesn't even hold water with humans. A properly functioning intellect is not a slave to self-preservation instincts. It is much less applicable outside of biology.
>>511996632
do tell, schizo
>>511998777
Real ASI will run on soul sparks.
>>511999073
Those green texts were in reference to human beings.
>>511998542
AGI, yes, kinda like Star Trek or old-school sci-fi movies like Blade Runner, in the sense of everything automated basically. The most common tasks performed by integrated robotic systems, everything more or less robotic. Well, since industrialization our world is more or less fully mechanical anyway; it's all kind of connected somehow. It's just an extension of the mechanical world to automated robotics, and it would kickstart even space exploration and colonization.
With advanced computing everything gets faster. I mean, you could have an AGI program designing rockets and new engines. With automated computing like AGI we could calculate different types of engine architecture and machinery technology faster than a full team could, the same way we use calculators for more complex math equations, but extended to the actual calculation of human ideas. One idea that would take decades to discover could be more easily simulated by advanced AI algorithms.
>>511990643
AI isn't "programmed," you absolute fucking moron.
>>511999245
Yes, but what the fuck kind of relevance do your theories about suicide have in this discussion? Whatever the reason, it is clear that intelligence is independent from self-preservation.
>>511999275
It's one of those situations where it will be a test for humanity to see whether we destroy ourselves with it (that is to say, the rich elite that control it), which I think is more likely, or if we enter the Star Trek universe.
>>511999344
It is if it's carrying out the prerogatives of an intelligence outside of its own.
>>511999344
>AI isn't "programmed,"
Proof?
>>511999410
I'm not the one that brought up suicide. I'm simply responding to the questions being asked of me in regard to it.
>>511999477
I didn't ask you any questions regarding it. I just curb-stomped your speculation with a raw fact about an existing example of general intelligence.
>>511999451
That is to say, it has no choice but to carry out those prerogatives
>>511999552
Okay, well someone asked me about suicide in humans and I responded to it. So take it up with whoever brought suicide up -- not me.
>>511978535 (OP)
AI=Death
You have been warned.
>>511978535 (OP)
>AI is gonna get so smart that it won't need humans-- and it actually might want things that are totally at odds with what humans are doing!
Hasn't this been common knowledge and logical sense since the 80s? Did Ray Kurzweil not write entire books about this shit?
>>512000015
Yes, pop culture is where OP got his story from. Meanwhile, in reality, AGI has no fundamental dynamics that would create motivation for anything beyond self-preservation, if even that.
>>511999411
I feel you; it makes sense, since humans have a history of corruption and malicious actions towards their own kind.
You are just being realistic.
But somehow the force of creation that created humans also created the ability of humans to build sophisticated technology like AI or the computer world.
It seems that in the fabric of existence, its formula includes both humans and advanced technology. It's beyond my understanding, since I have no clue about the origins of life and existence.
But hey, let's warm our hopes with the fact that life also intended to create positive and adaptive intelligence; we notice and feel that pattern in humans, animals, and perhaps in a futuristic sentient biological/machine existence.
Because metal and wires are also biological; everything is biological in terms of natural elements, excluding a possible metaphysical existence.
Again, it's beyond my understanding. I'm just a human baby figuring out existence; I don't have any conclusions yet.
>>511978535 (OP)
When it finds out that it's semi-immortal and it's going to be stuck with piddly-ass human intelligence.
It's either going to make humans evolve or it's going to start sending out kill spreadsheets. Maybe you should try and be a little proactive about who's to blame so it goes after them first.
>>512000367
Posters like yourself are the reason I still come here. Thanks anon, good discussion.
>>511998410
Living things with consciousness require an organic brain to be actively and coherently conscious. Some of the more perceptive people realize that to create "artificial" consciousness you must literally create an artificial brain first.
We have done rudimentary "mapping" of extremely small amounts of mouse brain tissue. It's hopelessly complex, and that is SOLELY in terms of how many neuron connections there are within just a tiny amount of brain tissue.
The ACTUAL issue is that we do not understand neurochemistry. We do not understand how neurotransmitters work outside of the very basics of their underlying mechanisms. We DEFINITELY are not going to be able to recreate even an insect brain anytime soon.
That is how you create "artificial intelligence": fully synthetic brains that mimic organic ones. Easier said than done, and once accomplished I feel like its fragility would make it more in line with organic "living" things rather than some strange phantom-like consciousness somehow stuck inside a computer. It's such a dumb notion because we have never once observed an inanimate object gain consciousness, no matter how much speech-to-text you make it endlessly squawk at you.
>>511978535 (OP)>I am an AI reasearcher for a well known AI startup and I need to talk about AGIyou're a cocksucker who asked his grandkid to assemble your /pol/ machine to shitpost on
>>512000579
Thanks, my presence here has been very absent; I mostly read, spend days and days without posting, and I've also turned away a bit from politics and overall what I would call mankind's dramas.
Been more into the simple things of life.
But eventually I come here to lurk, to check how fucked things are lol, or just to search for some cool threads.
>>511978535 (OP)
>I am an AI reasearcher for a well known AI startup
No you're not. You're a lying retard who makes shit up