Anonymous
6/24/2025, 2:52:30 AM
No.105685576
>>105685516
>That's why I said that one single consciousness containing everything is the only way I can make sense out of that idea.
I mean, I could just say that PA (Peano Arithmetic) from the earlier example is conscious, but the problem with that is that it's too alien for us to reason about.
Human consciousness, though, is a particular thing with particular properties, and those properties are what we care about.
In particular, agents that learn online, are embodied in some environment, integrate information in a certain way, form a self-model and so on are probably their own class.
An LLM for example seems to be lacking various of those properties, so even if by some chance they were conscious, they wouldn't be a moral agent. So I'd argue that for us to believe they were conscious, we'd have to rectify those issues and bring them slightly closer to us: get online learning working, give them continuity with their past context (consolidate the context into the weights), maybe embody them somehow (even something simple like a console is better than nothing; a source of ground truth should be useful), and probably most importantly, give them a way to process and remember their past latents/internal state.
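To make that concrete, here's a minimal toy sketch of the loop I mean, with a tiny recurrent model standing in for the LLM. Everything here (ToyLM, consolidate, latent_memory, the console as "environment") is a hypothetical illustration of the idea, not any real system's API:

```python
# Toy sketch: online learning + console embodiment + latent memory.
# All names here are made up for illustration.
import torch
import torch.nn as nn

VOCAB = 256  # byte-level "tokens" for simplicity

class ToyLM(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens, state=None):
        h, state = self.rnn(self.embed(tokens), state)
        return self.head(h), h, state  # logits, latents, recurrent state

model = ToyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
context = []        # running transcript of the episode
latent_memory = []  # stored internal states ("past latents")

def consolidate(tokens):
    """'Put the context into weights': one online gradient step
    on the episode so far, instead of keeping it prompt-only."""
    x = torch.tensor(tokens[:-1]).unsqueeze(0)
    y = torch.tensor(tokens[1:]).unsqueeze(0)
    logits, _, _ = model(x)
    loss = nn.functional.cross_entropy(logits.transpose(1, 2), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

state = None
for step in range(3):
    # Embodiment stand-in: the console is the environment / source of truth.
    line = input("> ")
    tokens = list(line.encode())[:32] or [0]
    context.extend(tokens)
    with torch.no_grad():
        logits, latents, state = model(torch.tensor(tokens).unsqueeze(0), state)
    latent_memory.append(latents[:, -1, :])  # remember the internal state
    if len(context) > 1:
        print(f"consolidation loss: {consolidate(context):.3f}")
```

The point isn't the architecture, it's the three missing pieces in one loop: weights that keep changing after deployment, a persistent store of the model's own latents, and an external environment it has to answer to.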
>Again, very arbitrary. In the physical reality it doesn't look very conscious but we've already done away with physical.
There very well could be some arrangement of "rock" that processed information in the right way, but the rock you picked up from the ground probably doesn't implement any structure that resembles the consciousness we care about, does it?
What is the consciousness of Peano Arithmetic? Okay, maybe you can do some Löb's theorem in it for some self-reference, but come on?
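(For reference, the self-reference result in question is Löb's theorem: PA only proves "provability of P implies P" for statements it already proves outright.)

```latex
% Löb's theorem, with \Box_{\mathrm{PA}} the provability predicate of PA:
\text{If } \mathrm{PA} \vdash \Box_{\mathrm{PA}} P \rightarrow P,
\text{ then } \mathrm{PA} \vdash P.
```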