>>96351269
While I don't like everything the AI output, it touches on some interesting ideas. Pic related.
>Environmental
All the environmental stuff it suggests I either agree with or am indifferent to.
>Machine Stress
GPT thinks robot characters should have a stat analogous to Humanity, called Stability. I'm not against the concept at a glance, but I'd want a system similar to the one introduced by the ERMK, where environmental factors affect your Humanity (or, in the bot's case, Stability).
e.g. if the robot forms an attachment to a human and that human dies, the robot's Stability would suffer, particularly if the human was a Techie they relied on to maintain themselves. Lots of interesting narrative shit to explore with the concept, but be wary of it becoming a rose by any other name for Humanity.
>Cyberware Lockout
General balancing idea I agree with. Obviously, certain cyberware/bioware wouldn't make sense unless the bot was using a fully organic Gemini, a vat-grown clone, or something out of GiTS.
>Narrative Cost Economy
Another balancing suggestion I'm okay with. Not all of them need to be used, and none should be overly punishing; penalties from the other concepts might make this one unnecessary.
>GPT's take
Pretty even-handed overall, though it needs more detail and concrete numbers. I'll let you figure those out. If you like GPT's outputs, try bouncing ideas off it yourself; it seems well read on RED's corebook. I consult it regularly for lore and game mechanics, and it's usually right.