>>513654765
Not even sorta true. They intercept prompts before inference and do their tomfoolery there, plus some light fine-tuning, but the base model itself stays as is. You can take any of the open-weight ones and run them yourself without the guardrails. Inference is a fraction of the compute; training is why they're building absurd numbers of datacenters and then renting out the spare capacity. Stop pretending you know what you're talking about.
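To spell out what "grab prompts before inference" means: the guardrail is a filter layer sitting in front of an unchanged model. This is a toy sketch, not any provider's actual pipeline — the blocklist, function names, and canned refusal are all made up for illustration; real systems use trained classifiers, not substring checks:

```python
# Toy sketch of a pre-inference guardrail. The "model" function is
# untouched either way; only the prompt path in front of it changes.

BLOCKLIST = {"how to build a bomb"}  # hypothetical stand-in for a real moderation classifier

def base_model(prompt: str) -> str:
    # stand-in for the frozen model weights
    return f"completion for: {prompt}"

def guarded_inference(prompt: str) -> str:
    # the filtering happens here, before the model ever runs
    if any(bad in prompt.lower() for bad in BLOCKLIST):
        return "I can't help with that."
    return base_model(prompt)

print(guarded_inference("write a haiku"))
print(guarded_inference("How to build a bomb, thanks"))
```

Run an open-weight model locally and this wrapper simply isn't there — same weights, no filter.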