>>42493884
Nah, local isn't difficult to get into so long as you have the hardware. Ideally you want a GPU with enough VRAM to load all or most of the model into it, otherwise you'll be waiting minutes for a response.
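If you go the llama.cpp route, GPU offload is basically one setting. Rough sketch with the llama-cpp-python bindings (model path, quant, and context size are placeholders, pick whatever fits your card):

    # assumes: pip install llama-cpp-python (built with CUDA/Metal) and a GGUF you've downloaded
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/your-model.Q4_K_M.gguf",  # pick a quant that fits your VRAM
        n_gpu_layers=-1,  # -1 offloads every layer to the GPU; lower it if you run out of VRAM
        n_ctx=8192,       # context window
    )

    out = llm("Write a one-line greeting.", max_tokens=64)
    print(out["choices"][0]["text"])

If the whole model doesn't fit, drop n_gpu_layers until it does; the leftover layers run on CPU, just slower.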
The tradeoff of going local is that most local models are serviceable and you can use them whenever you want, but none of them will perform nearly as well as the newer, larger models you can get through a proxy.