>>106316289
>Just use it for small segments.
That goes against the whole point. Unless you're working on a huge code base, the slow part of writing code is usually writing it out, not figuring out how to implement it. If a segment is small enough to write in a few seconds, it makes no sense to prompt an AI to do it for you. Also, depending on what you're doing, it's often better to generate the whole thing at once: AI is bad at writing scalable code, so if you leave out details it'll often produce solutions that are incompatible with what you need to add later.
>Isolate the challenge down as much as possible and it generally does well.
It really doesn't. For simpler languages like Python or JS it's decent (though still very prone to logic errors). And that's before the avalanche of issues that inevitably show up if you're ever unfortunate enough to use a pre-existing framework. No matter how extensive the documentation is, I can bet money the LLM will misuse functions or invoke logic in places where it doesn't make sense (I've recently run into these issues a lot while using WordPress for a personal project with a friend).
For more complex languages like C or C++ it commonly generates straight-up invalid code full of syntax errors.
I distinctly recall trying to load a texture using WIC (the Windows Imaging Component, from wincodec.h), and GPT would insist on calling a function that doesn't exist and has never been mentioned anywhere online. It kept calling it (or altering the name slightly, still in a way that matched no real function) even after being corrected. It also repeatedly tried to use reinterpret_cast inside consteval functions, which can never compile because reinterpret_cast isn't allowed in constant expressions.