>>106281711
It's actually all of them.
To get anything remotely usable out of AI outside of baby's first TODO-list application, or to get it to meaningfully contribute to a pre-existing working codebase, you first need to be able to subpartition the problem into something it can munch on without running out of token space. Then you need to figure out how to precisely prompt it so it doesn't modify shit overzealously. At some point you're going to figure out that vendor A handles particular scenarios that vendor B doesn't and vice-versa, and you'll want to use both for different tasks.
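To make "subpartition" concrete, it boils down to something like this rough Python sketch. The 8000-token budget and the 4-chars-per-token guess are arbitrary placeholders I made up, not anything a vendor actually documents:
[code]
# Rough sketch of the "chop the problem down so it fits in the context window" step.
# estimate_tokens() is a crude ~4-chars-per-token guess, not a real tokenizer,
# and budget=8000 is an arbitrary placeholder.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, good enough for batching

def partition_files(paths: list[str], budget: int = 8000) -> list[list[str]]:
    """Group source files into batches that should each fit in one prompt."""
    batches, current, used = [], [], 0
    for path in paths:
        with open(path, encoding="utf-8") as f:
            cost = estimate_tokens(f.read())
        if current and used + cost > budget:
            batches.append(current)   # this batch is full, start a new one
            current, used = [], 0
        current.append(path)
        used += cost
    if current:
        batches.append(current)
    return batches
[/code]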
In the end you land on a setup where you build a pipeline of different AI agents: a few of them cook up prompts to feed the others, those eventually trickle down into agentic AI actually writing code and making changes, and you then still have to review every change yourself. (And for god's sake, don't make AIs do the code review as well. "Quis custodiet ipsos custodes?" applies.)
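A stripped-down toy of that pipeline looks roughly like this. complete() is a stand-in for whatever vendor SDK you end up stuck with, not a real API, and the model names are made up:
[code]
# Toy version of the prompt-cooking pipeline: a "planner" model writes the
# prompts, a "coder" model consumes them, and a human reviews the output.
# complete(model, prompt) is a placeholder for your vendor's SDK call.

def complete(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to vendor A or B yourself")

def run_feature(task: str, files: list[str]) -> list[str]:
    # Stage 1: one agent breaks the task into small, tightly-scoped prompts.
    plan = complete("planner-model",
                    f"Break this task into small, file-scoped edit prompts:\n{task}")
    patches = []
    # Stage 2: another agent turns each prompt into an actual code change.
    for step in plan.splitlines():
        if not step.strip():
            continue
        patch = complete("coder-model",
                         f"Apply ONLY this change, touch nothing else:\n{step}\n"
                         f"Relevant files: {', '.join(files)}")
        patches.append(patch)
    return patches  # and a human still has to review every one of these
[/code]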
All in all, the actual effort it takes to get the AI to do anything useful can easily match, exceed, or in some cases outright DWARF the original workload.
And as soon as you're done with feature A and pick up work on feature B, the whole circus restarts from square one, with you reassembling a new mesh of agents tuned for the new problem.
(And yes: according to the self-proclaimed expert proponents who say this is how they improved their throughput, this is apparently the 'leading edge' approach. I call bullshit though.)