>>106964741
Alright, fair enough.
But it's still kinda too vague. What if you define a spec for a microservice like >>106964825 says?
Technically you could define the whole system in natural language without ever looking at the code. That would get you good architecture (for some definition of "good" and "architecture"), but the result would probably still have vulnerabilities.
I don't think anyone expects current LLMs to take a short prompt and build a big project autonomously from scratch; that's a strawman.
What I was doing with the C thing was a mix of writing a long spec of things to accomplish and giving the LLM live feedback through a Claude Code-like tool I built on top of other pre-existing code assistants.
This is the level of granularity I was working at: I measured progress with statistical measures of the activations and loaded weights (mean, stddev, correlation, mean absolute error, squared error) compared against the data generated by the original Python implementation, not "vibes", whatever that means (sketch at the bottom of this post).

And that's exactly the problem. "Vibe" is a retarded zoomer buzzword that doesn't mean anything. If I ask it to write a game, there's a bug, and I give the LLM feedback about that bug, is that a "vibe"? To me a vibe is something that either feels wrong or feels good, but a bug or a deformed character in a game is not a vibe, it's factual information. Nobody realistically gives the LLM feedback based only on feelings while avoiding any objective, concrete information. I disliked the term from the moment I saw it. It reeks of identity politics; it's one of those strawmen that only serve as punching bags so people can feel good about being for or against them.
If it actually meant "generating (any) code using AI", that would be something I could defend. But when it can be stretched to mean anything, no matter how ridiculous, that nobody actually believes, just so it can be used rhetorically in service of a more general anti-AI argument, it's not useful.
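To make the granularity point concrete, here's roughly what one of those checks looked like. This is a minimal sketch, not my actual harness: the file names and dump layout are made up, and it assumes both the Python reference and the C port dump each tensor as a raw float32 file.

[code]
# compare_dump.py -- minimal sketch of the comparison harness.
# Assumes both the Python reference and the C port dump each tensor
# (loaded weights, per-layer activations) as raw float32 files.
# File names and the dump format are hypothetical.
import numpy as np

def compare(name, ref_path, port_path):
    ref = np.fromfile(ref_path, dtype=np.float32)
    out = np.fromfile(port_path, dtype=np.float32)
    assert ref.size == out.size, f"{name}: element count mismatch"
    diff = out - ref
    print(f"{name}:")
    print(f"  mean   ref={ref.mean():+.6e}  port={out.mean():+.6e}")
    print(f"  stddev ref={ref.std():+.6e}  port={out.std():+.6e}")
    # correlation near 1.0 means the port tracks the reference
    print(f"  corr   {np.corrcoef(ref, out)[0, 1]:+.6f}")
    print(f"  MAE    {np.abs(diff).mean():.6e}")
    print(f"  MSE    {(diff ** 2).mean():.6e}")

compare("layer0.attn_out", "ref/layer0_attn.f32", "port/layer0_attn.f32")
[/code]

When the correlation at some layer drops or the MAE blows up, that's the objective signal you hand back to the LLM. The opposite of a feeling.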