>>716153402
it's a shame because it could be so useful, but there's so much garbage information out there, spouted all across reddit and elsewhere, that's blatantly wrong. it got posted by someone who was just figuring the thing out and assumed they were right, and the llm has no way of testing or parsing what's actually correct
so it looks at its contexts, sees 12 of them with the wrong info and 1 with the right info, and concludes the 12 must be correct
i really don't see any way of correcting this without having the ai take whatever environment you're actually working in, brute-force test every piece of given information for you, and then spit out which one worked
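as a rough illustration of that idea, here's a minimal sketch in Python (the candidate snippets and the pass/fail check are made up for the example, not anyone's actual tooling): take every claimed answer, run it in the real environment, and keep whichever one actually works

```python
import os
import subprocess
import sys
import tempfile

def run_candidate(code: str, timeout: int = 10) -> tuple[bool, str]:
    """Run one candidate snippet in a separate interpreter and report whether it worked."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        # crude success check: exit code 0 means the snippet at least ran
        return result.returncode == 0, result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return False, "timed out"
    finally:
        os.unlink(path)

# hypothetical candidates: the "12 wrong, 1 right" answers scraped from posts
# or generated by the model
candidates = [
    "import jsn; print(jsn.dumps({'ok': True}))",    # wrong: no such module
    "import json; print(json.dumps({'ok': True}))",  # right
]

for i, code in enumerate(candidates):
    ok, output = run_candidate(code)
    print(f"candidate {i}: {'worked' if ok else 'failed'} -> {output.strip()}")
    if ok:
        break  # keep the first answer that actually runs in this environment
```

obviously a real version would need sandboxing and a smarter success check than "it exited cleanly", but that's the basic shape of it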