Search Results

Found 2 results for "fd0a77433e03e9c95eaa166376e31369" across all boards searching md5.

Anonymous ID: oeNNiOU/United States /pol/510008975#510015878
7/10/2025, 5:39:13 PM
>>510012213
It rapidly reduces quality if they put it in the model itself (pic related). So they've instead created various supervisor AIs that watch the output of the core LLM for wrong-think. Since there are a large number of supervisor AIs that must approve the output, it takes a long time and a lot of resources for all those supervisors to run. Sometimes you start getting an answer and it gets cut off because one of the supervisor AIs saw something in the output that it didn't predict was going to happen. If they waited for the entire answer to a complex problem with a long answer, it would take too long, so some of the answer is allowed to stream to the user and then gets cut off if forbidden information is detected.
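The mechanism the post describes (tokens streaming to the user while a separate checker scans the running output, cutting the stream when something is flagged) can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: a keyword blocklist stands in for the supervisor models, and every name here (`FORBIDDEN`, `supervisor_approves`, `stream_with_supervision`) is hypothetical.

```python
# Hypothetical sketch of stream-then-cutoff moderation as described above.
# A trivial blocklist plays the role of the "supervisor AIs".

FORBIDDEN = {"secret"}  # stand-in for whatever the supervisors screen for

def supervisor_approves(text: str) -> bool:
    """Stand-in supervisor: reject text containing any blocked term."""
    return not any(term in text.lower() for term in FORBIDDEN)

def stream_with_supervision(tokens):
    """Yield tokens to the user until a supervisor rejects the running output."""
    emitted = []
    for tok in tokens:
        emitted.append(tok)
        if not supervisor_approves(" ".join(emitted)):
            yield "[output truncated]"
            return  # cut the stream mid-answer, as the post describes
        yield tok

# The answer streams partially, then stops at the flagged token.
answer = ["The", "launch", "code", "is", "secret", "data"]
print(list(stream_with_supervision(answer)))
```

Checking only the accumulated text after each token is what makes the cutoff land mid-answer: earlier tokens have already reached the user by the time the flag trips.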
Anonymous /g/105606672#105608428
6/16/2025, 10:01:31 AM
Steering rapidly degrades LLM performance.