2 results for "1bff612e83fa25329602d459202ed5e7"
>>545154684
Anon is talking about this paper: https://www.anthropic.com/research/small-samples-poison

This is obvious since if you have maximum likelihood and you condition the text based on some token(s), it will output what you want. This is amplified if you use rare tokens and you can modify the model weights (instead of ICL)

OP is conflating model collapse with the similar model poisoning
*beheads you*