>>105959636
1. what are you using to generate? comfyui, some online thing, what?
2. what model (llm) are you using, and how are you prompting it? are you using a system prompt too? what model for imagegen are you using? how are you generating the image?
3. if you feed an llm one "concept" or whatever (let's just call it a prompt), tell it to generate an image, then throw another prompt at it, and another, eventually you'll run out of context window and yes, the llm will start mixing concepts, hallucinating, and breaking down; how soon depends on the model and your gpu/vram. if you're using an online llm/image gen you have more room, but if your system prompt is shit, or the sampler settings (temp/etc) are wrong, it'll still give you crap (one way to keep the context from overflowing is sketched below)
there is no "magic" prompt you can throw at it to make it do whatever you want