>>105848747
Yes, 12 GB RAM. I take it I'd need to run separate instances of kobold for images and for the LLM, and so the text model would need to be simpler?
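If it helps, here's roughly what I'm picturing for the two-instance setup: each koboldcpp process gets its own port, one loading a small quantized text model and one loading the image model. I'm guessing at the flags (--model, --port, --sdmodel, --contextsize) and the filenames are just placeholders, so check `python koboldcpp.py --help` on your build before assuming any of this.

```python
# Sketch only: spawn two koboldcpp instances on separate ports so the
# image model and the text model don't have to share one process's RAM.
# Flag names and model filenames below are assumptions, not verified.
import subprocess

procs = [
    # Text LLM instance: a small quantized model to stay within ~12 GB RAM
    subprocess.Popen([
        "python", "koboldcpp.py",
        "--model", "small-llm-q4.gguf",   # hypothetical filename
        "--contextsize", "4096",
        "--port", "5001",
    ]),
    # Image-generation instance on its own port
    subprocess.Popen([
        "python", "koboldcpp.py",
        "--sdmodel", "sd-model.safetensors",  # hypothetical filename
        "--port", "5002",
    ]),
]

# Keep both servers running until they exit
for p in procs:
    p.wait()
```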