>>280513863
>AI generation isn't just making new images from thin air, anon. It's a hodgepodge of existing works, that's why it's called the plagiarism machine
This isn't true, and it's only called a plagiarism machine by people who don't understand it. Diffusion models (the main type of "AI generation" that's popular these days, though not the only type that has been popular) are actually guided denoisers: they take a 100% noisy image and attempt to clean it up based on the prompt. They don't "glue together pieces of existing works" like artists on social media think.
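If you want to see that this is literally all that happens, here's a rough sketch of the denoising loop using the Hugging Face diffusers library (the checkpoint name and step count are just examples, and I've skipped classifier-free guidance to keep it short, so the output quality will be mediocre):

import torch
from diffusers import StableDiffusionPipeline

torch.set_grad_enabled(False)  # inference only
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode the prompt into embeddings -- this is the only "guidance" the model gets.
ids = pipe.tokenizer("woman with makeup", padding="max_length",
                     max_length=pipe.tokenizer.model_max_length,
                     return_tensors="pt").input_ids.to("cuda")
text_emb = pipe.text_encoder(ids)[0]

# Start from 100% random noise in latent space; no existing image is involved.
latents = torch.randn(1, pipe.unet.config.in_channels, 64, 64,
                      device="cuda", dtype=torch.float16)
pipe.scheduler.set_timesteps(30)
latents = latents * pipe.scheduler.init_noise_sigma

# Iteratively clean up the noise, steered by the prompt embedding.
for t in pipe.scheduler.timesteps:
    model_in = pipe.scheduler.scale_model_input(latents, t)
    noise_pred = pipe.unet(model_in, t, encoder_hidden_states=text_emb).sample
    latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample

# Decode the denoised latents into pixels.
image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample

The only inputs are random noise and the prompt; at no point does it look anything up in a database of images.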
These models are trained on massive databases of tagged images, and those tags aren't always correct or what you might expect. For instance, pictures of women wearing makeup are rarely tagged with "makeup", but pictures of makeup products are, so if you prompt for "woman with makeup" the model will tend to produce an image of a woman and makeup products, and it might even generate body horror because it has trouble reconciling how to draw makeup (the products) and a woman at the same time.
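Conceptually, a single training step looks like this (a toy, self-contained sketch: the one-layer "UNet" and the random tensors standing in for an encoded image and its tags are made up, this is not the real training code):

import torch
import torch.nn as nn

# Toy stand-in for the real UNet: it sees a noisy latent plus the tag embedding
# and tries to guess the noise that was mixed in.
class ToyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(4 + 8, 4, kernel_size=3, padding=1)

    def forward(self, noisy_latent, tag_emb):
        tags = tag_emb.view(1, 8, 1, 1).expand(-1, -1, 64, 64)
        return self.net(torch.cat([noisy_latent, tags], dim=1))

model  = ToyDenoiser()
latent = torch.randn(1, 4, 64, 64)  # stand-in for an encoded training image (say, a product shot of makeup)
tags   = torch.randn(1, 8)          # stand-in for its encoded tags, e.g. "makeup, cosmetics"

alpha_bar = torch.tensor(0.3)       # how much of the clean image survives at this noise level
noise = torch.randn_like(latent)
noisy = alpha_bar.sqrt() * latent + (1 - alpha_bar).sqrt() * noise

pred = model(noisy, tags)
loss = nn.functional.mse_loss(pred, noise)  # graded purely on guessing the added noise
loss.backward()

The weights only ever get nudged toward "given these tags, what does slightly less noisy look like", which is why the tag statistics (product shots tagged "makeup" vs. untagged photos of women wearing it) matter so much.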
Similarly, what's going on in your picture is that people figured out that adding terms like "screencap" to the prompt pushes the model toward images that were tagged "screencap" in the training data (as opposed to fanart, comic panels, youtube thumbnails, movie posters, etc.).
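In practice that "screencap look" is just prompt steering, something like this (again with diffusers; the checkpoint and the exact tags are guesses at what the poster used, and you'd normally do this with a booru-tagged anime finetune, but the mechanism is identical):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "screencap" steers generation toward the statistics of images that carried
# that tag; nothing is being cut out of an actual screencap.
image = pipe(
    "anime screencap, 1girl, living room, subtitles",
    negative_prompt="fanart, comic panel, movie poster, youtube thumbnail",
    num_inference_steps=30,
).images[0]
image.save("screencap_style.png")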
These Stable Diffusion models in particular (the same tech Midjourney uses under the hood) are actually really small and can be run on consumer hardware. Unless you have a crap video card, you can download and run these models on your own computer. That does not mean you've downloaded all of the images in the training set, whatever dumb social media artists would like you to believe.
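If you don't believe the "really small" part, you can count the weights yourself (same caveat: the checkpoint name is just an example, and the numbers in the comments are for SD 1.5):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Count every parameter in the three networks that make up the model.
total = sum(p.numel()
            for m in (pipe.unet, pipe.vae, pipe.text_encoder)
            for p in m.parameters())
print(f"total parameters: {total / 1e9:.2f}B")        # roughly 1 billion
print(f"fp16 size on disk: {total * 2 / 1e9:.1f} GB")  # about 2 GB

A couple of gigabytes of weights physically cannot contain the billions of training images (LAION-scale datasets are on the order of hundreds of terabytes).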
All that said, relying on AI does make you dumb, and people shouldn't be using it to trace art (any more than they should be tracing from CG and real art like fucking Greg Land).