>>23119615
>Aren’t neural networks like LLMs and image generators “trained” off datasets at their core? Like billions of references images/text that it learns from. At least that’s how early AI image gen worked to my knowledge.
yes, that's what training always meant as far as i knew
the way people use the term might've drifted a bit now that LLMs are everywhere, idk though
the training phase is orders of magnitude more computationally intensive than actually running the finished model
the guy who made the baritone model i mentioned had to rent microsoft azure servers during the training phase, but the finished product (a sophisticated bot that plays minecraft, basically) runs on consumer hardware
i think he intended to sell it to the highest bidder
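to give a feel for the training vs inference gap, here's a rough back-of-envelope in python using the usual ~6*N*D flops-for-training and ~2*N flops-per-generated-token rules of thumb (the model size and token counts are made-up example numbers, not any real model):

# rough back-of-envelope: why training dwarfs inference
# (toy numbers; ~6*N*D training flops and ~2*N flops per generated
#  token are the common rules of thumb, not exact figures)
params = 7e9          # a 7-billion-parameter model (made-up size)
train_tokens = 1e12   # 1 trillion training tokens (made-up)
gen_tokens = 1e3      # one chat response, ~1000 generated tokens

train_flops = 6 * params * train_tokens   # total training compute
infer_flops = 2 * params * gen_tokens     # compute for one response

print(f"training:     {train_flops:.1e} flops")
print(f"one response: {infer_flops:.1e} flops")
print(f"ratio:        {train_flops / infer_flops:.1e}x")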
but yeah, you could train an LLM based on gigabytes of reddit/stackexchange posts/interactions, for example
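a minimal sketch of what that looks like in practice, assuming huggingface transformers + datasets are installed; "reddit_dump.txt" (one scraped post per line) and the gpt2 base model are just placeholders for whatever dump and model you'd actually use:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# one scraped post/comment per line of the text file (placeholder path)
raw = load_dataset("text", data_files="reddit_dump.txt")["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

args = TrainingArguments(output_dir="lm-out",
                         per_device_train_batch_size=4,
                         num_train_epochs=1)
# the collator pads each batch and copies input_ids to labels for
# next-token prediction (mlm=False means causal LM, not masked LM)
trainer = Trainer(model=model, args=args, train_dataset=tokenized,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()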
you can train a computer vision classification model in a similar way, say by giving it like 10,000 pictures of dogs/cats, each labelled as dog or cat
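sketch of that too, assuming pytorch + torchvision and a hypothetical folder layout like pets/cat/*.jpg and pets/dog/*.jpg:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])
# ImageFolder infers the label from the subfolder name (cat=0, dog=1)
data = datasets.ImageFolder("pets", transform=tfm)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18()                       # no pretrained weights by default
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 output classes: cat, dog

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

in practice you'd start from pretrained weights and hold out a validation split, but that's the basic shape of it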
>Once this technology becomes commonplace enough for scammers to use I think we’re gonna see a market crash. How are you supposed to sell product to shareholders if you can’t guarantee a product is being seen by people? Same thing with ad clicks. How do we know those clicks are from real customers interested in a product and not bots?
it's gonna be interesting to see how it all shakes out, i don't really know
but i definitely think the more interesting stuff is gonna come from chaining different types of neural networks and other programs together in ingenious ways, and possibly from breakthroughs in the matrix math these things are based on in the first place
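just to illustrate what i mean by chaining, here's a toy sketch that pipes an image-captioning model into a text generator; assumes huggingface transformers (plus pillow), and the model names and photo.jpg path are placeholders, not any specific project:

from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
generator = pipeline("text-generation", model="gpt2")

# model 1: turn an image into a text description
caption = captioner("photo.jpg")[0]["generated_text"]

# model 2: feed that description to a language model as a prompt
prompt = f"Scene description: {caption}\nWhat should the bot do next?"
plan = generator(prompt, max_new_tokens=40)[0]["generated_text"]
print(plan)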