The Robocop?
What possibilities actually exist for 4chan to use Large Language Model (or, more broadly, machine learning) technology to assist with moderating the website? My idea here isn't so much a plain autojanny doing all the work as a special tool for assisting us with some of the most important work, the stuff that falls under GR1.
Say someone like the Babyshitter is subtly altering the image so it doesn't get caught by the autoblock, as he's currently doing. What if we had an AI bot trained on the content already put into the autoblock, specifically so it can identify subtly altered versions of that content?
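For what it's worth, this kind of "catch the subtly altered repost" problem is usually tackled with perceptual hashing rather than a full-blown neural net. A rough sketch of the idea in Python, assuming the already-blocked images sit in a local folder and using the third-party Pillow and ImageHash libraries; the folder name and the distance threshold are placeholders for illustration:

```python
# Sketch: flag images that are perceptually near-identical to already-blocked
# content. Folder path and distance threshold are illustrative placeholders.
from pathlib import Path

from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

BLOCKED_DIR = Path("blocked_images")  # hypothetical archive of autoblocked images

# Precompute a perceptual hash for every image already fed into the autoblock.
blocked_hashes = [
    imagehash.phash(Image.open(p))
    for p in BLOCKED_DIR.iterdir()
    if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".gif", ".webp"}
]

# The Hamming distance between perceptual hashes stays small under re-encoding,
# mild cropping, resizing, or added noise. 0 would mean an exact hash match.
MATCH_THRESHOLD = 8  # tune against known altered samples before trusting it

def looks_like_blocked_content(image_path: Path) -> bool:
    """Return True if the image is perceptually close to any blocked image."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - blocked <= MATCH_THRESHOLD for blocked in blocked_hashes)
```

The appeal is that it degrades gracefully: re-encoding, slight crops, or added noise only nudge the hash a few bits, so a sensible threshold still catches them, while genuinely different images stay far away.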
That bot would then patrol a board, scanning through every currently active thread, and if someone posts something that matches, it immediately files a global GR1 report and maybe also pings the moderation channel on Discord.
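The scanning half doesn't even need mod access, since the public read-only JSON API already exposes board catalogs and threads. A rough sketch of that polling loop, reusing the looks_like_blocked_content() check from the sketch above; the board choice and the Discord webhook URL are placeholders, and the actual GR1 report step is left as a stub because that would go through whatever internal mod tooling exists, which I'm not going to guess at:

```python
# Sketch: poll one board's catalog, hash-check every image in every active
# thread, and ping a Discord webhook on a match. Assumes the
# looks_like_blocked_content() function from the previous sketch.
import time
import tempfile
from pathlib import Path

import requests  # pip install requests

BOARD = "b"                                           # example board
WEBHOOK_URL = "https://discord.com/api/webhooks/..."  # placeholder webhook
seen_posts: set[int] = set()                          # skip posts already checked

def check_board_once() -> None:
    catalog = requests.get(f"https://a.4cdn.org/{BOARD}/catalog.json", timeout=30).json()
    thread_nos = [t["no"] for page in catalog for t in page["threads"]]
    for no in thread_nos:
        thread = requests.get(f"https://a.4cdn.org/{BOARD}/thread/{no}.json", timeout=30).json()
        time.sleep(1.1)  # the public API asks for at most one request per second
        for post in thread["posts"]:
            if "tim" not in post or post["no"] in seen_posts:
                continue  # text-only post, or one we've already looked at
            seen_posts.add(post["no"])
            img = requests.get(
                f"https://i.4cdn.org/{BOARD}/{post['tim']}{post['ext']}", timeout=30
            )
            with tempfile.NamedTemporaryFile(suffix=post["ext"]) as tmp:
                tmp.write(img.content)
                tmp.flush()
                if looks_like_blocked_content(Path(tmp.name)):
                    # TODO: file the global GR1 report through internal mod tooling.
                    requests.post(WEBHOOK_URL, json={
                        "content": f"Possible GR1 match: /{BOARD}/ thread {no}, post {post['no']}"
                    }, timeout=30)

while True:
    check_board_once()
    time.sleep(60)  # pause between full sweeps of the board
```

Even something this simple sidesteps most of the risk, since it only reports and pings; a human still pulls the trigger.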
A crucial question here is whether the bot should be given any operational autonomy for this task, i.e. be able to request/issue bans and delete posts on its own. AI can be very fallible, so I'm inclined to say probably not.
What would be the practical and logistical constraints of developing this kind of bot, and then of operating it? My primary aim with this is to deal with people posting illegal content like CSAM, and possibly with certain people operating their own spam bots (i.e. fighting fire with fire).
Finally, if this is at all possible to implement, what do we name the thing? Murphy? Fagballs?