Has there been any consideration given to adopting a machine learning model that can identify potentially rule-breaking content and automatically report it? Given the current state of ML, it would obviously be difficult for a model to correctly judge whether an arbitrary post breaks the rules, but some violations are definitely easy for a bot to recognize, like posting NWS images.

Some of these models have been around for quite a while, and while I have no firsthand experience with them, they claim to be pretty effective: https://github.com/infinitered/nsfwjs, for example, reports roughly 90% accuracy. There are quite a few models trained for this specific purpose, and they're easy to find with a quick search. I've even read articles about several social media companies using "artificial intelligence" to identify things like child porn or gore and block them before they're even uploaded.
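To give an idea of how little glue code this would take, here's a minimal sketch of screening an uploaded image with nsfwjs in Node. It assumes @tensorflow/tfjs-node and nsfwjs are installed; the function name and the 0.7 threshold are just illustrative placeholders, not tuned values or anything the library prescribes.

```typescript
import * as tf from "@tensorflow/tfjs-node";
import * as nsfwjs from "nsfwjs";
import { readFile } from "fs/promises";

// Returns true if the image at `path` looks NSFW with at least `threshold`
// confidence. nsfwjs scores five classes: Drawing, Hentai, Neutral, Porn, Sexy.
async function isProbablyNsfw(path: string, threshold = 0.7): Promise<boolean> {
  const model = await nsfwjs.load(); // loads the default pretrained model
  const buffer = await readFile(path);
  const image = tf.node.decodeImage(buffer, 3) as tf.Tensor3D;
  try {
    const predictions = await model.classify(image);
    return predictions.some(
      (p) =>
        (p.className === "Porn" || p.className === "Hentai") &&
        p.probability >= threshold
    );
  } finally {
    image.dispose(); // free the decoded tensor's memory
  }
}
```

A real bot would presumably load the model once at startup rather than per image, and flag matches for janitor review rather than deleting outright.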

The only potential problem I can think of is that some boards would generate too many false positives because of their subject matter (/fit/ would likely trip the filter far more often than a board like /mu/ or /biz/), but it should be possible to tune the confidence threshold per board, as in the sketch below. Even if this wouldn't work for every blue board, it could still be very useful on boards where there's little reason to post images of humans at all, let alone scantily clad ones.
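Per-board tuning could be as simple as a lookup table feeding the threshold parameter above. The board names and numbers here are purely illustrative guesses at how the knobs might be set, not measured values:

```typescript
// Boards where skin is on-topic get a higher bar before auto-reporting.
const boardThresholds: Record<string, number> = {
  "/fit/": 0.95, // shirtless progress pics are routine; demand near-certainty
  "/mu/": 0.8,
  "/biz/": 0.8,
};
const DEFAULT_THRESHOLD = 0.85;

function thresholdFor(board: string): number {
  return boardThresholds[board] ?? DEFAULT_THRESHOLD;
}

// e.g. await isProbablyNsfw(upload.path, thresholdFor("/fit/"))
```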

Please let me know your thoughts in the comments below and don't forget to hit that subscribe button!