>>9045
The main problem is that AI research is an entire industry and research field, and a lot of brilliant people are working on making systems that can pass as human. CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart," but so much effort is being poured into making computers indistinguishable from humans that I think it's a losing battle unless you have a small crew of very expensive AI research PhDs and programmers, the hardware to run their systems, and the money to keep it all running.

The blind spots you're talking about are an entire field of research called "adversarial machine learning." What's funny is that 4chan is so old that when it opened in 2003, spam and bot-posting were so unsophisticated that a simple banned-phrase list would have been enough. That stopped being true several years into 4chan's life, and the arms race has escalated far beyond the point where 4chan could pay for a home-grown system that competes with the tools available to bad actors.
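To put "simple banned-phrase list" in perspective, here's roughly what 2003-era spam filtering amounted to. This is just an illustrative sketch; the phrases and function names are made up, not anything 4chan actually ran:

```python
# Hypothetical phrase blacklist; real lists were just longer versions of this.
BANNED_PHRASES = ["buy cheap meds", "free ipod", "click here now"]

def is_spam(post: str) -> bool:
    """Flag a post if it contains any banned phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)
```

That's the entire defense: a substring check. Any bot that rewords its pitch walks right past it, which is exactly why this stopped working once spammers put in even minimal effort.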

We're long past the point of computers being unable to pass Turing tests, and designing tests they can't pass keeps getting harder. Computer systems will keep improving thanks to all the money and research flowing into the field, which means you need staff constantly updating these tests. The whole ordeal turns into an expensive game of cat-and-mouse where the mouse has a lot of money and the cat is a 20-year-old website that continues to exist fueled by rainbows and butterflies and hotpockets.