>>281951840
The thing I can see AI video doing in the very near term is killing background/low-quality outsourced work, and that material is already really suspect. It's also the perfect candidate for in-betweening: you have a target first and last frame, and in that case I'd take AI slop over what people usually do.

The only real issue is consistency from a technical perspective. The best open-source video model at the moment, Wan 2.2, generates at 16 FPS, and only an experimental text-to-video model does 24 FPS. And even then, anime is rarely a full 24 FPS: backgrounds can run at an effective 6 to 8 FPS, and even the main objects in focus can drop to 8 to 12 FPS. That mismatch creates the uncanny look you can already see in this thread, and interpolation only goes up, which works technically but definitely doesn't look normal, it looks even worse. It's that rotoscoped/soap-opera effect.

So I have no clue if a model will ever come out that does it properly, but I imagine it won't be from the main providers of AI video models. It'll probably be something some company or cooperative in Japan does years from now, after all the bleeding-edge stuff works itself out, given the thin margins and the hardware you'd need.
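To make the framerate point concrete, here's a minimal sketch of what anime actually does: it holds drawings across frames ("on twos" or "on threes") instead of interpolating new ones. The function name and setup are mine, not from any model's tooling, just to show the arithmetic.

```python
def hold_frames(frames, effective_fps, target_fps=24):
    """Repeat each drawing so motion stays at effective_fps while the
    video container runs at target_fps, e.g. 8 FPS motion in a 24 FPS
    file means every drawing is held for 3 frames ("on threes")."""
    if target_fps % effective_fps != 0:
        raise ValueError("target_fps must be a multiple of effective_fps")
    hold = target_fps // effective_fps  # how many frames each drawing occupies
    out = []
    for f in frames:
        out.extend([f] * hold)
    return out

# One second of background motion drawn at 8 FPS fills 24 container frames:
second = hold_frames(list(range(8)), effective_fps=8)
assert len(second) == 24
assert second[0] == second[1] == second[2]  # each drawing held for 3 frames
```

Interpolation does the opposite: it synthesizes new in-between frames so every one of the 24 frames is unique, which is exactly why it reads as the soap-opera effect instead of the held look above.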