>>105672908 (OP)
>turn data with lots of repetitions into a compression format that compresses stuff by reducing the space taken up by repeated data
>gets reduced 10x
>turn completely random ass data with 0 repetition into a compression format that compresses stuff by reducing the space taken up by repeated data
>basically stays the same size
wow, I wonder why
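rough sketch of the point, not what the OP actually ran: python's zlib stands in for whatever general-purpose compressor was used, the exact numbers are just illustrative
[code]
import os
import zlib

repetitive = b"hello world " * 100_000      # highly redundant data
random_data = os.urandom(len(repetitive))   # incompressible by construction

for name, data in (("repetitive", repetitive), ("random", random_data)):
    packed = zlib.compress(data, 9)
    print(f"{name:10s} {len(data):>9} -> {len(packed):>9} bytes "
          f"({len(packed) / len(data):.1%} of original)")

# typical result: the repetitive buffer drops to well under 1% of its size,
# the random buffer stays at ~100% (plus a few bytes of header overhead)
[/code]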
>why hasn't this improved at all over the years
Because there are physical limits to what is possible. There are some cases where further compression of lossy formats can be done (e.g. losslessly transcoding jpeg to jpeg-xl, or losslessly optimizing mp3s from cbr to vbr by stripping out the empty, useless padding bits with mp3packer), but this is usually only possible with old, inefficient, technologically dated formats, and even then the gains are very small: maybe 20% at best when losslessly transcoding early-'90s jpeg to modern jpeg-xl, and less than 5% when optimizing an mp3 with mp3packer.
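same idea from the other side: once a format has already squeezed out most of the redundancy, its output looks statistically close to random, so a second pass has almost nothing left to find. another rough sketch, again with zlib standing in for a real codec's entropy coder:
[code]
import zlib

original = ("some text with plenty of repetition " * 50_000).encode()

first_pass = zlib.compress(original, 9)
second_pass = zlib.compress(first_pass, 9)  # "optimize" the already-compressed data

print(f"original:    {len(original):>9} bytes")
print(f"first pass:  {len(first_pass):>9} bytes")
print(f"second pass: {len(second_pass):>9} bytes")  # barely moves, may even grow a little
[/code]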
losslessly optimizing an h264 stream would be mental; the complexity is just too high.