>>107086661
bfloat16 is literally just the upper half of a single precision float: you keep the sign bit and the full 8-bit exponent but only 7 of the 23 mantissa bits. So the only real advantage is half the storage space (the range stays the same since the exponent is untouched) at a cost of roughly 70% of the precision.
It's still accurate to ~2 decimal digits (enough for percentages), and it's trivial to convert back to a single precision float, so it covers a decent number of use cases.
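The conversion really is just bit shuffling. Minimal sketch in C, assuming IEEE-754 float32 and truncating instead of rounding (hardware converters round to nearest even, so results can differ by one ULP):
[code]
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* keep only the top 16 bits of the float32 bit pattern */
static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* safe type punning */
    return (uint16_t)(bits >> 16);    /* drop the low 16 mantissa bits */
}

/* pad the low 16 bits with zeros to get a valid float32 back */
static float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159265f;
    uint16_t b = f32_to_bf16(x);
    /* prints 3.1415927 -> 0x4049 -> 3.1406250, i.e. ~2 good decimal digits */
    printf("%.7f -> 0x%04x -> %.7f\n", x, (unsigned)b, bf16_to_f32(b));
    return 0;
}
[/code]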
fp8 and fp4 are pretty much just for ML, yeah. They have shit accuracy and range, which is only workable when most of the numbers are normalized and you have literally billions of them to spread the error out over.
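Toy demo of that error-diffusion point, faking an fp8-ish format with ~3 mantissa bits via frexpf (not a real E4M3 codec, the range limits are approximate): the worst single value can be off by ~100% (small values flush to zero), but the error on a large sum of normalized values stays tiny.
[code]
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* round a float to ~3 mantissa bits with a crude, approximate fp8-ish range */
static float fake_fp8(float x) {
    if (x == 0.0f) return 0.0f;
    int e;
    float m = frexpf(x, &e);              /* x = m * 2^e, 0.5 <= |m| < 1 */
    m = roundf(m * 16.0f) / 16.0f;        /* keep only a few mantissa bits */
    if (e > 8)  return copysignf(240.0f, x); /* crude overflow clamp, not exact E4M3 */
    if (e < -6) return 0.0f;                 /* crude underflow flush to zero */
    return ldexpf(m, e);
}

int main(void) {
    const int n = 1000000;
    double exact = 0.0, quant = 0.0, worst = 0.0;
    srand(42);
    for (int i = 0; i < n; i++) {
        float x = (float)rand() / RAND_MAX;   /* normalized-ish values in [0,1] */
        float q = fake_fp8(x);
        exact += x;
        quant += q;
        double err = fabs(q - x) / (fabs(x) + 1e-9);
        if (err > worst) worst = err;
    }
    printf("worst per-element rel error: %.3f\n", worst);
    printf("rel error of the sum:        %.6f\n",
           fabs(quant - exact) / fabs(exact));
    return 0;
}
[/code]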