>>106874119
I was testing stuff for a bulk storage pool where space efficiency is more important than performance. I tried btrfs and found it unsuitable because of how easily I was able to break it.
This was multiple years after I had last used it, back when I wound up bricking a raid1 system because of some issue with non-atomic renames/snapshots that I can't recall the exact specifics of. I probably could have salvaged that older system, but after numerous other headaches, like the pool requiring manual intervention to mount after a drive failed, and the aforementioned trick of temporarily attaching a flash drive to add capacity just so I could delete files, I was done. At the time I moved back to ext4 or xfs (can't recall which).
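For anyone who hasn't hit that ENOSPC trap: you can't always delete files on a completely full btrfs volume because the deletion itself has to allocate new metadata, so the workaround was roughly this (device and mount paths here are just examples, not my actual setup):

    btrfs device add /dev/sdX /mnt/pool       # borrow free space from a flash drive
    rm -r /mnt/pool/stuff-to-delete           # deletion can now allocate the metadata it needs
    btrfs device remove /dev/sdX /mnt/pool    # migrate extents back off and detach the drive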
>>106874164
I did in fact have mirrored metadata. No idea why you would assume that I didn't. It is (or at least was) fairly easy to break btrfs raid5/6 by dropping one or more disks during a sync. You could wind up with inconsistent metadata/data states that were not trivially recoverable.
I do remember reading something a few months later that sounded like it would have let me roll back stuff on that pool and import/mount it, but the fact that it even got to that state was enough to dissuade me from bothering. Whatever the actual problem was, it simply doesn't exist on zfs. I can yank a random drive out of my rack right now, wait 30 seconds, throw it back in, and zfs will do a partial resilver to bring it back up to date. I'll get angry messages from the event daemon, but nothing will break, even if I yank a drive and then power cycle the system.
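If you want to simulate the yank without leaving your chair, it goes roughly like this (pool name and device are placeholders for whatever yours are):

    zpool offline tank sda      # take the drive out of the pool (or just physically pull it)
    # wait a bit, maybe write some data to the pool in the meantime
    zpool online tank sda       # zfs resilvers only the transactions the drive missed
    zpool status tank           # reports the partial resilver and any errors

zfs keeps a dirty time log per device recording which transaction groups it missed while offline, so bringing it back only copies the deltas instead of rebuilding the whole vdev.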
>honestly, raid5/6 support in btrfs is at this point probably holding back adoption.
There are numerous companies using btrfs in production, but they're almost exclusively using it either as single-disk instances or for basic mirroring. Parity raid (5/6) support is what makes bulk storage cheap, and yeah, not having a robust implementation does kill a lot of the at-home adoption cases.