
Thread 106868703

97 posts 12 images /g/
Anonymous No.106868703 [Report] >>106868728 >>106868953 >>106869043 >>106869563 >>106870743 >>106871882 >>106874618 >>106875924 >>106878319 >>106878383 >>106878452 >>106880512
>corrupts the instant you use it
Anonymous No.106868728 [Report] >>106871392
>>106868703 (OP)
Sounds like a skill issue.
Anonymous No.106868742 [Report] >>106874618 >>106878262
recovery procedures are pain in the ass for this fs
Anonymous No.106868768 [Report]
Stop making shit up.
Anonymous No.106868875 [Report]
and since it touts itself as such a superior fs, filling your drive with checksums of every chunk, gotta make sure that COC_README.doc doesn't change. and god forbid one of those checksum blocks or the data it refers to has a hiccup: STOP EVERYTHING, NO MORE MOUNTING, i smelled a problem, i need the user to spend a week learning about btrfs and running long restore operations that ultimately don't do anything but report to you what's wrong
Anonymous No.106868883 [Report] >>106868944 >>106878342
but hey, now you can at least mount it read-only and save a corrupt backup of your shit! just use ext4 like everyone else
Anonymous No.106868944 [Report]
>>106868883
No I'll use btrfs
Anonymous No.106868953 [Report]
>>106868703 (OP)
based on what?
mine is still going strong 8 years later.
Anonymous No.106869043 [Report] >>106871342 >>106871395 >>106871396 >>106873726
>>106868703 (OP)
> btrfs ate my data
I doubt this has ever happened to anyone who parrots this, or that a single parrot ever got the idea from a credible source
Anonymous No.106869125 [Report] >>106869134 >>106870691 >>106870703 >>106871365 >>106874199
I never had BTRFS get corrupted. However, due to all of this FUD about BTRFS corruption I switched to ext4. It corrupted with data loss in under 3 weeks of use.
Anonymous No.106869134 [Report]
>>106869125
fucking kek
Anonymous No.106869563 [Report]
>>106868703 (OP)
I heard raid5/6 can be problematic, but for home desktop usage it's perfect. Fedora and openSUSE default to it; sadly, only openSUSE has a working rollback option with snapper ootb.
Anonymous No.106870677 [Report] >>106870691
ext4 is faster
Anonymous No.106870691 [Report]
>>106869125
top fucking kek
>>106870677
xfs is sometimes slightly faster than ext4
Anonymous No.106870702 [Report]
btrfs is slow and dangerous, but on SMR hard drives it's the only usable filesystem
too bad bcachefs got tranny'd and is now out of tree because linus got mad at the developer
Anonymous No.106870703 [Report]
>>106869125
Yeah, journaling is disabled by default for performance reasons.
Anonymous No.106870743 [Report]
>>106868703 (OP)
Btrfs is good for production nodes.
Btrfs stands for btfo'ing r* boomers who'd run zfs everywhere.
Anonymous No.106871320 [Report]
is this the 2013 thread?
Anonymous No.106871342 [Report]
>>106869043
it has happened to me, just not in the last 10 years
Anonymous No.106871353 [Report] >>106872778
You make this thread, Kent?
Anonymous No.106871365 [Report] >>106871394
>>106869125
>due to all of this FUD about BTRFS corruption I switched to ext4. It corrupted with data loss in under 3 weeks of use.
Why does the default Linux filesystem have such severe corruption issues? On Windows with NTFS, I've never had data corruption on my system drive. There have been many, many forced shutdowns due to power outages, undervolting failures (on my laptop), overclocking failures (on my old gaming pc), and it has never corrupted.
I have had an external Scamsung SSD get corrupted, but I've yet to reformat it to find out whether it's the filesystem or just the SSD.
Anonymous No.106871382 [Report]
really though
why haven't they even got RAID5 stable yet?
Anonymous No.106871388 [Report]
the ultimate tinker tranny fs. oh look, can't make a swap file, spend 10min jewgling why and a fix
Anonymous No.106871392 [Report] >>106871396 >>106871779 >>106882644
>>106868728
nta, but i used btrfs in the past.
one day, out of nowhere, i get a kernel panic, i reboot, drive is corrupted, no way to recover it.

i'm never using that piece of shit again, zfs on the other hand, never had any issues in a decade.
Anonymous No.106871394 [Report] >>106871678
>>106871365
>Why does the default Linux filesystem have such severe corruption issues?
it (btrfs) doesn't.
>On Windows with NTFS, I've never had data corruption on my system drive.
*that you're aware of. ntfs doesn't support checksumming so it has no internal mechanism for detecting corruption.
like just a couple weeks ago i backed up a friend's computer to restore onto new hardware. no errors when copying, the drives appear in good condition... but when running a couple of the restored steam games, they would crash. running an integrity check in steam showed several corrupted files needing to be replaced, and afterwards the games ran fine. he was also complaining that one game wasn't able to save/load games, so he had to leave his computer on for the duration of the game; that was also fixed upon integrity checking.
simply put, you don't know if anything you have on ntfs is corrupted, since not even ntfs knows that
Anonymous No.106871395 [Report] >>106871402
>>106869043
It's happened to me more than once, but I keep using it because nothing else has a competitive feature set and I have backups.
Anonymous No.106871396 [Report]
>>106869043
>>106871392
Anonymous No.106871402 [Report] >>106871411 >>106875005 >>106878407
>>106871395
ZFS mogs btrfs.
Anonymous No.106871411 [Report] >>106875910
>>106871402
Not in the kernel, and its arrays are nowhere near as flexible. Not even under consideration.
Anonymous No.106871527 [Report] >>106872066
I was just about to use bcachefs because it was getting stable enough, and then they removed it from mainline.
Fuck.
Kent just get over yourself and cooperate with Linus.
Anonymous No.106871678 [Report] >>106871766
>>106871394
>it (btrfs)
btrfs is not the default linux filesystem, ext4 is
>simply put, you don't know if anything you have on ntfs is corrupted, since not even ntfs knows that
i know it's not corrupted because all my programs work and all my videos play without glitches, retard
imagine trying to fud people into thinking that all their data must be corrupted because "you don't KNOW that it's not" lmao
Anonymous No.106871766 [Report]
>>106871678
if you've never had a corrupted file or unexpected behaviour that reinstalling something fixed, then aren't you lucky. for me this is a "i don't have a backup because i've never had a drive die on me" kind of argument. just because it hasn't happened yet doesn't mean it won't. i've sure run into it many times.
Anonymous No.106871779 [Report] >>106878158 >>106878626
>>106871392
This happened to me with ext4. Turns out it was just a bad drive.
Anonymous No.106871789 [Report] >>106872049
I put btrfs on the data ssd in my mini pc home server so I can snapshot + borg backup to an external hdd (ext4) easily. Being able to detect bitrot is nice. Single data and dup metadata + system profiles, so at least metadata errors can be healed.
Eventually I'd like to build a NAS with at least 2 mechanical drives running ZFS which should be the main protected ground truth data pool but for now btrfs on a single ssd is probably better than nothing.
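for reference, the loop is basically this (paths are made up, assuming the data lives in a subvolume at /data and the borg repo is on the external hdd):
btrfs subvolume snapshot -r /data /data/.snap-borg
borg create /mnt/external/repo::data-{now} /data/.snap-borg
btrfs subvolume delete /data/.snap-borg
the read-only snapshot gives borg a consistent source while the system keeps writing.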
Anonymous No.106871882 [Report] >>106882591
>>106868703 (OP)
Anonymous No.106872049 [Report] >>106873801
>>106871789
even on a single drive btrfs can protect you from mistakes (snapshots) and tell you about corrupted files (so you can restore them from a backup rather than letting them propagate to backups).
you can also easily convert it to a raid1 later on by just adding another drive to that btrfs volume and running a balance on it. btrfs has the most flexible raid out of anything out there, there's no planning or commitment required, it will adapt to whatever you need the moment you need it
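concretely that's just something like (assuming the new drive is /dev/sdb and the volume is mounted at /mnt):
btrfs scrub start /mnt
btrfs device add /dev/sdb /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
the scrub is what reports corrupted files; once you have redundancy it repairs them too.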
Anonymous No.106872066 [Report]
>>106871527
Just use the DKMS module bruv, it will go into the kernel again eventually anyway
Anonymous No.106872778 [Report]
>>106871353
no, but some posts are def kent
Anonymous No.106873726 [Report] >>106873801 >>106874119
>>106869043
I got bitten by it years ago, and more recently was able to pretty trivially break stuff in testing with a raid5 setup. Went with zfs instead and have no real complaints. Sure, rolling your own kernels with dkms is a headache, but it's automatable. I also got bitten by the default quota settings being fucking retarded. I had a system that got too full and had to attach a USB drive to the pool to have enough free space to delete stuff. That's also something that can be worked around, but there's a lot of really retarded foot-guns like that in btrfs.

btrfs is fine, arguably ideal even, for single disk systems these days. It depends on whether you want baked-in encryption or not. ZFS lets you have different encryption schemes per dataset, and the encryption is done in a way that allows you to validate data without decrypting it, so you can have thin provisioned backups on an offsite machine without ever having to trust it to decrypt stuff. btrfs doesn't support that at all, so you're stuck with the limitations of LUKS.

Maybe not a huge deal, but my encrypted backups are nearly 4TB, and there's a lot of changes from month to month. With ZFS I raw send the encrypted blocks and it Just Werks™. With btrfs I'd have to decrypt stuff, send whole disk images, or use some third party tool to manage that stuff, and all of that means you're adding extra steps. With ZFS you can manage all the snapshots, keep them thin provisioned, encrypted, and validated, and if you ever need to do a restore you just send it back. Trying to piece together multiple encrypted partial sends, or wasting space on full backups, is a nightmare.
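the send side really is one pipe, something like this (pool/dataset/host names are placeholders):
zfs send --raw -i tank/data@last-month tank/data@this-month | ssh offsite zfs receive backup/data
--raw ships the encrypted blocks as-is, so the offsite box never needs the key.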
Anonymous No.106873801 [Report] >>106873901
>>106873726
>cont.

>>106872049
>btrfs has the most flexible raid out of anything out there, there's no planning or commitment required, it will adapt to whatever you need the moment you need it

I'd argue that ceph is the most flexible because it handles rebalancing and multiple systems automatically. btrfs can rebalance, but it doesn't do so automatically. The way it distributes stuff is often really stupid as well, which means your rebuild times can get unhinged on modern 20TB+ capacity drives. If you have true mirrors, you can rebuild a failed disk by doing a linear copy from one end of the disk to the other, and it's pretty trivial to skip over unused space to accelerate this. btrfs will just throw shit on random drives by default, so rebuilding a failed drive requires random I/O traversal, and that can lead to atrocious rebuild performance.

Also, the way the block pointer structure is set up in btrfs means that there's no real inline deduplication, so if you're cloning data with reflink copies, or using a tool to do deduplication, that gets broken any time you perform a rebalance/defrag. If you heavily use deduplication (which to be fair 99% of people don't), or if you're someone who does a lot of reflink copying for stuff like destructive edits or cloning repos, a rebalance can massively balloon your disk usage, and there's no "hey, this might be a problem, are you sure you want to do this?" type warning. IIRC there's not even a dry-run option to test for this. Deduping has to be handled by outside tools, and I'd argue that it's completely unhinged for it to not keep deduped data deduped when doing maintenance tasks.

>you can also easily convert it to a raid1 later on by just adding another drive
That's going to be true for almost any file system. A basic mirror is about as simple as it gets raidwise. You can convert an ext4, xfs, or whatever partition into a mirror with mdadm. You can attach a drive to a vdev in zfs and turn it into a mirror
Anonymous No.106873901 [Report] >>106874255 >>106874551
>>106873801
>that gets broken any time you perform a rebalance/defrag
data is unshared when doing a defrag, but not a balance, so a lot of this paragraph is simply incorrect

i've heard of ceph, but i'm quite unfamiliar with it. i know enough about it to know it's a distributed filesystem. i don't know how feasible/sensible it is to use in a single/few-user or home setting. i'd be really curious to know more about it if someone here has used it

>That's going to be true for almost any file system. A basic mirror is about as simple as it gets raidwise. You can convert an ext4, xfs, or whatever partition into a mirror with mdadm. You can attach a drive to a vdev in zfs and turn it into a mirror
yea, it's not hard to make a mirror in general, but like in btrfs you can turn a single device volume into a raid1 with two commands, even while you're booted from it, then you could add a third and fourth and turn it into a raid10. i've done similar with mdadm, but it's not as nice to do. plus you can make a raid out of drives of different sizes with btrfs, or a raid1 out of more than two drives while only keeping two copies of data
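e.g. going from that raid1 to a raid10 (device names are examples):
btrfs device add /dev/sdc /dev/sdd /mnt
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt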
Anonymous No.106874119 [Report] >>106874164 >>106874421
>>106873726
>trivially break stuff in testing with a raid5 setup
Raid 5 and 6 are marked as unstable and you got it to break?
What a genius you are! I imagine that must have been quite the accomplishment.
Why did you even try to use something that you were explicitly warned against?
Anonymous No.106874164 [Report] >>106874421
>>106874119
considering he didn't suggest he followed the recommended practice of using raid1 metadata with raid5, yea, i'm not surprised he was bit by it
honestly, raid5/6 support in btrfs is at this point probably holding back adoption. like everything else is fine. other parts could be better, but the same could be said about any filesystem. btrfs raid5/6 being left in an incomplete state however makes many people believe the whole filesystem is incomplete. like, it might be better if they removed it entirely. i don't really want them to, since if you understand the details then you /can/ use it fairly safely, but idk, it's a stain on its reputation. it's like no matter how much it improves, people will just bring this up as evidence btrfs isn't ready yet, and that's sad
Anonymous No.106874199 [Report]
>>106869125
>ext4
>corrupted in 3 weeks
Are you writing your datas by hands with a magnet?
Anonymous No.106874255 [Report]
>>106873901
>data is unshared when doing a defrag, but not a balance, so a lot of this paragraph is simply incorrect
My mistake then. It's been 4 or 5 years since I last looked into btrfs. Still, the paragraph is largely correct. The fact that this is a thing that happens without it being announced is rather stupid.

>but like in btrfs you can turn a single device volume into a raid1 with two commands, even while you're booted from it, then you could add a third and fourth and turn it into a raid10
No argument that it's clunky with mdadm, but expanding 1 disk to a 4 disk raid 10 would be two commands in zfs as well.
>zpool attach (poolname) (vdev_id) (new_device_id)
>zpool add (poolname) mirror (device_a device_b)
If you wanted to rebalance you could add a zfs rewrite -r on the root dataset. Rewriting does modify block creation times, so you wind up with a thick provisioned snapshot, but beyond that, there's no issues. 2-3 commands and you're done.
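spelled out with made-up pool/device names, one disk to a 4-disk raid10:
>zpool attach tank sda sdb
>zpool add tank mirror sdc sdd
>zfs rewrite -r tank
the attach turns the single disk into a mirror, the add bolts on a second mirror vdev, and the rewrite is the optional rebalance mentioned above.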

Growth is quite easy in ZFS. The main thing that you can't do is shrink a pool. The exception is if you have a pool of entirely mirrors.

The other main limitation is that you can't mix drive capacities within a virtual device. A pool composed of 3 4TB drives and a 12TB drive in a raid10 type setup is two 4TB mirrors for 8TB usable, since the 12TB drive is capped by its 4TB mirror partner. btrfs can give you more capacity there with its raid1 implementation, but as already mentioned, you pay for that with random rebuilds, which are slower.
Anonymous No.106874421 [Report]
>>106874119
I was testing stuff for a bulk storage pool where space efficiency is more important than performance. I tried btrfs and found it to be unsuitable because of how easily I was able to break it.

This is multiple years after I had last used it and wound up bricking a raid1 system because of some issue with nonatomic renames/snapshots that I can't recall the exact specifics of. I probably could have salvaged that older system, but after numerous other headaches with things like the pool requiring manual intervention to mount after a drive failed, and the aforementioned having to temporarily attach a flash drive to add capacity so that I could delete files, I was done. At the time I moved back to ext4 or xfs (can't recall).

>>106874164
I did in fact have mirrored metadata. No idea why you would assume that I didn't. It is (or at least was) fairly easy to break btrfs raid5/6 by dropping one or more disks during a sync. You could wind up with inconsistent metadata/data states that were not trivially recoverable.

I do remember reading something a few months later that sounded like it would have let me rollback stuff on that pool and import/mount it, but the fact that it even got to that state was enough to dissuade me from bothering. Whatever the actual problem is, it simply doesn't exist on zfs. I can yank a random drive out of my rack right now, wait 30 seconds, throw it back in, and zfs will do a partial resilver to bring it back up to date. I'll get angry messages in the event daemon, but nothing will break, even if I yank a drive and then power cycle the system.

>honestly, raid5/6 support in btrfs is at this point probably holding back adoption.
There are numerous companies using btrfs in production, but they're almost exclusively using it as either single disk instances or for basic mirroring. Raid5/6/7 support is useful for bulk storage cost reduction, and yeah, not having robust support does kill a lot of at home adoption cases.
Anonymous No.106874551 [Report]
>>106873901
>i've heard of ceph, but quite unfamiliar with it. i know enough about it to know it's a distributed file system. i don't know how feasible/sensible it is to use in a single/few-user or home setting. i'd be really curious to know more about it if someone here has used it

Ceph is a bit pointless on a single system because it's a lot of configuration, and it'll perform worse than most alternatives. One of its main points is that it automatically handles rebalancing between drives/nodes. If you pull a drive, it'll automatically begin replication of the blocks on that drive to elsewhere (it could be on the same or a different node). If you add a drive, it'll automatically begin redistributing blocks around, potentially pulling them from other nodes. Same with node addition/removal.

You can do mirroring across drives/nodes, or erasure encoding. You can even specify the degree of redundancy per node or set up biases to keep data "more local". For instance, you can set it up so that one node tries to get a full copy of all blocks while 3 others distribute the rest of the blocks. If the VMs/containers are running on that first machine, you can significantly reduce the amount of data being sent over the network.

Ceph is VERY flexible, but it can also be a nightmare to configure because it has so many options. The hardware requirements for it are rather absurd in comparison because it needs to maintain a lot of information to do things quickly. btrfs/zfs/etc will do fairly simple block tree traversals when reconstructing data. Ceph tries to parallelize that as much as possible, so the on disk format has certain overheads and constraints.
Anonymous No.106874618 [Report]
>>106868703 (OP)
been using it for years no problem. though I recently switched to zfs just because. zfsbootmanager is also a pretty good bootloader.

>>106868742
repairing my disk corruption was a mindless task with btrfs' CoW. one or two commands did the trick.
Anonymous No.106875005 [Report] >>106875910
>>106871402
slow dogshit for desktop, absolutely unusable for random reads and medium-sized files
t. daily driving
Anonymous No.106875910 [Report] >>106878140 >>106878230 >>106878375
>>106871411
> not in kernel
doesn't matter
> flexible array
that was a valid criticism a few years ago; it no longer is, now you can expand them with no issues.
>>106875005
this hasn't been my experience, and i've been using it for nearly a decade, it only got better.

now i find it more than fast enough.

maybe there is an issue with your hardware or how you configured zfs.
Anonymous No.106875924 [Report]
>>106868703 (OP)
No reason to ever not use ext4 desu
Anonymous No.106876041 [Report]
My first encounter with btrfs was garuda linux. I didn't like it, so I tried to install another distro; the installer came up with a load of errors related to btrfs on the disk, and I had to nuke the entire ssd before I could install it.

Personally I like zfs for snapshots.
Anonymous No.106878140 [Report] >>106880503
>>106875910
it's common knowledge zfs is horrid for small files and random reads. when you move files your whole PC slows down and audio often crackles; there are hundreds of posts about that on the internet
Anonymous No.106878158 [Report]
>>106871779
This is always the case, people blaming their filesystem for their shit hardware.

I've been running BTRFS for years, and the only time I got kernel panics was when my NVMe drive completely disappeared off of the bus and vanished from the system entirely. Turns out there was a bug with TRIM on Samsung drives and they released a firmware update a couple of months ago. No issues since.
Anonymous No.106878230 [Report] >>106878274 >>106880503
>>106875910
>that was a valid criticism a few years ago, no longer is, now you can expand them no issues.
Can you expand them with mixed drives of different sizes and still use them all, thanks to flexible filesystem policies like making sure there's always three copies? Or do you still have to buy drives of the exact same capacity (or replace them all)?
Anonymous No.106878262 [Report]
>>106868742
Recovery is supposed to be completely automatic.
If you need to run the BTRFS tools, it's because the filesystem is in some really weird and inconsistent state. In most cases BTRFS can repair things itself if you have redundancy. I've only seen the filesystem go read-only on me once, because I ran out of disk space, and that's a good safety measure (it's still fixable by remounting read-write and clearing up some space).
Anonymous No.106878274 [Report]
>>106878230
>Can you expand them with mixed drives of different sizes and still use them all things to flexible filesystem policies like making sure there's always three copies?
No
Anonymous No.106878319 [Report]
>>106868703 (OP)
Source?
Anonymous No.106878342 [Report] >>106878388
>>106868883
Using BTRFS for 10 years.
No data ever lost.
It just werks.
>Braindead yapping reply
Don't care; didn't ask. Plus, you're a nigger.
Anonymous No.106878375 [Report] >>106878770 >>106880503
>>106875910
Maybe ZFS is bad and the hype is just Dunning-Kruger tards who fell for the marketing? That's basically what Torvalds said. I wonder if he could be right?
Anonymous No.106878383 [Report] >>106878398
>>106868703 (OP)
Fucker corrupts even if you look at the drive funny.
Anonymous No.106878388 [Report]
>>106878342
Pretty much this. Only unrecoverable btrfs failure I had was on rhel7 where it's old shit code anyway or because my SSD literally failed. I had a weird size metadata issue once, but I fixed that with a rebalance.
Anonymous No.106878398 [Report] >>106878412
>>106878383
Have you considered that your hardware may be the problem? Wonder how many of you idiots are turd worlders with broken shit. At least btrfs is telling you your shit is broke.
Anonymous No.106878407 [Report]
>>106871402
ZFS wishes it was btrfs. It's just the slower, memory hogging, inflexible version of it.
Anonymous No.106878412 [Report]
>>106878398
They are braindead retards so don't understand what the filesystem is telling them.
Facebook actually found faulty RAID controllers in production thanks to BTRFS, but they of course understand the filesystem and don't scream at it at the first sign of problems; they assign the blame to the right place.
Anonymous No.106878452 [Report] >>106878489 >>106878588 >>106878970 >>106878970
>>106868703 (OP)
I used BTRFS on a desktop for a few years. I liked the subvolumes feature: it let me put multiple distros on the same partition. I never had any problems but then again, I never used most of the advanced features. YMMV, I guess.
Currently running ext4 on LVM. Nothing against BTRFS directly, I didn't want to make a separate partition just for swap.
Anonymous No.106878489 [Report]
>>106878452
>I didn't want to make a separate partition just for swap.
You haven't had to do that since Linux 5.0 in 2019:
https://kernelnewbies.org/Linux_5.0#Btrfs_swap_file_support
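for the record, making one is just this (size/path are examples; the chattr +C while the file is still empty is the important part, so it's NOCOW and stays contiguous):
truncate -s 0 /swapfile
chattr +C /swapfile
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
iirc newer btrfs-progs can do all of that in one go with btrfs filesystem mkswapfile.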
Anonymous No.106878588 [Report]
>>106878452
Why are tinkertrannies so dead set on saving 0.5% of their drive space by disabling things they don't understand? Just use swap, tinkertrannies. It's good for you.
Anonymous No.106878626 [Report]
>>106871779
i had a bad sata cable kill two drives on me and i recovered both times via btrfs tools and replaced the cable and added a new drive back in
Anonymous No.106878770 [Report]
>>106878375
It still does some things better, but it's hyper-niche. The vast majority of home users actually want betterfs.
Anonymous No.106878868 [Report]
Had 2 corruptions requiring manual intervention so far, both without any data loss. First was caused by failing RAM, BTRFS was just the canary that caught it first.
The second was caused by a bug in Kernel 6.15.3, resulting in a faulty log-tree.
Anonymous No.106878970 [Report] >>106881044
>>106878452
>>106878452
Swap partitions are better than swap files because they don't incur filesystem overhead and the blocks are contiguous on disk
Anonymous No.106879213 [Report]
I hope they make RAID stable one day. I'm using ZFS in my NAS but being able to mix drive sizes sounds very appealing.
Anonymous No.106880503 [Report] >>106880704 >>106884864
>>106878140
ZFS with default 128k record size can be bad for tiny random read, but this post is just plain FUD.

I've literally managed 80+TB databases running on spinners using 8 and 16k database record sizes. The performance was usually better than running directly on xfs because of inline compression. If you're doing large amounts of random I/O and not tuning things correctly, you're retarded.

>>106878230
>Can you expand them with mixed drives of different sizes and still use them all things to flexible filesystem policies like making sure there's always three copies?
Yeah, that's pretty easy to do. You never had to have identical drives. The limitation is that an individual virtual device is constrained by the capacity of the smallest drive in it. If you want to do 3-way mirrors, nothing stops you from mixing 6 4TB drives and 3 12TB drives with a 12TB hot spare to produce 3 sets of 3-way mirrors, 2 of them 4TB, one of them 12TB.
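as a sketch with placeholder device names (first six 4TB, the last four 12TB):
zpool create tank mirror sda sdb sdc mirror sdd sde sdf mirror sdg sdh sdi spare sdj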

They're adding anyraid "soon™", which lets you use mixed topologies where you just throw piles of disks in, but the price you'll pay for that is degraded resilver performance. The benefit of conventional mirroring is that rebuilds/resilvers can be done as linear operations, which with huge capacity drives is actually important. Random reading and writing, because you have to do a full block tree traversal, can make a resilver take over a week instead of a single day.

>>106878375
Linus has a poor understanding of how hardware actually works, and he's shown that on more than a few occasions. There's a reason that all modern file systems that try to scale implement some form of copy on write and checksumming. BTRFS is better than it used to be and I'd argue should probably be the default that most people are using on their systems, but it's still missing a fair number of features of ZFS. If those features matter to you, then there isn't a good substitute.
Anonymous No.106880512 [Report]
>>106868703 (OP)
Is this the Wayland of filesystems?
Anonymous No.106880704 [Report] >>106880755 >>106881864 >>106881974 >>106882126
>>106880503
>but it's still missing a fair number of features of ZFS.
Such as? The only one I can think of is encryption at the filesystem level which is something that did get worked on but got put on the back burner over the years because LUKS exists and there's more important things to be doing to the filesystem.
Anonymous No.106880755 [Report]
>>106880704
Caching is another big one which I wish BTRFS had. With ZFS I can designate a small 500 GB NVMe drive as a cache drive and it will actually do something useful, despite its small size, thanks to L2ARC being a really clever algorithm.

I wish BTRFS had similar caching but it seems Bcachefs is the way to go for tiered storage. Kent at least built that in from the beginning.
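adding the L2ARC device is a one-liner, something like (pool/device names are examples):
zpool add tank cache nvme0n1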
Anonymous No.106881044 [Report] >>106881062
>>106878970
no, that's incorrect. in fact swap files have to be contiguous on disk for them to be mountable /because/ they skip the filesystem layer. this is also why btrfs didn't support them early on, as they aren't regular files
there's no performance difference between swap partitions and files because they're accessed the same way
Anonymous No.106881062 [Report]
>>106881044
Indeed. BTRFS doesn't do any CoW for swap files, and they don't work with compression, etc. It's just a normal contiguous file. They had swap file support a decade ago but ripped it out because it didn't have any of the safety rails that make sure BTRFS keeps its hands off the swap file and doesn't attempt to do something really stupid. Now they have that, so BTRFS for example won't defrag a swap file that's in use.
Anonymous No.106881092 [Report]
if btrfs is showing corruption right away, i would be concerned for the health of your hardware more than anything else. btrfs actually checks its work, unlike older/simpler filesystems
Anonymous No.106881864 [Report] >>106881974
>>106880704
Encryption without needing to decrypt to validate integrity is a huge deal, because you don't have to give potentially untrustworthy offsite backups your key (or even your local backups). Being able to set up complex retention policies on your backups without decrypting is a major win.

The cache layer is significantly better. There are caveats if your memory usage is very spiky, because the kernel sees the ARC as application data and freeing it isn't instantaneous, but the ARC is substantially more robust for mixed workload scenarios than any alternative.

Draid allows for stupidly fast rebuild times on large pools with some caveats in terms of space efficiency.

Online deduplication has many uses in more enterprise settings. Probably useless for home users, but dedup, particularly the new fast dedup, is extremely useful for some use cases.

Recursive atomic snapshots. You can take snapshots on a parent dataset (subvolume) and have some/all child datasets have one created in the same transaction group. This is important for some workloads where you have an application that uses mixed block sizes, but is also useful for home use stuff because you can lump various directories like /home and /opt into different child datasets and snapshot them concurrently. This avoids inconsistent states. Maybe btrfs finally got around to enabling this, but zfs just does this and always has.
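e.g. (dataset name hypothetical), one command, every child dataset snapshotted in the same transaction group:
zfs snapshot -r tank/home@pre-upgrade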
Anonymous No.106881974 [Report] >>106882126
>>106880704
>>106881864
metadata specials allow for hybrid pools. You can have SSDs in a mirror and use dataset properties to put individual datasets purely on mirrored SSDs in a pool that has raidz devices. The metadata devices store all metadata and make things like directory listings in folders with millions of files nearly instantaneous despite the majority of the data being stored in a low IOPS raidz device. Btrfs does let you mirror metadata as opposed to erasure encoding it (and you should because of the write hole issues), but the performance difference in having dedicated devices for it and being able to allocate specific datasets to partially/completely use the specials is pretty significant.
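a sketch (pool/device/dataset names made up):
zpool add tank special mirror nvme0n1 nvme1n1
zfs set special_small_blocks=32K tank/db
with that, all metadata plus any record of 32K or smaller in that dataset lands on the mirrored SSDs.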

permissions systems allow different users to have access to different commands, which lets you do things like allow guests to manage their own snapshots but disallow them from adjusting their quotas, or have a backup server pull backups from the target system but be unable to manage snapshots on that system, which adds security if either the live or backup system is ever compromised.
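e.g. (usernames/datasets hypothetical):
zfs allow guest snapshot,destroy,mount tank/home/guest
zfs allow backupuser send tank
the first lets a guest create and delete their own snapshots, the second lets the backup box pull streams and nothing else.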

A minor annoyance to be sure, but the documentation and general approachability of ZFS is far, FAR better. The documentation is extensive, thorough, and generally provides examples with explanations. A great deal of care has gone into things like the naming conventions and general nomenclature so that things are easily understandable, even for someone who's a relative novice. btrfs has a ton of design baggage because they pushed stuff out too early, and a lot of the phrasings used are ambiguous or meant to mimic traditional raid, which can lead to points of confusion. Pick a random property on this list and judge for yourself: https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops.7.html Or a random command from here: https://openzfs.github.io/openzfs-docs/man/master/8/index.html
Anonymous No.106882126 [Report]
>>106880704
>>106881974
And to be clear here, some of these may have been implemented in btrfs since I last dug into it, but the overall point remains the same. There's a lot of features that are clearly aimed at a combination of enterprise functionality, security and ease of use that are simply lacking or require convolutions to achieve in btrfs.
Anonymous No.106882155 [Report]
only time btrfs has ever failed on me was through my own dd retardation. compression and snapshots are really handy.
Anonymous No.106882591 [Report]
>>106871882
Take the Bocchi The Rock File System pill
Anonymous No.106882644 [Report] >>106882656
>>106871392
I've been using btrfs since 2010, so I was already using it when it was experimental in the kernel and had to write a custom initrd script to be able to use it as root.

Not once did I experience corruption.

There is some weird FUD campaign by RedHat going on; they want to shill their horrid in-house xfs.
But still, nobody uses xfs. Even Fedora, their very own distribution, voted in favor of btrfs instead of xfs. Every single RedHat employee voted against it, but it was of no use, because they got outnumbered.

btrfs has checksums also for metadata. Xfs is cobbled-together bullshit with different on-disk versions where you don't know if they're compatible with each other, and none of them has proper checksumming.
So don't fall for this idiotic RedHat campaign. Are you really more susceptible to RedHat propaganda than a distro owned by them?
Anonymous No.106882656 [Report] >>106882672
>>106882644
>btrfs has checksums also for metadata.
Slight correction, you mean also for data. Xfs has metadata checksums but it does not checksum the data.
Anonymous No.106882672 [Report]
>>106882656
thanks, that's even more stupid then
Anonymous No.106882748 [Report] >>106882767 >>106883833
When basic POSIX tools like »df« don't work because btrfs is oh-so-snowflake-special, I lose interest, get wiser and continue with ext4.
Anonymous No.106882767 [Report] >>106882778
>>106882748
That's a problem with the implementation of df you're using. BTRFS comes with its own df tool just use:
btrfs fi df /path/to/subvolume
Anonymous No.106882778 [Report]
>>106882767
There's also:
btrfs fi usage /path/to/subvolume
Which has more user-friendly output.
Anonymous No.106882807 [Report] >>106883260 >>106884462
»own df tool«… does that come with btrfs's own »btrfs_is_best_fs« script as well?
Anonymous No.106883260 [Report] >>106884462
>>106882807
A df tool that can't even tell you the usage of metadata versus data is not very good, that's why BTRFS has its own. It contains way more information than the shitty POSIX version that existed long before CoW filesystems were even a thing.
Anonymous No.106883833 [Report]
>>106882748
Works on my machine.
Anonymous No.106884462 [Report]
>>106882807
>>106883260
'df' doesn't show metadata size because ext* is an ancient fs that preallocates its metadata at format time. even ntfs has dynamic metadata
make no mistake, ext4 is just ext3 with a few extra features, and ext3 is just ext2 with a journal. we don't talk about ext1
Anonymous No.106884864 [Report]
>>106880503
>If you're doing large amounts of random I/O and not tuning things correctly
yeah, the use case of opening my fucking file manager and loading thumbnails causing the audio to crackle and xorg to lag is really fucking worthy of fine tuning
i use what's recommended for desktop, which is probably 128k, i don't remember; for storage i have this tuned
i use it on _desktop_ as my root on a single drive, the driver is clearly not optimized for this
>The performance was usually better than running directly on xfs because of inline compression
apples to bowling balls; compression is another thing
Anonymous No.106885114 [Report]
I read hallucinations by btrfs fanboys about what a UNIX filesystem is supposed to be and do; I lose interest, get wiser and continue with ext4.