>>106252569
I've got Chelsio T580 NICs in my desktops and my server. Those are 40Gb QSFP+.
My smaller RAIDZ2 pool peaks at about 2.2GB/s sustained linear read/write, so a bit under half the capability of those NICs. If stuff is in the ARC I can reach 3.5ish GB/s reads. I can burst to about 3.2ish GB/s writes until it starts throttling me due to the "dirty data max" limits (it'll throttle you above a certain threshold of RAM usage when it can't flush data out to disk fast enough; you can adjust this, but the defaults are fine for most stuff you'll be doing). Locally (i.e. from one VM/container to the TrueNAS VM) I can handle 10ish GB burst writes in under half a second... it just takes 5ish seconds to actually pave it out to disk.
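If you want to poke at that throttle yourself, the knobs are the zfs_dirty_data_max family of tunables. Quick sketch of reading them from Python, assuming OpenZFS on Linux (TrueNAS SCALE) where they live under /sys/module/zfs/parameters; on CORE/FreeBSD it's sysctl vfs.zfs.* instead, and the 2.2 GB/s figure is just my pool, not a universal number:

# minimal sketch, assuming the standard Linux OpenZFS sysfs paths
from pathlib import Path

PARAMS = Path("/sys/module/zfs/parameters")

def read_param(name: str) -> int:
    # each tunable is a plain text file holding a number
    return int((PARAMS / name).read_text().strip())

# zfs_dirty_data_max: hard cap (bytes) on dirty data held in RAM before writes get throttled
# zfs_dirty_data_max_percent: the default cap expressed as a % of system RAM
for p in ("zfs_dirty_data_max", "zfs_dirty_data_max_percent"):
    print(p, read_param(p))

# back-of-envelope for the burst behaviour above: a 10 GB burst lands in RAM fast,
# but still has to flush at the pool's ~2.2 GB/s, so roughly 10 / 2.2 ≈ 4.5 s to pave out
print(f"flush time ~{10 / 2.2:.1f} s")

You can write a new byte value into zfs_dirty_data_max as root if you really want to raise it, but like I said, the defaults are fine for most home workloads.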
The other anon is full of shit in saying you want multiple GB of RAM per TB of raw disk capacity. ZFS works with whatever memory you give it. More is obviously better, but multiple GB per TB is for deduplication-type workloads. I've kneecapped my TrueNAS VM down to 10GB of RAM while doing AI bullshit that required just about every bit of RAM I had. Nothing "broke", but you definitely noticed TrueNAS chugging on random workloads when it couldn't throw enough into the cache. Large directory lookups had to go to disk unless you'd recently viewed a folder, and random I/O for my local git tree went to absolute shit, but that's because I was redlining the disks and paving out dozens of TB for the AI stuff concurrently.
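If you'd rather claw RAM back from ZFS than shrink the whole VM, the knob for that is zfs_arc_max. Same assumptions as the sketch above (Linux OpenZFS, sysfs paths), and the 8 GiB value is purely illustrative:

# minimal sketch: check (and optionally cap) the ARC size limit
from pathlib import Path

ARC_MAX = Path("/sys/module/zfs/parameters/zfs_arc_max")

# 0 means "use the built-in default" rather than an explicit cap
print("current zfs_arc_max:", ARC_MAX.read_text().strip())

# illustrative only, needs root: cap the ARC at 8 GiB so other workloads keep their RAM
# ARC_MAX.write_text(str(8 * 1024**3))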
The thing to remember for home use is that your expected load is typically going to be VASTLY lower than an actual enterprise situation. You don't need to worry about caching your TV rips because a single modern hard drive can stream 4K lossless rips to like 20 people concurrently. Home use stuff is typically 99% idle, and when you are doing large data transfers it's not stuff that caching would even benefit from. You can legitimately run a 100TB pool off of a raspi. It just won't run a large database well.
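That "like 20 people" figure is napkin math, not a benchmark: assume a heavy 4K remux around 100 Mbit/s and a modern drive doing roughly 250 MB/s sequential, and ignore the seek overhead you'd get from interleaving that many streams:

# napkin math for the streaming claim; both numbers are rough assumptions
remux_bitrate_mbit = 100      # heavy 4K Blu-ray remux, roughly
drive_throughput_mb = 250     # sequential MB/s for a modern 7200rpm drive, roughly
streams = drive_throughput_mb / (remux_bitrate_mbit / 8)
print(f"~{streams:.0f} concurrent streams, ignoring seek overhead")  # ~20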