
Thread 107113287

327 posts 84 images /g/
Anonymous No.107113287 [Report] >>107114687 >>107118692 >>107125495 >>107128126 >>107128167 >>107131721 >>107141348
/hsg/ - Home Server General
No NAT November edition

Previous >>107047472

READ THE WIKI! & help by contributing:
https://igwiki.lyci.de/wiki/Home_server

NAS Case Guide. Feel free to add to it:
https://igwiki.lyci.de/wiki/Home_server/Case_guide

/hsg/ is about learning and expanding your horizons. Know all about NAS? Learn virtualisation. Spun up some VMs? Learn about networking by standing up an OPNsense box and configuring some VLANs. There's always more to learn and chances to grow. Think you're god-tier already? Set up OpenStack and report back.

>What software should I run?
Install Guix. Or whatever flavour of *nix is best for the job or most comfy for you. Jellyfin/Emby to replace Netflix, Nextcloud to replace Google, Ampache/Navidrome to replace Spotify, the list goes on. Look at the awesome self-hosted list and ask.

>Why should I have a home server?
/hsg/ is about learning and expanding your horizons. De-botnet your life. Learn something new. Serving applications to yourself, your family, and your frens feels good. Put your tech skills to good use for yourself and those close to you. Store their data with proper availability, redundancy, and backups, and serve it back to them with a /comfy/, easy to use interface.

>Links & resources
Cool stuff to host: https://gitlab.com/awesome-selfhosted/awesome-selfhosted
RouterOS's: https://igwiki.lyci.de/wiki/Home_server#Custom
https://reddit.com/r/datahoarder
https://www.labgopher.com
https://www.reddit.com/r/homelab/wiki/index
https://wiki.debian.org/FreedomBox/Features
List of ARM-based SBCs: https://docs.google.com/spreadsheets/d/1PGaVu0sPBEy5GgLM8N-CvHB2FESdlfBOdQKqLziJLhQ
Low-power x86 systems: https://docs.google.com/spreadsheets/d/1LHvT2fRp7I6Hf18LcSzsNnjp10VI-odvwZpQZKv_NCI
Cheap disks: https://shucks.top/ & https://diskprices.com/

Remember:
RAID protects you from DOWNTIME
BACKUPS protect you from DATA LOSS
Anonymous No.107113658 [Report] >>107114637 >>107114794 >>107115074 >>107136439
Project is expanding, got a new 9U cabinet. I intend to put a table top on top of the cabinets so it looks a bit better than it does now. I also got more expansion for the plex server, I'm gonna be at 124TB, but I don't know if the cables from my HBA can extend long enough. Gotta see what happens.

Got an identical twin 14700K/128GB pair of main servers with an Asus PN53 for quorum (all Proxmox 9), one PBS server and my backup power supplies. Gonna experiment with building a Kubernetes cluster from 5 Intel NUCs this weekend probably.
Anonymous No.107113872 [Report] >>107116843 >>107124579 >>107129718
Is Pangolin the way?
https://youtu.be/8VdwOL7nYkY

Also, how can I use domain names on my local network when ProtonVPN hijacks DNS completely? I was thinking maybe adding my router's default gateway to my domain's A record, as long as that won't conflict with whatever services I need exposed to the Internet, but maybe I can do something like that through Pangolin instead? Should I use CrowdSec btw?
Anonymous No.107114201 [Report] >>107114339 >>107118219
do you guys seed your torrents forever or would rather save your disks?
Anonymous No.107114339 [Report]
>>107114201
I used to have share ratio set at max 10. Now I've got it set to 20, but usually end up sharing for way more until I've watched the media or whatever and am finally ready to move it to its final destination.
Anonymous No.107114534 [Report] >>107115681
>https://www.reddit.com/r/homelab/comments/1opbcov/building_a_rugged_localai_brain_box_need_one/

which one of you is this
Anonymous No.107114637 [Report] >>107136439 >>107145140
>>107113658
>Gonna experiment building a Kubernetes cluster from 5 intel NUC's this weekend probably.
actually have something like this planned. i have about the same number of NUCs too. what CPU gen are they?
Anonymous No.107114687 [Report] >>107115649
>>107113287 (OP)
>No NAT November edition
>IPv4 in pic
I envy your package count, my NixOS setup is at 4k+ with only two derivations saved
Anonymous No.107114794 [Report] >>107145140
>>107113658
How much were those cabinets? The main reason I hate rack servers is that they look ugly, but they don't make horizontal racks so I can make it into some kind of table or something.
Also wouldn't putting a table on it block the air flow?
Anonymous No.107115074 [Report] >>107136439 >>107145140
>>107113658
>upright UPS in a rack mount when 2U UPSs exist
i dont mean to ick on your yum but... why?
Anonymous No.107115488 [Report] >>107115533 >>107115607
Does CloudFlare actually offer E2E encryption or are they just lying out of their ass?
How would they be able to proxy content if they aren't always a man-in-the-middle?

The reason I ask is because I've seen domains using Cloudflare CDN/Proxy but not using Cloudflare SSL, however they must still decrypt everything right? So how does that work if the site doesn't use their cert?

Hm I guess you upload your site's cert to Cloudflare so they have the private key and can decrypt the packets, is that right?
Anonymous No.107115533 [Report] >>107116095 >>107116115
>>107115488
https://www.linkedin.com/pulse/cloudflares-technical-architecture-internet-largest-kord-campbell-g7ukc?tl=en

oh lol it's worse than i thought
>Cloudflare maintains extraordinary cryptographic authority through their certificate infrastructure. They automatically issue SSL certificates for all customer domains through partnerships with Let's Encrypt, Google Trust Services, SSL.com, and Sectigo. More significantly, Cloudflare operates individual Certificate Authorities for each customer account, enabling them to generate valid certificates for any subdomain without customer notification or consent.
>This certificate issuance capability extends beyond standard domain validation. Through their Universal SSL program, Cloudflare can create certificates that browsers trust implicitly, as they're signed by recognized Certificate Authorities. When organizations use Cloudflare Gateway with root certificate installation, Cloudflare gains the ability to issue valid certificates for any domain from the perspective of those devices - not just domains using their service. This transforms their infrastructure into a comprehensive certificate authority that can authenticate any HTTPS connection.
>The trust model implications are profound. Traditional SSL/TLS assumes end-to-end encryption between users and servers, with Certificate Authorities merely validating domain ownership. Cloudflare's model breaks this assumption by becoming both the certificate issuer and the connection terminator.
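You can check who's actually terminating TLS for a site yourself. A minimal sketch using openssl s_client (assumes nothing beyond openssl being installed; pass whatever domain you want to inspect):

```shell
# prints the issuer of the leaf certificate a site presents to you
# usage: cert_issuer example.com
cert_issuer() {
    echo | openssl s_client -servername "$1" -connect "$1:443" 2>/dev/null \
        | openssl x509 -noout -issuer
}
```

Sites behind Universal SSL will typically show Google Trust Services, Let's Encrypt, or SSL.com as the issuer, i.e. the edge cert, not anything the origin server holds.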
Anonymous No.107115607 [Report] >>107115619
>>107115488
oy vey!
Anonymous No.107115619 [Report] >>107115704
>>107115607
You're on CloudFlare right now, they can read everything your IP posts.
Anonymous No.107115649 [Report]
>>107114687
I was tired of waiting for the next thread, so I more or less copied the previous one with some minor tweaks, removing bloat like pfSense and Plex, etc. No NAT November was funny so I just kept it.
Anonymous No.107115681 [Report] >>107116793
>>107114534
>I'm building a small rugged AI device
>I just need one builder
Anonymous No.107115704 [Report] >>107115710 >>107115805
>>107115619
i don't mind, i'm being shit ton of nat,around 10 or more hops(proven by traceroute 8.8.8.8), 3rd worlders benefit, they need to go through some shitty beaurocracy just to find me a literal nobody?
i do wish i had anon registered local sourced vps i can tunnel my traffic into though
Anonymous No.107115710 [Report]
>>107115704
*behind shit ton
Anonymous No.107115805 [Report] >>107117175
>>107115704
i'd like to say i don't mind either, but truth is i have no choice and that's annoying. if i could i would browse private.4chan.org with slightly slower load times, but there's no such option.

what i find to be the biggest problem is that most people don't know about this. https no longer actually means the communication is encrypted between the web site and the visitor. there's a man in the middle and he can see every single password you type and compare your style of writing on other CloudFlare sites to figure out who you are. maybe you're not behind 7 proxies on your phone, for example.
Anonymous No.107115934 [Report]
>get a dell r630
>unplug status light because fuck you i know drives are unplugged and one of the power supplies isn't there
>won't boot
gay
Anonymous No.107115945 [Report] >>107115994 >>107116873
Hey fellas, I'm looking for a cheap-ish nuc or something similar for storing a bit of data and jellyfin. At most I'd maybe like to add a 2-4tb hdd for data and that's about it.

Also, real retard question but since I'd also like to use home-assistant, can I run it on the same system as the server using a vm or something similar?

Any particular models a good idea? Also if any krautfags got a recommendation where to buy used ones I'd love to hear it, ebay seems kinda shit.
Anonymous No.107115994 [Report] >>107116873
>>107115945
used thinkcentre for 50 bucks.
Anonymous No.107116095 [Report] >>107116115 >>107116129 >>107124887
>>107115533
Now realize that every cert authority bundled with your browser can generate any cert they want and your browser will trust it. Now go skim the list of cert authorities and look at all the questionable ones you have installed.
Anonymous No.107116115 [Report]
>>107115533
>>107116095
https://arstechnica.com/security/2025/09/mis-issued-certificates-for-1-1-1-1-dns-service-pose-a-threat-to-the-internet/
Anonymous No.107116129 [Report] >>107116306
>>107116095
sure but that's not the same as seeing all data and activity between you and a web site.
Anonymous No.107116306 [Report] >>107116358
>>107116129
Actually it is. Anyone can gen a valid cert from one of these authorities and use it to snoop the traffic.
https://www.ssls.com/blog/root-certificate-authority-untrusted-by-browsers-after-concerns-about-ties-to-us-intelligence/
Anonymous No.107116358 [Report]
>>107116306
if i connect to their server sure. these things are a little more involved than just giving everything on a silver platter to cloudflare, not even anything malicious just willingly and working as intended.
Anonymous No.107116793 [Report] >>107118326
>>107115681
I'll make the logo!
Anonymous No.107116843 [Report] >>107117418
>>107113872
mDNS
Anonymous No.107116873 [Report]
>>107115945
>>107115994
I do this but with an 8TB external HDD
Anonymous No.107117006 [Report] >>107117175 >>107118066
Is it safe to run HDDs like that until I wait for the HBA card to arrive?
Anonymous No.107117175 [Report] >>107120580
>>107115805
you already know whomst to blame.
>>107117006
you already asked about this and got your answer, thermal-wise and wire-load-wise.
Anonymous No.107117361 [Report]
Use-case for putting Unbound in forwarding mode in order to trust some third party to resolve your requests when ISPs can just do rDNS lookup or check SNI to get everything anyway?
Anonymous No.107117418 [Report]
>>107116843
I mean, at least the .local suffix would avoid confusion, but I won't be able to use my domain name this way like a cool home lab redditor.
Anonymous No.107117523 [Report] >>107127791
>Finally got the shit I need to retire my Haswell NAS
>TrueNAS Core is kill
Are there any good BSD-based alternatives to Core?
Anonymous No.107118066 [Report] >>107120580
>>107117006
Yes it's fine.
I personally don't run more than 5 HDDs on a single cable, but as long as not all 5 are at max speed you should be good. Fans are blowing air at the HDDs so the temperature looks okay.
Anonymous No.107118219 [Report]
>>107114201
I perma-seed everything that's in a ready to use format. So mostly movies, shows, and portable games.
Anonymous No.107118277 [Report] >>107118571 >>107130307
systemd-networkd soon
Anonymous No.107118326 [Report]
>>107116793
>I'll make the vision and product direction!
Anonymous No.107118571 [Report] >>107119700
>>107118277
Why soon? Unless you need WiFi there's no reason to use NM. NM fucking sucks.
Anonymous No.107118692 [Report] >>107119836
>>107113287 (OP)
my ISP only provides ipv6 to commercial customers. i cant take it, i hate telecom companies so much.
Anonymous No.107119700 [Report]
>>107118571
Soon as in I couldn't get to it till now
Anonymous No.107119836 [Report] >>107120820
>>107118692
You can get a /48 for free from Hurricane Electric https://tunnelbroker.net/
Anonymous No.107120532 [Report] >>107120820 >>107120958 >>107121061
I'm looking for a Homeserver system for Data Storage (around 5-10TB), Plex, Pihole and Homeassistant.

Was thinking about getting a used Optiplex, any other ideas?
Anonymous No.107120580 [Report] >>107120598 >>107120820
>>107117175
My question was if I would damage them if i run the PC with the drives connected to the power but without SATA connection

>>107118066
They are connected with two separate cables.
It's the top 3 + fan controller on one cable and the bottom 3 on a different one
Anonymous No.107120598 [Report]
>>107120580
>My question was if I would damage them if i run the PC with the drives connected to the power but without SATA connection
No, all it does is spin the disk but not move the head (because there's literally no data being written or read).
Anonymous No.107120820 [Report] >>107120958 >>107124733
>>107120532
Optiplexes are pretty decent generally.

Make sure you get one with enough space and power connections to put drives in. You might have to look the model up to make sure of that. Their power supplies are not standard form factors, so if it doesn't have a few sata/molex connectors, you'll either have to buy an ITX supply and force stuff in, or buy a compatible one that does have enough connections.

I'd recommend taking a look to see if there's any newer ones at reasonable prices, but last I checked 7th and 8th gen intel were the best bang for the buck. Consider getting one with a non T processor. Those have lower power budgets. If you know you're only going to be using it for basic stuff, then the lower clocks/power budgets are fine. If you want to host a random game server or something, spend an extra 20 bucks on one that has a better CPU.

>>107120580
There's no real harm in it beyond putting additional wear on the drives, but the degree to which that actually matters is entirely open for debate. I'd personally just unhook the power cables from the PSU and plug them in once I'm ready to do the proper setup, but that's just me. If the system is already together and that's more than mildly inconvenient, it's probably not worthwhile unless you're planning on it taking months to get the HBA.

>>107119836
Hurricane Electric IPs get blocked by a lot of anti-spam stuff. I wouldn't rely on them.
Anonymous No.107120958 [Report] >>107124733
>>107120532
>>107120820
>Optiplex
This is an incredibly solid choice. They can be bought used for very cheap, they take up little space, and tend to consume very little power. They are so small that they can be rack-mounted and two of them can go side by side on a 1U 19in shelf (or 3D printed 1U housing)
>Consider getting one with a non T processor. Those have lower power budgets
Mine has an i7-7700T; the low power budget was actually a requirement and it works great despite the stress I put it under: doing video decoding, resizing, encoding, and pushing a stream 24/7 (and some other important stuff). Its uptime is considered 'critical' along with all my other networking equipment, and the low power draw really helps, giving me ~3 hours of runtime on power failure
Anonymous No.107121061 [Report] >>107121073 >>107122743 >>107131404
>>107120532
>Optiplex
NEC is quite popular lately if you want to be special snowflake.
They come in smol.
Anonymous No.107121073 [Report] >>107122743 >>107131404
>>107121061
and big
Anonymous No.107122124 [Report] >>107122216
hello
i have an old desktop running OMV as a media server. if I wanted to use a pihole, is there any difference if I just install the software on my current server, or would there be benefits to actually getting an rpi for this purpose?
Anonymous No.107122202 [Report] >>107122610 >>107137971
Any networking guys can sanity check my 10g build out? 5gig symmetrical fiber being installed next week. ISP modem has a 10gbe -> Chinese firewall appliance running pfsense via SFP+ cat6a. Firewall to 8 port SFP+ switch via DAC. SFP+ switch has DAC to nas and compute server that will have ConnectX-4 cards. Connection to "old" 1gig switch will be SFP DAC. Last line out of SFP+ switch will be a 100m OM4 going for "the long run" (70-80m snaking all through my house) to my office which will have a sonnet sfp+ to thunderbolt adaptor for my MacBook Pro. Anything sound amiss here?
Anonymous No.107122216 [Report]
>>107122124
It's nice to have it separate if you tinker with the media server often. That way you still have your internet/DNS working whilst it's rebooting or down for maintenance/adding drives/whatever. All depends if you have others in the house also using the pihole like I do.
Anonymous No.107122404 [Report] >>107122516 >>107122639 >>107149379
Noob question. I have successfully hosted a static website from home with a dynamic DNS. Is it possible to have two domains point to the same IP but to different ports? This is because I want to host more than one website like:
/var/html/index.html
hosted on 192.168.1.5:12345
port forwarded to $my_public_ip:8080
aliased to mywebsite.com
And:
/var/html2/index.html
hosted on 192.168.1.5:69696
port forwarded to $my_public_ip:8081
aliased to anotherwebsite.com
This is the only way I think it could work, unless there's another solution to this I'm unaware of. Cheaper is better, as I'm still in "hobby" stages, but may ultimately just pay for cheap shared hosting for the other site if this isn't possible.
Anonymous No.107122516 [Report]
>>107122404 (me)
Meant /var/www
I'm trying OpenBSD's httpd at the moment, but in no way committed just yet.
Anonymous No.107122610 [Report]
>>107122202
Not directly. If you're doing vlans make sure the switches support that.

You can get all in one switches that have 4 SFP+ ports and 8-24 1Gb or 2.5Gb RJ45, which might simplify things a bit. If you already have the switches and they'll do what you want it's probably not worth upgrading, but it's a thought to consider.
Anonymous No.107122639 [Report] >>107122705
>>107122404
You can host multiple sites on a single IP and port. Browsers send the domain to the website, and internally apache and nginx have vhosts. You can bind specific domain(s) to a given site this way.
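As a sketch, that looks like this in nginx (domains and paths are the placeholders from the question; apache vhosts and OpenBSD httpd server blocks work on the same idea):

```nginx
# one IP, one port; nginx picks the server block by the Host header the browser sends
server {
    listen 80;
    server_name mywebsite.com;
    root /var/www/html;
}

server {
    listen 80;
    server_name anotherwebsite.com;
    root /var/www/html2;
}
```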
Anonymous No.107122705 [Report] >>107124725 >>107124749
>>107122639
That sounds good, thanks. Httpd's conf example has server "www.example.com" { (ports and dirs go in here) }, which might be a similar story to vhosts (I'll eventually move on to nginx).
The problem now is my router settings has only one DDNS update URL field, and each domain has its own update URL. So it looks like I can't have both update.
Anonymous No.107122743 [Report] >>107122804
>>107121073
>>107121061
where can i find these if i'm not a SEA nigga
Anonymous No.107122804 [Report]
>>107122743
at a store that sells them.
Anonymous No.107124431 [Report]
i'm confused, podman can't run podman create on lxc but docker can.
i thought podman's priority was being rootless and daemonless, but apparently it needs real root when docker can run fine with lxc's fake high-mapped-uid root.
this glows really bad
Anonymous No.107124579 [Report]
>>107113872
>how can I use domain names on my local network when ProtonVPN hijacks DNS completely
I socks proxy to a different machine and route all my LAN traffic through that. If you have no non-VPN machines then you could consider moving the VPN onto a router?
Anonymous No.107124725 [Report] >>107124905
>>107122705
Multiple ways around this:
CNAME second hostname to the first one
Update your DDNS from somewhere that's not your router and can handle multiple
Anonymous No.107124733 [Report] >>107129956
>>107120958
>>107120820

Hmm, I might run a small gameserver on it too, now that I think about it. Looking at optiplex cpus, the ones mostly available seem to be i7-7700, i5-9500, or more expensive ones, out of the two I feel like both would be able to handle the mentioned tasks, or am I wrong?
Anonymous No.107124749 [Report] >>107124905 >>107124939
>>107122705 (me)
I worked out updating the domains. Is it secure enough to request the update URL with ftp (an OpenBSD builtin) or is curl the better option? Also need to work out how to request the URL only when my IP address changes. A cronjob would be lousy.
Anonymous No.107124887 [Report] >>107146836
>>107116095
https://ccadb.my.salesforce-sites.com/mozilla/CACertificatesInFirefoxReport
>Government of Hong Kong
>Government of Turkey
>Government of Spain
lmao
Anonymous No.107124905 [Report]
>>107124725
Bumping you as you're helping, thanks.
>>107124749
Anonymous No.107124939 [Report] >>107125391
>>107124749
>only when my IP address changes. A cronjob would be lousy.
Write a small shell script that
1. curl ifconfig.me/ip to get your public IP
2. store it in some file
3. check if file content and grabbed IP mismatch
4. do the DDNS shit on mismatch

Also, personally I would prefer curl over ftp, feels easier to use
Obviously, it would be cooler to run your own DNS zone hosted on your own bind and update with nsupdate and bind keys and stuff...
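The four steps above as a small sh sketch. The update URL is a placeholder for whatever your DDNS provider gives you, and the state file path is arbitrary:

```shell
#!/bin/sh
# file holding the last IP we pushed to the provider
STATE="${STATE:-/tmp/last_pushed_ip}"

# true if $1 is non-empty and differs from the stored IP
ip_changed() {
    [ -n "$1" ] && [ "$1" != "$(cat "$STATE" 2>/dev/null)" ]
}

update_ddns() {
    ip=$(curl -s https://ifconfig.me/ip)                  # 1. grab public IP
    if ip_changed "$ip"; then                             # 2-3. compare with stored copy
        curl -s "https://your.provider/update?ip=$ip"     # 4. placeholder update URL
        printf '%s' "$ip" > "$STATE"
    fi
}
```

Run update_ddns from cron every few minutes; the provider only gets hit when the IP actually changed, which is what makes the cronjob tolerable.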
Anonymous No.107125391 [Report]
>>107124939
That makes total sense, thanks. I'll try that. And yeah, I might go with curl in the end, since ftp wasn't working getting the IP from that link.
>run your own DNS zone hosted on your own bind
What's this?
Anonymous No.107125495 [Report] >>107125968
>>107113287 (OP)
I would switch my server to guix but they don't have a smartd service yet
Anonymous No.107125968 [Report] >>107126076 >>107126401 >>107152853
>>107125495
man guix fucking sucks, i hate using lisp interpreter as my init.
if there were nix without openrc i would instantly switch.
Anonymous No.107126076 [Report] >>107126133
>>107125968
good news, nixos uses systemd.
Anonymous No.107126133 [Report]
>>107126076
i'm drunk, i meant with, not without,
Anonymous No.107126174 [Report]
Anyone running an ASUS XG-C100C?
Is it normal to idle in the high 60s low 70s?
Anonymous No.107126401 [Report]
>>107125968
systemd is good for you just use it
Anonymous No.107127007 [Report] >>107127152 >>107127871 >>107128111 >>107136976
Would you recommend hosting your own email?
Anonymous No.107127152 [Report]
>>107127007
Sure, it's not that hard and rolling your own stuff gives you a lot of flexibility.
Having a domain with a catch all alias allows you to use "unique" mail addresses for services.
So if you get mail to those specific addresses, you know exactly who snitched on your data.
Anonymous No.107127262 [Report] >>107127264
why is firewall with docker still ass
Anonymous No.107127264 [Report]
>>107127262
just stop using ufw
Anonymous No.107127790 [Report]
I got ten 12TB SAS disks for free from my wife's work. What's the absolutely cheapest way to get them running in a raidz2 (or z3)?
Idk whether to trust the Chinese breakout cables+HBA, and compatibility with certain older disk shelves might be an issue.

Disks are Dell branded btw
Anonymous No.107127791 [Report]
>>107117523
If you're used to it, just do pure FreeBSD. That way at least you don't rely on some assholes cutting off support. Or if you really like the UI - go with Scale.
Anonymous No.107127871 [Report]
>>107127007
not really. i am glad i no longer deal with that shit. if you do, mailchimp or sendgrid has/had a free tier for outgoing. use it or your deliverability will be poor.
Anonymous No.107128040 [Report] >>107128106 >>107128257 >>107128430
>buy our 10TB HDD's :^)
>oh it's actually more like 9
>oh and you need to buy multiple for storage
>you cant use 1 of them its for a backup
>oh and you can't fill them up entirely because they don't work well when nearly full so leave 10%

we have been played for fools for years now
Anonymous No.107128106 [Report]
>>107128040
>no, they're meant to be that size. they're TB, not TiB
the biggest scam of them all
Anonymous No.107128111 [Report]
>>107127007
absolutely not
Anonymous No.107128126 [Report] >>107128510
>>107113287 (OP)
mikrotik vs cisco vs huawei managed switch?
which one of these is least cancer
Anonymous No.107128167 [Report] >>107134208
>>107113287 (OP)
nice uterusOS
Anonymous No.107128257 [Report] >>107128334
>>107128040
>they don't work well when nearly full so leave 10%
Idk much about exactly how storage and allocation works, but if this was the case, for a 10TB drive could you just format only 9TB and leave the other 1TB unallocated?
Anonymous No.107128334 [Report]
>>107128257
Gramps is just yelling again because he invested BIG into IDE jumpers
Anonymous No.107128430 [Report]
>>107128040
>oh and the setup you bought, buy it again because that 1 drive isn't a backup it's ""redundancy"""
>and actually you need at least 2 redundant drives anyway
Anonymous No.107128510 [Report] >>107128543
>>107128126
>mikrotik
prosumer
>cisco
safe, good for your windows servers.
>huawei
hate america
Anonymous No.107128543 [Report] >>107128995
>>107128510
tfw want all three mashed together

any good cisco model recs? dont need poe, fanless preferred and dont need more than 16 ports. i have a few servers with 10 gig nics but i dont make use of high bandwidth

mikrotik has safe $150 options
huawei is $100 somehow, but no firmware updates since 2023
i dont know how to navigate cisco skus, want something new with firmware updates for upcoming years
Anonymous No.107128995 [Report] >>107129608
>>107128543
huawei is $100 because of salt typhoon you idiot im not helping you
Anonymous No.107129131 [Report] >>107129651 >>107129827 >>107149449
if you copy files even for personal use in the USA its a felony because you dont own those files and its DMCA copyright infringement.

You are committing a crime by copying.

Enjoy being raided by the FBI because we all know you dont need 16TB for 1000 photos.

You're obviously storing pirated and copyrighted material because you need that much space.

Enjoy going to jail. It might as well be a crime to own HDD's in america because of copyright. They should just outlaw HDD's because what use would someone need besides storing pirating software/games/movies.

You are thieves and all your IP's are being recorded.
Anonymous No.107129390 [Report]
How often do you guys shut down your server for general maintenance, ie. cleaning?
I plan on getting an open air case. It's small and I can put it in a rack shelf, if I ever decide to get a rack later on. But I am worried about dust.
Anonymous No.107129608 [Report]
>>107128995
i dont need to trust the switch in my threat model

if i cared about backdoors i would control my own supply chain and design my own asics. salt typhoon is not relevant for a casual installation anyway. huawei was off the table ever since i read the support page and saw even their most recent ones were abandonware

you already helped me enough
Anonymous No.107129651 [Report] >>107130218
>>107129131
Get a load of this loser.
Anonymous No.107129718 [Report]
>>107113872
If you can add custom redirects to the dns you can do basically anything. what I do with airvpn is make *.lan redirect to 192.168.0.20 (my server) and on the server I have npm reverse proxying to my services, so like jellyfin.lan is proxied to 192.168.0.20:33013. You can also skip npm and do it directly in the dns if it supports entering domains and ports (if needed). Only caveat is you obviously need to be using that specific dns for this to work.
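If the DNS you control happens to be dnsmasq, that *.lan wildcard is a single config line (IP being the server from the post above):

```conf
# resolve .lan and every subdomain of it to the home server
address=/lan/192.168.0.20
```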
Anonymous No.107129827 [Report]
>>107129131
>when the loli post nut clarity hits at 3 am
Anonymous No.107129956 [Report]
>>107124733
Yeah those would be fine. The higher end CPUs above those cost disproportionately more than any performance gains they may have, so it's not worthwhile unless you actually think you'll be using that power.

>https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video
Check the intel QSV page on wiki. Kaby, Coffee and Comet lake (7, 8, 9, and 10 thousand series processors) support decoding h265 and encoding h264. If you can find a reasonably priced system with an 11, 12, or 13 thousand series processor (Rocket, Alder, and Raptor Lake respectively), maybe consider that because it supports h265 encoding as well as av1 decoding, but last I looked those were prohibitively expensive compared to the Kaby/Coffee lake parts (7, 8 and 9 since coffee had a refresh in the 9k series). And the latest stuff that supports AV1 encoding is still brand new, so not cheap, and not worthwhile unless you actually have a need for it.

Performance gains on post 10k+ chips aren't nonexistent, but they're marginal unless you get a lot more cores/higher clocks. Intel stalled hard on the IPC improvements which is a big part of why AyyMD bent them over and fucked them with both barrels with zen. It's really about the QSV capabilities. If you don't need the fancier stuff, fuck it, save the money and don't bother. For the price of these shitboxes you can always buy a new one in 3 years once the newer stuff has dropped in price.
Anonymous No.107130218 [Report]
>>107129651
I mean the US operates in a grey zone. Chances are nothing will happen to a zoomie pirating the games and the minecraft movie since it affects no one and its legally not stealing or theft since its just infinite copying of game/movie files. The only law being broken is DMCA, that's it.

BUT its still illegal like with weed despite no county court or state caring about personal game piracy. Even weed has more stigma and nuance because its a physical property.

Piracy is a loophole that allows the federal government to arrest people they don't like for a retarded reason while letting everyone else break copyright laws because its not about justice its about targeted persecution if you are someone like snowden or someone against the US government.

The US government would do better if it just let these people leave the country like send them to russia and revoke their citizenship.
Anonymous No.107130307 [Report] >>107137431 >>107140721
>>107118277
>hath-rust
Anonymous No.107130407 [Report] >>107130754 >>107133173
If I have 3x 12tb drives and 1 14tb drive in an array with ZFS, do I get only 24tb of usable storage with the 12tb drives being a bottleneck, or do I get slightly more from the 14tb drive?

How hard is setting up a ZFS array with Unraid (can I do it in HexOS?) if this will be my first server/NAS and I've never used anything but Windows before, and never use powershell, command line, etc?
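Back-of-envelope math for that question, assuming all four drives go into one vdev: ZFS truncates every member of a vdev to the smallest disk, so the 14tb drive contributes only 12tb either way.

```shell
#!/bin/sh
smallest=12   # TB; the 14TB disk is truncated to this inside the vdev
ndisks=4

# raidz2: usable = smallest * (disks - parity disks)
echo "raidz2: $(( smallest * (ndisks - 2) ))TB usable"

# two mirrored pairs (ZFS's RAID10 equivalent): half the truncated raw space
echo "striped mirrors: $(( smallest * ndisks / 2 ))TB usable"
```

Both layouts come out to 24TB (before filesystem overhead); the extra 2tb on the 14tb drive is wasted until every disk in the vdev is at least 14tb.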
Anonymous No.107130754 [Report] >>107131270 >>107133173 >>107133368 >>107137168
>>107130407
First ask yourself if you actually need ZFS. Remember, with ZFS you can't just drop single drives into an existing pool, you have to add whole new vdevs. So whenever you want more storage you need to buy at least 2 drives.
Second, ask yourself if you actually need UnRAID. It costs money to use (who the FUCK pays for linux?) and you might be better off using OpenMediaVault instead. UnRAID is for those that need to write a lot of files every day and need it backed up immediately.
If you are using this as a media storage server then you don't need ZFS, you don't need unRAID. It's just going to give you a headache.
If you need ZFS then use TrueNAS. I wouldn't touch unRAID with a 10-foot pole.
Anonymous No.107131270 [Report] >>107133368
>>107130754
I only have a 4 bay NAS anyways so the amount i'd be starting with is the max amount of drives I can have at once no matter what.

If TrueNAS and HexOS can do ZFS, then I'd just use those, but my initial question still stands: can I get more usable storage out of 3 12tb drives and one 14tb one than I could with 4 12tb drives, and how easy is it to set up for somebody who is entirely new to installing an OS and using anything but Windows?

I'd just go with the default Ugreen OS my NAS came with, but I hear ZFS has better file validation stuff and I want to try to get more storage out of having a 14 or 16tb drive being the final drive in the array

My plan was to do RAID 10, but ZFS has its own separate array structures, right, not the normal RAID ones?
Anonymous No.107131404 [Report]
>>107121061
>>107121073
literally rebranded lelnopos
Anonymous No.107131721 [Report] >>107132702
>>107113287 (OP)
I'm going insane, I'm doing something wrong.
I bought a domain name via CloudFlare and I'm trying to access Immich and Nextcloud remotely using nginx proxy manager as a reverse proxy.

On CloudFlare I added an A record with the name immich which points to my public IP, so immich.<mydomain> should point to my public IP.
I've done the port forward of 80 and 443 on my opnsense router to my server.
I've made sure there are rules to allow 80 and 443 through UFW
I've set up the proxy host on nginx with the destination as the IP of the server running immich and the port is correct.
Yet no matter what I do I'm getting 502.
There must be something wrong with nginx and immich but I can't figure it out.
They're both running on the same server via docker, but in different docker containers. I've made sure they're both running on the same docker network.
I'm not sure what else to check
Anonymous No.107132664 [Report] >>107132717
Did you guys enable any extra things in unattended-ugprades or leave it as it is after installation?
Anonymous No.107132702 [Report] >>107137415
>>107131721
get into the nginx docker container and curl or wget http://immichcontainername:port
make sure the immich port matches the one in docker ps
most proxy problems I've had were these things + a typo somewhere
Anonymous No.107132717 [Report]
>>107132664
>ugprades
I turned those off. don't want my stuff breaking for no reason. server isn't accessible from the internet, so I don't care much for security.
Anonymous No.107133173 [Report] >>107137168 >>107139188
>>107130407
Individual vdevs are limited by the capacity of the smallest drive. You could put all 4 of those drives into a raidz1 and get 36 TB of usable capacity, although raidz1 with larger capacity drives can get a bit questionable because of how long resilvers take.

If you do two mirrors, you'll have 2 12 TB mirrors because each mirrored vdev is limited by its smallest capacity drive. I wouldn't worry about this though because you're talking about single digit percentages of lost space. In a mirror configuration you'd "ideally" have 25TB of usable space, so you're losing 4% capacity. That's not worth freaking out about. Also note that this isn't permanent: if you later replace the 12TB drive in the mixed mirror with a higher capacity drive, you can expand the vdev vertically to whatever the new smallest drive is. So if you replace the 12TB drive with a 16TB drive, that mirror grows to 14TB.
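That math, as a quick shell sanity check (TB figures, hypothetical layouts for the 3x12 + 1x14 question):

```shell
# Usable capacity, back-of-envelope, every vdev capped at its smallest drive.
mirrors=$(( 12 + 12 ))   # 12|12 mirror + 12|14 mirror -> 24 TB usable
ideal=$(( 12 + 13 ))     # (12+12)/2 + (12+14)/2 = 25 TB if nothing were wasted
raidz1=$(( 3 * 12 ))     # 4-wide raidz1, all four drives capped at 12 TB
echo "mirrors: ${mirrors} TB (ideal ${ideal} TB), raidz1: ${raidz1} TB"
```

So the mixed mirror costs 1 TB out of 25, the 4% mentioned above.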

>How hard is setting up a ZFS array with Unraid (can I do it in HexOS?)
Not difficult, but I'd recommend using truenas if you just want a storage appliance. It's built around zfs and supports it better than unraid does.

>>107130754
This is completely wrong. ZFS lets you trivially add devices to a pool. That's the whole point. A pool is composed of virtual devices; each virtual device is its own raid array responsible for its own redundancy, and pools are effectively striped across those arrays.

ZFS now has raidz expansion so that you can append drives to a raidz device and make it wider. You can start with 3 4TB drives in a raidz1 for 8TB, add a 4th, and grow that device to 12TB of usable space.
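Assuming OpenZFS 2.3 or newer, the expansion above is a single attach; the pool, vdev, and device names below are placeholders:

```shell
# Find the raidz vdev's name (e.g. raidz1-0) in the status output first.
zpool status tank
# Append a new disk to the raidz1 vdev; -w waits until expansion completes.
sudo zpool attach -w tank raidz1-0 /dev/sdd
```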

>>107131270
Do you actually need RAID 10 for performance reasons, or is this a bulk storage device? If it's simple bulk storage, consider using raidz instead of mirrors.
Anonymous No.107133205 [Report]
Can anyone explain what the use case is for vagrant in a prod or even dev environment?
>>107133085

Vagrant's code isn't portable between hypervisors, so it has no advantage over native solutions.

Furthermore, the base image repository seems kind of outdated and sus if anyone can upload there.
Anonymous No.107133358 [Report] >>107133364
i wanna get my family off the subscription plan goy, what hardware should i get in current year? i'm anticipating non ideal conditions and shitty devices i would say worst case is like 6 people transcoding at the same time if nobody shares. being able to handle 10+ or even more would be cool, to give them the option to. i also don't have any kind of storage for it yet. is a tower pc + gpu + sas card and a bunch of 12/18tb's as needed a sane plan? what cpu/gpu should i be looking for? how much should i expect to spend minus the drives? heat isn't an issue i can put it in a less used room.
Anonymous No.107133364 [Report] >>107135078
>>107133358
>what hardware should i get in current year
to host plex and an arr stack**
Anonymous No.107133368 [Report] >>107134883 >>107139188
>>107130754
>with ZFS you can't just add drives into the new pool
You can now, the data just won't be balanced across all drives unless you recopy it or run a balancing script

>>107131270
>Can I get more usable storage out of 3 12tb drives and one 14tb one than I could with 4 12tb drives
With all drives in one vdev, the smallest capacity sets the max addressable space for each drive, so it behaves like 4x 12tb. However you can put multiple vdevs in 1 pool, so you could do the 3 12s in one vdev and the 14 in another, but then the 14 will have no parity since parity doesn't extend across vdevs. Actual usable space depends on how it is set up.
https://wintelguy.com/zfs-calc.pl

>how easy is it to set up
Truenas is piss easy, 100% gui. I was a wintoddler and had absolutely no problem getting my first pool going a few years ago.

>it has its own separate array structures
Yes, but you can recreate equivalents without much difficulty. For example, for raid10 you would just do multiple mirror vdevs in one pool; all normal vdevs in the same pool get striped together.
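A minimal sketch of that raid10 equivalent, with pool and device names as placeholders:

```shell
# "raid10" in ZFS terms: one pool, two mirror vdevs, records striped
# across the vdevs. Device names are hypothetical; check lsblk first.
sudo zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd
zpool status tank
```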
Anonymous No.107133387 [Report] >>107133740
what would be a good name for a vps company?
Anonymous No.107133414 [Report] >>107133589
so like i use ubuntu server on my pc to run an smb share on a zfs raid, with lxc containers running my services and stuff + a few minecraft and other game servers. since ubuntu wants to replace a bunch of the gnu utils and stuff with rewritten ones in rust, and snaps and everything, i want a new operating system for my server. something headless. i don't need vms, docker or a web interface. i just want something that i can use to run my smb share, run my zfs raid and some game servers and services in lxc containers, with long support (like 5 years and up ideally). i do actually like ubuntu and their long support with ubuntu pro + their release model.
Anonymous No.107133589 [Report]
>>107133414
there's debian. it'd be very similar to setup.
Anonymous No.107133740 [Report] >>107133767
>>107133387
v-piss.net
Virtual Private Instant Server Service.
Anonymous No.107133767 [Report]
>>107133740
lmfao. that's a good one. I was thinking along the lines of useastnone or some corny fucking joke like that.
Anonymous No.107134208 [Report]
>>107128167
At least I have access to one. I bet you don't even have s-ex with whatever virgin OS you're using.
Anonymous No.107134883 [Report] >>107135360 >>107137216
>>107133368
>You can now, the data just won't be balanced across all drives unless you recopy it or run a balancing script

Just run zfs rewrite to rewrite the data. Doesn't require external scripting.

RAIDZ expansion does reflow data across the drives, but you still need a rewrite if you want to utilize the new parity ratio on the old data.
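For reference, a hedged sketch of that command (needs an OpenZFS recent enough to ship `zfs rewrite`; the path is a hypothetical dataset mountpoint):

```shell
# Recursively rewrite existing files so their old records are rewritten
# under the pool's current geometry/parity ratio. See zfs-rewrite(8).
sudo zfs rewrite -r /tank/media
```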
Anonymous No.107135050 [Report] >>107146474
one hour to find out if my 'run every 3 hours' cron job succeeds or fails
Anonymous No.107135069 [Report] >>107136373 >>107137847
Sorry in advance already if this would be better for sqt
I want to mount 3.5 hard drives in an (micro?) atx case, but they don't fit exactly.
Is there a standard part I can use to mount/secure the drives in the case? What is the part called?
Are all cases the same width or do I have to take measurements too?
Anonymous No.107135078 [Report]
>>107133364
is that one of those small unzip bombs i've heard about?
Anonymous No.107135084 [Report] >>107135194
ok guys i want to fix my fuckup eventually
>during covid
>start experimenting with truenas
>build system on truenas 12, i think 12x6 tb drives in 2 pools
>fuck something up
>basically removing half of system partition by accident
>be able to connect screen and keyboard to machine
>i think i was able to export the pools before shutting down the machine

ok now do i just install truenas 12 and import the pools again? it should just work right?
Anonymous No.107135194 [Report] >>107135240
>>107135084
Yes. When you go to import they should automatically be detected. Could possibly have errors, just run a scrub not a big deal. Probably should do it anyway just to be safe. If shit's fucked you might have to do some command line fuckery but I kind of doubt it as long as you didn't fuck up your pool, so I wouldn't worry about it until you get import errors and failures.
Anonymous No.107135240 [Report]
>>107135194
that would be awesome because there is still a lot of stuff on there (mostly hi res flacs)
Anonymous No.107135360 [Report]
>>107134883
Neat, didn't know about that
Anonymous No.107135399 [Report]
>2000 dollars in drives sure saved me money
Anonymous No.107135421 [Report] >>107136093
When were you able to just add drives willy-nilly in ZFS? If that's true I might switch to TrueNAS.
Anonymous No.107136093 [Report] >>107136196
>>107135421
They added expansion about one year ago I believe.
Anonymous No.107136196 [Report]
>>107136093
Ah, that's why. I haven't looked into ZFS for almost 5 years now.
Anonymous No.107136297 [Report] >>107136316 >>107137471
One of the drives in my current 8 TB mirror has failed, so I'm taking the chance to build a 40 TB RAIDZ across six 8 TB drives in its place.
I'm thinking of throwing in a pair of 480 GB Intel MLC SATA SSDs as a special vdev for metadata storage. Is this a good idea or am I being fucking retarded?
Anonymous No.107136316 [Report] >>107136335
>>107136297
I don't see how that's a bad idea. If you need those 40TB then go for it.
Anonymous No.107136324 [Report]
Whatever happened to enterprise schizo?
Anonymous No.107136335 [Report] >>107136884 >>107137471
>>107136316
it's more concerning the special vdev. It's hard to tell how big it should be given it's heavily dependent on your workload, but I don't do anything too exotic so I'll probably be okay.
Anonymous No.107136373 [Report]
>>107135069
You can't just mount .5 of a drive mate
Anonymous No.107136439 [Report] >>107145140
>>107113658
>>107114637
>Kubernetes cluster
What do you use these for? Is it something I can deploy from a server running trueNAS on the metal?
>>107115074
I can see the case where you already have one, and end up needing another before you had bought a rack. They look good, I like the displays on them. Eventually they will die and you can always go rack mount then. I'm currently contemplating my current upgrade path with that part of my build so it's something I've been considering.
Anonymous No.107136884 [Report] >>107136924 >>107137264 >>107137471
>>107136335
Isn't conventional wisdom 1 gb per tb? I would imagine 12:1 would be more than enough.
Personally I wouldn't fuck with special vdevs on pools that I care about and don't absolutely need a special for. The risk of losing everything if you get unlucky, for potentially minimal to zero gain, plus them being nonremovable, always turned me off. Though I am overly cautious I think.
Anonymous No.107136924 [Report] >>107137027
>>107136884
Yeah obviously there's a risk factor with them but I think the performance gains would be worth it. I'm already prioritising capacity over redundancy by using single disk parity anyway. I'm not a critical business datacentre that needs absolute uptime at all costs and I'll be maintaining multiple backups.
Anonymous No.107136976 [Report] >>107137006
>>107127007
As long as you have an IP that isn't auto-blocked by the email cartel, sure.
Anonymous No.107137006 [Report] >>107137056
>>107136976
you host your own email?
Anonymous No.107137027 [Report]
>>107136924
Well if you know what you want and you're prepared for the risks and how to mitigate them, I guess go right ahead then, we're all big boys and girls(male) here.
Anonymous No.107137056 [Report]
>>107137006
Used to. Had enough of paying for a VPS when I could just pay for an email service. And all of my ISP's residential IP ranges are on block lists.
Anonymous No.107137168 [Report] >>107137216 >>107137644
>>107130754
>>107133173
I have a FreeNAS build that's been dormant for a while. Hardware:
https://www.newegg.com/gigabyte-ga-f2a88xm-d3h-micro-atx-amd-motherboard-amd-a88x-fm2-fm2/p/N82E16813128659
https://www.newegg.com/amd-a10-series-a10-7890k-godavari-socket-fm2-desktop-processor/p/N82E16819113402
https://www.fractal-design.com/products/cases/node/node-804/black/
32GB of ram, 4x8TB HDD, raidZ1 (12TB total usable)
I have to get it back up and running, but assuming it's still working, and I'm using like 8TB total, I'm considering how to expand it. I have on hand 3x20TB and 1x12TB. Would making a raidZ2 be the best option here, and then go on the upgrade path by expanding that pool with more 20TB drives? Or just update that larger pool so it's all 20TB (40TB total), and then create an additional pool of 4x20TB in Z2 and stripe them for double the performance? I don't foresee needing more than that much storage for a while, but that can change pretty quickly.
Anonymous No.107137216 [Report] >>107137644
>>107137168
>cont.
Alternatively, if I keep that initial new pool (4x20TB raidZ2) and just expand it as needed, is there a better option to just add an SSD or two for different functions? I have multiple 8TB and 4TB SSDs, but only 8 SATA ports, so maybe a 6x20TB Z2 for capacity and the SSDs for caching or whatever to upgrade speeds. Not sure which is the best route.
I found a 970 in the box too, so probably try to use that as well, either in a VM or for encoding if possible.

>>107134883
Anonymous No.107137264 [Report] >>107137397
>>107136884
conventional wisdom doesn't read freenas forum posts from a decade ago about hardware recommendations.
well, actually it does. common sense doesn't.
Anonymous No.107137397 [Report]
>>107137264
I thought it was 1%
Anonymous No.107137415 [Report] >>107137432
>>107132702
I double checked this. I can curl immich through the nginx docker and that works fine.
I'm thinking it must be an issue with either the way I'm port forwarding or UFW.
Anonymous No.107137431 [Report] >>107140721 >>107142063
>>107130307
Why yes. Do you not contribute to the network??
Anonymous No.107137432 [Report] >>107137629
>>107137415
Okay, I disabled UFW and it worked. Not sure why ufw is fucking things up, I have rules to allow 80 and 443
Anonymous No.107137471 [Report] >>107137489 >>107142179
>>107136297
metadata specials are almost never a bad idea, and I'd argue they should be the default choice unless you're doing a purely SSD pool or simply cannot fit more hardware in a mini build. Metadata is tiny.

You don't need to use SSDs either. You can use random mechanical drives if you aren't doing something that needs the IOPS. Think about it. You can build a functional pool of spinners in a mirror, why would metadata not work on them when metadata and data already does? The benefit of a special is that you have mirrors for random I/O and you can use the low IOPs raidz for bulk, that's true regardless of what you're using as metadata drives. If you don't have a few spare 500GB+ clackers lying around, fuck it, get SSDs, but if you do, try doing it with mechanicals first.

>RAIDZ1
I would recommend going with RAIDZ2 if you plan on expanding things later. RAIDZ resilver times can get substantial.

>>107136335
>>107136884
It's wildly dependent on what your recordsizes are, your redundant metadata settings, and whether you're using metadata bloat settings like dedupe. As a super general rule, you can expect metadata usage with default settings to be around .1% of your pool's logical utilization. Smaller records mean more records per TB, which means more metadata per TB; larger records mean fewer. With multi MB blocks you can easily get into the sub .01% metadata ratio range, and for bulk media there is zero good reason not to be using at least 1MB records.

>continued
Anonymous No.107137489 [Report] >>107137571 >>107142179
>>107137471
>continued from

To see the metadata usage on your current pool use
>sudo zdb -LbbbA poolname
The row that you are looking for has L1 total in the righthand column. L0 is your general data percentage, and L1 is general metadata.
I run 3 wide mechanical mirrors on my RAIDZ2 pool and /heavily/ use the smallblocks property. Again, I'm doing this on rust and I have zero issues. Remember, smallblocks is a per dataset property, so you can set a reasonable baseline and if you have a dataset that really needs something else, set it as needed. My default is 256k, and for VMs/containers I just set that at the max of 16MB so no matter what the container/vm is doing it's living on the mirrored specials. The containers/VMs usually have small (64k or smaller) record sizes set for performance reasons, but a few of my containers are using larger records because they're storing stuff that compresses better with 256k-512k records. My bulk storage just filters anything below 256k to the metadata drives (so random txt files, thumbnails, and other shit), containers live purely on the metadata drives.

Also note that you can turn down the redundant_metadata setting per dataset. If you're running 3 wide with metadata, you still need to have 3 failures/UREs to actually lose anything. If you have some workload that happens to need a fair amount of write IOPs, you can set redundant_metadata=some on it and that will improve your performance a bit, and shrink the metadata usage even further if you need that for whatever reason.

>what if I have too much small blocks data
The special devices have a reservation for metadata. There's a tunable to adjust this if needed, but the default is that 25% of the space will be reserved for metadata, and once you exceed that small blocks are simply written to the main pool.
>https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-special-class-metadata-reserve-pct
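One hedged way to turn the ~.1% figure above into a special size, as shell arithmetic (the pool size and the 4x headroom multiplier are assumptions, not rules):

```shell
# Special vdev sizing estimate from the ~0.1%-of-logical-data rule of thumb.
pool_gb=40000                     # 40 TB of logical pool data (hypothetical)
meta_gb=$(( pool_gb / 1000 ))     # ~0.1% metadata at default recordsize
# The special reserves 25% of itself for metadata by default, so sizing it
# at 4x the estimate leaves the rest free for smallblocks data.
special_gb=$(( meta_gb * 4 ))
echo "metadata ~${meta_gb} GB, comfortable special size ~${special_gb} GB"
```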
Anonymous No.107137571 [Report] >>107137695
>>107137489
Say I have a cache/metadata pool with three SSDs in it, and one is larger than the other two. In a Z1 config, would the larger one use the extra space to correct/replace bad sectors on it? I don't mind losing the capacity, I want to use it as a way to prevent total data rot if one of the other two fail. Then I can expand as needed.
Anonymous No.107137629 [Report] >>107137776
>>107137432
pretty sure docker still completely ignores ufw
Anonymous No.107137644 [Report] >>107137686 >>107137773
>>107137168
>>107137216
>4x8TB RAIDZ1 --> 12TB usable
These numbers don't make sense. 4 8TB drives in a RAIDZ1 would be 24TB of usable capacity.

If I'm reading this correctly you are saying that you have
>4 8TB drives
>3 20TB drives
>1 12TB drive.

So 8 drives in total. The issue for you right now is that your capacities are a bit all over the place, so you can't trivially throw them into a fat vdev and call it a day. If you had mixed 12 and 14TB drives, it wouldn't really be worth worrying about, but 8TB drives mixed with 20TB drives would shave a ton of usable capacity off of any RAIDZ vdev, and you don't have enough drives to really justify multiple RAIDZs.

You do have some frankenmonster options if you're willing to play with mdadm a bit. You can RAID0 the 12TB and an 8TB drive together to make a 20TB block, then RAID0 the remaining 3 8TB drives together into a 24TB block. That gives you 60TB of usable RAIDZ2 capacity. Note that if you do this your performance is going to be a bit chaotic because I/O is going to get limited by the slowest drive (probably the 8s) for some operations. You'd also be making fault detection a lot more of a pain in the ass by doing this because ZFS just sees the RAID0 as a single device. If one of your 8TB drives in the 3 way fail, you won't have a clean way of telling which one did without a lot of poking and prodding.

There's nothing intrinsically "wrong" with doing this. You still get all the usual data guarantees and can survive failures. ZFS just cares about block devices, and a RAID0 is a block device, just one made of multiple disks. Pedantically speaking your failure rates will be higher because you have 2-3 devices per device (xzibit meme goes here), but as long as you're scrubbing regularly and are willing to replace any failed RAID0 with a new 20TB drive, you're fine. Technically you could find the bad drive and just throw away the RAID0 and rebuild it with an appropriate replacement too.
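The frankendisk above, sketched with placeholder device names (mdadm --create is destructive, so don't run this against disks holding data):

```shell
# 12TB + one 8TB striped into a ~20TB block device (placeholder devices).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdf /dev/sdg
# The remaining three 8TB drives striped into a ~24TB block device.
sudo mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sdh /dev/sdi /dev/sdj
# raidz2 over the three real 20TB drives plus the two composites:
# 5 devices capped at 20TB each, minus two parity -> ~60TB usable.
sudo zpool create tank raidz2 /dev/sdc /dev/sdd /dev/sde /dev/md0 /dev/md1
```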
Anonymous No.107137686 [Report] >>107137773 >>107137858
>>107137644
Or, if you don't want to deal with the headache of buying the same capacity drive every time (because of costs), then use something like unraid or omv; they also support swapping out or adding in new drives without any problem. If you have the money for multiple same capacity drives then go buy those first.
Anonymous No.107137695 [Report] >>107137773 >>107137773
>>107137571
No. vdevs are constrained by the smallest capacity drive. AnyRAID is going to change that a bit, but that has some caveats as well.

I guess if you KNOW for sure that you have bad sectors in.. idk.. the first 3 GB of a drive, you could make a partition after that point and pass a different section of the drive to zfs, but that's more of a mechanical drive thing. SSDs don't store data in consistent locations so that concept doesn't really apply.

Plus, I'd just assume that any drive with bad sectors is on its way out and replace it. History has shown that that's usually an accurate bet. I've got a WD blue somewhere in a closet that has 4 bad 4k sectors in the middle of it that was stable like that for 5 years, but in general once a drive starts to show issues, it's going to die soon.

> cache/metadata pool with three SSDs in it
Do you mean a separate pool or a metadata vdev in a larger pool?
Anonymous No.107137773 [Report] >>107137868
>>107137644
>>107137686
Thanks. The environment is 4x4TB currently, typo on my side.
>>107137695
>Do you mean a separate pool or a metadata vdev in a larger pool?
I'd think that a Z0 pair of old 128GB SSDs would be good for a metadata cache, and increase the size as I increase the SSD sizes on that pool.

>>107137695
I was more asking about the flash-on-disk support for using good nand. Like if I have a 3 disk SSD array in Z1, two being 120 GB and the third 240GB, does the pool have a better chance of surviving and replicating in the future with the one drive being larger, for TRIM or whatever?
Anonymous No.107137776 [Report]
>>107137629
that's not exactly it. docker creates its own chain in iptables and you have to make sure you're doing the rules in such a way that they apply properly.
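Concretely, Docker evaluates the DOCKER-USER chain before its own forwarding rules, so host-side filtering for published ports belongs there; the interface and subnet below are assumptions about anon's network:

```shell
# Drop anything that isn't from the LAN before it reaches published
# container ports. eth0 / 192.168.1.0/24 are placeholders.
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```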
Anonymous No.107137804 [Report] >>107137830 >>107137847 >>107137888 >>107142016
what can i even do with a home server
Anonymous No.107137830 [Report]
>>107137804
Anonymous No.107137847 [Report]
>>107135069
zip ties
>>107137804
server is literally just a pc you access remotely, so literally anything
Anonymous No.107137858 [Report] >>107137910
>>107137686
AnyRAID will solve a lot of these weird mixed capacity situations, but I don't think telling other people to go use alternatives is good advice. UNRAID and the like do not have anywhere near the same level of integrity guarantees because they don't handle the write hole problem. Yeah it's a niche issue, but they have failure states that can be nightmarish to recover from. ZFS really won't care about this kind of stupidity.

I'd trust a mdadm layer shimmed under zfs more than I'd trust other stuff.

Should you do this in a production system? Probably not, but at the same time, home use cases are a lot more forgiving because the loads are lighter and they're unlikely to be as horrifically fragmented. I've also handled some pretty catastrophic fuckups in production systems in the wild doing exactly this kind of garbage during the drive shortages around covid. We had a rack where we mounted a backplane with a pile of 4TB HGST ultrastars to an oak plank and zip tied it to the rack. A bad power supply cooked one side of a bunch of mirrors and we couldn't get enough replacements for months to fill the shelf. We had 4 drives in the shelf and 16 hanging from the wood.

Many conversations were had with the client about how their penny pinching almost fucked them.

>MFW I got to drill holes into a live server to run the SAS cables.
Anonymous No.107137868 [Report] >>107137909
>>107137773
>4x4TB
Then you don't have sufficient capacity for what I typed above. You could do a frankendisk with the 12TB and 2 4TB drives. That'd give you 40TB of usable capacity with a raidz2.

>metadata cache
metadata specials are NOT cache. L2ARC is a cache drive, and I can all but guarantee that you won't benefit from L2ARC.
>Z0
That isn't a type. Not sure what you mean.
>SSD wear
Drive failure is chaotic and random. It's exceedingly unlikely that you will reach endurance thresholds, so you can largely ignore them. Failures outside of drives shutting down due to endurance are random chance based. The whole "buy different types of drives to stagger the failures" is largely FUD unless you have drives with known issues... and I mean... just don't buy or use drives like that? Or if you're that paranoid, write out a TB to one of the drives so they're staggered. It won't make any practical difference, but if you need your placebo dose to keep your autism in line, so be it.
Anonymous No.107137888 [Report] >>107137902
>>107137804
If you're not deep into tech stuff, probably nothing.
I only have a storage server, because I have multiple PCs and I transfer files between them. Having it centralized on a storage server helps. I also have family members that like to watch movies and shows, so I set up a media server. Same with manga and books.
I don't like using 'smart' devices so I don't use homeassistant. And I don't like messing with my router so I don't bother with that stuff either.
Don't get into setting up a home server unless you have a reason to.
Anonymous No.107137902 [Report] >>107137924
>>107137888
smb alone is enough reason for running your own server, the rest follows.
Anonymous No.107137909 [Report] >>107138091
>>107137868
I meant an SSD pool with a vdev of two smaller SSDs for raid0 basically, and another in a Z1 with that one having a higher capacity to prevent nand decay with the extra capacity. Would a function like TRIM work here? That way I can replace the other pool when needed.
Anonymous No.107137910 [Report] >>107137931 >>107138091
>>107137858
>knows it's a niche issue
>uses a more complicated method to avoid it
>says it's fine for home scenarios
unraid is literally for home labs. give me proof about unraid not having the 'same level of integrity guarantees'
Anonymous No.107137924 [Report]
>>107137902
Yeah, but if the guy has one PC and maybe one phone why would he need a server? Everything else can just connect to that one PC.
It's only when you have multiple devices that need to connect to one another that I would tell someone to consider setting up a home server.
Anonymous No.107137931 [Report] >>107138091
>>107137910
>'same level of integrity guarantees'
At the File System level they're basically the same, it just depends on use case. TrueNas is better for most people that are doing a family oriented home server. If you're doing projects in a lab, use whatever.
Anonymous No.107137971 [Report] >>107138188 >>107138401 >>107138413 >>107141231
>>107122202
>Chinese firewall appliance running pfsense via SFP+ cat6a.
that's dumb. get a trustworthy hardware platform
>SFP+ cat6a
also dumb, run fiber to the switch, DACs are shit and run hot, plus you want to opto-isolate the switch/firewall the most. it's lower power to run fiber on every link and higher reliability assuming you don't fuck with the cabling. most of the DACs I've worked with have dropout issues on sustained transfers
Anonymous No.107138032 [Report] >>107138876
I'm moving my server to a different system. I use docker for a couple of things but I'm not an expert in it. Can I just copy my volume folder (or the entire /var/lib/docker?) over to my new system? Is there anything else I should pull over to minimize the chance of complications?
Anonymous No.107138091 [Report]
>>107137909
None of what you're saying here makes sense. Modern SSDs have mechanisms to rewrite data after a while to ensure that it's readable in the future. It seems that you're worrying about borderline conspiratorial stuff. Nothing stops you from putting a bunch of SSDs in a non-redundant pool to use as scratch space, but unless you're talking 10Gbit or higher speeds, why do you need that? If you're worrying about one of the drives dying, put them in a mirror or a RAIDZ.

>>107137910
>>107137931
unraid is so painfully trivial to break that it's not even funny and the performance is more than a little lackluster. You're honestly better off using btrfs mirroring than unraid if you want to go all out on mixed capacities. You're sacrificing raw capacity by doing so, but doing wide raid 5/6 with unraid is just begging for headaches.
>more complicated
Mate, if punching in like 3 commands to make a frankendisk is too much effort for you, why are you even here in the first place? That path leads to a simple pool down the road as anon upgrades/replaces drives, and it has robust recovery options.
>At the File System level they're basically the same,
Ah, so this has been bait the whole time. Welp, you got me.
Anonymous No.107138188 [Report]
>>107137971
This post is retarded.

Passive DACs use milliwatts of power. They're by far the lowest power module that can go into an SFP port. Active ones are around 500mW while passives are around 150mW. Optical transceivers are in the 1-2 watt range.

>opto-isolate
Pointlessly excessive for home use since it's unlikely there will be substantial interference and everything is most likely running off the same electrical circuit.

>most of the DACs I've worked with have dropout issues on sustained transfers
Don't buy noname chinesium then. 10gtek is completely fine and used widely in enterprise environments. Is it comparable to genuine cisco gear? Fuck no. Everything about them is cheaper, but it works perfectly fine.
Anonymous No.107138227 [Report] >>107138277 >>107138356
What VPS / VPN services do you guys use to get around CGNAT? Are there any with no data limits anymore?
Anonymous No.107138229 [Report] >>107138259
ntfy is comfy
Anonymous No.107138259 [Report] >>107138269
>>107138229
your mom is comfy, on my lap
Anonymous No.107138269 [Report]
>>107138259
thank you for taking care of her
she is very lonely and sad
Anonymous No.107138277 [Report]
>>107138227
netbird
cheapest hetzner vps has 20tb traffic
you can probably get around your cgnat with ipv6 though if you haven't looked into that yet
Anonymous No.107138356 [Report]
>>107138227
OVH vpses have no data caps, and they actually mean it. I've moved 80ish TB each way (so 160TB total) in a month through one of their 500Mb VPSes. You won't always get the full advertised speed, but they don't give two shits about usage.
Anonymous No.107138401 [Report]
>>107137971
DACs do NOT run hot at all. everyone feel free to ignore this retard he buys bootleg shit and is confused why it doesn't work right.
Anonymous No.107138413 [Report] >>107138824
>>107137971
>get a trustworthy hardware platform
The Chinese government actively threatens your online security.
Invest in proven, secure US vendors like Cisco or Fortinet!
Anonymous No.107138824 [Report]
>>107138413
yes trust the (((secure))) US vendors!
Anonymous No.107138876 [Report] >>107139136 >>107139864 >>107140058
>>107138032
Migrate to NixOS in the future to prevent this conundrum
Anonymous No.107139136 [Report]
>>107138876
or just don't use docker volumes.
Anonymous No.107139188 [Report] >>107139334 >>107139629
>>107133173
>>107133368
So what would the ZFS equivalent of RAID 10 be?
You mentioned RAIDZ1, but only as an example of an alternative.

Or with ZFS, do I not need as much redundancy as I would with normal Raid and could I get away with an alternate array format that gives me more space?
Anonymous No.107139334 [Report]
>>107139188
>So what would the ZFS equivalent of RAID 10 be?
Striped Mirrors.
Anonymous No.107139629 [Report]
>>107139188
ZFS has hierarchical structures.
>pool level
A pool is a top level abstraction that is all of the available storage. Any filesystem or block device created on the pool is done so virtually and all of them have full access to the pool's combined resources unless you specify otherwise
>vdev level
A pool is comprised of one or more virtual devices. Virtual devices can contain one or more drives in various RAID topologies. The main two are mirrors and RAIDZ. Mirrors are RAID1: data is written fully to each drive within a given mirror. RAIDZ is analogous to RAID5 and RAID6. It uses parity generated via erasure coding to handle disk failures.
>block devices
Each vdev needs to be assembled out of block devices. This is usually done by passing entire drives to zfs, but you can pass partitions and other things. All ZFS cares about is "can I store 1s and 0s on this in a standard fashion?"

Expanding a ZFS pool is as simple as adding a new vdev. The space is instantaneously available to all datasets on the pool because there's no concept of a partition. All of the file mappings are abstracted. Data is striped across all vdevs in the pool. Technically speaking it is not striped because individual records are stored on a single vdev, and a file may have some or all of its records on any particular vdev. Again, the concept of location doesn't really exist, it's all abstracted away.

To directly answer your question, the closest equivalent to a RAID10 setup is, as the other anon replied, striped mirrors. In other words, a pool comprised of multiple mirror vdevs.

RAIDZ is more like RAID5. RAIDZ comes in 3 flavors: RAIDZ1, RAIDZ2, and RAIDZ3. The number denotes the parity level, and thus how many drive failures you can handle concurrently before losing data. An 8 disk raidz2 has roughly the capacity of 6 disks, and can handle 2 failures before losing data. RAIDZ can provide extra capacity at the expense of performance.
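The capacity tradeoff in that last paragraph is easy to sanity-check with back-of-envelope math. A rough sketch only; real usable space will be a bit lower due to metadata, slop space, and raidz padding:

```python
# Rough usable-capacity figures for the vdev layouts described above.
# Ignores ZFS overhead (metadata, slop space, raidz padding), so treat
# these as ballpark numbers, not exact sizes.

def mirror_capacity(disks: int, disk_tb: float, way: int = 2) -> float:
    """Striped mirrors (the RAID10 equivalent): disks grouped `way` at a time."""
    return (disks // way) * disk_tb

def raidz_capacity(disks: int, disk_tb: float, parity: int) -> float:
    """RAIDZ1/2/3: roughly `parity` disks' worth of space goes to parity."""
    return (disks - parity) * disk_tb

# The 8-disk raidz2 example from the post: ~6 disks of usable capacity,
# versus 4 disks if the same drives were set up as striped mirrors.
print(raidz_capacity(8, 1.0, parity=2))  # 6.0
print(mirror_capacity(8, 1.0))           # 4.0
```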
Anonymous No.107139737 [Report]
petio or overseerr? or something else?
Anonymous No.107139846 [Report] >>107139853
i want to get rid of the ubiquiti waps at my house because i don't like their privacy policy saying they can monitor and use my data whenever they want for whatever they want. what are the least problematic waps that i can set up at my house that will do a good enough job no one will bitch about wifi being slow? ceiling mounts are ideal.
Anonymous No.107139853 [Report] >>107140309
>>107139846
full mikrotik stack
Anonymous No.107139864 [Report]
>>107138876
it's going to a nixos server. coming off my fedora slop that's been running for years. i'm not concerned about the dockers, rather the data. nixos can't recreate the data with a config/flake.
Anonymous No.107139935 [Report]
Okay, adults talking now: how difficult would it be to host alternative front-ends like Imginn, Nitter, and Invidious on my VPS? I basically have a personal project there, along with an XYZ domain I bought a few months ago. I'm tired of being stuck on the XCancel loading screen for minutes because of that anti-bot protection.
I'm not a programmer by the way.
Anonymous No.107140058 [Report] >>107140128
>>107138876
use docker export to tar the volumes and recreate them from that on the new server.
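One caveat: `docker export` operates on containers, not named volumes. For volumes, the usual pattern is a throwaway container that mounts the volume read-only and tars it out. A hedged sketch that just builds the command (the volume name and paths are made up):

```python
import subprocess  # only needed if you actually run the command

def volume_backup_cmd(volume: str, dest_dir: str) -> list:
    """Build the argv for backing up a named docker volume via a
    throwaway busybox container. The volume is mounted read-only."""
    return [
        "docker", "run", "--rm",
        "-v", f"{volume}:/data:ro",   # the volume to back up
        "-v", f"{dest_dir}:/backup",  # host dir where the tarball lands
        "busybox", "tar", "czf", f"/backup/{volume}.tar.gz", "-C", "/", "data",
    ]

# e.g. subprocess.run(volume_backup_cmd("jellyfin_config", "/srv/backups"), check=True)
```

Restoring is the same trick in reverse: mount the volume read-write and untar into it.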
Anonymous No.107140128 [Report]
>>107140058
i ended up just tarring the full docker directory in /usr/lib. this should achieve a similar result, or will they conflict?
Anonymous No.107140309 [Report] >>107142401
>>107139853
whats their best wap and is it good enough that the normies in my house won't bitch about their smart tv buffering or phone not having signal?
Anonymous No.107140721 [Report]
>>107137431
>>107130307
Although I will admit, H@H is not in the spirit of No NAT November, unfortunately. Maybe someone can convince Tenboro to change that.
Anonymous No.107140747 [Report] >>107140783 >>107140799
I want to set up something for my mom to keep her photos, what's a good self hosted service that's easy for boomers to sync their phone pictures to?
Anonymous No.107140783 [Report]
>>107140747
A printer and some tape, and some wall space.
Anonymous No.107140799 [Report] >>107142037
>>107140747
Immich
Anonymous No.107140952 [Report] >>107141066 >>107142888
>come up with short kino domain name with fitting tld
>random ass chink owns it
it's over, isn't it?
Anonymous No.107141066 [Report]
>>107140952
Juicekiller1488.org is still available.
Anonymous No.107141231 [Report] >>107141339
>>107137971
>SFP+ cat6a
>DACs are shit and run hot
Not DACs but 10GBASE-T transceivers. They're the shit ones. DACs are copper cables but they're fine with regards to temperature and power draw, and generally cooler than AOC (transceiver + fiber). I wouldn't use 10GBASE-T at all anywhere, I'd try to buy SFP+ NICs and run fiber on longer distances or DACs on short distances.
Anonymous No.107141339 [Report] >>107141595
>>107141231
New 10G Base-t transceivers can actually get down into the sub 2W range, but the cheap shit on ebay is all going to be hot as fuck without a fan pointed directly at it.
Anonymous No.107141348 [Report] >>107142939
>>107113287 (OP)
How do you handle subdomains when reverse proxying? My current setup uses an always-on wireguard VPN which uses pihole as DNS proxy to map each subdomain to a local address. It worked for quite a long time but recently I had issues with certain apps on my phone not working well with VPNs so I started questioning my setup. Disabling my VPN while at home would make the subdomains unreachable unless I manually set up a custom local DNS address in my networking profile on each of my devices but that's exactly what I wanted to avoid when setting up an always-on VPN in the first place. What's the cleanest solution here?
Anonymous No.107141595 [Report] >>107145037
>>107141339
So I heard, but last time I checked they weren't exactly cheap. Meanwhile a 10GBASE-LR SMF transceiver may cost less than half and a 10GBASE-SR MMF one may cost 1/3.
Anonymous No.107141640 [Report] >>107141651
There's gotta be a fucking proper solution to this
I've got sonarr up and running and I'm following seasonal shit which is downloading from AB
The context being that when the season ends, the individual episodes stop being tracked and I don't necessarily want to keep every episode for every show I watch
Why the fuck is there no option to make it so that when I delete the show entry from sonarr, it also removes the downloads from qbittorrent and the downloaded files? I know sonarr can remove torrents from qbittorrent, there's a built-in leecher setting to remove the torrent. Surely it could do the extra step of removing the file itself, there's even a way to recognize hardlinks and shit.
Has anyone figured this shit out? I even set up qbit-manage just now but there's no clear way of doing this shit. All i want is to get rid of the downloads whenever I delete the entry from sonarr, the whole point is automation but somehow such an obvious convenient clean up functionality hasn't even crossed anyone's mind? Meanwhile there's several services explicitly for removing just the torrents when they're done downloading, even though most modern torrent clients can do that by themselves???
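As far as I know there's no built-in toggle for this, but qBittorrent's Web API makes it scriptable from outside. A rough sketch of the idea (untested glue; the WebUI address is an assumption, and you'd feed it the list of content paths Sonarr still tracks, e.g. pulled from Sonarr's own API):

```python
import json
import urllib.parse
import urllib.request

QBIT = "http://localhost:8080"  # assumed qBittorrent WebUI address

def torrents_to_delete(torrents, tracked_paths):
    """Hashes of torrents whose content path is no longer tracked.
    `torrents` is the JSON from /api/v2/torrents/info (dicts with at
    least 'hash' and 'content_path')."""
    return [t["hash"] for t in torrents
            if t["content_path"] not in tracked_paths]

def purge(tracked_paths):
    with urllib.request.urlopen(f"{QBIT}/api/v2/torrents/info") as r:
        torrents = json.load(r)
    doomed = torrents_to_delete(torrents, tracked_paths)
    if doomed:
        # deleteFiles=true removes the payload from disk as well
        body = urllib.parse.urlencode(
            {"hashes": "|".join(doomed), "deleteFiles": "true"}).encode()
        urllib.request.urlopen(f"{QBIT}/api/v2/torrents/delete", data=body)
```

If the WebUI has auth enabled you'd have to hit /api/v2/auth/login first and carry the session cookie.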
Anonymous No.107141651 [Report] >>107141719 >>107143579
>>107141640
I would help you, but I don't have access to AB and I'm jealous.
Anonymous No.107141719 [Report]
>>107141651
If this is a cheeky way of asking for an invite, I don't have any since I don't even know what to upload for the min requirements to get invites.
The best I've managed myself is maybe by tagging un-hardlinked torrents. If I could change categories based on tags I could jerry-rig something ugly but operational, but almost as if they've predicted where I'm going with this, there's no functionality to do that. Every fucking time I try to do something it's like crawling uphill through broken glass
Anonymous No.107142016 [Report]
>>107137804
Buy it, use it, break it, fix it
Trash it, change it, mail – upgrade it
Charge it, point it, zoom it, press it
Snap it, work it, quick – erase it
Write it, cut it, paste it, save it
Load it, check it, quick – rewrite it
Plug it, play it, burn it, rip it
Drag and drop it, zip – unzip it
Lock it, fill it, call it, find it
View it, code it, jam – unlock it
Surf it, scroll it, pause it, click it
Cross it, crack it, switch – update it
Name it, read it, tune it, print it
Scan it, send it, fax – rename it
Touch it, bring it, pay it, watch it
Turn it, leave it, start – format it
Anonymous No.107142037 [Report]
>>107140799
Thanks
Anonymous No.107142063 [Report] >>107143594
>>107137431
Not judging, just noticing. Didn't even know there was a rust version.
Anonymous No.107142179 [Report]
>>107137471
>>107137489
Cheers for the info. That more or less tells me everything I need to know so I'll go ahead with the setup I mentioned. My current metadata size is around 8 GB so that gives me plenty of special vdev space to play with for now as long as I keep the smallblocks in check.
>I would recommend going with RAIDZ2 if you plan on expanding things later. RAIDZ resilver times can get substantial.
I'm aware of this but I want the extra capacity, all of my board's SATA ports will be used by this config and I'm willing to deal with the resulting bullshit if my array falls over during a resilver. I'll make sure my backups are on point.
One day I'll have a proper rack, disk shelf and SAS expander setup to work with then I can play with high levels of redundancy.
Anonymous No.107142401 [Report]
>>107140309
I'm running a bunch of hAP ax2/ax3, can get sustained ~20MB/s throughput on them.
Allegedly, they can technically do more, but it's good enough for me.
Appreciate the affordable price, PoE support, RouterOS flexibility.
CAPsMAN makes managing multiple APs with multiple SSIDs in different VLANs easy once you wrap your head around the config stuff.

If people complain about their TVs, maybe introduce them to the concept of 'cable'
Anonymous No.107142888 [Report]
>>107140952
>been pondering on a domain name for weeks
owari
Anonymous No.107142939 [Report]
>>107141348
I run a tailscale setup to remotely access services. I have a domain and my home DNS server is set to be authoritative for it, as well as the real DNS servers. When I'm at home I get the addresses served by my local nameserver and when I'm not home it uses the actual DNS entries.
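For anons without a tailnet, the same split-horizon idea works with plain dnsmasq (which is what pihole runs underneath): answer for your subdomains locally and let the public records handle you when you're away. A minimal sketch with made-up names and addresses:

```
# LAN-side overrides: clients at home resolve these to the reverse
# proxy's local address; away from home the public DNS answers instead.
address=/jellyfin.example.com/192.168.1.10
address=/nextcloud.example.com/192.168.1.10
```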
Anonymous No.107143083 [Report] >>107143287 >>107143598 >>107144250 >>107147443 >>107152016
Anons that don't have a static ip, do you worry about public ip changes? Mine changed for the first time in 3 years and my response was to create a validation script that runs every 3 hours via crontab. picrel is the result
Anonymous No.107143287 [Report]
>>107143083
I have a small script checking public IP and logging to a postgres db. If it detects a change, it nsupdates a public A record.
Been running a cron job with */05 * * * * since 2015; it has tracked 834 changes.
After switching from LTE-based shit to Starlink to finally fiber, the IP only changes when rebooting my core switch (making the firewall lose the link), so updates have gone down.
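That whole loop is only a few lines of glue. A hedged sketch of the idea; the record name, key path, state file, and IP-check endpoint are all placeholders:

```python
import pathlib
import subprocess
import urllib.request

STATE = pathlib.Path("/var/lib/ddns/last_ip")  # cached last-known IP (placeholder path)
RECORD = "home.example.com."                   # placeholder record name

def current_ip() -> str:
    # any "what is my IP" endpoint works; this one is just an example
    with urllib.request.urlopen("https://api.ipify.org") as r:
        return r.read().decode().strip()

def nsupdate_script(record: str, ip: str, ttl: int = 300) -> str:
    """Stdin fed to nsupdate(1): replace the A record with the new IP."""
    return (f"update delete {record} A\n"
            f"update add {record} {ttl} A {ip}\n"
            "send\n")

def update_if_changed(ip: str) -> bool:
    last = STATE.read_text().strip() if STATE.exists() else None
    if ip == last:
        return False
    subprocess.run(["nsupdate", "-k", "/etc/ddns.key"],  # key path is a placeholder
                   input=nsupdate_script(RECORD, ip).encode(), check=True)
    STATE.write_text(ip)
    return True

# cron: */5 * * * *  python3 /usr/local/bin/ddns.py
```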
Anonymous No.107143579 [Report]
>>107141651
You're not missing anything
Anonymous No.107143594 [Report]
>>107142063
It's relatively new and I thought it would help with my quality getting nuked on a cyclical basis (it didn't and I have no idea how to troubleshoot that). Now I just keep it because it's better than the official version in my opinion
Anonymous No.107143598 [Report]
>>107143083
I use dynamic DNS for this, a cron script checks to see if the address has changed every so often and updates the record for my domain if it has.
Anonymous No.107144250 [Report]
>>107143083
wtf, I "bought" a static IP from my isp for a euro
Anonymous No.107144702 [Report] >>107144969 >>107145223 >>107146160 >>107147476
Hello frens... Help me... Let's say an old geezer who is a networking god at my job has agreed to take me on, but only on the condition that I show him something worthwhile in 2-3 months' time. He suggested Packet Tracer because everything we do is Cisco based... He also told me I should be getting my CCNA cert by the time summer rolls around... Anyway, I'm asking you guys: how do I impress him in the coming months so he takes me on?
Anonymous No.107144947 [Report] >>107145147
I'm trying to create a ZFS pool and it keeps whinging that this one disk is in use and creates this weird ass partition layout every time i try the create command. I've tried clearing the partition table and rebooting multiple times without success. What the fuck's going on here?
Anonymous No.107144969 [Report] >>107146160
>>107144702
>cisco
>im asking you guys how Do i impress him in the coming months so he takes me on?
A script that uses CDP neighbor to discover other switches/routers on the network and traverses the network, jumping to each and records all devices, their ip, mac address, host switch, port on said host switch, and what vlan they are on in an excel spreadsheet (or smartsheet for others to view)
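The scraping half of that is mostly parsing `show cdp neighbors detail`. A rough sketch of the parser against abbreviated sample output (the real output varies by platform and IOS version, and you'd feed it text collected over SSH with netmiko or similar):

```python
import re

# Abbreviated, illustrative sample of `show cdp neighbors detail` output.
SAMPLE = """\
Device ID: sw-access-01
  IP address: 10.0.10.2
Platform: cisco WS-C2960X,  Capabilities: Switch IGMP
Interface: GigabitEthernet1/0/48,  Port ID (outgoing port): GigabitEthernet0/1
-------------------------
Device ID: sw-access-02
  IP address: 10.0.10.3
Platform: cisco WS-C2960X,  Capabilities: Switch IGMP
Interface: GigabitEthernet1/0/47,  Port ID (outgoing port): GigabitEthernet0/1
"""

def parse_cdp(text):
    """Split CDP detail output into per-neighbor dicts for the spreadsheet."""
    neighbors = []
    for block in re.split(r"-{5,}", text):  # neighbors are dash-separated
        dev = re.search(r"Device ID:\s*(\S+)", block)
        ip = re.search(r"IP address:\s*(\S+)", block)
        iface = re.search(r"Interface:\s*([^,]+)", block)
        if dev:
            neighbors.append({"device": dev.group(1),
                              "ip": ip.group(1) if ip else None,
                              "local_port": iface.group(1) if iface else None})
    return neighbors
```

From there it's a loop: SSH to each newly discovered neighbor, run the same command, and dedupe on device ID until nothing new shows up.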
Anonymous No.107145037 [Report] >>107145437 >>107145845
>>107141595
>last time I checked they weren't exactly cheap
Same, but I hadn't actually looked it up in a while. It seems that prices crashed in the last year or two.
>https://store.10gtek.com/nvidia-compatible-10gbase-t-sfp-transceiver-up-to-80-meters-cat-6a-40-85-hpe-aruba/p-23474
>BROADCOM BCM84891 PHY
>1.6-2W
35 dollars. 45 if you need a different keying. That's not bad.

They were around 150-190 from every vendor I could find when I was last thinking about getting two of them. They support 1, 2.5, 5, and 10Gb, which is quite nice. A few years back I bought a new switch that supports 2.5Gb, SFP+ and has PoE because it was cheaper to replace all 3 of my switches than buying 3 or 4 of these.
Anonymous No.107145140 [Report] >>107145287
Managed to make a k3s cluster. Gonna have to config longhorn too, maybe tomorrow.

>>107114637

8th gen i3, 16GB ram and 500GB SSD.

>>107114794

One cabinet was only about 130€. Lanberg WF01-6609-10S. Not the nicest but decent quality and cheap, works for my purpose well. Actually fits 10U but the top cover blocks the uppermost slot. The airflow is not an issue, it intakes from the front and the back is open, I removed the cover. Push pull config.

>>107115074

They were free. I had to replace a few batteries but beats paying full price for a nice-to-have UPS. I actually have a third one but I gotta get batteries for it too. They were brand new but just never used, they were bound for e-waste but I could not let that happen.

>>107136439
>What do you use these for? Is it something I can deploy from a server running trueNAS on the metal?

I don't even know. I don't know Kubernetes well at all, but I'm just installing it and figuring it out. It's hard for me to learn just by theory so I try to make actual installations. The nucs are a bit old to be in circulation so I just took them from work. They were collecting dust but seem to work.

I'll think what I'm gonna run on it tomorrow. I just have no genuine need.
Anonymous No.107145147 [Report] >>107145162 >>107145263
>>107144947
nvm I figured it out. The disk was holding onto an old RAID superblock from the consumer WD shitter NAS it used to be in. Cleared the superblock and all is well now.
Anonymous No.107145162 [Report]
>>107145147
sexo whatever character that is
Anonymous No.107145223 [Report]
>>107144702
CCNA shouldn't take all the way to summer. Get to work.
Anonymous No.107145263 [Report] >>107145569
>>107145147
yeah the md is a dead giveaway. wipefs -a before using a disk again. though the kernel might still see the md array. super annoying since it's automatically identified and mapped by the kernel every time. zfs just makes so much more sense than these hacked together layers of bloat like mdadm lvm luks and then your fs
Anonymous No.107145287 [Report] >>107146089
>>107145140
>Managed to make a k3s cluster
what distro did you use? did you just chatgpt a guide how to do it?
Anonymous No.107145437 [Report] >>107145845
>>107145037
Yeah, looks like a good deal now. Thanks for the link.
Anonymous No.107145569 [Report]
>>107145263
See I tried wipefs and I either did it wrong or it didn't clear the superblock. I had to manually zero the superblock via mdadm before it would play ball.
Anonymous No.107145570 [Report] >>107145634 >>107145830
>install nginx reverse proxy and a cloudflare ddns script to access a few services remotely through subdomains on my website
>test them on my mobile network
>just works
>test them on my work network
>can access landing pages but plex says I cannot login at this time
why does this happen?
I know the IT losers at my company blocked plex but I assumed they couldn't recognize it as plex traffic since it's going through ddns to my home server instead of the plex website. is it because the login form still uses plex's server?
Anonymous No.107145634 [Report] >>107145704
>>107145570
Do you TLS?
Anonymous No.107145704 [Report]
>>107145634
I believe I did, would that cause an issue?
Anonymous No.107145830 [Report]
>>107145570
probably makes a call to plex authentication servers from your remote location? no idea how plex authentication works and never will since jellyfin exists.
Anonymous No.107145845 [Report]
>>107145437
>>107145037
I found another one that seems to have the same chipset and is even cheaper, but no idea about the company. 10gtek is reasonably trustworthy chinesium. No idea about this one.

>https://www.amazon.com/dp/B0B3F2SKMC
>29-31 dollars.

I'd probably just stick with 10gtek for that kind of price difference.
Anonymous No.107145888 [Report] >>107146221
anyone use authentik?
Anonymous No.107146089 [Report] >>107146114 >>107146587
>>107145287

Ubuntu server 24.04 LTS. Yes, I chatgpt'd my way through. I noticed many of the guides are out of date and I had to use prompt skillz to make chatgpt work, it tries to default to its training data and gets things wrong. Normally I don't have to use chatgpt for this but the information online is convoluted at least to me. I wouldn't recommend going into this entirely blind because you kind of have to know a little about basic homeservering, but technically I think I'm still a relatively novice level user.
Anonymous No.107146114 [Report] >>107146127 >>107146131 >>107146587
>>107146089
start using grok desu
Anonymous No.107146127 [Report]
>>107146114

No. I am fully invested in ChatGPT and since I use codex A LOT there is nothing else quite like that.
Anonymous No.107146131 [Report] >>107146176
>>107146114
>grok
kek, fuck off migajew retard. grok is the worst chatbot of the lot. works slower than any of the others by a long way too. so its slow AND shit.
Anonymous No.107146160 [Report] >>107147940
>>107144702
he wants you to show him a lab environment or something to do with your work that improves the network? what exactly is he looking for?

>>107144969
CDP will only map cisco endpoints because it's... cisco discovery protocol. he would have to implement LLDP to get full mapping for all host devices and what VLAN they're on, and LLDP is often considered a security risk and disabled, although opinions vary. there are also already tools which can do this better; most tools are bad, and this one will be too, but even the bad ones cover more avenues and protocol awareness, which would be miles better. he'd be better off developing some kind of automation which reduces operational load for a common task.
Anonymous No.107146176 [Report] >>107146228
>>107146131
gonna have to disagree with you tranny. there are benchmarks that say otherwise via indisputable metrics.
elon is an insufferable autistic and so are you.
Anonymous No.107146221 [Report]
>>107145888
Used in the past, it worked for sso and there isn't much else to say other than that. Did it's job well
Anonymous No.107146228 [Report] >>107146698
>>107146176
no there aren't, jewish retard
Anonymous No.107146474 [Report] >>107146905
>>107135050
well?
Anonymous No.107146587 [Report] >>107146878
>>107146089
>>107146114
This is completely offtopic, but for sysadmin stuff Gemini renders much better results in my experience. It seems like if you enable Grounding with Google Search and Code execution for gemini-2.5-pro, it "taps" into higher quality resources, or at least is able to filter out the garbage and provide what I'd consider reasonable solutions, without much steering. In other words, the training data doesn't matter as much as the original documentation or forum discussions it finds through Google search. And if you expand the "Thinking" part of the output, you can see that the model is quite self-critical and readjusts its reasoning often (but not as cripplingly so as DeepSeek).
Anonymous No.107146698 [Report]
>>107146228
im actually a gorilla juicehead guido, sir. may i call you by your deadname?
https://llm-stats.com/benchmarks/llm-leaderboard-full
Anonymous No.107146836 [Report]
>>107124887
Honestly I have no clue why turkey is there. No turkish government site even uses it. I just remove that shit immediately whenever I set up a browser or server.
Anonymous No.107146878 [Report]
>>107146587

Thanks for the tip. It's 20 buckaroos for 2.5 pro, which is not too terrible, might try it for a month. I am very happy with codex for code generation but chatgpt isn't strong in sysadmin tasks to the same level.
Anonymous No.107146905 [Report]
>>107146474
it succeeded
Anonymous No.107147342 [Report] >>107147544
Maybe a stupid question but I want to retire my 2500k with 8gb to just run OMV with a few drives just as a backup nas and install jellyfin on it as a backup jellyfin. Will any cheap graphics card be enough for decoding / transcoding? Intel better or amd?
Anonymous No.107147443 [Report]
>>107143083
Got ddns-scripts on my OpenWRT router to practically instantly update DuckDNS if the interface IP changes.
Anonymous No.107147476 [Report]
>>107144702
Grind Jeremy's IT Lab CCNA course on youtube. Do 3 lessons a day. Should take 4 weeks max, if you're serious about it.
Anonymous No.107147544 [Report] >>107147632
>>107147342
Your 2500k has Intel Quick Sync. You don't need a GPU if you want to run Jellyfin. And what the fuck do you mean backup Jellyfin? If it's a backup why do you even need to run it? In fact, why are you even backing up Jellyfin on a WHOLE SEPARATE machine?
Anonymous No.107147632 [Report] >>107147795
>>107147544
If something in the real nas dies I can still watch my slop. I already have to back shit up since raid isn't supposed to be your backup as I was taught. What does a few 100 mbs matter to set up another jellyfin server in case of need?
Anonymous No.107147795 [Report] >>107147904
>>107147632
If your NAS dies then you restore it using the backup of your NAS.
I'm assuming you're using something like UnRAID or OMV and running Jellyfin in a docker container. You backup your NAS settings and configurations on a separate storage device, like a USB stick, then if it dies you can easily copy it back and have your set up running again. You don't use another machine for it.
Anonymous No.107147904 [Report] >>107148008 >>107148176
>>107147795
The NAS has 1 NVMe (install + VMs) and 4 HDDs (data), it runs proxmox. One VM runs OMV and for now I run an LXC with jellyfin. Originally I wanted to mess around with proxmox on the 2500k and copy VMs and LXCs around but thought it might be too much to ask from the 4 cores.

So my backup for a dead NVMe in the NAS is what? I'm likely to fuck around and fuck shit up since I'm new and want to learn by trying shit out.
Anonymous No.107147940 [Report] >>107148920
>>107146160
Lab environment
Anonymous No.107148008 [Report] >>107148253
>>107147904
Jesus christ.
If your mobo only has 1 NVMe slot:
1. Get a second NVMe.
2. Unplug your main NVMe.
3. clonezilla main NVMe to backup NVMe on a separate machine using NVME enclosures.
3a. Do this whenever you make a big change. Or set up a cron job to do it every week or some shit on your Proxmox machine.
4. Plug main NVMe back in your Proxmox server.
In the event your NVMe dies then plug in your backup NVMe.

If your mobo has 2 NVMe slots:
1. LITERALLY MIRROR YOUR BOOT DRIVE TO A SECOND NVME

Nigga, you're literally making shit more complicated for yourself.
Anonymous No.107148176 [Report] >>107148253
>>107147904
Running a VM, to run a NAS, and then having that NAS run a container is one of the most jank shit I've seen in a while. Why not just run Jellyfin on it's own separate LXC?
Anonymous No.107148253 [Report] >>107148264 >>107148284 >>107148309
>>107148008
It seems a bit overdone to keep this little data (currently 38GB for PVE and the VMs/containers) on a mirrored NVMe. What if I could use the hw I already have as backup and just reinstall OMV when the drive dies, which let's be honest shouldn't happen for years. Uptime is not a big necessity if the real data is safe.

>>107148176
>Why not just run Jellyfin on it's own separate LXC?
It does.
Anonymous No.107148264 [Report]
>>107148253
*safe and accessible from another pc.
Anonymous No.107148284 [Report] >>107148378
>>107148253
Since you're dead set on using a machine and an overly complicated set up, you do you champ.
Anonymous No.107148309 [Report] >>107148378
>>107148253
>It seems a bit overdone to keep this little data (currently 38GB for PVE and the VMs/containers) on a mirrored NVMe.
That's literally how you're supposed to do it for a storage appliance anon.
Anonymous No.107148378 [Report] >>107148401 >>107148433
>>107148284
I'll make it more complicated and set up the 2nd server as a headless debian and just use samba. Probably learn more.

>>107148309
Maybe stingy but the drive will probably not explode when it notices it isn't mirrored. Anything I could do with the extra space? Like creating an extra partition just for cache or i don't know what?
Anonymous No.107148401 [Report] >>107148430
>>107148378
>Probably learn more.
lol
Sure, you learn doing the wrong thing rather than learning the proper way to do it.
Anonymous No.107148430 [Report] >>107148480
>>107148401
Is building a 2nd server, with hw you already own, that can do the same thing as the 1st server when it dies really that outlandish? At best I have to wait for a new drive to arrive and not get stolen by jeet deliverers.
Anonymous No.107148433 [Report] >>107148462
>>107148378
Sure, point all the metadata for files to it.
Anonymous No.107148462 [Report] >>107148522
>>107148433
Can it be done with a mirrored NVMe? I mean partitioning it so you use the first half for installs and the second half for cache?
Anonymous No.107148480 [Report] >>107148559
>>107148430
>Is building a 2nd server, with hw you already own, that can do the same thing as the 1st server when it dies really that outlandish?
Yes it is, you dumb fuck. The best thing I would do in your situation is to first migrate OMV to a separate machine. The devs recommend running OMV bare metal. Running it in a VM works but if you have a problem then good fucking luck.
>b-but i want to learn
How the fuck are you going to learn if you're doing shit wrong?
If you want you can even make a cluster instead of a whole fucking new set up. At least that way when one of them dies you still have the 2nd machine running. Then you're fucking back to replacing the old drive that failed anyways.

>At best I have to wait for a new drive to arrive and not get stolen by jeet deliverers.
Buy the fucking drive now then while it working, so when it dies all you need is fucking 5 seconds to plug in the backup drive.
My god, you are retarded.
Anonymous No.107148522 [Report] >>107148559
>>107148462
I don't see why not, just create the raid array and then partition it.
Anonymous No.107148559 [Report]
>>107148480
OMV runs fine, it won't break cause I won't updoot. Because it's doing its task and only serves local. The devs just don't want to answer questions when it's not run bare metal.

Negro I'm going to learn when I do thing with gui and also do thing with commandline. Clustered your mom last nite. You are right I can keep a shitty homeserver running while I wait for delivery.

>>107148522
ebin
Anonymous No.107148694 [Report] >>107148730 >>107149307 >>107149656
Working with older hardware, want to build a home network with a TrueNAS as the fileserver. I see it has apps/docker now.

Is it more efficient for me to run stuff like jellyfin or handbrake on other machines that are better suited to those apps, and just point them at the server for the data?

I feel like that is the best approach, especially if there are multiple users accessing the NAS for applications vs it just sending and receiving files.
Anonymous No.107148730 [Report] >>107149005 >>107149656
>>107148694
Is ZFS OS independent? Like if I have a ZFS RAIDZ2 array, would BSD, FreeNAS, and TrueNAS all be able to 'mount' it (not sure what the right term is).
Anonymous No.107148920 [Report]
>>107147940
get CML make a DMVPN setup. cant do that on packet tracer. if you go the extra step to a real simulation software he might respect you more.
Anonymous No.107149005 [Report]
>>107148730
os independent? yes. but you have to watch versions. if the pool has features from a new version turned on it won't import.
Anonymous No.107149307 [Report]
>>107148694
It's best practice to separate your NAS and hypervisor. You can make it work by having TrueNAS also host your servers and probably easier, but if you have the hardware to separate them then might as well separate them.
Anonymous No.107149379 [Report]
>>107122404
Look at caddy, https://caddyserver.com/docs/
Anonymous No.107149449 [Report]
>>107129131
16TB? only if this was 2007
Anonymous No.107149656 [Report] >>107149688 >>107149696 >>107149709
>>107148730
As the other anon said, you do have to be careful about pool features. When you create a pool you can specifically enable/disable features. For instance, zstd compression allows the use of zstd compression. Before that your choices were (essentially) lz4 or gzip compression. If you attempt to import a pool that has a dataset with zstd compression enabled on a version of zfs that doesn't support it, it'll fail with an error message because it'll read the header information about pool features and go "idk wtf this is".

Pragmatically speaking you need to be far less worried about this stuff than you used to. Ever since the migration to openzfs they've done a lot to maintain feature parity across platforms. Worst case you'll have some delays depending on your platform/distro, and that's assuming you enabled the newer features which you don't have to do. ZFS will never silently enable new pool features without you specifically telling it to, so it's safe to import an older pool on a system running a newer version of ZFS temporarily.

I've been upgrading pools as soon as features become available now for years and not had any problems.
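If you want to see what a pool has turned on before moving it, `zpool get all <pool>` lists the feature flags. A rough sketch of picking out the ones a destination system would need to support (the sample lines are illustrative; feed in the real command's output):

```python
# Illustrative slice of `zpool get all tank` output, feature lines only.
SAMPLE = """\
tank  feature@lz4_compress   active    local
tank  feature@zstd_compress  enabled   local
tank  feature@draid          disabled  local
"""

def enabled_features(zpool_get_output: str):
    """Feature flags that are 'enabled' or 'active' — the ones worth
    checking against the destination system's ZFS version before an
    import, since the active ones are what an older system chokes on."""
    feats = []
    for line in zpool_get_output.splitlines():
        parts = line.split()
        if (len(parts) >= 3 and parts[1].startswith("feature@")
                and parts[2] in ("enabled", "active")):
            feats.append(parts[1].removeprefix("feature@"))
    return feats
```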

>>107148694
Install proxmox, and if you want a separate NAS appliance run truenas in a VM and pass disks through to it.

>efficient
Depends on how you define that. For home use your systems are going to be almost entirely idle the vast vast majority of the time. You (probably) don't need to run 40 transcodes at the same time, so even if the gpu in your hypervisor isn't as good as what you have on a different machine, it's unlikely to matter.

The only things that I'd suggest not virtualizing are things like game servers with very high performance requirements and your router. If something gets fucked up and you're virtualizing your router, you might not have a functional network to get in and fix stuff. If the router is running baremetal, you can always plug a monitor and a keyboard in if shit hits the fan.
Anonymous No.107149688 [Report] >>107149709 >>107150621
>>107149656
>if you want a separate NAS appliance run truenas in a VM and pass disks through to it.

THIS. Use your NAS as a NAS, not as a hypervisor.
Anonymous No.107149696 [Report] >>107149845 >>107150621
>>107149656
>Install proxmox, and if you want a separate NAS appliance run truenas in a VM and pass disks through to it.
Side note if you do this: SMART data/monitoring does not work. It works on Proxmox, but your NAS VM won't see it. You could use an HBA card and do a PCIe passthrough instead so the VM gets direct access to the drives, but you open yourself up to some issues.
Anonymous No.107149709 [Report] >>107149845 >>107150621
>>107149656
Thanks, I have a small nuc type box with a couple of ethernet ports I plan to use as a router.
>>107149688
I want to do this, VM everything on bare metal, but not sure how to deploy it.
Anonymous No.107149845 [Report]
>>107149709

Install a hypervisor bare metal: Proxmox or ESXi 8 (you can find enterprise keys with no expiration; I use ESXi).

Like >>107149696 mentioned, pass the SAS card your drives are attached to through to the TrueNAS VM. This assumes they're connected to a separate HBA and not to the mobo, which can make things more difficult. Then deploy other VMs for your other apps. You can run a bunch of things on a single docker VM with 2 cores and 16GB RAM, so you won't need a lot of separate VMs.
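Rough sketch of both options on Proxmox (VM ID, PCI address, and disk ID are all hypothetical):

```shell
# Option 1: pass the whole HBA through (SMART works inside the guest).
# Find its PCI address first:
lspci | grep -i -e sas -e lsi
qm set 100 -hostpci0 0000:03:00.0

# Option 2: pass individual disks through by stable ID
# (works off mobo SATA ports, but no SMART in the guest):
ls /dev/disk/by-id/
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```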
Anonymous No.107150078 [Report] >>107150532
https://vmware.digiboy.ir/
I found ESXi keys (and ESXi patch ISOs, which incidentally need a support contract to download now) on this random Iranian website.
Anonymous No.107150129 [Report] >>107150178 >>107150691 >>107150755
Bought a ThinkCentre m715q before realizing Ryzen would not be so good for Jellyfin hardware transcoding. Should I return it and get an Intel CPU ThinkCentre? I cannot figure this hardware encoding out for the life of me.
Anonymous No.107150178 [Report] >>107150255
>>107150129
if you can, I would
Anonymous No.107150255 [Report] >>107150691 >>107150715 >>107150755
>>107150178
It’s eBay so I will try. Honestly just spoonfeed me. Is buying a thinkcentre with an intel cpu good enough or should I look for a certain model? Or just forget thinkcentre entirely and buy a different mini pc?
Anonymous No.107150532 [Report] >>107150586
>>107150078
Sussy
Anonymous No.107150586 [Report]
>>107150532
md5 matches what Broadcom publishes so seems okay to me
Anonymous No.107150621 [Report] >>107150686 >>107150719
>>107149688
I ran stuff directly on proxmox and ran an LXC for my SMB/NFS shares for years. It worked fine. I use truenas in a VM now, but it's mostly an ease of management thing. Nothing fundamentally changed. The hype around keeping them separate is overblown if you ask me.

>>107149696
>SMART stuff
This matters less than you'd think it does. SMART is still reported to the hypervisor, so if you have alerts set up they'll still get sent to you. That being said, I've completely given up on caring about SMART.

The problem isn't that SMART gives you false positives, it's that it gives false negatives. If SMART says a drive is having problems, it's almost always dying. If SMART says a drive is healthy, it /may/ be healthy, but you don't know that it is. I don't think I've ever seen a system where ZFS noticed issues after SMART did. I've had drives do all kinds of stupid bullshit and swear they were healthy. Not any drives in recent years, but I've got a modest collection of bad drives that do all kinds of weird shit like randomly writing to the wrong sectors. ZFS, Btrfs, and my limited testing with bcachefs all correctly say those drives are fucked, and they do it almost immediately.

Never trust physical storage.
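If you want to compare the two yourself, it's basically this (device and pool names made up):

```shell
# The drive's opinion of itself:
smartctl -H /dev/sda

# What the filesystem actually verified -- a scrub re-reads all data
# and checks it against checksums:
zpool scrub tank
zpool status -v tank   # READ/WRITE/CKSUM columns show the real damage
```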

>>107149709
pfsense/opnsense are both great. They'll run on just about anything and have pretty good out of the box driver support for most common NICs.

I personally virtualize my router, but that's because I have a dedicated NIC on the board just for Proxmox's webgui/remoting in, I set up network aliases so that its ID will never change, and that's plugged into a separate physical network for management stuff. Between that and a keyboard/monitor at the rack, I can recover stuff if shit truly hits the fan. You can get it working if you don't do that, but if/when stuff does break you're in for a really fucking bad time. A dedicated device for your router is the way to go for people that aren't prepared for a long and shitty day.
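The dedicated-management-NIC part is just a few lines in /etc/network/interfaces on the Proxmox box (interface names and addresses here are hypothetical):

```
# Management bridge on its own NIC and physical network;
# the router VM gets the other NICs/bridges.
auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.99.0.2/24
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
```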
Anonymous No.107150686 [Report] >>107150855 >>107150940
>>107150621
>This matters less than you'd think it does. SMART is still reported to the hypervisor, so if you have alerts setup it'll still get sent to you.
Yeah, but the problem with this is that VM stuff should stay within VMs. If I set up an alert because my HDD is dying, I want to set it up on my NAS VM rather than my hypervisor. But sure, let's say you set up a ton of cron jobs on your hypervisor. I would still like to have SMART data. I have external fans controlled off HDD temps, speeding up and slowing back down as needed. If you're running fans at 100% it's not a problem, but my server is in my room, where I sleep. This keeps noise levels down.
Also I trust SMART data more than I trust TrueNAS or unRAID's SMART reporting tool.
Anonymous No.107150691 [Report] >>107150855
>>107150129
>>107150255
Intel QSV largely just works out of the box. I haven't played with the AMD encoding options, but friends that have mostly say it works, just not at as high quality as QSV. You might not actually need hardware transcoding if stuff is just going around the local network. Sure, you'll use a lot of bandwidth sending raw 4k over the network... but if it's all on the LAN, who gives a shit?

What cpu is in it? If it's a 2400GE that's roughly comparable to a 7700T. It has slightly higher multithreaded performance and more or less identical single thread performance. That should be supported by jellyfin.
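Before fiddling with Jellyfin settings, it's worth a sanity check that VAAPI even sees the iGPU. Works for both Intel and AMD; Debian-ish package name assumed:

```shell
apt install vainfo
ls /dev/dri/    # should list a renderD128 device
vainfo --display drm --device /dev/dri/renderD128   # lists supported codec profiles
```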
Anonymous No.107150715 [Report]
>>107150255
NTA. I got a ThinkCentre m720q with an Intel cpu and it works great for Jellyfin transcoding. Quite compact and no issues so far.
Anonymous No.107150719 [Report] >>107150855
>>107150621
I'd use OPNsense over pfsense for many reasons
Anonymous No.107150755 [Report] >>107150855
ZFS tools should be integrated into Proxmox well enough that I don't know why you would bother using a VM for storage specifically.

>>107150129
Congratulations on the start of your cluster.

>>107150255
Really depends on what gen of intel you can get. Or if you want to splurge on the variants that have the pcie riser in it. Some variants have dual m.2 nvme slots, but I'm assuming your storage is handled elsewhere?
Anonymous No.107150855 [Report]
>>107150686
>Also I trust SMART data more than I trust TrueNAS or unRAID's SMART reporting tool.
My point is that I don't trust such reporting at all. I trust ZFS to detect errors. I stopped trusting drive self reporting over a decade ago.

As I said, I've got a collection of drives that claim they're healthy that aren't. I've got a few more that claim they have some issues when in actuality they're scrap. I probably should throw them away, but I've used them in the past to demonstrate this to people. I've had multiple conversations where I was told that this is impossible or exceedingly rare, and people only listened when I brought in a box of bad drives and showed each one of them fucking up a live system. Some people need to see it to believe it.

>>107150691
Forgot to mention. Check here for walkthroughs on testing stuff.
https://jellyfin.org/docs/general/post-install/transcoding/hardware-acceleration/#configure--verify-hardware-acceleration

>>107150719
As would I, but there are some advantages to pfsense.

>>107150755
>ZFS tools should be integrated into Proxmox well enough that I don't know why you would bother using a VM for storage specifically.
I largely agree. I swapped to using a truenas VM for managing my large zpools because it's easier to manage a lot of minor automation stuff. I used to go through and configure sanoid, then some custom python scripts, and finally I just swapped to truenas because I can configure stuff in a few seconds and it does the job just fine. Truenas has had some slow rollouts of certain zfs features via the gui, but at least it has a functional one for managing things like autohiding empty snapshots, automating offsite backups, and such.
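For anyone curious, the sanoid setup I'm talking about was nothing more than a config file plus its cron/systemd timer (dataset names made up):

```
# /etc/sanoid/sanoid.conf
[tank/media]
    use_template = production
    recursive = yes

[template_production]
    frequently = 0
    hourly = 24
    daily = 30
    monthly = 3
    yearly = 0
    autosnap = yes
    autoprune = yes
```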
Anonymous No.107150940 [Report] >>107151941 >>107152655
>>107150686
>my server is in my room, where I sleep.
Anonymous No.107151282 [Report]
I still don't get this
>trash guide, recyclarr, profilarr, niggar, redditar
obsession, can't this shit just be built into radarr and sonarr by default?
I did a test run of this shit. It added a gorillion custom formats, I picked one of the 1080p profiles, requested a movie, and it queued a torrent with 3 seeds that will download in a week if I get lucky. My first test media server with garbage setup and configuration would have picked the 3k-seed one and been done in an hour max, and this thing seems like it will make it even more convoluted to get dubs in the language I want.
Anonymous No.107151941 [Report]
>>107150940
I do this. I hear the drive clicking every night.
Anonymous No.107151998 [Report] >>107152477 >>107152516
Anyone have a less... professional setup?

me
>thinkpad t460 with Debian server installed
>connected to network switch with its own subnetwork
>connect main PC to that as well
>Laptop only has 250GB HDD so put all my storage on main desktop and run jellyfin off of that
>run all other services off of laptop
>wireguard on my home router to access from outside
>otherwise everything is on the same subnet so I just alias them in my hosts file
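The hosts-file part is literally just this on each machine (addresses and names made up):

```
# /etc/hosts
192.168.1.10   laptop nextcloud.home
192.168.1.20   desktop jellyfin.home
```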
Anonymous No.107152016 [Report]
>>107143083
I don't keep my server behind DNS - I wireguard in to my network and my "DNS" is my hosts file
Anonymous No.107152477 [Report]
>>107151998
your setup is doing its job and we are all very proud of it
Anonymous No.107152516 [Report] >>107152571 >>107152871 >>107152891
>>107151998
>wireguard on my home router
i never understood this part
for what purpose
why not wireguard to individual devices
Anonymous No.107152571 [Report] >>107152891
>>107152516
>don't expose one port
>expose several ports
makes way more sense, sure
Anonymous No.107152655 [Report]
>>107150940
im poor
dont make fun of me
Anonymous No.107152853 [Report]
>>107125968
Just use above Gentoo/Artix/Devuan.
Anonymous No.107152871 [Report]
>>107152516
I have wireguard running on my router
Anonymous No.107152891 [Report]
>>107152516
I actually have multiple sites connected with point-to-point wireguard VPN tunnels on the opnsense boxes that control the networks. Also >>107152571.
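A point-to-point tunnel like that is just one peer entry per side. Site A's half looks like this (keys, IPs, and hostnames are placeholders; mirror it on site B with the subnets swapped):

```
[Interface]
PrivateKey = <site-a-private-key>
Address = 10.255.0.1/30
ListenPort = 51820

[Peer]
PublicKey = <site-b-public-key>
Endpoint = site-b.example.net:51820
AllowedIPs = 10.255.0.2/32, 192.168.2.0/24
PersistentKeepalive = 25
```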
Anonymous No.107152902 [Report] >>107152997
Getting more into home automation. Home Assistant is neat but it's kind of braindead. You have to write all the automations yourself and the inference detection is minimal.

Working on IRC bots instead which report info on my houseplants and such
Anonymous No.107152997 [Report]
>>107152902
>Working on IRC bots instead which report info on my houseplants and such
Based
Anonymous No.107153696 [Report] >>107153811 >>107155086
New thread?
Anonymous No.107153811 [Report] >>107154437 >>107155086
>>107153696
I'll host the logo
Anonymous No.107154437 [Report] >>107155086
>>107153811
bgp announcement when?
Anonymous No.107155024 [Report] >>107155032 >>107155086
make a new thread so I can ask my fucking noob question
Anonymous No.107155032 [Report] >>107155042 >>107155086
>>107155024
no this is the last thread ever
Anonymous No.107155042 [Report] >>107155080 >>107155086
>>107155032
noooooooooooooooooooooooooooooooooooooooooooooooookay so what would be the least retarded way to upgrade my zfs pool in proxmox, should I make a new pool altogether or add new larger drives to the pool
Anonymous No.107155080 [Report] >>107155087
>>107155042
yes
Anonymous No.107155086 [Report] >>107155100
>>107153696
>>107153811
>>107154437
>>107155024
>>107155032
>>107155042
I didn't sign up as the new bread maker when I made this thread. You make it.
Anonymous No.107155087 [Report]
>>107155080
thank you so much anon I feel so educated
Anonymous No.107155100 [Report]
>>107155086
This is outrageous. It's unfair.