
Thread 106335912

80 posts 12 images /g/
Anonymous No.106335912 >>106336634 >>106337253 >>106337935 >>106339136 >>106339734 >>106340197 >>106340295 >>106340298 >>106340534 >>106340549 >>106340823 >>106342993 >>106343488 >>106344672 >>106344681 >>106345795 >>106346131 >>106346543 >>106349314 >>106350269 >>106352070 >>106352560 >>106353999 >>106355412
Opinions about kubernetes?
Anonymous No.106336114 >>106336507 >>106355412
Anonymous No.106336507
>>106336114
I form my opinions on tech based on this guy
Anonymous No.106336634
>>106335912 (OP)
If you have a use case for it, it’s great. Not so great if you need something more basic. Suffers a bit from lacking stuff that should be in the core - like seriously who the fuck thought letting someone else make helm was a good solution?
Anonymous No.106337253
>>106335912 (OP)
I hate that I need to know it to land a job
Anonymous No.106337935 >>106339101 >>106339725 >>106340261 >>106351835 >>106355417
>>106335912 (OP)
How do you guys manage your secrets?
Anonymous No.106339101
>>106337935
Just use vault
Anonymous No.106339136
>>106335912 (OP)
It started out as a python script loop to restart vms that died.
Then it bloated out uncontrollably.
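The ancestral form was roughly this (a sketch of the kind of loop anon means, not actual Borg/k8s code; all names made up):

```python
import subprocess

def ensure_running(specs, procs):
    """specs: {name: argv}, procs: {name: Popen}.
    Restart anything that never started or has died; run this in a loop
    and congratulations, you have invented the orchestrator."""
    for name, argv in specs.items():
        p = procs.get(name)
        if p is None or p.poll() is not None:
            procs[name] = subprocess.Popen(argv)
    return procs

# the whole "platform":
#   procs = {}
#   while True:
#       ensure_running({"web": ["./serve.sh"]}, procs)
#       time.sleep(5)
```

Everything after that (scheduling, networking, storage, RBAC) is the bloat.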
Anonymous No.106339725
>>106337935
ksops
ESO + Azure Key Vault
Anonymous No.106339734 >>106340295
>>106335912 (OP)
Pays my bills
Anonymous No.106340197 >>106340219 >>106340227 >>106340295 >>106351431 >>106351843
>>106335912 (OP)
>operate it yourself and need uptime
allocate an additional team just for kubernetes ops

>use it a "as a Service"
Only makes sense if you operate some huge compute clusters with several heterogeneous services that need to be spread over compute nodes, you need a service mesh, metrics plugins, node affinities etc. etc.
It's the "industrial farming conglomerate" level of "cattle, not pets".

For very homogeneous fan-out things without bells and whistles you can just use scalable container services or boot up VMs on demand directly.

For small scale docker-compose or systemd and a bit of ansible grease will do.
Anonymous No.106340219 >>106351941
>>106340197
I forgot to mention: Helm is cancer
Anonymous No.106340227
>>106340197
Getting access to Operators is enough reason to fuck with kubernetes imo.
Customized workflows become very messy with docker-compose + ansible, and are usually much simpler using the operator for the given application.
Anonymous No.106340261 >>106344305
>>106337935
just put them into secrets lol who cares
that's what secrets are for
>noooo you have to store in Secret Management System for security sir!
Anonymous No.106340295 >>106340340
>>106335912 (OP)
The best way to get HA and load balancing. Also allows for horizontal scaling very easily.

>>106339734
Also based for homelab use.

>>106340197
>scalable container services
Like... Kubernetes?
>VMs on demand
Tons of overhead and is objectively slower than just spinning up another pod. Imagine having to run an ansible playbook for 30 minutes.
Kubernetes is free and eliminates a ton of issues and makes deployment more efficient with gitops.
When a business relies on any kind of software it cannot go down, ever. That's worth hiring a small ops team to handle a kubernetes cluster.
Anonymous No.106340298
>>106335912 (OP)
unholy abomination in service of satan (cloud computing and saas). together with docker makes devs' job a living hell.
Anonymous No.106340340 >>106340441 >>106340687
>>106340295
>Like... Kubernetes?
ECS

>Imagine having to run an ansible playbook for 30 minutes.
I "script" VM bringup in sloppy Rust just for the threads and error-handling.
Cloud VM is ready in 2 minutes, including package installation and ~60GB app assets download.
Ansible is for things we're running on-prem (for compliance reasons) or on Hetzner (AWS tax too high).
Anonymous No.106340441 >>106340622
>>106340340
ECS is literally a proprietary Kubernetes alternative by Amazon, but less capable. Managing ECS through a UI is 90% the same as managing Kubernetes, just with different names for the same concepts, except it's proprietary and run by Amazon instead of yourself.
Anonymous No.106340534
>>106335912 (OP)
I don't know it, and I don't care to know it.
Anonymous No.106340549
>>106335912 (OP)
solution looking for a problem to solve
Anonymous No.106340622
>>106340441
>but less capable
only using it for "start 10 containers of this, put them behind a load balancer", that's it.
Simplicity in terms of components we actively use is the goal here, then I can also run N=1 locally in docker.

>UI
Using CDK, it's just a few lines of code that way. If I did it with managed kubernetes I'd still have to set up the non-EKS parts in AWS anyway.


My approach is to get things to run local first, then put it in the cloud and only replace a few pieces with cloud services (e.g. filesystem -> blob storage, config file -> secrets manager) and then scale it up. Keeps things portable and easier to test.
This works for my problem domain because it's really simple fan-out with little shared state.
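The "replace a few pieces" part is just coding against an interface. A minimal sketch of the idea (class and method names are made up, assuming a blob-store-shaped dependency):

```python
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    """The seam: local filesystem in dev, an S3/GCS-backed impl in the cloud."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalStore(BlobStore):
    """Local-dev implementation; swap in a cloud-backed one at deploy time."""
    def __init__(self, root: str):
        self.root = Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()
```

App code only ever sees BlobStore, so moving to the cloud is one constructor swap instead of a rewrite.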
Anonymous No.106340687 >>106340738
>>106340340
>ECS
do people actually use this over EKS, or are you taking the piss? it's like training wheels without the bike.
Anonymous No.106340738 >>106340766
>>106340687
It just werks, for simple cases
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs_patterns.ApplicationLoadBalancedFargateService.html
Anonymous No.106340766 >>106340804
>>106340738
ECS + serverless? do you have money growing on trees?

this just gets weirder and weirder.
Anonymous No.106340804
>>106340766
>do you have money growing on trees?
Kinda yeah, the costs for the web services are a rounding error compared to the GPUs we also rent.
Anonymous No.106340823
>>106335912 (OP)
Gives me money for food and shelter.
Anonymous No.106342993
>>106335912 (OP)
>Opinions about kubernetes?

One of the worst pieces of garbage software i've used in a professional context.

Stay away from it.
Anonymous No.106343488 >>106344356
>>106335912 (OP)
You don't really need it.

>muh gulag uses it
>muh scale
>muh nine-nines HA

You seriously don't need it. That being said, it's alright and it's cool, but the documentation is horrible, it's way too complicated to get things done in it, and the ecosystem it breeds is terrible, because it's by and for pretentious Eurasian asshole bigtech techies. "Oh fuck I need to... oh snap! They haven't defined it in the API yet so I guess I'm screwed whoop de doop!"

Again, it does work and it is pretty cool once you get used to it, but the learning curve is way too steep, the community is not very friendly to newbies, and you need to learn to talk the talk and walk the walk, but once it clicks it is cool. I run a homelab on kubernetes and by far it's the best uptime I've ever had in my life running my apps (plex, seedbox, websites, etc.). It is worthwhile and by all means if you can work with it, do it, but it's not the second coming of Christ like googlers say.

I also recommend k9s as an alternate CLI client. I stopped projectile vomiting at kubernetes after I picked it up, and now I love it a whole lot.
Anonymous No.106344305
>>106340261
It's not only about the security for me, it's about ease of use and decent backup and restore capabilities.
Anonymous No.106344356
>>106343488
This guy gets it. Homelab with k3s and k9s is so comfy and just werkz. Also feels nice getting paid 300k a year and dealing with large systems.

ECS and fargate / whatever docker vm wrapper stuff is kinda trash, just run docker lmao
Anonymous No.106344672 >>106344924 >>106350201 >>106350376
>>106335912 (OP)
How do I get started with k8s? Can I spin up a basic cluster on an old machine with one or two drives?
Anonymous No.106344681
>>106335912 (OP)
only non-productive people hate it
Anonymous No.106344924
>>106344672
Yeah, try out k3s. Has a simple setup script. There's also an ansible playbook for it too.
Anonymous No.106345120 >>106345450 >>106351835
The devops guys at my company love it.

>"It makes things scalable!"
>So, can I get more CPU for this service?
>No, the cluster is full.

>"It's high availability!"
>Random shit is constantly down for dozens of different reasons

>"It's modern"
>Cool, can I still do that thing that Unix has provided since 1970
>Well, actually, you need to install this project from github, but make sure to pick the project from the master branch because the current release is incompatible

It's for devops people justifying their own employment and larping as FAANG engineers.
Anonymous No.106345346
Overengineered piece of shit, that many DevOps fags can't even use properly. When you see production OracleDB running in k8s, you will realize that hell is real.
Anonymous No.106345450 >>106345499
>>106345120
Enlighten us. What would be an easier way to deploy, let's say, 200 services in your company, where each service might release a new update every week, with the least amount of downtime?
Anonymous No.106345499 >>106345767
>>106345450
My previous company was considerably bigger (Fortune 500 company), operating at a much larger scale, with our own hardware and data centers, and we deployed shit using Puppet, which I'd say is an order of magnitude less complex than all this Kubernetes, Helm, Terraform, etc. nonsense.
Anonymous No.106345538 >>106345754
it runs so well i run the risk of forgetting how i set it up when something breaks
Anonymous No.106345754
>>106345538
That's what ArgoCD is for, or one of the other gitops solutions.
Anonymous No.106345767 >>106352012
>>106345499
Sure it's easier... at first. But when you want to do the stuff that Kubernetes excels at, like rolling releases, self-healing, or scaling, you end up relying on either Puppet Bolt and/or various scripts. Is it really easier then? And a lot of the time you probably still want to use Docker to containerize your application anyway. You can use Docker with Puppet, but why not just use Kubernetes at that point?
Anonymous No.106345795 >>106345925 >>106346312
>>106335912 (OP)
Fuck K8s. Made every company think they are google-scale or some shit, leading to over-engineered abominations that somehow I end up needing to fix.
Anonymous No.106345925 >>106346050
>>106345795
We got like 100 websites at my company, and tons of services doing data processing and whatnot for each of those sites. Kubernetes is a godsend. It beats the rickety ass multiple Docker Swarm setups that we had.
Anonymous No.106346050
>>106345925
Recently had to converge a mix of VirtualBox Windows VMs, Docker Swarm clusters, Docker Compose and some random EC2s. Feel like I can run the entire world on K8s now.
Anonymous No.106346131 >>106346341 >>106350296 >>106351835 >>106352029
>>106335912 (OP)
Solution for a problem that doesn't exist
Unreliable shit in hands of 90% people
If you want to have less reliability (because it's not easy to make it reliable) go ahead
Every company had different ways of working with it, each was less reliable than plain old JEE web server(s) and deployment of wars, ears, jars with some Jenkins
If you say it doesn't scale, you are wrong. JBoss HA is a thing. Splitting wars across different web servers and configuring service discovery is not rocket science either, and is much better than retarded non-typesafe JSON endpoints; RMI was so much better.
Chances that your people actually grok Kubernetes are slim so you can expect shit reliability with expensive people still having no idea what they are doing
No wonder people waste so much money on bullshit Cloud, it's because they are incapable of working with it
I worked at 3 different companies where we had kubernetes and I've never seen one without retarded issues that wouldn't exist if it wasn't used
If I as a dev need to learn this shit and their custom way of working with it to help devops resolve their issues it means they failed at their job and I was the dev+ops there
Most devops are ops-dev that picked up a fancy tool without having prerequisite knowledge and most companies have such people

When I look at the software today I see that Java was ahead of its time for most things:
Java desktop apps were considered slow, but now we have bullshit electron which is even more bloated. The JVM improved so much and the computers are so much faster that these applications today would fly.
Java EE and being able to modularize systems to simple artifacts you deploy to server which the server recognizes and reloads is basically what microservices premise is, but is actually not awful.
JNDI configuration is unfathomably more sensible than bullshit yamls.
RMI just works, is typesafe, seamless serialization, no coercion of stuff to string (JSON).
Anonymous No.106346312
>>106345795
When you do it right it does work though. The hard part is nobody knows how to fucking configure Kubernetes, not one single person so you rent one and then that gives you various degrees of lock-in, etc.

When you actually know what you're doing then Kubernetes lets small startups easily scale their dogshit code as they grow.
Anonymous No.106346341 >>106347121
>>106346131
>Java EE and being able to modularize systems to simple artifacts you deploy to server which the server recognizes and reloads is basically what microservices premise is, but is actually not awful.
Didn't that require something like Jetty to pull off properly, and require all sorts of knowledge about Java internals to tweak its memory allocation and garbage collection, etc., otherwise you can be sure it's going to run like crap?

The concepts were sound though.
Anonymous No.106346543
>>106335912 (OP)
I watched the documentary not long ago and they tell how Pokemon Go was the first real production product that relied on k8s.
Made me wonder if that's part of the reason why Pokemon Go became as big as it is, i.e. Google really bankrolled them behind the scenes so that k8s wouldn't make a bad first impression.
Anonymous No.106347121 >>106351835
>>106346341
Yes, you need to understand your application, but if you care about performance you need to know your application regardless of the tech. You can use JMX with Zabbix or other monitoring solution to gather data. You can also look at the heap dumps to realize which objects are fattest and think if you can make them leaner. You can also find memory leaks both from charts and heap dump. Optimization is not a thing you do once. It's maintenance.
Anonymous No.106349314 >>106349458
>>106335912 (OP)
I think using it is probably overengineering.
Anonymous No.106349458 >>106355407
>>106349314
For running a WordPress blog yeah.
Anonymous No.106350201
>>106344672
https://github.com/k3s-io/k3s-ansible

stand up X machines, write down their IPs
add them to the config

ez
Anonymous No.106350269
>>106335912 (OP)
it works.
if your software sucks, it doesn't.
If running kubectl rollout restart causes your software to disconnect users, your software is not k8s ready. fuck you.

that's it.
Anonymous No.106350296 >>106351843
>>106346131
>Java
yep, found your problem.
like I tell everyone, if your software is shit, it won't work out.

universally, without fail, basically anything java is probably shit and shouldn't move to k8s.
Anonymous No.106350376
>>106344672
use kind or minikube locally. podman play is an alternative.
then you do real VMs or a cluster of shitty machines you have, like RPis or whatever.
Anonymous No.106350435
how to store stuff on this thing? longhorn etc
Anonymous No.106350451 >>106350700
I use docker :D
Anonymous No.106350700
>>106350451
If it works that's good. No need to overcomplicate things for no good reason. But "just fiddling around" is a good enough reason too.
Anonymous No.106351431
>>106340197
>allocate an additional team just for kubernetes ops
Correct.
Anonymous No.106351835 >>106351843
>>106337935
just use vault anon, it is literally as good as people make it out to be

>>106345120
>t. retard works in company full of incompetent people and has no idea what he's talking about
sorry your workplace is ass, though it seems in line with hiring you if your takeaway is "yeah every single large company and their entire infra team is retarded"

>>106346131
>Java EE and being able to modularize systems to simple artifacts you deploy to server which the server recognizes and reloads is basically what microservices premise is, but is actually not awful.
I'm the biggest java shill you will find, but cmon now that's just bs
the application server era was fucking dogshit and thank God we are done with this shit

>>106347121
none of this has anything to do with Java EE though, that's just Java performance work in general
needing to hand-manage a Tomcat/jetty/undertow/jboss/whatever that has 34 contexts running on it, each with their own EE compatibility version etc was a monumental pain in the ass though
and noisy neighbor problems are already poorly handled by Linux, so everything in the same yuge JVM led to exactly the kind of shit you can guess
Anonymous No.106351843 >>106351915
>>106351835
(cont.)

>>106350296
it's got nothing to do with Java, but the crux of your message remains correct
if something "can't run on k8s" it means one of:
1. it can't tolerate more than 1 instance running, or
2. it can't tolerate any kind of failover
and in either case, it's not a k8s problem but a "your software is a piece of shit" problem

>>106340197
>For small scale docker-compose or systemd and a bit of ansible grease will do.
yeah nobody here is arguing that retards running 3 pieces of software need k8s
but any large-ish platform quickly reaches a point where this is a nightmare to actually operate
there's a good reason infra people fucking love k8s and it's not money or job security
it's that it literally eliminates 50% of the tedious and error-prone work that we had before
Anonymous No.106351915 >>106354328
>>106351843
(cont. cont.)

There are a handful of fair criticisms of k8s tho:
1. the rbac system is not well documented, and often quite limited or counter intuitive
=> everyone uses OPA, which is an annoyingly complex piece of software

2. network isolation and roundtrip optimization features are equally poor + cluster peering doesn't exist natively at all
=> cue service meshes, which are also fairly annoying to work with, but arguably only necessary at serious scale, and not really more complex than existing SDN solutions network engineers are accustomed to, so it's a bit of an "eh"
=> (internal|external)TrafficPolicy on services are a retarded band-aid and everyone just pretends it's fine

3. manifests are verbose as fuck
=> there's no easy solution, and templating solutions (Helm charts, Jsonnet, ...) are absolute dogshit to maintain
=> the better solution is everything-operator+CRD, but that's expensive in dev time, and doesn't help with standardising things (pdbs, priority classes, ...)
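One more escape hatch for the verbosity, short of a full operator: generate manifests as plain data in a real language and pipe the JSON to kubectl apply -f - (JSON is valid YAML). A sketch of the idea, not an endorsement of any particular tool:

```python
import json

def deployment(name, image, replicas=2, port=8080):
    """Build a Deployment as plain data; functions replace templating."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

print(json.dumps(deployment("web", "nginx:1.27"), indent=2))
```

The standardised bits (pdbs, priority classes, ...) become shared functions instead of copy-pasted YAML.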
Anonymous No.106351941
>>106340219
FUCK HELM
use werf, I am not a flant employee btw
Anonymous No.106352012 >>106352297
>>106345767
>Like rolling releases and self-healing or scaling

Something that old Elastic Beanstalk does with a few clicks and no bullshit, and that a one-person team can manage easily.
Anonymous No.106352029
>>106346131
>Chances that your people actually grok Kubernetes are slim so you can expect shit reliability with expensive people still having no idea what they are doing
>I worked at 3 different companies where we had kubernetes and I've never seen one without retarded issues that wouldn't exist if it wasn't used

I can confirm. Exactly this is what happens.
Anonymous No.106352064
Me proud 2 announs exgonad is now part ob retardia icon pack

U in da hall on famoose

TA DAAA

>>106348068
Anonymous No.106352070 >>106352123 >>106352333
>>106335912 (OP)
Kubernetes is like teenage sex. Everyone thinks it's great, everyone says they are doing it, everyone thinks everyone else is doing it, but no one is actually doing it. I learned it to get jobs but none of the companies I applied for actually use it (but they still list it as a requirement).
Anonymous No.106352123
>>106352070
Anon, I'm using it right now thoughbeit... I also manage shit that isn't kubernetes that wishes it was.
Anonymous No.106352297
>>106352012
I'm assuming beanstalk is similar to Azure App Service. Which is nice if you have a few services running in your company that all work independently. But is it also nice when you have around 100 or even 1000 services running that also may want to talk to each other?
Anonymous No.106352333
>>106352070
>no one is actually doing it.
anon...
Anonymous No.106352560
>>106335912 (OP)
IT'S TRASH, and it's even worse when your devops guys are subpar.
Anonymous No.106353999 >>106355019
>>106335912 (OP)
Opinions on Ceph? I don't fully understand the difference between it and k8s. In terms of jobs, which one is better to know?
Anonymous No.106354328
>>106351915
>3. manifests are verbose as fuck
Fucking love llms for this. Just tell it what you want and bam, right there in 5 seconds. Read it over to double check and make tweaks, then deploy it.
Anonymous No.106355019
>>106353999
One is a storage cluster, the other a container workload orchestrator. You'll probably want to learn both, but k8s is the one to learn first. Before anything else, though, learn how containers work under the hood.
Anonymous No.106355407
>>106349458
>WordPress
That too is usually overengineering.
Anonymous No.106355412
>>106336114
Please link video
>>106335912 (OP)
Amazing in theory waste of time in practice, same as terraform.
Anonymous No.106355417 >>106355525 >>106356962
>>106337935
We have a GitHub repo called "secrets".
Wish I was kidding
Anonymous No.106355525 >>106356962
>>106355417
Like sealed secrets? Or one mistake and disaster?
Anonymous No.106356962 >>106357679
>>106355417
>>106355525
I bet their situation is more common than you'd think. What's the right way to do secrets at scale though?

Maybe something like Vault?
https://www.hashicorp.com/en/products/vault

But what if you don't want a whole product like that?
Is there some simple software that can encrypt secrets in a git repo, store the key securely on the server (in a TPM maybe?) and decrypt them using that?

I would guess a lot of people store secrets in plain text as root, or in environment variables, with a backup in a (possibly encrypted) git repo, because doing this right is such a pain in the ass.
Anonymous No.106357679 >>106359511
>>106356962
>But what if you don't want a whole product like that?
>Is there some simple software that can encrypt secrets in a git repo, store the key securely on the server (in a TPM maybe?) and decrypt them using that?
Maybe we could take the secrets, and seal them somehow...
Anonymous No.106359511
>>106357679
That's what a TPM is for but some software stacks are retarded and want plain text secrets or environment variables.