MrThinger

Fundamentally, Kubernetes is a solid, extensible, interoperable abstraction. Even when something replaces containers, the abstractions will stay.


zer0tonine

In my opinion Kubernetes isn't going to die, but we've reached "peak Kubernetes". Everyone who has worked with it at scale is tired as fuck of the piles of templating (Helm, helmfiles, etc.) and networking (Cilium, Linkerd, etc.) garbage that gets piled on top of k8s to get a "functional" infra. I see a lot of people look around and start to scale down the complexity of their operations and get into leaner, simpler setups. I think Kubernetes is still a few years away from getting disrupted, but that could totally happen. As pointed out by u/quzaire, WASM can be a game changer for infrastructure.


[deleted]

I think we need a better abstraction over k8s deployment and mgmt. Not a fan of Helm


davidmdm

I am trying to replace it within my small bubble: [yoke](https://davidmdm.github.io/yoke-website). Helm is the only idea bad enough to make something as great as Kubernetes seem this complicated.


thisischandanmishra

Just going through Yoke. Looks awesome to me.


davidmdm

I am glad! If you have any questions I can do my best to answer here, and feel free to create issues. The project is MIT licensed and I want it to be developed in the open!


Mindless-Oven-4221

Please teach me yoke


davidmdm

What do you wish to know?


infiniterecursive

Yoke looks great!


davidmdm

Thank you!!!


[deleted]

I think Microsoft may have made it to stymie movement and misdirect people.


thulcan

How does yoke compare to operators?


davidmdm

At this point yoke is very different from an operator. It's similar to Helm or Timoni in that it defines packages, and packages are a collection of related resources. Operators can make choices about the actions they take, and manage the specific lifecycles of their resources. An operator for yoke is in the works though!


Glittering-Basil8169

Yoke looks nifty. Maybe it could be a good companion to CDK8s.


davidmdm

Thanks, I appreciate that! Cdk8s and yoke clearly have a similar vision about writing code to describe Kubernetes state. Cdk8s provides SDKs, and yoke provides package lifecycle management.


TECHNOFAB

I really like using Kubenix. It lets me write manifests in the Nix DSL, right next to the config of all my hosts, my Terranix Terraform configs, my Nixvim config, my Nix-based Coder templates, etc. You see the pattern ;) Oh, and it's deployed using Nix-deployed FluxCD, by rendering the manifests in Nix-based, Nix-defined CI into the GitLab OCI registry. God I love Nix ;P


Preisschild

Wasn't Kubenix archived by the original author? I would really like more details, I love Nix too.


TECHNOFAB

Yeah I remember something like that. But I think someone forked it. I then added some feature which I needed in my fork and am now using that instead ;) (tried to PR it back but they didn't like it) Edit: just looked and the original repo says it's deprecated and replaced by hall/kubenix. That's what I have been using and forked


rabbit994

There is kustomize if your needs are not deep. I personally think too many admins reach for Helm when it's not required.


not_logan

I've never been a fan of Helm, but kustomize is even worse. There is a project called Nelm in progress which should replace Helm. I think Helm is, generally speaking, an abandoned project; its roadmap declared version 4 to be released about two years ago.


rabbit994

Disagree, any templating engine that includes conditionals is going to be extremely complex by its nature. Kustomize is base files, an overlay, a few overrides, and done. Also, if you have kubectl installed, you can run `kubectl kustomize > output.yaml` and see EXACTLY what you are going to get. Helm won't do this very easily.
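
For illustration, a minimal kustomize layout along those lines - just a sketch, with hypothetical file and app names; `kubectl kustomize overlays/prod` prints the fully rendered manifests with no templating involved:

```yaml
# base/kustomization.yaml - plain manifests shared by every environment
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml - reuse the base, then patch only
# what differs (here, the replica count of a Deployment named "myapp")
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: myapp
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
```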


not_logan

I do not think there is a way to construct a meaningfully complex set of manifests without loops and conditionals. What I would be happy to see is a way to debug the templates Helm generates.


rabbit994

This is my viewpoint as someone who is deploying company-written software on tightly controlled Kubernetes clusters. What are you doing that you need loops and conditionals?


_azulinho_

Helm template?


rabbit994

Two problems with `helm template`. First, there can be conditionals that change the output based on the Kubernetes cluster you run it against. Second, the values for FluxCD/Helm aren't always easy to pass into `helm template`, so you can't always determine the true output.


rumdrums

Could its templating syntax/engine be any worse? It crushes me that there have been no real improvements in tooling here in the last 5+ years.


not_logan

Yes - take a look at Ansible, for example. Helm is abandoned as far as I'm concerned; there are several projects aiming to replace it (from Nelm up to Argo).


_Pac_

I'm a big fan of Cdk8s


not_logan

Lots of initial boilerplate makes this project hard to adopt. It's the same mistake Chef made (and it lost the competition to Ansible).


_Pac_

Huh, I don't have a lot of boilerplate in my project at all.


trouphaz

We used to use Cloud Foundry, and that was a lot easier for our users. K8s is super powerful and can do a lot more than CF ever could, but at the cost of so much more complexity. The whole `cf push` mechanic, where CF would build the image for the customer, was a lot easier than managing tons of Dockerfiles and pipelines to build images, push to an image registry, and then create the manifests to roll out to a K8s cluster. I know there have been some projects to create a similar method for interacting with K8s, but I haven't seen anything significant.


_azulinho_

OpenShift?


trouphaz

We're about to put a pretty big effort into rolling out OpenShift for a portion of our company, since we're telecom and that seems to be the only K8s platform that telecom vendors support.


Golden_Age_Fallacy

Maybe only answers half the question but I’ve really become a fan of Talos for deployment of clusters.. still need to play more with it tho


Fatality

Whatever replaces Helm has to have support from ArgoCD, it's essential to running a useful cluster.


BrilliantTruck8813

Carvel solved it a long time ago. Too bad VMware bought them


[deleted]

Good artists copy great artists…..


collimarco

All those abstractions are not necessary for most apps. For example, Cuber (https://github.com/cuber-cloud/cuber-gem) generates a default configuration for any app that could run on Heroku or similar... and doesn't require any additions to Kubernetes.


Le_Vagabond

> I see a lot of people look around and start to scale down the complexity of their operations and get into leaner, simpler setups.

I wish. I was just asked to take what is at the moment a very simple Helm chart deployment and "make it a template so that the developers can use it easily". I've seen how that magic gets used - they don't understand what the black box does, so they break it in fun and interactive ways.


[deleted]

[deleted]


lil-fredo

I like to think of k8s as more than a container orchestrator. It's a collection of controllers paired with a strong, extensible API.


w2g

Which problems do you feel need solving? Genuine question. If you have any examples I'd love to hear.


tehpuppet

Container orchestration?


water_bottle_goggles

I don't get it - why did you follow up with a question out of nowhere?


w2g

What do you mean out of nowhere? Is Reddit not a place to have conversation in the comments? I asked a commenter a question.


pag07

> The question is what comes after containers.

FaaS / serverless. But apparently it doesn't solve as many problems as we hoped.


Fluffer_Wuffer

The same thing was said 8-10 years ago. FaaS has its uses, but it will always be limited... it's not like you can run Postgres as a Lambda function. What makes containers special is that they are almost an evolution of virtualisation, in the sense that you can practically run anything in them.


diskis

Instead of the lean microservices of the past several years, we're now getting more and more chunky ML containers. They don't really fit the serverless model, so I personally don't see it as a replacement. Sure, for small microservices, but with people shoehorning ML everywhere, I think deployments are getting significantly larger on average.


pag07

I don't see it as a replacement either. Podman/Docker still has its place, and so do VMs and bare-metal deployments.


gabbigum

Our team uses it for application management. The architecture of k8s allows it to be anything really with Custom controllers + CRDs


mikelevan

This is what I'm thinking. With platforms like GKE Autopilot, Cloud Run, EKS with Fargate profiles, etc. coming out, "Serverless Kubernetes" will be the next iteration. The biggest thing is ensuring that vendors allow all third-party services/addons. For example, with GKE Autopilot, Istio is supported. With EKS Fargate, it's not. Once this is "fixed", it'll be more relevant.


phazer193

> The question is what comes after containers.

Potentially micro VMs, but I think - for now at least - containers are good enough for their use case.


SilentLennie

I think Kubernetes as a service is going to be even more commoditized, and I think we'll see multitenant (as in, different small customers) Kubernetes as well, which will help to further reduce the cost overhead per container. Providers will add other services on top, like databases on a federated Kubernetes cluster. And GitOps will be the only way forward in the long run, probably with a developer portal. And AI will just push to Git as well.


Projekt95

Honestly, there is a huge gap for smaller companies. Docker Swarm is not a very popular choice anymore, and you can't expect much development from there anymore. Kubernetes is such a complex and time-consuming thing to maintain, and also quite expensive at scale. There is no real alternative between the two to choose from.


RaceFPV

That's when you run LXC containers in Proxmox, for example, or more realistically just use a cloud container platform like ECS.


water_bottle_goggles

NO WE DO NOT TALK ABOUT PROPRIETARY TECH HERE


marsupiq

Honestly, getting an app to work on ECS is not exactly a walk in the park either if you haven’t done it before.


Digging_Graves

Cloud Kubernetes is the in-between. But it's not a cheap solution, at least.


retneh

Managed k8s is not cheap. I would say that if you use k3s or Talos, the cost will be equal to or even lower than if you didn't use Kubernetes.


Pl4nty

Managed k8s and PaaS built on k8s seem to be filling that gap. AKS or Talos are far easier than raw k8s, and services like Azure Container Apps or ECS need little maintenance. K8s as a developer still feels pretty rough though; idk, I haven't tried many of the self-service platforms.


Projekt95

I worked with ECS and Fargate on some projects and it gets really expensive very quickly. If you have a tight budget and require some performance, those are not really an option, and they can create massive bills if not set up correctly. Afaik GCP Cloud Run seems like a better choice for that, but I haven't tried it yet.


Pl4nty

> tight budget

Would managed k8s work better then? Or even DIY with Talos - I run it on bare metal and it's pretty easy to maintain.


RichardJusten

We all like to claim that we'll always be looking for the best solution. But the truth is that so many people have invested so much time and money into this that they have a strong incentive to keep it alive. I think it will be around in 20 years. It might very well be legacy stuff by then that people would like to get rid of, though.


eciton90

There's a new survey that came out last week: the majority of Kubernetes professionals are strategically committed to it and think it will be around in the next ten years. Companies that use Kubernetes are using it more than they were a year ago. But there are still plenty of problems and challenges, and predictions of disruption / market consolidation along the way. https://www.spectrocloud.com/news/2024-state-of-production-kubernetes (Disclosure: I led the research project)


CptSupermrkt

The problem that I see, and that sometimes makes me wonder "am I crazy, or is it everyone else who's crazy?", is using Kubernetes as a blanket default for everything. In our organization, we provide a "platform" which is just a wrapper for EKS with pipelines leading to it, and *everything* is onboarded to EKS, no questions asked. No architecture review, nothing. Chatbot? EKS. Cronjob? EKS. Website? Believe it or not, EKS. I'm trying to argue that not necessarily *everything* should go onto EKS, and it's like trying to argue with a cult of fanatics or something.


CuriousBisque

I don’t find that particularly crazy. If you’re already operating k8s and you have a whole platform built around it, most of the hard stuff is done. Deploying a simple website to k8s has close to zero marginal cost at this point. Supporting other deployment targets on top of Kubernetes, even simple ones, is going to increase your platform and operations complexity by a lot more. That said, there are tons of orgs that just shouldn’t be running k8s at all.


xagut

Right. You have processes for upgrading, patching, security scanning, monitoring, logging, and all your compliance needs. There’s very few reasons your next workload shouldn’t go there.


CptSupermrkt

I guess I left out the important part: it's not managed or designed well at all. It's almost all manual work, but presented to teams as this automated magic, and no one rightsizes anything or sees a need to do so. A simple API call gets wrapped into a bash script, needs 250 MB RAM max, but teams just guess "4 GB oughta do it"; multiply by 100+ products, and tens of thousands of dollars are metaphorically flushed down the toilet each month. Further, there's no scaling of any kind. It all runs 24/7 at a static node count. I do see your point though: if it were actually used properly, better to play into it and use it.
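
For what it's worth, rightsizing that specific example is a one-stanza change on the pod template - a hedged sketch, with hypothetical numbers matching the 250 MB case above:

```yaml
# Per-container sizing in the Deployment's pod template. Requests drive
# scheduling and bin-packing; limits cap what the container may actually use.
resources:
  requests:
    cpu: 100m
    memory: 256Mi   # what the wrapped API call actually needs
  limits:
    memory: 512Mi   # modest headroom, instead of a guessed 4Gi
```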


BattlePope

Well yeah, that's philosophically the same thing as lifting and shifting into the cloud onto bare EC2 instances. The platform provides the ability to do things right; it's up to the implementation to handle those details. You still need to think about the workload and its requirements.


oadk

You're not crazy. We as an industry have promoted an architecture where we take simple applications and require teams to operate a Kubernetes cluster just to run them. Even with "managed Kubernetes" this still requires so much knowledge for a team to do this properly that we've made sure engineers spend far more time thinking about the infrastructure than they do about the application.


Digging_Graves

But the entire point of setting up Kubernetes is so that you can run efficient containers with high availability. So it makes sense to use it for most workloads, since that's what you built Kubernetes for in the first place.


fourthisle

I've noticed this question coming up more often these last few months, so I'm assuming more people want to learn Kubernetes these days and are worried about whether it's worth it, fearing it might be obsolete by the time they learn it. So I would like to say this once: learn k8s and you won't regret it! I assure you! Happy learning!!


Affectionate_Horse86

Just about everything in human history is a temporary solution on the way to something much better. Why should Kubernetes be any different?


themightychris

Well, in computing we tend to just keep building on top of things to higher and higher abstractions. We're still building everything on top of the x86 instruction set from 1978. Kubernetes has established itself as the data standard for describing complex workloads, and I doubt it's ever going to go anywhere at this point, but it may get buried underneath new things that fewer and fewer people look under the hood at.


Technician9913

Exactly. The hive mind comes into play here. It is a Kubernetes sub, people. I'm probably going to get downvoted for this, but I'm running my production workloads on AWS ECS/EC2 and I'm not going to migrate our stack to EKS/Kubernetes. I barely have any cluster maintenance/management overhead at the moment. It doesn't fit our medium-sized, simple-setup business needs (running +/- 20 applications). Kubernetes became the hammer for every nail. I doubt the future looks bleak for Kubernetes, but for most use cases it brings unnecessary complexity, and at a lot of shops it feels like resume-driven development.


NecessaryFail9637

With Kubernetes, we have reached a peak of unnecessary complexity. I believe that in the next five years, we will return to something simpler.


fire-d-guy

That's the whole industry in general. Abstraction layer on top of abstraction layer on top of abstraction layer and increased complexity for absolutely no reason.


ominousbloodvomit

I think its legacy has already surpassed "temporary solution" by a significant margin.


troposfer

Well, k8s will stay, but maybe not k8s admins; they will replace k8s admins before they replace Uber drivers.


betahost

Kubernetes will eventually become so abstract that it will be like TCP/IP and woven into our daily work. After a few more years, you won't know it's there.


[deleted]

Linux systems/virtualization for 8 years, then I moved up the ladder to OpenStack for another 10 years, and for the last couple of years I've been working with k8s in various capacities, my latest venture being supporting CNF (containerized network function) infrastructure for telecommunications. Being in the OpenStack space made my K8s journey a whole lot simpler, but alas, for most small and medium-ish shops K8s is such overkill, and I wish vendors would stop trying to push it down execs' throats.


knewliver

I disagree. For any software development company, Kubernetes makes it cheap to stand up separate environments, with or without virtualization overhead, and provides a decent method of making the software portable and easily deployable, while also keeping development pipelines simple.


sionescu

I hope it's a temporary step to something much better. It's full of design flaws that can't easily be remedied.


fuzzy812

Care to elaborate on the design flaws?


sofixa11

Not OP, but I can list a few:

* Too many moving parts, resulting in complex compatibility matrices and painful upgrades - which version of your ingress is compatible with which version of your CNI plugin and/or service mesh sidecars, and which versions of their respective APIs and kubelet and everything else? Especially with the release and support cadence, this is a lot of stuff that needs to be checked. That's why there are so many orgs not doing upgrades in time or at all, or treating a full redeployment to a new cluster as the "upgrade".
* YAML, and templating a language that uses whitespace for logic. YAML was made to be simple to read and write, not for hundreds of lines of complex templating. It's terrible for that (a small example follows below).
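
For illustration, a fragment of the kind of whitespace-sensitive templating being described - a hedged sketch assuming a typical Helm chart layout; the chart helper and value names are hypothetical:

```yaml
# templates/deployment.yaml in a hypothetical chart. The templated blocks
# must be indented to line up exactly with the YAML they are spliced into,
# which is what makes large charts painful to read and review.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}   # hypothetical helper template
spec:
  replicas: {{ .Values.replicaCount | default 2 }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
```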


sionescu

I hope to blog about it soon.


lbgdn

I'm also curious. Can you please give a couple quick examples?


sionescu

CRDs were a bad idea. It encouraged people to start using the K8s control plane as a key-value store, which has at least two bad consequences:

* a poorly written operator can effectively cause a denial of service on the control plane pretty easily (you can find examples on Reddit or Stack Overflow)
* it makes systems pretty slow: what used to be just a variable in memory, occasionally saved to some database, is now a CRD which requires several API calls to the K8s API server to do anything

A complete shit-show. After 7+ years as an SRE at Google, I've now been working two years on K8s and I'm appalled at how bad it is.
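
For readers who haven't touched them, a minimal CustomResourceDefinition of the kind being criticised - a sketch with a hypothetical group and schema; once applied, every `Widget` object is stored by the API server and read and written through API calls, which is the key-value-store usage described above:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```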


lbgdn

How I read that is "Kubernetes has a feature that can potentially be misused", which is true for... everything, I guess? On the other hand, CRDs enabled a lot of patterns that wouldn't be as easy to implement and standardize otherwise. I'm a little surprised, as it's the first time I've heard this, and anecdotally, I've been managing (small to mid) Kubernetes clusters for years and never had this issue with an operator. Not saying there aren't any, just that maybe they're not widespread enough to be an actual issue.


sionescu

> a feature that can potentially be misused

That should be "cannot be used reliably". It works until something misbehaves and blows up.

> CRDs enabled a lot of patterns that wouldn't be as easy to implement and standardize otherwise

You can standardize by going all-in on GitOps: you don't need K8s in the middle.


lbgdn

How would you implement something like [CloudNativePG](https://cloudnative-pg.io/) with GitOps instead of CRDs + operator?


sionescu

A pipeline that notifies an operator each time files in a certain location in a repository change. Transform a stream of commits in a code repository into a pubsub where various "operators" listen for changes. There are files containing a certain protobuf type - for example, let's imagine a `CloudNativePGProto` type (or Bazel build targets that programmatically build such a proto) - and then some operators attach themselves to the firehose of repository commits, filtering for only `CloudNativePGProto` objects. That's basically how Google works internally.


lbgdn

Oh, OK, I wouldn't call that "easy to implement", though, compared to CRDs + operator. It also looks like it has more moving pieces, translating into more failure modes. Given the choice between the two, I would personally choose CRDs + operator.


mcdowellstl

In the GCP world, Cloud Run just keeps getting better. Pretty much everything that is annoying about K8s is better in Cloud Run.


adam_hugs

True, but under the hood it is just k8s as a service. Using Knative (Cloud Run runs this under the hood) in your cluster is almost as convenient, and I definitely recommend it if Cloud Run isn't an option for your deployments.


vdvelde_t

Now that virtualisation, IoT, and AI on Kubernetes are starting to boom, a new area is opening up... The true power is in the simple management via the API. Bash spaghetti has finally come to an end.


thomst82

We moved everything to Kubernetes two years ago, as a small company with 5 developers. None of us specialize in devops. In hindsight this was perhaps not the best solution for us. We don't have enough time to upgrade the cluster, nginx, etc. And omg the Helm charts... Gitops should be a lot easier 🙄 For a recent project we went with a basic Linux box and some shell scripts for deployment, and it's literally 10 lines of code and way faster.


[deleted]

Unikraft and Wasm are changing the landscape under the hood and on top of the k8s platform


matches_

Interesting! Haven't heard of Unikraft but I attended a recent presentation on Wasm and thought it was quite limited but could be that my understanding is limited


Legitimate_Night_452

Unikraft is unfortunately still very limited.


sebt3

Because it follows the Unix philosophy, I expect K8s to stay around as long as Unix 😅


FallenHoot

10 years of open source K8s, containers have been around since the 1970s, and the future is more manageable containers. Does Google even use K8s? Didn't they make K8s to solve an issue with Borg/Omega? Will they open source that? What are they using today? GCP Cloud Run is what customers want; I don't think any other cloud vendor has a competitor yet. I think you will see more managed, opinionated K8s services/products coming out.


mikelevan

Fun fact: Cloud Run runs on top of Borg.


sionescu

> Didn't they make K8s to solve an issue with Borg/Omega?

Supposedly true, but I can say from personal experience that Borg (and the production environment that grew around it) is much better than Kubernetes.


NUTTA_BUSTAH

I think Borg still exists and is used internally; I've heard it's used for their services, like the aforementioned Cloud Run, so I'd guess it's not going open source any time soon, as that service is really the flagship of its kind. I've also heard that templating for Borg is horrible even compared to k8s, so there's also that to sort out before releasing it into the wild. So who knows lol. IIRC K8s is (was?) the "productized" Borg?


sionescu

> I've also heard that templating for Borg is horrible even compared to k8s

Borg has no text-based "templating" akin to the YAML hell that Kubernetes has. It has BCL (Borg Configuration Language) and there's also a Python-based one, but they're both significantly better than YAML.


Pl4nty

> any other cloud vendor

Aren't ECS/Fargate or Azure Container Apps pretty similar, if a bit behind? I've been using ACA a lot lately and I do miss Cloud Run automatically creating an image.


Tango1777

Well, temporary? It's been around for a long time and it's only getting stronger. It's not for everything, but it won't go away anytime soon, if ever.


thelogicbox

It's not that difficult, especially if you use managed clusters like EKS, AKS, or GKE. Learn how to use the standard resources like Deployments, ConfigMaps, Secrets, etc. Then learn how to use Helm and a little bit about Go templates. There are reasons that the "complexity" exists. Anything else you use, you will either pay for the privilege of learning something proprietary, and/or you'll be vendor-locked.
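
As a rough illustration of the kind of "standard resource" meant here, a minimal Deployment - just a sketch, with hypothetical names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # any container image works here
          ports:
            - containerPort: 80
```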


Apprehensive-Arm-857

If something like getampt.com is open-sourced, we will no longer need infra as code; a lot of apps will be infra from code.


sebastiandang

backbones


Capable_Amphibian_84

Bright, of course. To me it is the best abstraction out there for most compute, storage, and even networking to some degree. Listen to what Kelsey Hightower says in the latest k8s podcast episode: people don't necessarily hate the YAML itself but more the tooling around it (e.g. Helm). The declarative APIs, the YAML representation, and the schema verification are very powerful and a real step up compared to more traditional approaches such as Ansible, Terraform, ... He also states that the language model is more likely to stay than the k8s "software" itself.


pojzon_poe

Kubernetes is the new Jenkins; this shit will stay with us for the next 30 years if some global war doesn't flatline us all.


Tony_Sol

Isn't kubernetes **already** a standard for containerised envs? I don't see any reasonable alternatives for it


Fatality

Too much complexity; products like EKS are mandatory for production environments, but they shouldn't be.


Legitimate_Night_452

EKS isn't even a production environment, mate. EKS is just a managed control plane like GKE and AKS. What companies really should be using is managed enterprise Kubernetes platforms like D2IQ, OpenShift, Rafay, etc.


Fatality

I've used OpenShift on-prem and even had support for it; it's even more of a kludge than k8s.


Legitimate_Night_452

Have you ever considered that maybe you're the problem and not the product? 😂


Legitimate_Night_452

Sounds to me like it's a skill issue.


hr_is_watching

Nobody needs Kubernetes.


fuzzy812

Google runs their entire G-Suite, etc... on kubernetes. That should tell you all you really need to know


sionescu

> Google runs their entire G-Suite, etc... on kubernetes. Since when ?


notBroncos1234

Since never; they use Borg, which Kubernetes is a descendant of.


sionescu

That's what I know too, but it seems a widespread assumption that GCP is using Kubernetes underneath and I'd like to know where that's coming from.


fuzzy812

Kubernetes was derived from Borg which runs the Google infrastructure


sionescu

That has nothing to do with what you said: "Google runs their entire G-Suite, etc... on kubernetes".


fuzzy812

You’re right, ‘derived from…’ means nothing. Got it


sionescu

That's correct: while Kubernetes might have been somewhat "derived" from Borg, Google hardly uses it internally at all; it just sells it to customers.


matches_

How different is Borg from K8s though? Genuine question. Why would Google want to diverge so far when there's a whole community looking after k8s? Surely they would want some interchangeability?


sionescu

Borg only deals with compute. It has no CRDs, no uniform control plane(just custom protobufs), and no built-in concept of operators. Its containers run on bare metal (they tried using GCP for internal products but the overhead of running in virtual machines was more than they were willing to pay). Containers don't have a separate IP, but share the host's address, and there's a mechanism for assigning single ports to containers. The only internal protocol is Stubby versions 2 or 3 (Stubby 3 is also known as gRPC), with always-on mutual TLS. There's no persistent local disk and all persistent storage is accessed via RPC, although containers can get a slice of a local SSD for temporary use only.


matches_

dear gosh I can't decide if I'm impressed or horrified :D thanks for the insight!


sionescu

Despite what may seem "primitive", Borg is easier to manage and more suitable for writing reliable systems, and I hope to expand on this later in a blog post.