Kubernetes

submitted 1 year ago* (last edited 1 year ago) by Daemon to c/kubernetes

cross-posted from: https://lemmy.ml/post/20234044

Do you know about using Kubernetes Debug containers? They're really useful for troubleshooting well-built, locked-down images that are running in your cluster. I was thinking it would be nice if k9s had this feature, and lo and behold, it has a plugin! I just had to add the plugin snippet to my ${HOME}/.config/k9s/plugins.yaml, run k9s, find the pod, press Enter to get into the pod's containers, select a container, and press Shift-D. The debug-container plugin uses the nicolaka/netshoot image, which has a bunch of useful tools on it. Easy debugging in k9s!
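
For reference, here is a sketch of what such an entry in ${HOME}/.config/k9s/plugins.yaml can look like, modeled on the community debug plugin for k9s (the exact snippet in the cross-posted thread may differ); it shells out to kubectl debug and attaches a netshoot ephemeral container targeting the selected container:

```yaml
# Sketch of a k9s debug-container plugin entry (~/.config/k9s/plugins.yaml).
# Assumes kubectl with ephemeral-container support (GA since Kubernetes 1.25)
# and a cluster that allows ephemeral containers on the target pods.
plugins:
  debug:
    shortCut: Shift-D
    description: Add debug container
    confirm: true
    scopes:
      - containers
    command: bash
    background: false
    args:
      - -c
      - "kubectl debug -it --context $CONTEXT -n $NAMESPACE $POD --target=$NAME --image=nicolaka/netshoot:latest --share-processes -- bash"
```

With that in place, the flow described above (select a container, press Shift-D) drops you into a netshoot shell that shares the target pod's process namespace.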

submitted 8 months ago* (last edited 8 months ago) by Sheldan to c/kubernetes

I recently got Tekton recommended to me as a way to have a more natively connected CI/CD setup (I would probably be more interested in the CI part, as I already have argo-cd running). It seems very interesting, and development seems moderately active. The only thing I am curious about (and why I made this post, besides maybe making more people aware that it exists) is how active the Tekton Hub (https://hub.tekton.dev/) is.

So, maybe somebody here has some information on that. I am not using Tekton (yet), but I read somewhere in the documentation that the hub is supposed to be the place to get reusable components. Seeing the actual activity there turned me off from the project a little, though, because a lot of entries are at version 0.1 and were last updated one or two years ago. Maybe that is only because I am not logged in, but it certainly looks odd.

So, do you have any experience with Tekton? How do you feel about it?
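
For anyone who has not looked at it yet: Tekton models CI steps as Kubernetes custom resources (Tasks and Pipelines), and the hub entries are versioned Tasks of roughly this shape. Below is only a minimal sketch to illustrate the idea; the task name, parameter, and image are made up rather than taken from the hub:

```yaml
# Minimal illustrative Tekton Task; requires Tekton Pipelines to be installed.
# The task name, parameter, and image are hypothetical examples.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello-build
spec:
  params:
    - name: message
      type: string
      default: "Hello from Tekton"
  steps:
    - name: echo
      image: alpine:3.19
      script: |
        #!/bin/sh
        echo "$(params.message)"
```

Hub entries package Tasks like this under a version number, which is why all the 0.1 entries there stand out.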

One of the biggest problems of #kubernetes is complexity. In his #KubeCon keynote, @thockin shares his insights on this. I've seen that time and again with my users, as well as in our Logz.io DevOps Pulse yearly survey. Maintainers aren't the end users of @kubernetes, which doesn't help.

#KubeCon #ObservabilityDay? It’s time to talk about the unspoken challenges of #monitoring #Kubernetes: the bloat of metric data, the high churn rate of pod metrics, configuration complexity, and so much more. https://horovits.medium.com/f30c58722541
#observability #devops #SRE @kubernetes @linuxfoundation

submitted 1 year ago by stoex to c/kubernetes

It’s time to talk about the unspoken challenges of monitoring #Kubernetes: the bloat of metric data, the high churn rate of pod metrics, configuration complexity, and so much more.
https://horovits.medium.com/f30c58722541
#kubecon @kubernetes #k8s #monitoring #observability #devops #SRE @victoriametrics

cross-posted from: https://lemmy.zip/post/3942293

We need to deploy a Kubernetes cluster at v1.27. We need that version because a particular feature gate we rely on was moved to beta and enabled by default starting with that release.

Is there any way to check which feature gates are enabled/disabled in a particular GKE and EKS cluster version without having to check the kubelet configuration inside a deployed cluster node? I don't want to deploy a cluster just to check this.

I've checked both the GKE and EKS changelogs and docs, but I couldn't find a list of enabled/disabled feature gates.

Thanks in advance!
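
Not a full answer, but a related partial check: on a cluster that already exists, the kubelet's effective configuration can be read through the API server proxy with `kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz"`, so there is no need to shell into the node. The caveat is that the featureGates map there normally lists only explicitly overridden gates, not version defaults, so a gate that is simply enabled by default may not appear. An assumed, illustrative shape of that response:

```yaml
# Illustrative shape of the kubelet /configz response (served as JSON,
# shown here as YAML). The gate name is a placeholder, not a statement
# about GKE or EKS defaults.
kubeletconfig:
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  featureGates:
    SomeBetaFeature: true
```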

I installed K3s for some hobby projects over the weekend and, so far, I have been very impressed with it.

This got me thinking that it could be a nice, cheap alternative to setting up an EKS cluster on AWS -- something I found to be both expensive and painful for the availability that we needed.

Is anybody using K3s in production? Is it OK under load? How have upgrades and compatibility been?
