this post was submitted on 10 Oct 2023
42 points (76.9% liked)

[–] [email protected] 58 points 1 year ago (2 children)

You can’t run vmalert without flags

Running grep without parameters is also pretty fucking useless.

500 words into the over-3,000-word dump, I gave up.

Claims to have a Unix background, doesn't RTFM.

Nobody really uses Kubernetes for day-to-day work, and it shows. Where UNIX concepts like files and pipes exist from OS internals up to interaction by actual people, cloud-native tooling feels like it’s meant for bureaucrats in well-paid jobs.

Translation: Author does not understand APIs.

Want an asynchronous, hierarchical, recursive, key-value database? With metadata like modified times and access control built-in? Sounds pretty fancy! Files and directories.

Ok. Now give me high availability, atomic writes to sets of keys, caching, access control...

I’m ashamed enough that I can’t really apply to these jobs

This reads as "I applied to the jobs and got rejected. There's nothing wrong with me, so the jobs must be broken".

[–] [email protected] 31 points 1 year ago (2 children)

Nobody really uses Kubernetes for day-to-day work, and it shows.

Wat.

[–] [email protected] 7 points 1 year ago (1 children)

Literally copied and pasted that from the article.

[–] [email protected] 23 points 1 year ago

I know. I'm responding to the absurdity of it.

[–] purelynonfunctional 1 points 1 year ago

It seems pretty clear to me what this means. Unlike, say, a GNU/Linux command line environment, Kubernetes is not a lived-in environment. Certain kinds of environments (at least in the free software world) naturally accrue small conveniences just because they are used for basic things like navigating the filesystem, communicating with others, writing the text that one spends 40+% of their day on, etc.

Kubernetes just isn't such an environment. For most people nowadays, neither is a mainframe (although it once was).

Without the natural pressures of habitation, a computing environment can retain certain kinds of sharp edges much longer.

That said, we may not agree with the author's idea of the coziness of Unix. But what he's getting at there, the claim itself, is perfectly clear.

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (3 children)

Running grep without parameters is also pretty fucking useless.

The difference is grep is a simple tool that can take in text, transform it, and output it to a console. It operates in a powerful and easy to understand way by default (take in text and print lines in the text containing the search parameters). This vmalert tool is just an interface to another, even more complicated piece of software.
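
To put that default concretely: grep's out-of-the-box behaviour is small enough to sketch in a few lines of Go (a toy substring matcher, not GNU grep's actual implementation; the program name togrep is made up):

    // togrep: a toy version of grep's default mode - read text on stdin,
    // print every line containing the fixed pattern given as the argument.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: togrep pattern")
            os.Exit(2)
        }
        pattern := os.Args[1]
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            if line := scanner.Text(); strings.Contains(line, pattern) {
                fmt.Println(line) // default output: the matching line itself
            }
        }
        if err := scanner.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "togrep:", err)
            os.Exit(1)
        }
    }

No flags needed for the common case - that's the kind of default vmalert lacks.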

Claims to have a Unix background, doesn't RTFM.

Since when do Unix tools output 3,000-word-long usage info? Even GNU tools don't come close...

Translation: Author does not understand APIs.

The point is that these abstractions do not mesh with the rest of the system. HTTP and REST are very strange ways to accomplish IPC or networked communication on Unix when someone would normally accomplish the same thing with signals, POSIX IPC, a simpler protocol over TCP with BSD sockets, or any other thing already in the base system. It does make sense to develop things this way, though, if you're a corpo web company trying to manage ad-hoc grids of Linux systems for your own profit rather than trying to further the development of the base system.
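
To make "a simpler protocol over TCP with BSD sockets" concrete, here is a minimal sketch in Go; the one-command PING/PONG protocol and the port are invented for illustration:

    // lineserver: a line-oriented TCP service - one request per line,
    // one reply per line. The PING/PONG "protocol" is purely illustrative.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:9000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                s := bufio.NewScanner(c)
                for s.Scan() {
                    if s.Text() == "PING" {
                        fmt.Fprintln(c, "PONG")
                    } else {
                        fmt.Fprintln(c, "ERR unknown command")
                    }
                }
            }(conn)
        }
    }

Everything here is in the base system's vocabulary: sockets, lines of text, one process.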

Ok. Now give me high availability

I would hope the filesystems you use are "high availability" lol

atomic writes to sets of keys

You're right, that would be nice. Someone should put together a Plan 9 fileserver that can do that or something.

caching, access control

Plan 9 is capable of handling distributed access controls and caching (even of remote fileservers!). There's probably some Linux filesystems that can do that too.

In the end, it's not so much about specific tools that can accomplish this but that there are alternatives to the dominant way of doing things and that the humble file metaphor can still represent these concepts in a simpler and more robust way.

This reads as "I applied to the jobs and got rejected. There's nothing wrong with me, so the jobs must be broken".

This is maybe the worst way of interpreting what they said. They can come and correct me if I'm wrong, but I read that as: they have a particular ideological objection to this "cloud" ecosystem and the way it does things. It's not a lack of skill as your comment implies but rather a rejection of this way of doing things.

[–] [email protected] 13 points 1 year ago (2 children)

Since when do Unix tools output 3,000-word-long usage info? Even GNU tools don't come close...

man bash clocks in at about 43,000 words, just FYI

[–] zlatko 5 points 1 year ago

Since when do Unix tools output 3,000-word-long usage info? Even GNU tools don’t come close…

[zlatko@dilj ~/Projects/galactic-bloodshed]$ man grep | wc -w
4297
[zlatko@dilj ~/Projects/galactic-bloodshed]$ man man | wc -w
4697
[zlatko@dilj ~/Projects/galactic-bloodshed]$ 
[–] [email protected] 5 points 1 year ago (1 children)

True, but a man page is a different thing from a tool's built-in usage information.

[–] [email protected] 8 points 1 year ago (2 children)

I would disagree, or rather: it depends. You can print the --help of bash, but will that actually tell you anything about bash except a really superficial subset of flags? In the same way that the author argues that the help of his tool is too long to be useful, the help of bash is too short for the same reason. He argues that "cloud tools have a gazillion options where UNIX tools have good defaults". Bash has a gazillion options and no good defaults. As a matter of fact, bash on defaults is fairly dangerous. Yet it is at the heart of most Unix systems today, I'd argue.

[–] [email protected] 3 points 1 year ago

Yeahh, you have a good point lol. Bash and the GNU ecosystem have developed their own sprawling problems.

[–] [email protected] 2 points 1 year ago

Definitely depends, yeah. bash is a huge piece of software that - for me - feels a bit out of place in other systems closer to original unix. Interesting ones are rc and even plain old /bin/sh provided by something like busybox.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (2 children)

This vmalert tool is just an interface to another, even more complicated piece of software.

Not really just an interface. It is a pluggable service that connects to one or more TSDBs, performs periodic queries, and notifies another service when certain thresholds are exceeded. So with all those configuration options, why is the standalone binary expected to have defaults that may sound sane on one system but insane on a different one? If the author wants out-of-the-box configuration they could have gotten the Helm chart or the operator and then that would be taken care of. But they seem to be deathly allergic to yaml, so I guess that won't happen.
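
For context, the general shape of such a service is roughly the following (a generic Go sketch, not vmalert's actual code; the query URL, notify URL, and threshold are all invented):

    // alertloop: the generic shape of an alerting service - poll a metrics
    // backend on a schedule, compare against a rule, notify another service.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "strings"
        "time"
    )

    // Hypothetical endpoints and rule; real systems make these configurable,
    // which is exactly where all those flags come from.
    const (
        queryURL  = "http://localhost:8428/query?q=error_rate"
        notifyURL = "http://localhost:9093/notify"
        threshold = 0.05
    )

    func main() {
        for range time.Tick(30 * time.Second) {
            resp, err := http.Get(queryURL)
            if err != nil {
                log.Print(err)
                continue
            }
            var value float64
            _, err = fmt.Fscan(resp.Body, &value)
            resp.Body.Close()
            if err != nil {
                log.Print(err)
                continue
            }
            if value > threshold {
                // A real alert would carry labels, severity, dedup keys...
                r, err := http.Post(notifyURL, "text/plain",
                    strings.NewReader("error_rate above threshold"))
                if err != nil {
                    log.Print(err)
                    continue
                }
                r.Body.Close()
            }
        }
    }

Which backend to query, how often, against what rule, and whom to notify: none of those have a sane universal default.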

Since when do Unix tools output 3,000-word-long usage info? Even GNU tools don't come close...

You just said that this software was much more complex than Unix tools. Also if only there were alternate documentation formats....

HTTP and REST are very strange ways to accomplish IPC or networked communication on Unix when someone would normally accomplish the same thing with signals, POSIX IPC, a simpler protocol over TCP with BSD sockets, or any other thing already in the base system.

Until you need authentication, out of the box libraries, observability instrumentation, interoperability... which can be done much more easily with a mature communication protocol like HTTP. And for those chasing the bleeding edge there's gRPC.
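
To illustrate the out-of-the-box point: with a mature HTTP stack, authentication and basic observability are a few lines of composable middleware. A sketch in Go (the hard-coded bearer token is a placeholder, not a real auth scheme):

    // httpmw: auth and request logging as composable HTTP middleware.
    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func withAuth(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Placeholder check; a real service would verify a signed token.
            if r.Header.Get("Authorization") != "Bearer secret-token" {
                http.Error(w, "unauthorized", http.StatusUnauthorized)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func withLogging(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            next.ServeHTTP(w, r)
            log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
        })
    }

    func main() {
        hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        })
        http.Handle("/", withLogging(withAuth(hello)))
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }

Try getting the same leverage out of raw signals or SysV message queues.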

I would hope the filesystems you use are "high availability" lol

They're not, and I'm disappointed that you think they are. Any individual filesystem is a single point of failure. High availability lets me take down an entire system with zero service disruption because there's redundancy, load balancing, disaster recovery...

the humble file metaphor can still represent these concepts

They can, and they still do... Inside the container.

It's not a lack of skill as your comment implies but rather a rejection of this way of doing things.

Which I understand, I honestly do. I rejected containers for a (relatively) long time myself, and the argument that the author is making echoes what I would have said about containers. Which is why I believe myself to be justified in making the argument that I did, because rejecting a way of doing things based on preconception is a lack of flexibility, and in cloud ecosystems that translates to a lack of skill.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

You just said that this software was much more complex than Unix tools.

That's the problem. The reason Unix became so popular is because it has a highly integrated design and a few very reused abstractions. A lot of simple parts build up in predictable ways to accomplish big things. The complexity is spread out and minimized. The traditional Unix way of doing things is definitely very outdated though. A modern Unix system is like a 100 story skyscraper with the bottom 20 floors nearly abandoned.

Kubernetes and its users would probably be happier if it was used to manage a completely different operating system. In the end, Kubernetes is trying to impose a semi-distributed model of computation on a very NOT distributed operating system to the detriment of system complexity, maintainability, and security.

Until you need authentication, out of the box libraries, observability instrumentation, interoperability... which can be done much more easily with a mature communication protocol like HTTP.

I agree that universal protocols capable of handling these things are definitely useful. This is why the authors of Unix moved away from communication and protocols that only function on a single system when they were developing Plan 9 and developed the Plan 9 Filesystem Protocol as the universal system "bus" protocol capable of working over networks and on the same physical system. I don't bring this up to be an evangelist. I just want to emphasize that there are alternative ways of doing things. 9P is much simpler and more elegant than HTTP. Also, many of the people who worked on Plan 9 ended up working for Google and having some influence over the design of things there.

They're not, and I'm disappointed that you think they are. Any individual filesystem is a single point of failure. High availability lets me take down an entire system with zero service disruption because there's redundancy, load balancing, disaster recovery...

A filesystem does not exclusively mean an on-disk representation of a tree of files with a single physical point of origin. A filesystem can be just as "highly available" and distributed as any other way of representing resources of a system if not more so because of its abstractness. Also, you're "disappointed" in me? Lmao

They can, and they still do... Inside the container.

And how do you manage containers? With bespoke tools and infrastructure removed from the file abstraction. Which is another way Kubernetes is removed from the Unix way of doing things. Unless I'm mistaken (it's been a long time since I touched Kubernetes).

because rejecting a way of doing things based on preconception is a lack of flexibility

It's not a preconception. They engaged with your way of doing things and didn't like it.

in cloud ecosystems that translates to a lack of skill.

By what standard? The standard of you and your employer? In general, you seem to be under the impression that the conventional hegemonic corporate "cloud" way of doing things is the only correct way and that everyone else is unskilled and not flexible.

I'm not saying that this approach doesn't have merits, just that you should be more open-minded and not judge everyone else seeking a different path to the conventional model of cloud/distributed computing as naive, unskilled people making "bad-faith arguments".

[–] [email protected] 0 points 1 year ago (1 children)

A lot of simple parts build up in predictable ways to accomplish big things. The complexity is spread out and minimized.

This has always felt untrue to me. The command line has always been simple parts. But we can't argue that this applies to everything in Unix-like systems: the monolithic Linux kernel, Kerberos, httpd, SAMBA, X windowing, heck even OpenSSL. There are many examples of tooling built on top of Unix systems that don't follow that philosophy.

The traditional Unix way of doing things is definitely very outdated though.

Depends on what you mean. "Everything is a file"? Sure, that metaphor can be put to rest. "Low coupling, high cohesion"? That's even more valid now for cloud architectures. You cannot scale a monolith efficiently these days.

In the end, Kubernetes is trying to impose a semi-distributed model of computation on a very NOT distributed operating system to the detriment of system complexity, maintainability, and security.

Kubernetes is more complex than a single Unix system. It is less complex than manually configuring multiple systems to give the same benefits of Kubernetes in terms of automatic reconciliation, failure recovery, and declarative configuration. This is because those three are first class citizens in Kubernetes, whereas they're just afterthoughts in traditional systems. This also makes Kubernetes much more maintainable and secure. Every workload is containerized, every workload has predeclared conditions under which it should run. If it drifts out of those parameters Kubernetes automatically corrects that (when it comes to reconciliation) and/or blocks the undesirable behaviour (security). And Kubernetes keeps an audit trail for its actions, something that again in Unix land is an optional feature.

If you work with the Kubernetes model then you spend 10% more time setting things up and 90% less time maintaining things.
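
The "first class citizen" part is the control loop: continuously compare declared state against observed state and correct the drift. A toy illustration of that pattern in Go (nothing like Kubernetes' actual controller machinery; the "cluster" here is just an integer):

    // reconcile: the control-loop pattern reduced to a toy - keep the
    // observed replica count converging on the declared (desired) one.
    package main

    import (
        "log"
        "time"
    )

    type state struct {
        desired  int // declared configuration
        observed int // what is actually running
    }

    func reconcile(s *state) {
        switch {
        case s.observed < s.desired:
            s.observed++ // "start" a replica
            log.Printf("started replica, now %d/%d", s.observed, s.desired)
        case s.observed > s.desired:
            s.observed-- // "stop" a replica
            log.Printf("stopped replica, now %d/%d", s.observed, s.desired)
        }
    }

    func main() {
        s := &state{desired: 3}
        for range time.Tick(time.Second) {
            reconcile(s) // drift in either direction gets corrected
        }
    }

On a traditional system, that loop is you, at 3am, with a pager.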

9P is much simpler and more elegant than HTTP

It also has negligible adoption compared to HTTP. And unless it provides an order of magnitude advantage over HTTP, then it's going to be unlikely that developers will use it. Consider git vs mercurial. Is the latter better than git? Almost certainly. Is it 10x better? No, and that's why it finds it hard to gain traction against git.

A filesystem does not exclusively mean an on-disk representation of a tree of files with a single physical point of origin. A filesystem can be just as “highly available” and distributed as any other way of representing resources of a system if not more so because of its abstractness.

Even an online filesystem does not guarantee high availability. If I want highly available data I still need to have replication, leader election, load balancing, failure detection, traffic routing, and geographic distribution. You don't do those in the filesystem layer, you do them in the application layer.

Also, you’re “disappointed” in me? Lmao

Nice ad hominem. I guess it's rules for thee, but not for me.

And how do you manage containers? With bespoke tools and infrastructure removed from the file abstraction. Which is another way Kubernetes is removed from the Unix way of doing things. Unless I’m mistaken (it’s been a long time since I touched Kubernetes).

So what's the problem? Didn't you just say that the Unix way of doing things is outdated? Let the CSI plugin handle the filesystem side of things, and let Kubernetes focus on the workload scheduling and reconciliation.

It’s not a preconception. They engaged with your way of doing things and didn’t like it.

Dismissal based on flawed anecdote is preconception.

By what standard? The standard of you and your employer? In general, you seem to be under the impression that the conventional hegemonic corporate “cloud” way of doing things is the only correct way and that everyone else is unskilled and not flexible.

No. I'm not married to the "cloud" way of doing things. But if someone comes to me and says "Hey boblin, we want to implement something on system foo, can you help us?" and I am not used to doing things the foo way I will say "I'm not familiar with it but let's talk about your requirements, and why you chose foo" instead of "foo is for bureaucrats, I don't want to use it". I'd rather hire an open-minded junior than a gray-bearded Unix wizard that dismisses anything unfamiliar. And I will also be the first person to reject use cases for Kubernetes when they do not make sense.

just that you should be more open-minded and not judge everyone else seeking a different path to the conventional model of cloud/distributed computing as naive, unskilled people making “bad-faith arguments”.

There are scenarios where cloud compute just does not make sense, like HPC. If the author had led with something like that, then they would have made a better argument. But instead they went for

cloud-native tooling feels like it’s meant for bureaucrats in well-paid jobs

and

In the 90s my school taught us files and folders when we were 8 years old

and

When you finally specify all those flags, neatly namespaced with . to make it feel all so very organised, you feel like you’ve achieved something. Sunk-cost fallacy kicks in: look at all those flags that I’ve tuned just so - it must be robust and performant!

It's hard to not take that as bad faith.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

This has always felt untrue to me. The command line has always been simple parts. But we can't argue that this applies to everything in Unix-like systems: the monolithic Linux kernel, Kerberos, httpd, SAMBA, X windowing, heck even OpenSSL. There are many examples of tooling built on top of Unix systems that don't follow that philosophy.

I can see why you would come to think that if all you've been exposed to is Linux and its orbiting ecosystem. I agree with you that modern Unix has failed to live up to its ideals. Even its creators began to see its limitations in the late 80s and began to develop a whole new system from scratch.

Depends on what you mean. "Everything is a file"? Sure, that metaphor can be put to rest.

That was never true in the first place. Very few things under Unix are actually represented as files (though credit to Linux for pursuing this idea in kernel-space more than most). But Plan 9 shows us this metaphor is worth expanding and exploring in how it can accomplish being a reliable, performant distributed operating system with a fraction of the code required by other systems.

Kubernetes is more complex than a single Unix system. It is less complex than manually configuring multiple systems to give the same benefits of Kubernetes in terms of automatic reconciliation, failure recovery, and declarative configuration. This is because those three are first class citizens in Kubernetes, whereas they're just afterthoughts in traditional systems.

My point is Kubernetes is a hack (a useful hack!) to synchronize multiple separate, different systems in certain ways. It cannot provide anything close to something like a single system image and it can't bridge the discrete model of computation that Unix assumes.

This also makes Kubernetes much more maintainable and secure. Every workload is containerized, every workload has predeclared conditions under which it should run. If it drifts out of those parameters Kubernetes automatically corrects that (when it comes to reconciliation) and/or blocks the undesirable behaviour (security). And Kubernetes keeps an audit trail for its actions, something that again in Unix land is an optional feature.

All these features require a lot of code and complexity to maintain (the latest figure I can find is almost 2 million lines of code as of 2018). Ideally, Kubernetes is capable of what you said, in the same way that ideally programs can't violate Unix filesystem DAC or other user permissions, but in practice every line of code is another opportunity for something to go wrong...

Just because something has more security features doesn't mean it's actually secure. Or that it's maintainable without a company with thousands of engineers and tons of money maintaining it for you. Keeping you in a dependent relationship.

It also has negligible adoption compared to HTTP. And unless it provides an order of magnitude advantage over HTTP, then it's going to be unlikely that developers will use it. Consider git vs mercurial. Is the latter better than git? Almost certainly. Is it 10x better? No, and that's why it finds it hard to gain traction against git.

So? I don't expect many of these ideas will be adopted in the mainstream under the monopoly-capitalist market system. It's way more profitable to keep selling support to manage sprawling and complex systems that require armies of software engineers to upkeep. I think if state investment or public research in general becomes relevant again maybe these ideas will be investigated and adopted for their technical merit.

Even an online filesystem does not guarantee high availability. If I want highly available data I still need to have replication, leader election, load balancing, failure detection, traffic routing, and geographic distribution. You don't do those in the filesystem layer, you do them in the application layer.

"Highly available" is carrying a lot of weight there lol. If we can move some of these qualities into a filesystem layer (which is a userspace application on some systems) and get these benefits for free for all data, why shouldn't we? The filesystem layer and application layer are not 2 fundamentally separate unrelated parts of a whole.

Nice ad hominem. I guess it's rules for thee, but not for me.

Lol, stop being condescending and I won't respond in kind.

So what's the problem? Didn't you just say that the Unix way of doing things is outdated?

I think the reason the Unix way of doing things is outdated is cuz it didn't go far enough!

Dismissal based on flawed anecdote is preconception.

What? lol

It's not a flawed anecdote or a preconception. They had their own personal experience with a cloud tool and didn't like it.

You can't smuglord someone into liking something.

I'd rather hire an open-minded junior than a gray-bearded Unix wizard that dismisses anything unfamiliar.

I'm not a gray-bearded Unix wizard and I'm not dismissing these tools because they're unfamiliar. I have technical criticism of them and their approach. I think the OP feels the same way.

The assumption among certain computer touchers is that you can't use Kubernetes or "cloud" tools and not come away loving them. So if someone doesn't like them they must not really understand them!

It's hard to not take that as bad faith.

They probably could've said it nicer. It's still no excuse to dismiss criticism because you didn't like the tone.

I think Kubernetes has its uses, for now. But it's still a fundamentally limited and harmful (because of its monopolistic maintainers/creators) way to do a kind of distributed computing. I don't think anyone is coming for you to take your Kubernetes though...

[–] [email protected] 1 points 1 year ago

My point is Kubernetes is a hack (a useful hack!) to synchronize multiple separate, different systems in certain ways. It cannot provide anything close to something like a single system image and it can’t bridge the discrete model of computation that Unix assumes.

Kubernetes is not intended to provide anything like a single system image. It's a workload orchestration system, not an operating system. Given a compatible interface (a runtime) Kubernetes can in theory distribute workloads to any OS.

All these features require a lot of code and complexity to maintain (the latest figure I can find is almost 2 million lines of code as of 2018). Ideally, Kubernetes is capable of what you said, in the same way that ideally programs can’t violate Unix filesystem DAC or other user permissions, but in practice every line of code is another opportunity for something to go wrong…

Just because something has more security features doesn’t mean it’s actually secure. Or that it’s maintainable without a company with thousands of engineers and tons of money maintaining it for you. Keeping you in a dependent relationship.

I'm not going to argue that Kubernetes is not complex. But as I stated previously, Kubernetes as a bespoke ecosystem is less complex than configuring the same features with decoupled systems. The requirements for an orchestrator and the challenges (technical, security, human, etc) to manage said orchestrator are higher. All else being equal, Kubernetes has implemented this in a very lean way, delegating networking, storage, and runtime to pluggable providers on the left, and delegating non-basic workload aspects to operators on the right. It's this extensibility that makes it both popular with operators and makes it appear daunting to a layperson. And going back to security, it has provably shown to have a reduced attack surface when managed by a competent operator.

So? I don’t expect many of these ideas will be adopted in the mainstream under the monopoly-capitalist market system. It’s way more profitable to keep selling support to manage sprawling and complex systems that require armies of software engineers to upkeep. I think if state investment or public research in general becomes relevant again maybe these ideas will be investigated and adopted for their technical merit.

So you're... what, dismissing HTTP because it has been adopted by capitalist market systems? Are you going to dismiss the Fediverse for using HTTP? What about widely adopted protocols? DNS, BGP, IPv4/6, etc?

How about we bring this part of the discussion back to the roots? You said that HTTP and REST as communication protocols seemed strange to you because Unix has other primitives. I pointed out that those primitives do not address many modern client-server communication requirements. You did not refute that, but you said, and I paraphrase "9P did it better". I refrain from commenting on that because there's no comparative implementation of complex Internet-based systems in 9P. I did state though that even if 9P is superior, as you claim, it did not win out in the end. There's plenty of precedents for this: Betamax-VHS, git-mercurial, etc.

“Highly available” is carrying a lot of weight there lol. If we can move some of these qualities into a filesystem layer (which is a userspace application on some systems) and get these benefits for free for all data, why shouldn’t we? The filesystem layer and application layer are not 2 fundamentally separate unrelated parts of a whole.

(My emphasis) It's not free though. There's an overhead for doing this, and you end up doing things in-filesystem that have no business being there.

It’s not a flawed anecdote or a preconception. They had their own personal experience with a cloud tool and didn’t like it.

*Ahem*:

"Nobody really uses Kubernetes for day-to-day work, and it shows."

That is not an experience, it's a provably wrong statement.

The assumption among certain computer touchers is that you can’t use Kubernetes or “cloud” tools and not come away loving them. So if someone doesn’t like them they must not really understand them!

That's a very weird assumption, and it's the first time I've heard it. Can you provide a source? Because in my experience the opposite is the case - there's no community more critical of Kubernetes' flaws than their developers/users themselves.

They probably could’ve said it nicer. It’s still no excuse to dismiss criticism because you didn’t like the tone.

I dismissed the criticism because it makes an appeal to pathos, not to logos. Like I said, there's plenty of valid technical criticisms of Kubernetes, and even an argument on the basis of ethics (like you're making) is more engaging.

I think Kubernetes has its uses, for now. But it’s still a fundamentally limited and harmful (because of its monopolistic maintainers/creators) way to do a kind of distributed computing. I don’t think anyone is coming for you to take your Kubernetes though…

Not my Kubernetes. I use it because it's academically interesting, and because it does the tasks it is meant to do better than most alternatives. But if CNCF were to implode today and Kubernetes became no longer practical to use then I would just pivot to another system.

I'm not going to argue whether it's a harmful way of doing distributed computing based on its maintainers/pedigree. That's a longer philosophical discussion than I suspect either you or I have time for.

[–] [email protected] 6 points 1 year ago (1 children)

You just said that this software was much more complex than Unix tools

Probably need to keep in mind incidental versus essential complexity here.

So with all those configuration options, why is the standalone binary expected to have defaults that may sound sane on one system but insane on a different one?

Because this is how much of what we use already is implemented. Significant effort goes into portability, interoperability and balancing compromises. When I'm doing software development e.g. writing HTTP APIs (about which I apparently know nothing ;) ) - I feel like I've got a responsibility to carefully balance what I expose as some user-configurable thing versus something managed internally by the application. Sometimes, thankfully, the application doesn't even have to think about it at all - like what TCP flags to set when I dial some service.
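
Concretely, dialing a service looks like this in Go; nobody sets TCP flags, socket options or keep-alives because the defaults are sane (example.com is just a placeholder):

    // dial: everything below the URL - handshake, socket options,
    // keep-alives, connection pooling - is handled by library defaults.
    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        resp, err := http.Get("http://example.com/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body) // print the response body
    }

That's the bar I try to hold my own software to.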

You bring up containers which is a great example of some cool features provided by the Linux kernel to solve interesting problems. If you're interested, have a look at FreeBSD's Jails, Plan 9 and LXC. Compare the interface to all these systems, both at the library level and userspace, and compare the applications developed using those systems. How easy is it to get going? How much do I need to keep in my head when using these features? Docker, Kubernetes, and the rest all have made different tradeoffs and compromises.

Another one I think about is SQLite. Some seriously clever smarts. Huge numbers of people don't know anything about for-loops, C, or B-Trees but can read & write SQL. That's technology at its best.

Consider how difficult it could be to, say, start a car in all the different operating conditions it is expected to be used in. But we never think about it.

We as tech people pride ourselves on familiarity with esoteric detail, but it doesn't need to be like this. Nor does memorising it all have anything to do with "skill".

What I'm struggling with are thoughts of significant vested commercial interest in exposing this kind of detail, fuelling multi-billion dollar service industries. Feelings of being an outsider despite understanding how it all fits together.

It is a pluggable service that connects to one or more TSDBs, performs periodic queries, and notifies another service when certain thresholds are exceeded.

Have you ever written this kind of software before?

It sounds like you are comfortable with the status quo of this part of the software industry, and I'm truly jealous! If you've got any tips on dealing with this kind of stuff you can find my email at https://www.olowe.co/about.html Thanks :)

[–] [email protected] 0 points 1 year ago

Probably need to keep in mind incidental versus essential complexity here.

Go on...

Because this is how much of what we use already is implemented. Significant effort goes into portability, interoperability and balancing compromises. When I’m doing software development e.g. writing HTTP APIs (about which I apparently know nothing ;) ) - I feel like I’ve got a responsibility to carefully balance what I expose as some user-configurable thing versus something managed internally by the application. Sometimes, thankfully, the application doesn’t even have to think about it at all - like what TCP flags to set when I dial some service.

In the case of vmalert, the binary makes no assumptions as to default behaviour because it was not meant to be run standalone. It comes as part of a container with specific environment variables, which in turn is packaged as a Helm chart that has sane configuration. Taking the vmalert binary by itself is like taking a Kerberos server binary without its libraries and config files in /etc and complaining that it's not working.

You bring up containers which is a great example of some cool features provided by the Linux kernel to solve interesting problems. If you’re interested, have a look at FreeBSD’s Jails, Plan 9 and LXC. Compare the interface to all these systems, both at the library level and userspace, and compare the applications developed using those systems. How easy is it to get going? How much do I need to keep in my head when using these features? Docker, Kubernetes, and the rest all have made different tradeoffs and compromises.

I am very well versed in jails, chroot, openvz, LXC, etc. OCI containers are in a different class - don't think of them as an OS-like environment, think of them as a self-contained, packaged service. Docker is then one example of a runtime on which those services run, and Kubernetes is an orchestrator that manages containers across runtimes. And yes, there are some tradeoffs and compromises, but those are well within the bounds of the Pareto principle - remove the 10% long tail of features on the host, reduce user-facing complexity by 90%.

Another one I think about is SQLite. Some seriously clever smarts. Huge numbers of people don’t know anything about for-loops, C, or B-Trees but can read & write SQL. That’s technology at its best.

Are you arguing that Kubernetes doesn't do that for you? Because with Kubernetes I can say "run the service in this container with these settings and so many replicas", attach some conditions like "stop sending traffic to any one container that takes longer than N seconds to respond" and "restart the container if a certain command returns an error", and just let it run. I can do a rolling upgrade of the nodes and Kubernetes will reschedule the containers on any other available node, it can load balance traffic, I can update the spec of a deployment and Kubernetes will do a zero-downtime upgrade for me. Try implementing the same on a Unix system. You'd need a way to push configs (Ansible, Puppet, etc?). You need load balancing and leader election (Keepalived?). You need error detection. You need DNS. You need to run the services. You need to ensure there's no library conflict. There's a LOT of complexity that a Kubernetes user does not need to worry about any more. Tell me that's not serious smarts and technology at its best.
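
And the application's half of that contract is usually tiny: a health endpoint for the probes to poll. A minimal sketch in Go (the /healthz path is conventional; the readiness condition here is invented):

    // healthz: a trivial endpoint of the kind liveness/readiness probes poll.
    package main

    import (
        "log"
        "net/http"
        "sync/atomic"
    )

    var ready atomic.Bool // flipped once startup work completes

    func main() {
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            if !ready.Load() {
                http.Error(w, "starting up", http.StatusServiceUnavailable)
                return
            }
            w.Write([]byte("ok\n"))
        })
        ready.Store(true) // pretend startup finished immediately
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The orchestrator does the rest: routing traffic away, restarting, rescheduling.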

What I’m struggling with are thoughts of significant vested commercial interest in exposing this kind of detail, fuelling multi-billion dollar service industries. Feelings of being an outsider despite understanding how it all fits together.

You seem to be conflating Kubernetes and cloud services. Being a cloud native technology does not mean it has to run on a managed cloud service. It just means that it has certain expectations as to how workloads run on it, and if those expectations are met then it makes certain promises about how it will behave.

Have you ever written this kind of software before?

I have contributed to several similar open source projects, yes. What about it?

It sounds like you are comfortable with the status quo of this part of the software industry, and I’m truly jealous!

I am comfortable with my knowledge of this part of the software industry. There is no status quo - there's currently an equilibrium, yes, but it is a tenuous one. I know the tools I use today will likely not be the same tools I will be using a decade from now. But I also know that the concepts and architectures I learn from managing these tools will still be applicable then, and I can stay agile enough to adapt and become comfortable in a new ecosystem. I would urge you to consider the same approach for yourself.

[–] [email protected] 5 points 1 year ago (2 children)

This is more along the lines I was thinking.

I think the parent comment went ad hominem rather than trying to understand some of the difficulties I brought up. I'm not sure whether engaging with them would be productive.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (2 children)

I'm glad I at least got closer to understanding your criticism than they did.

Don't let anyone tell you you're old or naive or "stuck in the past" for thinking these things! There is a real crisis in the operating systems world that your criticism is reflecting. It takes an army of software engineers and billions of dollars to keep this ecosystem and these systems going and they still struggle with reliability and security. The reason it's like this is an issue of economic organization.

We can't go back to the old way of doing things but we can't keep maintaining these fundamentally flawed systems either. You may find something inspiring in this brief presentation by Rob Pike: http://doc.cat-v.org/bell_labs/utah2000/

[–] [email protected] 3 points 1 year ago (1 children)

We can't go back to the old way of doing things but we can't keep maintaining these fundamentally flawed systems either.

That's a great way of putting it, thanks. I'm actually only 30 years old (lol). Sometimes I feel there's so few people who've ever used or written software at this level in the part of the industry I find myself in. It seems more common to throw money at Amazon, Microsoft, and more staff.

I've replaced big Java systems with small Go programs and rescued stalled projects trying to adopt Kubernetes. My fave was a failed attempt to adopt k8s for fault tolerance when all that was going on was an inability to code around TCP resets (concurrent programming helped here). That team wasn't "unskilled"; they were just normal people being crushed by complexity. I could help because they just weren't familiar with the kind of problem solving I was, nor what tooling is available without installing extra stuff and dependencies.

Thanks for your understanding :)

[–] [email protected] 4 points 1 year ago

That's a great way of putting it, thanks. I'm actually only 30 years old (lol).

Yeahh, and I saw someone compare you to the "old man yelling at cloud" lol. Even though there are good reasons to yell at the cloud hehe

Sometimes I feel there's so few people who've ever used or written software at this level in the part of the industry I find myself in. It seems more common to throw money at Amazon, Microsoft, and more staff.

I've replaced big Java systems with small Go programs and rescued stalled projects trying to adopt Kubernetes. My fave was a failed attempt to adopt k8s for fault tolerance when all that was going on was an inability to code around TCP resets (concurrent programming helped here). That team wasn't "unskilled"; they were just normal people being crushed by complexity. I could help because they just weren't familiar with the kind of problem solving I was, nor what tooling is available without installing extra stuff and dependencies.

I haven't had the "privilege" of working for a wage in the industry (and I still don't know if I want to) but I think I know what you mean. I've seen this kind of tendency even in my friends who do work in it. There is less and less of a focus on a whole-system kind of understanding of this technology in favor of an increased division of labor to make workers more interchangeable. Capitalists don't want people with particular approaches capable of complex problem-solving and elegant solutions to problems; they want easily-replaceable code monkeys who can churn out products. Perhaps there is a parallel here with what happened to small-scale artisan producers of commodities in early capitalism as they were wiped out and absorbed into manufactories and forced to do ever-increasingly small and repetitive tasks as part of the manufacture of something they once produced from scratch to final product in a whole process. Especially concerning is the increasing use of AI by employed programmers. Well, usually their companies forcing them to use AI to try to automate their work.

And like you gave an example of, this has real bad effects on the quality of the product and the team that develops it. From the universities to the workplace, workers in this industry are educated in the virtues of object-oriented programming, encapsulation, tooling provided by the big tech monopolies, etc. All methods of trying to separate programmers from each other's work and the systems they work on as a whole and make them dependent on frameworks sold or open-sourced™ by tech monopolies at the expense of creative and free problem-solving.

Glad at least you were able to unstall some of the projects you've been involved in!

Thanks for your understanding :)

Glad we could share ideas :3

You and other people in the thread gave me a lot to think about. Hope this comment made some sense lol.

[–] [email protected] 2 points 1 year ago

I see the problem as arising from the visionary but less experienced newer developers (compared to the past generation) trying to fix a world that the "don't touch it if it works" crowd, who have seen it all, built, by putting each layer over the older one. It has all the capabilities, but there is no "single vision", no "well defined api".

Old established paradigms are being broken. Some conventions are forgotten; new tooling and perspectives are being built.

Sure, this means an unfortunate clash is happening.

I can't say if this is a better or wiser world or not; I can only say this is the way now. You can adapt, try to embrace and push things forward, or you can stay away and become one of the legendary COBOL developer crowd. We know they are out there in the wild, but we can't find them.

[–] [email protected] 3 points 1 year ago

I probably did go a bit ad hominem in my last paragraph. By the time I was done with the article I was very frustrated by what seemed to be some very bad faith arguments (straw man, false dilemma) that were presented.

[–] [email protected] 44 points 1 year ago (1 children)
[–] [email protected] 5 points 1 year ago (1 children)

I’m now 30 years old and I wonder what I’ll feel like after another 30 years :(

[–] [email protected] 23 points 1 year ago (1 children)

Eh, I wouldn't worry too much. I'm 48, and this rant still sounds like "old man yells at cloud" to me too.

It's not age, it's willingness to adapt.

[–] [email protected] 5 points 1 year ago

Exactly. 25 years ago I helped manage a Sun cluster. 20 years ago I was on a team that managed roughly 3000 Linux servers in a data center. We racked them, monitored them, wrote tools to configure & manage them, etc. Ten years ago I helped manage Linux systems that were physically managed by a hosting provider, and we never actually saw/touched any of the hardware.

Today I help manage hundreds of AWS instances and also use tools/services from providers like Splunk, Akamai, and others. I haven’t seen/touched a physical server in years. It’s now all virtually managed via web portals, API’s, tools like terraform, etc.

[–] [email protected] 26 points 1 year ago

"I used to be with ‘it’, but then they changed what ‘it’ was. Now what I’m with isn’t ‘it’ anymore and what’s ‘it’ seems weird and scary. It’ll happen to you!"

Literally old man yells at cloud

[–] [email protected] 10 points 1 year ago (2 children)

I swear I'm a decent coder, but fuck me if kubernetes and that whole ecosystem just confuses me

[–] [email protected] 15 points 1 year ago (1 children)

I am someone with kubernetes in my job title. If you as a developer are expected to know about kubernetes beyond containerizing your application then your company has set itself up for failure. As you aptly said kubernetes is an ecosystem, and the dev portion is a small niche of that.

[–] GarytheSnail 1 points 1 year ago

Wait, shouldn't the developer know how to scale their application, debug networking issues, etc?

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (3 children)

What makes DevOps so different from sysadmin? Recruiters always told me "it's nearly the same", but I never got the job, so I guess idk.

[–] [email protected] 7 points 1 year ago (1 children)

It's actually pretty distinct. DevOps refers to the mindset (or philosophy, if you will) of "you build it, you run it". It boils down to: you as a software developer are also responsible for packaging up your masterpiece, pushing it through CI, getting it deployed and making sure it keeps on running smoothly. It is designed to shift responsibilities away from the sysadmin to the developer.

The problem with this is that it's not a role or a job title, so recruiters and HR do not know how to work with it. Hence, they invented the DevOps "Role" because it sounds more modern. So in reality it's used as a marketing term most of the time. So when someone pitches you a DevOps job, this tells you a few things:

  • they don't know what they are talking about
  • the company behind the offer puts a lot of meaning into titles, which means things will likely be pretty hierarchical even though they claim it won't be
  • they'll likely try to pay you less than you're worth
[–] lysdexic 1 points 1 year ago

Hence, they invented the DevOps “Role” because it sounds more modern.

Not quite. Basically a DevOps role includes the responsibility of fixing pipelines and being paged in the middle of the night if something bad happens with the app.

Instead of paying a sysadmin and a developer who don't talk to each other, you dump all responsibilities onto one person and at least eliminate the finger-pointing between sysadmins and developers when something bad happens.

[–] [email protected] 7 points 1 year ago (1 children)

Recruiters lie my dude.

They are similar but with a strong knowledge set in different tools.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Makes sense I didn't get the job, I only vaguely know the difference and it was mostly theoretical stuff like CI/CD, but those recruiters really wanted to throw me at random interviews to see if I'd stick :D

PS: sorry I offtopic'ed to recruiter-hating, gonna go find a community for that.

[–] [email protected] 2 points 1 year ago

You good, homie. I shit on recruiters frequently. I have them hitting me up on LinkedIn all the time for in-person stuff when my profile actively says no in-person stuff.

[–] lysdexic 1 points 1 year ago

Recruiters always told me “it’s nearly the same”,

Usually recruiters know jack shit about what they are recruiting for. Their main responsibility is trying to sell the idea that recruiters play a relevant role in recruiting, when in tech they quickly pass the ball to anyone else to assess hard skills.

but I never got the job, so I guess idk.

The goal of some recruiters is to source candidates and thus line up a list for their paying customer to go through. Their goal is to pretend they have a long list of people ready to help, when all they have is your name on a sheet of paper.

[–] [email protected] 9 points 1 year ago

hating on k8s is very in vogue currently. simpler systems like ECS exist and are really good too.

anybody bitching too hard about the tools today isn’t remembering 10 years ago correctly.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

I've struggled with many of the same tools. What we need is real distributed operating systems, like Plan 9, not increasingly complicated hacks and kludges to keep old-world operating systems relevant in a networked world.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

“Cloud-native” software co-exists with corporate jargon. Both obscure and complicate in the interest of perpetuating lucrative contracts over productive environments.

“Cloud Engineers” get paid $150K+ to fiddle with these strings and make sure it’s all escaped/delimited correctly in YAML files. It’s a fucking mess. I’m ashamed enough that I can’t really apply to these jobs. Maybe writing and running software on servers in the commercial world is not a good fit for someone like me who despises corporate jargon.

This.

[–] [email protected] -1 points 1 year ago

love the blog! interesting stuff.