this post was submitted on 27 Mar 2025
481 points (95.1% liked)

Programmer Humor

 
top 50 comments
[–] [email protected] 51 points 4 weeks ago (11 children)

Sure thing, but what is a monad anyway?

[–] gnutrino 73 points 4 weeks ago

It's a monoid in the category of endofunctors. Obviously.

[–] [email protected] 56 points 4 weeks ago* (last edited 4 weeks ago) (3 children)

In practical terms, it's most commonly a code pattern where any function that interacts with something outside your code (database, filesystem, external API) is "given permission" so all the external interactions are accounted for. You have to pass around something like a permission to allow a function to interact with anything external. Kind of like dependency injection on steroids.

This allows the compiler to enhance the code in ways it otherwise couldn't. It also prevents many kinds of bugs. However, it's quite a bit of extra hassle, so it's frustrating if you're not used to it. The way you pass around the "permission" is unusual, so it gives a lot of people a headache at first.

This is also used for internal permissions, like grabbing the first element of an array. You only get permission if the array has at least one thing inside; if it's empty, you can't get permission. As such, there's a lot of code around checking for permission. Languages like Haskell or Unison have a lot of tricks that make it much easier than you'd think, but you still have to account for it. That's where you see all the weird functions in Haskell like fmap and >>=. They're helpers that make it easier to pass around those "permissions".
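For example, here's a rough Haskell sketch of the "permission to grab the first element" idea (safeHead, describeFirst, and firstThenHalve are just names made up for illustration):

  import Data.Maybe (fromMaybe)

  -- You only get a value back if the list actually has one
  safeHead :: [a] -> Maybe a
  safeHead []      = Nothing
  safeHead (x : _) = Just x

  -- fmap is one of the helpers for working "inside" that permission slip
  describeFirst :: [Int] -> String
  describeFirst xs = fromMaybe "empty list" (fmap show (safeHead xs))

  -- >>= chains two permission-needing steps without nested null checks
  firstThenHalve :: [Int] -> Maybe Int
  firstThenHalve xs = safeHead xs >>= \x -> if even x then Just (x `div` 2) else Nothing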

What's the point, you ask? There are all kinds of powerful performance optimizations available when you know a certain block of code never touches the outside world. You can split execution between different CPU cores, etc. This is still in its infancy, but new languages like Unison are breaking incredible ground here. As this is developed further, it will be much easier to build software that uses multiple cores or even multiple machines in distributed swarms, without having to build microservice hell. It'll all just be one program, but it runs across as many machines as needed. Monads are just one of the first features that needed to exist to allow these later features.
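For a small taste of that, here's a sketch using Haskell's parallel package: because the function is pure, the runtime is free to evaluate the calls on different cores without changing the answer.

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- A pure function: no outside-world access, so evaluation order is free
  expensive :: Int -> Integer
  expensive n = sum [1 .. fromIntegral n]

  -- parMap may spread these evaluations across cores; the result can't change
  results :: [Integer]
  results = parMap rdeepseq expensive [1000000, 2000000, 3000000]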

There's a whole math background to it, but I'm much more a "get things done" engineer than a "show me the original math that inspired this language feature" engineer, so I think of it more practically. Same way I explain functions as a way to group a bunch of related actions, and not as an implementation of lambda calculus. I think people who start talking about burritos and endofunctors are just hazing.

[–] sacredfire 18 points 4 weeks ago (2 children)

I don't know if this is correct, but if it is, this is the best answer to this question I've ever seen.

[–] [email protected] 17 points 4 weeks ago (1 children)

I'm sure someone will be like "um akchuly" to my explanation, but for me it's good enough to think of it that way.

I've worked in Haskell and F# for a decade, and added some of the original code to the Unison compiler, so I'm at least passingly familiar with the subject. Enough that I've had to explain it to new hires a bunch of times to get them up to speed. I find it easier to learn something when I'm given a practical use for it and shown how it solves that problem.

[–] [email protected] 8 points 4 weeks ago* (last edited 4 weeks ago)

Lovely response! Very cool to see Unison mentioned. Haskell and PureScript are my daily drivers, but I have a huge crush on Unison even though it intimidates me.

Ps. Unison doesn’t have monads. They are replaced by “abilities”.

[–] [email protected] 3 points 4 weeks ago* (last edited 4 weeks ago)

It is, although I'm not sure it's complete. A list is one kind of monad, despite working like immutable linked lists would in any other language. Lists just happen to behave monadically, providing an obvious and lawful interpretation of the monad functions. Going off the OP you might think monads are all Maybe.
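For example, the list's >>= is basically concatMap, which is all it takes to behave monadically:

  -- Each element of the first list is fed through the rest of the chain
  pairs :: [(Int, Char)]
  pairs = [1, 2] >>= \n -> ['a', 'b'] >>= \c -> return (n, c)
  -- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]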

I will say that the concept is overhyped at this point, at least in Haskell, and there are a lot of monads available that do what plain functional code could, but worse.

[–] [email protected] 2 points 4 weeks ago

Great explanation! Though I prefer to regard monads as semicolon simulators: a monad combines the actions that the "semicolons" separate. The combination can add exceptions, logging, multiple outputs, or whatever.
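Roughly, in Haskell terms (just a sketch of the desugaring):

  -- The "semicolons" between these do-notation lines...
  greet :: IO ()
  greet = do
    name <- getLine
    putStrLn ("hi " ++ name)

  -- ...are really the monad combining the actions with >>=
  greet' :: IO ()
  greet' = getLine >>= \name -> putStrLn ("hi " ++ name)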

[–] [email protected] 1 points 4 weeks ago

That's a good run down of the "why". The thing is, there's way more things that are monads than things that have to be looked at as monads. AFAIK it only comes up directly when you're using something like IO or State where the monad functions are irreversible.

From the compiler end, are there optimisations that make use of the monadic structure of, say, a list?

[–] [email protected] 24 points 4 weeks ago (1 children)
[–] SatouKazuma 14 points 4 weeks ago (1 children)
[–] [email protected] 7 points 4 weeks ago (1 children)
[–] SatouKazuma 5 points 4 weeks ago (1 children)

Idk. I tried a string comparison and rustc said equality was false. 😝

[–] [email protected] 3 points 4 weeks ago (1 children)

They're functionally the same.

[–] [email protected] 24 points 4 weeks ago

It's just a monoid object in a category of endofunctors, no biggie

[–] [email protected] 11 points 4 weeks ago

Only monad I know is xmonad. My favourite x11 window manager.

[–] [email protected] 8 points 4 weeks ago* (last edited 4 weeks ago)

Whatever Haskell programmers decide to call a monad today. It's wandered pretty far from whatever the mathematical definition is, despite insistences to the contrary.

(Technically, the requirement is to implement a few functions)

[–] [email protected] 8 points 4 weeks ago* (last edited 4 weeks ago)

A reproductive organ

[–] [email protected] 4 points 4 weeks ago

Not Mossad.

[–] silasmariner 4 points 4 weeks ago

It's a burrito

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago)

It's a container with certain behaviors and guarantees that make it easy and reliable to manipulate and compose. A practical example is a generic List that behaves like:

  • List[1, 2, 3], i.e. ("new", "unit", "wrap") to create, containing obj(s)
  • map(func) to transform objs inside, List[A] -> List[B]
  • first(), i.e. ("unwrap", "value") to get back the obj
  • flat_map(func), i.e. ("bind") to un-nest one level when func(a) itself produces another List, e.g. [3, 4].flat_map(get_divisors) == flatten_once([[1, 3], [1, 2, 4]]) == [1, 3, 1, 2, 4]
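Those four, roughly, in Haskell (a sketch; safeFirst and divisors are made-up names for illustration):

  xs :: [Int]
  xs = [1, 2, 3]                      -- "new" / "unit" / "wrap"

  doubled :: [Int]
  doubled = map (* 2) xs              -- map: List[A] -> List[B]

  safeFirst :: [a] -> Maybe a         -- "unwrap" without crashing on []
  safeFirst []      = Nothing
  safeFirst (x : _) = Just x

  divisors :: Int -> [Int]
  divisors n = [d | d <- [1 .. n], n `mod` d == 0]

  flatMapped :: [Int]
  flatMapped = [3, 4] >>= divisors    -- "bind" / flat_map: un-nests one level
  -- flatMapped == [1, 3, 1, 2, 4]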

Consider the code to do these things using for loops -- the "business logic" func() would be embedded and interlaced with flow control.

The same is true of Maybe, a monad to represent something or nothing, i.e. a "list" of at most one, i.e. a way to avoid "null".

Consider how quickly things get messy when there are multiple functions and multiple edge cases like empty lists or "null"s to deal with. In those cases, monads like List and Maybe really help clean things up.

IMO the composability really can't be overstated. "Composing" ten for loops via interlacing and if checks and nesting sounds like a nightmare, whereas a few LazyList and Maybe monads will be much cleaner.

Also, the distinction monads make between what's "inside" and what's "outside" makes them useful for representing and compartmentalizing scope and lifetimes, which is what monads like IO and Async are for.

[–] [email protected] 41 points 4 weeks ago (1 children)

Can't spell "functional" without "fun"!

[–] [email protected] 5 points 4 weeks ago

N is for No Surviiiivors, here in the deep blue sea!

[–] [email protected] 25 points 4 weeks ago (1 children)

Functional programmers still pretending side effects and real-world applications don't exist.

[–] expr 60 points 4 weeks ago (4 children)

As a senior engineer writing Haskell professionally, this just isn't really true. We just push side effects to the boundaries of the system and do as much logic and computation as possible in pure functions.

It's basically just about minimizing external touch points and making your code easier to test and reason about. Which, incidentally, is also good design in non-FP languages. FP programmers are just generally more principled about it.
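A tiny sketch of that shape (hypothetical names, not code from our codebase):

  -- Pure core: all the logic, trivially testable
  priceWithTax :: Double -> Double -> Double
  priceWithTax rate net = net * (1 + rate)

  -- Thin impure boundary: read input, run the pure code, print the result
  main :: IO ()
  main = do
    net <- readLn
    print (priceWithTax 0.2 net)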

[–] namingthingsiseasy 16 points 4 weeks ago

I've never had the chance to use a functional language in my work, but I have tried to use principles like these.

Once I had a particularly badly written Python codebase. It had all kinds of duplicated logic and data all over the place. I was asked to add an algorithm to it. So I just found the point where my algorithm had to go, figured out what input data I needed and what output data I had to return, and then wrote all the algorithm's logic in one clean, side effect-free module. All the complicated processing and logic was performed internally without side effects, and it did not have to interact at all with the larger codebase as a whole. It made understanding what I had to do much easier and relieved the burden of having to know what was going on outside.

These are the things functional languages teach you to do: to define boundaries, and do sane things inside those boundaries. Everything else that's going on outside is someone else's problem.

I'm not saying that functional programming is the only way you can learn something like this, but what made it click for me is understanding how Haskell provides the IO monad, yet recommends that you keep that functionality at as high a level as possible while keeping the lower-level internals pure and functional.

[–] [email protected] 5 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

It heavily depends on the application, right? Haskell is life for algorithmically generating or analysing data, but I'm not really convinced by the ways available in it to do interaction with users or outside systems. It pretty much feels like you're doing imperative code again just in the form of monads, after a while. Which is actually worse from a locality of ~~reference~~ behavior perspective.

[–] [email protected] 5 points 4 weeks ago* (last edited 4 weeks ago)

Not really, it's just good practice. You write your application in layers, and the outer layer/boundary is where you want your side effects: that outer layer takes the crazy effectful world and turns it sane with nice data types and type classes and whatnot, and then your inner layers operate on that. Data goes down the layers then back up, at least in my experience with functional projects in OCaml, F#, Clojure, and Haskell.

The real sauce is immutability by default/hard-to-do mutation. I love refs in OCaml and Clojure, so much better than mutation. Most of the benefits of FP are that and algebraic data types, in that order imo.

[–] expr 4 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

I'm not sure what you mean by "locality of reference". I assume you mean something other than the traditional meaning regarding how processors access memory?

Anyway, it's often been said (half-jokingly) that Haskell is a nicer imperative language than imperative languages. Haskell gives you control over what executing an "imperative" program actually means in a way that imperative languages don't.

To give a concrete example: we have a custom monad type at work that I'm simply going to call Transaction (it has a different name in reality). What it does is allow you to execute database calls inside of the same transaction (and can be arbitrarily composed with other code blocks of type Transaction while still being guaranteed to be inside of the same transaction), and any other side effects you write inside the Transaction code block are actually collected and deferred until after the transaction successfully commits, and are otherwise discarded. Very useful, and not something that's very easy to implement in imperative languages. In Haskell, it's maybe a dozen lines of code and a few small helper functions.
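Something in the spirit of this toy sketch (built on the transformers package; definitely not our actual implementation):

  import Control.Monad.Trans.Reader (ReaderT, runReaderT)
  import Control.Monad.Trans.Writer (WriterT, runWriterT, tell)

  data Connection = Connection   -- stand-in for a real DB connection type

  -- Everything in a Transaction shares one connection (the Reader part),
  -- and effects registered with afterCommit are collected (the Writer part)
  -- instead of running immediately.
  type Transaction a = WriterT [IO ()] (ReaderT Connection IO) a

  afterCommit :: IO () -> Transaction ()
  afterCommit eff = tell [eff]

  runTransaction :: Connection -> Transaction a -> IO a
  runTransaction conn tx = do
    -- a real version would BEGIN here and roll back on failure,
    -- discarding the deferred effects
    (result, deferred) <- runReaderT (runWriterT tx) conn
    -- ...COMMIT here, and only then fire the deferred effects
    sequence_ deferred
    pure result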

It also has a type system that is far, far more powerful than what mainstream imperative programming languages are capable of. For example, our API specifications are described entirely using types (using the servant library), which allows us to do things like statically generate API docs, type-check our API implementation against the specification (so our API handlers are statically guaranteed to return the response types they say they do), automatically generate type-safe API clients, and more.
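For anyone curious what "API specifications as types" looks like, a minimal servant-style sketch (the User type here is made up):

  {-# LANGUAGE DataKinds #-}
  {-# LANGUAGE TypeOperators #-}

  import Servant.API

  data User = User { userId :: Int, userName :: String }

  -- The endpoint is just a type: GET /users/:userId returning JSON.
  -- Handlers get type-checked against it; docs and clients can be generated from it.
  type UserAPI = "users" :> Capture "userId" Int :> Get '[JSON] User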

We have about half a million lines of Haskell in production serving as a web API backend powering our entire platform, including a mobile app, web app, and integrations with many third parties. It's served us very well.

[–] [email protected] 2 points 4 weeks ago

Excellent write-up. People who complain about Haskell and purely functional languages just don't understand them, I think. Take me, for example: I tried learning Haskell many years ago, and while I picked up so many new and incredibly useful concepts from that short adventure, concepts I use every day in my career, I just couldn't wrap my head around the more abstract ones, like monads. The feeling I got was that Haskell is a difficult language, but it's probably the terminology and the abstract mathematical concepts that are the real issue for me, because the syntax isn't really that complicated. Especially the way whitespace is used to call functions. I'm really sick of all the parentheses in other languages.

But for those who do understand all about functional programming, it seems to really enrich the way they write and maintain code, from what I've seen. People who dog on it just don't understand it (including me). Of course it's hard to maintain something you don't understand. But if you do understand it, it's easy to maintain. 🤷‍♂️ Seems logical.

What's next, where do we draw the line for what kind of code we're allowed to write? Should we stop introducing useful concepts in programming just because we risk losing maintainability when some devs won't learn them?

Life means change. Adapt. Learn new things. Expand the mind. Learn how to do things in a good way, and then do the things in that good way. Why stagnate just because we don't understand something? Better to learn a new thing and understand the better way than to dumb it down to a worse state just so we understand it.

Bah.

[–] [email protected] 2 points 4 weeks ago (2 children)

I’m not sure what you mean by “locality of reference”. I assume you mean something other than the traditional meaning regarding how processors access memory?

Shit! Sorry, got my wires crossed, I actually meant locality of behavior. Basically, if you're passing a monad around a bunch without sugar you can't easily tell what's in it after a while. Or at least I assume so, I've never written anything big in Haskell, just tons of little things.

To give a concrete example:

Yeah, that makes tons of sense. It sounds like Transaction is doing what a string might in another language, but just way more elegantly, which fits into the data generation kind of application. I have no idea how you'd code a game or embedded real-time system in a non-ugly way, though.

It also has a type system that is far, far more powerful than what mainstream imperative programming languages are capable of.

Absolutely. Usually the type system is just kind of what the person who wrote the language came up with. The Haskell system by contrast feels maximally precise and clear; it's probably getting close to the best way to do it.

[–] [email protected] 5 points 4 weeks ago

I'd love to work on a codebase like that

[–] [email protected] 24 points 4 weeks ago

This is why I make sure that nothing I code functions in any way at all

[–] [email protected] 22 points 4 weeks ago (1 children)

functional programmers when they look at their code 2 years later

[–] Colloidal 22 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

~~functional~~ programmers when they look at their code 2 years later

FTFY

[–] [email protected] 8 points 4 weeks ago* (last edited 4 weeks ago)

Yeah, no side-effects seems like it could only improve readability.

[–] [email protected] 11 points 4 weeks ago* (last edited 4 weeks ago)

Okay, but partial application of curried functions is a really cool way of doing dependency injection, and you haven't experienced bliss until you've created a perfect module of functions that are exactly that.
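For example (a sketch; notifyUser is a made-up name):

  -- The "dependency" is just the first argument
  notifyUser :: (String -> IO ()) -> String -> IO ()
  notifyUser send name = send ("hello " ++ name)

  -- Partially apply to "inject" a real or a fake implementation
  prodNotify :: String -> IO ()
  prodNotify = notifyUser putStrLn         -- real-ish sender

  testNotify :: String -> IO ()
  testNotify = notifyUser (\_ -> pure ())  -- no-op for tests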

Also languages with macros and custom operators (where operators are just functions with special syntactic sugar) are so much cooler than those without (Clojure and elixir my beloved)

Additionally a system where illegal states are made impossible is soooo nice to work in. It's like a cheat code

[–] [email protected] 10 points 4 weeks ago

my_balls |> ligma() |> gotem(laugh=TRUE)

[–] [email protected] 9 points 4 weeks ago

Somebody who worked here before tried to do functional in C# by passing delegates into methods instead of injecting interfaces into constructors, across hundreds of repositories. This is why clever people should not be allowed to write code.

[–] [email protected] 8 points 4 weeks ago

Do curried functions come with grated coconut and a lime wedge?

[–] [email protected] 6 points 4 weeks ago (1 children)
[–] [email protected] 2 points 4 weeks ago

Very cool but if I want to bevel things it's a nightmare =/

[–] [email protected] 3 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

Thankfully never got sucked into that void. I had a coworker who really evangelized functional programming. I wonder what he's up to now.

[–] [email protected] 6 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

We have a principal engineer on our team who is pushing this sort of style, hard.

It's essentially obfuscation; no one else on the team can really review, never mind understand and maintain, what they write. It's all just functional abstractions on top of abstractions, every little thing is a function, even property/field access is extracted out to a function instead of just... using dot notation like a normal person.

[–] [email protected] 6 points 4 weeks ago (1 children)

even property/field access is extracted out to a function

Java, the most functional programming language there is.

[–] [email protected] 5 points 4 weeks ago (1 children)

Well, this is in JS to be clear

Instead of

const name = user.name

It's

const userToName(user) => user.name;

const name = userToName(user);

Ad nauseam.

[–] [email protected] 3 points 4 weeks ago* (last edited 4 weeks ago)

I was afraid you’d say that. That’s stupid.

Do they give a reason for why that’s ‘necessary’?

(Also it should be const userToName = (user) => user.name;)

[–] [email protected] 2 points 4 weeks ago

That was the impression I got about functional programming, from what little I read about it like 15 years ago. Sounds like somebody found a pretty hammer and everything became a nail.

[–] [email protected] 2 points 4 weeks ago* (last edited 4 weeks ago)

I dabbled in some Haskell a few years ago but quit trying when I got to the hard parts like monads and functors and stuff. All those mathematical concepts were a little too abstract for me.

But what I did bring with me from the experience changed my way of programming forever. Especially function composition and tacit (point-free) style programming. It makes writing code so much faster and simpler and it's easier to read and maintain.
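For example (a tiny sketch):

  import Data.Char (toUpper)

  -- The "pointful" version names its argument...
  shout :: String -> String
  shout s = map toUpper (reverse s)

  -- ...the point-free version is just a composition of the same two steps
  shout' :: String -> String
  shout' = map toUpper . reverse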

You can utilize some functional programming concepts without being too hardcore with it and get the best of both worlds in the process. 👍
