A monad is just a monoid in the category of endofunctors. Everyone knows that!
Programmer Humor
Of course!
In other words though, for those just starting their monad journey:
An endofunctor is a box. If you have a box of soup, and a function to turn soup into nuts, you can probably turn the box of soup into a box of nuts. You don’t have to write a new function for this, if the box can take your existing function and “apply” it to its contents.
Arrays are endofunctors, because they hold things, and you can use Array.map to turn an array of X into an array of Y.
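In TypeScript, that box-that-applies-your-function picture is literally `Array.prototype.map` (names here are just for illustration):

```typescript
// Array is an endofunctor: map applies a plain function inside the "box",
// turning an Array<string> into an Array<number> without unwrapping anything.
const soups: string[] = ["tomato", "pea"];
const toLength = (s: string): number => s.length;

const lengths: number[] = soups.map(toLength);
console.log(lengths); // [6, 3]
```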
Monoids are things you can add together. Integer plus integer equals integer, so ints are monoids. String plus string (concatenation) equals a longer string, so strings are monoids. Grocery lists are monoids.
Arrays are monoids!
Arrays are both endofunctors and monoids, so for everyone except category theory purists, they are monads.
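For arrays, the monad-ish part shows up concretely as flattening: `flatMap` is just `map` plus collapsing one level of nesting.

```typescript
// flatMap = map, then flatten one level of nested arrays.
const xs = [1, 2, 3];
const pairs = xs.flatMap((x) => [x, x * 10]);
console.log(pairs); // [1, 10, 2, 20, 3, 30]

// Equivalent to map followed by flat:
const same = xs.map((x) => [x, x * 10]).flat();
console.log(same); // [1, 10, 2, 20, 3, 30]
```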
Javascript promises hold things, and you can transform their contents with .then - so they are endofunctors. You can “add them together” with Promise.all (returning a new promise), so they are monoids. They are both monoids and endofunctors, so they are monads.
I’ve just upset all the category theorists, but in the context of programming, that’s all it is. It’s surprisingly simple once you get it, it’s just complicated names for simple features.
"Complicated names for simple features" seems to describe Haskell crowd pretty well.
Thank you for a straightforward explanation!
JavaScript promises are not monads: https://stackoverflow.com/questions/45712106/why-are-promises-monads
They're close, but not quite. `then` can shift semantics from under you depending on what's supplied.
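The problem is easy to see in a couple of lines: `.then` always flattens, so a `Promise<Promise<T>>` can never be observed, and the map/flatMap distinction a monad needs simply collapses.

```typescript
// Promise.resolve and .then always unwrap ("adopt") a nested promise or
// thenable, so .then behaves differently depending on whether the callback
// returns a plain value or a promise. That's the semantic shift.
Promise.resolve(Promise.resolve(42)).then((v) => {
  console.log(v); // 42: the inner promise was flattened automatically
});
```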
It should also be added that monoids require the type to have an identity or "empty" value (which is the empty array for arrays).
The explanation is pretty off in general, since it implies that any type that's both a functor and a monoid is a monad, which is simply not true. A good example is a `ZipList` (like lists, but with an applicative instance based on zipping). It's a functor and a monoid, but definitely not a monad.
IIRC a monoid also requires you to have a `zero`, so that an initial value is known for any repetitive operation. In the case of an array that'd be an empty one; in the case of a promise that'd be doing nothing, I guess.
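That identity ("zero"/"empty") element is just the value that combining with changes nothing. In TypeScript terms (helper names here are only for illustration):

```typescript
// A monoid = an associative "combine" operation plus an identity element.
// For arrays, combine is concat and the identity is [].
const combineArr = <T>(a: T[], b: T[]): T[] => a.concat(b);
console.log(combineArr([1, 2], [])); // [1, 2]: [] is the identity

// For strings, combine is concatenation and the identity is "".
const combineStr = (a: string, b: string): string => a + b;
console.log(combineStr("abc", "")); // "abc": "" is the identity
```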
Endofunctors? Now I have to know Minecraft too?
Duh!
A monad isn't "a thing", it's a common interface shared by several different types that have a common mathematical structure that happens to be useful for structuring programs around. I think that's why it's so confusing to people, other programming languages tend to not have as abstract abstractions front and center.
In my experience they're only really in your face when you're doing things with side effects, and at that point it's just a lobster trap that collects your non-functional code until you pass it to main. Maybe I'm just a lame Haskeller though.
I use `Either` (for error handling) and `State` (for shared state in the program) fairly often, sometimes both at once with `IO` in a monad transformer stack. Having pure code is of course the best, but error handling at least tends to sneak in through the program.
> monad transformer stack
I'm going to have to look that up.
When you write big Haskell programs, do you ever find yourself emulating imperative code? I always do past a certain point, and then I figure I might as well bite the bullet and just move to Rust or C or something for the extra performance.
Monad transformers are monads that take another monad as a type argument and are for when you want to have several kinds of monads at the same time. If you want to be able to throw errors, have state and perform IO, you can use the type `ExceptT ErrorType (StateT StateType IO) a`, for example.
IMO the biggest strengths of Haskell are that you can create very powerful abstractions and that you have a greater ability to reason about your code. This is still true to some extent even if you have a lot of imperative-like `State` or `IO` code, so it can still be valuable to write in Haskell. Of course, it's still good to avoid this when possible, and take it as a sign to rethink your design.
The main reasons why I don't program more in Haskell are that it can be un-ergonomic to write certain kinds of code (that use `IO` a lot, for example), that it can be hard to reason about space leaks, and primarily that it's basically pointless to convince anyone else at $DAYJOB that writing something in Haskell is a good idea (for not entirely bad reasons; it's good to have code that's maintainable by multiple people).
I wasn't even thinking of IO - I'm very good at avoiding that when possible - what I end up doing is writing giant functions like `bigChungus :: a -> a` where `a` is a large agglomeration of mostly auxiliary data, and then I call `iterate` on it to search for a member of `[a]` signifying completion, often with a version of `find`. If you think about it that's just a loop with the parts of `a` working as mutable variables.
I have to be suspicious that the GHC runtime is actually building such a linked list and not turning that back into a loop in the imperative assembly code, like it should. And really, if I'm writing that way, why Haskell?
Hmm no, I can't say that I've ever written code like that. For one, it might be better to use `loop :: (a -> Either a b) -> a -> b` instead so that you don't have to sort through the result afterwards with `find`.
I'm not sure exactly what you're trying to do, but maybe using the `State` monad could be a good idea? If `a` is an object with fields that you want to be able to read and update, that sounds a bit like what you might want to use `State` for. This can be combined with maybe something from the loop section of `Control.Monad.Extra` to make the intention of the code a bit clearer.
If performance is critical you might be better off using a different language anyway (Haskell performance is okay but not amazing), but otherwise I don't think that this is really gonna slow down your code unacceptably much.
> Hmm no, I can't say that I've ever written code like that. For one, it might be better to use `loop :: (a -> Either a b) -> a -> b` instead so that you don't have to sort through the result afterwards with `find`.
Lol. Yep, I'm a lame Haskeller.
> I'm not sure exactly what you're trying to do, but maybe using the `State` monad could be a good idea?
This is a pattern that has repeated on different things, and the main reason I haven't done much Haskell in the past couple years. Maybe `State` is what I need, I'll have to look into it.
> If performance is critical you might be better off using a different language anyway (Haskell performance is okay but not amazing), but otherwise I don't think that this is really gonna slow down your code unacceptably much.
See, I come from a maths background, and I have a bit of perfectionism going even if it's not a big deal. Maybe the processor can do a stupid thing and get away with it, but why should it?
What's that? Valuable programmer time you say? Pffft. I'll be over here designing a chess predicament with a multiply-infinite but well-defined solution to reach check (Yes, I've seen it done).
I mean, List is a monad. It just happens that the mathematical pattern works well for encapsulating side effects too.
Oh, I know, it's just not in-your-face. It's entirely possible to use Lists without knowing that.
I'm glad there is at least one serious answer on this thread.
Man. I recall watching the Computerphile video on monads and the first thing the presenter did was choose Haskell for example language.
Worst video of all of them, just some haskell masturbation. "Oooo, we can do infinite liiiists". Bitch that's called a generator.
Give me both and wash it down with some Prolog goddammit
Functional programming is so much fun. Sadly, people usually describe it with complicated concepts to the point that it scares beginners away.
I understand that by giving something a name, we have control and can communicate effectively with others about it (like design patterns). But still...
It's a pretty natural consequence of other languages simply not having a concept or word for the thing that we're trying to abstract over, so better names simply don't exist. I've yet to see anyone come up with a better name than "monad" for the concept. Same for monoids. We may as well use the names that come from math and are already used extensively rather than trying to invent some new name that would invariably be misleading anyway.
Every single programming language is chock full of jargon that is basically meaningless to anyone unfamiliar with it. It's really no different. The only difference is that monads are fundamentally an unfamiliar concept to many imperative programmers, particularly because programming in that style pretty much upends a basic assumption imperative programmers tend to have (namely, that the semantics of sequencing operations is a global, immutable property of programs).
In the mirror universe Morpheus is holding Go and Erlang
I mean, technically, `Option`s are monads, so…
The way I understood monads is they're a way to abstract the "executor" of a function. I/O monads run step-by-step based on stdin, List runs a function on every element, and the function is unaware of this, Option runs the function if the value exists (again the function's not aware of this)
That being said, I'm coming at this from a Rust view, and I've only scanned through one guide to monads so I may be wrong
That's not a monad, that's just typeclasses (also known as traits/interfaces/etc.).
If you're familiar with `flat_map` or `and_then` in Rust, you already get the basic idea of monads. Just imagine that instead of the ad-hoc implementations that Rust has to do (since it doesn't support higher-kinded types), you could express it in a single trait. Then you can write code that generically works with any type that supports `flat_map` operations (along with a couple of other requirements, like needing a `From<T>` impl, as well as abiding by some laws that are really only a concern for library authors).
If that sounds simple, it's because it is. It just so happens that being able to write code in this fashion has some pretty significant implications. It essentially allows you to very strictly control the semantics of sequencing operations, while still giving you enough flexibility to enable many of the things you typically do in imperative programs. In a language like Haskell, which is built on monads, it's a very powerful reasoning tool, much in the same way that purity in functions is a powerful reasoning tool. It allows us to do things like, say, represent async code blocks as first-class concepts requiring no special syntax (no async/await, etc.), because the monad interface allows us to dictate what it means to sequence async actions together (if you're curious to see what this looks like, here's a link to Haskell's `Async` type from the `async` library: https://hackage.haskell.org/package/async-2.2.4/docs/Control-Concurrent-Async.html#t:Async).
Ah, so is it right to say it's an abstraction of how functions are sequenced? I could kinda see that idea in action for I/O and Async (I assume it evaluates functions when their corresponding async input is ready?)
I think that's a reasonable enough generalization, yeah.
I'm sorry though, I seem to have given you incorrect information. Apparently that library does not have monad instances, so it's a bad example (though the `Concurrently` type does have an applicative instance, which is similar in concept, just less powerful). For some reason I thought they also provided monad instances for their API. My bad.
Perhaps it would be better to use a much simpler example in `Option`. The semantics of the sequencing of `Option`s is that the final result will be `None` if any of the sequenced `Option`s had a value of `None`; otherwise it would be a `Some` constructor wrapping the final value. So the semantics are "sequence these operations, and if any fail, the entire block fails", essentially. `Result` is similar, except the result would be the first `Err` that is encountered; otherwise it would be a final `Ok` wrapping the result.
So each type can have its own semantics of sequencing operations, and in languages that can express it, we can write useful functions that work for any monad, allowing the caller of said function to decide what sequencing semantics they would like to use.
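A minimal TypeScript sketch of those `Option` semantics (a hand-rolled type for illustration, not any standard library):

```typescript
// A tiny Option type: None short-circuits the rest of a sequenced chain.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const some = <T>(value: T): Option<T> => ({ kind: "some", value });
const none = <T>(): Option<T> => ({ kind: "none" });

// andThen is the monadic bind: only run the next step if we have a value.
const andThen = <A, B>(o: Option<A>, f: (a: A) => Option<B>): Option<B> =>
  o.kind === "some" ? f(o.value) : { kind: "none" };

const safeDiv = (a: number, b: number): Option<number> =>
  b === 0 ? none() : some(a / b);

// 100 / 5 = 20, then 20 / 2 = 10: every step succeeded.
console.log(andThen(safeDiv(100, 5), (x) => safeDiv(x, 2))); // some(10)

// Dividing by zero anywhere makes the whole chain None.
console.log(andThen(safeDiv(100, 0), (x) => safeDiv(x, 2))); // none
```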
All good, thanks for the explanation! :D
How monadical 😏
Maybe I'm not understanding it correctly, but monads are data-structure objects whose methods return a data-structure object of the same type.
Like (using TypeScript):

```typescript
interface IdentityMonad<T> {
  map: (fn: (v: T) => T) => IdentityMonad<T>;
  value: T;
}

const Identity = <T>(value: T): IdentityMonad<T> => {
  const map = (fn: (v: T) => T) => Identity(fn(value));
  return {
    map, value
  };
};

const square = (x: number) => x * x;

const twoId = Identity<number>(2);
console.log(twoId.value); // => 2

// Squaring five times: 2 -> 4 -> 16 -> 256 -> 65536 -> 4294967296
const bigId = twoId.map(square).map(square).map(square).map(square).map(square);
console.log(bigId.value); // => 4294967296
```
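One nitpick: with only `map`, that's a functor. The monad part is a `chain` (a.k.a. `flatMap`/`bind`) that takes a function which itself returns a wrapped value and avoids double-wrapping the result. A sketch along the same lines (hypothetical names):

```typescript
interface IdMonad<T> {
  value: T;
  map: (fn: (v: T) => T) => IdMonad<T>;
  chain: <U>(fn: (v: T) => IdMonad<U>) => IdMonad<U>;
}

const Id = <T>(value: T): IdMonad<T> => ({
  value,
  map: (fn) => Id(fn(value)),
  // chain ("flatMap"/"bind"): fn already returns an IdMonad, so we
  // return it directly instead of wrapping again; that's the flattening.
  chain: (fn) => fn(value),
});

const half = (x: number) => Id(x / 2);
console.log(Id(8).chain(half).chain(half).value); // => 2
```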