I've found it's mostly two things: readability (ironically) and performance. I'll describe a few crude examples, but I won't get too much into specifics, otherwise I might as well write another book myself.
The performance part is simple: Clean Code's excessive reliance on polymorphism and its several levels of abstraction just don't allow for good code generation. I've seen 10x+ performance improvements from dropping all of the above, often with minimal loss in readability; on the contrary, oftentimes the code became more readable as well.
The readability part is harder to explain; not only because it depends on the codebase and the problem at hand, but also on the coding style each programmer has (though in my opinion, in that particular case it's the programmer's problem, not the codebase's).
I like to think of codebases as puzzles. To understand a codebase, you need to piece together said puzzle. What I've found with Clean Code codebases is that each piece of the puzzle is itself a smaller puzzle to piece together, which isn't ideal.
Functions
They should be small and do one thing
I generally disagree, not because those ideas are wrong, but because they're often too limiting.
What often happens by following those principles is you end up with a slew of tiny functions scattered around your codebase (or a single file), and you are forced to piece together the behaviour they exhibit when called together. Your code loses locality and, much like with CPU cache locality, your brain has to do extra work to retrieve the information it needs every time it needs to jump somewhere else.
It may work for describing what the code does at a high level, but understanding how it works to make meaningful changes will require a lot more work as a result.
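As a tiny, exaggerated sketch of the kind of hopping around I mean (made-up names and numbers):

```rust
struct Order {
    quantity: u32,
    unit_price: f64,
}

// The "small functions that do one thing" version: one calculation,
// spread across four functions you have to chase through the file.
fn is_empty(order: &Order) -> bool { order.quantity == 0 }
fn subtotal(order: &Order) -> f64 { order.unit_price * order.quantity as f64 }
fn tax(amount: f64) -> f64 { amount * 0.20 }
fn total(order: &Order) -> f64 {
    if is_empty(order) { 0.0 } else { subtotal(order) + tax(subtotal(order)) }
}

// The same logic in one place, readable top to bottom.
fn total_inline(order: &Order) -> f64 {
    if order.quantity == 0 {
        return 0.0;
    }
    let subtotal = order.unit_price * order.quantity as f64;
    subtotal + subtotal * 0.20
}

fn main() {
    let order = Order { quantity: 3, unit_price: 10.0 };
    println!("{} {}", total(&order), total_inline(&order));
}
```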
Don't repeat yourself
Once again, it makes sense in principle, but in practice it often creates more problems. I agree that having massive chunks of repeated code is bad, no questions about it, but for smaller chunks it may actually be desirable in some circumstances.
By never repeating code, you end up with functions that are over-parameterized to account for all possible uses and combinations that particular code snippet needs to work with. As a result, that code becomes more complex, and the code that calls it does too, because it requires you to know all the right parameters to pass for it to do the right thing.
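A caricature of what that tends to turn into (the function and its flags are made up, but you've probably met its cousins):

```rust
// One "reusable" helper that grew a parameter for every call site it absorbed...
fn format_name(
    first: &str,
    last: &str,
    last_name_first: bool,
    uppercase: bool,
    separator: &str,
    max_len: Option<usize>,
) -> String {
    let mut name = if last_name_first {
        format!("{last}{separator}{first}")
    } else {
        format!("{first}{separator}{last}")
    };
    if uppercase {
        name = name.to_uppercase();
    }
    if let Some(n) = max_len {
        name.truncate(n);
    }
    name
}

fn main() {
    // ...and now every caller has to know exactly which combination of
    // arguments produces the behaviour it actually wants.
    println!("{}", format_name("Ada", "Lovelace", true, false, ", ", None));
    println!("{}", format_name("Ada", "Lovelace", false, true, " ", Some(3)));
}
```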
Exceptions
Exceptions are just bad. They are a separate, hidden control flow that you constantly need to be wary of.
The name itself is a misnomer in my opinion, because they're rarely exceptional: errors are not just common, but an integral part of software development, and they should be treated as such.
Errors as values are much clearer, because they explicitly show that a function may return an error and that it should be handled.
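Something as simple as this makes the failure mode part of the signature (a made-up example):

```rust
use std::num::ParseIntError;

// The possibility of failure is spelled out in the return type;
// the caller can't get at the port without acknowledging the error case.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```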
Classes, interfaces and polymorphism
I have lots of gripes with object orientation. Not everything needs to be an object, and not everything needs to be polymorphic. There's no need to have a `Base64Decoder`, much less an `IBase64Decoder` or an `AbstractBase64Decoder`: Base64 only works one way, there are no alternative implementations, and a function is enough (see the sketch below). I'm a lot more on the data-oriented side of the aisle than the OO one, but I digress.
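Something like this, purely as a sketch (not any real library's API, and the decoding itself is stubbed out):

```rust
// The object-oriented ceremony...
trait Base64Decoder {
    fn decode(&self, input: &str) -> Vec<u8>;
}

struct StandardBase64Decoder;

impl Base64Decoder for StandardBase64Decoder {
    fn decode(&self, input: &str) -> Vec<u8> {
        decode_base64(input)
    }
}

// ...versus a plain function, which already says everything there is to say.
// (The actual decoding is elided; this stub just echoes the input bytes.)
fn decode_base64(input: &str) -> Vec<u8> {
    input.bytes().collect()
}

fn main() {
    let decoder = StandardBase64Decoder;
    println!("{:?}", decoder.decode("aGVsbG8="));
    println!("{:?}", decode_base64("aGVsbG8="));
}
```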
Object orientation can be good in certain contexts, but it's not a silver bullet.
Encapsulation for the sake of it
Let's say you have something like this:
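For the sake of argument, a plain bag of data (made-up fields, Java only for illustration):

```java
// A plain data holder: two public fields and nothing else.
public class Point {
    public double x;
    public double y;
}
```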
With the Clean Code approach, it magically becomes:
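Roughly this (same made-up fields):

```java
// The same data, now "encapsulated": private fields plus getters and
// setters that do nothing except forward the value.
public class Point {
    private double x;
    private double y;

    public double getX() { return x; }
    public void setX(double x) { this.x = x; }

    public double getY() { return y; }
    public void setY(double y) { this.y = y; }
}
```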
Why? Who the hell knows. It makes absolutely no tangible difference; it only makes your code longer and more verbose. Now, if a value actually needs validation, sure, but oftentimes this is done regardless, and it drives me insane.
Abstract classes for everything!
The problem with wanting to create the most generalized code in advance is that you end up stuck in abstraction hell.
You may well not need the ability to have arbitrary implementations, but now you have to plan for it anyway.
Not only that, but it also makes reasoning about your code harder: how many times have you had to step through your code just to figure out what was actually being executed, or which concrete class was hiding behind an abstract reference?
Me? Way too many times, and there was often no reason for it.
Also, the idea that you shouldn't know about the implementation is crazy to me. Completely encapsulating data and behaviour not only makes you miss out on important optimizations, but often leads to code bloat.
There's more but I'm tired of typing :)
Take a look at these if you want more info or someone else's view on the matter, I wholeheartedly recommend both:
They may be a part of software development, but they should not be common during the normal execution of software. I once read the hint, “if your app doesn’t run with all exception handlers removed, you are using exceptions in non-exceptional cases”.
Throwing an exception is a way to tell your calling function that you encountered a program state in which you do not know how to proceed safely. If your functions regularly throw errors at you, you didn’t follow their contract and (for instance) didn’t sanitize the data appropriately.
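For instance (a made-up precondition check):

```java
class Account {
    // Throwing here signals a broken contract: the caller was supposed to
    // have sanitized the amount before ever calling this method.
    static long withdraw(long balance, long amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative: " + amount);
        }
        return balance - amount;
    }

    public static void main(String[] args) {
        // The normal execution path never sees an exception.
        System.out.println(withdraw(100, 40));
    }
}
```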
I disagree here. You can always ignore an error return value and pretend that the “actual” value you got is correct. Ignoring an exception, on the other hand, requires the effort to first catch it and then write an empty error handler. Also (taking Go as an inspiration), I (personally) find this very hard to read:
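Something along these lines (a Go-flavoured sketch with made-up calls):

```go
package main

import (
	"fmt"
	"strconv"
)

// Made-up functions, just to give the example some shape.
func trySomething(s string) (int, error) { return strconv.Atoi(s) }

func trySomethingElse(x int) int { return x * 2 }

func tryYetSomethingElse(x int) (int, error) { return strconv.Atoi(strconv.Itoa(x)) }

// The business logic keeps getting interrupted by error plumbing.
func run(input string) (int, error) {
	a, err := trySomething(input)
	if err != nil {
		return 0, fmt.Errorf("trySomething: %w", err)
	}
	b := trySomethingElse(a)
	c, err := tryYetSomethingElse(b)
	if err != nil {
		return 0, fmt.Errorf("tryYetSomethingElse: %w", err)
	}
	return c, nil
}

func main() {
	fmt.Println(run("21"))
}
```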
This code mingles two separate things: The “normal” flow of the program, which is supposed to facilitate a business case, and error handling.
In this example, on the other hand, you can easily figure out the flow of data and how it relates to the function’s purpose and ignore possible errors. Or you can concentrate on the error handling, if you so choose. But you don’t have to do both simultaneously:
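Something like this instead (the same made-up calls, this time with exceptions):

```java
class Flow {
    static int trySomething(String s) { return Integer.parseInt(s.trim()); }

    static int trySomethingElse(int x) { return x * 2; }

    static int tryYetSomethingElse(int x) { return Integer.parseInt(Integer.toString(x)); }

    static int run(String input) {
        try {
            // The business case, uninterrupted and readable top to bottom.
            int a = trySomething(input);
            int b = trySomethingElse(a);
            return tryYetSomethingElse(b);
        } catch (NumberFormatException e) {
            // And the error handling, all in one place.
            System.err.println("could not process input: " + e.getMessage());
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(run("21"));
    }
}
```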
Agreed. Go's implementation of errors as values is extremely noisy and error prone. I'm not a fan of it either.
Then that's a language design / API design issue. You should make it so you cannot get the value unless you handle the error.
I'm of the opinion that errors should be handled "as soon as possible". That doesn't necessarily mean immediately below the function call the error originates from, it may very well be further up the call chain. The issue with exceptions is that they make it difficult to know whether or not a function can fail without knowing its implementation, and encourage writing code that spontaneously fails because someone somewhere forgot that something should be handled.
The best implementation of errors as values I've seen is Rust's `Result` type, which, paired with the `?` operator, can achieve a similar flow to exceptions (when you don't particularly care where exactly an error has occurred and just want to model the happy path) while clearly signposting all fallible function calls. So, taking your example, it would become:
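Something along these lines (with stub functions just to make the shape concrete):

```rust
use std::num::ParseIntError;

// Stubs for illustration: two fallible calls, one infallible.
fn try_something(input: &str) -> Result<i32, ParseIntError> {
    input.trim().parse()
}

fn try_something_else(x: i32) -> i32 {
    x * 2 // no Result: this one simply cannot fail
}

fn try_yet_something_else(x: i32) -> Result<i32, ParseIntError> {
    format!("{x}").parse()
}

// The happy path reads straight through, and every fallible call is marked with `?`.
fn run(input: &str) -> Result<i32, ParseIntError> {
    let a = try_something(input)?;
    let b = try_something_else(a);
    let c = try_yet_something_else(b)?;
    Ok(c)
}

fn main() {
    println!("{:?}", run("21"));
}
```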
The difference is that you know that `try_something` and `try_yet_something_else` may fail, while `try_something_else` cannot, and you're able to handle those errors further up if you wish. You could do so with exceptions as well, but it wasn't clear which calls could actually fail.
The same clarity argument can be made for `null` as well. An `Option` type is much preferable because it forces you to handle the case in which you are handed nothing. If a function can operate with nothing, then you can clearly signpost it with an `Option<T>`, as opposed to just `T` if a value is mandatory.

Exceptions are also a lot more computationally expensive. The compiler needs to generate landing pads and a bunch of other stuff, which not only bloats your binaries but also prevents several optimizations. C# notoriously cannot inline functions containing `throw`s, for example, and utility methods must be created to mitigate the performance impact.

You're talking Monads, baby!