Instead of making one big mess, you make multiple smaller messes and stuff them into objects.
Yep. State is bad, so in OOP we take the huge ugly ball of unnecessary state, and we spread it across the program ecosystem as a thin ugly brittle veneer of unnecessary state.
Let's just not talk about the part where the computer is inherently stateful though 😉
Exactly! ;)
OOP is when you forget the S by mistake.
Dude, you're going to shit bricks when you realize most computer science jargon is just marketing buzzwords on top of marketing buzzwords, and the terms never meant anything more or less than what was needed to sell a product.
For example, what the hell is big data? What is a scripting language? Is your DB web scale?
> For example, what the hell is big data?
Big data is when we align our agile synergies at scale.
Vertical integration!
That one phrase does mean something though, and it should be fucking illegal.
Wait, so, the Cloud is actually just a bunch of other computers, called servers, and the only real innovation is basically a load balancing system?
Next you're gonna tell me I won't be able to stream lagless video games and also do competitive multiplayer on my Google Stadia, pff, like you're some kind of expert or something.
/s
> I won't be able to stream lagless video games and also do competitive multiplayer on my Google Stadia
Negative latency!
God I still cannot believe how obviously bullshit that all was and how many fucking idiots parroted it hook line and sinker.
Google casually violating causality
You're getting a lot of conceptual definitions, but mechanically, it's just:
keeping state (data) and behavior (functions) that operate on that state, together
At minimum, that's it. All the other things (encapsulation, message passing, inheritance, etc) are for solidifying that concept further or for extending the paradigm with features.
For example, you can express OOP semantics without OOP syntax:
```python
foo_dict.add(key, val)        # OOP syntax
dict_add(foo_dict, key, val)  # OOP semantics
```
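To make that a bit more concrete, here's a rough Python sketch (the Counter is just a made-up example) of the same state and behavior written both ways:

```python
# Non-OOP style: the state (a plain dict) and the behavior (free functions)
# are kept apart, and you pass the state in explicitly.
def make_counter():
    return {"count": 0}

def counter_increment(counter):
    counter["count"] += 1

# OOP style: the same state and behavior are bundled together in one object.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
```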
> keeping state (data) and behavior (functions) that operate on that state, together
Importantly, that's "together at runtime", not in terms of code organization. One of the important things about an object is that it has dynamic dispatch. Your object is a pointer both to the data itself and to the implementation that works on that data.
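For example, a quick Python sketch (class names made up) of what dynamic dispatch buys you - the same call site ends up running different code depending on what the object actually is at runtime:

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

# The loop body doesn't know or care which class it's dealing with;
# each object carries a reference to its own implementation of speak().
for animal in [Dog(), Cat()]:
    print(animal.speak())
```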
There's a similar but slightly different idea that you see in Haskell, Scala, and Rust - what Haskell calls type classes. Rust gives it a veneer of OO syntax, but the semantics themselves are interestingly different.
In particular, the key of type classes is keeping data and behavior separate. The language itself is responsible for automagically passing in the behavior.
So in Scala, you could do something like
```scala
def sum[A](values: List[A])(implicit numDict: Num[A]) = values.fold(numDict.zero)(numDict.+)
```
Or
```scala
def sum[A: Num](values: List[A]) = values.fold(zero)(_ + _)
```
Given a `Num` typeclass that encapsulates numeric operations. There are a few important differences:

- All of the items of that list have to be the same type of number - they're all Ints or all Doubles or something
- It's a list of primitive numbers and the implementation is kept separate - no need for boxing and unboxing.
- Even if that list is empty, you still have access to the implementation, so you can return a type-appropriate zero value
- Generic types can conditionally implement a typeclass. For example, you can make an Eq instance for List[A] if A has an Eq instance. So you can compare List[Int] for equality, but not List[Int => Int].
OOP on its most fundamental level is the principle that stuff is represented by objects and those objects communicate with each other. That's it, that's the whole OOP.
What you are probably referring to is how OOP solves different problems and the different patterns it uses. Those are not OOP itself, those are basically instructions on how to do OOP correctly without shooting yourself in the foot.
So SOLID, IoC, dependency injection, factory, composition over inheritance and all the other famous principles are not OOP itself, but any medium-size app that's not following them is set for really fun times ~5 years down the road.
Not sure if I've answered your question, it's really vague, feel free to ask further.
The Haskell world (which admittedly is its own type of crazy) considers OOP to be a 1990's thing that was well-intentioned but didn't work out. The basic characteristics of OOP are subtyping and inheritance (added: plus stateful objects).
As originally envisioned, objects were supposed to communicate by message passing: i.e. there would be a separate thread of execution for each object, so they could do stuff asynchronously to each other. By that notion, Erlang is the only OOP language that has any traction. Joe Armstrong, inventor of Erlang, famously said:
> I think the lack of reusability comes in object-oriented languages, not functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
He could have fooled me. I would have guessed he was talking about npm ;).
Ok, so in most languages, you have some way to define a data structure. It could be anything. Maybe it stores the X and Y coordinates of a Cartesian vector. And now you want to do stuff with your vectors, so you write a bunch of functions you can call like `get_vector_length(myvect)` or `add_vectors(vect1, vect2)`.

In OOP, you add that kind of functionality into the data structure itself. So now you can just write `myvect.length()` or `vect1 + vect2` (by implementing the `+` operator for your data structure). At this point, the data structure is typically called a "class" and the functions you build into the class are "methods".
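Roughly, in Python (just a sketch, names are illustrative):

```python
import math

class Vector2D:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def length(self):
        # what get_vector_length(myvect) used to do
        return math.hypot(self.x, self.y)

    def __add__(self, other):
        # what add_vectors(vect1, vect2) used to do; enables vect1 + vect2
        return Vector2D(self.x + other.x, self.y + other.y)

myvect = Vector2D(3, 4)
print(myvect.length())   # 5.0
vect3 = myvect + Vector2D(1, 2)
print(vect3.x, vect3.y)  # 4 6
```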
As you dig deeper into it, you learn about inheritance. When you have 2 related classes that share a lot of functionality, you can use inheritance to save a lot of duplication in your code.
In statically-typed languages, it can also come in useful to have a base class you can pack into a container, since most containers can only accept a single data type. If you had some graphics classes like `Rectangle` and `Circle` that all inherit from `Shape`, you could make a collection of `Shape` that's a mix of those. (In dynamically-typed languages, this tends to be less of an issue since you can put objects of any data type straight into the list. This might be why OOP isn't introduced as early in tutorials for such languages, since it's not as mission-critical? But it's still a good idea to have some sort of class hierarchy where it makes sense.)
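Something like this, sketched in Python (which is dynamically typed, so the container point matters less here, but the hierarchy looks the same):

```python
import math

class Shape:
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

# A mixed collection of Shapes; each element knows how to compute its own area.
shapes = [Rectangle(2, 3), Circle(1)]
print(sum(shape.area() for shape in shapes))
```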
There aren't really that many definitions for OOP; it's a very consolidated paradigm. This is a short but comprehensive guide: https://www.baeldung.com/java-oop
It's simply not true that there "aren't really that many definitions of OOP", much less that the guide you've linked is "comprehensive" when it is specifically about Java.
This is a good, brief post about the different conflicting definitions: https://paulgraham.com/reesoo.html
This is a much more comprehensive but also less focused overview, with many links, from a site that is effectively both a wiki and a forum: https://wiki.c2.com/?ReesOnObjectOrientedFeatures
Academically, you're right. For practical reasons, you probably don't care how Simula, E, Lisp and Smalltalk (languages mentioned in that 20 year old article) implement it. This seemed more like a beginner question so I think the Java definition is a good starting point.
The comments in this thread show that there are different answers to the question, including answers different from this post.
Nonetheless, I appreciate the link. It's a good read.
It's the sound a Mid-Westerner makes when they're trying to get past you.
But for real, it's because this was super common:
```c
typedef struct {
    // data structure
} SomeDataStruct;

void SomeFunction(SomeDataStruct *data) {
    // lots of functions like this
}
```
Like many things, once OOP became a thing, people started adding to it. This is why you get a ton of different definitions and such.
Which OOP? Alan Kay meant this:
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.
But there are also various other OOPs around, and those are really about completely different things.
What kinds of different definitions have you come across?
In essence, OOP: your code describes classes and the way objects of those classes interact with each other. Classes can inherit from other classes and/or implement interfaces (interface, trait, protocol), so you know how derived classes can be interacted with even though you don't know the concrete class until runtime.
The tradeoff would be the stringent nature of OOP: you need to have Objects, otherwise it's just functional/procedural programming with extra steps.
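A rough Python sketch of the "you don't know the concrete class until runtime" part (the Notifier classes are made up for illustration):

```python
import random
from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message):
        ...

class EmailNotifier(Notifier):
    def send(self, message):
        print(f"email: {message}")

class SmsNotifier(Notifier):
    def send(self, message):
        print(f"sms: {message}")

# Calling code only depends on the Notifier interface; which concrete
# class it actually gets isn't known until runtime.
notifier = random.choice([EmailNotifier(), SmsNotifier()])
notifier.send("hello")
```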
JavaScript is generally considered OOP, but classes weren't widely available until 2017.
Inheritance isn't fundamental to OOP, and neither are interfaces. You can have a duck-typed OOP language without inheritance, although I don't know of any off the top of my head.
Honestly, the more fundamental thing about OOP is that it's a programming style built around objects. Sometimes OO languages are class based, or duck typing based, etc. But you'll always have your data carrying around its behavior at runtime.
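For example, a quick duck-typing sketch in Python (made-up classes) - no inheritance, no interface, but both objects carry the behavior that gets looked up at runtime:

```python
class Duck:
    def quack(self):
        return "quack"

class Robot:
    def quack(self):
        return "beep boop"

def make_it_quack(thing):
    # No shared base class or interface required; if the object has a
    # quack() method, the call resolves at runtime.
    print(thing.quack())

make_it_quack(Duck())
make_it_quack(Robot())
```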
JavaScript has been OOP since I can remember due to its prototypal nature. Change something on an inherited prototype, and every descendant also gets those changes. And "classes" are just syntactic sugar for that prototype mechanism.
It's similar to any tech buzzword. Take "agile" for example. Agile was successfully sold as being a great idea without really being well-defined. Suddenly anyone selling a development methodology had a strong incentive to pitch it as being the real way to do agile development.
In the 90s and 2000s every 10x California tech guru agreed that OO was the future, but apparently none of them actually liked Smalltalk. Instead, every new language with a hint of dynamic dispatch suddenly claimed to represent the truest virtues of OO.
There are also people who argue that Smalltalk is not true OO. They say that by Alan Kay's own definition the most OO language is Erlang.
I think it's most useful to learn about that history, instead of worrying about people's post-hoc academic definitions.
Using objects to represent data.