A tale of growing frustration with Object-Oriented Programming that highlights one of its core features as a culprit.
It’s easy to become polarized about the two main paradigms that we use to classify new programming languages. I certainly was, until recently.
Do you prefer object-oriented programming, or functional programming? If you are okay with either paradigm, then congratulations: you are not polarized. But if you greatly prefer one over the other, or maybe even glance condescendingly at fellow developers who don’t share your polarized view of the world, then I hope to change your mind.
Functional vs. OO — Which is Better?
Let’s start with Java. Java is object-oriented. Considering that you can’t even write your main function without embedding it inside a class (thereby making it a method), it shouldn’t be hard to convince you that this statement is true.
Then there’s C. There are no classes, no inheritance, no sub-typing, no function overloading: none of the features that characterize a language as OO. If it walks like a duck, talks like a duck, and quacks like a duck, then it must be a duck, right? Well, C is no OO duck.
What about PHP? Is PHP object-oriented? Well, yes. Is it functional? Yes. In fact, PHP did not have classes until PHP 3 (and its support for classes wasn’t much good until PHP 5, but that’s a different story). Class support was brutally injected into the language just like many of its other features, creating something of a cross-breed where things fit together awkwardly. It walks like a duck, but talks like an elephant. I’ve been working with PHP for a number of years, so trust me when I insist that I don’t say this lightly. So PHP is both. Let’s move on.
While Go doesn’t have a type called ‘object’, it does have a type that matches the definition of an object: a data structure that couples data with behavior. In Go this is called a ‘struct’.
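As a minimal sketch (the `Account` type and its fields are my own illustration, not taken from any particular codebase), here is a Go struct that couples data with behavior:

```go
package main

import "fmt"

// Account couples data (a balance) with behavior (methods):
// the essence of an "object", even without the keyword.
type Account struct {
	balance int
}

// Deposit adds to the balance; the receiver binds the behavior to the data.
func (a *Account) Deposit(amount int) {
	a.balance += amount
}

// Balance reports the current balance.
func (a *Account) Balance() int {
	return a.balance
}

func main() {
	a := &Account{}
	a.Deposit(42)
	fmt.Println(a.Balance()) // prints 42
}
```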
Why I Was Polarized
Why You Shouldn’t Be Polarized
Then came Go. And although Go is OO, it doesn’t take the paradigm to the extreme the way Java does. It was pleasant to work with. It took me a while, and a bit of research, to figure out why.
The reason is that Go emphasizes composition over inheritance. That led to another revelation: I never disliked OO itself. I had just been frustrated by certain aspects of it, such as inheritance and sub-typing. It was often impossible to make changes to my classes without having those changes ripple up and down through the inheritance hierarchy. I kept thinking that I must be doing something wrong. After all, what’s the point of all this code hiding and carefully planned class design, meant to ensure that the “user” (potentially another developer) can extend my class in just the right places and in just the right way, if the hierarchy becomes so fragile that I can’t make changes myself without modifying every single class in it? Back then I didn’t know any better, because most OO programming languages only offer an all-or-nothing, take-it-or-leave-it package. And inheritance was promoted as one of the core pillars of OO design.
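To make the contrast concrete, here is a small sketch of composition via Go’s struct embedding; the `Logger` and `Server` types are illustrative names of my own choosing, not a real API:

```go
package main

import "fmt"

// Logger is a small, self-contained capability.
type Logger struct {
	prefix string
}

// Log formats a message with the logger's prefix.
func (l Logger) Log(msg string) string {
	return l.prefix + ": " + msg
}

// Server gains Log by embedding Logger (composition),
// not by inheriting from it. Changing Logger cannot break
// a class hierarchy, because there is none.
type Server struct {
	Logger // embedded: Server "has a" Logger, and its methods are promoted
	addr   string
}

func main() {
	s := Server{Logger: Logger{prefix: "server"}, addr: ":8080"}
	fmt.Println(s.Log("listening on " + s.addr))
}
```

The embedded `Logger` stays an independent component: it can be tested, swapped, or extended without any ripple effects on `Server`.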
As you probably know, inheritance also introduces a teeth-grinding issue known as the “Deadly Diamond of Death” (see Wikipedia for more information). Java mitigates this by disallowing multiple inheritance of classes, and while that could be considered a “safer” or “better” inheritance model than, e.g., C++’s, the main challenges of inheritance remain.
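Go’s embedding handles the equivalent clash at compile time: if two embedded types provide the same method, the promoted selector is ambiguous and simply won’t compile, forcing you to disambiguate explicitly. A small illustrative sketch (the types `A`, `B`, and `D` are made up for this example):

```go
package main

import "fmt"

type A struct{}

func (A) Name() string { return "A" }

type B struct{}

func (B) Name() string { return "B" }

// D embeds both A and B. Go has no inheritance, but embedding
// can still produce a diamond-like clash: the bare call d.Name()
// is ambiguous and is rejected by the compiler.
type D struct {
	A
	B
}

func main() {
	d := D{}
	// fmt.Println(d.Name()) // compile error: ambiguous selector d.Name
	fmt.Println(d.A.Name()) // explicit disambiguation: prints "A"
	fmt.Println(d.B.Name()) // explicit disambiguation: prints "B"
}
```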
I’m not the only one — and far from the first — to realize the issues surrounding inheritance. Once I had my suspicions, it was easy to find second opinions.
The Gang of Four warned against over-reliance on inheritance in their 1994 book “Design Patterns”: “Favor ‘object composition’ over ‘class inheritance’.” (quote via Wikipedia).
Here’s Fabien Potencier, author of the Symfony Framework for PHP, arguing that future releases of Symfony should favor composition over inheritance: https://medium.com/@fabpot/fabien-potencier-4574622d6a7e.
In his excellent write-up, Nicolò Pignatelli makes the case for staying away from inheritance: https://codeburst.io/inheritance-is-evil-stop-using-it-6c4f1caf5117.
Allen Holub, writing for JavaWorld, also argues that inheritance should be avoided (in Java, no less!): https://www.javaworld.com/article/2073649/core-java/why-extends-is-evil.html.
With this write-up, I’m actually not trying to convince you that inheritance is evil, and that you should stay away from it. I’ve already had that revelation, before I knew that there is a body of research out there that has reached similar conclusions. I’m just hoping to add some nuance to the discussion.
Go has helped show me that OO is not evil; far from it. But some of the features that have landed along with OO, in particular inheritance and (inheritance-based) sub-typing, are indeed so easy to misapply that they should probably be considered evil. These features will undermine your code quality and leave you curled up in the corner of the room, crying. I only regret taking so long to realize this.
I’m not dismissing the entire OO paradigm, as for example Charles Scalfani did in his interesting, but controversial and highly polarized write-up, Goodbye, Object Oriented Programming. I’m not even being polarized (despite the title of this write-up) on whether or not inheritance belongs in a modern programming language. That’s my whole point: each language feature (be it inheritance, polymorphism, async functions, etc.) has a limited area of applicability, a constrained set of scenarios where that feature is a good match that encourages “good design” rather than impedes it. In that context, the value of inheritance certainly needs to be downplayed to a much greater degree than it has been in the past.
With new features such as closures coming to both PHP and Java (and a whole programming language, Clojure, nearly named after them), functional programming is indeed experiencing a comeback. But it shouldn’t necessarily be to the detriment of OO.
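For readers who haven’t met closures yet, here is a minimal Go sketch (the `counter` function is my own illustration): a closure captures a local variable and keeps it alive across calls.

```go
package main

import "fmt"

// counter returns a closure: a function value that captures
// the local variable n and retains it between invocations.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next()) // prints 1
	fmt.Println(next()) // prints 2
}
```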
Be critical, always. When working with a new and exciting programming language, decide for yourself which features of that language work well, and which don’t. Some features will seem clever, but will actually undermine your productivity. There’s a trade-off, and not every single feature of a programming language is the pinnacle of technological development.
Note: Sub-typing refers to compatibility of interfaces, and is actually a trait that we want in our interface-based designs. But the variation of sub-typing where compatibility springs from types that inherit from one another is undesirable. That is the variation I’m referring to when I mention sub-typing in this write-up.
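The desirable, interface-based kind of sub-typing looks like this in Go (the `Shape` and `Square` types are illustrative): `Square` is compatible with `Shape` purely because it has the right method, with no inheritance involved.

```go
package main

import "fmt"

// Shape is an interface: any type with an Area method satisfies it.
type Shape interface {
	Area() float64
}

// Square satisfies Shape without declaring any relationship to it
// and without inheriting from anything: compatibility springs from
// the interface alone, not from a type hierarchy.
type Square struct {
	side float64
}

// Area implements the Shape interface implicitly.
func (s Square) Area() float64 {
	return s.side * s.side
}

// describe accepts any Shape, demonstrating interface-based sub-typing.
func describe(sh Shape) string {
	return fmt.Sprintf("area = %.1f", sh.Area())
}

func main() {
	fmt.Println(describe(Square{side: 3})) // prints "area = 9.0"
}
```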