... all the time.
Do you want to be more specific?
Okay, yes, I probably should.
`__proto__`-what-is-that-even "prototype" property that adds fields beyond the ones living directly in the object), and mutable (an object can become a different type of object by calling `Object.setPrototypeOf(instance, newPrototype)`, which on modern browsers will stab your performance directly in the jibblies, but will also change an object into another type of object). Instances of classes are very convenient—who doesn't love getting an object and just calling `myObject.doSomeStuff(args)`?—and they can really help you organize your code.
Here's why you should use them less.
They require special serialization and deserialization
If your objects are POJOs, serialization is just `JSON.stringify` and deserialization is just `JSON.parse`. You'll likely still need to validate that the parsed data is the right format, since the Web is a hellworld full of active attackers (such as your own server code compiled to the wrong version), but you're 90% of the way there.
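That validation step can stay lightweight when the data is plain. A minimal sketch (the `Point` type and `parsePoint` function are my invention, not from the article): parse the wire data, then check its shape before trusting it.

```typescript
type Point = { x: number; y: number };

// Parse JSON off the wire and confirm it has the shape we expect.
function parsePoint(raw: string): Point {
  const data: unknown = JSON.parse(raw);
  if (
    typeof data !== "object" || data === null ||
    typeof (data as any).x !== "number" ||
    typeof (data as any).y !== "number"
  ) {
    throw new Error("not a Point");
  }
  return data as Point;
}

const p = parsePoint('{"x": 3, "y": 4}');
```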
But if your objects are class instances? Oh dear. I'm sorry you did that to yourself. Don't forget to define a `toJSON` method to explicitly select which fields you want serialized, and keep it up to date as the class changes. If you don't, you'll get the object's "enumerable properties" (what those are is left as an exercise for the reader). And on the deserialization side, don't forget to specify a `reviver` function that takes pieces of your parsed JSON, pattern-matches them against the original instances, and uses the class constructor to turn the object back into an instance. Be careful: `reviver` looks at sub-trees, and you're at risk if some sub-trees with the same properties should be different classes; I recommend synchronizing the `toJSON` for those classes to add a 'tag' field that can be inspected to pop the POJO back into an object instance. And don't forget to synchronize all that serialization and deserialization logic with the server's representation of the data, or uh-oh!
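The tagged round-trip described above looks something like this sketch (the `Circle` class and its fields are invented for illustration):

```typescript
class Circle {
  constructor(public radius: number) {}
  toJSON() {
    // Emit a tag so the reviver can tell this subtree apart from any
    // other object that happens to have a `radius` field.
    return { tag: "Circle", radius: this.radius };
  }
}

function reviver(_key: string, value: any) {
  if (value && value.tag === "Circle") {
    return new Circle(value.radius);
  }
  return value;
}

const wire = JSON.stringify({ shape: new Circle(2) });
const back = JSON.parse(wire, reviver);
// back.shape is a Circle instance again -- at the cost of keeping
// toJSON, the tag, and the reviver in sync forever.
```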
... or you could do none of that, and just have the in-memory object representation track more closely to the on-wire model by not using classes.
They add overhead
They're a pain in the ass to unit test
What you want to do in unit tests is confirm your functions manipulate state correctly. Because class use encourages state hiding, it makes it trickier to write unit tests... Do you want to test your private or protected methods? "No," says the purist, "you should test your public API." Okay, but the public API relies internally on a couple dozen private functions, so now I'm writing big, jangly, dependency-heavy tests to get around the fact that I can't just call `myInstance.privateMethod` directly and test its output. And if I'm using a mocking library, I'm now mocking stuff up in `MyClass.prototype`, and sometimes I'm working around private methods by adding a public wrapper that exists only for the tests.
instanceof checks are traps
When you have to dynamically determine whether an unknown object is an instance of some class, you can use the built-in `instanceof` operator. It walks the object's prototype chain to see if the specified class shows up anywhere as a parent. This works great until you get into anything complicated involving libraries and modules. Suddenly, you discover that your `ThreeDCoordinate` isn't a `ThreeDCoordinate` because it was built with `ThreeDLib` version 2.7, but you're using npm and the code you're running right now lives in a library you added that depends on version 2.9 of `ThreeDLib`, and no, the 2.9 and 2.7 `ThreeDCoordinate` classes aren't the same class, even though they are 100% the same code.
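You can demonstrate the failure mode in a few lines. This sketch fakes "two versions of the library" with a factory, since that's exactly what two copies in `node_modules` amount to: identical code, distinct classes.

```typescript
// Each call plays the role of loading one copy of the library.
function makeLib() {
  return class ThreeDCoordinate {
    constructor(public x: number, public y: number, public z: number) {}
  };
}

const CoordV27 = makeLib(); // pretend: the 2.7 copy in node_modules
const CoordV29 = makeLib(); // pretend: the 2.9 copy in node_modules

const c = new CoordV27(1, 2, 3);
const sameCopy = c instanceof CoordV27;  // true
const crossCopy = c instanceof CoordV29; // false, despite identical code
```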
So what should we do instead?
Plain old JavaScript objects, manipulated by standalone functions. I don't need the arguments to `getDistance(coord1, coord2)` to really be `ThreeDCoordinate` objects; if they have `x`, `y`, and `z` fields that are numbers, I can act on them.
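The duck-typed version is short. A sketch (the `Coord` type name is mine): any object with numeric `x`, `y`, and `z` fields works, no class required.

```typescript
type Coord = { x: number; y: number; z: number };

// Works on any object with the right fields; no prototype involved.
function getDistance(a: Coord, b: Coord): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

const d = getDistance({ x: 0, y: 0, z: 0 }, { x: 3, y: 4, z: 0 });
```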
With POJOs manipulated by functions, I don't need special handling for serialization, my objects are much smaller (and since I never modify a prototype chain, I never incur that very slow operation in a modern browser), and I can get inheritance by either extending objects (taking one object and adding fields to it... not great, because this also incurs browser overhead) or composing objects (making a new object that has a field containing the "parent" object).
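Both inheritance substitutes fit in a few lines (the names here are invented for illustration):

```typescript
const point2d = { x: 1, y: 2 };

// Extension: copy the parent's fields into a new, wider object.
const point3d = { ...point2d, z: 3 };

// Composition: the "child" holds the parent in a field instead.
const labeled = { label: "origin-ish", point: point2d };
```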
There are some possible downsides to this approach. One is that the lack of enforced discipline in only having some methods available on some objects means you'll have to be more careful to keep your inputs straight (it's easier to pass the wrong object to the wrong handler function when you're not referencing the methods via `myObject.method()`). Another is that functions divorced from the data they care about tend to end up wordy; it's no longer `myObject.doSomeStuff(args)`, it's `doSomeStuff(myObject, args)`.
One additional downside is the lack of private members. To be honest, while I find these conceptually useful, I don't find I need the language itself enforcing discipline around them these days. My experience is that the question "how private is private?" is fuzzier than I want it to be. The object model enforces it as "data only visible inside the methods of the class," and I find myself needing to jail-break that abstraction (for testing or "friend-class" reasons) too often. For functions, I can get privacy by scoping them to the module level. For data, if the API is sufficiently complex that private data matters, I put creation and maintenance of the object behind constructor and mutator functions and only change the data through those functions.
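A sketch of that constructor-and-mutator pattern, using an invented `Counter` example (in a real module, only `makeCounter` and `increment` would be exported; `clamp` stays module-private):

```typescript
type Counter = { count: number };

// Module-private helper: never exported, so callers can't reach it.
function clamp(n: number): number {
  return Math.max(0, n);
}

// Constructor function: the only sanctioned way to create a Counter.
function makeCounter(): Counter {
  return { count: 0 };
}

// Mutator function: the only sanctioned way to change one.
function increment(c: Counter, by = 1): Counter {
  return { count: clamp(c.count + by) };
}

const c1 = increment(increment(makeCounter()), -5);
```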
A TypeScript plug
To implement the approach I'm describing here, what I really do these days is build my types of objects as interfaces and use interface inheritance to indicate when one object can be treated as a subset of another object. I construct objects in functions declared to return a particular interface-conforming object and write functions that take in a particular interface-conforming object. The compiler will do the work at compile time to let me know if I'm trying to pass the wrong type to the wrong function. In the relatively rare cases that I'm handling multiple types of object on the same channel, I can use tag fields (or the structure of the data) and type guards to turn mystery-typed objects into an understood type.
The zeroth rule is there are no rules
I actually use classes often, but I have some specific rules of thumb on when to use them and when to avoid them:
1. If you have a big type family and inheritance is cheaper than special-casing
If you're dealing with a family of a dozen or more related types, where most of them share implementation but a few have special handling needs (e.g., the traditional "shapes are circles and rectangles" problem), you may very well be better off using classes than an elaborate family of handler functions and special-case logic for switching on particular instances of the type family. In my experience, big bags of things mapping to tangible objects tend to fit this description.
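A compressed sketch of that tradeoff, using the classic shapes family (two types standing in for the dozen): the shared implementation lives once in the base class, and only the genuinely special behavior is overridden.

```typescript
abstract class Shape {
  abstract area(): number;
  // Shared implementation inherited by the whole family.
  describe(): string {
    return `shape with area ${this.area()}`;
  }
}

class Rectangle extends Shape {
  constructor(private w: number, private h: number) { super(); }
  area() { return this.w * this.h; }
}

class Circle extends Shape {
  constructor(private r: number) { super(); }
  area() { return Math.PI * this.r * this.r; }
}

const rectArea = new Rectangle(2, 3).area();
const label = new Circle(1).describe();
```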
2. Don't use them to describe data on the wire
It's hardly ever worth the heavyweight serialization / deserialization of mapping data on the wire to class instances. If your data is going on the wire, keep it simple. Note that point 1 and point 2 sometimes come into conflict. There is no universal answer here; you'll be making tradeoffs one way or the other if you class-up a big type family that also serializes onto the wire. At least if you go that road, you can make implementing `toJSON` on every class, and maintaining a `reviver` that understands the whole class family, part of the process.
3. Don't use instanceof
At this point in my career, I consider `instanceof` harmful; it actively conflicts with the ability to use different versions of a library in the same codebase, and it fails in silent and confusing ways. It also bakes knowledge of the class hierarchy into possibly unrelated code. Try not to use it.