Lately I’ve spent a lot of time internally debating one of my personal software design philosophies: namely, to what extent, if at all, should one go above and beyond the idea of an MVP (Minimum Viable Product) when developing software?
That is to say, is doing the bare minimum to achieve the desired functionality, perhaps even with disregard for the systems around you, the best practice? Does the old saying, “If it ain’t broke, don’t fix it,” still hold water? Or is it worth taking that extra 10% of time to slightly re-architect the systems around you in consideration of that extra piece of functionality you’ve tacked on, so that it doesn’t burst at the seams the next time you need to add “just one more piece of functionality”?
I’ll be the first to admit that, as a fairly inexperienced developer, I have been known to be enthusiastically heavy-handed when developing new systems or features. Where a chisel would have been the best tool for the job, more than once my tendency has been to immediately equip the refactor bazooka and fire at will. While I’ve built some pretty cool things as a result and made some pretty impactful changes, the void of released code in the middle is disheartening. Not to mention the “Hold your breath… here we deploy” moments when the code is actually ready to ship.
As part of cleaning up my act and pursuing better development habits, I’ve been making a conscious effort to refrain from these sorts of tasks, and in many cases to push myself in the opposite direction. Which explains how I arrived at this quandary in the first place.
In many cases it pains me when I know that, with just a bit of extra work, the “just get things done” changes could be made into elegant ones that improve readability, testability, and extensibility down the road. But is that really necessary, or is doing things like this simply over-architecting? Maybe it’s true that if it isn’t breaking the current system, or the one you’re trying to implement, then it should be left alone until it does.
My gut tells me that this isn’t the case. If you’ve been working with a code base for an extended period of time and you understand your current feature set and the foreseeable roadmap, then isn’t the extra 10% overhead now worth saving you the headache later on when the system you’ve been hacking things onto actually does explode?
I understand the arguments against this idea, or at least the case for siloing your changes to one scope of work at any given time. But in my experience working in a startup, “I’ll make that change in a later pull request” really means, “It’s not going to get done.” So, with all that said, I ask: where is the line in the sand one must draw when making these decisions?
Anyway, I’d love to hear what your experiences with this are and what I’m missing. I wrote this in about 30 minutes, probably as procrastination from studying for my American Politics final tomorrow, so hopefully it’s not too raw.