It's Too Easy - Part III - In Search of a Shorter Letter
![A quiet night in 8-bit Paris]()
If I had more time, I would have written a shorter letter
- attr. Blaise Pascal
I've spent about 1500 words arguing against making things too complex, so we should just opt for the easiest possible solution, right?
Well, being easy isn't all that easy. Remember way back in Part I, when I discussed entropy? And remember when I said that we often make things too complex because we assume that complexity is the natural state of a solution? It turns out that's true.
How many times have you looked at something and decided it's going to be a simple task (I know I'll hang that drywall myself!), and 8 hours and a trip to the emergency room later, you realize there were a lot of details you overlooked during your initial estimate that made the task decidedly not simple?
That's precisely why people spend so much time working through the details of a complex solution during planning (the ultimate universal joke being that, often, the details they focus on aren't the ones that actually need to be addressed during implementation, hence the mess that accompanies a system designed for complexity).
In addition, simple solutions seem obvious after the fact precisely because they are so simple and intuitive. What people overlook is that the solution only seems simple in hindsight.
A great example is Google's search bar, which is often held up as a paragon of simplicity. It's a single field with a search icon that generally gets you anywhere you need to go by typing in a few words of free-form text.
Previous search engine iterations attempted to bucket results by category, making the user interface much more difficult to navigate. In addition, Google's algorithm for returning relevant search results is fairly straightforward (though, I suspect, the underlying details of the work are even more complex than hanging drywall by yourself). The search algorithm consists of distributing the work of indexing pages among independent computer processes (if you're not familiar with indexing in computer science, think of it as a book index; it's very similar, and it's where the CS term originates), scoring the number of references to a page, and determining whether those references come from trusted sources (which, in turn, are deemed trusted based on their own link scores), and it's clear enough that I can describe it in one run-on sentence.
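If that run-on sentence is hard to parse, here's a rough sketch of the link-scoring half of the idea, emphatically not Google's actual implementation. The tiny four-page "web", the damping factor, and the iteration count are all assumptions made purely for illustration.

```python
# A toy sketch of iterative link scoring: a page earns trust when trusted
# pages link to it. Not Google's real algorithm, just the shape of the idea.

def score_pages(links, iterations=20, damping=0.85):
    """Score pages iteratively based on who links to them."""
    pages = list(links)
    scores = {page: 1.0 / len(pages) for page in pages}

    for _ in range(iterations):
        new_scores = {}
        for page in pages:
            # A page's new score is the share of trust passed along by every
            # page that links to it, plus a small baseline so nothing hits zero.
            incoming = sum(
                scores[other] / len(links[other])
                for other in pages
                if page in links[other]
            )
            new_scores[page] = (1 - damping) / len(pages) + damping * incoming
        scores = new_scores

    return scores


if __name__ == "__main__":
    # A hypothetical four-page web: each page maps to the pages it links out to.
    toy_web = {
        "home": ["docs", "blog"],
        "docs": ["home"],
        "blog": ["home", "docs"],
        "spam": ["spam"],  # only references itself, so it earns little trust
    }
    for page, score in sorted(score_pages(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```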
But this only seems like a straightforward way to index the internet now. Prior to its creation, search engines weren't able to successfully link search terms to relevant results (they were essentially just an accumulation of links without any scoring of the quality of each link), which is why Google's algorithm turned Google into, well, Google.
Finding a simple solution like this is extraordinarily tricky. Cut too much from the proposal, and you're left with a lackluster feature set or the most trivial of all solutions. Cut too little, and you're back to taming the Beast of Byzantium and handling all sorts of edge cases that never appear.
It seems, then, that we've reached a deadlock. I've made the claim that we shouldn't go in search of complex solutions, but complex solutions, paradoxically, tend to be easier to develop than truly simple but useful concepts.
It's in these moments that I rely on a couple of rules of thumb. The first is sticking to the Pareto Principle (the 80/20 rule), which states that 80% of outcomes derive from 20% of causes. It originated as an economic principle, but the distribution behind it, surprisingly, shows up in several other fields. This is the mathematical representation of the proverbial "low-hanging fruit" in corporate speak. Look for the easiest tasks with the biggest returns and focus on those first.
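In practice, that triage can be as crude as sorting the backlog by estimated return per unit of effort and working from the top. Everything in the sketch below - the task names, the numbers, and the return-per-effort heuristic - is made up for illustration.

```python
# A rough "low-hanging fruit" triage: rank candidate tasks by estimated
# return per unit of effort and start at the top. All figures are invented.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    effort_days: float      # rough estimate of cost
    expected_return: float  # rough estimate of payoff, in whatever unit you track


def prioritize(tasks):
    """Sort tasks so the cheapest, highest-return work comes first."""
    return sorted(tasks, key=lambda t: t.expected_return / t.effort_days, reverse=True)


if __name__ == "__main__":
    backlog = [
        Task("Add retry logic to a flaky API call", effort_days=1, expected_return=8),
        Task("Rewrite the service in a new framework", effort_days=40, expected_return=15),
        Task("Cache the most common query", effort_days=2, expected_return=10),
        Task("Support a hypothetical future data format", effort_days=10, expected_return=1),
    ]
    for task in prioritize(backlog):
        print(f"{task.expected_return / task.effort_days:5.2f}  {task.name}")
```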
I should note that, in cases like site reliability, where we're routinely chasing availability targets in the high 90s, the Pareto Principle doesn't say that 80% is good enough (a site that's down the equivalent of one day out of every five will soon cease to be a site that draws any traffic). Instead, we should concentrate on the 20% of effort that will get us as close as possible to our stated target in the fewest steps, whether that target is 95% or 99.999%.
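To put some numbers on those targets, here's the back-of-the-envelope arithmetic for how much downtime each availability level actually allows over a year. The specific targets listed are just common examples; the math itself assumes nothing beyond a 365-day year.

```python
# Allowed downtime per year for a few availability targets.

HOURS_PER_YEAR = 365 * 24

for target in (0.80, 0.95, 0.99, 0.999, 0.99999):
    allowed_hours = (1 - target) * HOURS_PER_YEAR
    if allowed_hours >= 1:
        print(f"{target:.3%} availability -> {allowed_hours:7.1f} hours of downtime per year")
    else:
        print(f"{target:.3%} availability -> {allowed_hours * 60:7.1f} minutes of downtime per year")
```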
The second rule of thumb I tend to follow is - throw out any requirement that isn't going to be utilized in the next 6 months. Much of the design work in the software world is polluted with wishful thinking and what-ifs that never seem to be fulfilled. This leads you to build a lot of functionality that literally goes nowhere and makes maintenance an ever-increasing nightmare.
This can be a hard rule to follow because many software development timelines extend well past 6 months. I won't get into that particular debate in this post, but I would strongly advocate that you pull your timelines in to within 6 months. It keeps the scope of your project sharply focused and makes it less likely you'll continue following sunk-cost requirements that make no sense but must be completed because, damn it, they were part of planning.
Even if you have optimal 1-month development and release cycles for your features, this can still be a tough rule to follow, especially when a manager gets all hot and bothered because their pet requirement didn't make the temporal cut. But do your best anyway to shelve it until the more pressing issues are addressed. You're not always going to win this argument, but that's why these are rules of thumb and not iron-clad principles.
If you're able to follow these rules, you'll likely have a strong working solution with minimal complexity. Of course, you'll have to add complexity for some of those silly little edge cases that entropy always throws our way, but they'll be far fewer and more relevant than if you tried to identify these cases during a marathon brainstorming session at the beginning of the project.
And, if time allows in the future - which, in most corporate environments, it doesn't, at least explicitly - you should revisit your solution with fresh eyes a few months or years later and see if you can further simplify it with the experience gained by implementing the solution previously and watching it run loose in the wild.
And, that, kids, is how you perform the juggling act that balances necessary complexity with desired simplicity.
Until next time, my human and robot friends.
