Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? - Brian Kernighan
There is a popular opinion among programmers that code should always be as simple as possible. The idea is that by emphasizing simplicity, you produce code that is easier to understand, easier to maintain, easier to modify, and so on. However, in my encounters with this philosophy I have found mostly frustration.
Simplicity is subjective. You might say that between two potential solutions, it will usually be obvious which one is simpler. That is true in my experience, but the edge cases create a lot of trouble. When two people have different ideas about which solution is simpler, how are you supposed to resolve the disagreement?
To start, it's useful to understand that complexity comes in many forms. In 2011, Mark Rosewater, the head designer for Magic: The Gathering, wrote an article about their New World Order. The New World Order is essentially a set of design principles for the game. In that article, distinctions are drawn between many types of complexity.
First, there's the complexity of an individual card. How difficult is it to understand what the card does? Even here, though, I think it's important to distinguish between understanding what the card does and understanding why you would want it to do that.
Second, there's the complexity of a board state. When cards start interacting with each other, situations can arise where it is difficult for the players to understand exactly what options and outcomes are available to them.
Finally, there's the complexity of developing a strategy. Even with a clear set of options, it may be difficult to determine which option is the best one. How do you maximize the impact of your cards on this particular game? The answer to that question may change from situation to situation even when the sets of cards are very similar.
These types of complexity have different effects on the way that a game is played, and as a result they should be considered separately. It may even be the case that some types of complexity are desired. Strategic complexity in particular is often the hook that keeps a player coming back in order to improve. If a game doesn't have decisions, is it really a game?
Chess vs Go
As an example, would you consider the game of chess or the game of go to be simpler? There are certainly arguments on both sides.
Purely judging by the rules required to define the game, I expect go to be simpler. In chess, each type of piece has a different way of moving, and pawns even have separate patterns for moves that capture and moves that don't. In go, a turn consists of placing a stone, followed by removing any stones that were captured as a result. So there's no need to memorize a movement rule for each piece, only to understand what it means for stones to be captured.
Both games have several special cases. In chess, there's en passant, castling, promotion, the fifty-move rule, perpetual check, threefold repetition, and so on. In go, there's the ko rule. Then there are a number of edge cases for counting territory at the end of the game that can be difficult for a player learning the game to understand.
Then there's the victory condition. In chess, the goal is to put the enemy king in a position where it cannot escape being captured. In go, the goal is to surround more territory. Both of these sound reasonably simple, but in practice beginners have a very difficult time understanding the concept of territory. When two completely new players play against each other, they invariably think the game is over before it actually is, and they have no idea who won, or how to find out short of asking someone else.
Both games have a huge amount of strategic complexity. In a very literal sense, a turn in go generally offers more possible moves than a turn in chess. However, the vast majority of them aren't reasonable plays from a strategic point of view, so it's not easy to say that one game has more strategic complexity than the other.
Software Complexity
When writing software, I have noticed some similar flavors of complexity. It would be nice if I could describe complexity as how difficult it is to understand what the code is doing. However, "what the code is doing" is another subjective idea. For example, if your goal is to understand how code maps to CPU execution, then assembly makes that simpler than most other languages.
Many language features simplify one aspect of programming while complicating another. For example, garbage collection moves most of the responsibility of allocating and freeing memory out of the programmer's hands. This means that memory leaks are less likely to occur, and the resulting code doesn't have memory management distracting from the main computations. But at the same time, garbage collection leads to other complications like garbage collector pauses.
The difficulty of understanding how code is executed is one type of complexity that can be present in software, but there are several others. Another is the difficulty of understanding what the code is actually meant to compute. These two are generally at odds, and reducing one often means increasing the other.
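To make that tension concrete, here is a small sketch of my own in Python (the example and the function names are purely illustrative): two functions that compute the same thing, where one reads as a direct statement of intent and the other maps more closely onto what the machine executes.

```python
def is_power_of_two_clear(n: int) -> bool:
    # States the intent directly: keep halving while n is even,
    # and check whether we end up at exactly 1.
    if n < 1:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1


def is_power_of_two_fast(n: int) -> bool:
    # Closer to what the hardware does: a power of two has a single
    # set bit, so clearing the lowest set bit must leave zero.
    return n > 0 and (n & (n - 1)) == 0
```

Both are correct, but the first is easier to read as a description of what is being computed, while the second is easier to reason about in terms of the instructions that actually run.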
Then there's complexity in analyzing the code. Quicksort is only a few lines of code, and its result, a sorted list, is simple to understand. But many complications arise in understanding its runtime. For example, if the pivot is picked randomly, then quicksort not only runs in \( O(n \log n) \) time in the average case, but does so with high probability.
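For reference, here is roughly what that short randomized version looks like; this is a sketch of the textbook algorithm in Python, not code from any particular library.

```python
import random

def quicksort(xs):
    # Lists of length 0 or 1 are already sorted.
    if len(xs) <= 1:
        return xs
    # A randomly chosen pivot gives O(n log n) expected time,
    # and O(n log n) with high probability, on any input.
    pivot = random.choice(xs)
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```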
What about deterministic pivot selection? With most simple schemes you get a quadratic worst case, but with median of medians you can get an \( O(n \log n) \) worst case. Do these results make quicksort a complex algorithm, or is it still simple?
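For a sense of what the deterministic version involves, here is a sketch following the standard textbook construction (the names `select` and `quicksort_deterministic` are mine, chosen for illustration). It uses the median-of-medians machinery to find the exact median in linear time and then pivots on that; using the approximate median-of-medians value directly as the pivot also gives the same bound. Whether this still counts as "a few lines of code" is part of the point.

```python
def select(xs, k):
    # Return the k-th smallest element of xs (0-indexed) in
    # worst-case linear time, via the median-of-medians pivot rule.
    if len(xs) <= 5:
        return sorted(xs)[k]
    # Median of each group of five elements.
    groups = [xs[i:i + 5] for i in range(0, len(xs), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    # The true median of those medians, found recursively, is
    # guaranteed to land near the middle of xs.
    pivot = select(medians, len(medians) // 2)
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    if k < len(smaller):
        return select(smaller, k)
    if k < len(smaller) + len(equal):
        return pivot
    return select(larger, k - len(smaller) - len(equal))


def quicksort_deterministic(xs):
    # Pivoting on the exact median splits the list evenly, so the
    # recursion depth is O(log n) and the total work is O(n log n)
    # even in the worst case.
    if len(xs) <= 1:
        return xs
    pivot = select(xs, len(xs) // 2)
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    return (quicksort_deterministic(smaller) + equal
            + quicksort_deterministic(larger))
```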