(This is the first in a series of shorter posts where I will be elaborating on mental models I use in life and in writing these essays. The point is to save time in future essays. One of the biggest deterrents to publishing things, for me, is worrying about how much time I will have to spend explaining things prior to, and in support of, the point I want to make. The largest of these are the mental models I use. Some are mental models popularized by others, like Shane Parrish. Some are, to my knowledge, my own inventions. But all have been tweaked to my, perhaps idiosyncratic, uses.
By explaining them here, I can link to them in future essays without having to waste space giving anything but a brief rundown of an idea or analytical tool. Interested readers will be able to click back to these First Principles write-ups if they want more info.)
Searching for Solutions
(Anyone with a passing familiarity with gradient descent, optimization problems, and machine learning can safely skip this one, except for the bullet points about how to turn those ideas to non-AI uses.)
When I think about any problem, I think about it, by analogy, as a search space. This helps in thinking about how best to go about searching for a solution.
Think of a search space as the set of all possible solutions to a problem. If you’ve ever graphed an equation in math class, like f(x) = x^2, the curve sprouting from the origin of the Cartesian plane and soaring upward as x runs off toward -∞ and +∞ is the set of points that ‘satisfy’ the equation. Though very often we want more than just a solution; we want the best solution. Or, failing that, one of the best solutions.
If we’re thinking of our search space of all possible solutions as a two-dimensional plane, then we can think about the goodness and badness of solutions as the third dimension, similar to a landscape. Optimal problem solving is finding the highest (or in some cases, where we want to minimize something, the lowest) point in the space.
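The landscape picture can be made concrete in a few lines of code. Below is a minimal sketch of hill climbing: start anywhere, and keep moving to any nearby point that sits higher. The fitness function here is an invented toy with a single peak at (3, -2); nothing about it comes from the essay beyond the landscape metaphor itself.

```python
import random

def fitness(x, y):
    """Toy 'landscape': the height of the solution at point (x, y).
    This one has a single peak at (3, -2)."""
    return -((x - 3) ** 2 + (y + 2) ** 2)

def hill_climb(steps=5000, step_size=0.1):
    """Start at a random point and accept any nearby point that is higher."""
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    for _ in range(steps):
        nx = x + random.uniform(-step_size, step_size)
        ny = y + random.uniform(-step_size, step_size)
        if fitness(nx, ny) > fitness(x, y):
            x, y = nx, ny
    return x, y

best = hill_climb()
print(best)  # drifts toward the peak near (3, -2)
```

On a smooth, single-peaked landscape like this one, blind uphill steps are enough. The trouble, and the reason for the cleverer methods below, starts when the landscape has many false peaks.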
Different search spaces call for different strategies. If the search space is small, then setting up a clever way to look is a waste of time, and you can brute force it. If you’re looking for your keys, and you know they fell into the compost, you should probably just dump out the entire contents and sift through them. If the search space is very large, or searching it has a non-zero cost (like you can’t wait a million years for the brute force solution), then you need something clever, like simulated annealing, evolutionary algorithms, or neural networks. For a wonder-filled and highly entertaining, as well as immensely informative, guide to these topics, I recommend Michalewicz & Fogel’s How to Solve It: Modern Heuristics.
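To give a feel for one of those clever methods: simulated annealing fits in about a dozen lines. The sketch below (the bumpy cost function is a made-up toy, not anything from the book) sometimes accepts *worse* moves, with a tolerance that “cools” over time, which lets the search climb out of false valleys that would trap pure downhill movement.

```python
import math
import random

def cost(x):
    """A bumpy toy landscape: several local minima,
    with the global minimum near x ≈ -0.52."""
    return x ** 2 + 10 * math.sin(3 * x)

def anneal(steps=20000):
    x = random.uniform(-10, 10)
    temperature = 10.0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        temperature *= 0.9995
    return x

solution = anneal()
print(solution, cost(solution))  # typically settles into one of the deepest valleys
```

The design trade-off is the cooling schedule: cool too fast and you freeze in a mediocre valley; too slow and you waste your search budget wandering.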
My personal instinct, honed by training in philosophy, when faced with a dense or hard-to-navigate search space, is to wonder whether we really need to search the space in a more efficient or clever way. Maybe the problem is with the search space we have chosen, or perhaps with the way we framed the problem in the first place.
Do we know what the answer would even look like?
A new antibiotic is going to be a molecule with a definite chemical structure, not an iambic pentameter epic. How to lay out your room is going to be an arrangement of your stuff in three-dimensional space, not a protein structure. “Do human beings have free will?” is a question too vague to have anything count as an answer to it - and in fairness philosophers who think about this ask much more sophisticated, pointed questions than that blunt, stoned undergraduate way of framing it.
How much effort does it take to validate a solution? Can we check a solution quickly and easily, or do we get only one chance to choose and act?
Whether a candidate word rhymes with a paired word is easy to validate, but whether a word fits the meaning and tone of a developing poem is harder to validate. Also, is the test anything better than “other people agree with me,” which can hold for reasons other than “because it’s true”?
Is the search space static or dynamic?
Are we searching a space that does not change - like an attic no one has entered for months - or a space that changes while we are searching? A manhunt for a murder suspect alerts the suspect, and potential accomplices, and can lead them to flee, making our search harder.
Are we the only ones searching the space, and are we searching cooperatively or competitively?
A novelist is exploring their own search space, though within a definite genre, style, and publishing environment. A publisher is putting a novel out into a competitive environment with potential customers who have only limited money, shelf space, and attention to give to new publications.
If competitively, is there one solution to find, or is there a range of runner up solutions?
Only one research team is going to find the most effective molecule to treat a condition (and they are certainly the only ones who can profit from it), but many investment managers can build strong portfolios for their clients.
Is our search space too narrow, artificially cutting off potentially optimal or near optimal solutions, or is it too large, and we are unlikely to find even an acceptable, good enough solution in the time we have to search?
Picking a movie to watch is too large if we consider every movie ever made and available for purchase, rental, or streaming, but too narrow if it consists of “The Great Train Robbery” and “Schindler’s List.” If the latter is the case, one might just choose not to watch a movie at all and do something else.
What is the opportunity cost of our search? With the resources we are using to search, could we have just paid for, in some sense literal or figurative, an existing solution?
Searching the veterinary literature and training to become a veterinarian to treat your cat’s leukemia is a waste of resources when veterinarians are available for hire.
When do we stop looking (also known as the Optimal Stopping Problem)? At what point is the continued search actually losing rather than gaining value?
Finding a parking spot is optimal when we find one close to our destination, but suboptimal when we keep driving around hoping for a better one.
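The parking example is a cousin of the classic “secretary problem,” whose well-known answer is to look at roughly the first 1/e (about 37%) of options without committing, then take the first one better than everything seen so far. A quick simulation (a sketch with invented names, not anything from the essay) bears the number out:

```python
import math
import random

def pick_best(n=100):
    """One round of the 37% strategy: shuffle n distinct quality scores,
    observe the first n/e without committing, then take the first
    candidate that beats everything seen so far."""
    candidates = list(range(n))
    random.shuffle(candidates)
    cutoff = int(n / math.e)          # look-only phase, ≈ 37% of candidates
    best_seen = max(candidates[:cutoff])
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c == n - 1         # did we land the very best candidate?
    return False                      # the best was in the look-only phase

trials = 10_000
rate = sum(pick_best() for _ in range(trials)) / trials
print(rate)  # hovers near 1/e ≈ 0.37
```

Stopping earlier means too little information; stopping later means the best option has probably already driven past.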
Are we even looking in the right search space?
Trying to optimize the suspension on the horse-drawn buggy when we should be investigating how to make parts for automobiles, or get into a whole other line of business entirely.
Is this even a problem?
First Philosopher: How are synthetic a priori ideas possible? They are possible because of X.
Second Philosopher: Synthetic a priori ideas are not possible. This is so because of Y.
Antiphilosopher: This question does not make sense, and both answers to it are not even wrong.
Prior Guest Appearances of this Principle:
Let’s Stop Pretending We Are Original Thinkers - most of the time, most of us are not even searching for solutions to problems, but going with the accepted solutions of our time and class. There is nothing wrong with this, except pretending that we are doing something different.
Target for Tonight: A Drama in One Act - maybe randomly throwing bombs into the abyss is not the best way to fight a war, even if it is good for public morale to believe “something is being done.”
Twice Read Books: Peter Thiel's "Zero to One" Or, On Secrets & Mysteries - sometimes we cannot know what exactly we are looking for, and need to search with just some criteria, a space to search in, and a hunch that a solution exists
No, There Are Not Too Many Good Books To Read - Sometimes your search space can be too large and stifling to even begin looking in. It’s okay to accept guides to where to search.
Totally Unrelated Appendix:
Looking at the art I put on the tops of those essays, isn’t it f**king phenomenal how far AI-generated art has come in just two years? Just for fun, I redid some of the images:
Target for Tonight
Twice Read Books: Peter Thiel’s “Zero to One,” Or On Secrets and Mysteries
No, There Are Not Too Many Good Books To Read
Just phenomenal. And it makes me excited for where generative AI is going to go next.