Like many others, I have been using AI a lot for coding, learning math, brainstorming and research. As a professional software engineer, it is amazing (and a little strange) to notice that I have hardly written any code by hand since June.
When I started, I wanted to “build the muscle” of coding with an agent, so I decided to deliberately not code by hand. Soon after, the agents got good enough (and I got attuned to using them well) that this way of building software became the natural default for me.
I also recently re-learned math up to high school / college level (using MathAcademy). During this time, I (mostly for fun) threw a bunch of math problems at different LLMs and was surprised that most of them got most of the questions correct (just from images of the questions), around 95% of the time.
Additionally, asking LLMs broad questions like “intuition behind XYZ” seems to get them to map various mental models onto a situation as well.
This led me to think about what I could do to stay productive (in a GDP sense) when AI models can do most of the solving.
Intuition and automaticity
One common denominator across almost all (high-paying) fields (engineering, medicine, finance) is intuitively understanding the problem statement and then hypothesizing potential solutions. This is fractal, in the sense that it applies to progressively bigger problems. When I was doing high-school-level integration and differentiation, at some point the solutions became intuitive. What I mean by that is that I almost automatically knew how to solve a particular problem, but I didn’t know how I knew it.
Intuition in its core essence is just pattern matching.
This can be applied to larger and larger problem spaces. In high school this looks like “automatically coming up with a solution to an integration problem”, in competitive programming it becomes applying a particular algorithmic technique to a problem statement, and at the cutting edge of research it becomes looking at a wall of data or a very complex problem statement and coming up with ideas to solve it. At the higher / cutting edge levels, this is possible only with automaticity.
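To make that concrete, here is a small worked example of the kind of pattern matching I mean (my own illustrative pick, not from any particular curriculum): seeing the derivative of the inner function sitting right next to the exponential immediately suggests a u-substitution, before you consciously “decide” anything.

```latex
% Illustrative example: the pattern (inner derivative 2x next to e^{x^2})
% triggers the substitution u = x^2 almost automatically.
\[
  \int 2x\, e^{x^2}\, dx
  \;=\; \int e^{u}\, du \quad (u = x^2,\ du = 2x\, dx)
  \;=\; e^{x^2} + C
\]
```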
Hypothesis: You can’t do cutting edge work without automaticity
One of the questions that comes up a lot (that I certainly asked myself a lot when I was studying anything as a kid) is “why learn something by heart when you can just look it up?”
As I moved up the intuitive solution chain, I realized that automaticity is needed because the “context switch” of looking up information is too costly for the brain: it loses the context of the problem at hand. Distractions (even looking up information) are so destructive that South Korea restricts flights during parts of its college entrance exam.
Building automaticity
With the why out of the way, let’s talk about the how. The answer is repetition, practice and a tight feedback loop. You need to do smaller problems so often that they become second nature. The less you have to look things up during a hard problem, the better you will perform.
You have to
- give it enough time, it takes time to learn hard things
- do smaller things before doing bigger things (I am guilty of learning the other way, diving in head first)
Abstraction helps, but automaticity at your relevant level of abstraction is needed to do great cutting edge work. Depending on what you are doing, you might need automaticity across abstraction levels too.
Remember that everything is an abstraction:
- bits are an abstraction over hardware on / off states (which can be transistors)
- qubits are an abstraction over quantum computers (so you can build quantum algorithms on paper)
- math is an abstraction
- energy is an abstraction over the universe
- energy conservation is subject to frame of reference
So, for example, say you are building a model to understand pitch control in football. You don’t need automaticity over the hardware / bit layer, but you do need it over Python and the underlying math. However, if you are moving live Kubernetes workloads across networked machines, then you need automaticity over Linux networking, bits and bytes.
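To make the football example concrete, here is a minimal sketch of the kind of thing I mean, assuming NumPy and a very crude time-to-arrive heuristic (this is a toy simplification of my own, not a full pitch control model like Spearman’s). All the numbers and names are made up for illustration.

```python
# Toy "pitch control" sketch: estimate which team controls each point on the
# pitch from player positions, using straight-line time-to-arrive and a
# logistic over the difference between the two teams' fastest arrivals.
import numpy as np

PLAYER_SPEED = 5.0  # m/s, assumed constant for every player (simplification)

def time_to_arrive(players_xy: np.ndarray, target_xy: np.ndarray) -> np.ndarray:
    """Seconds for each player to reach the target, ignoring acceleration."""
    return np.linalg.norm(players_xy - target_xy, axis=1) / PLAYER_SPEED

def control_probability(home_xy: np.ndarray, away_xy: np.ndarray,
                        target_xy: np.ndarray, beta: float = 1.0) -> float:
    """Probability the home team controls target_xy."""
    t_home = time_to_arrive(home_xy, target_xy).min()
    t_away = time_to_arrive(away_xy, target_xy).min()
    return 1.0 / (1.0 + np.exp(beta * (t_home - t_away)))

# Example: three players per team on a 105m x 68m pitch, coarse grid of targets.
home = np.array([[30.0, 34.0], [50.0, 20.0], [50.0, 48.0]])
away = np.array([[70.0, 34.0], [55.0, 30.0], [60.0, 50.0]])
grid = np.array([[x, y] for x in range(0, 106, 15) for y in range(0, 69, 17)])
surface = np.array([control_probability(home, away, p) for p in grid])
print(surface.round(2))
```

Everything in the sketch is basic probability and array math; no part of it requires knowing how the bits underneath are stored, which is exactly the point about choosing your abstraction level.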
What to learn?
Now that we have an idea of why and how to learn, the next question is what to learn. The answer here is specific to your personal interests. At the same time, there are some foundational topics that layer on top of each other and show up everywhere.
Math, specifically probability and statistics (an uphill battle for me, I hate both), is literally everywhere, from finance to virtually any kind of system modeling. Applied math in the form of real analysis also shows up everywhere. Computer science and finance are built on top of math.
Finance and economics show up everywhere (even if you don’t want them to); the money abstraction pays the bills.
What’s the end goal?
Remember, we are answering this question in the context of AI agents being able to do most of the heavy lifting. While the content so far has been mostly generic, it still applies until we can prompt agents with something like “please solve cancer for humanity”.
Once you have built automaticity over your field, you can spend your time on the intuitive work of understanding problems and hypothesizing solutions, while the agents do most of the solving.