Tune Your Cognitive Strategies: Stories
March 2018
A long-time reader reinterprets the skill in terms of managing subagents, adds several examples, and shares a lot of other interesting concepts:
There seems to be a strong connection between this skill and meditation. Both put me in a particular state of mind, and both involve actively perceiving some part of your phenomenology and being curious or experimental about it. For the sake of relatedness and the utility of memorable concept handles, I'm going to refer to this skill as "metatating" (at least until someone comes along with something better).
[...] I've already noticed some really weird stuff as a consequence of my time practicing. For instance: the cooking pot analogy has proved extremely valuable. [...] 90% of the time my thinking starts with a concept bubbling up in my mind. I can seize upon the concept, and my internal narrator will start going over the idea and talking about it. Or I can let the concept go and return to a blank mind, move in a different direction, or any number of other things.
The interesting bit lies in the consequences of choosing whether or not to hold onto the idea. If I let the idea go, I still understand the concept; I have the entire feeling in my mind, and it's usable. In this way I can make startlingly rapid thoughts, arguments, or assessments in my head. I use Bayesian expected value all the time in my day-to-day life, and after learning how to metatate, whenever I find myself making an estimation, I can grab the estimation and pick it apart. Before, it was a gut feeling of which I was fairly cognizant, but now I can identify the historical causes of a lot of my estimations. This also makes it easier to update, since the gut feeling has more gears to tweak.
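To unpack the term for anyone who hasn't met it: "Bayesian expected value" here just means weighting each possible outcome by its subjective probability. Here is a minimal sketch with an invented scenario and made-up numbers; the point is that each probability and payoff is a separate gear you can inspect and tweak:

```python
# Invented scenario: bike or bus? Each outcome is (probability, minutes lost).
outcomes_bike = [
    (0.8, -20),  # usual ride
    (0.2, -45),  # flat tire
]
outcomes_bus = [
    (0.6, -25),  # bus on time
    (0.4, -40),  # bus late
]

def expected_value(outcomes):
    """Probability-weighted sum of payoffs."""
    return sum(p * v for p, v in outcomes)

print(expected_value(outcomes_bike))  # -25.0
print(expected_value(outcomes_bus))   # -31.0, so biking wins on average
```

Updating then means nudging one of these numbers, rather than replacing the whole gut feeling at once.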
Additionally, if I let go of an idea, two major things happen. First, I understand the idea less and can't manipulate it as well. I still grasp it, but since these ideas don't take the form of words, just concepts with associated spatial properties, it's really hard to actively parse all their pieces in great detail. Second, my memory of having had the thought fades very quickly. This seems very analogous to "time under tension" (a term in exercise literature describing increased protein synthesis in response to sufficiently long and intense muscle activation). Again, for the sake of useful concept handles, I call this 'thought under tension'. [...]
Successful metatation feels like directly informing subagents about their performance. Performance at what? Well, it depends. If I'm doing ODEs (math) and I catch myself dropping a negative, or rushing through the problem, I will explicitly reward the subagent who identified the issue. In fact, I will reward the subagent who noticed that there was a subagent trying to tell me I had a problem. This reward structure feels like excitement, happiness, self-affirmation, and a squirt of dopamine. My heart rate speeds up a bit and there's some tension in my neck and shoulders. I get very happy for a short time. It feels like the mental equivalent of giving someone a high-five or a hug. There's even a subagent whose sole purpose is to reward other subagents who find helpful information. This subagent rewards itself for doing its job, so after an initial period of roughly three days of conscious attention, my rewarder subagent started automatically rewarding subagents for doing their jobs.
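Taken literally, this reward structure is a small reinforcement loop in which the rewarder also reinforces itself. A playful sketch under that reading; all the names and reward sizes are invented:

```python
class Subagent:
    """Toy stand-in for a habit or tendency that accumulates reward."""
    def __init__(self, name):
        self.name = name
        self.reward = 0.0

noticer = Subagent("caught the dropped negative")
meta_noticer = Subagent("noticed the noticer speaking up")
rewarder = Subagent("rewards subagents for helpful information")

def give_reward(agent, amount=1.0):
    agent.reward += amount
    if agent is not rewarder:
        # The rewarder reinforces itself for doing its job, which is
        # (on this reading) why the loop eventually runs on its own.
        rewarder.reward += 0.1

give_reward(noticer)       # reward the subagent that caught the error
give_reward(meta_noticer)  # and the one that noticed the noticing
```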
Subagents are a pretty common metaphor for the many internal conflicting desires in a person. I never really identified with this until I started metatating, at which point subagents became not conflicting desires but personifications of habits and tendencies of my mind. This is how I personally started thinking of 'deltas' (the patterns of thought). When something relevant comes up, a subagent will chime in with what it thinks might be important. I then mull over the thought (via some process I'm not sure of yet) and reward or punish the subagent for its behavior. For example, I have a subagent I identify as "Munchkin". Munchkin likes to optimize and give the middle finger to people who don't optimize for their goals (including myself). Usually this is helpful: Munchkin tries to optimize pretty much anything I'm doing, from writing a proof, to finding the shortest route to my destination, to making protein shakes as efficiently as possible. However, sometimes it's not as helpful to pay attention to Munchkin, especially in combination with another subagent, "Agent". If I notice that one of my teachers is running a class poorly, or is making us do busy-work, Munchkin kicks on and usually gets frustrated. When Munchkin does this, I make sure to reward (feel good about noticing, give myself a hug, etc.) Munchkin for coming up with ways to optimize, but also punish (internally chastise, feel bad, etc.) Munchkin for impacting my mental state in a negative (suboptimal) way. I'll go through a few examples to demonstrate what it actually feels like to do this.
Example 1:
I'm playing guitar scales
Notice a concept bubble up -> quickly evaluate its relevance to playing guitar -> decide it's not relevant to playing -> discard the concept -> reward myself for noticing and evaluating -> return full attention to the guitar
This is an example of the most common form of metatation. The general cognitive strategy is: notice thought -> evaluate the thought's relevance -> encourage or discard the thought. (A code sketch of this loop follows the notes below.)
This thought chain (probably) took somewhere between 0.5 and 1 second.
It happened consciously, but also wordlessly.
I kept playing my scales while doing this, but the scales became harder to execute while metatating.
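Read as pseudocode, the general strategy in example 1 is a small filter loop. A minimal sketch, with the relevance judgment stubbed out, since that judgment is described as fast, wordless, and not yet understood:

```python
def reward_self(note):
    # Stand-in for the felt reward: the mental high-five.
    print(f"rewarded: {note}")

def is_relevant(thought, task):
    # Stub: the real judgment is fast, wordless, and not yet understood.
    return task in thought

def metatate(thought, current_task):
    """Notice thought -> evaluate its relevance -> encourage or discard."""
    if is_relevant(thought, current_task):
        return thought                     # encourage: keep thinking it
    reward_self("noticed and evaluated")   # reinforce the noticing itself
    return None                            # discard: back to a blank mind

metatate("remember to email the landlord", current_task="guitar scales")
```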
Example 2:
I'm trying to prove something in math
Notice what I'm trying to prove -> try to wrap my head around what the thing I'm trying to prove is -> realize failure, and a sinking cold fatalistic sensation in my midsection -> make a plan to prove the thing -> reward myself for noticing and planning -> state all the relevant definitions -> state all the relationships between the definitions -> find a specific relationship which guides me to the proof.
This is a specific math example of what it feels like to fail to execute something. It can happen in any domain, but the general cognitive strategy is something like: notice potential/actual failure -> make a plan to avoid failure -> execute the plan. This works pretty well on small object-level issues, but falls apart when it comes to executing longer-term plans (I'm working on fixing this).
A few things to note.
This specific mathematical thought chain (probably) took around 10-20 minutes.
There are small deviations and random mental babble not included here, for clarity's sake.
It happened consciously and with words.
My performance didn't suffer at all because of this metatation; in fact, it was what enabled me to complete my object-level task.
The 'deltas' here are the generalized strategies which help me think or work. I tend to explicitly include a positive feedback loop in all my 'deltas'; this lets me reinforce their use by making sure I feel good about executing the strategy. Additionally, including at least one meta step in the whole chain ensures that I actively notice when I use them, instead of only seeing the object-level progress. This meta step also serves as a good leverage point for modifying cognitive strategies I've already made.
Consider that if I want to use the cognitive strategy from example 2 for longer plans, like writing papers or planning my day, it often fails. If I kept using the same cognitive strategy without a meta step, I wouldn't have an easy way to modify the strategy while inside it. That sounds abstract, so I'll go over another example.
Example 3:
I'm thinking of the things I need to do
Notice I will probably not do everything I should -> feeling of potential failure -> make a plan to do all the things I should do -> reward myself for noticing and planning -> notice that I'm executing this cognitive strategy -> reward myself for noticing -> notice that I'll probably fail based on past experience -> engage murphyjitsu -> ...
The act of noticing when I'm executing a cognitive strategy makes it much easier to change that strategy. Since many of the common strategies take less than a few seconds to run, it's incredibly difficult to remember to change one without some outside help. This is why I always put meta steps in my cognitive strategies (sketched in code after the notes below).
A few things to note.
This thought chain (probably) took around 3 seconds to execute.
Modifying a cognitive strategy from the inside can feel like breaking stride, or like trying to write with a non-dominant hand.
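To make the meta step concrete: if a cognitive strategy is a fixed pipeline of steps, the meta step is the one step that can inspect and rewrite the pipeline while it runs. A minimal sketch under that reading, with all step names invented:

```python
def run_strategy(steps, state):
    """Execute a cognitive strategy one step at a time."""
    i = 0
    while i < len(steps):
        steps, state = steps[i](steps, state)  # a step may rewrite the plan
        i += 1
    return state

def notice_failure(steps, state):
    state["failure_noticed"] = True
    return steps, state

def make_plan(steps, state):
    state["plan"] = "do all the things"
    return steps, state

def meta_step(steps, state):
    # The leverage point: notice the strategy is running, and if failure
    # looks likely, splice extra steps into it from the inside.
    if state.get("failure_noticed"):
        steps = steps + [engage_murphyjitsu]
    return steps, state

def engage_murphyjitsu(steps, state):
    state["premortem"] = "imagine the plan failed; figure out why"
    return steps, state

run_strategy([notice_failure, make_plan, meta_step], {})
```

Without the meta step, the pipeline is fixed and can only be changed between runs; with it, the strategy has a built-in place to be caught and modified mid-execution.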
Some final notes:
Some cognitive strategies (like in example 1) actively reduce the amount of mental RAM I have to work with. This feels like having less space in my head: I can't fit as many ideas in it, or do things as quickly. Cognitive strategies like this are noticeable in the back of my conscious mind as a quiet little voice or thrum.
If I don't stop them, it can be easy to go into positive feedback spirals of the form: reward myself for noticing -> notice I am noticing and reward myself for noticing -> notice I am noticing I am noticing and reward myself for noticing -> ...
These aren't actually very difficult to stop; the mental move feels like putting a stick into the moving spokes of a bicycle wheel. However, it's really interesting that these loops are possible in the first place.
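Structurally, the spiral is a recursion with no base case, and stopping it amounts to adding one. A minimal sketch, assuming a simple depth cap plays the role of the stick in the spokes:

```python
def notice_and_reward(depth=0, max_depth=1):
    """Reward for noticing -> notice the noticing -> reward -> ..."""
    print("reward for noticing" + " the noticing" * depth)
    if depth >= max_depth:
        return  # the stick in the spokes: decline to go one level up
    notice_and_reward(depth + 1, max_depth)

notice_and_reward()
```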
It's worth noting that, in my experience, subagents have explicit predictive power. I have an unnamed subagent that makes mundane predictions about events in the world. For example, if I go into a public bathroom, this subagent will factor in the current time, my spatial position, and everything I can see or hear to give a probability that someone else is in the bathroom. This manifests (unless I look directly at the subagent's process) as a gut feeling about whether or not someone's in the bathroom with me. I have another unnamed subagent who predicts other people's actions: if I'm making plans with someone, this subagent will make active predictions about whether or not the person will show up, how late or early they will be, etc. Additionally, if these subagents make an inaccurate prediction about the world, I can update their models via the same mechanism I use to reward or punish them for cognitive strategies. Many of the gut feelings or intuitions I have about the world can be actively broken down and examined.
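The update mechanism described here matches straightforward Bayesian updating on observed outcomes. A minimal sketch of the bathroom predictor as a Beta-Bernoulli model; the prior counts are invented, and the combining of cues (time, position, sounds) is left out, since nothing is said about how that works:

```python
class PredictiveSubagent:
    """Tracks P(bathroom occupied) as Beta-Bernoulli pseudo-counts."""
    def __init__(self, prior_occupied=1.0, prior_empty=1.0):
        self.occupied = prior_occupied
        self.empty = prior_empty

    def gut_feeling(self):
        # Surfaces only as a probability unless you inspect the counts.
        return self.occupied / (self.occupied + self.empty)

    def update(self, was_occupied):
        # Reward/punishment and model-updating are the same move here:
        # the prediction meets reality and the counts shift.
        if was_occupied:
            self.occupied += 1
        else:
            self.empty += 1

agent = PredictiveSubagent()
print(agent.gut_feeling())        # 0.5 before any evidence
agent.update(was_occupied=False)
agent.update(was_occupied=False)
print(agent.gut_feeling())        # 0.25 after two empty bathrooms
```

On this reading, "looking directly at the subagent's process" is inspecting the counts, while the gut feeling is just the resulting probability.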