Theories about learning

This post contains analogies and theories that I find helpful for understanding learning. Treat it like a tweetstorm.

Learning is a cornerstone of personal development. It can be grouped into the following aspects:

  • Facts: data that can be acquired
  • Mental models: transferable frameworks across subjects. A synonym is “idea”
  • Wisdom: a trained model for intuitive thinking

Other concepts:

  • Insights are (relevant) facts organised around mental models
  • Problems are processes to develop models

Facts

Facts are (mostly) useless outside mental models.

Facts used to be valuable when information was scarce. However, most facts are now only a few clicks away. Of course, intelligence agents are still scouting for confidential data, but that is irrelevant to this post unless such data can cause a paradigm shift and refute existing models.

People are still eager for facts. This may explain the popularity of pub quizzes and tabloid news. However, facts without models cannot reproduce: they cannot be transferred or inspire new ideas. They are like DNA in a dysfunctional egg or sperm.

We are probably getting smarter not because we remember more facts, but because we understand more mental models.

Facts are still necessary for learning, but I find the best way to learn facts is through mental models.

Mental models [1]

Models are representations that make sense of the complex world. They tell us how data are retrieved, organised, validated, processed, and communicated, and why those ways are chosen.

Mental models can be “heavy” or “light”, “broad” or “specific”. They can also build upon one another. The more quantitative a subject is, the more models it contains: mathematics, physics, statistics, and engineering are full of useful mental models.

Models are difficult to define because they sit in the middle of the spectrum between facts and wisdom, but in general, they should be transferable and applicable.

In biology, evolution is a light and broad model that builds on randomization and survival of the fittest. In contrast, the parasympathetic supply of the cranial nerves is a heavy and specific model. Still, both models can be transferable. Evolution can be applied to ideas and companies; the cranial nerve model can be applied to design thinking that promotes redundancy.

Models need to be applicable to be useful. This is why we solve problems when learning: problems test the rigour of our models. Feynman’s technique is a problem-solving approach to building models. It is about moving from knowing the name of something to knowing the thing itself. In medicine, it is about looking at a structure from a functional view before memorizing its names.

Learning models requires facts. The process is like building a house with concrete. First, you often need to understand more facts than necessary so that even when some degrade, there is still enough left to support the latticework. Second, you can’t build it too fast by cramming, because some parts must solidify before other parts can be built on top of them.

When reading, I like to focus on the mental models that produce the results rather than the results themselves (unless those results are helpful to my mental models). That is to say, I tend to focus on the transferable units that the authors employ to build the system, rather than the products of the system. For example, when reading genetics papers outside my project scope, I no longer look for “what does gene X do”, but for how and why the authors adopt a certain methodology.

There is not much difference between the models we use to solve problems and the models we use to build models. It is like functional programming and metaprogramming: functions of functions are still functions.
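
A minimal sketch in Python of why the analogy holds (the names here are mine, purely illustrative): a function that takes a model and returns a model is itself just another function, so the same reasoning works at both levels.

    from typing import Callable

    Model = Callable[[int], int]  # a "model": maps a problem to an answer

    def double(problem: int) -> int:
        """An ordinary model that solves problems directly."""
        return problem * 2

    def refine(model: Model) -> Model:
        """A meta-model: takes a model and returns an improved model."""
        def improved(problem: int) -> int:
            return model(problem) + 1
        return improved

    better = refine(double)        # still just a function
    print(double(10), better(10))  # 20 21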

When models are sufficiently trained, they become part of the higher-order System 1 thinking that I call wisdom.

Wisdom

Wisdom is the intuition to make decisions, not a detailed calculation via a spreadsheet. It is more like a neural network that gives you the answer directly (after sufficient data is collected).

The calculating, articulate part of our brain may be like a five-dollar calculator. A calculative decision-making process occurs when we think linearly, summing the pros and cons. I find myself anxious when making such decisions: the arbitrary values assigned to each factor can be wrong, and I cannot possibly consider all the factors in the complex world. Now I realise the anxiety comes from underdeveloped wisdom.

Wisdom is an interpretable neural network. People can break down the factors they considered when coming to a decision, but, just like most neural networks, the interpretable version loses some fidelity as to how exactly the decision was made. This is one of the reasons why we all need to train our own wisdom instead of directly using the models of others.

Another reason why we need to DIY our wisdom is that our society changes (more and more rapidly). Previous models lose their specificity and sensitivity as the nature of data changes. Developing our own wisdom improves our ability to develop models fitted to our context.

Just like all neural network models, wisdom needs good data to produce good output. The process of collecting such data is what I call forming insights.

Insights

Insights are relevant facts organized around mental models. Being relevant means the facts need to be up-to-date, but acquiring facts can be time-consuming. Thus, few people are insightful across many fields. Even Charlie Munger and Warren Buffett chose not to invest in most technology companies for this reason.

My approach is to build insights only in fields that have problems I want to solve, and to ignore most of the facts in other fields. This is especially helpful when reading non-fiction full of facts (and examples), because most of the content is just data that help validate the authors’ models. It also helps when deciding which newspapers/websites/podcasts to subscribe to.

Practice

What do all these mean in practice?

  • Build a strong lattice of mental models through
    • structured learning and deep work
      • (rather than getting distracted and reading scattered blog posts, which should come only after the latticework is built)
    • problem-solving (including explaining)
    • maintaining relevant facts through techniques like spaced repetition (see the sketch after this list)
  • Read not for facts, but with questions in mind
  • Transfer mental models across subjects
    • the more contexts we use them, the more likely they become intuitive (and the wiser we become)
    • the more mobile a model is, the more likely it will reproduce with others to form new models
  • How to handle failures
    • Failure is information overload: it indicates that our brain does not have sufficient models to explain the data presented to us.
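
As a concrete (and deliberately simplified) illustration of the spaced-repetition point above, here is a sketch in Python; the starting interval and multiplier are made-up defaults, not any particular app’s algorithm.

    def next_interval(current_days: float, recalled: bool,
                      multiplier: float = 2.5) -> float:
        """Days until the next review of a fact.

        Successful recall stretches the interval; a lapse resets it to one day.
        (Illustrative only; real schedulers such as SM-2 track more state.)
        """
        return current_days * multiplier if recalled else 1.0

    # A fact recalled successfully on four consecutive reviews:
    interval = 1.0
    for review in range(1, 5):
        interval = next_interval(interval, recalled=True)
        print(f"review {review}: next review in {interval:.1f} days")
    # review 1: 2.5 days ... review 4: 39.1 days

The point is only that the maintenance cost per fact falls over time, which is what makes keeping facts attached to the latticework affordable.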

[1] I began to understand its importance thanks to Charlie Munger, Naval Ravikant, Tim Urban, Xiaolai Li, and many others.
