The Learning Method Charlie Munger Never Wrote Down (But Lived By)
Two techniques do most of the heavy lifting.
I was recently rereading Poor Charlie’s Almanack, and one line has refused to leave me alone:
“I came to law school having learned the method of learning, and nothing has served me better in my long life than continuous learning.” — Charlie Munger
That is a massive claim. Not because Charlie loved learning. Plenty of people do.
But because he’s saying something far more specific: in his view, the highest-ROI skill of his entire life was learning how to learn.
That should get our attention.
Learning is one of the few processes we can run almost every day, for the rest of our lives, at very low cost, with absurd upside if done well.
It would be stupid not to optimize it.
The problem is that Charlie never really left a clean, step-by-step instruction manual. The good news is that modern learning psychology did most of that work for us.
This post has a simple goal: to lay out the best method of learning for most people in most cases: what works, why it works, and how to apply it.
Here’s how to learn according to science.
We Need Filters
If I ask you how to run in the best way, you’ll probably tell me it depends on a lot of things (terrain, race type, body shape, goals, etc.).
Learning works the same way. There’s no such thing as a perfect learning method.
So we’re going to filter learning techniques and keep the ones whose effectiveness doesn’t swing wildly across these four dimensions:
Conditions: solo vs. group, dosage, timing, format (reading vs. listening, open vs. closed book, etc.)
Learner profile: age, level, prior knowledge, abilities.
Material: vocabulary, concepts, math problems, complex texts.
Criterion task: what you’re tested on → memory, comprehension, problem solving, transfer.
That’s exactly what Dunlosky et al. (2013)1 did. After reviewing a large body of evidence, they rate each technique’s relative utility:
High utility
Practice testing (testing yourself)
Distributed practice (spacing)
Moderate utility
Elaborative interrogation
Self-explanation
Interleaved practice
Low utility
Summarizing
Highlighting/underlining
Rereading
Imagery for text
Keyword mnemonic
From here, the system designs itself: build the whole machine around the two high-utility pillars, then plug in the moderate-utility modules where they actually pay off (everything else stays secondary).
The Most Effective Method in Most Cases
Let’s take a few seconds to thank the authors of that paper, who probably spent thousands of combined hours grinding through scientific studies to arrive at this result.
In my opinion, their contribution to humanity through this work is massive, and I’m happy to be a small link in the chain that passes these ideas along.
Let’s begin.
1. Define how you’ll know you know
As with any task, you have to start by making explicit what “done” looks like.
So the first step is to define a proof: something that demonstrates you learned what you wanted, the way you wanted. For example, if you’re learning a:
Concept → you can explain it with no notes, then give two counterexamples.
Procedure → you can redo the exercise cold, with no template.
Domain → you can solve novel cases (not the ones you’ve already seen).
If you don’t define that proof, you’re very likely to optimize a proxy instead of the learning itself. It’s the finance equivalent of improving a ratio without improving the business.
2. Break the material into testable units
Your brain isn’t wired to evaluate “the whole” in isolation. It judges the whole by summing up the parts.
So except for the simplest cases, you have to convert the learning material (a course, a book, a video, etc.) into items: discrete things you want to be able to reproduce on demand.
Almost all items fall into four buckets:
Facts (definitions, formulas, dates, vocabulary)
Concepts (relationships, causes, limits, implications)
Procedures (algorithms, methods, steps)
Discrimination (choosing the right method among several similar ones)
This classification is what lets us move to the next step.
3. Turn every item into a question
Your brain remembers what you force it to retrieve. Turning each item into a question is how you optimize the material for the actual learning process.
In practice: write questions, not notes. Using the same four buckets:
Fact → “What is X?”
Concept → “Why does X imply Y?”
Procedure → “How do you solve / derive / apply it?”
Discrimination → “In this case, which method and why?”
Over a long life of learning, your “question bank” is your main intangible asset.
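The item buckets and question templates above can be sketched as a tiny data structure. This is a minimal illustration, not something from the post itself: the class and field names are my own assumptions about how you might organize a question bank.

```python
from dataclasses import dataclass
from enum import Enum

class Bucket(Enum):
    # The four buckets from the post
    FACT = "fact"
    CONCEPT = "concept"
    PROCEDURE = "procedure"
    DISCRIMINATION = "discrimination"

@dataclass
class Item:
    bucket: Bucket
    question: str   # what you must produce on demand
    answer: str     # the explicit "best answer" used for feedback

# A tiny question bank built with the templates above
bank = [
    Item(Bucket.FACT, "What is spaced repetition?",
         "Spreading practice over increasing intervals instead of cramming."),
    Item(Bucket.CONCEPT, "Why does recall beat recognition?",
         "Recall requires a stronger memory trace and more retrieval work."),
]
```

The point of the structure is that every item carries its own best answer, so the feedback step in later sections is never optional.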
4. Test yourself for real, using active recall
In learning psychology, we distinguish two ways of “retrieving” knowledge:
recall (the answer comes to mind),
recognition (you spot the right answer among others).
Recognition is much easier than recall. Recall requires a stronger memory trace, and more cognitive work to pull it out.
So here, forget recognition. Focus on recall, through practice testing:
No information source available (notes and book closed).
One explicit question, with a clear answer you must produce, even if it’s imperfect.
You answer even if you have no idea, because the error will make the correction stick.
Then you correct by comparing your response to the right answer.
Change any of those conditions and practice testing becomes less effective (and that’s not what you want).
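Those conditions can be sketched as a minimal loop: one question, one produced answer (even a wrong one), then an explicit correction. The exact-match grading below is a simplifying assumption of mine; real self-grading is looser.

```python
def grade(produced: str, best_answer: str) -> bool:
    """Return whether the produced answer matches the best answer."""
    return produced.strip().lower() == best_answer.strip().lower()

question = "Recall or recognition: which needs the stronger memory trace?"
best = "recall"
produced = "recognition"              # you answer even with no idea
if not grade(produced, best):
    # The correction step: always compare to the explicit best answer
    print(f"Wrong. Best answer: {best}")  # → Wrong. Best answer: recall
```

Note that the error path still ends with the best answer being shown: that correction, not the guess, is what drives the learning.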
I’ll go further: if you keep only one method from this entire post, make it this one. Practice testing has huge advantages:
It works for most people, in most cases
It’s cheap in time and easy to set up (especially with GenAI)
Its effects show up across many retention intervals → from hours to months
It’s the Swiss Army knife of learning. Even so, it’s still more powerful when it’s not used alone, but embedded inside the full system.
5. Feedback. Always feedback.
The core of the process is practice testing: test + feedback.
But feedback is the real lever. In many cases, a weak test + strong feedback beats a strong test + weak feedback.
So you should overinvest in feedback quality. Every time your brain produces an answer and then receives feedback, it prunes some neural connections, strengthens others, and builds new ones.
Non-negotiable rules for high-quality feedback:
Every answer gets corrected with one clear, explicit “best answer.”
Every recurring mistake becomes its own item.
If you often get one part wrong, that part becomes a separate question with its own feedback.
Retest quickly after an error.
Anywhere from a few minutes to 2–3 days max, depending on how “serious” the mistake was.
One important detail: feedback doesn’t have to be immediate. Delayed feedback can still be effective.
6. Space out your sessions
Spacing your learning sessions is a simple way to compound learning: for the same total study time, spreading practice over time is associated with better long-term retention than massed study (cramming).
The reason is biological (as it almost always is). When you do practice testing with an error + feedback, you trigger cascades of biological processes that take days and weeks to play out, ultimately rewiring your neurons and connections (in other words, learning). You can’t outrun your biology.
Spacing your sessions creates a kind of Lollapalooza effect2: multiple powerful forces (deliberate practice, motivation, biological rewiring, etc.) aligned in the same direction, toward learning.
A simple template (adjust to your time horizon):
After creating an item: D+1, D+3, D+7, D+14, D+30
If the item is easy: stretch the intervals faster
If it’s fragile: shorten the intervals until it stabilizes
The goal is simple: get the same item right multiple times, across increasing intervals.
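The D+1, D+3, D+7, D+14, D+30 template above can be turned into a tiny scheduler. This is a sketch under my own assumptions: the fallback rule for fragile items (drop back one interval) is illustrative, not prescribed by the post, and real tools like Anki use more sophisticated algorithms.

```python
from datetime import date, timedelta

# The spacing template from the post, as days after creating an item
BASE_INTERVALS = [1, 3, 7, 14, 30]

def next_review(created: date, reviews_passed: int, fragile: bool = False) -> date:
    """Return the next review date for an item.

    fragile=True shortens the interval (fall back one step) until the
    item stabilizes; easy items simply advance through the template.
    """
    step = min(reviews_passed, len(BASE_INTERVALS) - 1)
    if fragile:
        step = max(step - 1, 0)
    return created + timedelta(days=BASE_INTERVALS[step])

# After two successful reviews, an item created Jan 1 comes back at D+7
print(next_review(date(2025, 1, 1), 2))                 # → 2025-01-08
print(next_review(date(2025, 1, 1), 2, fragile=True))   # → 2025-01-04
```

The goal the scheduler encodes is exactly the one stated above: get the same item right multiple times, across increasing intervals.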
For implementation, I recommend Anki, a spaced-repetition flashcard tool that’s literally built for this and highly customizable to your goals (not a paid partnership). Some GenAI tools now offer similar systems as well.
7. Graft the “moderate-utility” modules in the right place
Practice testing + spacing is already an excellent method on its own. But under the right conditions, you can make it even better with a few add-ons.
Module A - Elaborative interrogation (“Why?”)
You take a statement and force your brain to generate a cause by answering: “Why?” (some would call it the most important question).
This was the most surprising part of the paper for me: elaborative interrogation isn’t that useful, in general.
The problem is cost (and it’s the main problem with most other techniques, too).
It can demand a lot of cognitive effort for limited return: you often need to go one “why” deeper, and you can end up spending your time inventing explanations instead of learning the material.
So the opportunity cost versus practice testing + spacing can be high.
It’s mostly useful for discrete facts/concepts that aren’t too complex. The rules are simple:
Identify a testable claim.
Ask: “Why?”
Answer in 3–4 sentences max.
Get feedback: do a quick check to confirm your “why” actually holds.
Module B - Self-Explanation
Here you explain your own reasoning while you learn/solve, step by step.
More formally: you make explicit the intermediate steps of your information-processing, which helps link new knowledge to what you already know (and exposes gaps).
Its “moderate” rating comes from three issues:
Limited scope: it mostly applies to procedures and some concepts.
Time cost can explode (so does the opportunity cost).
It’s easy to slip into the illusion of self-explanation by paraphrasing (chauffeur knowledge vs. Planck knowledge).
One key point: self-explanation works better during learning (concurrent self-explanation) than after the fact (retrospective self-explanation).
A simple method:
At each step of a procedure or chain of reasoning, answer (briefly, in your own words) questions like:
“Why is this step valid?”
“What rule / principle did I just use?”
“What would change if assumption X were false?”
Module C - Interleaving (Mixed Practice)
This isn’t about alternating learning techniques. It’s about alternating the problems themselves.
I mention it because it’s directionally right, but it has three important limitations:
It only works well in domains with a large variety of problem types. The strongest results show up in math learning.
The literature is smaller and less decisive than for the other techniques.
Interleaving only helps once you have baseline competence in each individual task, which is not always obvious in solo learning.
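To make the idea concrete, here is a minimal sketch of interleaving: rather than working through problems in blocks (AAABBBCCC), you mix the types so every question also trains “which method applies here?”. The problem sets and names are illustrative, not from the post.

```python
import random

# Blocked practice: all problems of one type, then the next type
blocked = {
    "derivatives": ["d1", "d2", "d3"],
    "integrals":   ["i1", "i2", "i3"],
    "limits":      ["l1", "l2", "l3"],
}

def interleave(problem_sets: dict, seed: int = 0) -> list:
    """Flatten all problem types into one randomly mixed session."""
    pool = [p for problems in problem_sets.values() for p in problems]
    random.Random(seed).shuffle(pool)
    return pool

session = interleave(blocked)
print(session)  # a mixed order, e.g. an integral next to a limit next to a derivative
```

Notice that the total work is unchanged; only the ordering differs, which is why interleaving presupposes baseline competence in each type.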
Now that we’ve laid the foundations, we can finally build the full system.
The Full System — Summary
0) Pre-setup
Define your goal + time horizon: what do you want to know, and by when?
1) Define the proof (how you’ll know you know)
Concept → explain it with no notes + give 2 counterexamples
Procedure → redo the exercise “cold,” with no template
Domain → solve novel cases (not the ones you’ve already seen)
2) Break the material into testable units
Facts (definitions, formulas, vocabulary)
Concepts (links, causes, limits, implications)
Procedures (methods, steps, algorithms)
Discrimination (choosing the right tool among similar options)
3) Turn every item into a question
Fact → “What is X?”
Concept → “Why does X imply Y?”
Procedure → “Solve / apply / derive.”
Discrimination → “In this case, which tool, and why?”
4) Active recall (practice testing)
Zero support (notes/book closed)
One question → one produced answer, even if imperfect
ALWAYS correct after (compare to the best answer)
5) Feedback (the most important part)
Every answer is ALWAYS corrected with a clear best answer
Every recurring mistake becomes its own item (or sub-item)
Retest quickly after an error (minutes to 2–3 days, depending on severity)
Useful note: feedback can be delayed, it doesn’t need to be instant
6) Space it (distributed practice) — calendar-based
Rule: for the same total time → spacing > cramming for long-term retention
Template: D+1, D+3, D+7, D+14, D+30 (adjust to your horizon)
If easy: stretch intervals faster / if fragile: shorten until stable
7) Add the “moderate-utility” modules in the right place (optional but recommended)
Module A → Elaborative interrogation (“Why?”): for simple facts/concepts; short answer; quick verification
Module B → Self-explanation: for procedures/reasoning; justify each step (“why valid?” “what rule?”)
Module C → Interleaving: mix problem types to train discrimination (after baseline competence)
8) What you reduce by default (low utility)
Rereading / highlighting / summaries / mnemonics / imagery.
Use them only as prep for writing questions, never as the core of your learning.
9) Repeat for life.
To close, let me remind you that cognitive activity at any age is associated with a longer healthy life expectancy, and the association tends to be stronger when that cognitive activity is more intense.3
I wish you a long life filled with learning and reading.
Take care,
Flo
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58.
Charlie Munger’s “Lollapalooza effect” is what happens when several reinforcing forces line up: the outcome turns nonlinear (often explosive) because the impact compounds (it’s multiplicative, not additive).
Prospective cohorts and meta-analyses broadly point in the same direction: staying cognitively active (reading, learning, mentally demanding hobbies):
is associated with a longer cognitive healthspan (lower risk of cognitive decline and dementia),
and the signal often looks dose-responsive (more frequent/intense engagement, stronger associations).
See, e.g., Verghese et al., N. Engl. J. Med. (2003); Wu et al., JAMA Network Open (2023); Liu et al., BMC Medicine (2024); Liu et al., BMC Geriatrics (2021).

