Sure, except your photos come out like this:
(A Victorian-era selfie. Source.)
The dilemma: You need a camera for the photo, but don't want the camera in the photo. The instrument shouldn't appear inside the subject. (Hubert, you're leaving scalpels in the patient again.)
So, we need an isolated photo of a shiny object. What can we do?
Shrink it down: Make the camera as small as possible. Microscopic, a fleck of dust. But even that speck will show up on the sphere. Take the photo, and fix the blemish with our best guess given the surrounding pixels.
Go invisible: Make the camera unobservable to the subject: make it from perfectly transparent glass, or actively camouflage with your surroundings (like an octopus). The camera is there, looking at the subject, but the subject cannot notice it.
In Calculus, a function like f(x) = x^{2} is our subject. Limits (shrinking) and infinitesimals (invisibility) are how we take photos without our reflection getting in the way.
Consider a function like f(x) = 2x + 3. If I take a photo with my camera, I get:

f(x + camera) = 2(x + camera) + 3 = (2x + 3) + 2 · camera
We have the original function, and put the camera into the scene. The result is the original function (2x + 3) and the camera observing "2". That is, the camera thinks "2" is how much the function has changed. (And yes, d/dx (2x + 3) = 2.)
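As a quick sketch (plain Python, with a hypothetical `f` standing in for our function): for the linear case, the camera reports "2" no matter how big or small it is.

```python
def f(x):
    # Our subject: f(x) = 2x + 3
    return 2 * x + 3

# Put a "camera" (a change of size dx) into the scene and see
# what change the function reports, per unit of camera:
for dx in [1.0, 0.1, 0.001]:
    observed = (f(5 + dx) - f(5)) / dx
    print(dx, observed)  # the camera sees 2 (up to float rounding), no reflection
```

Because the function is linear, there is no camera-squared term to clean up afterward.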
Ok. Now take a function like f(x) = x^{2}. Again, let's put the camera into the scene to observe changes:

f(x + camera) = (x + camera)^{2} = x^{2} + 2x · camera + camera^{2}
Hrm. The camera is directly observing some changes (2x · camera) but there's another camera^{2} term: the camera is observing its own reflection! The term camera^{2} only exists because we have a camera in the first place. It's an illusion.
What's the fix?
List all the changes the camera sees:
Figure out what the camera directly observed. We divide to see what was "attached" to the camera:
Remove "reflections" where the camera saw itself:
From a technical perspective, the last step happens by shrinking the camera to zero (limits) or letting the camera be invisible (infinitesimals).
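The three steps above can be sketched numerically (a minimal Python illustration of the "shrinking" approach, not a formal definition of a limit): as the camera shrinks, the reflection fades and only the direct observation 2x remains.

```python
def f(x):
    # Our shiny subject: f(x) = x^2
    return x ** 2

x = 3.0  # the direct observation should be 2x = 6

# 1. List all changes the camera sees: f(x + dx) - f(x) = 2x*dx + dx^2
# 2. Divide by dx to find what was "attached": 2x + dx
# 3. Shrink the camera: the leftover dx (the reflection) fades away
for dx in [1.0, 0.1, 0.01, 0.001]:
    attached = (f(x + dx) - f(x)) / dx
    print(dx, attached)  # approaches 2x = 6 as the camera shrinks
```

With dx = 1.0 the reflection is glaring (we see 7, not 6); by dx = 0.001 it has nearly vanished.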
Neat, eh? We're reframing the process of finding the derivative: make a change, see what the direct effects are, remove the artifacts. The concept of "what the camera directly sees" and the camera's "reflection" help settle my mind about throwing away terms that appear to be there.
In math, we have fancy terms like linear and non-linear functions. We can think in terms of "shiny" or "dull" functions.
Linear functions are dull because they only have terms like x or constant values -- the camera can attach directly, and there's no reflection. Non-linear functions have self-interactions (like x^{2}) which means the camera has a chance to see itself. Reflections need to be removed.
With multiple subjects [f(x), g(x)] or multiple cameras (for the x, y, and z axes) we get cross terms like df · dg or dx · dy. The goal is the same: remove unnecessary self- and cross-reflections from the final result. Show what the camera directly sees.
Regular calculus books use dx as the camera to detect change. The goal is to introduce a change (dx), then get the difference (f(x + dx) - f(x)).
This difference (for example, 2x · dx + dx^{2}) isolates the changes that dx is directly responsible for ("sees"). We can then divide by dx to get the change as a rate (how much we got out for how much we put in).
The concern is the same: the change dx may have reflections (dx^{2}) that need to be removed.
The Hawthorne Effect is where people behave differently when being studied. The study itself is appearing in the results.
If you ask people to enter a study about their eating, exercise, reading, or sleeping habits, those behaviors will change. (Gotta look good for the camera! Where are those Greek philosophers I'd always meant to read?)
Math gives us a few suggestions:
Shrink the effect: Make the study as non-intrusive as possible (like an iPhone passively monitoring your steps). Even then, figure out how much the results are skewed and adjust for this. (You left your phone on the washing machine again, you sly dog.)
Make the observations invisible: Imagine you don't know when the study is going to start. "Sometime in the next 20 years we'll silently observe your grocery shopping habits. Sign here." Hrm. You won't change your behavior for 20 years "just in case", so you'll just be you.
Think math only applies to equations? Hah. Only if we don't internalize the underlying concept.
Happy math.
Ok. For that difficult concept, what finally made it click? It's usually:
Rarely is it because we're lacking:
The limiting factor -- the thing holding me back -- is how I approach a concept.
Imagine you're teleported to a Roman classroom. Kids -- heck, the adults -- are struggling with multiplication. (IV times VII is really hard!)
What do you fix: Flip the classroom? Gamify things? Invent a printing press to distribute more worksheets?
Helpful, in time. But the first fix should be a simple discussion:
Hey, I'm from the future. Yes, it's pretty nice. But first, we need to fix your concept of a number. Individual lines for digits is cumbersome. Instead, think in groups of ones, tens, hundreds, and so on. Now multiplication is built into your numbers, and arithmetic gets a lot easier. Let me show you...
Boom. The "Roman Numeral Problem" is not fixed with better tech. Just a better understanding.
Ok. Imagine a time traveler (you, 6 months from now) is going to tutor you today. What would they suggest?
Focus on the problems a time traveler would fix first.
While drafting this post, a comment came in:
So, I just started learning about imaginary numbers in math class, and I was so confused. I understood the idea, but not the practical application or really what i was. I am a person who needs to understand a concept fully, I have trouble accepting that i=the square root of -1. I was googling it and I only got more confused. Then, I found your article on imaginary numbers, and all of a sudden, I got it. I could visualize it, even though I have no specific examples of their importance, I can understand why and how they could be important. It clicked. It doesn't make me want to go do my worksheet on adding and subtracting them, but in math tomorrow, I will be a much happier camper. -Abby
It drives me crazy to see endless tutorials on imaginary numbers that don't address the fundamental confusion of how a negative number can have a square root. You can give me all the videos and interactive quizzes you want, I'm not truly learning until you explain the notion of a rotation.
This misprioritization shows up everywhere:
Fix the plot, then worry about special effects.
Fix the recipe, then worry about decor.
Fix the melody, then worry about the instruments.
Fix the analogy, then worry about the presentation format.
Identify what's held you back and fix that first.
Happy math.
Technology helps with certain limitations (access, distribution, cost). But the quality of the source material is still up to us. I'd prefer handwritten letters with Socrates to an HD video conference with Carrot Top.
Veritasium has a great video along these lines ("This Will Revolutionize Education"):
If we think the limiting factor in education is still distribution, we'll focus on technical solutions.
But you know what? We've had Shakespeare online for a few decades now. Modern kids must be poetry experts because of free access to quality literature, right?
It's not an access problem any more. It's a motivation, interest, enthusiasm, understanding-what's-actually-going-on problem. Let's fix that first.
Mathematically, we can write:

0 = 1 - 1
And to a calculator, these are the same. Are they? There's a suspicion that nothing (0) and complete cancellation (1 - 1) aren't quite identical.
In physics, there's the notion of a stable and unstable equilibrium. Take two pencils. Lay one on the table, balance the other on its tip.
They're both 'balanced'. There's zero motion. Yet one is a precarious position, carefully opposing the pull of gravity, while the other lays peacefully.
Lie on the floor for 10 minutes. Hold the plank pose for 10 minutes. From a physics perspective, no work was done (nothing moved), but your quivering arms tell a different story.
In algebra, we constantly factor equations to find roots.
Why? In short, we want to find the "neutral zones" where all forces balance.
Factoring

x^{2} + 2x - 3 = (x + 3)(x - 1) = 0

means "Is there a value where x^{2}, 2x, and 3 cancel each other out?". We arrange the scenario so the neutral zone is where we want to be (such as having no error, or having competing goals align).
There is often a "trivial solution", where we can plug in x=0 and all inputs disappear (lying the pencil on the table... or just taking it away!). However, we're more interested in finding a "neutral zone", where multiple, existing forces balance.
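A sketch of "finding the neutral zone" in code (plain Python; the specific quadratic x^2 + 2x - 3 is my assumption, assembled from the three terms mentioned above):

```python
def forces(x):
    # Three competing "forces": x^2, 2x, and -3
    return x ** 2 + 2 * x - 3

# Factored form: x^2 + 2x - 3 = (x + 3)(x - 1),
# so the neutral zones are x = -3 and x = 1.
for x in [-3, 1]:
    print(x, forces(x))  # 0 at both: the forces cancel exactly

# Note x = 0 is NOT a neutral zone here: forces(0) = -3.
print(forces(0))
```

The roots aren't "nothing happening"; they're the inputs where real, nonzero terms exactly oppose each other.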
Programming languages distinguish "void/undefined/null" (a value is not set) and "having a value of emptiness".
var i; // i is undefined
i = 0; // i now holds a definite value: "nothingness"
If we imagine data storage as a light switch, we have:

No switch installed: the variable is undefined (not even wired up)

Switch installed and off: the variable holds 0, a definite "nothing"

Switch installed and on: the variable holds a value
By itself, var i
is just a name or pointer, but it's not yet referring to anything (not even nothingness). It's not that Gazasdasrb means "nonsense", it's that Gazasdasrb has no meaning at all.
Many math explanations say you "can't divide by zero". It's not that you can't, it's that it's undefined. What does division by zero mean? What does Gazasdasrb mean?
If we pick a specific value for the result of a division by zero (let's say 3/0 = 15) then we immediately have contradictions (this means 15 * 0 = 3).
We avoid this trouble by saying division by zero is "undefined", or "we haven't got around to picking a value, nyah". In some games, the only winning move is not to play.
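A quick Python sketch of "the only winning move is not to play": the language leaves division by zero undefined rather than picking a value, because no candidate survives the multiplication check.

```python
# Python refuses to define 3 / 0: it raises instead of choosing a value.
try:
    result = 3 / 0
except ZeroDivisionError:
    result = "undefined"
print(result)

# Suppose we *did* define 3 / 0 = q for some number q.
# Then q * 0 would have to equal 3, but q * 0 is always 0:
for q in [15, -2, 0, 1e9]:
    assert q * 0 != 3  # every candidate leads to a contradiction
```

Raising an error is the programming-language version of saying "we haven't picked a value, and we won't."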
(Sometimes we define a value for strange expressions (such as 0^{0} = 1), if it's useful and doesn't lead to contradictions.)
Calculus dances with the concept of zero. Beyond the study of limits and infinitesimals, we are curious about the meaning of "zero change".
When I say a function isn't changing ("the derivative is zero"), it's usually not enough information. Are we not changing because we're at a minimum, a maximum, or precariously balanced between a hill and ravine?
There are tricks, like the second-derivative test, to see what type of "zero change" we have.
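As a sketch, the second-derivative test can be run numerically (plain Python with hypothetical helper names; a central finite difference stands in for the true second derivative):

```python
def second_derivative(f, x, h=1e-4):
    # Central finite-difference estimate of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def classify(f, x):
    # Classify a point of "zero change" by its curvature
    d2 = second_derivative(f, x)
    if d2 > 1e-6:
        return "minimum"       # curving upward: bottom of a ravine
    if d2 < -1e-6:
        return "maximum"       # curving downward: top of a hill
    return "inconclusive"      # flat: need more information

print(classify(lambda x: x ** 2, 0.0))   # minimum
print(classify(lambda x: -x ** 2, 0.0))  # maximum
print(classify(lambda x: x ** 3, 0.0))   # inconclusive
```

All three functions have "zero change" at x = 0, yet the second derivative distinguishes the ravine, the hill, and the precarious in-between.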
Society sets many goals for itself. Here's one: reduce littering. Given our "multiple zero" interpretation, we could accomplish this with:

An absence: nobody litters in the first place, so there's nothing to clean.

A cancellation: people litter constantly, and cleanup crews whisk it away just as fast.
It's the same result -- clean streets -- but what strategy do we prefer?
In general, any negative influence (unemployment, crime, pollution, etc.) can be seen through the lens of prevention or cure, an absence vs. meticulous cancellation. The reading is 0 in both cases, and it's up to us to make the distinction. (Sir, unfrozen Caveman Og is asking about Wooly Mammoth attacks again. Should we sell him more repellent?)
In Eastern philosophy there's the notion of non-doing or Wu Wei. Our brains think of "non doing" as sitting lazily on the couch. But maybe it's another type of zero. (Again, hold a plank for 10 minutes and tell me nothing happened.)
This essay is quick armchair philosophy from an equation. The words "something comes from nothing" aren't convincing. But if I write 0 = 1 - 1, boom, an idea snaps into place. How did 5 symbols convince you in seconds? Isn't that amazing?
Calculations are nice, but not the end goal of math education. Intuition means you're comfortable thinking, daydreaming, and exploring a concept with math as a guidepost. Now imagine having this comfort with the notions of shape, change, and chance (geometry, calculus, statistics).
Let's learn to sing with math, baby.
If you want a path that doesn't expect perfect motivation, shares insights in minutes (not weeks), and aims for lifelong insight, this guide is for you.
My learning strategy is to ask honest (sometimes uncomfortable) questions about what's really working.
No games, no kidding ourselves, just:
Here's my wishlist for a learning guide. Elon Musk talks about thinking from first principles, starting with fundamental truths and working forward from there^{1}. Who cares what's being done now, what's our goal?
Priority #1 for any class is: Do not create hate for the subject.
Imagine 99% of people in a skiing class never ski again. They cringe at the thought. We wouldn't console ourselves thinking "Oh, skiing teaches important physical skills that apply to other fields." We'd think "That skiing class is awful and needs to change."
Sure, not everyone will love skiing (or cooking, or math), but they shouldn't detest it. Temporary understanding is not worth permanent aversion.
So, what Calculus introduction made me excited to learn more?
For me, it was seeing how patterns can be cleverly split and re-assembled:
Most courses march you through weeks of theory before you get to "appreciate" these diagrams in week 11. Ugh. The big picture helps me appreciate the details, not the reverse.
A typical discussion:
"I want to learn Calculus. What should I do?"
"Here's a [full book/course/MOOC]. It's months of effort, I didn't do it myself, but here you go."
In other words, "go to the library and read for 100 hours". The real question:
I'm interested in the subject. Is there a plan that worked for you?
Motivation is limited. Traditional classes "work" because students are under immense pressure to finish (tuition, peer pressure, fear of not graduating).
Online courses without this pressure have single-digit completion rates. We can pretend students "got something" from the experience, just like you "got something" from a movie you walked out of. We can't change the goalposts to "something is better than nothing" halfway through.
Realistic advice on what worked with my limited motivation (even as a math hobbyist!) is:
Get an Aha! moment in minutes that motivates me to keep going (a cool diagram, example, or simulation).
Take a progressive journey where even if I stop after an hour, I have some helpful insights (vs. an hour of stretching in the parking lot).
Maintain a desire to revisit the subject by having an approachable, gentle introduction. I'll then keep coming back to fill in gaps over time.
For fun, find a lesson on imaginary numbers.
Does it acknowledge negative numbers were also distrusted?
Is the name "imaginary" described as an insult, given by people who didn't understand the concept?
Does the teacher mention their own confusion? (Or did imaginary numbers just click?)
Is there a real-world application? (If not, is this because it truly doesn't exist, we haven't tried to look, or it isn't important for learning?)
This type of lesson is a giant pet peeve. The flow is "Here's a confusing concept. I was confused myself, but I won't tell you that. Memorize the definition, apply it in these practice problems, and we'll call it a day."
Argh, this drives me nuts. It reinforces the stereotype that math class is a game of moving symbols around. (This symbol multiplied by this other symbol makes -1. Tada!)
It's ok to lack an intuition; I lack it for most things. But hiding our initial confusion implies the subject isn't confusing.
There's a common trope of the smart-aleck student trying to "outsmart" the teacher. Do basketball players try to "outsmart" their coach?
The flawed assumption is teachers must be some omniscient authority giving you access to precious knowledge. The knowledge is out there, it's not like the teacher invented the math herself. Instead, imagine a coach who is trying to improve your understanding.
Coaches can be wrong, sure. But they've seen many people struggle with the same issues you're facing, and are trying to help. It's ok if LeBron James can dunk better than his coach.
The math may be perfect and unchanging, but the way it's taught is not. Let's make it easy to improve lessons and not expect perfection the first time.
Most courses assume you want mastery of the subject. That's fine, but is it necessary?
There are several levels of music understanding:
Intuitive Appreciation: Just enjoying the music.
Natural Description: Humming a tune you heard or made up.
Symbolic Description: Reading and writing the sheet music.
Theory: Explaining how harmonies work, why minor scales are somber, etc.
Performance: Playing the official instruments.
In language learning, there is an ILR scale from no proficiency to native fluency. Not everyone studying Calculus needs to become Isaac Newton. Can we have a path that goes as far as we need?
Combining these insights, I've made a Calculus Learning Guide.
The principles, as I tried to apply them:
It's honest. It's the explanation that actually inspired me, not the theoretical explanation that requires weeks of discipline for some future payoff.
It acknowledges limited motivation. How far can you get in 1 minute? 10 minutes? An hour? Pretty far, I think. And getting a win in 10 minutes means you'll come back for more.
It's updatable. With lessons based primarily on text, we can easily update, re-arrange, add, edit, fix. Other formats are essentially a bet we got it right the first time.
It acknowledges levels of understanding. Most people just want an appreciation for Calculus. Technical performance is a goal we can separate, organize, and build a path to.
I eat the veggies myself. This guide has "gut checks" like "Can I describe an integral in everyday terms?" and "Can I derive the product rule on my own?". This is how I actually refresh my Calculus understanding.
In my ideal world, every Wikipedia topic would have a guide that took you from the 1-minute version to a full technical understanding. Go as far as you wish, make meaningful progress at each step, and have fun along the way.
Happy math.
Musk mentions not "reasoning by analogy", or assuming a conclusion is true based on what happened in another scenario. This is different from "understanding by analogy", getting the gist of an idea and then working to the technical version. The analogy is a raft to cross the river, to be left behind once you're on land. ↩