All posts by kalid

An Intuitive Guide to Linear Algebra

Despite two linear algebra classes, my knowledge consisted of “Matrices, determinants, eigen something something”.

Why? Well, let’s try this course format:

  • Name the course “Linear Algebra” but focus on things called matrices and vectors
  • Label items with similar-looking letters (i/j), and even better, similar-looking-and-sounding ones (m/n)
  • Teach concepts like Row/Column order with mnemonics instead of explaining the reasoning
  • Favor abstract examples (2d vectors! 3d vectors!) and avoid real-world topics until the final week

The survivors are physicists, graphics programmers and other masochists. We missed the key insight:

Linear algebra gives you mini-spreadsheets for your math equations.

We can take a table of data (a matrix) and create updated tables from the original. It’s the power of a spreadsheet written as an equation.

Here’s the linear algebra introduction I wish I had, with a real-world stock market example.

What’s in a name?

“Algebra” means, roughly, “relationships”. Grade-school algebra explores the relationship between unknown numbers. Without knowing x and y, we can still work out that (x + y)^2 = x^2 + 2xy + y^2.

“Linear Algebra” means, roughly, “line-like relationships”. Let’s clarify a bit.

Straight lines are predictable. Imagine a rooftop: move forward 3 horizontal feet (relative to the ground) and you might rise 1 foot in elevation (The slope! Rise/run = 1/3). Move forward 6 feet, and you’d expect a rise of 2 feet. Contrast this with climbing a dome: each horizontal foot forward raises you a different amount.

Lines are nice and predictable:

  • If 3 feet forward has a 1-foot rise, then going 10x as far should give a 10x rise (30 feet forward is a 10-foot rise)
  • If 3 feet forward has a 1-foot rise, and 6 feet has a 2-foot rise, then (3 + 6) feet should have a (1 + 2) foot rise

In math terms, an operation F is linear if scaling inputs scales the output, and adding inputs adds the outputs:

\begin{aligned}
F(ax) &= a \cdot F(x) \\
F(x + y) &= F(x) + F(y)
\end{aligned}

In our example, F(x) calculates the rise when moving forward x feet, and the properties hold:

\displaystyle{F(10 \cdot 3) = 10 \cdot F(3) = 10}

\displaystyle{F(3+6) = F(3) + F(6) = 3}

Linear Operations

An operation is a calculation based on some inputs. Which operations are linear and predictable? Multiplication, it seems.

Exponents (F(x) = x^2) aren’t predictable: 10^2 is 100, but 20^2 is 400. We doubled the input but quadrupled the output.

Surprisingly, regular addition isn’t linear either. Consider the “add three” function:

\begin{aligned}
F(x) &= x + 3 \\
F(10) &= 13 \\
F(20) &= 23
\end{aligned}

We doubled the input and did not double the output. (Yes, F(x) = x + 3 happens to be the equation for an offset line, but it’s still not “linear” because F(10) isn’t 10 * F(1). Fun.)
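The scaling and additivity properties are easy to test mechanically. Here’s a quick Python sketch (the helper name `is_linear` is mine) that spot-checks both properties on a few sample inputs, for the roof function and the “add three” function:

```python
def is_linear(F, samples=(1.0, 2.0, 5.0), scale=10.0):
    """Spot-check the two linearity properties on a few sample inputs."""
    scales_ok = all(abs(F(scale * x) - scale * F(x)) < 1e-9 for x in samples)
    adds_ok = all(abs(F(x + y) - (F(x) + F(y))) < 1e-9
                  for x in samples for y in samples)
    return scales_ok and adds_ok

roof = lambda x: x / 3       # rise after x feet forward (slope 1/3)
add_three = lambda x: x + 3  # the "add three" function

print(is_linear(roof))       # True
print(is_linear(add_three))  # False
```

A spot-check on a few samples isn’t a proof, but it captures the idea: multiplying by a constant passes, an offset fails.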

Our only hope is to multiply by a constant: F(x) = ax (in our roof example, a=1/3). However, we can still combine linear operations to make a new linear operation:

\displaystyle{G(x, y, z) = F(x + y + z) = F(x) + F(y) + F(z)}

G is made of 3 linear subpieces: if we double the inputs, we’ll double the output.

We have “mini arithmetic”: multiply inputs by a constant, and add the results. It’s actually useful because we can split inputs apart, analyze them individually, and combine the results:

\displaystyle{G(x,y,z) = G(x,0,0) + G(0,y,0) + G(0,0,z)}

If the inputs interacted like exponents, we couldn’t separate them — we’d have to analyze everything at once.

Organizing Inputs and Operations

Most courses hit you in the face with the details of a matrix. “Ok kids, let’s learn to speak. Select a subject, verb and object. Next, conjugate the verb. Then, add the prepositions…”

No! Grammar is not the focus. What’s the key idea?

  • We have a bunch of inputs to track
  • We have predictable, linear operations to perform (our “mini-arithmetic”)
  • We generate a result, perhaps transforming it again

Ok. First, how should we track a bunch of inputs? How about a list:

\displaystyle{\begin{bmatrix}x \\ y \\ z\end{bmatrix}}

Not bad. We could write it (x, y, z) too — hang onto that thought.

Next, how should we track our operations? Remember, we only have “mini arithmetic”: multiplications, with a final addition. If our operation F behaves like this:

\displaystyle{F(x, y, z) = 3x + 4y + 5z}

We could abbreviate the entire function as (3, 4, 5). We know to multiply the first input by the first value, the second input by the second value, etc., and add the result.

Only need the first input?

\displaystyle{G(x, y, z) = 3x + 0y + 0z = (3, 0, 0)}
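Since an operation is just a list of multipliers, applying it means “multiply pairwise, then add” — a dot product. A minimal Python sketch (the helper name `apply_op` is mine):

```python
def apply_op(op, inputs):
    """Multiply each input by its coefficient, then add: the whole mini-arithmetic."""
    return sum(c * v for c, v in zip(op, inputs))

F = (3, 4, 5)  # F(x, y, z) = 3x + 4y + 5z
G = (3, 0, 0)  # G(x, y, z) = 3x

print(apply_op(F, (1, 1, 1)))  # 12
print(apply_op(G, (2, 7, 9)))  # 6
```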

Let’s spice it up: how should we handle multiple sets of inputs? Let’s say we want to run operation F on both (a, b, c) and (x, y, z). We could try this:

\displaystyle{F(a, b, c, x, y, z) = ?}

But it won’t work: F expects 3 inputs, not 6. We should separate the inputs into groups:

1st Input  2nd Input
a          x
b          y
c          z

Much neater.

And how could we run the same input through several operations? Have a row for each operation:

F: 3 4 5
G: 3 0 0

Neat. We’re getting organized: inputs in vertical columns, operations in horizontal rows.

Visualizing The Matrix

Words aren’t enough. Here’s how I visualize inputs, operations, and outputs:

[diagram: linear algebra reference]

Imagine “pouring” each input along each operation:

[diagram: linear algebra pour-in]

As an input passes an operation, it creates an output item. In our example, the input (a, b, c) goes against operation F and outputs 3a + 4b + 5c. It goes against operation G and outputs 3a + 0 + 0.

Time for the red pill. A matrix is a shorthand for our diagrams:

\displaystyle{\text{Inputs} = A = \begin{bmatrix} \text{input1} & \text{input2} \end{bmatrix} = \begin{bmatrix} a & x \\ b & y \\ c & z \end{bmatrix}}

\displaystyle{\text{Operations} = M = \begin{bmatrix} \text{operation1} \\ \text{operation2} \end{bmatrix} = \begin{bmatrix} 3 & 4 & 5 \\ 3 & 0 & 0 \end{bmatrix}}

A matrix is a single variable representing a spreadsheet of inputs or operations.

Trickiness #1: The reading order

Instead of an input => matrix => output flow, we use function notation, like y = f(x) or f(x) = y. We usually write a matrix with a capital letter (F), and a single input column with lowercase (x). Because we have several inputs (A) and outputs (B), they’re considered matrices too:

\displaystyle{MA = B}

\begin{bmatrix}3 & 4 & 5\\3 & 0 & 0\end{bmatrix} \begin{bmatrix}a & x\\b & y\\c & z\end{bmatrix}
= \begin{bmatrix}3a + 4b + 5c & 3x + 4y + 5z\\ 3a & 3x\end{bmatrix}
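We can replay MA = B in a few lines of Python — a plain sketch of the multiplication rule, not a library implementation:

```python
def matmul(M, A):
    """Pour each input column of A through each operation row of M."""
    inner = len(A)  # operation size must equal input size
    assert all(len(row) == inner for row in M)
    return [[sum(M[i][k] * A[k][j] for k in range(inner))
             for j in range(len(A[0]))]
            for i in range(len(M))]

M = [[3, 4, 5],
     [3, 0, 0]]      # operations F and G
A = [[1, 2],         # two input columns: (a,b,c) = (1,1,1), (x,y,z) = (2,2,2)
     [1, 2],
     [1, 2]]

print(matmul(M, A))  # [[12, 24], [3, 6]]
```

Each output column is one input poured through both operations: (1, 1, 1) gives 3+4+5 = 12 and 3, while (2, 2, 2) gives double that.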

Trickiness #2: The numbering

Matrix size is measured as RxC: row count, then column count, and abbreviated “m x n” (I hear ya, “r x c” would be easier to remember). Items in the matrix are referenced the same way: a_{ij} is the item in the ith row and jth column (I hear ya, “i” and “j” are easily confused on a chalkboard). Mnemonics are ok with context, and here’s what I use:

  • RC, like Roman Centurion or RC Cola
  • Use an “L” shape. Count down the L, then across

Why does RC ordering make sense? Our operations matrix is 2×3 and our input matrix is 3×2. Writing them together:

[Operation Matrix] [Input Matrix]
[operation count x operation size] [input size x input count]
[m x n] [p x q] = [m x q]
[2 x 3] [3 x 2] = [2 x 2]

Notice the matrices touch at the “size of operation” and “size of input” (n = p). They should match! If our inputs have 3 components, our operations should expect 3 items. In fact, we can only multiply matrices when n = p.

The output matrix has m operation rows for each input, and q inputs, giving a “m x q” matrix.

Fancier Operations

Let’s get comfortable with operations. Assuming 3 inputs, we can whip up a few 1-operation matrices:

  • Adder: [1 1 1]
  • Averager: [1/3 1/3 1/3]

The “Adder” is just a + b + c. The “Averager” is similar: (a + b + c)/3 = a/3 + b/3 + c/3.

Try these 1-liners:

  • First-input only: [1 0 0]
  • Second-input only: [0 1 0]
  • Third-input only: [0 0 1]

And if we merge them into a single matrix:

[1 0 0]
[0 1 0]
[0 0 1]

Whoa — it’s the “identity matrix”, which copies 3 inputs to 3 outputs, unchanged. How about this guy?

[1 0 0]
[0 0 1]
[0 1 0]

He reorders the inputs: (x, y, z) becomes (x, z, y).

And this one?

[2 0 0]
[0 2 0]
[0 0 2]

He’s an input doubler. We could rewrite him as 2I (twice the identity matrix) if we were so inclined.
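Here are those single-purpose matrices in action (a small Python sketch; the helper name `transform` is mine):

```python
def transform(M, v):
    """Apply a square operations matrix M to a single input column v."""
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # copy inputs unchanged
swap_yz  = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]  # reorder: (x, y, z) -> (x, z, y)
doubler  = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]  # 2 * identity

v = [10, 20, 30]
print(transform(identity, v))  # [10, 20, 30]
print(transform(swap_yz, v))   # [10, 30, 20]
print(transform(doubler, v))   # [20, 40, 60]
```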

And yes, when we decide to treat inputs as vector coordinates, the operations matrix will transform our vectors. Here’s a few examples:

  • Scale: make all inputs bigger/smaller
  • Skew: make certain inputs bigger/smaller
  • Flip: make inputs negative
  • Rotate: make new coordinates based on old ones (East becomes North, North becomes West, etc.)

These are geometric interpretations of multiplication, and how to warp a vector space. Just remember that vectors are examples of data to modify.

A Non-Vector Example: Stock Market Portfolios

Let’s practice linear algebra in the real world:

  • Input data: stock portfolios with dollars in Apple, Google and Microsoft stock
  • Operations: the changes in company values after a news event
  • Output: updated portfolios

And a bonus output: let’s make a new portfolio listing the net profit/loss from the event.

Normally, we’d track this in a spreadsheet. Let’s learn to think with linear algebra:

  • The input vector could be ($Apple, $Google, $Microsoft), showing the dollars in each stock. (Oh! These dollar values could come from another matrix that multiplied the number of shares by their price. Fancy that!)

  • The 4 output operations should be: Update Apple value, Update Google value, Update Microsoft value, Compute Profit

Visualize the problem. Imagine running through each operation:

[diagram: linear algebra stock example]

The key is understanding why we’re setting up the matrix like this, not blindly crunching numbers.

Got it? Let’s introduce the scenario.

Suppose a secret iDevice is launched: Apple jumps 20%, Google drops 5%, and Microsoft stays the same. We want to adjust each stock value, using something similar to the identity matrix:

New Apple     [1.2  0      0]
New Google    [0    0.95   0]
New Microsoft [0    0      1]

The new Apple value is the original, increased by 20% (Google = 5% decrease, Microsoft = no change).

Oh wait! We need the overall profit:

Total change = (.20 * Apple) + (-.05 * Google) + (0 * Microsoft)

Our final operations matrix:

New Apple       [1.2  0      0]
New Google      [0    0.95   0]
New Microsoft   [0    0      1]
Total Profit    [.20  -.05   0]

Making sense? Three inputs enter, four outputs leave. The first three operations are a “modified copy” and the last brings the changes together.

Now let’s feed in the portfolios for Alice ($1000, $1000, $1000) and Bob ($500, $2000, $500). We can crunch the numbers by hand, or use Wolfram Alpha:

[diagram: matrix stock computation]

(Note: Inputs should be in columns, but it’s easier to type rows. The Transpose operation, written as a superscript T, converts rows to columns.)

The final numbers: Alice has $1200 in AAPL, $950 in GOOG, $1000 in MSFT, with a net profit of $150. Bob has $600 in AAPL, $1900 in GOOG, and $500 in MSFT, with a net profit of $0.
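The whole scenario also fits in a short Python sketch (a hand-rolled multiply, purely for illustration), and the results match the figures above:

```python
def matmul(M, A):
    """Hand-rolled matrix multiply, just for illustration."""
    return [[sum(M[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(M))]

operations = [[1.20,  0.00, 0],   # New Apple
              [0.00,  0.95, 0],   # New Google
              [0.00,  0.00, 1],   # New Microsoft
              [0.20, -0.05, 0]]   # Total Profit

portfolios = [[1000, 500],        # Apple dollars     (Alice, Bob)
              [1000, 2000],       # Google dollars
              [1000, 500]]        # Microsoft dollars

for label, row in zip(["Apple", "Google", "Microsoft", "Profit"],
                      matmul(operations, portfolios)):
    print(label, [round(v, 2) for v in row])
```

Alice ends with $1200 / $950 / $1000 and a $150 profit; Bob with $600 / $1900 / $500 and $0 profit.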

What’s happening? We’re doing math with our own spreadsheet. Linear algebra emerged in the 1800s yet spreadsheets were invented in the 1980s. I blame the gap on poor linear algebra education.

Historical Notes: Solving Simultaneous equations

An early use of tables of numbers (not yet a “matrix”) was bookkeeping for linear systems:

\begin{aligned}
x + 2y + 3z &= 3 \\
2x + 3y + z &= -10 \\
5x - y + 2z &= 14
\end{aligned}


\begin{bmatrix}1 & 2 & 3\\2 & 3 & 1\\5 & -1 & 2\end{bmatrix} \begin{bmatrix}x \\y \\ z \end{bmatrix}
= \begin{bmatrix}3 \\ -10 \\ 14 \end{bmatrix}

We can avoid hand cramps by adding/subtracting rows in the matrix and output, vs. rewriting the full equations. As the matrix evolves into the identity matrix, the values of x, y and z are revealed on the output side.

This process, called Gauss-Jordan elimination, saves time. However, linear algebra is mainly about matrix transformations, not solving large sets of equations (it’d be like using Excel for your shopping list).
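Here’s a bare-bones Python sketch of Gauss-Jordan elimination (it assumes a unique solution and nonzero pivots — no row swapping), run on the system above:

```python
def gauss_jordan(M, b):
    """Row-reduce the augmented matrix [M | b] until M becomes the identity.
    Bare-bones sketch: assumes a unique solution and nonzero pivots."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        pivot = A[col][col]
        A[col] = [v / pivot for v in A[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * p for v, p in zip(A[r], A[col])]
    return [row[-1] for row in A]                     # the revealed unknowns

x, y, z = gauss_jordan([[1, 2, 3], [2, 3, 1], [5, -1, 2]], [3, -10, 14])
print(round(x, 4), round(y, 4), round(z, 4))  # 0.1667 -4.8333 4.1667
```

As the left side evolves into the identity matrix, the solution appears on the right side — exactly the hand process, minus the hand cramps.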

Terminology, Determinants, and Eigenstuff

Words have technical categories to describe their use (nouns, verbs, adjectives). Matrices can be similarly subdivided.

Descriptions like “upper-triangular”, “symmetric”, “diagonal” are the shape of the matrix, and influence their transformations.

The determinant is the “size” of the output transformation. If the input was a unit square or cube (an area or volume of 1), the determinant is the area or volume after the transformation. A determinant of 0 means the matrix is “destructive” and cannot be reversed (similar to multiplying by zero: information was lost).
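A tiny 2x2 sketch of that idea (the example matrices are mine): the “doubler” scales each axis by 2, so a unit square’s area becomes 4; a matrix whose rows point the same direction squashes the plane flat and has determinant 0:

```python
def det2(M):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

print(det2([[2, 0], [0, 2]]))  # 4: doubling each axis scales areas by 4
print(det2([[1, 2], [2, 4]]))  # 0: the rows overlap, area is squashed flat
```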

The eigenvector and eigenvalue are the “axes” of the transformation.

Consider a spinning globe: every location faces a new direction, except the poles.

An “eigenvector” is the input that doesn’t change direction after going through the matrix (it points “along the axis”). And although the direction doesn’t change, the size might. The eigenvalue is the amount the eigenvector is scaled up or down when going through the matrix.

(My intuition here is weak, and I’d like to explore more. Here’s a nice diagram and video.)

Matrices As Inputs

A funky thought: we can treat the operations matrix as inputs!

Think of a recipe as a list of commands (Add 2 cups of sugar, 3 cups of flour…).

What if we want the metric version? Take the instructions, treat them like text, and convert the units. The recipe is “input” to modify. When we’re done, we can follow the instructions again.

An operations matrix is similar: commands to modify. Applying one operations matrix to another gives a new operations matrix that applies both transformations, in order.

If N is “adjust for portfolio for news” and T is “adjust portfolio for taxes” then applying both:

TN = X

means “Create matrix X, which first adjusts for news, and then adjusts for taxes”. Whoa! We didn’t need an input portfolio, we applied one matrix directly to the other.

The beauty of linear algebra is representing an entire spreadsheet calculation with a single letter. Want to apply the same transformation a few times? Use N^2 or N^3.
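Here’s the composition idea in Python, with two made-up 2-stock adjustment matrices (the numbers are mine, purely for illustration): multiplying the matrices gives a single matrix that applies both transformations, and N applied to itself is the “news” adjustment done twice.

```python
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

news  = [[1.2, 0.0], [0.0, 0.95]]  # made-up 2-stock "adjust for news"
taxes = [[0.9, 0.0], [0.0, 0.9]]   # made-up flat 10% haircut on each stock

X  = matmul(taxes, news)  # news first, then taxes, as one matrix
N2 = matmul(news, news)   # the news adjustment applied twice

print([[round(v, 4) for v in row] for row in X])   # [[1.08, 0.0], [0.0, 0.855]]
print([[round(v, 4) for v in row] for row in N2])  # [[1.44, 0.0], [0.0, 0.9025]]
```

No portfolio needed: we operated on the operations themselves.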

Can We Use Regular Addition, Please?

Yes, because you asked nicely. Our “mini arithmetic” seems limiting: multiplications, but no addition? Time to expand our brains.

Imagine adding a dummy entry of 1 to our input: (x, y, z) becomes (x, y, z, 1).

Now our operations matrix has an extra, known value to play with! If we want x + 1 we can write:

[1 0 0 1]

And x + y - 3 would be:

[1 1 0 -3]


Want the geeky explanation? We’re pretending our input exists in a 1-higher dimension, and put a “1” in that dimension. We skew that higher dimension, which looks like a slide in the current one. For example: take input (x, y, z, 1) and run it through:

[1 0 0 1]
[0 1 0 1]
[0 0 1 1]
[0 0 0 1]

The result is (x + 1, y + 1, z + 1, 1). Ignoring the 4th dimension, every input got a +1. We keep the dummy entry, and can do more slides later.
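A quick Python check of the slide trick (the helper name `transform` is mine), plus the x + y - 3 operation from above run on a sample input:

```python
def transform(M, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

slide = [[1, 0, 0, 1],
         [0, 1, 0, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 1]]  # the "+1 to everything" matrix

print(transform(slide, [10, 20, 30, 1]))  # [11, 21, 31, 1]

x_plus_y_minus_3 = [1, 1, 0, -3]          # the x + y - 3 operation
print(sum(c * v for c, v in zip(x_plus_y_minus_3, [10, 20, 30, 1])))  # 27
```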

Mini-arithmetic isn’t so limited after all.


I’ve overlooked some linear algebra subtleties, and I’m not too concerned. Why?

These metaphors are helping me think with matrices, more than the classes I “aced”. I can finally respond to “Why is linear algebra useful?” with “Why are spreadsheets useful?”

They’re not, unless you want a tool used to attack nearly every real-world problem. Ask a businessman if they’d rather donate a kidney or be banned from Excel forever. That’s the impact of linear algebra we’ve overlooked: efficient notation to bring spreadsheets into our math equations.

Happy math.

Math As Language: Understanding the Equals Sign

It’s easy to forget math is a language for communicating ideas. As words, “two and three is equal to five” is cumbersome. Replacing numbers and operations with symbols helps: “2 + 3 is equal to 5”.

But we can do better. In 1557, Robert Recorde invented the equals sign, written with two parallel lines (=), because “noe 2 thynges, can be moare equalle”.

“2 + 3 = 5” is much easier to read. Unfortunately, the meaning of “equals” changes with the context — just ask programmers who have to distinguish =, == and ===.

A “equals” B is a generic conclusion: what specific relationship are we trying to convey?


Simplification

I see “2 + 3 = 5” as “2 + 3 can be simplified to 5”. The equals sign transitions a complex form on the left to an equivalent, simpler form on the right.

Temporary Assignment

Statements like “speed = 50” mean “the speed is 50, for this scenario”. It’s only good for the problem at hand, and there’s no need to remember this “fact”.

Fundamental Connection

Consider a mathematical truth like a^2 + b^2 = c^2, where a, b, and c are the sides of a right triangle.

I read this equals sign as “must always be equal to” or “can be seen as” because it states a permanent relationship, not a coincidence. The arithmetic of 3^2 + 4^2 = 5^2 is a simplification; the geometry of a^2 + b^2 = c^2 is a deep mathematical truth.

The formula to add the numbers from 1 to n is:

\displaystyle{1 + 2 + \ldots + n = \frac{n(n+1)}{2}}
which can be seen as a type of geometric rearrangement, combinatorics, averaging, or even list-making.
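The closed form 1 + 2 + … + n = n(n+1)/2 is easy to sanity-check in Python against a brute-force sum:

```python
def sum_formula(n):
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

for n in (1, 7, 100):
    assert sum_formula(n) == sum(range(1, n + 1))  # matches brute force

print(sum_formula(100))  # 5050
```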

Factual Definition

Statements like

\displaystyle{e = \lim_{n\to\infty} \left( 1 + \frac{100\%}{n} \right)^n}

are definitions of our choosing; the left hand side is a shortcut for the right hand side. It’s similar to temporary assignment, but reserved for “facts” that won’t change between scenarios (e always has the same value in every equation, but “speed” can change).


Constraints to Satisfy

Here’s a tricky one. We might write

x + y = 5

x - y = 3

which indicates conditions we want to be true. I read this as “x + y should be 5, if possible” and “x - y should be 3, if possible”. If we satisfy the constraints (x=4, y=1), great!

If we can’t meet both goals (x + y = 5; 2x + 2y = 9) then the equations could be true individually but not together.
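A brute-force Python check over small integers shows the first pair of constraints has exactly one solution, while the conflicting pair has none:

```python
# All integer pairs (in a small window) meeting both constraints:
solutions = [(x, y) for x in range(-10, 11) for y in range(-10, 11)
             if x + y == 5 and x - y == 3]
print(solutions)  # [(4, 1)]

# The conflicting goals x + y = 5 and 2x + 2y = 9 have no solution at all:
conflict = [(x, y) for x in range(-10, 11) for y in range(-10, 11)
            if x + y == 5 and 2 * (x + y) == 9]
print(conflict)   # []
```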

Example: Demystifying Euler’s Formula

Untangling the equals sign helped me decode Euler’s formula:

\displaystyle{e^{i \cdot \pi} = -1}

A strange beast, indeed. What type of “equals” is it?

A pedant might say it’s just a simplification and break out the calculus to show it. This isn’t enlightening: there’s a fundamental relationship to discover.

e^i*pi refers to the same destination as -1. Two fingers pointing at the same moon.

They are both ways to describe “the other side of the unit circle, 180 degrees away”. -1 walks there, treading straight through the grass, while e^i*pi takes the scenic route and rotates through the imaginary dimension. This works for any point on the circle: rotate there, or move in straight lines.

[diagram: Euler's formula]

Two paths with the same destination: that’s what their equality means. Move beyond a generic equals and find the deeper, specific connection (“simplifies to”, “has been chosen to be”, “refers to the same concept as”).

Happy math.

Why Do We Learn Math?

I cringe when hearing "Math teaches you to think".

It's a well-meaning but ineffective appeal that only satisfies existing fans (see: "Reading takes you anywhere!"). What activity, from crossword puzzles to memorizing song lyrics, doesn't help you think?

Math seems different, and here's why: it's a specific, powerful vocabulary for ideas.

Imagine a cook who only knows the terms "yummy" and "yucky". He makes a bad meal. What's wrong? Hrm. There's no way to describe it! Too mild? Salty? Sweet? Sour? Cold? These specific critiques become hazy variations of the "yucky" bucket. He probably wouldn't think "Needs more umami".

Words are handholds that latch onto thoughts. You (yes, you!) think with extreme mathematical sophistication. Your common-sense understanding of quantity includes concepts refined over millennia (zero, decimals, negatives).

What we call "Math" are just the ideas we haven't yet internalized.

Let's explore our idea of quantity. It's a funny notion, and some languages only have words for one, two and many. They never thought to subdivide "many", and you never thought to refer to your East and West hands.

Here's how we've refined quantity over the years:

  • We have "number words" for each type of quantity ("one, two, three... five hundred seventy nine")
  • The "number words" can be written with symbols, not regular letters, like lines in the sand. The unary (tally) system has a line for each object.
  • Shortcuts exist for large counts (Roman numerals: V = five, X = ten, C = hundred)
  • We even have a shortcut to represent emptiness: 0
  • The position of a symbol is a shortcut for other numbers. 123 means 100 + 20 + 3.
  • Numbers can have incredibly small, fractional differences: 1.1, 1.01, 1.001...
  • Numbers can be negative, less than nothing (Wha?). This represents "opposite" or "reverse", e.g., negative height is underground, negative savings is debt.
  • Numbers can be 2-dimensional (or more). This isn't yet commonplace, so it's called "Math" (scary M).
  • Numbers can be undetectably small, yet still not zero. This is also called "Math".

Our concept of numbers shapes our world. Why do ancient years go from BC to AD? We needed separate labels for "before" and "after", which weren't on a single scale.

Why did the stock market set prices in increments of 1/8 until 2000 AD? We were based on centuries-old systems. Ask a modern trader if they'd rather go back.

Why is the decimal system useful for categorization? You can always find room for a decimal between two other ones, and progressively classify an item (1, 1.3, 1.38, 1.386).

Why do we accept the idea of a vacuum, empty space? Because you understand the notion of zero. (Maybe true vacuums don't exist -- you get the theory)

Why is anti-matter or anti-gravity palatable? Because you accept that positives could have negatives that act in opposite ways.

How could the universe come from nothing? Well, how can 0 be split into 1 and -1?

Our math vocabulary shapes what we're capable of thinking about. Multiplication and division, which eluded geniuses a few thousand years ago, are now homework for grade schoolers. All because we have better ways to think about numbers.

We have decent knowledge of one noun: quantity. Imagine improving our vocabulary for structure, shape, change, and chance. (Oh, I mean, the important-sounding Algebra, Geometry, Calculus and Statistics.)

Caveman Chef Og doesn't think he needs more than yummy/yucky. But you know it'd blow his mind, and his cooking, to understand sweet/sour/salty/spicy/tangy.

We're still cavemen when thinking about new ideas, and that's why we study math.

A Brief Introduction to Probability & Statistics

I’ve studied probability and statistics without experiencing them. What’s the difference? What are they trying to do?

This analogy helped:

  • Probability is starting with an animal, and figuring out what footprints it will make.
  • Statistics is seeing a footprint, and guessing the animal.

Probability is straightforward: you have the bear. Measure the foot size, the leg length, and you can deduce the footprints. “Oh, Mr. Bubbles weighs 400lbs and has 3-foot legs, and will make tracks like this.” More academically: “We have a fair coin. After 10 flips, here are the possible outcomes.”

Statistics is harder. We measure the footprints and have to guess what animal it could be. A bear? A human? If we get 6 heads and 4 tails, what’re the chances of a fair coin?

The Usual Suspects

Here’s how we “find the animal” with statistics:

Get the tracks. Each piece of data is a point in “connect the dots”. The more data, the clearer the shape (a single dot in connect-the-dots isn’t helpful; a single data point makes it hard to find a trend).

Measure the basic characteristics. Every footprint has a depth, width, and height. Every data set has a mean, median, standard deviation, and so on. These universal, generic descriptions give a rough narrowing: “The footprint is 6 inches wide: a small bear, or a large man?”

Find the species. There are dozens of possible animals (probability distributions) to consider. We narrow it down with prior knowledge of the system. In the woods? Think horses, not zebras. Dealing with yes/no questions? Consider a binomial distribution.

Look up the specific animal. Once we have the distribution (“bears”), we look up our generic measurements in a table. “A 6-inch wide, 2-inch deep pawprint is most likely a 3-year-old, 400-lb bear”. The lookup table is generated from the probability distribution, i.e. making measurements when the animal is in the zoo.

Make additional predictions. Once we know the animal, we can predict future behavior and other traits (“According to our calculations, Mr. Bubbles will poop in the woods.”). Statistics helps us get information about the origin of the data, from the data itself.

Ok! The metaphor isn’t perfect, but more palatable than “Statistics is the study of the collection, organization, analysis, and interpretation of data”. Need proof? Let’s see if we can ask intuitive “I tasted it!” questions:

  • What are the most common species? (Common distributions)
  • Are new ones being discovered?
  • Can we predict the next footprint? (Extrapolation)
  • Are the tracks following a path? (Regression / trend line)
  • Here’s two tracks, which animal was faster? Bigger? (Data from two drug trials: which was more effective?)
  • Is one animal moving in the same direction as another? (Correlation)
  • Are two animals tracking a common source? (Causation: two bears chasing the same rabbit)

These questions are much deeper than what I pondered when first learning stats. Every dry procedure now has a context: are we learning a new species? How to take the generic footprint measurements? How to make a table from a probability distribution? How to lookup measurements in a table?

Having an analogy for the statistics process makes later data crunching click. Happy math.

PS. The forwards-backwards difference between probability and statistics shows up all over math. Some procedures are easy to do (derivatives) but difficult to undo (integrals). (Thanks Denis)

Understanding Algebra: Why do we factor equations?

What’s algebra about? When learning about variables (x, y, z), they seem to “hide” a number:

\displaystyle{x + 3 = 5}

What number could be hiding inside of x? 2, in this case.

It seems that arithmetic still works, even when we don’t have the exact numbers up front. Later on, we might arrange these “hidden numbers” in complex ways:

\displaystyle{x^2 + x = 6}

Whoa — a bit harder to solve, but it’s possible. Today let’s figure out how factoring works and why it’s useful.


When we write a polynomial like “x^2 + x = 6”, we can think at a higher level.

We have an unknown number, x, which interacts with itself (x * x = x^2). We add in the original number (+ x) and the result is 6.

x^2, x and 6 are all “numbers”, but now we’re keeping track of how they’re made:

  • x^2 is a component interacting with itself
  • x is a component on its own
  • 6 is the desired state we want the entire system to become

After the interactions are finished, we should get 6. What number could be hiding inside of x to make this true?

Hrm — this is tricky. So let’s fight with a trick of our own: we can make a different system to track the error in our original one (this is mind-bending, so hang on).

Our original system is x^2 + x. The desired state is 6. A new system:

\displaystyle{x^2 + x - 6}

will track the difference between the original system and the desired state. When are we happiest? When there’s no difference:

\displaystyle{x^2 + x - 6 = 0}

Ah! That’s why we’re so interested in setting polynomials to zero! If we have a system and the desired state, we can make a new equation to track the difference — and try to make it zero. (This is deeper than just “subtract 6 from both sides” — we’re trying to describe the error!)

But… how do we actually get the error to zero? It’s still a jumble of components: x^2, x and 6 are flying everywhere.

Factor That Mamma Jamma

Factoring to the rescue. My intuition: factoring lets us re-arrange a complex system (x^2 + x - 6) as a bunch of linked, smaller systems.

Imagine taking a pile of sticks (our messy, disorganized system) and standing them up so they support each other, like a teepee:


(That’s a 2-d example, with two sticks).

Remove any stick and the entire structure collapses. If we can rewrite our system:

\displaystyle{x^2 + x - 6 = 0}

as a series of multiplications:

\displaystyle{Component \ A \cdot Component \ B = 0}

we’ve put the sticks in a “teepee”. If Component A or Component B becomes 0, the structure collapses, and we get 0 as a result.

Neat! That is why factoring rocks: we re-arrange our error-system into a fragile teepee, so we can break it. We’ll find what obliterates our errors and puts our system in the ideal state.

Remember: We’re breaking the error in the system, not the system itself.

Onto The Factoring

Learning to “factor an equation” is the process of arranging your teepee. In this case:

\begin{aligned}
x^2 + x - 6 &= (x + 3)(x - 2) \\
&= Component \ A \cdot Component \ B
\end{aligned}

If x = -3 then Component A falls down. If x = 2, Component B falls down. Either value causes the error to collapse, which means our original system (x^2 + x, the one we almost forgot about!) meets our requirements:

  • When x = -3, the error collapses, and we get (-3)^2 + (-3) = 6
  • When x = 2, the error collapses, and we get 2^2 + 2 = 6
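The whole collapse is easy to verify in Python (the function names are mine):

```python
def system(x):
    return x**2 + x           # the original system

def error(x):
    return x**2 + x - 6       # difference from the desired state, 6

def factored(x):
    return (x + 3) * (x - 2)  # the same error, arranged as a "teepee"

for x in (-3, 2):
    assert factored(x) == error(x) == 0  # either root collapses the error
    print(x, system(x))                  # the system lands on 6 both times
```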

Putting It All Together

I’ve wondered about the real purpose of factoring for a long, long time. In algebra class, equations are conveniently set to zero, and we’re not sure why. Here’s what happens in the real world:

  • Define the model: Write how your system behaves (x^2 + x)
  • Define the desired state: What should it equal? (6)
  • Define the error: The error is its own system: Error = actual - desired (i.e., x^2 + x - 6)
  • Factor the error: Rewrite the error as interlocking components: (x + 3)(x - 2)
  • Reduce the error to zero: Zero out one component or the other (x = -3, or x = 2).

When error = 0, our system must be in the desired state. We’re done!

Algebra is pretty darn useful:

  • Our system is a trajectory, the “desired state” is the target. What trajectory hits the target?
  • Our system is our widget sales, the “desired state” is our revenue target. What amount of earnings hits the goal?
  • Our system is the probability of our game winning, the “desired state” is a 50-50 (fair) outcome. What settings make it a fair game?

The idea of “matching a system to its desired state” is just one interpretation of why factoring is useful. If you have more, I’d like to hear them!


A cheatsheet for the process:

Some more food for thought:

  • Multiplication is often seen as AND. Component A must be there AND Component B must be there. If either condition is false, the system breaks.

  • The Fundamental Theorem of Algebra proves you have as many “components” as the degree of the polynomial. If your highest term is x^4, then you can factor into 4 interlocked components (discussion for another day). But this should make sense: if you rewrite an “x^4 system” into multiplications, shouldn’t there be 4 individual “x components” being multiplied? If there were 3, you could never get to x^4, and if there were 5, you’d overshoot and get an x^5 term.

  • Do you have a real-world system in a “teepee” arrangement, where a single failing component collapses the entire structure?

  • The quadratic formula can “autobreak” any system with x^2, x and constant components. There’s formulas for complex systems (with x^3, x^4, or even some x^5 components) but they start to get a bit crazy.

  • Is there any way to prevent a system from having these weak points? (Unfactorable? Non-zeroable?). Don’t forget, we thought systems like x^2 + 1 were “non-zeroable” until imaginary numbers came along.

Happy math.

Finding Unity in the Math Wars

I usually avoid current events, but recent skirmishes in the math world prompted me to chime in. To recap, there’ve been heated discussions about math education and the role of online resources like Khan Academy.

As fun as a good math showdown may appear, there’s a bigger threat: Apathy. And Justin Bieber.

Educators, online or not, don’t compete with each other. They struggle to be noticed in our math-phobic society, where we casually wonder “Should algebra be taught at all?” not “Can algebra be taught better?”.

Entertainment is great; I love Starcraft. But it’s alarming when a prominent learning initiative gets less attention than a throwaway pop song (Super Bass: 268M views in a year; Khan Academy: 175M views in 5 years). Online learning is a rounding error next to Justin Bieber — “Baby” has 700M views alone.

What do we need? The Math Avengers. Different heroes, different tactics, and not without differences… but everyone fighting on the same side. Against Bieber.

I could be walking into a knife fight with an ice cream cone, but I’d like to approach each side with empathy and offer specific suggestions to bridge the gap.

The Big Misunderstanding

Superheroes need a misunderstanding before working together. It’s inevitable, and here’s ours (as a math relationship, of course):

Bad Teacher < Online Learning < Good teacher

The problem is in considering each part separately.

  • Is Khan Academy (free, friendly, always available) better than a mean, uninformed, or absent teacher? Yes!

  • Is an engaging human experience better than learning from a computer? Yes!

But, really, the ultimate solution is Online learning + Good Teachers.

Tactics differ, but we can agree on the mission: give students great online resources, and give teachers tools to augment their classroom.

Why Do I Care?

I love learning. Here’s my brief background so you can root out my biases.

I was a good student. I was on the math team and hummed songs like “Life is a sine-wave, I want to de-rive it all night long…”. I drew comics about sine & cosine, the crimefighting duo. You might say I enjoyed math.

I entered college and was slapped in the face by my freshman year math class.

Professors at big universities must know everything, right? If I didn’t get a concept, something must be wrong with me, right?

I had a WWII-era, finish-half-a-proof-in-class, grouch of a teacher. I bombed the midterm and was distressed. Math… I loved math! I didn’t mind difficulties in Physics or Spanish. But math? What I used to sing and draw cartoons about?

Finals came. While cramming, I found notes online, far more helpful than my book and teacher. I sent an email to the class, gingerly suggesting BY EUCLID YOU NEED TO READ THESE WEBSITES THEY ARE SO MUCH BETTER THAN THE PROFESSOR. The websites turned up on an index card in the computer lab that evening. How many of us were struggling?

I was studying, staring at a blue book when an aha! moment struck. I could see the Matrix: equations were a description of twists, turns and rotations. Their meaning became “obvious” in the way a circle must be round. What else could it be?

I was elated and furious: “Why didn’t they explain it like that the first time?!”

Paranoid I’d forget, I put my notes online and they evolved into this site: insights that actually worked for me. Articles on e, imaginary numbers, and calculus became popular — I think we all crave deep understanding. Bad teaching was a burst of gamma rays: I’m normally mild mannered, but enter Hulk Mode when recalling how my passion nearly died.

My core beliefs:

  • A bad experience can undo years of good ones. Students need resources to sidestep bad teaching.

  • Hard-won insights, sometimes found after years of teaching, need to be shared

  • Learning “success” means having basic skills and the passion to learn more. A year, 5 years from now, do people seek out math? Or at least not hate it? (Compare #ihatemath to #ihategeography)

(Oh, I had great teachers too, like Prof. Kulkarni. The bad one just unlocked the Hulk.)

An Open letter to Khan Academy and Teachers

I recently heard a quote about constructive dialog: “Don’t argue the exact point a person made. Consider their position and respond to the best point they could have made.”

Here are the concerns I see:

Packaging and presentation matters

Yes, other resources and tutorials exist, but there’s power in a giant, organized collection. We visit Wikipedia because we know what to expect, and it’s consistent.

Khan Academy provides consistent, non-judgmental tutorials. There are exercises and discussions for every topic. You don’t need to scour YouTube, digest hour-long calculus lectures, or open up PDF worksheets for practice.

So, let’s use the magic of friendly, exploratory, bite-sized learning of topics.

Community matters

Teachers and online tools don’t “compete” any more than Mr. Rogers and Sesame Street did. They’re both ways to help.

I do think the name “Khan Academy” presents a challenge to community building. Would you rather write for Wikipedia or the Jimmy-Wales-o-pedia?

Wikipedia really feels like a community effort, and though there are alternatives, in general it’s a well-loved resource.

I think teachers may hesitate to use Khan Academy, not out of jealousy, but concern that a single pedagogical approach could overpower all others. Let’s build an online resource that can take input from the math community.

Human interaction matters

It’s easy to misunderstand Khan Academy’s goal. I’ve seen many of their blog posts and videos, and believe Khan Academy wants to work with teachers to promote deep understanding.

But, some news coverage shows students working silently in front of computers in class, not watching at home to free up class time for personal discussions.

The teacher doesn’t appear to be involved or interacting, and that misuse of a learning tool is a nightmare for teachers who want a personal connection. Let’s have an online resource that directly contributes to offline interactions also.

Experience matters

I’ve seen that insights emerge hours (or years) after learning a subject. For example, we’ve “known” since 4th grade what a million and billion are: 1,000,000 and 1,000,000,000.

But do we feel it? How long is a million seconds, roughly? C’mon, guess. Ready? It’s 12 days.

Ok, now how long is a billion seconds? It’s… wait for it… 31 years. 31 years!

That’s the difference between knowing and feeling an idea. Passion comes from feeling.
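The arithmetic behind those numbers, as a quick sketch:

```python
# Feeling the difference between a million and a billion seconds:
seconds_per_day = 60 * 60 * 24
million_days = 10 ** 6 / seconds_per_day
billion_years = 10 ** 9 / (seconds_per_day * 365.25)
print(round(million_days, 1))   # 11.6 -- about 12 days
print(round(billion_years, 1))  # 31.7 -- about 31 years
```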

Teachers draw on years of experience to get ideas to click — let’s feed this back into the online lessons.

Students matter

We teach for the same reason: to help students. Here are a few specific situations to consider.

For many, Khan Academy is their only positive math experience: not teachers, or peers, or parents, but a video. Sure, it’s not the same as an in-person teacher, but it’s miles beyond an absent or hostile one. If an education experience gets someone excited to learn, and coming back to math, we should celebrate.

Remember, despite years of positive experiences and acing tests, a sufficiently bad class nearly drove me away from math. Resources like Khan Academy offer a lifeline: “Even with a bad teacher, I can still learn”.

When someone is interested, we need to feed their curiosity. I get a lot of traffic from Khan Academy comments — how can we help students dive deeper, without making them trudge randomly through the internet?

Lastly, we all learn differently. I generally prefer text to videos (faster to read, and I can “pause” with my eyes and think). Some like the homemade feel of Khan’s videos. Others might like the polished overviews in MinutePhysics. You might prefer 3-act math stories or modeling instruction.

Let’s offer several types of resources for students to enjoy.

Calling the Math Avengers

Still here? Fantastic. To all teachers, online and non:

  • What specific steps can we take to align our efforts?

One idea: Make a curated, collaborative, easy-to-explore teaching resource.

Khan Academy is well-organized: each topic has a video and sample problems. How about sections for complementary teaching styles, projects, and misconceptions?

Imagine a student could select their “Math hero” as Khan Academy or PatrickJMT or James Tanton and see lessons in the style they prefer (like Wikipedia, curate the list to “notable” resources).

Imagine teachers could explore the best in-class activities (“What projects work well for negative numbers?”).

Whatever the style, make it easy for other educators to contribute. Want project-based videos? Sure. Need step-by-step tutorials? Great. Prefer a conceptual overview? No problem.

Each teacher keeps their house style. Let Hulk smash, and Captain America handle the hostage negotiations. Use the hero that suits you.

(It’s a public google doc you can copy and edit)

Perfect? Nope. But it’s a starting point to think about how we can work together.

Let’s focus on the overlap and align our efforts: different heroes, different tactics, and on the same side.

Sign Up for the BetterExplained Email List

Hi all! I’m starting an email list for BetterExplained readers and everyone interested in deep, intuitive learning. Sign up with the form below or click here to sign up.

Why Should I Sign Up?

If you like the blog, you’ll enjoy the email list too. I’ll be sharing the insights & techniques that took me from “huh?” to “aha!” on topics in math, programming, and communication. These periodic emails will include:

  • Exclusive content & previews
  • Short learning tips / essays / additional material that weren’t the right fit for the blog
  • Q&A on topics that are bothering you
  • Announcements & discounts for BetterExplained products I think you’ll enjoy

The blog explains ideas as I wish they were taught. The email list shares information I wish I’d seen.

Why Start an Email List?

A few reasons:

  • Email has better interaction. I can write, you can read & reply at your leisure. Social media seems noisy and non-personal — I want a conversation. (I credit Scott Young and patio11 for jump-starting the email idea).

  • The medium shapes the message. Blog posts are great for long-form, single-topic articles. Email favors shorter, bite-sized pieces. I found myself holding back material because it wasn’t a “blog-fit” (but still valuable!).

  • Better sustainability. The goal of BetterExplained is to be a lifelong project. Keeping in touch ensures the material stays useful and enjoyable, and that products (like the existing ebook + screencasts) dramatically increase your understanding.

I love blogging and will always write. Email is another method to get quick feedback, with blog posts to distill the final results.

I’m so thankful to have a little corner of the internet to share ideas, and I love hearing about and discussing the aha! moments that made things click. Let’s raise the bar for our teaching and learning: keep in touch!


How To Understand Derivatives: The Quotient Rule, Exponents, and Logarithms

Last time we tackled derivatives with a “machine” metaphor. Functions are a machine with an input (x) and output (y) lever. The derivative, dy/dx, is how much “output wiggle” we get when we wiggle the input:

simple function

Now, we can make a bigger machine from smaller ones (h = f + g, h = f * g, etc.). The derivative rules (addition rule, product rule) give us the “overall wiggle” in terms of the parts. The chain rule is special: we can “zoom into” a single derivative and rewrite it in terms of another input (like converting “miles per hour” to “miles per minute” — we’re converting the “time” input).

And with that recap, let’s build our intuition for the advanced derivative rules. Onward!

Division (Quotient Rule)

Ah, the quotient rule — the one nobody remembers. Oh, maybe you memorized it with a song like “Low dee high, high dee low…”, but that’s not understanding!

It’s time to visualize the division rule (who says “quotient” in real life?). The key is to see division as a type of multiplication:

\displaystyle{h = \frac{f}{g} = f \cdot \frac{1}{g}}

derivative product rule

We have a rectangle, we have area, but the sides are “f” and “1/g”. Input x changes off on the side (by dx), so f and g change (by df and dg)… but how does 1/g behave?

Chain rule to the rescue! We can wrap up 1/g into a nice, clean variable and then “zoom in” to see that yes, it has a division inside.

So let’s pretend 1/g is a separate function, m. Inside function m is a division, but ignore that for a minute. We just want to combine two perspectives:

  • f changes by df, contributing area df * m = df * (1 / g)
  • m changes by dm, contributing area dm * f = ?

We turned m into 1/g easily. Fine. But what is dm (how much 1/g changed) in terms of dg (how much g changed)?

We want the difference between neighboring values of 1/g: 1/g and 1/(g + dg). For example:

  • What’s the difference between 1/4 and 1/3? 1/12
  • How about 1/5 and 1/4? 1/20
  • How about 1/6 and 1/5? 1/30

How does this work? We get the common denominator: for 1/3 and 1/4, it’s 12. And the difference between “neighbors” (like 1/3 and 1/4) will be 1 / (common denominator), aka 1 / (x * (x + 1)). See if you can work out why!

\displaystyle{\frac{1}{x + 1} - \frac{1}{x} = \frac{-1}{x(x+1)}}

If we make our derivative model perfect and let the gap between neighbors shrink to nothing (an infinitely small dx), the +1 goes away and we get:

\displaystyle{\frac{-1}{x(x+1)} \sim \frac{-1}{x^2}}

(This is useful as a general fact: the change from 1/100 to 1/101 is roughly one ten-thousandth)

The difference is negative, because the new value (1/4) is smaller than the original (1/3). So what’s the actual change?

  • g changes by dg, so 1/g becomes 1/(g + dg)
  • The instant rate of change is -1/g^2 [as we saw earlier]
  • The total change = dg * rate, or dg * (-1/g^2)

A few gut checks:

  • Why is the derivative negative? As dg increases, the denominator gets larger, the total value gets smaller, so we’re actually shrinking (1/3 to 1/4 is a shrink of 1/12).

  • Why do we have -1/g^2 * dg and not just -1/g^2? (This confused me at first). Remember, -1/g^2 is the chain rule conversion factor between the “g” and “1/g” scales (like saying 1 hour = 60 minutes). Fine. You still need to multiply by how far you went on the “g” scale, aka dg! An hour may be 60 minutes, but how many do you want to convert?

  • Where does dm fit in? m is another name for 1/g. dm represents the total change in 1/g, which as we saw, was -1/g^2 * dg. This substitution trick is used all over calculus to help split up gnarly calculations. “Oh, it looks like we’re doing a straight multiplication. Whoops, we zoomed in and saw one variable is actually a division — change perspective to the inner variable, and multiply by the conversion factor”.

Phew. To convert our “dg” wiggle into a “dm” wiggle we do:

\displaystyle{dm = \frac{-1}{g^2} \cdot dg}

And get:

dh &= (df \cdot m) + (f \cdot dm) \\
dh &= (df \cdot \frac{1}{g}) + (f \cdot \frac{-1}{g^2} \cdot dg)

derivative product rule

Yay! Now, your overeager textbook may simplify this to:

\displaystyle{ dh = \frac{df \cdot g - f \cdot dg}{g^2}}

and it burns! It burns! This “simplification” hides how the division rule is just a variation of the product rule. Remember, there’s still two slivers of area to combine:

  • The “f” (numerator) sliver grows as expected
  • The “g” (denominator) sliver is negative (as g increases, the area gets smaller)

Using your intuition, you know it’s the denominator that’s contributing the negative change.
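If you want to convince yourself numerically, here’s a sketch (my own hypothetical parts, f(x) = x^3 and g(x) = sin(x) + 2) that measures the wiggle of f/g directly and compares it to the two slivers:

```python
import math

# Numerical check of the division rule h = f/g at x = 1, with
# hypothetical parts f(x) = x^3 and g(x) = sin(x) + 2.
def f(x): return x ** 3
def g(x): return math.sin(x) + 2

def wiggle(func, x, dx=1e-6):
    """Output wiggle per input wiggle, measured directly."""
    return (func(x + dx) - func(x - dx)) / (2 * dx)

x = 1.0
df = wiggle(f, x)
dg = wiggle(g, x)
# dh = df * (1/g) + f * (-1/g^2) * dg  -- two slivers, one negative
formula = df * (1 / g(x)) + f(x) * (-1 / g(x) ** 2) * dg
measured = wiggle(lambda t: f(t) / g(t), x)
print(abs(formula - measured) < 1e-6)  # True
```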

Exponents (e^x)

e is my favorite number. It has the property

\displaystyle{\frac{d}{dx} e^x = e^x}

which means, in English, “e changes by 100% of its current amount” (read more).

The “current amount” assumes x is the exponent, and we want changes from x’s point of view (df/dx). What if u(x)=x^2 is the exponent, but we still want changes from x’s point of view?

u &= x^2 \\
\frac{d}{dx} e^u &= ?

It’s the chain rule again — we want to zoom into u, get to x, and see how a wiggle of dx changes the whole system:

  • x changes by dx
  • u changes by du/dx, or d(x^2)/dx = 2x
  • How does e^u change?

Now remember, e^u doesn’t know we want changes from x’s point of view. e only knows its derivative is 100% of the current amount, which is the exponent u:

\displaystyle{ \frac{d(e^u)}{du} = e^u }

The overall change, on a per-x basis is:

\displaystyle{ \frac{d(e^u)}{dx} = \frac{du}{dx} e^u = 2x \cdot e^u = 2x \cdot e^{x^2} }

This confused me at first. I originally thought the derivative would require us to bring down “u”. No — the derivative of e^foo is e^foo. No more.

But if foo is controlled by anything else, then we need to multiply the rate of change by the conversion factor (d(foo)/dx) when we jump into that inner point of view.
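The same conversion-factor logic can be checked by wiggling the input lever. A sketch at the (arbitrarily chosen) point x = 0.5:

```python
import math

# Check d/dx e^(x^2) = 2x * e^(x^2) by wiggling x directly.
x, dx = 0.5, 1e-6
measured = (math.exp((x + dx) ** 2) - math.exp((x - dx) ** 2)) / (2 * dx)
predicted = 2 * x * math.exp(x ** 2)  # (du/dx) * e^u, with u = x^2
print(abs(measured - predicted) < 1e-6)  # True
```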

Natural Logarithm

The derivative of ln(x) is 1/x. It’s usually given as a matter of fact.

My intuition is to see ln(x) as the time needed to grow to x:

  • ln(10) is the time to grow from 1 to 10, assuming 100% continuous growth

Ok, fine. How long does it take to grow to the “next” value, like 11? (x + dx, where dx = 1)

When we’re at x=10, we’re growing exponentially at 10 units per second. It takes roughly 1/10 of a second (1/x) to get to the next value. And when we’re at x=11, it takes 1/11 of a second to get to 12. And so on: the time to the next value is 1/x.

The derivative

\displaystyle{\frac{d}{dx}ln(x) = \frac{1}{x}}

is mainly a fact to memorize, but it makes sense with a “time to grow” interpretation.
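The “time to the next value is 1/x” claim is easy to test: the exact extra time is ln(x + 1) - ln(x), which should hug 1/x. A quick sketch:

```python
import math

# "Time to grow": at 100% continuous growth, the time to go from x to
# x + 1 is exactly ln(x+1) - ln(x), which should be roughly 1/x.
for x in [10, 11, 100]:
    extra_time = math.log(x + 1) - math.log(x)
    print(x, round(extra_time, 4), round(1 / x, 4))
```

The approximation tightens as x grows, just as the derivative model predicts.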

A Hairy Example: x^x

Time to test our intuition: what’s the derivative of x^x?

\displaystyle{\frac{d}{dx} x^x = ? }

This is a bad mamma jamma. There’s two approaches:

Approach 1: Rewrite everything in terms of e.

Oh e, you’re so marvelous:

h(x) &= x^x \\
 &= [e^{ln(x)}]^x \\
 &= e^{ln(x) \cdot x}

Any exponent (a^b) is really just e in different clothing: [e^ln(a)]^b. We’re just asking for the derivative of e^foo, where foo = ln(x) * x.

But wait! Since we want the derivative in terms of “x”, not foo, we need to jump into x’s point of view and multiply by d(foo)/dx:

\frac{d}{dx} ln(x) \cdot x &= x \cdot \frac{1}{x} + ln(x) \cdot 1 \\
&= 1 + ln(x)

The derivative of “ln(x) * x” is just a quick application of the product rule. If h=x^x, the final result is:

\displaystyle{h'(x) = (1 + ln(x)) \cdot e^{ln(x) \cdot x} = (1 + ln(x)) \cdot x^x}

We wrote e^[ln(x)*x] in its original notation, x^x. Yay! The intuition was “rewrite in terms of e and follow the chain rule”.

Approach 2: Independent Points Of View

Remember, derivatives assume each part of the system works independently. Rather than seeing x^x as a giant glob, assume it’s made from two interacting functions: u^v. We can then add their individual contributions. We’re sneaky though, u and v are the same (u = v = x), but don’t let them know!

From u’s point of view, v is just a static power (i.e., if v=3, then it’s u^3) so we have:

\displaystyle{\frac{d}{du} u^v = v \cdot u^{v - 1}}

And from v’s point of view, u is just some static base (if u=5, we have 5^v). We rewrite into base e, and we get

\frac{d}{dv} u^v &= \frac{d}{dv} [e^{ln(u)}]^v \\
&= \frac{d}{dv} e^{ln(u) \cdot v} \\ 
&=  ln(u) \cdot e^{ln(u) \cdot v}

We add each point of view for the total change:

\displaystyle{ln(u) \cdot e^{ln(u) \cdot v} + v \cdot u^{v - 1} }

And the reveal: u = v = x! There’s no conversion factor for this new viewpoint (du/dx = dv/dx = dx/dx = 1), and we have:

h' &= ln(x) \cdot e^{ln(x) \cdot x} + x \cdot x^{x - 1} \\
 &= ln(x) \cdot x^x + x^{x - 1 + 1} \\
 &= ln(x) \cdot x^x + x^x \\
 &= (1 + ln(x)) \cdot x^x

It’s the same as before! I was pretty excited to approach x^x from a few different angles.

By the way, use Wolfram Alpha (like so) to check your work on derivatives (click “show steps”).
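A numeric spot-check works too. A sketch at the (hypothetical) point x = 2, wiggling x^x directly and comparing against (1 + ln(x)) * x^x:

```python
import math

# Both approaches gave h'(x) = (1 + ln(x)) * x^x; check it by wiggling.
x, dx = 2.0, 1e-6
measured = ((x + dx) ** (x + dx) - (x - dx) ** (x - dx)) / (2 * dx)
predicted = (1 + math.log(x)) * x ** x
print(abs(measured - predicted) < 1e-4)  # True
```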

Question: If u were more complex, where would we use du/dx?

Imagine u was a more complex function like u=x^2 + 3: where would we multiply by du/dx?

Let’s think about it: du/dx only comes into play from u’s point of view (when v is changing, u is a static value, and it doesn’t matter that u can be further broken down in terms of x). u’s contribution is

\displaystyle{\frac{d}{du} u^v = v \cdot u^{v - 1}}

if we wanted the “dx” point of view, we’d include du/dx here:

\displaystyle{\frac{d}{du} \frac{du}{dx} u^v = v \cdot u^{v - 1} \frac{du}{dx}}

We’re multiplying by the “du/dx” conversion factor to get things from x’s point of view. Similarly, if v were more complex, we’d have a dv/dx term when computing v’s point of view.

Look what happened — we figured out the generic d/du and converted it into a more specific d/dx when needed.

It’s Easier With Infinitesimals

Separating dy from dx in dy/dx is “against the rules” of limits, but works great with infinitesimals. You can figure out the derivative rules really quickly:

Product rule:

(fg)' &= (f + df)(g + dg) - fg \\
&= [fg + f dg + g df + df dg ]- fg \\
&= f dg + g df + df dg

We set “df * dg” to zero when jumping out of the infinitesimal world and back to our regular number system.

Think in terms of “How much did g change? How much did f change?” and derivatives snap into place much easier. “Divide through” by dx at the end.

Summary: See the Machine

Our goal is to understand calculus intuition, not memorization. I need a few analogies to get me thinking:

  • Functions are machines, derivatives are the “wiggle” behavior
  • Derivative rules find the “overall wiggle” in terms of the wiggles of each part
  • The chain rule zooms into a perspective (hours => minutes)
  • The product rule adds area
  • The quotient rule adds area (but one area contribution is negative)
  • e changes by 100% of the current amount (d/dx e^x = 100% * e^x)
  • natural log is the time for e^x to reach the next value (x units/sec means 1/x to the next value)

With practice, ideas start clicking. Don’t worry about getting tripped up — I still tried to overuse the chain-rule when working with exponents. Learning is a process!

Happy math.

Appendix: Partial Derivatives

Let’s say our function depends on two inputs:

\displaystyle{f(x, y)}
The derivative of f can be seen from x’s point of view (how does f change with x?) or y’s point of view (how does f change with y?). It’s the same idea: we have two “independent” perspectives that we combine for the overall behavior (it’s like combining the point of view of two Solipsists, who think they’re the only “real” people in the universe).

If x and y depend on the same variable (like t, time), we can write the following:

\displaystyle{\frac{df}{dt} = \frac{df}{dx} \cdot \frac{dx}{dt} + \frac{df}{dy} \cdot \frac{dy}{dt}}

It’s a bit of the chain rule — we’re combining two perspectives, and for each perspective, we dive into its root cause (time).

If x and y are otherwise independent, we represent the derivative along each axis in a vector:

\displaystyle{(\frac{df}{dx}, \frac{df}{dy})}

This is the gradient, a way to represent “From this point, if you travel in the x or y direction, here’s how you’ll change”. We combined our 1-dimensional “points of view” to get an understanding of the entire 2d system. Whoa.
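The two-perspective formula is easy to verify numerically. A sketch with my own hypothetical choices, f(x, y) = x * y with x = cos(t) and y = sin(t):

```python
import math

# Multivariable chain rule: df/dt = (df/dx)(dx/dt) + (df/dy)(dy/dt).
# Here f(x, y) = x * y, x = cos(t), y = sin(t), checked at t = 0.7.
t, dt = 0.7, 1e-6

def f(x, y): return x * y

measured = (f(math.cos(t + dt), math.sin(t + dt)) -
            f(math.cos(t - dt), math.sin(t - dt))) / (2 * dt)
# df/dx = y, df/dy = x; dx/dt = -sin(t), dy/dt = cos(t)
predicted = math.sin(t) * -math.sin(t) + math.cos(t) * math.cos(t)
print(abs(measured - predicted) < 1e-6)  # True
```

Each perspective contributes its own term, and the sum matches the direct measurement.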

How To Understand Derivatives: The Product, Power & Chain Rules

The jumble of rules for taking derivatives never truly clicked for me. The addition rule, product rule, quotient rule — how do they fit together? What are we even trying to do?

Here’s my take on derivatives:

  • We have a system to analyze, our function f
  • The derivative f’ (aka df/dx) is the moment-by-moment behavior
  • It turns out f is part of a bigger system (h = f + g)
  • Using the behavior of the parts, can we figure out the behavior of the whole?

Yes. Every part has a “point of view” about how much change it added. Combine every point of view to get the overall behavior. Each derivative rule is an example of merging various points of view.

And why don’t we analyze the entire system at once? For the same reason you don’t eat a hamburger in one bite: small parts are easier to wrap your head around.

Instead of memorizing separate rules, let’s see how they fit together:


The goal is to really grok the notion of “combining perspectives”. This installment covers addition, multiplication, powers and the chain rule. Onward!

Functions: Anything, Anything But Graphs

The default calculus explanation writes “f(x) = x^2” and shoves a graph in your face. Does this really help our intuition?

Not for me. Graphs squash input and output into a single curve, and hide the machinery that turns one into the other. But the derivative rules are about the machinery, so let’s see it!

I visualize a function as the process “input(x) => f => output(y)”.

simple function

It’s not just me. Check out this incredible, mechanical targeting computer (beginning of youtube series).

The machine computes functions like addition and multiplication with gears — you can see the mechanics unfolding!

simple function

Think of function f as a machine with an input lever “x” and an output lever “y”. As we adjust x, f sets the height for y. Another analogy: x is the input signal, f receives it, does some magic, and spits out signal y. Use whatever analogy helps it click.

Wiggle Wiggle Wiggle

The derivative is the “moment-by-moment” behavior of the function. What does that mean? (And don’t mindlessly mumble “The derivative is the slope”. See any graphs around these parts, fella?)

The derivative is how much we wiggle. The lever is at x, we “wiggle” it, and see how y changes. “Oh, we moved the input lever 1mm, and the output moved 5mm. Interesting.”

The result can be written “output wiggle per input wiggle” or “dy/dx” (5mm / 1mm = 5, in our case). This is usually a formula, not a static value, because it can depend on your current input setting.

For example, when f(x) = x^2, the derivative is 2x. Yep, you’ve memorized that. What does it mean?

If our input lever is at x = 10 and we wiggle it slightly (moving it by dx=0.1 to 10.1), the output should change by dy. How much, exactly?

  • We know f’(x) = dy/dx = 2 * x
  • At x = 10 the “output wiggle per input wiggle” is = 2 * 10 = 20. The output moves 20 units for every unit of input movement.
  • If dx = 0.1, then dy = 20 * dx = 20 * .1 = 2

And indeed, the difference between 10^2 and (10.1)^2 is about 2. The derivative estimated how far the output lever would move (a perfect, infinitely small wiggle would move 2 units; we moved 2.01).
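The estimate takes two lines to check. A sketch of the example above:

```python
# The lever at x = 10, wiggled by dx = 0.1:
x, dx = 10, 0.1
estimated_dy = 2 * x * dx             # derivative 2x, scaled by how far we wiggled
actual_dy = (x + dx) ** 2 - x ** 2    # what the output lever actually did
print(estimated_dy)                   # 2.0
print(round(actual_dy, 2))            # 2.01
```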

The key to understanding the derivative rules:

  • Set up your system
  • Wiggle each part of the system separately, see how far the output moves
  • Combine the results

The total wiggle is the sum of wiggles from each part.

Addition and Subtraction

Time for our first system:

\displaystyle{h(x) = f(x) + g(x) }

derivative addition

What happens when the input (x) changes?

In my head, I think “Function h takes a single input. It feeds the same input to f and g and adds the output levers. f and g wiggle independently, and don’t even know about each other!”

Function f knows it will contribute some wiggle (df), g knows it will contribute some wiggle (dg), and we, the prowling overseers that we are, know their individual moment-by-moment behaviors are added:

\displaystyle{dh = df + dg}

derivative addition

Again, let’s describe each “point of view”:

  • The overall system has behavior dh
  • From f’s perspective, it contributes df to the whole [it doesn't know about g]
  • From g’s perspective, it contributes dg to the whole [it doesn't know about f]

Every change to a system is due to some part changing (f and g). If we add the contributions from each possible variable, we’ve described the entire system.

df vs df/dx

Sometimes we use df, other times df/dx — what gives? (This confused me for a while)

  • df is a general notion of “however much f changed”
  • df/dx is a specific notion of “however much f changed, in terms of how much x changed”

The generic “df” helps us see the overall behavior.

An analogy: Imagine you’re driving cross-country and want to measure the fuel efficiency of your car. You’d measure the distance traveled, check your tank to see how much gas you used, and finally do the division to compute “miles per gallon”. You measured distance and gasoline separately — you didn’t jump into the gas tank to get the rate on the go!

In calculus, sometimes we want to think about the actual change, not the ratio. Working at the “df” level gives us room to think about how the function wiggles overall. We can eventually scale it down in terms of a specific input.

And we’ll do that now. The addition rule above can be written, on a “per dx” basis, as:

\displaystyle{\frac{dh}{dx} = \frac{df}{dx} + \frac{dg}{dx}}
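A numeric sanity check, with my own hypothetical parts f(x) = x^2 and g(x) = sin(x):

```python
import math

# Addition rule: the wiggle of h = f + g is the sum of each part's wiggle.
x, dx = 1.0, 1e-6

def wiggle(func):
    """Output wiggle per input wiggle, measured at x."""
    return (func(x + dx) - func(x - dx)) / (2 * dx)

dh = wiggle(lambda t: t ** 2 + math.sin(t))
df_plus_dg = wiggle(lambda t: t ** 2) + wiggle(math.sin)
print(abs(dh - df_plus_dg) < 1e-8)  # True
```

Each part wiggles without knowing about the other, and the whole is just their sum.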

Multiplication (Product Rule)

Next puzzle: suppose our system multiplies parts “f” and “g”. How does it behave?

\displaystyle{h(x) = f(x) \cdot g(x)}

Hrm, tricky — the parts are interacting more closely. But the strategy is the same: see how each part contributes from its own point of view, and combine them:

  • total change in h = f’s contribution (from f’s point of view) + g’s contribution (from g’s point of view)

Check out this diagram:

derivative product rule

What’s going on?

  • We have our system: f and g are multiplied, giving h (the area of the rectangle)
  • Input “x” changes by dx off in the distance. f changes by some amount df (think absolute change, not the rate!). Similarly, g changes by its own amount dg. Because f and g changed, the area of the rectangle changes too.
  • What’s the area change from f’s point of view? Well, f knows he changed by df, but has no idea what happened to g. From f’s perspective, he’s the only one who moved and will add a slice of area = df * g
  • Similarly, g doesn’t know how f changed, but knows he’ll add a slice of area “dg * f”

The overall change in the system (dh) is the two slices of area:

\displaystyle{dh = f \cdot dg + g \cdot df}

Now, like our miles per gallon example, we “divide by dx” to write this in terms of how much x changed:

\displaystyle{\frac{dh}{dx} = f \cdot \frac{dg}{dx} + g \cdot \frac{df}{dx}}

(Aside: Divide by dx? Engineers will nod, mathematicians will frown. Technically, df/dx is not a fraction: it’s the entire operation of taking the derivative (with the limit and all that). But infinitesimal-wise, intuition-wise, we are “scaling by dx”. I’m a smiler.)

The key to the product rule: add two “slivers of area”, one from each point of view.

Gotcha: But isn’t there some effect from both f and g changing simultaneously (df * dg)?

Yep. However, this area is an infinitesimal * infinitesimal (a “2nd-order infinitesimal”) and invisible at the current level. It’s a tricky concept, but (df * dg) / dx vanishes compared to normal derivatives like df/dx. We vary f and g independently, combine the results, and ignore the effect of them moving together.
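Made-up numbers make the two slivers (and the negligible corner) concrete:

```python
# The two slivers of area, plus the tiny df*dg corner (made-up numbers):
f, g, df, dg = 3.0, 5.0, 0.001, 0.002
total_change = (f + df) * (g + dg) - f * g
slivers = f * dg + g * df       # each part's point of view
corner = df * dg                # the "2nd-order" piece, vanishingly small
print(round(slivers, 6))        # 0.011
print(round(corner, 8))         # 2e-06
```

The leftover corner shrinks much faster than the slivers as the wiggles shrink, which is why the product rule can safely drop it.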

The Chain Rule: It’s Not So Bad

Let’s say g depends on f, which depends on x:

\displaystyle{y = g(f(x))}

derivative product rule

The chain rule lets us “zoom into” a function and see how an initial change (x) can affect the final result down the line (g).

Interpretation 1: Convert the rates

A common interpretation is to multiply the rates:

\displaystyle{\frac{dg}{dx} = \frac{dg}{df} \cdot \frac{df}{dx}}

x wiggles f. This creates a rate of change of df/dx, which wiggles g by dg/df. The entire wiggle is then:

\displaystyle{\frac{dg}{df} \cdot \frac{df}{dx}}

This is similar to the “factor-label” method in chemistry class:

\displaystyle{\frac{miles}{second} = \frac{miles}{hour} \cdot \frac{1 \ hour}{60 \ minutes} \cdot \frac{1 \ minute}{60 \ seconds} = \frac{miles}{hour} \cdot \frac{1}{3600}}

If your “miles per second” rate changes, multiply by the conversion factors to get the new “miles per hour”. The second doesn’t know about the hour directly — it goes through the second => minute => hour conversion.

Similarly, g doesn’t know about x directly, only f. Function g knows it should scale its input by dg/df to get the output. The initial rate (df/dx) gets modified as it moves up the chain.

Interpretation 2: Convert the wiggle

I prefer to see the chain rule on the “per-wiggle” basis:

  • x wiggles by dx, so
  • f wiggles by df, so
  • g wiggles by dg

Cool. But how are they actually related? Oh yeah, the derivative! (It’s the output wiggle per input wiggle):

\displaystyle{df = dx \cdot \frac{df}{dx}}

Remember, the derivative of f (df/dx) is how much to scale the initial wiggle. And the same happens to g:

\displaystyle{dg = df \cdot \frac{dg}{df}}

It will scale whatever wiggle comes along its input lever (f) by dg/df. If we write the df wiggle in terms of dx:

\displaystyle{dg = (dx \cdot \frac{df}{dx}) \cdot \frac{dg}{df}}

We have another version of the chain rule: dx starts the chain, which results in some final result dg. If we want the final wiggle in terms of dx, divide both sides by dx:

\displaystyle{\frac{dg}{dx} = \frac{df}{dx} \cdot \frac{dg}{df}}

The chain rule isn’t just factor-label unit cancellation — it’s the propagation of a wiggle, which gets adjusted at each step.

The chain rule works for several variables (a depends on b depends on c), just propagate the wiggle as you go.

Try to imagine “zooming into” each variable’s point of view. Starting from dx and looking up, you see the entire chain of transformations needed before the impulse reaches g.
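Here’s a concrete wiggle-chase (my own sketch, with made-up linear functions so the scaling is exact):

```python
# Chase one wiggle through a chain: f(x) = 3x + 1, then g(f) = 2f.
# A dx wiggle gets scaled by df/dx = 3, then by dg/df = 2.

def f(x): return 3 * x + 1
def g(f_val): return 2 * f_val

x, dx = 5.0, 0.001
df = f(x + dx) - f(x)          # the wiggle after f
dg = g(f(x + dx)) - g(f(x))    # the wiggle after g

print(round(df / dx, 6))   # 3.0
print(round(dg / dx, 6))   # 6.0, i.e. 3 * 2
```

The initial dx gets scaled at each link in the chain, and the final rate is the product of the scalings.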

Chain Rule: Example Time

Let’s say we put a “squaring machine” in front of a “cubing machine”:

input(x) => f:x^2 => g:f^3 => output(y)

f:x^2 means f squares its input. g:f^3 means g cubes its input, the value of f. For example:

input(2) => f(2) => g(4) => output:64

Start with 2, f squares it (2^2 = 4), and g cubes this (4^3 = 64). It’s a 6th power machine:

\displaystyle{g(f(x)) = (x^2)^3}

And what’s the derivative?

\displaystyle{ \frac{dg}{dx} = \frac{dg}{df} \cdot \frac{df}{dx}}

  • f changes its input wiggle by df/dx = 2x
  • g changes its input wiggle by dg/df = 3f^2

The final change is:

\displaystyle{3f^2 \cdot 2x = 3(x^2)^2 \cdot 2x = 3x^4 \cdot 2x = 6x^5}
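We can sanity-check that 6x^5 result with a rough numeric derivative (a sketch of my own, not from the article’s toolbox):

```python
# Check dg/dx = 6x^5 for the squaring-then-cubing machine g(f(x)) = (x^2)^3.

def f(x): return x**2
def g(f_val): return f_val**3

def chain_rule(x):
    df_dx = 2 * x            # f scales its input wiggle by 2x
    dg_df = 3 * f(x)**2      # g scales its input wiggle by 3f^2
    return dg_df * df_dx     # = 6x^5

def numeric(x, dx=1e-6):
    return (g(f(x + dx)) - g(f(x))) / dx

print(chain_rule(2.0))       # 192.0 = 6 * 2^5
print(round(numeric(2.0)))   # 192
```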

Chain Rule: Gotchas

Functions treat their inputs like a blob

In the example, g’s derivative (“x^3 becomes 3x^2”) doesn’t refer to the original “x”, just whatever the input was (foo^3 becomes 3*foo^2). The input was f, and g treats f as a single value. Later on, we scurry in and rewrite f in terms of x. But g has no involvement with that — it doesn’t care that f can be rewritten in terms of smaller pieces.

In many examples, the variable “x” is the “end of the line”.

Questions ask for df/dx, i.e. “Give me changes from x’s point of view”. Now, x could depend on some deeper variable, but that’s not being asked for. It’s like saying “I want miles per hour. I don’t care about miles per minute or miles per second. Just give me miles per hour”. df/dx means “stop looking at inputs once you get to x”.

How come we multiply derivatives with the chain rule, but add them for the others?

The regular rules are about combining points of view to get an overall picture. What change does f see? What change does g see? Add them up for the total.

The chain rule is about going deeper into a single part (like f) and seeing if it’s controlled by another variable. It’s like looking inside a clock and saying “Hey, the minute hand is controlled by the second hand!”. We’re staying inside the same part.

Sure, eventually this “per-second” perspective of f could be added to some perspective from g. Great. But the chain rule is about diving deeper into “f’s” root causes.

Power Rule: Oft Memorized, Seldom Understood

What’s the derivative of x^4? 4x^3? Great. You brought down the exponent and subtracted one. Now explain why!

Hrm. There’s a few approaches, but here’s my new favorite: x^4 is really x * x * x * x. It’s the multiplication of 4 “independent” variables. Each x doesn’t know about the others, it might as well be x * u * v * w.

Now think about the first x’s point of view:

  • It changes from x to x + dx
  • The change in the overall function is [(x + dx) - x][u * v * w] = dx[u * v * w]
  • The change on a “per dx” basis is [u * v * w]


  • From u’s point of view, it changes by du. It contributes (du/dx)*[x * v * w] on a “per dx” basis
  • v contributes (dv/dx) * [x * u * w]
  • w contributes (dw/dx) * [x * u * v]

The curtain is unveiled: x, u, v, and w are the same! The “point of view” conversion factor is 1 (du/dx = dv/dx = dw/dx = dx/dx = 1), and the total change is

\displaystyle{(x \cdot x \cdot x) + (x \cdot x \cdot x) + (x \cdot x \cdot x) + (x \cdot x \cdot x) = 4 x^3}

In a sentence: the derivative of x^4 is 4x^3 because x^4 has four identical “points of view” which are being combined. Booyeah!
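Here’s that “four points of view” argument as a quick numeric check (my sketch, not the formal proof):

```python
# The derivative of x^4 as four identical points of view: each copy of x
# contributes the product of the other three factors (x * x * x).

def four_views(x):
    return sum(x * x * x for _ in range(4))   # 4x^3

def numeric(x, dx=1e-6):
    return ((x + dx)**4 - x**4) / dx

print(four_views(3.0))          # 108.0
print(round(numeric(3.0), 2))   # 108.0
```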

Take A Breather

I hope you’re seeing the derivative in a new light: we have a system of parts, we wiggle our input and see how the whole thing moves. It’s about combining perspectives: what does each part add to the whole?

In the follow-up article, we’ll look at even more powerful rules (exponents, quotients, and friends). Happy math.

Learning To Learn: Embrace Analogies

Why do analogies work so well? They’re building blocks for our thoughts, written in the associative language of our brains.

At first, I thought analogies had to be perfect models of the idea they explained. Nope.

“All models are wrong, but some are useful” – George Box

Analogies are handles to grasp a larger, more slippery idea. They’re a raft to cross a river, and can be abandoned once on the other side. Unempathetic experts may think the raft is useless, since they no longer use it, or perhaps they were such marvelous swimmers it was never needed!

Analogies are perfectly fine. But why do they work so well?

Our brains are association machines. Connections, relationships, patterns — we need meaning! Yet we present topics as if we could be programmed with raw information.

Consider the typical language class:

  • Here’s the grammar
  • Here’s the vocabulary
  • Put the vocab in the grammar and go!

(We know how well that works). The mistake is thinking direct study of the grammar and vocabulary will build fluency — it’s a tough slog. I suspect a class of 80% speaking, listening, making idioms, building pronunciation and 20% vocabulary/grammar does much better than the reverse.

Start with simple analogies you deeply understand, then attach extra details.

Here’s an example: I can casually describe i (the imaginary number) as the square root of -1 and you can blindly accept it.

But you won’t really believe me until I start down the path of “Hey, numbers can be 2 dimensional, and i is a rotation into the 2nd dimension”. The word “rotation” stretches our brain about what a number could be — the number line may not be the final step. We’re having a real discussion and can start learning!

See, you’re extremely fluent with the idea of a line, and the idea of a second dimension, and we can work “i is a rotation” into that framework. In computer terms: we are programming with the native language of the machine. Our brain thinks with connections, so explain new data in terms of existing connections!

Although a subject can be distilled into rules and facts, drinking this concentrated math isn’t the best way to enjoy it. It’s not how our brains work, and presenting raw data suffers from a painful translation step.

I don’t think of algebra, trig and other math as a table of equations. It’s a web of connections and insights. But why show facts and hope you recreate the mental model in my head, instead of describing it directly?

No, no — let’s have a brain-to-brain. Here’s the analogies in my head, I want you to have them too.

Site Update: Ahas and FAQs for articles

I’ve just added a new feature to the site: an Aha / FAQ section for each article.

You can add an aha! moment or question, and vote / discuss them individually. This extends the regular comments, making mini-posts for key ideas in an article. Why?

Mine the gold in the comments

Several articles have awesome discussions, like understanding e. There’s gems, but unfortunately they’re buried inside hundreds of comments, which is tough to read through.

If we can extract and rate key insights, we can build a “living FAQ” of what’s most helpful. These are good branching points for new articles as well.

Better collaboration

I love chatting math and finding new learning approaches. A few examples:

  • Audrey McGoldrick made an awesome animated slideshow explaining the introduction to calculus.

  • Joshua Zucker helped me refine my recent “functions are plates, derivatives are breaking plates into shards, integrals are weighing the pieces” analogy, and corrected a huge misconception about integrals and anti-derivatives.

  • Stan and YatharthROCK have been helping me develop ideas (thanks guys).

The aha / question area is like a mini-forum to discuss analogies. (Teachers, please ransack these insights and use whatever is helpful — every article is free to use, print, mime, etc. for non-commercial use). The goal is to enable conversations about what is actually working.

Continual improvement

I have a nefarious plan for the widget: gather improvements for each article! Knowing exactly what parts of the article helped (or didn’t) makes it easier to keep revising: I want the analogies to sing.

Try it out

The aha section is new, there’ll be some bugs, but I’d love your feedback anyway:

  1. Share what really worked. As you read articles, post what analogies helped the most.

  2. Ask questions. Have a question that’s been bothering you? Add it!

  3. Vote on what’s helping. Just click the heart to rate an insight or question.

The aha section isn’t a replacement for comments — it’s a way to organize the best parts. If there’s other types of “aha” items you’d like organized (Followup:, Example:, etc.) let me know!

Calculus: Building Intuition for the Derivative

How do you wish the derivative was explained to you? Here's my take.

Psst! The derivative is the heart of calculus, buried inside this definition:

\displaystyle{ f'(x) =\lim_{dx\to 0} \frac{f(x+dx)-f(x)}{dx}}

But what does it mean?

Let's say I gave you a magic newspaper that listed the daily stock market changes for the next few years (+1% Monday, -2% Tuesday...). What could you do?

Well, you'd apply the changes one-by-one, plot out future prices, and buy low / sell high to build your empire. You could even hire away the monkeys who currently throw darts at newspapers.

Others call the derivative "the slope of a function" -- it's so bland! Like the stock list, the derivative is a total, predictive understanding of a system. You can plot the past/present/future, find minimums/maximums, and yes, staff your simian workforce.

Step away from the gnarly equation. Equations exist to convey ideas: understand the idea, not the grammar.

Derivatives create a perfect model of change from an imperfect guess.

This result came over thousands of years of thinking, from Archimedes to Newton. Let's look at the analogies behind it.

We all live in a shiny continuum

Infinity is a constant source of paradoxes ("headaches"):

  • A line is made up of points? Sure.
  • So there's an infinite number of points on a line? Yep.
  • How do you cross a room when there's an infinite number of points to visit? (Gee, thanks Zeno).

And yet, we move. My intuition is to fight infinity with infinity. Sure, there's an infinity of points between 0 and 1. But I move two infinities of points per second (somehow!) and I cross the gap in half a second.

Distance has infinite points, motion is possible, therefore motion is in terms of "infinities of points per second".

Instead of thinking of differences ("How far to the next point?") we can compare rates ("How fast are you moving through this continuum?").

It's strange, but you can see 10/5 as "I need to travel 10 'infinities' in 5 segments of time. To do this, I travel 2 'infinities' for each unit of time".

Analogy: See division as a rate of motion through a continuum of points

What's after zero?

Another brain-buster: What number comes after zero? .01? .0001?

Hrm. Anything you can name, I can name smaller (I'll just halve your number... nyah!).

Even though we can't calculate the number after zero, it must be there, right? Like demons of yore, it's the "number that cannot be written, lest ye be smitten".

Call the gap to the next number "dx". I don't know exactly how big it is, but it's there!

Analogy: dx is a "jump" to the next number in the continuum.

Measurements depend on the instrument

The derivative predicts change. Ok, how do we measure speed (change in distance)?

Officer: Do you know how fast you were going?

Driver: I have no idea.

Officer: 95 miles per hour.

Driver: But I haven't been driving for an hour!

We clearly don't need a "full hour" to measure your speed. We can take a before-and-after measurement (over 1 second, let's say) and get your instantaneous speed. If you moved 140 feet in one second, you're going ~95mph. Simple, right?

Not exactly. Imagine a video camera pointed at Clark Kent (Superman's alter-ego). The camera records 24 pictures/sec (roughly 40ms per photo) and Clark seems still. On a second-by-second basis, he's not moving, and his speed is 0mph.

Wrong again! Between each photo, within that 40ms, Clark changes to Superman, solves crimes, and returns to his chair for a nice photo. We measured 0mph but he's really moving -- he goes too fast for our instruments!

Analogy: Like a camera watching Superman, the speed we measure depends on the instrument!

Running the Treadmill

We're nearing the chewy, slightly tangy center of the derivative. We need before-and-after measurements to detect change, but our measurements could be flawed.

Imagine a shirtless Santa on a treadmill (go on, I'll wait). We're going to measure his heart rate in a stress test: we attach dozens of heavy, cold electrodes and get him jogging.

Santa huffs, he puffs, and his heart rate shoots to 190 beats per minute. That must be his "under stress" heart rate, correct?

Nope. See, the very presence of stern scientists and cold electrodes increased his heart rate! We measured 190bpm, but who knows what we'd see if the electrodes weren't there! Of course, if the electrodes weren't there, we wouldn't have a measurement.

What to do? Well, look at the system:

  • measurement = actual amount + measurement effect

Ah. After lots of studies, we may find "Oh, each electrode adds 10bpm to the heartrate". We make the measurement (imperfect guess of 190) and remove the effect of electrodes ("perfect estimate").

Analogy: Remove the "electrode effect" after making your measurement

By the way, the "electrode effect" shows up everywhere. Research studies have the Hawthorne Effect where people change their behavior because they are being studied. Gee, it seems everyone we scrutinize sticks to their diet!

Understanding the derivative

Armed with these insights, we can see how the derivative models change:

(Figure: derivative explanation)

Start with some system to study, f(x):

  1. Change by the smallest amount possible (dx)
  2. Get the before-and-after difference: f(x + dx) - f(x)
  3. We don't know exactly how small "dx" is, and we don't care: get the rate of motion through the continuum: [f(x + dx) - f(x)] / dx
  4. This rate, however small, has some error (our cameras are too slow!). Predict what happens if the measurement were perfect, if dx wasn't there.

The magic's in the final step: how do we remove the electrodes? We have two approaches:

  • Limits: what happens when dx shrinks to nothingness, beyond any error margin?
  • Infinitesimals: What if dx is a tiny number, undetectable in our number system?

Both are ways to formalize the notion of "How do we throw away dx when it's not needed?".

My pet peeve: Limits are a modern formalism, they didn't exist in Newton's time. They help make dx disappear "cleanly". But teaching them before the derivative is like showing a steering wheel without a car! It's a tool to help the derivative work, not something to be studied in a vacuum.

An Example: f(x) = x^2

Let's shake loose the cobwebs with an example. How does the function f(x) = x^2 change as we move through the continuum?

(Figure: measuring the change in f(x) = x^2, then removing dx)

\displaystyle{\frac{f(x+dx) - f(x)}{dx} = \frac{(x+dx)^2 - x^2}{dx} = \frac{2x \cdot dx + dx^2}{dx} = 2x + dx}

\displaystyle{f'(x) = 2x}

Note the difference in the last 2 equations:

  • One has the error built in (dx)
  • The other has the "true" change, where dx = 0 (our measurements have no effect on the outcome)

Time for real numbers. Here's the values for f(x) = x^2, with intervals of dx = 1:

  • 1, 4, 9, 16, 25, 36, 49, 64...

The absolute change between each result is:

  • 1, 3, 5, 7, 9, 11, 13, 15...

(Here, the absolute change is the "speed" between each step, where the interval is 1)

Consider the jump from x=2 to x=3 (3^2 - 2^2 = 5). What is "5" made of?

  • Measured rate = Actual Rate + Error
  • 5 = 2x + dx
  • 5 = 2(2) + 1

Sure, we measured a "5 units moved per second" because we went from 4 to 9 in one interval. But our instruments trick us! 4 units of speed came from the real change, and 1 unit was due to shoddy instruments (1.0 is a large jump, no?).

If we restrict ourselves to integers, 5 is the perfect speed measurement from 4 to 9. There's no "error" in assuming dx = 1 because that's the true interval between neighboring points.

But in the real world, measuring every 1.0 seconds is too slow. What if our dx was 0.1? What speed would we measure at x=2?

Well, we examine the change from x=2 to x=2.1:

  • 2.1^2 - 2^2 = 0.41

Remember, 0.41 is what we changed in an interval of 0.1. Our speed-per-unit is 0.41 / .1 = 4.1. And again we have:

  • Measured rate = Actual Rate + Error
  • 4.1 = 2x + dx

Interesting. With dx=0.1, the measured and actual rates are close (4.1 to 4, 2.5% error). When dx=1, the rates are pretty different (5 to 4, 25% error).

Following the pattern, we see that throwing out the electrodes (letting dx=0) reveals the true rate of 2x.

In plain English: We analyzed how f(x) = x^2 changes, found an "imperfect" measurement of 2x + dx, and deduced a "perfect" model of change as 2x.
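The whole "Measured rate = Actual rate + Error" pattern is easy to replay numerically. A quick sketch of my own (the numbers match the ones above):

```python
# Measure the rate of f(x) = x^2 at x = 2 with different instrument
# speeds (dx). The measured rate is always 2x + dx: actual rate + error.

def measured_rate(x, dx):
    return ((x + dx)**2 - x**2) / dx

for dx in (1.0, 0.1, 0.01):
    print(dx, round(measured_rate(2.0, dx), 6))   # 5.0, 4.1, 4.01
```

As dx shrinks, the error term fades and the measurement closes in on the actual rate of 2x = 4.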

The derivative as "continuous division"

I see the integral as better multiplication, where you can apply a changing quantity to another.

The derivative is "better division", where you get the speed through the continuum at every instant. Something like 10/5 = 2 says "you have a constant speed of 2 through the continuum".

When your speed changes as you go, you need to describe your speed at each instant. That's the derivative.

If you apply this changing speed to each instant (take the integral of the derivative), you recreate the original behavior, just like applying the daily stock market changes to recreate the full price history. But this is a big topic for another day.
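In code, that replay is just cumulative addition (a sketch; the prices are made up for illustration):

```python
# Take differences (the "derivative"), then re-apply them one by one
# (the "integral") to rebuild the original price history.

prices = [100, 101, 99, 103, 104]                       # made-up data
changes = [b - a for a, b in zip(prices, prices[1:])]   # [1, -2, 4, 1]

rebuilt = [prices[0]]
for c in changes:
    rebuilt.append(rebuilt[-1] + c)

print(rebuilt == prices)   # True
```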

Gotcha: The Many meanings of "Derivative"

You'll see "derivative" in many contexts:

  • "The derivative of x^2 is 2x" means "At every point, we are changing by a speed of 2x (twice the current x-position)". (General formula for change)

  • "The derivative is 44" means "At our current location, our rate of change is 44." When f(x) = x^2, at x=22 we're changing at 44 (Specific rate of change).

  • "The derivative is dx" may refer to the tiny, hypothetical jump to the next position. Technically, dx is the "differential" but the terms get mixed up. Sometimes people will say "derivative of x" and mean dx.

Gotcha: Our models may not be perfect

We found the "perfect" model by making a measurement and improving it. Sometimes, this isn't good enough -- we're predicting what would happen if dx wasn't there, but added dx to get our initial guess!

Some ill-behaved functions defy the prediction: there's a difference between removing dx with the limit and what actually happens at that instant. These are called "discontinuous" functions, which is essentially "cannot be modeled with limits". As you can guess, the derivative doesn't work on them because we can't actually predict their behavior.

Discontinuous functions are rare in practice, and often exist as "Gotcha!" test questions ("Oh, you tried to take the derivative of a discontinuous function, you fail"). Realize the theoretical limitation of derivatives, and then realize their practical use in measuring nearly every natural phenomenon. Nearly every function you'll see (sine, cosine, e, polynomials, etc.) is continuous.

Gotcha: Integration doesn't really exist

The relationship between derivatives, integrals and anti-derivatives is nuanced (and I got it wrong originally). Here's a metaphor. Start with a plate, your function to examine:

  • Differentiation is breaking the plate into shards. There is a specific procedure: take a difference, find a rate of change, then assume dx isn't there.
  • Integration is weighing the shards: your original function was "this" big. There's a procedure, cumulative addition, but it doesn't tell you what the plate looked like.
  • Anti-differentiation is figuring out the original shape of the plate from the pile of shards.

There's no algorithm to find the anti-derivative; we have to guess. We make a lookup table with a bunch of known derivatives (original plate => pile of shards) and look at our existing pile to see if it's similar. "Let's find the integral of 10x. Well, it looks like 2x is the derivative of x^2. So... scribble scribble... 10x is the derivative of 5x^2.".

Finding derivatives is mechanics; finding anti-derivatives is an art. Sometimes we get stuck: we take the changes, apply them piece by piece, and mechanically reconstruct a pattern. It might not be the "real" original plate, but is good enough to work with.
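Here's that guess-and-check lookup idea as a toy sketch of my own (monomials a*x^n only, stored as (a, n) pairs; the names are hypothetical):

```python
# Toy anti-differentiation by pattern-matching: we only know how to
# differentiate monomials a*x^n (stored as (a, n) pairs), so we guess a
# candidate "plate" and check that its "shards" match our pile.

def derivative(term):
    a, n = term
    return (a * n, n - 1)

def antiderivative(term):
    a, n = term
    candidate = (a / (n + 1), n + 1)      # guess from the known pattern
    assert derivative(candidate) == term  # check the shards match
    return candidate

print(antiderivative((10, 1)))   # (5.0, 2), i.e. 10x is the shards of 5x^2
```

Real anti-differentiation is the same game with a much bigger lookup table, and sometimes no entry matches.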

Another subtlety: aren't the integral and anti-derivative the same? (That's what I originally thought)

Yes, but this isn't obvious: it's the fundamental theorem of calculus! (It's like saying "Aren't a^2 + b^2 and c^2 the same? Yes, but this isn't obvious: it's the Pythagorean theorem!"). Thanks to Joshua Zucker for helping sort me out.

Reading math

Math is a language, and I want to "read" calculus (not "recite" calculus, the way we might recite medieval German hymns without understanding them). I need the message behind the definitions.

My biggest aha! was realizing the transient role of dx: it makes a measurement, and is removed to make a perfect model. Limits/infinitesimals are a formalism, we can't get caught up in them. Newton seemed to do ok without them.

Armed with these analogies, other math questions become interesting:

  • How do we measure different sizes of infinity? (In some sense they're all "infinite", in other senses the range (0,1) is smaller than (0,2))
  • What are the real rules about making "dx go away"? (How do infinitesimals and limits really work?)
  • How do we describe numbers without writing them down? "The next number after 0" is the beginnings of analysis (which I want to learn).

The fundamentals are interesting when you see why they exist. Happy math.