The **gradient** is a fancy word for derivative, or the rate of change of a function. It’s a vector (a direction to move) that

- Points in the direction of greatest increase of a function (intuition on why)
- Is zero at a local maximum or local minimum (because there is no single direction of increase)

The term "gradient" is typically used for functions with several inputs and a single output (a scalar field). Yes, you can say a line has a gradient (its slope), but using "gradient" for single-variable functions is unnecessarily confusing. Keep it simple.

“Gradient” can refer to gradual changes of color, but we’ll stick to the math definition if that’s ok with you. You’ll see the meanings are related.

## Properties of the Gradient

Now that we know the gradient is the derivative of a multi-variable function, letโs derive some properties.

The regular, plain-old derivative gives us the rate of change of a single variable, usually x. For example, dF/dx tells us how much the function F changes for a change in x. But if a function takes multiple variables, such as x and y, it will have multiple derivatives: the value of the function will change when we โwiggleโ x (dF/dx) and when we wiggle y (dF/dy).

We can represent these multiple rates of change in a vector, with one component for each derivative. Thus, a function that takes 3 variables will have a gradient with 3 components:

- F(x) has one variable and a single derivative: dF/dx
- F(x,y,z) has three variables and three derivatives: (dF/dx, dF/dy, dF/dz)

The gradient of a multi-variable function has a component for each direction.
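As a quick sketch of this idea (the helper below is my own illustration, not from the article), each component of the gradient can be approximated with a finite difference, one per variable:

```python
# Numerical gradient: one central-difference partial derivative per variable,
# collected into a vector. The names and the sample F are illustrative choices.
def gradient(F, point, h=1e-5):
    grad = []
    for i in range(len(point)):
        forward, backward = list(point), list(point)
        forward[i] += h   # wiggle variable i forward...
        backward[i] -= h  # ...and backward
        grad.append((F(forward) - F(backward)) / (2 * h))
    return grad

# F(x, y, z) = x**2 + y**2 + z**2 has gradient (2x, 2y, 2z).
F = lambda p: p[0]**2 + p[1]**2 + p[2]**2
print(gradient(F, [1.0, 2.0, 3.0]))  # approximately [2.0, 4.0, 6.0]
```

Notice the output has one component per input variable, exactly as described above.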

And just like the regular derivative, the gradient points in the direction of greatest increase (the following article explains why). However, now that we have multiple directions to consider (x, y and z), the direction of greatest increase is no longer simply โforwardโ or โbackwardโ along the x-axis, like it is with functions of a single variable.

If we have two variables, then our 2-component gradient can specify any direction on a plane. Likewise, with 3 variables, the gradient can specify any direction in 3D space to move to increase our function.

## A Twisted Example

I’m a big fan of examples to help solidify an explanation. Suppose we have a magical oven, with coordinates written on it and a special display screen.

We can type any 3 coordinates (like “3,5,2”) and the display shows us the **gradient** of the temperature at that point.

The oven also comes with a convenient clock. Unfortunately, the clock comes at a price: the temperature inside the oven varies drastically from location to location. But this was well worth it: we really wanted that clock.

With me so far? We type in any coordinate, and the oven spits out the gradient at that location.

Be careful not to confuse the coordinates and the gradient. The **coordinates are the current location**, measured on the x, y, and z axes. The **gradient is a direction to move** from our current location, such as up, down, left, or right.

Now suppose we are in need of psychiatric help and put the Pillsbury Dough Boy inside the oven because we think he would taste good. He’s made of cookie dough, right? We place him in a random location inside the oven, and our goal is to cook him as fast as possible. The gradient can help!

The gradient at any location points in the direction of **greatest increase** of a function. In this case, our function measures temperature. So, the gradient tells us which direction to move the doughboy to get him to a location with a higher temperature, to cook him even faster. Remember that the gradient does **not** give us the coordinates of where to go; it gives us the **direction to move** to increase our temperature.

Thus, we would start at a random point like (3,5,2) and check the gradient. In this case, the gradient there is (3,4,5). Now, we wouldn’t actually move an entire 3 units to the right, 4 units back, and 5 units up. The gradient is just a direction, so we’d **follow this trajectory for a tiny bit**, and then check the gradient again.

We get to a new point, pretty close to our original, which has its own gradient. This new gradient is the new best direction to follow. We’d keep repeating this process: move a bit in the gradient direction, check the gradient, and move a bit in the new gradient direction. Every time we nudge along and follow the gradient, we get to a warmer and warmer location.
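This nudge-and-recheck loop is exactly gradient ascent. A minimal sketch (the temperature field and step size here are my own assumptions, not from the article):

```python
# Gradient ascent: repeatedly take a small step in the gradient direction,
# then re-check the gradient at the new location.
def ascend(grad_F, start, step=0.1, iterations=100):
    x, y = start
    for _ in range(iterations):
        gx, gy = grad_F(x, y)  # "check the gradient" at the current spot
        x += step * gx         # move a tiny bit along it
        y += step * gy
    return x, y

# Illustrative temperature field with its hottest point at (2, 3):
# F(x, y) = -(x - 2)**2 - (y - 3)**2, so grad F = (-2(x - 2), -2(y - 3)).
grad_F = lambda x, y: (-2 * (x - 2), -2 * (y - 3))

print(ascend(grad_F, (0.0, 0.0)))  # approaches (2.0, 3.0), where the gradient vanishes
```

In practice the step size matters: too large and you overshoot the hottest spot, too small and it takes many nudges to get there.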

Eventually, we’d get to the hottest part of the oven and that’s where we’d stay, about to enjoy our fresh cookies.

## Don’t eat that cookie!

But before you eat those cookies, let’s make some observations about the gradient. That’s more fun, right?

First, when we reach the hottest point in the oven, what is the gradient there?

Zero. Nada. Zilch. Why? Well, once you are at the maximum location, there is **no direction of greatest increase**. Any direction you follow will lead to a **decrease** in temperature. It’s like being at the top of a mountain: any direction you move is downhill. A zero gradient tells you to stay put: you are at the max of the function, and can’t do better.

But what if there are two nearby maximums, like two mountains next to each other? You could be at the top of one mountain, but have a bigger peak next to you. In order to get to the highest point, you have to go downhill first.

Ah, now we are venturing into the not-so-pretty underbelly of the gradient. Finding the maximum in regular (single variable) functions means we find all the places where the derivative is zero: there is no direction of greatest increase. If you recall, the regular derivative will point to **local** minimums and maximums, and the absolute max/min must be tested from these candidate locations.

The same principle applies to the gradient, a generalization of the derivative. You must find the locations where the gradient is zero, then test these candidate points to see which one is the global maximum. Again, the top of each hill has a zero gradient, so you need to compare the height at each to see which one is higher. Now that we have cleared that up, go enjoy your cookie.
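The candidate-testing idea can be sketched in a few lines (the two-hill function below is my own illustrative choice, not from the article):

```python
import math

# A landscape with two "hills": a shorter one near x = 1 and a taller one
# near x = 4. Each peak has an (approximately) zero gradient.
def F(x):
    return math.exp(-(x - 1)**2) + 2 * math.exp(-(x - 4)**2)

# Candidate locations where the derivative is (approximately) zero;
# to find the global maximum, evaluate F at each and compare heights.
candidates = [1.0, 4.0]
best = max(candidates, key=F)
print(best)  # 4.0 -- the taller hill wins
```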

## Mathematics

We know the definition of the gradient: a derivative for each variable of a function. The gradient symbol is usually an upside-down delta, ∇, called “del” (this makes a bit of sense: delta indicates change in one variable, and the gradient is the change for all variables). Taking our group of 3 derivatives above, the gradient is:

∇F = (dF/dx, dF/dy, dF/dz)

Notice how the x-component of the gradient is the partial derivative with respect to x (similar for y and z). For a one-variable function, there is no y-component at all, so the gradient reduces to the derivative.

Also, notice how the gradient can itself be a function!

If we want to find the direction to move to increase our function the fastest, we plug our current coordinates (such as (3,4,5)) into the gradient. For example, for F(x, y, z) = x + y² + z³, the gradient is (1, 2y, 3z²), and at the point (3,4,5) we get:

∇F(3,4,5) = (1, 8, 75)

So, this new vector (1, 8, 75) would be the direction we’d move in to increase the value of our function. In this case, our x-component doesn’t add much to the value of the function: its partial derivative is always 1.
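A quick check of this example in code, assuming the function here was F(x, y, z) = x + y² + z³ (the function consistent with a gradient of (1, 8, 75) at (3, 4, 5)):

```python
# Gradient of F(x, y, z) = x + y**2 + z**3, assuming that was the example
# function: the x-partial is always 1, matching the text.
def grad_F(x, y, z):
    return (1, 2 * y, 3 * z**2)  # (dF/dx, dF/dy, dF/dz)

print(grad_F(3, 4, 5))  # (1, 8, 75)
```

Note that `grad_F` is itself a function of (x, y, z): the gradient changes from point to point, just as the article says.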

Obvious applications of the gradient are finding the max/min of multivariable functions. Another less obvious but related application is finding the maximum of a constrained function: a function whose x and y values have to lie in a certain domain, e.g., find the maximum of all points constrained to lie along a circle. Solving this calls for my boy Lagrange, but all in due time, all in due time: enjoy the gradient for now.

The key insight is to recognize the gradient as the generalization of the derivative. **The gradient points in the direction of greatest increase; keep following the gradient, and you will reach the local maximum.**

## Questions

**Why is the gradient perpendicular to lines of equal potential?**

Lines of equal potential (โequipotentialโ) are the points with the same energy (or value for F(x,y,z)). In the simplest case, a circle represents all items the same distance from the center.

The gradient represents the direction of greatest change. If it had any component along the line of equipotential, that movement would be wasted (it would be moving toward points with the same energy). When the gradient is perpendicular to the equipotential line, none of its movement is spent along directions of constant value, so it increases the function as quickly as possible (this article explains why the gradient is the direction of greatest increase).
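A small numeric check of this perpendicularity (using F(x, y) = x² + y², whose equipotentials are circles; the function choice is my own illustration):

```python
# For F(x, y) = x**2 + y**2, the equipotential through (x, y) is a circle.
# The gradient (2x, 2y) points radially outward, while (-y, x) points along
# the circle; their dot product is zero, so the gradient is perpendicular
# to the line of equal potential.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x, y = 3.0, 4.0              # a point on the circle of radius 5
grad = (2 * x, 2 * y)        # gradient of F at (x, y)
tangent = (-y, x)            # direction along the equipotential circle
print(dot(grad, tangent))    # 0.0
```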


## Other Posts In This Series

- Vector Calculus: Understanding the Dot Product
- Vector Calculus: Understanding the Cross Product
- Vector Calculus: Understanding Flux
- Vector Calculus: Understanding Divergence
- Vector Calculus: Understanding Circulation and Curl
- Vector Calculus: Understanding the Gradient
- Understanding Pythagorean Distance and the Gradient

## Comments

192 Comments on "Vector Calculus: Understanding the Gradient"

i like it… well explained.

Super!!!

You are the man! Nice work!

Thanks, glad it was helpful for you.

i was always looking for conceptual and practical examples and yes i finally got.

Awesome!

well you made a good explanation, one that even a not-so-smart guy gets, but i think you missed the obvious -> WHY does the gradient show the direction of the greatest increase.

I think that the principle of the gradient is quite easy, but understanding why it works the way it does is a bit tricky, and you should have focused on it more.

It would be interesting if you would somehow add it to this good article. Inspiration http://mathforum.org/library/drmath/view/68326.html

good luck !

Hi Palo, that’s a great point! I’ve been feeling a bit guilty, if you can imagine it, because I’ve lacked that explanation :)

I’m probably going to do a separate article on the reason *why* the gradient points in the direction of greatest increase; I have another explanation that works well with it. Thanks for the link and feedback!

Your introduction is not quite correct:

You claim: “Points in the direction of greatest increase of a function”.

Why? It can also point in the direction of greatest decrease of a function.

A gradient is one or more directional derivatives. These derivatives are considered in a particular direction. In the case of single variable calculus, we generally talk about a directional derivative when we consider multiples of the x unit vector, i.e. k*(1,0). To consider the y unit vector, we deal with the partial derivatives with respect to y in a given direction. In three dimensions, the 3 partial derivatives form what we now call a ‘gradient’.

So in fact it is incorrect to call this a slope or anything else except to say that it describes the partial derivatives of a point in the direction of a given vector in space.

Does this make sense? Please visit my blog for some more interesting reading.

http://mathphile.blogspot.com/

Hi John, thanks for writing. You’re right, the formal definition of a gradient is a set of directional derivatives.

But when thinking about the intuitive meaning, I think it’s ok to consider the gradient as a vector that “points” in the direction of greatest increase (i.e. if you follow that direction your function will tend towards a local maximum).

Unless I’m mistaken, the gradient vector always points in the direction of greatest increase (greatest decrease would be in the opposite direction).

What I was saying is that it points either one way or the other, it is not restricted to the direction of greatest increase. As a simple example, consider what happens when you differentiate a parabola: You set the derivative equal to 0 and then you determine that it has either a maximum or a minimum at its turning point. It is not always a maximum just as it is not always a minimum. Think I have explained this correctly now.

good john you have done a great job.

Hi John, thanks for the clarification. I’d still politely disagree and say that in general, the gradient points in the direction of greatest increase :).

In the case of 2 dimensions, the gradient/slope only gives a forward or backward direction. A positive slope means travel “forward” and a negative slope means travel “backwards”.

Consider f(x) = x^2, a regular parabola. The gradient is zero at the minimum (x=0), and there is no *single* direction to go. At x = -1, the slope is negative, which means travel “backwards” (to x = -2) to increase your value. Similarly, at x = 1, you travel forward (to x = 2) to increase your value.

But, as you mention, strange things can happen when the derivative = 0. It can mean you are at a local maximum (no way to improve), or at a local minimum (no single direction to improve your position; moving either forward or back will help). I consider the corner case of zero an exception to the general rule / intuition that the gradient is “the direction to follow” if you want to improve your function.

Wonderful explanation!

Thanks Vidhya, glad you liked it.

hi john keep it up you done a great job

Thanks a bunch! I didn’t think it could be this simple to find the maximum increase at a point, so I thought I’d look it up. Thanks to your great explanation, it turns out it was as easy as it seemed it should be. Great job! Thanks!

Travis

Awesome, glad it worked for you :)

thanks!!!!

Hi Caitlyn, you’re welcome.

Thanks! The sadistic microwave example helped a lot.

Awesome, glad it was useful :).

Hello Kalid,

Did not read your reply for some time. Am sorry you do not agree.

Let me give you an example: Suppose we are dealing with pressure and height in a certain ‘cubic’ area. Suppose that the middle of the cube height is 0 meters. Also suppose that we have a whirlpool generated in the cube such that the pressure rate increases as we go below the middle of the cube. Anything below is negative height and anything above is positive height. Now, as one rises higher in the cube, the pressure decreases.

If we find the gradient, then according to your definition (and many others’), the gradient vector for the rate of greatest increase will point below the middle of the cube, not above. But above the middle we find the greatest ‘decrease’ in rate of pressure. In this example, greatest increase points downwards and greatest decrease upwards.

It would probably be better to define the gradient as a vector that points in a direction of greatest increase or decrease. Its additive inverse will point in the direction of greatest decrease or increase respectively. For most physical phenomena, your definition would generally be true. But what happens when you have an anomaly?

Make sense?

I do not believe I have the best answer to this question but like yourself, I am a believer in trying to find the best possible explanation. Once again, I like your website. Keep up the good work Kalid!

Okay, I think I have the best answer. If f is a real-valued function, then del(f), the gradient of f, points to the greatest increase, whereas -del(f) points to the greatest decrease.

For once planet math has some decent information on this since I last checked:

http://planetmath.org/encyclopedia/Gradient.html

I do not endorse everything Planet Math publishes but this particular information appears to be correct. In any event, it clears up the previous confusion I think.