# Teaching

## M 408M Multivariable Calculus (Fall 2015)

I was the TA for Multivariable Calculus. Some class materials are preserved here.

## Optional Problems

These problems are optional: they will not be graded and you are not expected to complete them. They are, however, a good way to get extra practice and to test your understanding of the material. Completing them is highly recommended, especially if you are falling behind or otherwise struggling with the material. Exercises are given for the 5th, 6th, and 7th editions of Stewart, indicated by 5E, 6E, 7E. Stars (*) indicate especially important exercises.

12/02/15.

- 5E **15.9**: 1-6, 8-10, 11-15, 19-23, 24
- 6E **16.9**: 1-6, 8-10, 11-15, 19-23, 24
- 7E **15.10**: 1-6, 8-10, 11-14, 15-19, 23-27, 28
- There are a whole slew of different hypotheses under which the change of variables theorem holds. They are often so restrictive that not even the change to polar coordinates satisfies them (although the change to polar coordinates is still valid for other reasons). We will not deal in technicalities, so the easiest hypotheses to remember are these: if the transformation \(T\) is invertible and both \(T\) and \(T^{-1}\) are \(C^1\) on open domains, then the change of variables theorem holds.
- Compute \(\int_{-1}^1 x^2\,dx\) using the power rule and the fundamental theorem. Try again, this time blindly using the substitution \(u=x^2\) and notice that you get a different (incorrect) answer. What went wrong here is that the transformation \(u=x^2\) is not invertible over the region \([-1,1]\).
- In 1D we are used to making \(u\) substitutions by setting \(u=f(x)\), but in 2D all our substitutions take the form \(x=f(u,v), y=g(u,v)\). Notice that in 1D we set \(u\) equal to something, whereas in 2D we set \(x\) and \(y\) equal to something, not \(u\) and \(v\) equal to something. Both methods are valid but the book doesn't seem to mention why we change conventions. The answer is that in 2D the conventional way is typically much easier to compute than the other way. Recall the informal fact that \[dx\,dy = |J|\,du\,dv\] where \(J\) is the Jacobian determinant of the transformation \(x=f(u,v), y=g(u,v)\). This tells us how to switch from \((x,y)\) coordinates to \((u,v)\) coordinates. The whole process is very symmetric, so we should be able to switch back. Indeed, informally dividing by the Jacobian determinant indicates that we should guess \[du\,dv = \frac{1}{|J|}\,dx\,dy\] and this turns out to be correct. But if the inverse transformation is given by \(u=r(x,y), v=s(x,y)\) and has Jacobian determinant \(\tilde J\), then we are supposed to have \[du\,dv = |\tilde J|\,dx\,dy.\] In fact both are correct because \[\tilde J = \frac{1}{J}\] or in the notation of the book \[\frac{\partial(u,v)}{\partial(x,y)} = \frac{1}{\frac{\partial(x,y)}{\partial(u,v)}},\] hence the fractional notation. Thus, if we want to make a change of coordinates similar to what we do in 1D, via \(u=r(x,y), v=s(x,y)\), then we must convert differential areas by \[dx\,dy = |J|\,du\,dv = \frac{1}{|\tilde J|}\,du\,dv.\]
- Use the previous explanation to (informally) show in two different ways that \(dx\,dy = r\,dr\,d\theta\) is correct for the change to polar coordinates. Hint: the first way uses \(x=r\cos \theta, y = r\sin\theta\), the second way uses \(r=\sqrt{x^2+y^2}, \theta = \arctan(y/x)\). Notice that one Jacobian is significantly easier to compute than the other, but that they both end up giving the correct change of variables.
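The relation \(\tilde J = 1/J\) is easy to sanity-check numerically before working the exercise above. Below is a small Python sketch (not part of the assigned problems; the sample point \((r,\theta) = (2, 0.7)\) is an arbitrary choice) that computes both Jacobian determinants for the polar change of coordinates and checks that their product is 1.

```python
import math

# Forward map: x = r cos(theta), y = r sin(theta)
def jacobian_forward(r, theta):
    dx_dr, dx_dth = math.cos(theta), -r * math.sin(theta)
    dy_dr, dy_dth = math.sin(theta), r * math.cos(theta)
    return dx_dr * dy_dth - dx_dth * dy_dr   # simplifies to r

# Inverse map: r = sqrt(x^2 + y^2), theta = arctan(y/x)
def jacobian_inverse(x, y):
    r2 = x * x + y * y
    dr_dx, dr_dy = x / math.sqrt(r2), y / math.sqrt(r2)
    dth_dx, dth_dy = -y / r2, x / r2
    return dr_dx * dth_dy - dr_dy * dth_dx   # simplifies to 1/r

r, theta = 2.0, 0.7
x, y = r * math.cos(theta), r * math.sin(theta)
J = jacobian_forward(r, theta)      # approximately r = 2
Jtilde = jacobian_inverse(x, y)     # approximately 1/r = 0.5
print(J, Jtilde, J * Jtilde)        # product is approximately 1
```

Notice that the forward Jacobian is the easy one to compute by hand, which is exactly why we set \(x\) and \(y\) equal to something in 2D.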

11/23/15.

- 5E **15.4**: 1-8, 9-11, 18-19, 21-25, 29-32, 36*
- 6E **16.4**: 1-6, 7-9, 16-17, 19-23, 29-32, 36*
- 7E **15.4**: 1-6, 7-9, 16-17, 19-23, 29-32, 40*
- If \(f(x,y)=g(x)h(y)\) but the region of integration is not a rectangle, does the double integral of \(f\) still split as a product of single integrals? Prove or give a counterexample.
- Go through the derivation in the module and in the book of the fact that \(dA = r\,dr\,d\theta\). How does the derivation change if instead of polar coordinates we take \(x=2r\cos\theta, y=7r\sin\theta\)? What does a rectangle in these coordinates correspond to in rectangular coordinates?
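If you want to check your answer to the previous problem numerically, here is a Python sketch; the comments contain spoilers, so try the derivation on paper first.

```python
import math

# For x = 2 r cos(theta), y = 7 r sin(theta) the derivation gives
# |J| = |(2 cos)(7 r cos) - (-2 r sin)(7 sin)| = 14 r, so dA = 14 r dr dtheta.
# The "rectangle" [0,1] x [0,2pi] in (r,theta) maps to the ellipse
# x^2/4 + y^2/49 <= 1, whose area is pi * 2 * 7 = 14 pi.
n = 200
dr, dth = 1.0 / n, 2 * math.pi / n
area = 0.0
for i in range(n):
    r = (i + 0.5) * dr            # midpoint rule in r
    for j in range(n):
        area += 14 * r * dr * dth

print(area, 14 * math.pi)         # both approximately 43.98
```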

11/18/15.

- 5E **15.2**: 22, 28-29, 34, **15.3**: 5-8, 17-20, 23-25, 37-42
- 6E **16.2**: 24, 28-29, 36, **16.3**: 5-8, 17-20, 33-34, 39-44
- 7E **15.2**: 24, 28-29, 36, **15.3**: 5-8, 17-20, 23-25, 43-48
- * Prove that if a continuous function \(f(x,y)\) factors as \(f(x,y) = g(x)h(y)\), then the double integral of \(f\) over \(R=[a,b] \times [c,d]\) factors as a product of single integrals.
- * One of the most useful inequalities one can use on integral expressions is also one of the simplest. Namely, if \(f(x,y)\) is Riemann integrable and \(m \leq f(x,y) \leq M\), then \[m A(R) \leq \iint_{R} f(x,y)\, dA \leq M A(R)\] where \(A(R)\) denotes the area of the region \(R\). Explain this result in terms of volumes.
- What does it mean for a function to be Riemann integrable in 1d? 2d?
- True or false: Riemann integrable functions are bounded.
- We define the integral of a function over a rectangle by chopping the rectangle into a grid and looking at the limit of Riemann sums of the function over that rectangle. How do we define the integral of a function over a region that is not a rectangle? What do we do if the function isn't defined on our entire region of integration?
- Give an example of a Riemann integrable function \(f(x)\) such that \[ \frac{d}{dx}\int_a^x f(y)\,dy \neq f(x) \]
- Give an example of a differentiable function \(f(x)\) on \([a,b]\) such that \(f'(x)\) is not Riemann integrable because it is unbounded. Note: there are examples of differentiable functions whose derivatives are bounded but still not Riemann integrable. Such examples are not as easy to come up with.
- Give an example of a bounded function of two variables such that \(f\) is not Riemann integrable but \(|f|\) is.
- Write (hypotheses and all) the mean value theorem for functions of one variable. Write the mean value theorem for integrals. We saw in a previous optional problem that the mean value theorem needs tweaking in order to extend to more than 1d. Show that if \(f\) is continuous and the region of integration is a rectangle, the mean value theorem for integrals does not need tweaking, i.e. it still holds in higher dimensions.
- If \(f\) is Riemann integrable and \(g=f\) for all but finitely many points, must \(g\) be Riemann integrable?
- Give an example of a Riemann integrable function on the unit square whose double integral cannot be evaluated as an iterated integral.
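Several of the problems above can be explored numerically with Riemann sums. As one example, here is a Python sketch (the function and rectangle are my own choices) spot-checking the starred factorization problem: for \(f(x,y)=g(x)h(y)\) on a rectangle, the double integral should match the product of the two single integrals.

```python
def midpoint_1d(f, a, b, n=2000):
    # midpoint-rule Riemann sum for a single integral over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def midpoint_2d(f, a, b, c, d, n=300):
    # midpoint-rule Riemann sum over the rectangle [a, b] x [c, d]
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        for j in range(n):
            total += f(x, c + (j + 0.5) * hy) * hx * hy
    return total

G = lambda x: x          # integral over [0, 1] is 1/2
H = lambda y: y * y      # integral over [0, 2] is 8/3

double = midpoint_2d(lambda x, y: G(x) * H(y), 0, 1, 0, 2)
product = midpoint_1d(G, 0, 1) * midpoint_1d(H, 0, 2)
print(double, product)   # both approximately 4/3
```

This is only numerical evidence for one example, of course; the starred problem asks for a proof.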

11/16/15.

- 5E **15.2**: 1-20
- 6E **16.2**: 1-22
- 7E **15.2**: 1-22
- Explain the difference between a multiple integral and an iterated integral.
- Write down the Riemann sum definition of a double integral. Can you guess what the definition for a triple integral will be?
- Look at the chapter 7/8/7 (5E/6E/7E resp.) review questions on techniques of integration. Refresh on any type of integral that you don't immediately know how to do.
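To make the notion of an iterated integral concrete, here is a small Python sketch (the function is my own choice): the inner integral is an ordinary single integral in \(y\) with \(x\) held fixed, and its result, a function of \(x\) alone, is then integrated in \(x\).

```python
def integrate(f, a, b, n=500):
    # midpoint-rule approximation of a single integral over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * x + y

# iterated integral: hold x fixed, integrate in y, then integrate in x
inner = lambda x: integrate(lambda y: f(x, y), 0, 1)
iterated = integrate(inner, 0, 1)
print(iterated)   # exact value is 1/3 + 1/2 = 5/6
```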

11/11/15.

- 5E **14 Review**: Do all of it. Yes, **all of it**. Ignore parts of questions that ask you to use a graphing calculator, computer, or CAS.
- 6E **15 Review**: Do all of it. Yes, **all of it**. Ignore parts of questions that ask you to use a graphing calculator, computer, or CAS.
- 7E **14 Review**: Do all of it. Yes, **all of it**. Ignore parts of questions that ask you to use a graphing calculator, computer, or CAS. *All of it? Really?* **YES, ALL OF IT. REALLY. STOP ASKING AND START WORKING.**
- The book does a poor job of outlining the hypotheses needed for the method of Lagrange multipliers, so we state a precise version of the theorem.
  **Theorem (Lagrange Multipliers, 1 constraint)**: Suppose \(f\) and \(g\) are functions of \(n\geq 2\) variables defined on a ball, and that \(f\) and \(g\) have continuous partial derivatives everywhere in the ball. If \(f\) attains a constrained local extremum at \(\mathbf{x}_0=(x_1,\ldots,x_n)\) and \(\nabla g(\mathbf{x}_0)\neq 0\), then there exists a real number \(\lambda\) such that \(\nabla f(\mathbf{x}_0) = \lambda \nabla g(\mathbf{x}_0)\).
- Notice how the theorem is stated: **if** there is a constrained local extremum, then... The theorem itself does not tell us that any constrained local extrema exist; we must argue for their existence using other methods. Thus, when looking for constrained local extrema, we should look for:
  - Points where \(g=0\) but there is no ball around the point on which both \(f\) and \(g\) have continuous partials.
  - Points where \(g=0\) and \(\nabla g = 0\).
  - Points where \(g=0\) and there is a \(\lambda\) for which \(\nabla f = \lambda \nabla g\).

- We can show that a function has absolute (and hence local) extreme points by noting that any continuous function on a closed, bounded domain has an absolute maximum and an absolute minimum. Thus, if we can argue that the set of points satisfying the constraint \(g=0\) is closed and bounded, then we are guaranteed the existence of a constrained absolute minimum and a constrained absolute maximum. You may take for granted that if \(g\) is continuous, then the set of points satisfying \(g=0\) is closed. Since we assume \(g\) has continuous partials on a ball for the Lagrange multipliers theorem, all \(g\)'s that we consider will be continuous. Thus all that is left is to show that the set of points satisfying \(g=0\) is bounded, and this must be done on a case-by-case basis. For example, if \(g(x,y,z) = x^2+y^2+z^2-1\), then the set of points satisfying \(g=0\) is a sphere of radius 1. This is obviously a bounded set, and hence we know that when doing a constrained optimization problem with this \(g\), there **will be** a constrained absolute minimum and a constrained absolute maximum.
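Here is a Python sketch tying these pieces together on an example of my own choosing: maximizing \(f(x,y)=xy\) on the unit circle \(g(x,y)=x^2+y^2-1=0\). The circle is closed and bounded, so constrained absolute extrema exist, and at the candidate point the Lagrange condition \(\nabla f = \lambda \nabla g\) holds with \(\lambda = 1/2\).

```python
import math

# Constraint g(x, y) = x^2 + y^2 - 1 = 0: the unit circle is closed and
# bounded, so the continuous f(x, y) = x y attains constrained extrema.
f = lambda x, y: x * y
grad_f = lambda x, y: (y, x)
grad_g = lambda x, y: (2 * x, 2 * y)

x0 = y0 = 1 / math.sqrt(2)      # candidate from solving the Lagrange system
fx, fy = grad_f(x0, y0)
gx, gy = grad_g(x0, y0)
lam = fx / gx                   # lambda read off from the first component
residual = fy - lam * gy        # should also vanish in the second component
print(lam, residual)            # approximately 0.5 and 0.0

# brute-force scan of the circle confirms this really is the maximum
best = max(f(math.cos(t), math.sin(t))
           for t in (2 * math.pi * k / 10000 for k in range(10000)))
print(best, f(x0, y0))          # both approximately 0.5
```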

11/08/15.

- 5E **14.8**: 3-10, 15-17, 18-19, 23, 44, **14 Review**: Concept check 1-19, True/False 1-12
- 6E **15.8**: 3-10, 15-17, 18-19, 25, 46, **15 Review**: Concept check 1-19, True/False 1-12
- 7E **14.8**: 3-10, 15-17, 19-20, 27, 48, **14 Review**: Concept check 1-19, True/False 1-12
- Fix \(p,q >1\) such that \(\frac{1}{p}+\frac{1}{q} = 1\). Maximize \(f(x,y) = x y\) in the first quadrant subject to the constraint \(\frac{x^p}{p} + \frac{y^q}{q}=C\). Conclude that for all \(x,y\), \(|xy| \leq \frac{|x|^p}{p} + \frac{|y|^q}{q}\). Plug in \(g(t)/(\int_a^b g(s)^p\,ds)^{1/p}\) for \(x\) and \(h(t)/(\int_a^b h(s)^q\,ds)^{1/q}\) for \(y\), then integrate in \(t\) to find \[ \int_a^b |g(t) h(t)|\,dt \leq \left( \int_a^b |g(t)|^p\,dt\right)^{1/p} \left(\int_a^b |h(t)|^q\,dt\right)^{1/q}. \] This is a special case of Hölder's inequality, an extremely important inequality in more advanced math, though not so much in this class. When \(p=q=2\) it usually goes by the name of the Cauchy-Schwarz inequality.
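The pointwise inequality \(|xy| \leq \frac{|x|^p}{p} + \frac{|y|^q}{q}\) can be spot-checked numerically before you prove it. A Python sketch with the arbitrary choice \(p=3\):

```python
p = 3.0
q = p / (p - 1)      # conjugate exponent, so 1/p + 1/q = 1 (here q = 1.5)

# Check x*y <= x^p/p + y^q/q on a grid of positive x, y.
# Equality holds exactly along the curve y = x^(p-1).
worst = 0.0
for i in range(1, 200):
    for j in range(1, 200):
        x, y = i / 20.0, j / 20.0
        gap = x * y - (x ** p / p + y ** q / q)
        worst = max(worst, gap)

# worst is zero up to floating-point rounding: the inequality holds
# at every grid point
print(worst)
```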

11/03/15.

- 5E **14.6**: 49, 51-54, 57, **14.7**: 5-10, 27-29, 33, 40, 42, 43, 53
- 6E **15.6**: 49, 51-54, 57, **15.7**: 5-10, 29-31, 34, 41, 42, 43, 55
- 7E **14.6**: 51, 53-56, 61, **14.7**: 5-10, 29-31, 34, 41, 42, 43, 55
- Where does the second derivative test come from? Here is the main idea: to tell if a function has a local max/min or saddle point, check if its quadratic approximation does. Say \(f(x,y)\) has continuous second partials, and for simplicity \(f(0,0)=0\). Suppose further that \((0,0)\) is a critical point of \(f\). Show that \[Q(x,y) = f_{xx}(0,0)\frac{x^2}{2} + f_{xy}(0,0)xy + f_{yy}(0,0)\frac{y^2}{2}\] is the quadratic approximation of \(f\) at the origin. We will discover sufficient conditions for \(Q\) to have a strict local min at the origin. Since \(Q(0,0) = 0\), we must show that \(Q(x,y) > 0\) away from the origin. Plug in \((1,0)\) to discover that we need \(f_{xx}(0,0) > 0\). Plug in \((0,1)\) to discover that we need \(f_{yy}(0,0) > 0\). Plug in \((f_{yy}(0,0),-f_{xy}(0,0))\) to discover that we need \(f_{xx}f_{yy}-(f_{xy})^2 > 0\). Notice these are the conditions that appear in the second derivative test. We can derive the rest of the mysterious conditions found in the second derivative test similarly.
- Show that if \(f\) is differentiable, then the linearization of \(f\) at a point \(\vec x_0\) is \(L(\vec x) = f(\vec x_0) + \nabla f(\vec x_0)\cdot (\vec x - \vec x_0 )\). If you are familiar with matrix notation, show that if \(f \in C^2\) the quadratic approximation of \(f\) is the linear approximation plus the extra term \(\frac{1}{2}(\vec x - \vec x_0)^T D^2 f(\vec x) (\vec x - \vec x_0)\) where \(D^2 f\) is the Hessian matrix of \(f\), the matrix of second partial derivatives.
- Give an example of a function where the second derivative test is inconclusive, but the function has a local min at the origin. Repeat the question for a local max, and for a saddle point.
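To see the "check the quadratic approximation" idea in action, here is a Python sketch on a function of my own choosing with a critical point at the origin; it verifies numerically that the error \(f-Q\) vanishes faster than \(x^2+y^2\), which is why \(Q\) decides the nature of the critical point.

```python
import math

# f(x, y) = cos x + cos y - 2: critical point at the origin, f(0,0) = 0,
# f_xx = f_yy = -1, f_xy = 0, so D = f_xx f_yy - f_xy^2 = 1 > 0 and
# f_xx < 0: the second derivative test gives a local maximum.
f = lambda x, y: math.cos(x) + math.cos(y) - 2
Q = lambda x, y: -(x * x + y * y) / 2      # quadratic approximation at 0

# the error is o(x^2 + y^2): the ratio below shrinks as we zoom in
for h in (1e-1, 1e-2, 1e-3):
    ratio = abs(f(h, h) - Q(h, h)) / (2 * h * h)
    print(h, ratio)
```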

- 5E **14.6**: 4-6, 7-9, 11-13, 18, 21-23, 27-29, 36, 37*, 39-42, 55, 56, 63, 64*
- 6E **15.6**: 4-6, 7-9, 11-13, 18, 21-23, 27-29, 36, 37*, 39-42, 55, 64*
- 7E **14.6**: 4-6, 7-9, 11-13, 18, 21-23, 27-29, 36, 37*, 39, 40, 41-34, 57, 68*
- Derive the formula for the tangent plane to a surface given implicitly by \(F(x,y,z)=0\) at a fixed point \((x_0,y_0,z_0)\), assuming this point is on the surface and that \(F\) is differentiable at that point. E.g. \(F(x,y,z) = x^2 + y^2 + z^2 - 1 = 0\) corresponds to a sphere. Hint 1: draw a picture. Hint 2: take a path on the surface moving in the \(x\)-direction, and a path on the surface moving in the \(y\)-direction. What is the plane spanned by their tangent vectors, and what is that plane's normal vector?
- Assuming \(f\) is differentiable, show that for any direction (unit vector) \(\hat v\), we have \(\partial_{\hat v} f = \nabla f \cdot \hat v\).
- Show that if \(f \in C^1\) then for any \(a,b\) there is a \(c\) on the line between \(a\) and \(b\) such that \(f(b)-f(a)=(b-a)\cdot\nabla f(c)\). Hint: consider the function \(g(t) = f(a + t v)\) where \(v\) is the direction from \(a\) to \(b\) and use the mean value theorem for functions of 1 variable.
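The formula \(\partial_{\hat v} f = \nabla f \cdot \hat v\) is easy to test numerically with a difference quotient along the direction \(\hat v\); a Python sketch (the function, point, and direction are arbitrary choices):

```python
import math

# f(x, y) = x^2 y + sin y, with exact gradient (2 x y, x^2 + cos y)
f = lambda x, y: x * x * y + math.sin(y)
grad = lambda x, y: (2 * x * y, x * x + math.cos(y))

x0, y0 = 1.2, 0.5
v = (3 / 5, 4 / 5)           # a unit direction
h = 1e-6

# directional derivative straight from the limit definition
numeric = (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h
gx, gy = grad(x0, y0)
exact = gx * v[0] + gy * v[1]    # gradient dotted with the direction
print(numeric, exact)            # agree to several decimal places
```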

10/29/15.

- We saw in class the example \(f(x,y) = 1\) if \(x=0\) or \(y=0\), and \(f(x,y) = 0\) otherwise. Draw the contour plot of this function. Show that \(f_x(x,y)=0\) and \(f_y(x,y)=0\) at all points where the partials exist. Constant functions are continuous, so in fact \(f\) has continuous partials. **However**, this does not imply \(f\) is differentiable. Indeed, \(f\) isn't even continuous. The reason continuous partials do not imply differentiability in this case is that the partials don't exist everywhere. In particular, prove that \(f_x\) doesn't exist on the line \(x=0\) and \(f_y\) doesn't exist on the line \(y=0\), excluding the origin, where both partials do exist. To avoid confusion, let us state the precise theorem under which continuous partials imply differentiability.
- **Theorem**. Suppose \(f\) is a real function of \(n\) real variables \(x_1,\ldots,x_n\) (think \(f(x,y)\) or \(f(x,y,z)\)) defined everywhere. If all \(n\) of the partial derivatives of \(f\) exist in a ball around the point \((x_1,\ldots,x_n)\), and at least \(n-1\) of them are continuous at \((x_1,\ldots,x_n)\), then \(f\) is differentiable at \((x_1,\ldots,x_n)\). (This theorem can actually be improved even more, but for our purposes this will be sufficient.)
- Example of how the theorem is typically used: if all the partial derivatives of \(f=f(x,y)\) exist and are continuous in a ball around \((x,y)\), then \(f\) is differentiable at \((x,y)\).
- Notice that the theorem cannot be applied to the example \(f\) from today's activity: any ball around the origin will contain part of the lines \(x=0\) and \(y=0\), so there is no ball around the origin on which the partials are defined everywhere.
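For concreteness, here is a Python sketch of the classroom example; it exhibits both partials at the origin existing (and equal to 0) while \(f\) fails to be continuous there.

```python
# the classroom example: f = 1 on the coordinate axes, 0 elsewhere
f = lambda x, y: 1.0 if x == 0 or y == 0 else 0.0

# both partials at the origin exist and equal 0, since f is constant (= 1)
# along each axis; the difference quotients are exactly zero
h = 1e-8
fx0 = (f(h, 0) - f(0, 0)) / h      # 0.0
fy0 = (f(0, h) - f(0, 0)) / h      # 0.0
print(fx0, fy0)

# yet f is not continuous at the origin: approach along the diagonal
print(f(1e-12, 1e-12), f(0, 0))    # 0.0 vs 1.0
```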

10/28/15.

- 5E **14.3**: 68-72, 83, 87, 89, **14.5**: 1-12, 27-34, 43, 45, 47, 50, 51, 53
- 6E **15.3**: 72-77, 87, 92, **15.5**: 1-12, 27-34, 45, 47, 52, 53
- 7E **14.3**: 75-80, 97, **14.5**: 1-12, 27-34, 45, 47, 52, 53
- Using implicit differentiation of the equation \(F(x,y) = 0\), derive that \[\frac{dy}{dx} = - \frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = - \frac{F_x}{F_y}.\] Note that this is a case where treating derivatives as fractions gives the **wrong answer**: "cancelling the \(\partial F\)s" would indicate that the minus sign shouldn't be there, when in fact it should.
- In classical physics, the kinetic energy of an object of mass \(m\) moving with velocity \(v\) is \(K=\frac{1}{2} m v^2\). Experimentally, this formula breaks down when \(v\) is close to the speed of light \(c\). In the special theory of relativity, Einstein corrected this formula to \[ K = \frac{mc^2}{\sqrt{1-(v^2/c^2)}}-mc^2. \] Use a linearization of \(f(x) = (1-x)^{-1/2}\) to approximate Einstein's equation for kinetic energy and show that it reduces to Newton's classical formula. Hint: in order to choose your base point to linearize at, consider what \(v^2/c^2\) is for everyday velocities relevant to humans.
- What are easy conditions that imply \(f\) is differentiable? One easy sufficient condition is that if \(f\) has continuous first order partials that exist everywhere (this is usually denoted \(f \in C^1(\mathbb{R}^2) \) and pronounced "f is C one" or "f is in C one"), then \(f\) is differentiable.
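For the kinetic energy problem above, a quick numerical comparison shows how good the linearization is at human-scale speeds. A Python sketch (the mass and speed are arbitrary choices):

```python
import math

# Compare Einstein's kinetic energy with Newton's 0.5 m v^2.
# Here v^2/c^2 is about 1e-8, so the linearization
# (1 - u)^(-1/2) ~ 1 + u/2 at u = 0 is extremely accurate.
c = 299_792_458.0          # speed of light, m/s
m = 70.0                   # mass in kg
v = 3.0e4                  # roughly Earth's orbital speed, m/s

K_einstein = m * c * c / math.sqrt(1 - (v / c) ** 2) - m * c * c
K_newton = 0.5 * m * v * v
rel_diff = abs(K_einstein / K_newton - 1)
print(K_einstein, K_newton, rel_diff)   # agree to better than one part in 1e6
```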

\(-\infty\) to 10/26/15.

- 5E **14.2**: 15-20, 27, 28, 29, 41, 42, **14.3**: 23-34, 41-44, 46, 53-56, 59-62, 67, **14.4**: 1-6, 11-16, 41
- 6E **15.2**: 15-20, 36, 37, 38, 45, 46, **15.3**: 29-40, 45-48, 50, 63-66, 71, **15.4**: 1-6, 11-16, 45
- 7E **14.2**: 15-20, 36, 37, 38, 45, 46, **14.3**: 29-40, 47-50, 52, 59, 75, **14.4**: 1-5, 11-16, 45
- Extend each of these functions to a continuous function in the plane, then show \(f_{xy} \neq f_{yx}\) at \((0,0)\). The functions are \(f(x,y) = y^2 \arctan(x/y)\) and \(f(x,y) = \frac{x^3y-xy^3}{x^2+y^2}\). Why doesn't this violate Clairaut's theorem?
- Read the definition of "differentiable" for functions of 2 variables and notice how it is different from the 1d definition. In particular, \(f_x,f_y\) existing at a point doesn't mean \(f\) is differentiable at that point. In higher dimensions, differentiable really means "can be approximated well by a linear function."
- Compare the linearization formulas for \(f\) in 1d and 2d to the 1d and 2d Taylor expansion of \(f\) about the origin. Although finding tangent lines is a good application of calculus, the more general idea of approximating functions by simpler functions is much more important.
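For the second function in the mixed-partials problem above, the failure of \(f_{xy}=f_{yx}\) at the origin can even be seen numerically with nested difference quotients. A Python sketch (the step sizes are my choices; the inner step must be much smaller than the outer one, or the quotients degenerate):

```python
def f(x, y):
    # f(x, y) = (x^3 y - x y^3) / (x^2 + y^2), extended by f(0, 0) = 0
    if x == 0 and y == 0:
        return 0.0
    return (x ** 3 * y - x * y ** 3) / (x * x + y * y)

h = 1e-8     # inner step, for the first partials
d = 1e-3     # outer step, much larger than h, for the second partials

def fx(x, y):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# mixed partials at the origin as difference quotients of fx and fy:
# fx(0, y) = -y near the origin, while fy(x, 0) = x, so the two mixed
# partials disagree
fxy = (fx(0, d) - fx(0, -d)) / (2 * d)   # approximately -1
fyx = (fy(d, 0) - fy(-d, 0)) / (2 * d)   # approximately +1
print(fxy, fyx)
```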