# Generalizations of the Mean Value Theorem

One of my hobbies here is running, and I recently ran 3km on the track, which led me to the following thought:

Was there a mile (~1600m) which I ran at least as fast as my average pace?

This can be thought of as a question about generalizing the mean value theorem, which says that a differentiable function $f:[a,b] \to \mathbb{R}$ has at least one point where the derivative equals the “average” slope $\frac{f(b)-f(a)}{b-a}$.  Here, however, the question is whether there is some (connected) interval of a prescribed length on which the average rate of change is at least the overall average slope.

This statement can be seen to be false, and we might as well use the running example, with 5:00 min/mile as the target pace.  Suppose, for (counter-)example, that I ride a bike to the 1,599m mark (which is still considered cheating so far as the IAAF is concerned), and get there after 1:00.  I stand there for 5:01, then ride my bike to the finish line (taking another minute).  I’ve just completed a 7:01 3k (a world record, by the way), at an average pace well under 5:00 min/mile, but there was never a 5-minute period in which I covered a mile: every mile-long stretch of the course contains the 1,599m mark, where I stood for 5:01.  With faster and faster bikes (or cars!) my “moving” time could be reduced until it takes me 5:00 + $\epsilon$ to run the 3km.  Of course, this is the best I can do without covering a mile in under 5 minutes, since as soon as I cover the whole 3km in under 5 minutes, I must have covered a mile (the first one, say) in under 5 minutes too.
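For the skeptical (or the curious), the counterexample is easy to check numerically.  Here is a minimal sketch in Python (the function names and the sampling of start points are my own), modeling position as a function of time and timing every mile-long stretch:

```python
# A sketch of the biking counterexample: ride to the 1,599 m mark in 1:00,
# stand there for 5:01, then ride to the 3,000 m finish in another 1:00,
# for a total of 7:01.  Units are meters and minutes.
TOTAL_TIME = 7.0 + 1.0 / 60

def position(t):
    """Distance covered (meters) at time t (minutes)."""
    if t <= 1.0:                      # ride out: 1,599 m in one minute
        return 1599.0 * t
    if t <= 6.0 + 1.0 / 60:           # stand at the 1,599 m mark for 5:01
        return 1599.0
    return 1599.0 + 1401.0 * (t - (6.0 + 1.0 / 60))   # ride to the finish

def first_time_at(d, eps=1e-9):
    """First time the 'runner' reaches distance d, found by bisection
    (position is nondecreasing)."""
    lo, hi = 0.0, TOTAL_TIME
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if position(mid) < d:
            lo = mid
        else:
            hi = mid
    return hi

def mile_time(start):
    """Time spent covering the mile from start to start + 1600 m."""
    return first_time_at(start + 1600.0) - first_time_at(start)

# Every mile-long stretch of the course contains the 1,599 m mark,
# so every mile takes more than five minutes:
fastest_mile = min(mile_time(s) for s in range(0, 1401, 25))
print(fastest_mile > 5.0)
```

Even sampling only every 25 meters of possible starting points, no mile comes in under 5:01, while the whole “race” takes 7:01.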

Now, since I hate to be a downer, there’s a positive result to share, which can be thought of as a generalization of the above discussion, but does not require any differentiability, or even continuity (unless there are philosophical objections to the discontinuous crossing of a mile).  I’ll state the theorem for the unit interval, then translate that to the running example:

Suppose $f:[0,1] \to \mathbb{R}$ has average speed $M$ (i.e., $f(1)-f(0) = M$), and fix a positive integer $n$.  Then there is an interval of length $1/n$ on which the change in $f$ is at least $M/n$; equivalently, there is an interval of length $1/n$ whose average speed is at least $M$.

To see why this is true, just split the interval into $n$ equal subintervals, and observe that the change in $f$ on at least one of them must be at least $M/n$, or else the total change would be less than $M$.

Hence, in the running example, if I ran a 9:00 3k, then I must have covered at least one of the three kilometers in 3 minutes or less, even if I didn’t cover any mile in under 5 minutes!
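The pigeonhole argument above is simple enough to check numerically.  A minimal sketch in Python, using a made-up (and quite uneven) 9:00 3k pace profile:

```python
# A quick check of the pigeonhole argument: f(t) is distance in km after
# t minutes of a hypothetical 9:00 3k (the profile below is my own).
def f(t):
    return 3.0 * (t / 9.0) ** 2      # increasing, with f(0) = 0 and f(9) = 3

M = f(9.0) - f(0.0)                  # total distance: 3 km
n = 3
changes = [f((k + 1) * 9.0 / n) - f(k * 9.0 / n) for k in range(n)]

# The changes over the n equal subintervals telescope to M, so at least
# one three-minute stretch must cover at least M/n = 1 km.
assert abs(sum(changes) - M) < 1e-12
assert max(changes) >= M / n
print(changes)
```

The same telescoping works for any $f$ whatsoever, which is why no continuity is needed.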

# Starting up

I’m trying to avoid defining what I’ll be talking about here; I’d rather wait and see what I end up being interested in on any given day.  I anticipate some mix of math, programming, and running, with some general geekery thrown in.  On the other hand, who knows?

# Coarea Formula, part I: the Jacobian

There’s a formula called the coarea formula which I have been researching for the past year or so.  There are two good ways to think about it.  One is to look at the so-called “Jacobian” and try to interpret what its integral measures.  The other is to see it as a natural dual (in a colloquial, rather than mathematical, sense) of its more famous brother, the area formula.  We deal with the Jacobian today.

The Jacobian is typically introduced in calculus courses in connection with a change of variables.  In a typical case, you would like to take an integral in one set of coordinates $(x,y)$ and change to a set of coordinates $(u,v)$, according to a map $\phi: \mathbb{R}^2 \to \mathbb{R}^2$ which, since the target is two-dimensional, we may write as $\phi(x,y) = (u(x,y),v(x,y))$.  In this case, on an open set where $\phi$ is injective and continuously differentiable, we have

$\int_{\mathbb{R}^2}f(u(x,y),v(x,y))\,|J\phi(x,y)|~dx~dy = \int_{\mathbb{R}^2} f(u,v) ~du~dv$

Let me finally define precisely what the Jacobian from calculus is: for a general map $\phi: \mathbb{R}^n \to \mathbb{R}^n$, we define

$J\phi(x_1,\ldots, x_n) = \det \left(\phi^j_{x_k}\right)_{j,k = 1}^n,$

where we are writing $\phi = (\phi^1,\ldots, \phi^n)$, and $\phi^j_{x_k} :=\frac{\partial \phi^j}{\partial x_k}$.  As a quick example, one might recall changing coordinates from Euclidean (rectangular) to polar.  Typically it went $x = r \cos{\theta}$ and $y = r \sin{\theta}$ (that is to say, our change of coordinates is $\phi(r, \theta) = (r\cos{\theta},r\sin{\theta})$).  Then, using the “absolute value” notation for the determinant, we have

$J \phi(r,\theta) = \left| \begin{array}{cc} \cos{\theta} & \sin{\theta} \\ -r \sin{\theta} & r \cos{\theta} \end{array} \right| = r \cos^2 \theta + r \sin^2 \theta = r$,
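For the less trusting, this determinant can also be checked numerically.  Here is a small sketch (the function names and test point are my own) that approximates the partial derivatives of $\phi$ by central finite differences:

```python
import math

# Approximate the Jacobian determinant of phi(r, theta) = (r cos t, r sin t)
# by central finite differences, and compare it with the exact answer r.
def phi(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian_det(r, theta, h=1e-6):
    # entries of the 2x2 matrix of partial derivatives
    x_r = (phi(r + h, theta)[0] - phi(r - h, theta)[0]) / (2 * h)
    y_r = (phi(r + h, theta)[1] - phi(r - h, theta)[1]) / (2 * h)
    x_t = (phi(r, theta + h)[0] - phi(r, theta - h)[0]) / (2 * h)
    y_t = (phi(r, theta + h)[1] - phi(r, theta - h)[1]) / (2 * h)
    return x_r * y_t - x_t * y_r     # 2x2 determinant

print(jacobian_det(2.0, 0.7))        # should come out close to r = 2
```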

which returns us to the (somewhat) familiar formula,

$\int f(x,y)~dx~dy = \int f(r,\theta) r~dr~d \theta$.
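As a sanity check on this formula, one can compare crude Riemann sums on both sides.  A sketch in Python (the grid sizes and the choice of integrand are my own), integrating $f(x,y) = x^2$ over the unit disk, where the exact value is $\pi/4$:

```python
import math

# Compare Riemann sums for the integral of f(x, y) = x^2 over the unit disk,
# once in Cartesian coordinates and once in polar coordinates with the
# Jacobian factor r.  Exact value: pi/4.
N = 400
h = 2.0 / N                          # Cartesian cell width on [-1, 1]^2

# Left-hand side: sum f over midpoints of cells that lie inside the disk.
cart = 0.0
for i in range(N):
    for j in range(N):
        x = -1.0 + (i + 0.5) * h
        y = -1.0 + (j + 0.5) * h
        if x * x + y * y <= 1.0:
            cart += x * x * h * h

# Right-hand side: in polar coordinates the integrand picks up the Jacobian
# factor r, so we sum (r cos t)^2 * r over a grid in (r, t).
dr, dt = 1.0 / N, 2.0 * math.pi / N
polar = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    for j in range(N):
        t = (j + 0.5) * dt
        polar += (r * math.cos(t)) ** 2 * r * dr * dt

print(cart, polar, math.pi / 4)      # the three values roughly agree
```

The polar sum converges much faster here, since its grid respects the disk’s boundary; the Cartesian sum is off by roughly the area of the cells straddling the circle.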

That seems like well over enough for a first post.  Next up: an intuition for what the Jacobian measures, as well as a definition of Jacobian for maps between spaces of different dimensions.

Also! I should mention that the words I use are almost surely wrong: typically, what I call the “Jacobian” is called the “Jacobian determinant,” while the actual “Jacobian” is the matrix of partial derivatives.  “Jacobian determinant” just seems like a mouthful.