In the last post we established certain conditions for the monotonicity of a function in an interval. In this post we will establish the same results via a different approach, based on two standard theorems of differential calculus:

*Rolle's Theorem and Lagrange's Mean Value Theorem.* We first need to establish these theorems.

### Rolle's Theorem

Rolle's theorem is a kind of existence theorem in the sense that it ascertains the existence of a certain quantity without telling us how to find it. Before discussing the theorem further, let's first state and prove it.

*If a function $ f$ is defined in a closed interval $ [a, b]$ such that*

- *$ f$ is continuous in the closed interval $ [a, b]$,*
- *$ f$ is differentiable in the open interval $ (a, b)$,*
- *$ f(a) = f(b)$,*

*then there is at least one point $ c \in (a, b)$ for which $ f'(c) = 0$.*

The proof of this theorem is normally not given in calculus textbooks; it belongs to the family of results classified as "those results whose proofs are beyond the scope of the book/syllabus". Unfortunately this family is quite large. More on its size later; right now we move to the proof of Rolle's theorem.

If the function $ f$ is constant then $ f'(x) = 0$ for all $ x \in (a, b)$ and the theorem is trivial. So let's assume that $ f$ is not constant, which means that it takes values different from $ f(a) = f(b)$. Suppose first that one of these values is greater than $ f(a) = f(b)$. Then the maximum value of $ f$ in $ [a, b]$ is clearly greater than $ f(a) = f(b)$ and hence is attained at an interior point of $ [a, b]$, i.e. at some point $ c \in (a, b)$. Note that a continuous function on a closed interval is guaranteed to attain its maximum value in that interval. If $ f'(c) > 0$ then $ f$ would be strictly increasing at $ c$, and hence to the right of $ c$ there would be values of $ f$ greater than the maximum value $ f(c)$, which is absurd. Hence $ f'(c)$ cannot be positive. By a similar argument $ f'(c)$ cannot be negative, and hence it must be zero.

If instead the function $ f$ takes values less than $ f(a) = f(b)$, then we consider the minimum value of $ f$ in $ [a, b]$ and reach the same conclusion.
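As a concrete numerical illustration (not part of the proof above), take $ f(x) = x^{2} - 3x$ on $ [0, 3]$, where $ f(0) = f(3) = 0$. Here $ f'(x) = 2x - 3$, so the point guaranteed by the theorem is $ c = 3/2$; the bisection routine below is an illustrative way to locate it, valid since this particular $ f'$ is continuous and changes sign:

```python
# Illustration of Rolle's theorem for f(x) = x^2 - 3x on [0, 3]:
# f(0) = f(3) = 0, so some c in (0, 3) must satisfy f'(c) = 0.
# Analytically f'(x) = 2x - 3, so c = 1.5; we locate it by bisection.

def f(x):
    return x * x - 3 * x

def f_prime(x):
    return 2 * x - 3

# f'(0) = -3 < 0 and f'(3) = 3 > 0, so bisect on the sign of f'
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f_prime(mid) < 0:
        lo = mid
    else:
        hi = mid

c = (lo + hi) / 2
print(c)                  # approximately 1.5
print(f(0.0) == f(3.0))   # True: the end values agree
```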

Now that we have proved the theorem, it is time to interpret it informally. Graphically it says that if the chord joining the points $ (a, f(a))$ and $ (b, f(b))$ on the graph of the function $ y = f(x)$ is parallel to the x-axis, then there is an intermediate point in the interval $ (a, b)$ at which the tangent to the graph is also parallel to the x-axis.

By the way, the mathematician Michel Rolle never proved this theorem, though it has made him famous. What Rolle actually proved was a result in which the function $ f$ was a polynomial, and even that he tried to prove using algebraic methods. So the result he proved (although not in a correct fashion) was that

*"between any two roots of a polynomial lies a root of its derivative"*. Rolle hated the existing methods of calculus which were based on intuitive but confusing notion of infinitesimals and it is bit ironical that he is remembered by a theorem of utmost importance in differential calculus.That this theorem is really important would take some time and study of its implications. We will not be able to describe all of its implications (for example Taylor's theorem and series) but rather we will use it to establish conditions of monotonicity in an interval.

### Condition for Monotonicity

We use Rolle's theorem to prove the following result:

*If a function $ f$ is continuous in the closed interval $ [a, b]$ and differentiable in the open interval $ (a, b)$ and its derivative $ f'(x) > 0$ for all $ x \in (a, b)$, then $ f$ is strictly increasing in $ [a, b]$.*

The proof below is again taken from G. H. Hardy's "A Course of Pure Mathematics" and is a gem of a proof.

Let $ a < x < y < b$; we will establish that $ f(x) < f(y)$. Clearly $ f(x) \neq f(y)$, for if these values were equal then by Rolle's theorem the derivative would vanish somewhere in $ (x, y)$, contradicting the fact that the derivative is positive in $ (a, b)$. Thus we are left with the alternatives $ f(x) > f(y)$ and $ f(x) < f(y)$.

Let's assume that $ f(x) > f(y)$. Since $ f'(x) > 0$, the function $ f$ is strictly increasing at $ x$ and hence there is a point $ z$ to the right of $ x$ and sufficiently close to it such that $ x < z < y$ and $ f(x) < f(z)$. Then we have $ f(y) < f(x) < f(z)$, and therefore by the intermediate value property for continuous functions there is a $ u \in (z, y)$ such that $ f(u) = f(x)$. But then by Rolle's theorem the derivative $ f'$ would vanish somewhere in $ (x, u)$, which contradicts the fact that the derivative is positive. Thus we cannot have $ f(x) > f(y)$, and hence $ f(x) < f(y)$. Since $ x, y$ were arbitrary points with $ a < x < y < b$, this proves that $ f$ is strictly increasing in $ (a, b)$.

The monotonicity at the end points of the interval $ [a, b]$ still remains to be proven. To that end let $ a < x < b$; we shall prove that $ f(a) < f(x)$. Clearly we can find $ y, z$ such that $ a < z < y < x < b$, and then by what we have already proved it follows that $ f(z) < f(y) < f(x)$. Keeping $ x, y$ fixed we let $ z \to a+$, and by continuity we see that $ f(a) \leq f(y) < f(x)$, so that $ f(a) < f(x)$. Similarly we can prove $ f(x) < f(b)$, and therefore $ f(a) < f(b)$. So $ f$ is strictly increasing in the closed interval $ [a, b]$.
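As a quick sanity check of the result, consider $ f(x) = x^{3} + x$, for which $ f'(x) = 3x^{2} + 1 > 0$ everywhere (an illustrative choice); sampling on a grid over $ [-2, 2]$ confirms the strict increase:

```python
# f(x) = x^3 + x has f'(x) = 3x^2 + 1 > 0 for every x, so by the
# theorem f is strictly increasing on any closed interval.
# We check this on a sample grid over [-2, 2].

def f(x):
    return x ** 3 + x

xs = [-2 + 4 * i / 1000 for i in range(1001)]   # grid on [-2, 2]
values = [f(x) for x in xs]
strictly_increasing = all(values[i] < values[i + 1] for i in range(1000))
print(strictly_increasing)    # True
```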

The corresponding theorem for the case when the derivative is negative can be proved in the same fashion and is therefore left as an exercise for the reader.

The above result can also be proved using another famous theorem by Lagrange called the Mean Value Theorem.

### Mean Value Theorem

*If a function $ f$ is defined in the closed interval $ [a, b]$ such that*

- *$ f$ is continuous in the closed interval $ [a, b]$,*
- *$ f$ is differentiable in the open interval $ (a, b)$,*

*then there exists at least one point $ c \in (a, b)$ for which* $$f'(c) = \frac{f(b) - f(a)}{b - a}$$

This theorem, by the way, does not belong to the family of "those results whose proofs are beyond the scope of the book/syllabus", chiefly because it can be easily deduced from Rolle's theorem. The idea is to form a function $ g$ related to $ f$ such that $ g$ satisfies all the conditions of Rolle's theorem, and then apply Rolle's theorem to $ g$. The form of the function $ g$ is chosen as follows: $$g(x) = f(x) - kx$$ where $ k$ is a suitable constant. The constant $ k$ is chosen so that $ g(a) = g(b)$, so that $ g$ satisfies all the conditions of Rolle's theorem. A simple calculation shows that $ g(a) = g(b)$ implies $$k = \frac{f(b) - f(a)}{b - a}$$ and then by Rolle's theorem there is a $ c \in (a, b)$ for which $ g'(c) = 0$, i.e. $ f'(c) - k = 0$, i.e. $ f'(c) = k$, and the Mean Value Theorem follows.

Geometrically the theorem says that given a chord of a continuous smooth curve, between the end points of the chord there is a point on the curve at which the tangent is parallel to the chord.
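The construction of $ g$ can be sketched numerically. Here $ f(x) = x^{3}$ on $ [0, 2]$ is an illustrative choice: $ k = (f(2) - f(0))/2 = 4$, and $ f'(c) = 3c^{2} = 4$ gives $ c = 2/\sqrt{3}$, which bisection on $ g'$ recovers:

```python
import math

# Mean Value Theorem via g(x) = f(x) - k*x for f(x) = x^3 on [0, 2]:
# k = (f(2) - f(0)) / (2 - 0) = 4, and f'(c) = 3c^2 = 4 gives
# c = 2/sqrt(3).  We find the root of g'(x) = f'(x) - k by bisection.

def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x * x

a, b = 0.0, 2.0
k = (f(b) - f(a)) / (b - a)    # slope of the chord joining the end points

def g_prime(x):
    return f_prime(x) - k      # derivative of g(x) = f(x) - k*x

# g'(a) = -4 < 0 and g'(b) = 8 > 0, so a root lies in (a, b)
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if g_prime(mid) < 0:
        lo = mid
    else:
        hi = mid

c = (lo + hi) / 2
print(k)                                 # 4.0
print(abs(c - 2 / math.sqrt(3)) < 1e-9)  # True
```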

This theorem can now be used to establish the conditions of monotonicity. In fact it gives more powerful results: it also gives conditions for monotone functions which are not strictly monotone.

### Conditions for Monotonicity

We have the following theorem:

*Let $ f$ be continuous in $ [a, b]$ and differentiable in $ (a, b)$. Then*

- *if $ f'(x) > 0$ for all $ x \in (a, b)$ then $ f$ is strictly increasing in $ [a, b]$;*
- *if $ f'(x) \geq 0$ for all $ x \in (a, b)$ then $ f$ is increasing in $ [a, b]$;*
- *if $ f'(x) < 0$ for all $ x \in (a, b)$ then $ f$ is strictly decreasing in $ [a, b]$;*
- *if $ f'(x) \leq 0$ for all $ x \in (a, b)$ then $ f$ is decreasing in $ [a, b]$;*
- *if $ f'(x) = 0$ for all $ x \in (a, b)$ then $ f$ is constant in $ [a, b]$.*

Many textbooks on calculus don't treat the end points of an interval while dealing with monotonicity; they tend to say that functions are monotonic only in open intervals. In this regard even the highly acclaimed NCERT textbooks in India are wrong. For example, let's say a function $ f$ is continuous in the interval $ [1, 3]$, differentiable in $ (1, 3)$, and its derivative is positive in all of $ (1, 3)$ except that $ f'(2) = 0$ (for example $ f(x) = (x - 2)^{3}$). What can we say about the monotonicity of this function? Common textbooks would say that the function is strictly increasing in $ (1, 2)$ and $ (2, 3)$, whereas in reality the function is strictly increasing in $ [1, 2]$ and $ [2, 3]$ and hence in the full interval $ [1, 3]$. If we only use open intervals we cannot join them to form the bigger interval. That's why it is important to think about closed intervals when dealing with monotonic functions.
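The example above is easy to verify directly; the sample grid below is an arbitrary illustrative choice:

```python
# The example from the text: f(x) = (x - 2)^3 on [1, 3] has
# f'(x) = 3(x - 2)^2, which vanishes at x = 2 but is positive
# elsewhere; f is nevertheless strictly increasing on all of [1, 3].

def f(x):
    return (x - 2) ** 3

def f_prime(x):
    return 3 * (x - 2) ** 2

xs = [1 + 2 * i / 1000 for i in range(1001)]   # grid on [1, 3]
values = [f(x) for x in xs]
strictly_increasing = all(values[i] < values[i + 1] for i in range(1000))
print(f_prime(2.0), strictly_increasing)       # 0.0 True
```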

### Intermediate Value Theorem for Derivatives

The technique used in the proof of Rolle's theorem can also be used to establish the intermediate value property for derivatives. We have seen earlier that continuous functions possess the intermediate value property. However, continuity of a function is not necessary for the intermediate value property: in fact, if a function is the derivative of another function then it possesses the intermediate value property. This result is sometimes also known as:

**Darboux's Theorem:** *If $ f$ is differentiable in the closed interval $ [a, b]$ (i.e. $ f$ is differentiable in $ (a, b)$ and possesses a right-hand derivative at $ a$ and a left-hand derivative at $ b$) and $ f'(a) \neq f'(b)$, then for any given number $ k$ lying between $ f'(a)$ and $ f'(b)$ there is a point $ c \in (a, b)$ for which $ f'(c) = k$.*

For the proof we can set $ g(x) = f(x) - kx$, so that $ g$ is differentiable in $ [a, b]$ and $ g'(a)$ and $ g'(b)$ are of opposite signs. Let's assume that $ g'(a) > 0$ and $ g'(b) < 0$. Then the function $ g$ is strictly increasing at $ a$ and strictly decreasing at $ b$, and hence the maximum value of $ g$ is attained at an interior point $ c$ of the closed interval $ [a, b]$. Now $ g'(c) > 0$ would imply that values of $ g$ just to the right of $ c$ are greater than the maximum value $ g(c)$, which is impossible. Similarly the case $ g'(c) < 0$ can be ruled out, and therefore $ g'(c) = 0$, so that $ f'(c) = k$.

The function defined by $ f(x) = x^{2}\sin(1/x)$ for $ x \neq 0$ and $ f(0) = 0$ is differentiable everywhere, with $ f'(0) = 0$ and $ f'(x) = 2x\sin(1/x) - \cos(1/x)$ for $ x \neq 0$. The derivative $ f'$ is not continuous at the point $ 0$, but because it is the derivative of the function $ f$, it nevertheless possesses the intermediate value property.
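The discontinuity of $ f'$ at $ 0$ can be seen numerically: the difference quotient $ f(h)/h = h\sin(1/h)$ is squeezed to $ 0$, so $ f'(0) = 0$, while along the points $ x_{n} = 1/(2\pi n)$ the derivative tends to $ -1$. The particular values $ h = 10^{-6}$ and $ n = 10^{6}$ below are arbitrary illustrative choices:

```python
import math

# f(x) = x^2 sin(1/x) for x != 0 with f(0) = 0.  The difference
# quotient f(h)/h = h*sin(1/h) tends to 0, so f'(0) = 0, yet along
# x_n = 1/(2*pi*n) the derivative f'(x_n) tends to -1: f' is
# discontinuous at 0.

def f_prime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

h = 1e-6
quotient = (h * h * math.sin(1 / h)) / h   # f(h)/h = h*sin(1/h)
print(abs(quotient) <= h)                  # True: squeezed to 0

x_n = 1 / (2 * math.pi * 10 ** 6)
print(abs(f_prime(x_n) + 1) < 1e-5)        # True: f'(x_n) is near -1
```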

So far we have examined monotone functions with the added constraint that they are differentiable and continuous. However, continuity is not essential for monotonicity. In the next post we will study monotone functions and their extensions in general. Such functions are technically called

*functions of bounded variation*.
Great work..what an article..!!!

But I would like to seek Your attention towards one issue..

You’ve written that Mean Value Theorem can be derived from Rolle’s Theorem itself..but I think what I’ve learned is completely the opposite to that..which I’ve also mentioned in an article written by myself..here..

http://cosmologistic.blogspot.in/2012/10/lagranges-mvt-for-beginners.html

But Yes LMVT can for sure be derived from CAUCHY’s Theorem..is that what You wanted to tell ??

Arnab(@carmenelo)

March 20, 2013 at 11:38 AM

Hello Arnab,

I read your article and what you have mentioned is correct. If we present only the statements of Rolle's and LMVT to any reasonable person, he will at once say that Rolle's is a special case of LMVT where the function values at the end points of the interval are the same. However, if we wish to establish these theorems, there is no way to prove LMVT without using Rolle's.

Most importantly, from the graphical point of view these theorems have the same content. If there is a graph of a function satisfying Rolle's conditions, you just need to rotate the X-Y axes and the graph becomes applicable to the LMVT. In Rolle's the secant is parallel to the X-axis, but in LMVT it is not necessarily parallel to the X-axis.

As far as I am aware I have not seen any proof of LMVT without using Rolle’s Theorem. If you have any references regarding this you can let me know.

And by the way I have not touched upon Cauchy’s MVT but just to add this is a further generalization and it can also be proved using Rolle’s Theorem. And more advanced results like Taylor’s / Maclaurin’s theorem are also proved using Rolle’s theorem. By choosing functions suitably to meet the hypothesis of Rolle’s theorem, many advanced theorems can be proved. So in the theory of mean value theorems Rolle’s is the most fundamental one.

Paramanand

March 20, 2013 at 11:38 AM