A digression on Calculus… From introductory to advanced?

Before anything, I wish everyone a happy and prosperous new year full of joy and success! :) (This was posted on Dec 31.)

Surprised, aren’t you? You may not have been expecting this, but I came across a bunch of beautiful problems, so I also want to introduce a little bit about graphs (is that really necessary?).
Without wasting any more time, let’s continue with this hotch-potch discussion on Calculus. Yeah, I know that I am not organised, sorry about that.
I don’t know where to start, so let me begin with the fundamentals of calculus problem-solving. The problems that we (of course, ‘we’ means that I am going to follow the Indian curriculum of calculus) usually solve involve the following concepts: graphs and functions; limits, continuity and differentiability; and function plotting.
So anyway, let me start with some basic concepts of Polynomials.
Polynomials.
These may be defined as functions (that’s my way of looking at them) whose terms have non-negative integer powers of x and real or complex coefficients. For calculus at our level, though, we only look at polynomials with real coefficients, generally written as
P(x)\equiv a_0x^n+a_1x^{n-1}+\cdots+a_n,\quad a_i\in\mathbb R\ \ \forall\ i=0,1,\cdots,n.
The degree of a polynomial \deg{P(x)} is the highest power of x that is contained in P. If a_0\neq 0, then the degree of P is n.
Polynomials of even degree
If n is even, then only two types of graphs are possible. Write
P(x)=a_0x^n+a_1x^{n-1}+\cdots+a_n=x^n\left(a_0+\frac{a_1}{x}+\cdots+\frac{a_n}{x^n}\right).
Assume that x\to\infty, so that \lim_{x\to\infty}P(x)=\lim_{x\to\infty}a_0x^n=\begin{cases}+\infty, & a_0>0\\ -\infty, & a_0<0.\end{cases}
And, since n is even, \lim_{x\to-\infty}P(x)=\lim_{x\to-\infty}a_0x^n=\begin{cases}+\infty, & a_0>0\\ -\infty, & a_0<0.\end{cases}
So the graph can be of two different looks:
[Figure: graph of a degree-4 polynomial with a_0>0, rising to +\infty on both ends.]
[Figure: graph of a degree-4 polynomial with a_0<0, falling to -\infty on both ends.]
So a polynomial of even degree will either rise to heaven on both sides or fall down to hell on both sides, as x\to\infty and as x\to-\infty. It is also clear that it cuts the x-axis an even number of times (possibly zero).
Polynomials of odd degree
It’s easy to deduce that the graph of a polynomial of odd degree falls to hell on one side and rises up to heaven on the other. A nice way to remember it: if a_0>0, the graph rises towards heaven as x\to\infty, and vice versa. For example:
[Figure: graph of a cubic with a_0>0, falling to -\infty as x\to-\infty and rising to +\infty as x\to\infty.]
[Figure: graph of a cubic with a_0<0, showing the opposite behaviour.]
Roots
The roots of a polynomial are the points where the curve cuts the x-axis. We can use Descartes’ rule of signs to bound the number of positive or negative roots: if the coefficients of P(x), read from the highest power to the lowest (ignoring zero coefficients), change sign n times, then P(x) has at most n positive roots; more precisely, it has n, n-2, n-4, \ldots positive roots. Applying the same rule to P(-x) bounds the number of negative roots.
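Just to make Descartes’ rule concrete, here is a tiny Python sketch (my own illustration, using a made-up example polynomial) that counts sign changes in a coefficient list and hence bounds the number of positive and negative roots.

def sign_changes(coeffs):
    # count sign changes in the coefficient sequence, ignoring zero coefficients
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

p = [1, -6, 11, -6]                                # x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
p_neg = [c * (-1) ** i for i, c in enumerate(p)]   # coefficients of P(-x), up to an overall sign

print(sign_changes(p))      # 3 -> at most 3 positive roots (here exactly 3: 1, 2, 3)
print(sign_changes(p_neg))  # 0 -> no negative roots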

The derivative
Let f(x) be a function. The one-sided derivatives of f at a point x_0 are defined as f'(x_0^+)=\lim_{h\to 0^+}\frac{f(x_0+h)-f(x_0)}{h} and f'(x_0^-)=\lim_{h\to0^+}\frac{f(x_0)-f(x_0-h)}{h}. If f'(x^+)=f'(x^-) at every point x of an interval (a,b), their common value is denoted f'(x) and the function is said to be differentiable on (a,b). Then f'(x_0) is the slope of the tangent to the curve y=f(x) at x=x_0. A function is differentiable only if it is continuous, but the converse is not true.
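As a quick numerical illustration (my own, not part of the original notes), the one-sided difference quotients above can be computed directly; for f(x)=|x| at x_0=0 they approach different values, which is the standard example of a function that is continuous but not differentiable at a point.

def right_derivative(f, x0, h=1e-6):
    # one-sided difference quotient (f(x0+h) - f(x0)) / h for small h > 0
    return (f(x0 + h) - f(x0)) / h

def left_derivative(f, x0, h=1e-6):
    # one-sided difference quotient (f(x0) - f(x0-h)) / h for small h > 0
    return (f(x0) - f(x0 - h)) / h

f = abs
print(right_derivative(f, 0.0))   # ~ +1
print(left_derivative(f, 0.0))    # ~ -1, so the one-sided derivatives disagree at 0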
Again, I do not intend on lecturing on removable and blah blah continuity, so I am assuming prior knowledge without loss of generality. After all, it’s tiring to mention everything, lol. So let’s get into some serious stuff.

Convexity, concavity, monotonicity.
A function is said to be monotonically increasing (respectively strictly increasing) on an interval if a<b implies f(a)\leq f(b) (respectively f(a)<f(b)) for all a,b in that interval; decreasing functions are defined with the reversed inequalities.
A function f(x) defined on an interval is called convex (or convex downward/concave upward) if the graph of the function lies on or below the line segment joining any two points of the graph. The formal definition is: a function f:\mathbb I\to\mathbb R is convex if and only if for any two points x_1,x_2\in\mathbb I and \lambda\in(0,1) we have f(\lambda x_1+(1-\lambda)x_2)\leq \lambda f(x_1)+(1-\lambda)f(x_2). If the inequality is strict whenever x_1\neq x_2, the function is said to be strictly convex.
Concavity is just the opposite.
A necessary and sufficient condition for a differentiable function f(x) to be convex on an interval (a,b) is that its graph lies above all of its tangents, i.e. f(x)\geq f(y)+(x-y)f'(y) for all x,y\in(a,b). Concavity: reverse the sign. :D
For twice-differentiable functions this may be rephrased as f''(x)\geq 0 for all x in \mathbb I; one can show that this is another necessary and sufficient condition for convexity.
Jensen’s inequality.
Let f be a convex function of one real variable. Let x_1,\cdots,x_n\in\mathbb R and let a_1,\cdots, a_n\ge 0 satisfy a_1+\dots+a_n=1. Then
f(a_1x_1+\cdots+a_n x_n)\le a_1f(x_1)+\cdots+a_n f(x_n).
For a proof, look here.
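If you want to convince yourself numerically, here is a small Python check (my own, with randomly chosen points and weights) of Jensen’s inequality for the convex function f(x)=e^x.

import math
import random

random.seed(0)
f = math.exp                                   # e^x is convex
x = [random.uniform(-2, 2) for _ in range(5)]
w = [random.random() for _ in range(5)]
s = sum(w)
a = [wi / s for wi in w]                       # weights a_i >= 0 summing to 1

lhs = f(sum(ai * xi for ai, xi in zip(a, x)))
rhs = sum(ai * f(xi) for ai, xi in zip(a, x))
print(lhs <= rhs)                              # True: f(sum a_i x_i) <= sum a_i f(x_i)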

Without any more introduction, let me move on to one of the most important theorems in calculus, i.e. Rolle’s theorem.
Rolle’s Theorem
If f(x) is continuous on [a,b] and differentiable on (a,b), and if f(a)=f(b), then there exists a certain x_0\in(a,b) such that f'(x_0)=0.
From graphical considerations it is obvious that (unless f is constant) the function must change its slope from positive to negative, or from negative to positive, at some point x_0 of the interval; it can only escape this at the cost of differentiability somewhere.

Mean Value Theorem (MVT for short)
If f(x) and g(x) are continuous on [a,b] and differentiable on (a,b), and if g'(x)\neq 0 everywhere in (a,b), then there will always exist a point x_0\in(a,b) such that
\frac{f(b)-f(a)}{g(b)-g(a)}=\frac{f'(x_0)}{g'(x_0)}.
Proof.
Consider the function \phi(x)=\frac{f(b)-f(a)}{g(b)-g(a)}[g(x)-g(a)]-[f(x)-f(a)].
Since \phi(a)=\phi(b)=0, applying Rolle’s theorem we see that
\phi'(x_0)=\frac{f(b)-f(a)}{g(b)-g(a)}g'(x_0)-f'(x_0)=0 for some point x_0\in(a,b). Hence done.
Geometric interpretation
For the special case g(x)=x (the Lagrange MVT, or LMVT, which is the version used in the problems below), the slope of the line joining (a,f(a)) and (b,f(b)) is \frac{f(b)-f(a)}{b-a}, and f'(x) is the slope of the curve at the point x. So there exists a certain x_0 in the interval at which f'(x_0) equals the slope of the chord joining the endpoints.
[Figure: graph of a cubic illustrating the MVT: the tangent at x_0 is parallel to the chord joining the endpoints.]
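Here is a rough numerical sketch (my own example, taking f(x)=\sin x on [0,2]) that locates a point c where f'(c) equals the slope of the chord, exactly as the theorem promises.

import math

f = math.sin
a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)              # slope of the chord joining the endpoints

def fprime(x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# f'(x) - slope changes sign on (a, b), so bisection homes in on a c with f'(c) = slope
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if (fprime(lo) - slope) * (fprime(mid) - slope) <= 0:
        hi = mid
    else:
        lo = mid
print(lo, math.acos(slope))   # the two agree (about 1.1), since here f'(x) = cos x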
Extended MVT
Define a constant R such that the equation
f(b)-f(a)-(b-a)f'(a)-\frac 12(b-a)^2R=0
is satisfied. Define a function F(x) as
F(x)=f(x)-f(a)-(x-a)f'(a)-\frac 12 (x-a)^2R.
Note that F(a)=F(b)=0 and F'(x)=f'(x)-f'(a)-(x-a)R, so by Rolle’s theorem
F'(x_1)=f'(x_1)-f'(a)-(x_1-a)R=0 for some x_1\in(a,b).
Since F'(a)=F'(x_1)=0, there exists x_2\in(a,x_1) such that F''(x_2)=f''(x_2)-R=0, i.e. R=f''(x_2). Substituting R, we get
f(b)=f(a)+(b-a)f'(a)+\frac{(b-a)^2}{2!}f''(x_2) \ \ (x_2\in (a,b)).
Continuing, we obtain,
\begin{aligned}f(b)=f(a)+\frac{(b-a)}{1!}f'(a)+\frac{(b-a)^2}{2!}f''(a)+\cdots+\frac{(b-a)^n}{n!}f^{(n)}(a)\\ +\frac{(b-a)^{n+1}}{(n+1)!}f^{(n+1)}(x_1),\end{aligned}
where x_1\in(a,b). This expression is known as the extended MVT (Taylor’s formula with the Lagrange form of the remainder).
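As a quick sanity check of the second-order formula above (an illustration of mine, with f(x)=e^x on [0,1], where the intermediate point x_2 can even be solved for explicitly):

import math

a, b = 0.0, 1.0
f = fp = fpp = math.exp                  # for e^x every derivative is e^x again

R = 2 * (f(b) - f(a) - (b - a) * fp(a)) / (b - a) ** 2
x2 = math.log(R)                         # solve f''(x2) = e^{x2} = R
print(a < x2 < b)                        # True: x2 is roughly 0.36, inside (a, b)
print(f(b), f(a) + (b - a) * fp(a) + (b - a) ** 2 / 2 * fpp(x2))   # both equal e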

With this, I am wrapping up all my discussions on the theorems that we study in high school (except for the extended MVT, everything is quite fundamental).
And it also means that I will now move on to some nice problems. Most of the problems in this post involve constructing functions and applying Rolle’s theorem, and the like. So let’s change the mood from theory to a bit of application.
Problems.
1. Given two functions f and g, continuous on [a, b], differentiable on (a, b), and f(a)=f(b) = 0. Prove that there exists a point c\in(a, b) such that g'(c)f(c) + f'(c) = 0.
Solution
Define h(x)=f(x)e^{g(x)}; then h(a)=h(b)=0, and so there exists a c in (a,b) such that h'(c)=0. But note that h'(c)=e^{g(c)}\left(f'(c)+f(c)g'(c)\right)=0, and since e^{g(c)}\neq 0, this leads us to our desired result.\Box

2. Let f be a differentiable function from the reals to the reals with at least two roots a<b. Prove that for any real number k there is c\in(a,b) such that f(c)+kf'(c)=0.

Solution (Virgil Nicula)
If k=0, then we can choose c\in\{a,b\}. Suppose k\ne 0. In this case consider g:[a,b]\rightarrow \mathbb R, where g(x)=e^{\frac xk}\cdot f(x). Observe that g satisfies g(a)=g(b)=0 and g'(x)=e^{\frac xk}\cdot\left[f'(x)+\frac 1k\cdot f(x)\right], so by Rolle’s theorem we get our desired result. \Box

3. Show that for two differentiable functions f,g such that g(x)\neq 0\ \forall x\in\mathbb R, and given s,r\in\mathbb R, there is a certain c between any two roots a,b of f such that
\frac{f'(c)}{g'(c)}=\frac{s}{r}\cdot\frac{f(c)}{g(c)}.
Solution
Consider h(x)=\frac{f(x)^r}{g(x)^s}. Indeed, h(a)=h(b)=0, so there exists at least one c between a,b such that
h'(c)=\left.\frac{\text{d}}{\text{d}x}\left(\frac{f(x)^r}{g(x)^s}\right)\right|_{x=c} = \frac{f(c)^{r-1}}{g(c)^{s+1}}\left(r\cdot g(c)f'(c)-s\cdot f(c)g'(c)\right)=0,
which leads to our desired result.\Box

4. Solve in \mathbb{R}, the following equation:
2^x-x=1.
Solution
Though it’s obvious from graphs that x=0,1 are the only solutions, why not apply Rolle’s? Suppose f(x)=2^x-x-1 has at least three distinct roots. By Rolle’s theorem, f'(x)= 2^x \log 2 - 1 should have at least two distinct roots; however f' is strictly increasing and so this cannot happen. So, f has at most two distinct roots, namely 0 and 1.
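A throwaway numerical check (mine, not part of the solution): x=0 and x=1 really are roots, and f' really is increasing on a sample grid.

import math

f = lambda x: 2 ** x - x - 1
fp = lambda x: 2 ** x * math.log(2) - 1

print(f(0), f(1))                       # both 0
xs = [k / 10 for k in range(-50, 51)]
print(all(fp(xs[i]) < fp(xs[i + 1]) for i in range(len(xs) - 1)))   # True: f' is increasing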

5. (Amparvardi) Let f: [a,b] \to \mathbb R, 0<a<b, be a function which is continuous on [a,b] and differentiable in (a,b). Prove that there exists a real number c \in (a,b) for which
f'(c)=\frac1{a-c}+\frac1{b-c}+\frac1{a+b}.
Solution
Consider \displaystyle g(x)=(x-a)(x-b)\cdot e^{f(x)-\frac x{a+b}}. Note that g(a)=g(b)=0, and also
g'(x)=e^{f(x)-\frac x{a+b}}\left(2x-a-b-\frac{(x-a)(x-b)}{a+b}+(x-a)(x-b)f'(x)\right).
So from Rolle’s theorem there exists at least one c\in(a,b) such that g'(c)=0. Dividing by e^{f(c)-\frac c{a+b}}(c-a)(c-b)\neq 0 and using \frac{(a-c)+(b-c)}{(a-c)(b-c)}=\frac 1{a-c}+\frac 1{b-c}, this rearranges to the desired formula.\Box

Let’s see some LMVT problems now.
6. Show that

\frac{x-1}{x} \leq \ln x \leq x-1\quad\text{for}\quad x\in(0,\infty).
Solution
Consider the function f(x)=\ln x. For x>1 it is continuous on [1,x] and differentiable on (1,x), so applying LMVT there must be some c\in(1,x) such that f'(c)=\frac{\ln x}{x-1}.
But 1<c<x, therefore \frac{1}{x}<\frac{1}{c}<1 \implies \frac{1}{x}<\frac{\ln x}{x-1}<1, and multiplying through by x-1>0 gives the desired inequality. For 0<x<1 the same argument on the interval [x,1] works (now multiplying by x-1<0 flips the inequalities back into the stated form), and at x=1 both sides are equal.
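A quick numerical check of the inequality at a few sample points (just an illustration of mine):

import math

for x in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(x, (x - 1) / x, math.log(x), x - 1,
          (x - 1) / x <= math.log(x) <= x - 1)   # the last column is always True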

7. Prove that for any x\in\left(0,\frac{\pi}{2}\right), we have
\left(\frac x{\sin x}\right)^{\tan x-x}>\left(\frac {\tan x}{x}\right)^{x-\sin x}.
Solution (Virgil Nicula)
Since 0<\sin x<x<\tan x, we can rewrite the given problem into
\frac{\ln x-\ln \sin x}{x-\sin x}>\frac{\ln\tan x-\ln x}{\tan x-x}.
Apply LMVT to the function \ln:(0,\infty)\to\mathbb R on the segments [\sin x, x] and [x,\tan x]. So there exist 0<\sin x<c<x<d<\tan x satisfying \frac{\ln x-\ln \sin x}{x-\sin x}=\frac 1c and \frac{\ln\tan x-\ln x}{\tan x-x}=\frac 1d. So, 0<c<d\implies
\frac{\ln x-\ln \sin x}{x-\sin x}>\frac{\ln\tan x-\ln x}{\tan x-x}\ \ \ \forall x\in \left(0,\frac {\pi}{2}\right).\Box

8. Let p(x)=ax^2+bx+c be a polynomial with real roots. Given that a and 24a+7b+2c have the same sign, prove that it’s impossible for both roots of p(x) to be in the interval (3,4).
Solution (mavropnevma)
Assume the two roots of p(x) are both real, and situated in (3,4), hence of the form 3+u, 3+v, with 0<u,v<1. From Vieta’s relations we have -\dfrac {b} {a} = (3+u) + (3+v) = 6 + (u+v) and \dfrac {c} {a} = (3+u)(3+v) = 9 + 3(u+v) + uv.
Then \dfrac {1} {a} (24a + 7b + 2c) = 24 - 42 - 7(u+v) + 18 + 6(u+v) + 2uv = 2uv - (u+v) \leq 2uv - 2\sqrt{uv} = 2\sqrt{uv}(\sqrt{uv} - 1) < 0, hence a and 24a+7b+2c are of different signs.\Box
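A quick randomized check (my own illustration, not part of the proof): for p(x)=a(x-r_1)(x-r_2) with both roots sampled from (3,4), the quantity 24a+7b+2c always comes out with the opposite sign to a.

import random

random.seed(1)
for _ in range(1000):
    a = random.choice([-1, 1]) * random.uniform(0.1, 10)
    r1, r2 = random.uniform(3, 4), random.uniform(3, 4)
    b, c = -a * (r1 + r2), a * r1 * r2      # expand a(x - r1)(x - r2)
    assert a * (24 * a + 7 * b + 2 * c) < 0
print("opposite signs in every trial")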

9. For a>0 and n\in\mathbb N, find the limit
\lim_{n\to\infty}n^2\left(\sqrt[n]{a}-\sqrt[n+1]{a}\right).
Solution(hsbatt)
By LMVT,
\begin{aligned}n^2 \left(a^{\frac{1}{n}} - a^{\frac{1}{n+1}} \right) &= n^2\, a^{\frac{1}{t}} \ln a \left( \frac{1}{n}-\frac{1}{n+1}\right)=\frac{n^2}{n(n+1)}\,a^{\frac 1t}\ln a\end{aligned}
for some t \in (n,n+1).
Now the expression is sandwiched between \frac{n^2 a^{\frac{1}{n}} \ln a }{n(n+1)} and \frac{n^2 a^{\frac{1}{n+1}} \ln a }{n(n+1)}.
Because the limits of both the left and the right bound are \ln a, the expression sandwiched in between must also have the limit \ln a as n\to \infty.\Box
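Numerically (just a sanity check of mine, with a=5 as an arbitrary choice), the expression does crawl towards \ln a:

import math

a = 5.0
for n in (10, 100, 1000, 10000):
    print(n, n ** 2 * (a ** (1 / n) - a ** (1 / (n + 1))), math.log(a))
# the middle column approaches ln 5 = 1.609... as n grows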

10. Without integrating, show that the sum

S_n=\sum_{k=1}^n\frac 1{k\sqrt k}

converges.
Solution
Consider f:[k,k+1]\to\mathbb R, where f(x)=\frac 1{\sqrt x} and k=1,2,\cdots, n. Observe that f'(x)=-\frac 1{2x\sqrt x}. By LMVT, there must exist a certain c\in(k,k+1) such that f(k+1)-f(k)=f'(c). Since f' is increasing, we get
f'(k)<f(k+1)-f(k)<f'(k+1);
or,
\frac 1{2k\sqrt k}>\frac 1{\sqrt k}-\frac 1{\sqrt{k+1}}>\frac 1{2(k+1)\sqrt{k+1}}.
Summing from k=1 to n, we get
\frac 12 S_n>1-\frac 1{\sqrt{n+1}}>\frac 12\left(S_{n+1}-1\right);
or,
S_n>2\left(1-\frac 1{\sqrt{n+1}}\right)\quad\text{and}\quad S_{n+1}<3-\frac 2{\sqrt{n+1}}<3.
So the sequence S_n is increasing and bounded above by 3, hence it converges.\Box
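A quick numerical check of the bounds (my own illustration; the actual limit is \zeta(3/2)\approx 2.612, though the problem never needs that):

import math

S = 0.0
for k in range(1, 10001):
    S += 1 / (k * math.sqrt(k))
lower = 2 * (1 - 1 / math.sqrt(10001))
print(lower < S < 3)    # True: the partial sum sits between the two bounds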

11. Let p(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0 be a monic polynomial of degree n>2, with real coefficients and all its roots real and different from zero. Prove that for all k=0,1,2,\cdots,n-2, at least one of the coefficients a_k,a_{k+1} is different from zero.
Solution
Lemma 1
If all the roots of p are real, then all the roots of p' are real.
* The proof of this lemma is clear, because between two consecutive distinct roots of p there is a root of p' (using Rolle’s theorem), and we have that if r is a root of p with multiplicity m\geq 2, then it’s a root of p' with multiplicity m-1.
Lemma 2
At least one of a_1,a_2 is different from 0
* Suppose that the (possibly repeated) roots of p are r_1,r_2,\ldots,r_n, and that a_1=a_2=0.
Then from Vieta’s formulas we get (up to sign) that
\begin{cases}a_1=r_1r_2\cdots r_n\left(\frac{1}{r_1}+\cdots+\frac{1}{r_n}\right)=0\\ a_2=r_1r_2\cdots r_n\left(\frac{1}{r_1r_2}+\frac{1}{r_1r_3}+\cdots+\frac{1}{r_{n-1}r_n}\right)=0\end{cases}
As the r_i are nonzero, this immediately implies that \frac{1}{r_1^2}+\cdots+\frac{1}{r_n^2}=\left(\frac{1}{r_1}+\cdots+\frac{1}{r_n}\right)^2-2\left(\frac{1}{r_1r_2}+\frac{1}{r_1r_3}+\cdots+\frac{1}{r_{n-1}r_n}\right)=0, a clear contradiction, since a sum of nonzero squares cannot vanish. This concludes the proof of lemma 2. \Box
We now prove that the statement is true by strong induction on n\geq 1:
Base cases:n=1,2
If n=1 then there is nothing to prove, and if n=2 then the result is true because a_0\neq 0 (because 0 is not a root).
Induction step: Assume that n\geq 3.
Case 1: a_1\neq 0
In this case we have that p' is a polynomial with all roots real (by lemma 1) and nonzero (since p'(0)=a_1\neq 0), so we may use the induction hypothesis on \frac{p'(x)}{n} (we divide by n just to make the polynomial monic, a minor technicality to allow us to use the hypothesis). As the coefficients of this polynomial are nonzero multiples of the coefficients of p (except a_0), we conclude that there are no consecutive zero coefficients in p.
Case 2: a_2\neq 0
In this case we have that p'' is a polynomial with all roots real (by lemma 1, applied twice) and nonzero (since p''(0)=2a_2\neq 0), so we may use the induction hypothesis on \frac{p''(x)}{n(n-1)} (again, dividing by n(n-1) is just a technicality), because n-2\geq 1. As the coefficients of this polynomial are nonzero multiples of the coefficients of p (except a_0,a_1), again we conclude that there are no consecutive zero coefficients (using lemma 2 for the coefficients a_1,a_2).
This concludes the proof of the desired statement. \Box

12. (a) Prove that

\sum_{k=0}^{2n-1}\frac{x^{k}}{k!}

has only one real root x_{2n-1}.
(b) Prove that x_{2n-1} is decreasing, and \lim_{n\rightarrow\infty}{x_{2n-1}}=-\infty.
Solution to Part (a)
Let

P_{m}(x)=\sum_{k=0}^{m}\frac{x^{k}}{k!}
The key to proving that P_{2n-1} has exactly one real root is to show that P_{2n} has no real roots.
Note that for x\ge0,\ P_{m}(x)\ge 1; hence all zeros must occur at negative x.
By Taylor’s Theorem (with Lagrange mean-value form of the remainder),
e^{x}=P_{2n}(x)+\frac{e^{\xi}x^{2n+1}}{(2n+1)!}
But for x<0,\ \frac{e^{\xi}x^{2n+1}}{(2n+1)!}<0 which implies that P_{2n}(x)>e^{x}>0.
Hence, P_{2n}(x) has no zeros. Now consider P_{2n+1}(x). As an odd degree polynomial, it must have at least one zero. However, by Rolle’s Theorem, if it has two or more zeros, then its derivative must have at least one zero. But P_{2n+1}'(x)=P_{2n}(x), which has no zeros. Hence, P_{2n+1}(x) has exactly one zero.
Now, given any B<0, we know that P_{m}(x) tends to e^{x} uniformly on [B,0]. This implies that there exists an N, depending on B, such that for m>N,\ P_{m}(x) has no zeros in [B,0]. This establishes that the sequence of roots, x_{2n-1}, must tend to -\infty as n\to\infty.
Solution to Part (b)
Let Q_{k}(x) = \frac{x^{2k}}{(2k)!}+\frac{x^{2k+1}}{(2k+1)!}= \frac{x^{2k}}{(2k+1)!}\cdot \left( x+2k+1 \right).
We have that P_{2n-1}(-2n-1) = Q_{0}(-2n-1)+Q_{1}(-2n-1)+\ldots+Q_{n-1}(-2n-1).
Note that Q_{k}(-2n-1) < 0, \, \forall k \in \overline{0,n-1}, since (-2n-1)+2k+1=2(k-n) < 0.
Hence, P_{2n-1}(-2n-1) < 0. Since P_{2n-1} is a polynomial of odd degree and has a unique root, we must have that
-2n-1 < x_{2n-1}.
Consequently,
\begin{aligned}P_{2n+1}\left( x_{2n-1}\right) &= P_{2n-1}\left( x_{2n-1}\right)+Q_{n}\left( x_{2n-1}\right) = Q_{n}\left( x_{2n-1}\right)\\ &= \frac{x_{2n-1}^{2n}}{(2n+1)!}\left( x_{2n-1}+2n+1 \right) > 0,\end{aligned}
since P_{2n-1}(x_{2n-1})=0 and x_{2n-1}>-2n-1.
Since P_{2n+1} is a polynomial of odd degree and has a unique root, we must have that x_{2n+1}< x_{2n-1}, i.e. \left\{ x_{2n-1}\right\}_{n \geq 1} is strictly decreasing.\Box
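To watch the roots numerically (a small sketch of mine; the bisection bracket uses the bound P_{2n-1}(-2n-1)<0 established above):

import math

def P(m, x):
    # partial sum of the exponential series up to degree m
    return sum(x ** k / math.factorial(k) for k in range(m + 1))

def real_root(m):
    # bisection: P_m(-m-2) < 0 < P_m(0) = 1 for odd m, and the real root is unique
    lo, hi = -m - 2.0, 0.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if P(m, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print([round(real_root(2 * n - 1), 4) for n in range(1, 6)])
# the printed roots strictly decrease with n (the first two are -1 and roughly -1.6)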

And I finish the post here. 12 problems for this post!
Also, sorry for not keeping the difficulty level uniform; the introduction part is too easy, and the last problem uses Taylor. :lol:


