Contents

1. Scalar Conservation Laws
   1.1 Shocks and the Rankine–Hugoniot condition
   1.2 Hopf's treatment of Burgers equation
   1.3 Two basic examples of solutions
   1.4 Entropies and admissibility criteria
   1.5 Kružkov's uniqueness theorem
2. Hamilton–Jacobi Equations
   2.1 Other motivation: classical mechanics/optics
   2.2 Hamilton's formulation
   2.3 Motivation for Hamilton–Jacobi from classical mechanics
   2.4 The Hopf–Lax formula
   2.5 Regularity of solutions
   2.6 Viscosity solutions
3. Sobolev Spaces
   3.1 Campanato's inequality
   3.2 Poincaré's and Morrey's inequalities
   3.3 The Sobolev inequality
   3.4 Imbeddings
4. Scalar Elliptic Equations
   4.1 Weak formulation
   4.2 The weak maximum principle
   4.3 Existence theory
   4.4 Elliptic regularity
   4.5 Finite differences and Sobolev spaces
   4.6 The weak Harnack inequality
5. Calculus of Variations
   5.1 Quasiconvexity
   5.2 Null Lagrangians, determinants
6. Navier–Stokes Equations
   6.1 Energy inequality
   6.2 Existence through Hopf
   6.3 Helmholtz projection
   6.4 Weak formulation

Send corrections to .

Scalar Conservation Laws

\[ u_t + (f(u))_x = 0, \qquad x \in \mathbb{R},\ t > 0, \]
with $f$ typically convex and initial data $u(x,0) = u_0(x)$ given. Prototypical example:
\[ f(u) = \frac{u^2}{2}. \]

Motivation for Burgers equation. Fluids in 3 dimensions are described by the Navier–Stokes equations
\[ u_t + u\cdot Du = \nu\,\Delta u - \nabla p, \qquad \operatorname{div} u = 0. \]
Unknowns: the velocity $u$ and the pressure $p$. The parameter $\nu$ is called the viscosity. Get rid of incompressibility and the pressure, and assume one space dimension, $u : \mathbb{R}\times(0,\infty)\to\mathbb{R}$:
\[ u_t + u\,u_x = \nu\,u_{xx}. \]
This is Burgers equation (1940s): the small viscous correction matters only when $u_x$ is large (Prandtl).

Method of characteristics. Consider
\[ u_t + \Big(\frac{u^2}{2}\Big)_x = 0, \]
which is the same as $u_t + u\,u_x = 0$ if $u$ is smooth. We know how to solve the transport equation $u_t + c\,u_x = 0$ with $c \in \mathbb{R}$ constant. Assume
\[ u = u(x(t), t). \]
By the chain rule,
\[ \frac{du}{dt} = u_x\,\frac{dx}{dt} + u_t. \]
If $dx/dt = u$, we have $du/dt = u\,u_x + u_t = 0$.
More precisely,
\[ \frac{du}{dt} = 0, \qquad \frac{dx}{dt} = u_0(x(0)), \]
so the characteristics are the straight lines $x(t) = x_0 + t\,u_0(x_0)$, along which $u$ is constant.

[Figure: characteristics $x(t) = x_0 + t\,u_0(x_0)$ for increasing initial data $u_0$.]

Analytically: we know $u(x(t),t) = u_0(x_0)$ along $dx/dt = u$. Strictly speaking, $(x,t)$ is fixed and we need to determine $x_0$: invert $x = x_0 + t\,u_0(x_0)$ to find $x_0$, and thus $u(x,t) = u_0(x_0)$.

[Figure: the map $x_0 \mapsto x_0 + t\,u_0(x_0)$.]

As long as $x_0 \mapsto x_0 + t\,u_0(x_0)$ is increasing, this method works.

Example 2: decreasing initial data. [Figure: crossing characteristics.] This results in a sort-of breaking-wave phenomenon. Analytically, the solution method breaks down when
\[ 0 = \frac{\partial x}{\partial x_0} = 1 + t\,u_0'(x_0). \]
There are no classical (smooth) solutions for all $t > 0$. Let's try weak solutions then. Look for solutions in $L^\infty_{loc}$. Pick any test function $\varphi \in C_c^\infty(\mathbb{R}\times[0,\infty))$, multiply the equation by $\varphi$, and integrate by parts:
(1) \[ \int_0^\infty\!\!\int_{\mathbb{R}} \Big( \varphi_t\,u + \varphi_x\,\frac{u^2}{2} \Big)\,dx\,dt + \int_{\mathbb{R}} \varphi(x,0)\,u_0(x)\,dx = 0. \]

Definition. $u \in L^\infty_{loc}([0,\infty)\times\mathbb{R})$ is a weak solution if (1) holds for all $\varphi \in C_c^\infty([0,\infty)\times\mathbb{R})$.

[Figure: solution for a simple discontinuity; $\nu_-$ and $\nu_+$ are unit normal vectors.]

Let $\varphi$ have compact support in $\mathbb{R}\times(0,\infty)$ which crosses the line of discontinuity. Apply (1), with $\Omega_-$ the part of the support of $\varphi$ to the left of the line of discontinuity and $\Omega_+$ the part to the right:
\[ 0 = \iint_{\Omega_-} \varphi_t\,u + \varphi_x\,\frac{u^2}{2}\,dx\,dt + \iint_{\Omega_+} \varphi_t\,u + \varphi_x\,\frac{u^2}{2}\,dx\,dt. \]
Since $u$ is smooth inside $\Omega_\pm$, integrate by parts back in each region; the interior terms vanish and only the boundary integrals along the discontinuity $\Gamma$ survive:
\[ 0 = \int_{\Gamma} \varphi\,\Big( u\,\nu_t + \frac{u^2}{2}\,\nu_x \Big)\Big|_-\,ds + \int_{\Gamma} \varphi\,\Big( u\,\nu_t + \frac{u^2}{2}\,\nu_x \Big)\Big|_+\,ds. \]
Notation: $[g] = g_+ - g_-$ for any function that jumps across the discontinuity. Since the outward normals of $\Omega_\pm$ on $\Gamma$ are opposite, we have the integrated jump condition
\[ \int_\Gamma \varphi\,\Big( [u]\,\nu_t + \Big[\frac{u^2}{2}\Big]\,\nu_x \Big)\,ds = 0. \]
Since $\varphi$ is arbitrary,
\[ [u]\,\nu_t + \Big[\frac{u^2}{2}\Big]\,\nu_x = 0. \]
For a shock path $x = s(t)$, the unit normal is
\[ \nu = \frac{(1, -\dot s)}{\sqrt{1 + \dot s^2}} \]
($\dot s$ is the speed of the shock), so
\[ \dot s = \frac{[u^2/2]}{[u]} = \frac{u_- + u_+}{2}. \]
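The characteristic construction is easy to test numerically. The following sketch is illustrative and not from the notes: the initial data $u_0(x) = -\tanh x$ and all function names are my own choices. It transports values along straight characteristics and computes the breaking time $t_* = -1/\min u_0'$, the first time at which $1 + t\,u_0'(x_0)$ vanishes.

```python
import numpy as np

def burgers_characteristics(u0, x0, t):
    """Transport along characteristics: each point x0 moves with constant
    speed u0(x0) and carries its value, valid until characteristics cross."""
    return x0 + t * u0(x0), u0(x0)

def breaking_time(du0, x0):
    """First time with 1 + t*u0'(x0) = 0: t* = -1/min u0' (inf if u0' >= 0)."""
    m = np.min(du0(x0))
    return np.inf if m >= 0 else -1.0 / m

# Illustrative data: u0(x) = -tanh(x), u0'(x) = -sech(x)^2, so t* = 1.
u0 = lambda x: -np.tanh(x)
du0 = lambda x: -1.0 / np.cosh(x)**2
x0 = np.linspace(-5.0, 5.0, 2001)
tstar = breaking_time(du0, x0)
```

Before $t_*$ the map $x_0 \mapsto x_0 + t\,u_0(x_0)$ stays monotone and can be inverted; after $t_*$ it folds over, which is exactly the breaking-wave picture above.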
Thus we obtain the Rankine–Hugoniot condition
\[ \dot s = \frac{[f(u)]}{[u]} \]
for a scalar conservation law $u_t + (f(u))_x = 0$.

Definition. The Riemann problem for a scalar conservation law is given by
\[ u_t + (f(u))_x = 0, \qquad u_0(x) = \begin{cases} u_l, & x < 0, \\ u_r, & x > 0. \end{cases} \]

Example. Let's consider the Riemann problem for the Burgers equation, $f = u^2/2$:
\[ u_0(x) = \begin{cases} 0, & x < 0, \\ 1, & x > 0. \end{cases} \]
By the derivation for "increasing" initial data above, we obtain the shock solution
\[ u(x,t) = \mathbf{1}_{\{x > y(t)\}}, \qquad \dot y = \frac{[u^2/2]}{[u]} = \frac12, \quad y(t) = \frac{t}{2}. \]
The same initial data admits another (weak) solution. Use characteristics: [Figure: rarefaction fan.] Assume $u = v(x/t)$ and write $\xi = x/t$. Then
\[ u_t = -\frac{\xi}{t}\,v', \qquad u_x = \frac{v'}{t}, \]
so $u_t + u\,u_x = 0$ becomes $\frac{v'}{t}\,(v - \xi) = 0$. Choose $v(\xi) = \xi$. Then $u(x,t) = x/t$, and we have a second weak solution
\[ u(x,t) = \begin{cases} 0, & \frac{x}{t} \le 0, \\[2pt] \frac{x}{t}, & 0 \le \frac{x}{t} \le 1, \\[2pt] 1, & \frac{x}{t} \ge 1. \end{cases} \]
So, which, if any, is the solution? Resolution:
- $f = u^2/2$: E. Hopf, 1950.
- General convex $f$: Lax, Oleinik, 1955.
- Scalar equations in $\mathbb{R}^n$: Kružkov.

Hopf's treatment of Burgers equation

Basic idea: the "correct" solution to
\[ u_t + \Big(\frac{u^2}{2}\Big)_x = 0 \]
must be determined through the limit as $\nu\to 0$ of the solutions $u^\nu$ of
\[ u^\nu_t + u^\nu u^\nu_x = \nu\,u^\nu_{xx}. \]
This is also called the vanishing viscosity method. Then, apply a clever change of variables. Assume $u_0$ has compact support. Let
\[ U(x,t) = \int_{-\infty}^x u(y,t)\,dy. \]
(Hold $\nu > 0$ fixed, drop the superscript.) Then
\[ U_t = \int_{-\infty}^x u_t(y,t)\,dy = \int_{-\infty}^x \Big( -\Big(\frac{u^2}{2}\Big)_y + \nu\,u_{yy} \Big)\,dy = -\frac{u^2}{2} + \nu\,u_x, \]
or
(2) \[ U_t + \frac{(U_x)^2}{2} = \nu\,U_{xx}. \]
Equations of the form $u_t + H(Du) = 0$ are called Hamilton–Jacobi equations. Let
\[ \varphi(x,t) = \exp\Big( -\frac{U}{2\nu} \Big) \qquad \text{(Cole–Hopf).} \]
Then
\[ \varphi_x = -\frac{U_x}{2\nu}\,\varphi, \qquad \varphi_t = -\frac{U_t}{2\nu}\,\varphi, \qquad \varphi_{xx} = -\frac{U_{xx}}{2\nu}\,\varphi + \frac{U_x^2}{4\nu^2}\,\varphi. \]
Use (2) to see that
\[ \varphi_t = \nu\,\varphi_{xx}, \]
which is the heat equation for $\varphi$, with
\[ \varphi_0(x) = \exp\Big( -\frac{U_0(x)}{2\nu} \Big). \]
Since $\varphi > 0$, we have uniqueness by Widder's theorem. Thus
\[ \varphi(x,t) = \frac{1}{\sqrt{4\pi\nu t}} \int_{\mathbb{R}} \exp\Big( -\frac{1}{2\nu}\Big[ \frac{(x-y)^2}{2t} + U_0(y) \Big] \Big)\,dy. \]
Define
\[ G(x,y,t) = \frac{(x-y)^2}{2t} + U_0(y), \]
which is called the Cole–Hopf function. Finally, recover $u$ via $u = U_x = -2\nu\,\varphi_x/\varphi$:
\[ u^\nu(x,t) = \frac{\displaystyle\int_{\mathbb{R}} \frac{x-y}{t}\,\exp\Big(-\frac{G}{2\nu}\Big)\,dy}{\displaystyle\int_{\mathbb{R}} \exp\Big(-\frac{G}{2\nu}\Big)\,dy}. \]
Heuristics: we want $\lim_{\nu\to 0} u^\nu(x,t)$.
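As a quick numerical companion to the Riemann problem (an illustrative sketch, not part of the lecture; the helper names are made up), one can evaluate the Rankine–Hugoniot speed for a general flux $f$ and the rarefaction weak solution of the Burgers Riemann problem with $u_l = 0$, $u_r = 1$:

```python
import numpy as np

def rh_speed(ul, ur, f=lambda u: u * u / 2):
    """Rankine-Hugoniot speed s' = [f(u)]/[u] across a jump from ul to ur."""
    return (f(ur) - f(ul)) / (ur - ul)

def rarefaction(x, t):
    """Rarefaction-wave weak solution of Burgers with u=0 (x<0), u=1 (x>0):
    equals 0, x/t, or 1 depending on the similarity variable x/t."""
    return np.clip(x / t, 0.0, 1.0)
```

For this data `rh_speed(0, 1)` gives the shock speed $1/2$, while `rarefaction` is the second, continuous weak solution; the admissibility criteria developed below select the rarefaction.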
[Figure: Laplace's method — the graph of $G(x,\cdot,t)$, its minimum at $y = a$, and the concentration scale $\sqrt{2\nu t}$; we add the exponents to get $G$.]

We hold $(x,t)$ fixed and consider $\nu\to 0$. Let $a$ be the point where $G(x,\cdot,t)$ attains its minimum. We'd expect
\[ \lim_{\nu\to 0} u^\nu(x,t) = \frac{x - a}{t}. \]
Problems:
- $G$ may not have a unique minimum.
- $G$ need not be $C^2$ near the minimum.
Assumptions:
- $u_0$ is continuous (could be weakened).
- $U_0(y) = o(|y|^2)$ as $|y|\to\infty$.

Definition.
\[ a_-(x,t) = \inf\{ z\in\mathbb{R} : G(x,z,t) = \min G \} = \inf\operatorname{argmin} G, \]
\[ a_+(x,t) = \sup\{ z\in\mathbb{R} : G(x,z,t) = \min G \} = \sup\operatorname{argmin} G. \]

Lemma. Use our two basic assumptions from above. Then:
1. These functions are well-defined.
2. $a_+(x,t) \le a_-(x',t)$ for $x < x'$. In particular, $a_-$, $a_+$ are increasing (non-decreasing).
3. $a_-$ is left-continuous, $a_+$ is right-continuous: $a_-(x_-,t) = a_-(x,t)$.
4. $\lim_{x\to+\infty} a_\pm(x,t) = +\infty$ and $\lim_{x\to-\infty} a_\pm(x,t) = -\infty$.
In particular, $a_- = a_+$ except for a countable set of points $x \in \mathbb{R}$ (these are called shocks).

Theorem. Use our two basic assumptions from above. Then for every $x\in\mathbb{R}$, $t > 0$,
\[ \frac{x - a_+(x,t)}{t} \le \liminf_{\nu\to0} u^\nu(x,t) \le \limsup_{\nu\to0} u^\nu(x,t) \le \frac{x - a_-(x,t)}{t}. \]
In particular, for every $x$ except in a countable set, we have
\[ \lim_{\nu\to0} u^\nu(x,t) = \frac{x - a_-(x,t)}{t} = \frac{x - a_+(x,t)}{t}. \]

Graphical solution I (Burgers): treat $U_0(y)$ as given. [Figure.] $y \mapsto C - (x-y)^2/2t$ is a parabola lying below $U_0(y)$. Then
\[ U_0(y) + \frac{(x-y)^2}{2t} - C \ge 0, \]
where $C$ is chosen so that the two graphs "touch".

Graphical solution II: let
\[ H(x,y,t) = G(x,y,t) - \frac{x^2}{2t} = U_0(y) + \frac{(x-y)^2}{2t} - \frac{x^2}{2t} = U_0(y) + \frac{y^2}{2t} - \frac{xy}{t}. \]
Observe $G$, $H$ have minima at the same points for fixed $(x,t)$. [Figure.]

Definition. If $f : \mathbb{R}\to\mathbb{R}$ is continuous, then the convex envelope of $f$ is
\[ \sup\{ g \le f : g \text{ convex} \}. \]
The $a_-$, $a_+$ defined by $U_0(y) + y^2/2t$ are the same as those obtained from the convex hull of $U_0(y) + y^2/2t$ $\Rightarrow$ irreversibility.

Remark. Suppose $u_0 \in C^1$.
Observe that at a critical point of $G$, we have
\[ \partial_y G(x,y,t) = 0, \]
which means
\[ \partial_y \Big( U_0(y) + \frac{(x-y)^2}{2t} \Big) = 0, \]
so
\[ u_0(y) - \frac{x-y}{t} = 0 \iff x = y + t\,u_0(y). \]
Every $y$ such that $x(y) = x$ gives a Lagrangian point that arrives at $x$ at the time $t$. [Figure: $G(x,\cdot,t)$ with $a_\pm$ at the global minima.]

Remark. The main point of the Cole–Hopf method is that it gives a solution formula independent of $\nu$, and thus provides a uniqueness criterion for suitable solutions. Exact references for the source papers are:
- Eberhard Hopf, CPAM 1950, "The partial differential equation $u_t + u\,u_x = \mu\,u_{xx}$".
- S. N. Kružkov, Math. USSR Sbornik, Vol. 10, 1970, no. 2.

Write
\[ S = \{ z\in\mathbb{R} : G(x,z,t) = \min G \}. \]

Proof. [Lemma] Observe that $G$ is continuous in $y$, and
\[ \lim_{|y|\to\infty} \frac{G(x,y,t)}{y^2} = \lim_{|y|\to\infty} \Big( \frac{(x-y)^2}{2t\,y^2} + \frac{U_0(y)}{y^2} \Big) = \frac{1}{2t} > 0. \]
Therefore minima of $G(x,\cdot,t)$ exist and $S$ is a bounded set for $t > 0$, and
\[ a_-(x,t) = \inf S \in S, \qquad a_+(x,t) = \sup S \in S. \]
Proof of monotonicity: fix $x < x'$. For brevity, let $y = a_+(x,t)$. We'll show $G(x',y,t) < G(x',z,t)$ for any $z < y$. This shows that $\min_z G(x',z,t)$ can only be achieved in $[y,\infty)$, which implies $a_-(x',t) \ge y = a_+(x,t)$. Use the definition of $G$:
\begin{align*}
G(x',y,t) - G(x',z,t) &= \frac{(x'-y)^2}{2t} + U_0(y) - \frac{(x'-z)^2}{2t} - U_0(z) \\
&= \underbrace{\Big[ \frac{(x-y)^2}{2t} + U_0(y) - \frac{(x-z)^2}{2t} - U_0(z) \Big]}_{\le 0} + \underbrace{\frac{(x'-x)(z-y)}{t}}_{< 0}.
\end{align*}
The first bracket is $\le 0$ because $y \in \operatorname{argmin} G(x,\cdot,t)$; the second term is $< 0$ because $x' > x$ and, by assumption, $z < y$. Since by definition $a_-(x',t) \le a_+(x',t)$, in particular
\[ a_+(x,t) \le a_-(x',t) \le a_+(x',t), \]
so $a_\pm$ are increasing. The proof of the other properties is similar.

Corollary. $a_-(x,t) = a_+(x,t)$ at all but a countable set of points.

Proof. We know $a_-$, $a_+$ are increasing functions, bounded on bounded sets. Therefore the one-sided limits
\[ \lim_{y\uparrow x} a_\pm(y,t), \qquad \lim_{y\downarrow x} a_\pm(y,t) \]
exist at all $x\in\mathbb{R}$. Let $F = \{ x : a_-(x,t) \ne a_+(x,t) \}$. Then $F$ is countable, since the increasing function $a_-$ jumps at each point of $F$. For $x \notin F$ and $y < x$,
\[ a_-(y,t) \le a_+(y,t) \le a_-(x,t), \]
and therefore
\[ \lim_{y\uparrow x} a_\pm(y,t) = a_-(x,t) = a_+(x,t). \]

Remark. Hopf proves a stronger version of the Theorem:
\[ \frac{x - a_+(x,t)}{t} \le \liminf_{\nu\to0,\ \xi\to x,\ \tau\to t} u^\nu(\xi,\tau) \le \limsup_{\nu\to0,\ \xi\to x,\ \tau\to t} u^\nu(\xi,\tau) \le \frac{x - a_-(x,t)}{t}. \]
Proof. (of the Theorem) Use the explicit solution to write
\[ u^\nu(x,t) = \frac{\displaystyle\int_{\mathbb{R}} \frac{x-y}{t}\,e^{-P/2\nu}\,dy}{\displaystyle\int_{\mathbb{R}} e^{-P/2\nu}\,dy}, \]
where $P = G - \min_y G \ge 0$. [Figure: $P = 0$ on the set of minimizers, $P > 0$ elsewhere.] Fix $(x,t)$. Fix $\delta > 0$, and let $a_-$, $a_+$ denote $a_-(x,t)$, $a_+(x,t)$. Let
\[ l = \frac{x - a_+ - \delta}{t}, \qquad L = \frac{x - a_- + \delta}{t}. \]
Lower estimate: we show
\[ \liminf_{\nu\to0} u^\nu(x,t) \ge l. \]
Consider
\[ u^\nu(x,t) - l = \frac{\displaystyle\int \Big( \frac{x-y}{t} - l \Big)\,e^{-P/2\nu}\,dy}{\displaystyle\int e^{-P/2\nu}\,dy} = \frac{\displaystyle\int \frac{a_+ + \delta - y}{t}\,e^{-P/2\nu}\,dy}{\displaystyle\int e^{-P/2\nu}\,dy}. \]
Estimate the numerator by splitting: the integral over $(-\infty, a_+ + \delta]$ is non-negative, so only the tail over $[a_+ + \delta, \infty)$ needs a bound. On $[a_+ + \delta, \infty)$ we have a uniform lower bound
\[ \frac{P(x,y,t)}{(y - a_+)^2} \ge \alpha > 0 \]
for some constant $\alpha$ depending only on $\delta$. Here we use
\[ \frac{G(x,y,t)}{y^2} = \frac{U_0(y)}{y^2} + \frac{(x-y)^2}{2t\,y^2} \longrightarrow \frac{1}{2t} > 0 \quad \text{as } |y|\to\infty. \]
We estimate
\[ \Big| \int_{a_++\delta}^{\infty} \frac{a_+ + \delta - y}{t}\,e^{-P/2\nu}\,dy \Big| \le \frac1t \int_{a_++\delta}^{\infty} (y - a_+ - \delta)\,e^{-\alpha(y-a_+)^2/2\nu}\,dy \le \frac1t \int_{\delta}^{\infty} s\,e^{-\alpha s^2/2\nu}\,ds = \frac{\nu}{\alpha t}\,e^{-\alpha\delta^2/2\nu}. \]
For the denominator: since $P$ is continuous and $P(x,a_+,t) = 0$, there exists $\delta' > 0$ depending only on $\delta$ such that
\[ P(x,y,t) \le \frac{\alpha\delta^2}{2} \quad \text{for } y \in [a_+, a_+ + \delta']. \]
Thus
\[ \int_{\mathbb{R}} e^{-P/2\nu}\,dy \ge \int_{a_+}^{a_++\delta'} e^{-P/2\nu}\,dy \ge \delta'\,e^{-\alpha\delta^2/4\nu}. \]
Combine our two estimates to obtain
\[ u^\nu(x,t) - l \ge -\frac{\nu}{\alpha t\,\delta'}\,e^{-\alpha\delta^2/4\nu} \longrightarrow 0 \quad \text{as } \nu\to 0. \]
Since $\alpha$, $\delta'$ depend only on $\delta$,
\[ \liminf_{\nu\to0} u^\nu(x,t) \ge l = \frac{x - a_+ - \delta}{t}. \]
Since $\delta > 0$ was arbitrary,
\[ \liminf_{\nu\to0} u^\nu(x,t) \ge \frac{x - a_+}{t}. \]
The upper estimate with $L$ is symmetric.

Corollary. $\lim_{\nu\to0} u^\nu(x,t)$ exists at all but a countable set of points and defines $u(\cdot,t) \in BV_{loc}$ with left and right limits at all $x\in\mathbb{R}$.

Proof. We know
\[ a_-(x,t) = a_+(x,t) \]
at all but the countable set of shocks. So
\[ \lim_{\nu\to0} u^\nu(x,t) = \frac{x - a_-(x,t)}{t} = \frac{x - a_+(x,t)}{t} \]
at these points. $u(\cdot,t) \in BV_{loc}$ because
\[ u(x,t) = \frac{x}{t} - \frac{a(x,t)}{t} \]
is the difference of increasing functions.

Corollary. Suppose $u_0 \in BC(\mathbb{R})$ (bounded, continuous). Then
\[ u(\cdot,t) = \lim_{\nu\to0} u^\nu(\cdot,t) \]
is bounded and is a weak solution to
\[ u_t + \Big(\frac{u^2}{2}\Big)_x = 0. \]

Proof. Suppose $\varphi \in C_c^\infty(\mathbb{R}\times(0,\infty))$. Then we have
\[ \varphi\Big( u^\nu_t + \Big(\frac{(u^\nu)^2}{2}\Big)_x \Big) = \nu\,\varphi\,u^\nu_{xx}, \]
so after integrating by parts,
\[ \int_0^\infty\!\!\int_{\mathbb{R}} \Big( \varphi_t\,u^\nu + \varphi_x\,\frac{(u^\nu)^2}{2} \Big)\,dx\,dt = -\nu \int_0^\infty\!\!\int_{\mathbb{R}} \varphi_{xx}\,u^\nu\,dx\,dt. \]
The right-hand side tends to $0$; on the left we want to pass to the limit and conclude
\[ \int_0^\infty\!\!\int_{\mathbb{R}} \Big( \varphi_t\,u + \varphi_x\,\frac{u^2}{2} \Big)\,dx\,dt = 0. \]
For this we need uniform boundedness: suppose
\[ u^\nu_t + u^\nu u^\nu_x = \nu\,u^\nu_{xx}, \qquad u^\nu(x,0) = u_0 \in BC(\mathbb{R}). \]
The maximum principle yields
\[ \| u^\nu(\cdot,t) \|_{L^\infty} \le \| u_0 \|_{L^\infty}. \]
Use the dominated convergence theorem together with $\lim_{\nu\to0} u^\nu(x,t) = u$ a.e. to pass to the limit.

Two basic examples of solutions

\[ u_t + \Big(\frac{u^2}{2}\Big)_x = 0, \qquad u(x,0) = u_0(x), \qquad U_0(x) = \int_0^x u_0(y)\,dy. \]
Always consider the Cole–Hopf solution
\[ u(x,t) = \frac{x - a(x,t)}{t}, \qquad a(x,t) = \operatorname*{argmin}_y \underbrace{\Big( \frac{(x-y)^2}{2t} + U_0(y) \Big)}_{G(x,y,t)}. \]

Example. $u_0(x) = \mathbf{1}_{\{x>0\}}$. Here
\[ U_0(y) = \int_0^y \mathbf{1}_{\{z>0\}}\,dz = y\,\mathbf{1}_{\{y\ge0\}}. \]
Then
\[ G(x,y,t) = \frac{(x-y)^2}{2t} + y\,\mathbf{1}_{\{y\ge0\}} \ge 0, \]
and $G(x,x,t) = 0$ if $x \le 0$. So $a(x,t) = x$ for $x \le 0$. For $y > 0$, differentiate and set
\[ 0 = \frac{y - x}{t} + 1, \]
so $y = x - t$. Consistency: need $y = x - t > 0$, which gives $a = x - t$ for $x > t$. For $0 \le x \le t$, compare against $y = 0$:
\[ G(x,y,t) - G(x,0,t) = \frac{y^2}{2t} - \frac{xy}{t} + y\,\mathbf{1}_{\{y\ge0\}}. \]
- Case I: $y \le 0$; then $\frac{y^2}{2t} - \frac{xy}{t} \ge 0$.
- Case II: $y \ge 0$; then $\frac{y^2}{2t} + \big( 1 - \frac{x}{t} \big)y \ge 0$.
So $a(x,t) = 0$ for $0 \le x \le t$, and
\[ a(x,t) = \begin{cases} x, & x \le 0, \\ 0, & 0 \le x \le t, \\ x - t, & x \ge t, \end{cases} \qquad u(x,t) = \frac{x - a}{t} = \begin{cases} 0, & x \le 0, \\[2pt] \frac{x}{t}, & 0 \le x \le t, \\[2pt] 1, & x \ge t. \end{cases} \]
This is the rarefaction wave.

Example. $u_0(x) = -\mathbf{1}_{\{x>0\}}$. Then
\[ u(x,t) = -\mathbf{1}_{\{x > -t/2\}}. \]
Shock path: $x = -t/2$.

Here are some properties of the Cole–Hopf solution:
- $u(\cdot,t) \in BV_{loc}(\mathbb{R})$, being the difference of two increasing functions.
- $u(x_-,t)$ and $u(x_+,t)$ exist at all $x\in\mathbb{R}$, and $u(x_-,t) \ge u(x_+,t)$. In particular,
\[ u(x_-,t) \ge u(x_+,t) \]
at jumps. This is the entropy condition. It says that characteristics always enter a shock, but never leave it.
- Suppose $u(x_-,t) > u(x_+,t)$. We have the Rankine–Hugoniot condition
\[ \dot s = \frac{[u^2/2]}{[u]} = \frac{u(x_-,t) + u(x_+,t)}{2}. \]
- If $s(t)$ is a shock location, then
\[ \frac12\big( u(x_-,t) + u(x_+,t) \big) = \frac{1}{a_+(x,t) - a_-(x,t)} \int_{a_-(x,t)}^{a_+(x,t)} u_0(y)\,dy, \]
i.e. the shock speed is the mean of $u_0$ over the interval $[a_-, a_+]$ of characteristics absorbed by the shock.

[Figure: the "clustering picture" — characteristics from $[a_-(x,t), a_+(x,t)]$ merge into the shock.]

Entropies and Admissibility Criteria

\[ u_t + \operatorname{div}\big( f(u) \big) = 0, \qquad u(x,0) = u_0(x), \]
for $x\in\mathbb{R}^n$, $t > 0$. Many space dimensions, but $u$ is a scalar: $u : \mathbb{R}^n\times(0,\infty)\to\mathbb{R}$, $f : \mathbb{R}\to\mathbb{R}^n$ (which we assume to be $C^1$, but which usually is $C^\infty$). Basic calculation: suppose $u \in C_c^\infty(\mathbb{R}^n\times[0,\infty))$, and also suppose we have a convex function $\eta : \mathbb{R}\to\mathbb{R}$ (example: $\eta(u) = u^2/2$):
\[ \frac{d}{dt} \int_{\mathbb{R}^n} \eta(u)\,dx = \int_{\mathbb{R}^n} \eta'(u)\,u_t\,dx = -\int_{\mathbb{R}^n} \eta'(u)\,\operatorname{div}(f(u))\,dx. \]
Suppose we have a function $q : \mathbb{R}\to\mathbb{R}^n$ such that
\[ q_j'(u) = \eta'(u)\,f_j'(u), \qquad j = 1,\dots,n, \]
i.e.
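The piecewise formula for $a(x,t)$ in the first example can be checked by brute-force minimization of the Cole–Hopf function $G(x,y,t) = (x-y)^2/2t + U_0(y)$ on a grid. This is an illustrative sketch, not part of the notes; the grid bounds and function names are arbitrary choices.

```python
import numpy as np

def hopf_lax_minimizer(U0, x, t, y):
    """Minimize G(x,y,t) = (x-y)^2/(2t) + U0(y) over the grid y and
    return the minimizer a(x,t); then u(x,t) = (x - a)/t."""
    G = (x - y)**2 / (2 * t) + U0(y)
    return y[np.argmin(G)]

# u0 = 1_{x>0}, so U0(y) = max(y, 0); expect the rarefaction u = clip(x/t, 0, 1).
U0 = lambda y: np.maximum(y, 0.0)
y = np.linspace(-10.0, 10.0, 400001)
t = 2.0
u = lambda x: (x - hopf_lax_minimizer(U0, x, t, y)) / t
```

For $t = 2$ this reproduces $u = 0$ for $x \le 0$, $u = x/t$ on the fan $0 \le x \le t$, and $u = 1$ for $x \ge t$, matching the piecewise formula for $a(x,t)$ above.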
\[ \operatorname{div} q(u) = \sum_{j=1}^n \partial_{x_j} q_j(u) = \sum_{j=1}^n q_j'(u)\,\partial_{x_j} u = \sum_{j=1}^n \eta'(u)\,f_j'(u)\,\partial_{x_j} u = \eta'(u)\,\operatorname{div} f(u). \]
Such a $q$ always exists: simply define $q_j = \int \eta'(s)\,f_j'(s)\,ds$. Then we have
\[ \frac{d}{dt} \int_{\mathbb{R}^n} \eta(u)\,dx = -\int_{\mathbb{R}^n} \operatorname{div} q(u)\,dx = -\int_{\partial B_R} q(u)\cdot\nu\,dS = 0, \]
provided $u$ has compact support. The pair $(\eta, q)$ is called an (entropy, entropy-flux) pair.

Example. Suppose $u_t + u\,u_x = 0$. Here $f'(u) = u$. If $\eta(u) = u^2/2$, then $q'(u) = \eta'(u)\,f'(u) = u^2$, so $q = u^3/3$. A smooth solution of the Burgers equation satisfies
\[ \partial_t\,\frac{u^2}{2} + \partial_x\,\frac{u^3}{3} = 0 \]
(called the companion law), and
\[ \frac{d}{dt} \int \frac{u^2}{2}\,dx = 0, \]
which is conservation of energy.

Consider what happens if we add viscosity:
\[ u^\nu_t + \operatorname{div}\big( f(u^\nu) \big) = \nu\,\Delta u^\nu, \qquad u^\nu(x,0) = u_0(x). \]
In this case, we have
\[ \frac{d}{dt} \int \eta(u^\nu)\,dx = -\underbrace{\int \operatorname{div}\big( q(u^\nu) \big)\,dx}_{=0} + \nu \int \eta'(u^\nu)\,\Delta u^\nu\,dx = -\nu \int \underbrace{\eta''(u^\nu)}_{\ge 0}\,|Du^\nu|^2\,dx \le 0, \]
because $\eta$ is convex. So if a solution of the original system is a limit $\lim_{\nu\to0} u^\nu$ of solutions of the viscous system, we must have
\[ \frac{d}{dt} \int \eta(u)\,dx \le 0. \]
Fundamental convex entropies (Kružkov): $\eta(u) = |u - k|$, $k\in\mathbb{R}$, with flux $q(u) = \operatorname{sgn}(u-k)\,(f(u) - f(k))$.

Definition. A function $u \in L^\infty(\mathbb{R}^n\times(0,\infty))$ is an entropy solution (or admissible solution) of the original system, provided:
1. For every $\varphi \in C_c^\infty(\mathbb{R}^n\times(0,\infty))$ with $\varphi \ge 0$ and every $k\in\mathbb{R}$ we have
(3) \[ \int_0^\infty\!\!\int_{\mathbb{R}^n} |u - k|\,\varphi_t + \operatorname{sgn}(u-k)\big( f(u) - f(k) \big)\cdot D\varphi\,dx\,dt \ge 0. \]
2. There exists a set $F$ of measure zero such that for $t \notin F$, $u(\cdot,t) \in L^\infty(\mathbb{R}^n)$ and for any ball $B_r$,
\[ \lim_{t\to0,\ t\notin F} \int_{B_r} |u(y,t) - u_0(y)|\,dy = 0. \]

An alternative way to state Condition 1 above is as follows: for every (entropy, entropy-flux) pair $(\eta, q)$, we have
(4) \[ \partial_t\,\eta(u) + \operatorname{div}\big( q(u) \big) \le 0 \quad \text{in } \mathcal{D}'. \]
Recover (3) by choosing $\eta(u) = |u-k|$. (3) $\Rightarrow$ (4) because all convex $\eta$ can be generated from the fundamental entropies. (4) means that if we multiply by $\varphi \ge 0$ and integrate by parts, we have
\[ -\int_0^\infty\!\!\int_{\mathbb{R}^n} \eta(u)\,\varphi_t + q(u)\cdot D\varphi\,dx\,dt \le 0. \]
Positive distributions are measures, so
\[ \partial_t\,\eta(u) + \operatorname{div}\big( q(u) \big) = -m_\eta, \]
where $m_\eta$ is some (non-negative) measure that depends on $\eta$.

To be concrete, consider the Burgers equation and $\eta(u) = u^2/2$ (energy). Dissipation in the Burgers equation:
\[ \frac{d}{dt} \int \frac{(u^\nu)^2}{2}\,dx = \int u^\nu\big( -u^\nu u^\nu_x + \nu\,u^\nu_{xx} \big)\,dx = -\nu \int (u^\nu_x)^2\,dx. \]
But what is the limit of the integral term as $\nu\to0$?
Suppose we have a situation like in the following figure. [Figure: a viscous shock profile connecting $u_-$ to $u_+$, with a transition layer of width $O(\nu)$.]

The traveling wave solution is of the form
\[ u^\nu(x,t) = v\Big( \frac{x - ct}{\nu} \Big), \qquad c = \frac{u_- + u_+}{2}, \qquad v(\pm\infty) = u_\pm. \]
Substituting,
\[ -c\,v' + \Big( \frac{v^2}{2} \Big)' = v''. \]
Integrate (using $v \to u_-$, $v' \to 0$ at $-\infty$) and obtain
\[ -c\,(v - u_-) + \frac{v^2}{2} - \frac{u_-^2}{2} = v'. \]
For a traveling wave,
\[ \nu \int (u^\nu_x)^2\,dx = \int_{\mathbb{R}} \big( v'(\xi) \big)^2\,d\xi, \]
independent of $\nu$! In fact,
\[ \int_{\mathbb{R}} (v')^2\,d\xi = \int_{\mathbb{R}} v'\,\frac{dv}{d\xi}\,d\xi = \int_{u_-}^{u_+} v'\,dv \overset{(*)}{=} \int_{u_-}^{u_+} \frac{(v - u_-)(v - u_+)}{2}\,dv = \frac{(u_- - u_+)^3}{2} \int_0^1 s(1-s)\,ds = \frac{(u_- - u_+)^3}{12}, \]
where the step marked $(*)$ uses the Rankine–Hugoniot condition $c = (u_- + u_+)/2$ to factor
\[ -c\,(v-u_-) + \frac{v^2 - u_-^2}{2} = \frac{(v - u_-)(v - u_+)}{2}, \]
followed by the substitution $v = u_- - s\,(u_- - u_+)$. We always have $u_- > u_+$.

Heuristic picture: [Figure: dissipation concentrating on the shock as $\nu\to0$.] The dissipation measure is concentrated on the shock and has density
\[ \frac{(u_- - u_+)^3}{12}. \]

Kružkov's uniqueness theorem

In what follows, $Q = \mathbb{R}^n\times(0,\infty)$. Consider entropy solutions to
\[ u_t + \operatorname{div}\big( f(u) \big) = 0, \quad (x,t)\in Q, \qquad u(x,0) = u_0(x). \]
Here $u$ is scalar, $f : \mathbb{R}\to\mathbb{R}^n$, and $\| u \|_{L^\infty(Q)} \le M$. Characteristics:
\[ \frac{dx}{dt} = f'(u), \qquad \text{i.e.} \quad \frac{dx_i}{dt} = f_i'(u), \quad i = 1,\dots,n. \]
Let $c_* = \sup_{[-M,M]} |f'(u)|$ be the maximum speed of characteristics. Consider the cone
\[ K = \{ (x,t) : |x| \le R - c_* t,\ 0 \le t \le T \}, \qquad T = R/c_*. \]
[Figure: the cone $K$; $K_t$ = slice at fixed $t$.]

Theorem. Suppose $u$, $v$ are entropy solutions of the system such that
\[ \| u \|_{L^\infty(Q)},\ \| v \|_{L^\infty(Q)} \le M. \]
Then for almost every $t' \ge t$, $t, t' \in [0,T]$, we have
\[ \int_{K_{t'}} |u(x,t') - v(x,t')|\,dx \le \int_{K_t} |u(x,t) - v(x,t)|\,dx. \]
In particular, for a.e. $t \in [0,T]$,
\[ \int_{K_t} |u(x,t) - v(x,t)|\,dx \le \int_{B_R} |u_0(x) - v_0(x)|\,dx. \]

Corollary. If $u_0 = v_0$, then $u = v$. (I.e. entropy solutions are unique, if they exist.)

Proof. Two main ideas:
- the doubling trick,
- a clever choice of test functions.
Recall that if $u$ is an entropy solution, for every $\varphi \ge 0$ in $C_c^\infty(Q)$ and every $k\in\mathbb{R}$, we have
\[ \iint_Q |u(x,t) - k|\,\varphi_t + \operatorname{sgn}(u-k)\big( f(u) - f(k) \big)\cdot D\varphi\,dx\,dt \ge 0. \]
Fix $(y,\tau)$ such that $v(y,\tau)$ is defined, and let $k = v(y,\tau)$.
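The traveling-wave dissipation can be verified by quadrature. This is an illustrative sketch, not from the notes, with made-up names: with $c = (u_- + u_+)/2$ the integrated ODE factors as $v' = (v - u_-)(v - u_+)/2$, so the dissipated energy is $\int (v')^2\,d\xi = \int_{u_+}^{u_-} |v'|\,dv$, which evaluates to $(u_- - u_+)^3/12$.

```python
import numpy as np

def shock_dissipation(ul, ur, n=200001):
    """Energy dissipated in the viscous Burgers shock from ul to ur (ul > ur):
    int (v')^2 dxi = int_{ur}^{ul} |v'| dv, with v' = (v - ul)(v - ur)/2
    from the integrated traveling-wave ODE and Rankine-Hugoniot."""
    v = np.linspace(ur, ul, n)
    h = (ul - ur) / (n - 1)
    integrand = (ul - v) * (v - ur) / 2.0   # = |v'| on [ur, ul]
    return integrand.sum() * h              # trapezoid rule (endpoints vanish)

d = shock_dissipation(1.0, 0.0)             # expect (1 - 0)^3 / 12
```

Scaling the jump, `shock_dissipation(2.0, -1.0)` is $3^3/12$, consistent with the cubic dependence of the entropy dissipation on the jump size.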
\[ \iint_Q |u(x,t) - v(y,\tau)|\,\varphi_t + \operatorname{sgn}(u-v)\big( f(u) - f(v) \big)\cdot D_x\varphi\,dx\,dt \ge 0. \]
This holds for a.e. $(y,\tau)$, so we may integrate in $(y,\tau)$:
\[ \iiiint \big[ \cdots \big]\,dx\,dt\,dy\,d\tau \ge 0. \]
Moreover, this holds for every $\varphi \in C_c^\infty(Q\times Q)$ with $\varphi \ge 0$. We also have a symmetric inequality with $\varphi_\tau$, $D_y$ instead of $\varphi_t$, $D_x$. Add these to obtain
\[ \iiiint |u(x,t) - v(y,\tau)|\,(\varphi_t + \varphi_\tau) + \operatorname{sgn}(u-v)\big( f(u) - f(v) \big)\cdot (D_x\varphi + D_y\varphi)\,dx\,dt\,dy\,d\tau \ge 0. \]
This is what is called the doubling trick. Fix $\psi \in C_c^\infty(Q)$ and a "bump" function $\delta : \mathbb{R}\to\mathbb{R}$ with $\delta \ge 0$, $\int_{\mathbb{R}} \delta(r)\,dr = 1$. For $h > 0$, let $\delta_h(r) = \frac1h\,\delta(r/h)$. Let
\[ \varphi(x,t,y,\tau) = \psi\Big( \underbrace{\frac{x+y}{2}}_{z}, \underbrace{\frac{t+\tau}{2}}_{s} \Big)\,\delta_h\Big( \frac{x-y}{2} \Big)\,\delta_h\Big( \frac{t-\tau}{2} \Big), \]
where we write $\psi = \psi(z,s)$. Then
\[ \varphi_t = \tfrac12\,\partial_s\psi\ \delta_h\,\delta_h + \tfrac12\,\psi\ \delta_h\,\delta_h', \qquad \varphi_\tau = \tfrac12\,\partial_s\psi\ \delta_h\,\delta_h - \tfrac12\,\psi\ \delta_h\,\delta_h'. \]
Adding the two cancels out the last term:
\[ \varphi_t + \varphi_\tau = \partial_s\psi\ \delta_h\,\delta_h. \]
Similarly,
\[ D_x\varphi + D_y\varphi = D_z\psi\ \delta_h\,\delta_h. \]
We then have
\[ \iiiint \Big[ |u(x,t) - v(y,\tau)|\,\partial_s\psi + \operatorname{sgn}(u-v)\big( f(u) - f(v) \big)\cdot D_z\psi \Big]\,\delta_h\,\delta_h\,dx\,dt\,dy\,d\tau \ge 0. \]
$\delta_h$ concentrates at $x = y$, $t = \tau$ as $h\to0$. Let $h\to0$ (partly outlined in homework, Problems 6 & 7):
(5) \[ \iint_Q |u(x,t) - v(x,t)|\,\psi_t + \operatorname{sgn}(u-v)\big( f(u) - f(v) \big)\cdot D\psi\,dx\,dt \ge 0. \]
[To prove this step, use Lebesgue's differentiation theorem.]

(5) $\Rightarrow$ stability estimate. Pick two test functions: [Figure: a smoothed time indicator between $t_1$ and $t_2$, and a spatial cutoff following the cone.] Let
\[ \alpha_h(t) = \int_{-\infty}^t \delta_h(r)\,dr. \]
Choose
\[ \psi(x,t) = \big( \alpha_h(t - t_1) - \alpha_h(t - t_2) \big)\,\chi^\varepsilon(x,t), \]
where
\[ \chi^\varepsilon(x,t) = 1 - \alpha_h\Big( \frac{|x| + c_* t - R + \varepsilon}{\varepsilon} \Big). \]
Observe that
\[ \chi^\varepsilon_t = -\delta_h(\cdot)\,\frac{c_*}{\varepsilon} \le 0, \qquad |D\chi^\varepsilon| = \delta_h(\cdot)\,\frac{1}{\varepsilon}, \]
therefore
\[ \chi^\varepsilon_t + c_*\,|D\chi^\varepsilon| = 0. \]
Since $|\operatorname{sgn}(u-v)\,(f(u) - f(v))| \le c_*\,|u - v|$,
\[ |u-v|\,\chi^\varepsilon_t + \operatorname{sgn}(u-v)\big( f(u) - f(v) \big)\cdot D\chi^\varepsilon \le |u-v|\,\big( \chi^\varepsilon_t + c_*\,|D\chi^\varepsilon| \big) = 0. \quad (\#\#) \]
Substitute $\psi$ into (5) and use (##) to find
\[ \iint \big( \delta_h(t - t_1) - \delta_h(t - t_2) \big)\,|u - v|\,\chi^\varepsilon\,dx\,dt \ge 0. \]
Letting $h, \varepsilon \to 0$ gives the $L^1$ contraction on the cone.

Hamilton–Jacobi Equations

\[ u_t + H(x, Du) = 0 \]
for $x\in\mathbb{R}^n$ and $t > 0$, with $u(x,0) = u_0(x)$. Typical application: curve/surface evolution. (Think of a fire front.) [Figure: a propagating front.]

Example. Suppose the front is given as a graph $y = u(x,t)$.
The normal velocity of the graph $y = u(x,t)$ is
\[ v = (0, u_t)\cdot\hat n, \]
where $\hat n$ is the unit normal,
\[ \hat n = \frac{(u_x, -1)}{\sqrt{1 + u_x^2}}. \]
Then $v = -u_t/\sqrt{1 + u_x^2}$. If the front moves with unit normal velocity,
\[ u_t + \sqrt{1 + u_x^2} = 0, \]
which is the level-set equation of front propagation; in this case $H(p) = \sqrt{1 + p^2}$. In $\mathbb{R}^n$,
\[ u_t + \sqrt{1 + |Du|^2} = 0 \]
for a graph in $\mathbb{R}^{n+1}$. Other rules for the normal velocity can lead to equations with a very different character.

Example. Here $v = -\kappa$ (mean curvature). With
\[ \kappa = -\frac{u_{xx}}{(1 + u_x^2)^{3/2}} \]
and $v = -u_t/\sqrt{1+u_x^2}$ as before, we get
\[ \frac{u_t}{\sqrt{1 + u_x^2}} = \frac{u_{xx}}{(1 + u_x^2)^{3/2}}. \]
So the equation is
\[ u_t = \frac{u_{xx}}{1 + u_x^2}, \]
which is parabolic.

Heuristics: [Figure: corners forming and spreading.] If $u_0 \in C^\infty$, corners can still form in finite time. Compare the Burgers case:
\[ u_t + \Big( \frac{u^2}{2} \Big)_x = 0 \quad \overset{u = U_x}{\longleftrightarrow} \quad U_t + \frac{(U_x)^2}{2} = 0. \]

Other motivation: classical mechanics/optics

cf. Evans, chapter 3.3.
- Newton's second law
- Lagrange's equations
- Hamilton's equations

Lagrange's equations: the state of the system is $x \in \mathbb{R}^n$ (or a point of a manifold $M$, the configuration space). Then
\[ L(x, \dot x, t) = T - U, \]
kinetic minus potential energy. Typically $T = \frac12\,\dot x\cdot M\dot x$, where $M$ is the (positive definite) mass matrix. A path $\gamma$ in configuration space between fixed states $(x_0, t_0)$ and $(x_1, t_1)$ has the action
\[ S(\gamma) = \int_{t_0}^{t_1} L(x, \dot x, t)\,dt. \]
The principle of least action: extremize $S$ over all paths $\gamma$.

Theorem. Assume $L$ is $C^1$. Fix $(x_0,t_0)$, $(x_1,t_1)$. If $\gamma$ is an extremum of $S$, then
\[ -\frac{d}{dt}\,\frac{\partial L}{\partial \dot x} + \frac{\partial L}{\partial x} = 0. \]

("Proof.") Assume that there is an optimal path $x(t)$. Then consider a perturbed path that respects the endpoints:
\[ x^\varepsilon(t) = x(t) + \varepsilon\,\eta(t), \qquad \eta(t_0) = \eta(t_1) = 0. \]
Since $x$ is an extremum of the action,
\[ \frac{dS}{d\varepsilon}\big( x + \varepsilon\eta \big)\Big|_{\varepsilon = 0} = 0. \]
So
\[ 0 = \frac{d}{d\varepsilon}\Big|_{\varepsilon=0} \int_{t_0}^{t_1} L(x + \varepsilon\eta, \dot x + \varepsilon\dot\eta, t)\,dt = \int_{t_0}^{t_1} \Big( \frac{\partial L}{\partial x}\,\eta + \frac{\partial L}{\partial\dot x}\,\dot\eta \Big)\,dt = \int_{t_0}^{t_1} \eta\,\Big( \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial\dot x} \Big)\,dt + \underbrace{\frac{\partial L}{\partial\dot x}\,\eta\,\Big|_{t_0}^{t_1}}_{=0}. \]
Since $\eta$ was arbitrary,
\[ -\frac{d}{dt}\,\frac{\partial L}{\partial\dot x} + \frac{\partial L}{\partial x} = 0. \]

Typical example: the $N$-body problem,
\[ x = (y_1, \dots, y_N), \qquad y_i \in \mathbb{R}^3. \]
Then
\[ T = \sum_i \frac{m_i}{2}\,|\dot y_i|^2, \qquad U = U(x), \]
so Lagrange's equations read
\[ m_i\,\ddot y_i^j = -\frac{\partial U}{\partial y_i^j}, \qquad i = 1,\dots,N, \quad j = 1,\dots,3. \]

Hamilton's formulation

\[ H(x, p, t) = \sup_{y\in\mathbb{R}^n} \big( p\cdot y - L(x, y, t) \big). \]
Then
\[ \dot x = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial x}, \]
called Hamilton's equations. They end up being first-order equations.

Definition. Suppose $f : \mathbb{R}^n\to\mathbb{R}$ is convex.
Then the Legendre transform of $f$ is
\[ f^*(p) = \sup_{x\in\mathbb{R}^n} \big( p\cdot x - f(x) \big). \]

Example. $f = \frac{m x^2}{2}$, $m > 0$, $x\in\mathbb{R}$:
\[ \frac{d}{dx}\big( p\,x - f(x) \big) = 0 \iff p - m\,x = 0 \iff x = \frac{p}{m}, \]
and
\[ f^*(p) = p\cdot\frac{p}{m} - \frac{m}{2}\cdot\frac{p^2}{m^2} = \frac{p^2}{2m}. \]

Example. $f = \frac12\,x\cdot Mx$, where $M$ is positive definite. Then
\[ f^*(p) = \frac12\,p\cdot M^{-1}p. \]

Example. Suppose $f = \frac{|x|^\alpha}{\alpha}$ with $1 < \alpha < \infty$. Then
\[ f^*(p) = \frac{|p|^\beta}{\beta}, \qquad \frac1\alpha + \frac1\beta = 1. \]
Young's inequality $f(x) + f^*(p) \ge p\cdot x$ gives
\[ \frac{|x|^\alpha}{\alpha} + \frac{|p|^\beta}{\beta} \ge p\cdot x. \]

Example. [Figure: Legendre transform of a function with corners — corners of $f$ become affine pieces of $f^*$.]

Theorem. Assume $f$ is convex. Then $f^{**} = f$.

Proof. See Evans. Sketch:
- If $f$ is piecewise affine, then $f^{**} = f$ can be verified explicitly.
- Approximation: if $f_n \to f$ locally uniformly, then $f_n^* \to f^*$ locally uniformly.

Back to Hamilton–Jacobi equations:
\[ u_t + H(x, Du, t) = 0. \]
$H$ is always assumed to be
- $C^2(\mathbb{R}^n\times\mathbb{R}^n\times[0,\infty))$,
- uniformly convex in $p = Du$,
- uniformly superlinear in $p$.
For every path connecting $(x_0,t_0) \to (x_1,t_1)$ associate the "action"
\[ S(\gamma) = \int_{t_0}^{t_1} L(x, \dot x, t)\,dt, \]
with $L$ the Lagrangian, convex and superlinear in $\dot x$. Least action $\Rightarrow$ Lagrange's equations:
(6) \[ -\frac{d}{dt}\,D_{\dot x}L(x, \dot x, t) + D_x L = 0, \]
a system of $n$ second-order ODEs.

("Theorem.") (6) is equivalent to
(7) \[ \dot x = D_p H, \qquad \dot p = -D_x H. \]
Note that those are first-order ODEs.

("Proof.")
\[ H(x, p, t) = \max_{v\in\mathbb{R}^n} \big( v\cdot p - L(x, v, t) \big). \]
The maximum is attained when
(8) \[ p = D_v L(x, v, t), \]
and the solution $v = v(x,p,t)$ is unique because of convexity. Thus
\[ H(x, p, t) = v(x,p,t)\cdot p - L\big( x, v(x,p,t), t \big). \]
Then
\[ D_p H = v + \underbrace{\big( p - D_v L \big)}_{=0 \text{ by } (8)}\cdot D_p v = v, \]
thus $\dot x = v = D_p H$. Similarly, using (6) and (8),
\[ \frac{d}{dt}\,p = \frac{d}{dt}\,D_v L = D_x L, \]
and
\[ D_x H = -D_x L + \underbrace{\big( p - D_v L \big)}_{=0 \text{ by } (8)}\cdot D_x v = -D_x L. \]
Thus $\dot p = -D_x H$.

Connections to Hamilton–Jacobi:
- (7) are the characteristics of Hamilton–Jacobi equations.
- If $p = Du(x(t), t)$, then $du = p\cdot dx - H\,dt$ (cf. Arnold, "Mathematical Methods of Classical Mechanics", Chapter 46):
\[ \frac{du}{dt} = p\cdot\dot x - H(x,p,t), \qquad Du = p \quad\Rightarrow\quad u_t + H(x, Du, t) = 0. \]
Important special case: $H = H(p)$.
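The first Legendre-transform example can be reproduced with a discrete transform (an illustrative sketch; grid bounds and names are arbitrary choices, not from the notes). For $f(x) = m x^2/2$ the supremum is taken over a grid and compared against the exact $f^*(p) = p^2/2m$:

```python
import numpy as np

def legendre(f, x, p):
    """Discrete Legendre transform f*(p) = max_x (p*x - f(x)) over a grid x,
    evaluated for each p in the array p."""
    return np.max(p[:, None] * x[None, :] - f(x)[None, :], axis=1)

# f(x) = m x^2 / 2 should give f*(p) = p^2 / (2m).
m = 2.0
f = lambda x: m * x**2 / 2
x = np.linspace(-50.0, 50.0, 200001)
p = np.array([-3.0, 0.0, 1.0, 4.0])
fstar = legendre(f, x, p)
```

The maximizer $x = p/m$ lies well inside the grid for these $p$, so the discrete maximum matches $p^2/2m$ up to grid resolution; this mirrors the duality used to pass between $L$ and $H$.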
Example. $u_t + |Du| = 0$, $H(p) = |p|$. The characteristics are
\[ \dot x = DH(p), \qquad \dot p = 0, \]
so $p$ is constant and $x$ moves along a straight line with velocity $DH(p(0))$.

The Hopf–Lax Formula

(9) \[ u_t + H(Du) = 0, \qquad u(x,0) = u_0(x), \]
for $x\in\mathbb{R}^n$, $t > 0$. Always, $H$ is considered convex and superlinear, and $L = H^*$. The action of a path connecting $x(t_0) = y$ and $x(t_1) = x$ satisfies
\[ \int_{t_0}^{t_1} L(\dot x(t))\,dt \ge (t_1 - t_0)\,L\Big( \frac{x - y}{t_1 - t_0} \Big), \]
using Jensen's inequality:
\[ \frac{1}{t_1 - t_0}\int_{t_0}^{t_1} L(\dot x)\,dt \ge L\Big( \frac{1}{t_1 - t_0}\int_{t_0}^{t_1} \dot x\,dt \Big) = L\Big( \frac{x(t_1) - x(t_0)}{t_1 - t_0} \Big). \]
So straight lines minimize the action, and we are led to the Hopf–Lax formula:
(10) \[ u(x,t) = \min_{y\in\mathbb{R}^n} \Big[ t\,L\Big( \frac{x-y}{t} \Big) + u_0(y) \Big]. \]

Theorem. Assume $u_0 : \mathbb{R}^n\to\mathbb{R}$ is Lipschitz with $\operatorname{Lip}(u_0) \le M$. Then $u$ defined by (10) is Lipschitz on $\mathbb{R}^n\times[0,\infty)$ and solves (9) a.e. In particular, $u$ solves (9) in $\mathcal{D}'$. (The proof exactly follows Evans.)

Lemma (semigroup property).
\[ u(x,t) = \min_{y\in\mathbb{R}^n} \Big[ (t-s)\,L\Big( \frac{x-y}{t-s} \Big) + u(y,s) \Big], \qquad 0 \le s < t. \]

Proof. [Figure: the two-leg path $z \to y \to x$.] Write
\[ \frac{x - z}{t} = \frac{s}{t}\cdot\frac{y - z}{s} + \frac{t-s}{t}\cdot\frac{x - y}{t-s}. \]
Since $L$ is convex,
\[ L\Big( \frac{x-z}{t} \Big) \le \frac{s}{t}\,L\Big( \frac{y-z}{s} \Big) + \frac{t-s}{t}\,L\Big( \frac{x-y}{t-s} \Big). \]
Given $y$, choose $z$ such that
\[ u(y,s) = s\,L\Big( \frac{y-z}{s} \Big) + u_0(z). \]
The minimum is achieved because $L$ is superlinear and
\[ \frac{u_0(y) - u_0(z)}{|y - z|} \le M, \]
since $u_0$ is Lipschitz. Then
\[ t\,L\Big( \frac{x-z}{t} \Big) + u_0(z) \le (t-s)\,L\Big( \frac{x-y}{t-s} \Big) + u(y,s). \]
But
\[ u(x,t) = \min_{z'} \Big[ t\,L\Big( \frac{x - z'}{t} \Big) + u_0(z') \Big] \le t\,L\Big( \frac{x-z}{t} \Big) + u_0(z). \]
Thus
\[ u(x,t) \le (t-s)\,L\Big( \frac{x-y}{t-s} \Big) + u(y,s) \]
for all $y\in\mathbb{R}^n$. So
\[ u(x,t) \le \min_{y\in\mathbb{R}^n} \Big[ (t-s)\,L\Big( \frac{x-y}{t-s} \Big) + u(y,s) \Big]. \]
To obtain the opposite inequality, choose $z$ such that
\[ u(x,t) = t\,L\Big( \frac{x-z}{t} \Big) + u_0(z). \]
Let $y = (1 - s/t)\,z + (s/t)\,x$. Then
\[ \frac{y - z}{s} = \frac{x - z}{t} = \frac{x - y}{t - s}, \]
and
\[ (t-s)\,L\Big( \frac{x-y}{t-s} \Big) + u(y,s) \le (t-s)\,L\Big( \frac{x-z}{t} \Big) + s\,L\Big( \frac{y-z}{s} \Big) + u_0(z) = t\,L\Big( \frac{x-z}{t} \Big) + u_0(z) = u(x,t). \]
That means
\[ \min_{y\in\mathbb{R}^n} \Big[ (t-s)\,L\Big( \frac{x-y}{t-s} \Big) + u(y,s) \Big] \le u(x,t). \]

Lemma. $u : \mathbb{R}^n\times[0,\infty)\to\mathbb{R}$ is uniformly Lipschitz. On any slice we have
\[ \operatorname{Lip}\big( u(\cdot,t) \big) \le M. \]

Proof. (1) Fix $x, \hat x\in\mathbb{R}^n$. Choose $y\in\mathbb{R}^n$ such that
\[ u(x,t) = t\,L\Big( \frac{x-y}{t} \Big) + u_0(y). \]
Then
\[ u(\hat x, t) - u(x,t) = \inf_z \Big[ t\,L\Big( \frac{\hat x - z}{t} \Big) + u_0(z) \Big] - t\,L\Big( \frac{x-y}{t} \Big) - u_0(y). \]
Choose $z$ such that
\[ \hat x - z = x - y, \qquad \text{i.e.} \quad z = \hat x - x + y. \]
Then
\[ u(\hat x, t) - u(x,t) \le u_0(\hat x - x + y) - u_0(y) \le M\,|\hat x - x|, \]
where $M = \operatorname{Lip}(u_0)$. Similarly,
\[ u(x,t) - u(\hat x,t) \le M\,|x - \hat x|. \]
This yields the Lipschitz claim. In fact, using the semigroup Lemma we have
\[ \operatorname{Lip}\big( u(\cdot,t) \big) \le \operatorname{Lip}\big( u(\cdot,s) \big) \]
for every $0 \le s \le t$, which can be seen as "the solution is getting smoother".
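A direct discretization of the Hopf–Lax formula is a one-line minimization (an illustrative sketch, not from the notes; all names and the test data are my own). With $H(p) = p^2/2$, so $L = H^* = q^2/2$, and $u_0(x) = |x|$, the formula produces the Moreau-envelope solution $u = x^2/2t$ for $|x| \le t$ and $u = |x| - t/2$ otherwise, which the grid minimization reproduces:

```python
import numpy as np

def hopf_lax(u0, L, x, t, y):
    """u(x,t) = min_y { t * L((x - y)/t) + u0(y) } over a grid of y."""
    return np.min(t * L((x - y) / t) + u0(y))

# H(p) = p^2/2 gives L(q) = q^2/2; u0(x) = |x| has the explicit solution
# u = x^2/(2t) for |x| <= t and u = |x| - t/2 for |x| >= t.
L = lambda q: q**2 / 2
u0 = lambda y: np.abs(y)
y = np.linspace(-20.0, 20.0, 400001)
u = lambda x, t: hopf_lax(u0, L, x, t, y)
```

Checking a few points (e.g. $u(0.5, 1) = 0.125$, $u(3, 1) = 2.5$) confirms the formula; the same code also illustrates the semigroup property by composing two shorter time steps.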
(2) Lipschitz in $t$:
(11) \[ u(x,t) = \min_y \Big[ t\,L\Big( \frac{x-y}{t} \Big) + u_0(y) \Big] \le t\,L(0) + u_0(x) \]
(choose $y = x$). Then
\[ \frac{u(x,t) - u_0(x)}{t} \le L(0). \]
On the other hand,
\[ |u_0(y) - u_0(x)| \le M\,|x - y| \quad\Rightarrow\quad u_0(y) \ge u_0(x) - M\,|x - y|. \]
Thus
\[ t\,L\Big( \frac{x-y}{t} \Big) + u_0(y) \ge t\,L\Big( \frac{x-y}{t} \Big) + u_0(x) - M\,|x-y|. \]
By (11),
\begin{align*}
u(x,t) - u_0(x) &\ge \min_y \Big[ t\,L\Big( \frac{x-y}{t} \Big) - M\,|x-y| \Big] \\
&= -t\,\max_z \big[ M\,|z| - L(z) \big] \\
&= -t \max_{w\in\bar B(0,M)} \max_z \big[ w\cdot z - L(z) \big] \\
&= -t \max_{w\in\bar B(0,M)} H(w).
\end{align*}
Now
\[ -\max_{w\in\bar B(0,M)} H(w) \le \frac{u(x,t) - u_0(x)}{t} \le L(0), \]
where both the left and the right term depend only on the equation. $\Rightarrow$ the Lipschitz constant in time is at most $\max\big( L(0),\ \max_{\bar B(0,M)} H \big)$. (Feb 22)

Let $Q = \mathbb{R}^n\times(0,\infty)$.

Theorem. $u$ satisfies (9) almost everywhere in $Q$.

Proof. 1) We will use Rademacher's theorem, which says $u \in \operatorname{Lip}(Q)$ is differentiable a.e. (i.e., in Sobolev space notation, $W^{1,\infty}(Q) = \operatorname{Lip}(Q)$). 2) We'll assume Rademacher's theorem and show that (9) holds at any $(x,t)$ where $u$ is differentiable. Fix $(x,t)$ as above. Fix $q\in\mathbb{R}^n$, $h > 0$. By the semigroup property,
\[ u(x + hq, t + h) \le h\,L(q) + u(x,t) \]
(choose $y = x$). Then
\[ \frac{u(x + hq, t + h) - u(x,t)}{h} \le L(q). \]
So, if we let $h\to0$, we have $u_t + Du\cdot q \le L(q)$. Then
\[ u_t \le -\big( Du\cdot q - L(q) \big); \]
since $q$ is arbitrary, optimize the bound to obtain
\[ u_t \le -H(Du). \]
[Quick reminder: we want
\[ u_t = -H(Du); \]
we already have one side of this.]

Now for the converse inequality. Choose $z$ such that
\[ u(x,t) = t\,L\Big( \frac{x-z}{t} \Big) + u_0(z). \]
[Figure: the segment from $z$ to $x$.] Fix $h > 0$, let $s = t - h$ and
\[ y = \frac{s}{t}\,x + \Big( 1 - \frac{s}{t} \Big)\,z = x - \frac{h}{t}\,(x - z). \]
Then $(y - z)/s = (x - z)/t$, and by the Hopf–Lax formula at time $s$,
\[ u(y,s) \le s\,L\Big( \frac{y-z}{s} \Big) + u_0(z) = s\,L\Big( \frac{x-z}{t} \Big) + u_0(z), \]
so
\[ u(x,t) - u(y,s) \ge (t - s)\,L\Big( \frac{x-z}{t} \Big) = h\,L\Big( \frac{x-z}{t} \Big), \]
i.e.
\[ \frac{u(x,t) - u\big( x - \frac{h}{t}(x - z),\ t - h \big)}{h} \ge L\Big( \frac{x-z}{t} \Big). \]
Let $h\to0$, writing $w = (x - z)/t$:
\[ u_t + Du\cdot w \ge L(w) \quad\Rightarrow\quad u_t \ge L(w) - Du\cdot w \ge -H(Du). \]
Hence $u_t = -H(Du)$ wherever $u$ is differentiable.

Regularity of Solutions

Consider again surface evolution: $u_t - \sqrt{1 + |Du|^2} = 0$ (note the concave Hamiltonian). The surface evolves with unit normal velocity. So far, $\operatorname{Lip}(u(\cdot,t)) \le \operatorname{Lip}(u(\cdot,s))$ for any $s \le t$. "One-sided second derivative":

Definition. $f : \mathbb{R}^n\to\mathbb{R}$ is semiconcave if there exists $C \ge 0$ such that
\[ f(x + z) - 2f(x) + f(x - z) \le C\,|z|^2 \]
for every $x, z\in\mathbb{R}^n$.

[Figure: semiconcavity.]

In the example, $u_0$ is semiconvex (because the Hamiltonian is concave, the signs change).
Definition. $H$ is uniformly convex if there is a constant $\theta > 0$ such that
\[ \xi\cdot D^2H(p)\,\xi \ge \theta\,|\xi|^2 \]
for every $p, \xi\in\mathbb{R}^n$.

Theorem. Assume $H$ is uniformly convex. Then
\[ u(x+z, t) - 2u(x,t) + u(x-z, t) \le \frac{|z|^2}{\theta\,t} \qquad (x, z\in\mathbb{R}^n,\ t > 0). \]

Proof. 1) Because $H$ is uniformly convex, we have
\[ H\Big( \frac{p_1 + p_2}{2} \Big) \le \underbrace{\frac{H(p_1) + H(p_2)}{2}}_{\text{convexity}} - \frac{\theta}{8}\,|p_1 - p_2|^2. \]
Dually,
(12) \[ \frac12\,L(q_1) + \frac12\,L(q_2) \le L\Big( \frac{q_1 + q_2}{2} \Big) + \frac{1}{8\theta}\,|q_1 - q_2|^2. \]
To see this, choose $p_i$ such that $H(p_i) = p_i\,q_i - L(q_i)$. Then
\[ \frac12\big( H(p_1) + H(p_2) \big) = \frac12\big( p_1 q_1 + p_2 q_2 \big) - \frac12\big( L(q_1) + L(q_2) \big). \]
This yields (12).
2) Choose $y$ such that
\[ u(x,t) = t\,L\Big( \frac{x-y}{t} \Big) + u_0(y). \]
By the Hopf–Lax formula (using the same $y$ as a competitor for $x \pm z$),
\begin{align*}
u(x+z,t) - 2u(x,t) + u(x-z,t) &\le t\,L\Big( \frac{x+z-y}{t} \Big) + t\,L\Big( \frac{x-z-y}{t} \Big) - 2t\,L\Big( \frac{x-y}{t} \Big) \\
&\overset{(12)}{\le} 2t\cdot\frac{1}{8\theta}\,\Big| \frac{2z}{t} \Big|^2 = \frac{|z|^2}{\theta\,t},
\end{align*}
applying (12) with $q_{1,2} = (x \pm z - y)/t$.

Viscosity Solutions

(cf. Chapter 10 in Evans.) Again, let $Q = \mathbb{R}^n\times(0,\infty)$ and consider
(13) \[ u_t + H(Du, x) = 0, \qquad u(x,0) = u_0(x). \]
Suppose:
1. $H = H(p, x)$ is continuous;
2. there is no convexity assumption on $H$.
Basic question: the weak solutions are non-unique. What is the "right" weak solution?

Definition (Crandall–Lions). $u \in BC(\mathbb{R}^n\times[0,\infty))$ is a viscosity solution of (13) provided:
1. $u(x,0) = u_0(x)$;
2. for test functions $v \in C^\infty(Q)$:
 (A) if $u - v$ has a local maximum at $(x_0, t_0)$, then $v_t(x_0,t_0) + H(Dv(x_0,t_0), x_0) \le 0$;
 (B) if $u - v$ has a local minimum at $(x_0, t_0)$, then $v_t(x_0,t_0) + H(Dv(x_0,t_0), x_0) \ge 0$.

Remark. If $u$ is a $C^1$ solution to (13), then it is a viscosity solution. Indeed, suppose $u - v$ has a max at $(x_0,t_0)$. Then
\[ \partial_t(u - v) = 0, \qquad D(u - v) = 0 \quad \text{at } (x_0,t_0), \]
so $u_t = v_t$ and $Du = Dv$ there. Since $u$ solves (13),
\[ v_t + H(Dv, x)\big|_{(x_0,t_0)} = u_t + H(Du, x)\big|_{(x_0,t_0)} = 0 \le 0, \]
as desired.

Remark. The definition is unusual in the sense that "there is no integration by parts" in the definition.

Theorem. Assume there is $C > 0$ such that
\[ |H(p_1, x) - H(p_2, x)| \le C\,|p_1 - p_2|, \qquad |H(p, x_1) - H(p, x_2)| \le C\,|x_1 - x_2| \]
for all $x, x_1, x_2, p, p_1, p_2 \in \mathbb{R}^n$. If a viscosity solution exists, it is unique.

Remark. Proving uniqueness is the hard part of the preceding theorem. Cf. Evans for the complete proof. It uses the doubling trick of Kružkov. What we will prove is the following:

Theorem. If $u$ is a viscosity solution, then $u_t + H(Du, x) = 0$ at all points where $u$ is differentiable.

Corollary. If $u$ is Lipschitz and a viscosity solution, then $u_t + H(Du, x) = 0$ almost everywhere.

Proof. Lipschitz $\Rightarrow$ differentiable a.e. (Rademacher).
Lemma (touching by a $C^1$ function). Suppose $u : \mathbb{R}^n\to\mathbb{R}$ is continuous and differentiable at $x_0$. Then there is a $C^1$ function $v : \mathbb{R}^n\to\mathbb{R}$ such that $u - v$ has a strict maximum at $x_0$.

Proof (of the Theorem). 1) Suppose $u$ is differentiable at $(x_0,t_0)$. Choose $v$ touching $u$ at $(x_0,t_0)$ such that $u - v$ has a strict maximum at $(x_0,t_0)$.
2) Pick a standard mollifier $\eta$, let $\eta^\varepsilon$ be the $C^\infty$ rescaling. Let $v^\varepsilon = \eta^\varepsilon * v$. Then
\[ v^\varepsilon \to v, \qquad Dv^\varepsilon \to Dv, \qquad v^\varepsilon_t \to v_t \]
uniformly near $(x_0,t_0)$ as $\varepsilon\to0$. We claim $u - v^\varepsilon$ has a local maximum at some $(x_\varepsilon, t_\varepsilon)$ with $(x_\varepsilon, t_\varepsilon) \to (x_0, t_0)$. (Important here: the strict maximum assumption.) For any $r$, there is a ball $B = B((x_0,t_0), r)$ such that
\[ (u - v)(x_0,t_0) > \max_{\partial B}\,(u - v). \]
So, for $\varepsilon$ sufficiently small,
\[ (u - v^\varepsilon)(x_0,t_0) > \max_{\partial B}\,(u - v^\varepsilon). \]
Then there exists some $(x_\varepsilon, t_\varepsilon)$ in the ball at which $u - v^\varepsilon$ has a local maximum. Moreover, letting $\varepsilon\to0$, we find $(x_\varepsilon, t_\varepsilon)\to(x_0,t_0)$.
3) We use the definition of viscosity solutions to find
\[ v^\varepsilon_t + H(Dv^\varepsilon, x) \le 0 \quad \text{at } (x_\varepsilon, t_\varepsilon). \]
Pass to the limit $\varepsilon\to0$:
\[ v_t + H(Dv, x) \le 0 \quad \text{at } (x_0, t_0). \]
But $u - v$ has a local max at $(x_0,t_0)$ and $u$ is differentiable there $\Rightarrow$ $Du = Dv$, $u_t = v_t$. So
\[ u_t + H(Du, x) \le 0. \]
4) Similarly, use touching from above (a strict minimum) to obtain the opposite inequality.

Why this definition?
- Semiconcavity.
- Maximum principle (Evans).
If $H$ were convex and $u$ semiconcave, once again: [Figure: semiconcavity.]

Proof (of the touching Lemma). [Figure.] We want $v \in C^1$ such that $u - v$ has a strict maximum at $x_0$. We know that $u$ is differentiable at $x_0$ and continuous. Without loss of generality, suppose $x_0 = 0$, $u(x_0) = 0$, $Du(x_0) = 0$. If not, consider
\[ \tilde u(x) = u(x + x_0) - u(x_0) - Du(x_0)\cdot x. \]
We can write $u(x) = |x|\,\sigma(x)$, where $\sigma$ is continuous and $\sigma(0) = 0$. Let
\[ \rho(r) = \max_{|x|\le r} |\sigma(x)|. \]
$\rho : [0,\infty)\to[0,\infty)$ is continuous, increasing, with $\rho(0) = 0$. Then set
\[ v(x) = \int_{|x|}^{2|x|} \rho(r)\,dr + |x|^2. \]
Clearly $v(0) = 0$, and since $\rho$ is increasing,
\[ u(x) - v(x) \le |x|\,\rho(|x|) - \int_{|x|}^{2|x|} \rho(r)\,dr - |x|^2 \le -|x|^2 < 0 = (u - v)(0) \quad \text{for } x \ne 0, \]
so $u - v$ has a strict maximum at $0$. Moreover,
\[ Dv = \big( 2\rho(2|x|) - \rho(|x|) \big)\,\frac{x}{|x|} + 2x, \]
so $v$ is $C^1$ with $Dv(0) = 0$ (just check).

Sobolev Spaces

Let $\Omega \subset \mathbb{R}^n$ be open. Let $D^\alpha u$ denote the distributional derivative, with $\alpha$ a multi-index. $\partial^\alpha u$ shall be the classical derivative (if it exists).

Definition. Let $k\in\mathbb{N}$ and $1 \le p \le \infty$. Let
\[ W^{k,p}(\Omega) = \{ u \in L^1_{loc}(\Omega) : D^\alpha u \in L^p(\Omega),\ |\alpha| \le k \}. \]
If $u \in W^{k,p}(\Omega)$, we denote its norm by
\[ \| u \|_{k,p;\Omega} = \sum_{|\alpha|\le k} \| D^\alpha u \|_{L^p(\Omega)}. \]

Definition. $W_0^{k,p}(\Omega)$ is the closure of $C_c^\infty(\Omega)$ in the $\|\cdot\|_{k,p;\Omega}$-norm.

Proposition. $W^{k,p}(\Omega)$ is a Banach space.

Proposition. Suppose $u \in W_0^{k,p}(\Omega)$. Define
\[ \bar u(x) = \begin{cases} u(x), & x\in\Omega, \\ 0, & x\notin\Omega. \end{cases} \]
Then $\bar u \in W^{k,p}(\mathbb{R}^n)$. (Extension by zero for $W_0^{k,p}(\Omega)$ is OK.)

Choose a standard mollifier $\eta \in C_c^\infty(\mathbb{R}^n)$ with $\eta \ge 0$, $\operatorname{supp}(\eta) \subset B(0,1)$, $\int_{\mathbb{R}^n} \eta\,dx = 1$. For $\varepsilon > 0$, let
\[ \eta^\varepsilon(x) = \frac{1}{\varepsilon^n}\,\eta(x/\varepsilon). \]

Theorem. Suppose $u \in W^{k,p}(\Omega)$. For every open $\Omega' \Subset \Omega$, there exist $u_m \in C^\infty(\Omega')$ such that
\[ \| u_m - u \|_{k,p;\Omega'} \to 0. \]

Proof. Let $\delta = \operatorname{dist}(\bar\Omega', \partial\Omega)$. Choose $\varepsilon_m \to 0$ with $\varepsilon_m < \delta$. Set
\[ u_m(x) = \eta^{\varepsilon_m} * u \]
for $x \in \Omega'$. We have $D^\alpha u_m = D^\alpha \eta^{\varepsilon_m} * u = \eta^{\varepsilon_m} * D^\alpha u$ for every $|\alpha| \le k$. Moreover, for $|\alpha| \le k$, we have $D^\alpha u_m \to D^\alpha u$ in $L^p(\Omega')$.

Typical idea in the theory: we want to find a representative of an equivalence class that has classical properties. If $f \in L^1_{loc}(\Omega)$, set
\[ f^*(x) = \lim_{r\to0} \fint_{B(x,r)} f(y)\,dy \]
(wherever the limit exists).

Theorem. Suppose $u \in W^{1,p}(\Omega)$, $1 \le p \le \infty$. Let $\Omega' \Subset \Omega$.
1. Then $u$ has a representative $u^*$ on $\Omega'$ that is absolutely continuous on almost every line parallel to the coordinate axes, and
\[ \partial_i u^* = D_i u \quad \text{a.e.}, \qquad i = 1,\dots,n. \]
2. Conversely, if $u$ has such a representative with $\partial_i u^* \in L^p(\Omega')$, $i = 1,\dots,n$, then $u \in W^{1,p}(\Omega')$.

Why do we care? Two examples:

Corollary. If $\Omega$ is connected and $Du = 0$, then $u$ is constant.

Corollary. Suppose $u, v \in W^{1,p}(\Omega)$. Then $\max\{u,v\}$ and $\min\{u,v\}$ are in $W^{1,p}(\Omega)$, and we have
\[ D\max\{u,v\} = \begin{cases} Du & \text{a.e. on } \{u \ge v\}, \\ Dv & \text{a.e. on } \{u < v\}. \end{cases} \]

Proof. Choose representatives $u^*$, $v^*$. Then $\max\{u^*, v^*\}$ is absolutely continuous on a.e. line.

Corollary. $u^+ = \max\{u, 0\} \in W^{1,p}(\Omega)$. Likewise for $u^-$.

Corollary. $u \in W^{1,p}(\Omega) \Rightarrow |u| \in W^{1,p}(\Omega)$.

Proof. $|u| = \max\{u, -u\}$.

Proof (of the Theorem). 1) Without loss of generality, suppose $k = 1$ and $u$ has compact support: pick $\zeta \in C_c^\infty(\Omega)$ with $\zeta = 1$ on $\Omega'$, consider $\tilde u = \zeta u$, and extend by $0$.
2) Choose regularizations $u_m$ such that
 (a) $\operatorname{supp}(u_m) \subset B(0,R)$ fixed,
 (b) $\| u_m - u \|_{1,p} \le 2^{-m}$.
Set
\[ G = \Big\{ x\in\mathbb{R}^n : \lim_{m\to\infty} u_m(x) \text{ exists} \Big\} \]
and
\[ u^*(x) = \lim_{m\to\infty} u_m(x) \]
for $x \in G$. We'll show that $|\mathbb{R}^n\setminus G| = 0$. Fix a coordinate direction, say $e_n = (0,\dots,0,1)$. Write $x = (y, x_n)$ with $y\in\mathbb{R}^{n-1}$.
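The mollification used in the approximation theorem can be sketched numerically. This is an illustrative stand-in for the exact convolution $\eta^\varepsilon * u$ (a grid convolution with the standard bump kernel; all names and parameters are my own choices, not from the notes):

```python
import numpy as np

def mollify(u, x, eps):
    """Convolve grid samples of u with the standard mollifier
    eta_eps(s) = eps^-1 * eta(s/eps), eta(s) = c * exp(-1/(1 - s^2)) on (-1,1),
    normalized so the discrete kernel has unit mass."""
    h = x[1] - x[0]
    s = np.arange(-eps, eps + h / 2, h) / eps
    eta = np.where(np.abs(s) < 1,
                   np.exp(-1.0 / (1.0 - np.minimum(s**2, 1.0 - 1e-12))),
                   0.0)
    eta /= eta.sum() * h                       # discrete normalization
    return np.convolve(u, eta, mode="same") * h
```

Mollifying a constant reproduces the constant away from the boundary, and mollifying $|x|$ produces a $C^\infty$ function within $\varepsilon$ of it, which is the quantitative content of $\eta^\varepsilon * u \to u$ in $L^p$.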
Let <\equation*> f(y)=\|\1>>\|D>(u-u)\|(y,x)\x Also let <\equation*> f(y)=>f(y). Observe that <\equation*> >f(y)*\y>>>\|\1>\|D>(u-u)\|\x=>-u|1,1|>\>>\\. Then \> for \> a.e. Fix s.t. \>. This implies <\equation*> lim\>f(y)=0. Let (t)=u(y,t)> for \>. Then <\equation*> g(t)-g(t)=>\>(u-u)(y,x)\x. Thus <\equation*> \|g(t)-g(t)\|\>\|\>(u-u)(y,x)\|\x\f(y) uniformly in . Thus <\equation*> lim\>g(t)=lim\>u(y,t)=u>(y,t) is a continuous function of . We may write <\eqnarray*> (t)>||>(x)|\>\x>>|>||\(\))>>>>|>(y,t)>||> function >.>>>> Thus <\equation*> u>(y,t)=>h(x)\x for every \>. Thus >> is absolutely continuous on the line . <\theorem> >(\)>)> Let p\\>. Let <\equation*> \\u:u\C>(\),\\. Then |\>=W(\)>. <\remark> The above theorem is stronger than the previous approximation theorem , which was only concerned with compactly contained subsets \\\>. <\proof> (Sketch, cf. Evans for details) Use partition of unity and previous approximation theorem. The idea is to exhaust > by |\>\\> for which >\>, for example <\equation*> \\x\\:dist(x,\\)\1/k. Choose partition of unity subordinate to <\equation*> G=\\|\>,\=\ and previous theorem on mollification. <\theorem> Suppose L(\)> and \\1>. Suppose there exists 0> such that <\equation*> | ->\|u(x)->\|\x\M*r> for all balls \>. Then C>(\)> and <\equation*> osc u\C(n,\)M*r>. Here, <\equation*> \|B(x,r)\|=|n>r, <\equation*> >=u(y)\y=| ->u(y)\y, <\equation*> osc u=supB>(u(x)-u(y))=supB>\|u(x)-u(y)\|. and finally >> is the space of Hölder-continuous functions with exponent >. <\proof> Let be a Lebesgue point of . Suppose B(z,r)\\>. Then <\eqnarray*> >->\|>||u(y)->\y>>||>|\|u->\|\y>>||>|\|u->\|\y>>||>|| ->\|u->\|\y\2\M*r>.>>>> Choose and iterate this inequality for increasingly smaller balls. This yields <\eqnarray*> >)>->>|>|M>>>>||>|>>>>> independent of . Since is a Lebesgue point, <\equation*> lim\>>)>=u(x). Thus <\equation*> \|u(x)->\|\C(n,\)M*r>, which also yields <\eqnarray*> >\|>|>|>\|+\|>->\|>>||>|)M*r>.>>>> For any Lebesgue points , s.t. 
<\equation*> B(x,r/2)\B(z,r)B(y,r/2)\B(z,r), this inequality holds: <\equation*> \|u(x)-u(y)\|\C(n,\)M*r>. This shows C>>. To obtain Poincaré's and Morrey's Inequalities, first consider some potential estimates. Consider the Riesz kernels <\equation*> I>(x)=\|x\|-n> for \\n> and the Riesz potential <\equation*> (I>\f)(x)=>>>\y. In >, -n>\L>, for \\n>, but not =0>. <\lemma> Suppose \|\\|\\>, \\n>. Then <\equation*> >\|x-y\|-n>\y\C(n,\)\|\\|/n>, where <\equation*> C(n,\)=\/n>/n>|\>. <\proof> Let \>, without loss . choose with 0> such that \|> <\eqnarray*> >\|y\|-n>\y>||\B>\|y\|-n>\y+\B>\|y\|-n>\y,>>|\|y\|-n>\y>||\B>\|y\|-n>\y+\>\|y\|-n>\y.>>>> We know <\eqnarray*> \B>\|y\|-n>\y>|>|-n>\B>1\y>>|||-n>\>1\y>>||>|\>\|y\|-n>\y>>>> Thus, <\eqnarray*> >\|y\|-n>\y>|>|\|y\|-n>\y=\\-n>\\\=|\>r>.>>>> Then <\equation*> |\>r\r=\||\>. So, <\equation*> |\>r>=/n>n/n>|\>\|\\|/n>. \; <\theorem> Let p\\>. Suppose \|\\> and L(\)>. Then, <\equation*> f|L(\)|>\C(\)|>, where <\equation*> C=\n\|\\|. Recall <\equation*> If(x)=>>\y,x\\. <\proof> By Lemma , <\equation*> >\|x-y\|\y\C. Therefore <\eqnarray*> f(x)\|>|>|>>\y\>|\|x-y\|>\y>>>||>|>|\|x-y\|>.>>>> Therefore <\eqnarray*> >\|If(x)\|\x>|>|>>|\|x-y\|>y*\x|\>>>>||>||p>C>>||||p>.>>>> <\theorem> Suppose > convex, \|\\>. Let )>. Suppose W(\)>, p\\>. Then <\equation*> | ->>\|u(x)->>\|\x\C(n,p)d| ->>\|D*u\|\x <\remark> Many inequalities relating oscillation to the gradient are called Poincaré Inequalities. <\remark> This inequality is not scale invariant. It is of the form <\equation*> | ->>\|u(x)->>\|\x\C\>>| ->>\|D*u\|\x. <\corollary> Let W(\)> and \\1>. Suppose there is 0> s.t. <\equation*> \|D u\|\x\M*r> for all \>. Then C>(\)> and <\equation*> oscu\C*M*r>,C=C(n,\). <\proof> For any \>, Poincaré's Inequality gives <\equation*> | ->\|u->\|\x\C*r| ->\|D u\|=|n>r>\|D u\|\C*M*r>. Then use Campanato's Inequality. <\proof> (of Theorem ) Using pure calculus, derive <\equation*> \|u(x)->\|\|n>| ->>>\y. 
Let \|=1> and <\equation*> \(\)=sup0>{x+t\\\}, which can be seen as the distance to the bounary in the direction >. Let > and t\\(\)>. Then <\eqnarray*> ||)\|>>||>|\|D u(x+s\)\|\s>>||>|(\)>\|D u(x+s\)\|\s.>>>> Since <\equation*> u(x)->=u(x)-| ->>u(y)\y=| ->>u(x)-u(y)\y, we have <\eqnarray*> >>|>|| ->>\|u(x)-u(y)\|\y>>|||\|>>(\)>\|u(x)-u(x+t\)\|t\t*\\>>||>|\|>>(\)>(\)>\|D u(x+s\)\|\s*t\t*\\>>||>|\|>>(\)>)|s>s\s*\\\|n>,>>>> considering <\equation*> max>(\)>t\t=max>(\)|n>=|n>. Rewrite the integral using <\equation*> s\s*\\=\y as <\equation*> \|u(x)->\|\|n>| ->>\y. Recall that <\equation*> If(x)>>\y. Using Theorem on Riesz potentials, we have <\eqnarray*> >\|u(x)->\|\x>|>|>|n\|\\|>>>\y\x>>||>||n\|\\|>C>\|D u(y)\|\y>>>> with =\n\|\\|>. Thus <\eqnarray*> >|L(\)|>>|>||n\|\\|>\n\|\\||\>*\|(n\|\\|)>=d|n\|\\|>>(\)|>>>>> Now, realize that d|n\|\\|>> is just the ratio of volumes of ball of diameter to volume of \|>, which is universally bounded by the isoperimetric inequality. So, the inequality takes the form <\equation*> >|L(\)|>\>>\>>\(\)|>. The desire to make Poincaré's Inequality scale-invariant leads to <\theorem> Suppose C(\)>. Then for p\n>, we have <\equation*> >>(\)|>\C(n,p)(\)|>, where <\equation*> p>=. <\remark> This inequality is , and >> is the only allowable exponent. Suppose we had <\equation*> >\|u(x)\|\x\C(n,p,q)>\|D u(x)\|\x for every C(\)>. Then since >(x)=u(x/\)> for \0> is also in >, we must also have <\eqnarray*> >\|u>(x)\|\x>|>|>\|D u>(x)\|\x>>|\>\|u>(x)\|\>>|>|>>\|D u>\|\x>>|\>\|u(x)\|\x>|>||\>>\|D u(x)\|\x.>>>> We then have <\equation*> \|>\|\>C|>. Unless <\equation*> \=\, we have contradiction: simply choose \0> or \\>. So we must have <\equation*> =-q=p>. <\remark> Suppose . Then the Inequality is <\equation*> >>(\)|>\C(\)|>. Consider >=>. The best constant is when >. Then <\equation*> LHS=>\>(x)\x>=\|B\|>=|n>>. And, <\equation*> RHS=>\|D u(x)\|\x=-dimensional volume>=\. So, we have <\equation*> |n>>\C\\. This gives the sharp constant. 
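The scaling argument above can be checked numerically. A sketch under my own choices (radial profile $u(r)=e^{-r^2}$ in $\mathbb{R}^3$, quadrature grid): for $u_\lambda(x)=u(x/\lambda)$ the ratio $\|u_\lambda\|_{L^q}/\|Du_\lambda\|_{L^p}$ is independent of $\lambda$ exactly when $q=p^*=np/(n-p)$; here $n=3$, $p=2$, $p^*=6$, and a subcritical $q=4$ fails.

```python
import numpy as np

def norm_ratio(lam, q, p=2.0):
    """||u_lam||_{L^q} / ||D u_lam||_{L^p} for u_lam(x) = exp(-|x/lam|^2),
    computed by radial quadrature in R^3."""
    r = np.linspace(1e-8, 20.0 * lam, 60001)
    dr = r[1] - r[0]
    u = np.exp(-((r / lam) ** 2))
    du = 2.0 * r / lam ** 2 * u            # |grad u| for this radial profile
    sphere = 4.0 * np.pi * r ** 2          # surface-area factor in R^3
    lq = (np.sum(sphere * u ** q) * dr) ** (1.0 / q)
    lp = (np.sum(sphere * du ** p) * dr) ** (1.0 / p)
    return lq / lp

critical = 6.0                                         # p* = np/(n-p) for n=3, p=2
r1, r2 = norm_ratio(1.0, critical), norm_ratio(2.0, critical)
s1, s2 = norm_ratio(1.0, 4.0), norm_ratio(2.0, 4.0)    # subcritical q = 4
```

At $q=p^*$ the two ratios agree; at $q=4$ they differ by the factor $\lambda^{3/4-1/2}=\lambda^{1/4}$ predicted by the scaling computation in the text.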
Thus it turns out that in this case the Sobolev Inequality is nothing but the Isoperimetric Inequality. <\proof> <\equation> u(x)=>Du(,\,x,y,x,\,x|\> \>)\y. Then <\equation*> \|u(x)\|\>\|Du()\y,k=1,\,n. First assume , >=1>=n/(n-1)>, 1>. Then <\equation*> \|u(x)\|\>\|Du()\|\y. We need a <\equation*> >ff\f\x\|p|>|p|>\|p|>, provided <\equation*> >+>+\+>=1. In particular, we have <\equation*> >ff\f\x\>f\>f, choosing =p=\p=n-1>. Progressively integrate () on ,\,x> and apply Hölder's Inequality. Step 1:\ <\eqnarray*> >\|u(x)\|\x>|>|>\|Du(|^>)\|\y|\>>>>\>>Du()\y|\>(x)>>>\x>>||>|>\|Du(|^>)\|\y>\|Du()\|\y\x.>>>> Step 2: Now integrate over >: <\eqnarray*> >>\|u(x)\|>\x\x>|>|>>\|Du()\|\x\y>|\>>>>>>|||\>>Du()\y>>>\|Du()\|\y\x>\x>>>> Use Hölder's Inequality again. Repeat this process times to find <\equation*> >\|u(x)\|>\x\>\|Du\|\x> or <\eqnarray*> >\|u(x)\|>\x>>|>|>\|Du\|\x>>>||>|>\|Du\|\x,>>>> where we used <\equation*> \a|n>\+\+a|n>. Since <\equation*> \|D u\|=u\|+\+\|Du\|>, we have by Cauchy-Schwarz <\equation*> \|Du\|\>\|D u\|. Therefore, <\equation*> ||||||||>|>\>|>>.>>>>> For 1>, we use the fact that <\equation*> D u>=\*u-1>D u for any >. Therefore we may apply the Sobolev Inequality with to find <\eqnarray*> >\|u\|\>\x>>|>|>>\|D u>\|\x=|>>\|u\|-1>\|D u\|\x>>||>||>>\|u\|-1)p>\x>>>\|D u\|\x>.>>>> Choose > that <\equation*> \\=(\-1)*p. This works for p\n> and yields <\equation*> >>|>\>p>|>, where <\equation*> p>=\\ as n>. <\theorem> Suppose W(\)>, p\\>. Then C(\)>. And <\equation*> oscu\C*r|>. In particular, if >, is locally Lipschitz. <\proof> Poincaré's Inequality in > reads <\equation*> | ->\|u->\|\x\C*r| ->\|D u\|\x. Therefore, by Jensen's Inequality <\eqnarray*> | ->\|u->\|\x>|>|| ->\|D u\|\x>>||||n>r>(B)|>>>||>||>.>>>> Now apply Campanato's Inequality. What have we obtained? |gr-frame|>|gr-geometry||gr-line-arrows|||>>>|(\)>|>|>> for >p\n>|>||>|>(\)> for p\\>.|>||||>>>||>>||||>>>||>>||||>>>||>>>>|> Typical example where we need >: Suppose is a map \\>. 
(We are often interested in .) Especially care about <\equation*> >det(D u)\x for \\\>. Then <\equation*> det(D u)=>(-1)>u>\u> So, we need \L(\)> or W>. <\theorem> If W(\)>, then BMO(\)>, where <\equation*> [u]=sup| ->\|u->\|\x and )\{[u]\\}>. For a compact domain, <\equation*> L\\L\\\L>\BMO, where > is contained in the dual of . <\definition> A Banach space > is into a Banach space > (written \B>) if there is a continuous, linear one-to-one mapping \B>. <\example> (\)\L>>(\)> for p\n>. Let > be bounded. <\example> (\)\C(|\>)> for p\\>. <\example> (\)\L(\)> for p\n> and q\p>>, where we used <\equation*> (\)|>\>>|>\|\\|>>, which is derived from Hölder's Inequality. <\definition> The imbedding is (written \B>) if the image of every bounded set in > is precompact in >. Recall that in a complete metric: precompact>totally bounded. <\theorem> Assume > is bounded. Then <\enumerate-numeric> (\)\L(\)> for p\n> and q\p>>. (\)\C(|\>)> for p\\>. <\remark> We only have strict inequality in part 1. (That is, >> does not work.) <\proof> By Morrey's Inequality, (\)\C(|\>)>. Now apply the Arzelà-Ascoli theorem. We have to reduce to Arzelà-Ascoli. Let be a bounded set in (\)>. We may as well assume that C(\)>. Let \0> be a standard mollifier. Consider the family <\equation*> A>={u\\>\|u\A},\>(y)=>\>. >> is precompact in (|\>)>. We must show >> is uniformly bounded, equicontinuous. <\equation*> u>(x)=>>\>u(y)\y=>>\>u(y)\y. Therefore, <\eqnarray*> >(x)\|>|>||\|>|\>(\)|>.>>||>||\|>|\>\|\\|(\)|>>>||>||\|>|\>\|\\|.>>>> Similarly, <\equation*> D u>(x)=>>D \>u(y)\y. Thus <\equation*> \|D u>(x)\|\>|\|>\|\\|. The claim is thereby established. In particular, the claim implies >> is precompact in (\)>. (Indeed, if >> is convergent in (|\>)>, then by DCT, >> is convergent in (\)>. 
We also have the estimate <\eqnarray*> >(x)\|>||>>\>(u(x)-u(x-y))\y>>||,supp(\)\B(0,1)>>|\(z)(u(x)-u(x-\z))\z>>>> By the fundamental theorem of calculus, the subterm <\equation*> u(x)-u(x-\z)=|\t>u(x-\*t*z)\t\D u(x-\t*z)\z*\t. Then <\eqnarray*> >(x)\|>|>|\(z)\|z\|>\|D u(x-t\)\|\t*\z,\=.>>>> (We use \0> and differentiability on a line.) Therefore, <\eqnarray*> >\|u(x)-u>(x)\|\x>|>|\(z)\|z\|>>\|D u(x-t\)\|\x|\>)>*\t \z>>||>|(\)|>\(z)\|z\|>\t \z>>||>|(\)|>\\M\|\\|,>>>> where <\equation*> (\)=>\|D u(x-t\)\|\x\>\|D u(x)\|\x. using C+>. Summary: <\itemize> >> precompact in (\)>>totally bounded, Every A> is >-close to >\A>>. Therefore is totally bounded in >. This shows that is precompact in (\)>. If q\p>>, we have <\equation*> >|L|>\>|L(\)|>>|L>>(\)|>\>|\>>\>|\>>, where <\equation*> =|1>+|p>>. Therefore is totally bounded in (\)>. |gr-frame|>|gr-geometry||gr-line-arrows|||>>>|W>|>|>|>||||>>>||>>||||>>>||>>||||>>>||>>||||>>>||>>|>>>|>|>|>|>|)>|>||||>>>||||>>|(|\>>)|>|L>|>|>|>|>>|>|>|>>>|> Reference: Gilbarg/Trudinger, Chapter 3 and 8 The basic setup in divergence form: <\eqnarray*> ||D u+d*u>>|||(aDu+bu)+cDu+d*u,>>>> where \\n>>, \\>, \\>. Main assumptions: <\enumerate-numeric> Strict ellipticity: There exists \0> such that <\equation*> \A(x)\\\\|\\| for every \>, \\>. L>(\)>. There exists \0>, \0> such that <\equation*> >(\)|>A)>|L>(\)|>\\ and <\equation*> >|>+|>+|>\\. Typical problem is to minimize <\equation*> I[u]=>E(D u)\x, where is ``energy''. If is a minimizer, we obtain the Euler-Lagrange equations as follows: <\eqnarray*> |\t>I[u+t*v]\|>|||\t>>E(D(u+t*v))\x\|=>D E(D(u+t*v))\D v*\x>>|||>D E(D*u)\D v*\x.>>>> Necessary condition for minimum: <\equation*> >D E(D u)\D v*\x=0 for all test functions . This ``means'' that <\equation*> >D(D E(D u))\v*\x, which is the term that we had in the first place--namely the : <\equation*> div(D E(D u))=0 with \\> and \\> is a given smooth function, for example > for 1>. 
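In cleaned-up notation, the first-variation computation above reads (a restatement under the standing convention that $E$ is smooth and $u$ a minimizer, nothing new added):

```latex
0 \;=\; \frac{d}{dt}\Big|_{t=0} I[u+tv]
  \;=\; \int_\Omega DE(Du)\cdot Dv\,dx
  \;=\; -\int_\Omega \operatorname{div}\!\big(DE(Du)\big)\,v\,dx
  \qquad \text{for all } v\in C_c^\infty(\Omega),
```

so a minimizer satisfies the Euler-Lagrange equation $\operatorname{div}(DE(Du))=0$ in the weak sense; for instance, $E(A)=\tfrac12|A|^2$ gives Laplace's equation $\Delta u = 0$.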
In coordinates, <\equation*> DD>E(Du)]=0\D,p>E(Du)\Du=0tr(A*Du)=0, where E(D u(x))>, which is the unknown as yet.\ Assuming solves the above problem. Show that is regular. A priori, we only know that L>>>DeGiorgi and Nash>classical regularity. Formally multiply by C(\)> and integrate by parts: <\eqnarray*> ||>(div(A*D u+b*u)+(c\D u+d*u))\v*\x>>|||>(D vA*D u+b\D v*u)+(c\D u+d*u)v*\x>>|||>>> Basic assumption: W(\)>. Then is well-defined for all C(\)> and by Cauchy-Schwarz for all W(\)>. Now consider the classical Dirichlet problem: <\eqnarray*> || \,>>||| \\.>>>> <\definition> Given L(\)>, L(\)>, \W(\)>. W(\)> is a solution to <\eqnarray*> || \,>>||| \\>>>> if <\enumerate> >[g*v-f*\D v]\x> for C(\)> \W(\)>. We want 0\sup>u\sup\>u>. How do we define \>u>? <\definition> Suppose W(\)>. We say 0> on \> if <\equation*> u=max(u,0)\W(\). Similarly, v> on \> if <\equation*> (u-v)\W(\). <\definition> <\equation*> sup\>u=infk\\:u\k=infk\\:(u-k)\W(\). <\description> )>>There is a \0> such that A(x)\\\\|\\|> for all \>, \\>. )>>There is \0>, \0> such that <\equation*> >(|>+|>)+>|>\\,A)|\|>\\. <\definition> Given >, find W(\)> such that <\eqnarray*> )L u>||>>,>>|u>||\>>,>>>> where )> means and means \W(\)> with <\eqnarray*> ||>D v(A*D u-b*u)-(c\D u+b)v*\x,>>|||>D v\f-g*v*\x.>>>> If is in divergence form, say <\equation*> 0=A Du+b*\D u+d*u, where we need 0> to obtain a maximum principle (see Evans or Gilbarg&Trudinger, Chapter 3). <\description> )>>0> in the weak sense, that is <\equation*> >(div b+d)v*\x\0\v\C(\), v\0. Precisely, <\equation*> >d*v-b\D v*\x\0\v\C(\), v\0. <\definition> W(\)> is a to the Generalized Dirichlet Problem if F(v)> for all C(\)> with 0>, which is <\equation*> L u\g+div f read in a weak sense. <\theorem> Suppose 0> and )>, )>, )> hold. Then <\equation*> sup>u\sup\>u. <\remark> Recall <\eqnarray*> \>u>||\:(u-k)\W(\)}>>|||0:(u-k)\W(\)}.>>>> <\remark> There are no assumptions of boundedness or connectedness or smoothness on >. 
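The weak maximum principle has a transparent discrete analogue that makes a good sanity check. A sketch for the model case $Lu=\Delta u=0$ (that is, $b=c=d=0$) on the unit square; the grid size, boundary data, and iteration count are my own choices. A discrete harmonic function equals the mean of its neighbours, so its maximum sits on the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 21
u = np.zeros((N, N))
# random Dirichlet data on the boundary of the unit square
u[0, :], u[-1, :] = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
u[:, 0], u[:, -1] = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)

# Jacobi iteration: each sweep replaces interior values by the mean of neighbours,
# converging to the discrete harmonic function with the given boundary data
for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])

boundary_max = max(u[0, :].max(), u[-1, :].max(), u[:, 0].max(), u[:, -1].max())
interior_max = u[1:-1, 1:-1].max()
```

Every interior value is a convex combination of boundary values, so `interior_max` cannot exceed `boundary_max`: the discrete form of $\sup_\Omega u \le \sup_{\partial\Omega} u$.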
Compare the above theorem with the classical maximum principle for u\0>. <\corollary> (\)> solutions to the Generalized Dirichlet Problem are unique if they exist. <\remark> Nonuniqueness of the extension problem. Consider the ball and <\equation*> u(x)=a+(1-a)\|x\| for \>. <\equation*> \|D u(x)\|\\\a=0,n\3. <\proof> (of weak maximum principle) Step 1) The inequality )> <\equation*> >(d v-D v\b)\x\0 for 0>, C(\)> holds for all W(\)> (since by )>, L>>). Step 2) Basic inequality: <\equation*> B[u,v]\0 for C(\)> and 0>. <\eqnarray*> >D v(A*D u+b u)-(c\D u+d*u)v*\x>|>|>|>D vA\D u-(b+c)D u\v>|>|>d(u*v)-b\D(u*v)\x\0.>>>> Now choose test functions cleverly such that 0> and W(\)>. (applying step 1) But holds for W(\)> and W(\)> holds for W(\)> and C(\)>, which is OK. (See the chain rule for > in Evans.) <\equation*> >D vA*D u*\x\(b+c)D u\v*\x, provided 0>, 0>, W(\)>. Step 3) Let sup\>u>. Suppose >u\l> (else there is nothing to prove). Choose k\sup>u> and >. We know that W(\)> by the definition of . <\equation*> l=sup\>u>=inf{k\0:(u-k)\W(\)}. Assume k\sup>u=:m>, (u-k)>>. Then <\equation*> D v=|k,>>||k.>>>>> And if ={D v\0}>, we have <\eqnarray*> >\|D v\|\x|>>D vA*D v\x>||(E)>>|\>v\|D v(x)\|\x.>>>> <\equation*> >\|D v\|\2\>\|v\|\x>\|D v\|\x. Thus we obtain <\equation*> (\)|>\2\(\)|>. By Sobolev's Inequality, <\equation*> >>(\)|>\C(\)|>\C2\(\)|>\C2\\|\\|>>(\)|>. Thus <\equation> \|\\|\2\>\0, independent of . Letting m>, we obtain that \> (else W(\)>. Choosing , obtain a.e. contradicting (). <\definition> A continuous operator \B>, where > and > are Banach spaces, is called if is precompact in > for every bounded set B>. <\theorem> Assume B> is linear, continuous and . Then either <\enumerate> has a solution 0> <\with|par-first|0> or > exists and is a bounded linear operator from B>. Read this as ``Uniqueness and Compactness>Existence'' <\theorem> Let \\\\> be bilinear form on a Hilbert space such that <\enumerate> K> for some 0>, k> for some 0>. 
Then for every \>> there exists a \> such that for every \>. Assumption 2 above is called . <\proof> 1) Riesz representation theorem. For any \> the map B[u,v]> defines a bounded linear functional on >. By the Riesz Representation Theorem, there is \>> such that <\equation*> B[u,v]=T v(u) for every \>. Thus we obtain a linear map \\>>, T v>. 2) K>, so \K>. Moreover, <\equation*> k\B[v,v]=T v(v)\. Thus <\equation*> 0\k\|>\K. Claim: is one-to-one. k\=0\=0>. Claim: is onto. If not, there exists 0> such that )\z>. Now use that )> is closed. Choose . Then <\equation*> 0==T z(z)\k <\theorem> Let > be bounded, assume >, >, >. Then the Generalized Dirichlet Problem has a solution for every L(\)> and \W(\)>. Then the Generalized Dirichlet Problem can be stated as finding a W(\)> such that <\equation*> B[u,v]=F(v)W(\)>>. using <\eqnarray*> ||>(f\D v-g*v)\x.>>>> <\proof> (Step 1) Reduce to the case =0>. Consider =u-\>. (Step 2) <\lemma> Assume (>), (>) hold. Then <\equation*> B[u,u]\|2>>\|D u\|-\\>\|u\|\x. <\proof> \; <\eqnarray*> ||>[A\D u|\>+D u|\>+>\x.>>|||>D uA*D u\x|(E)>\>\|D u\|\x.>>||>||>+|>)>\|u\|*\|D u\|\x\|2>(\)|2>+>(|>+|>)(\)|2>>>>> using the elementary inequality <\equation*> 2a*b\\a+|\> for \0>. By assumption (>), <\equation*> |2>+|2>|2\>+|>|2>\\\. Now combine these estimates. \W(\)>, a Hilbert space. >=>>>. Isn't >=\> by reflexivity of Hilbert spaces? No, only \>>. In >, we denote <\equation*> H(\)\u\\:(1+\|k\|)\|(\)\|\\\\. This works for every \>. If , we have <\equation*> >(1+\|k\|)\|(\)\|\\=C>(\|u\|+\|D u\|)\x=C(\)|2>. By Parseval's Equation <\equation*> >u(x)v>(x)\x=C>(k)>(k)\k. If H>, H>, then RHS is <\equation*> |>=>(1+\|k\|)(k)(1+\|k\|)>(k)\k\|>|> by Cauchy-Schwarz. (cf. a 1-page paper by Meyer-Serrin, PNAS, 1960s, the title is .) Every \> also defines an element of >> as follows: Define <\equation*> I(u)(v)=>u(x)v(x)\x v\H. Recall that the first step in the proof of our Theorem is to reduce to =0> by setting =u-\> if \0>. <\lemma> :\\\>> is compact. 
<\proof> I>, where :\\L> is compact by Rellich and :L\\>> is continuous. We are trying to solve <\equation> L u=>\>> Indeed, given , , we have defined <\equation*> F(v)=>(D v\f-g*v)\x. We treat () as an equation in >>. Define <\equation*> L>=L-\I for \\> and the associated bilinear form <\equation*> B>[u,v]=B[u,v]+\>u(x)v(x)\x. Thus, <\eqnarray*> >[u,u]>||>u(x)v(x)\x>>|||Lemma >>||2>>\|D u\|\x-\\>\|u\|\x+\>\|u\|\x>>||>||2>>\|D u\|\x+>\|u\|\x=\|2>.>>|>|>|\+\/2.>>>> So >> is coercive>Lax-Milgram: >:\>\\> is bounded. <\eqnarray*> || \>>>||>|>u+\I(u)=g+div f \>.>>||>|>|\>>*>>|\>>=L>(g+div f) \.>>>> Weak maximum principle>if , then . By the Fredholm alternative, using >I>\!u> for every . <\remark> >> is the abstract Green's function. <\itemize> Bootstrap arguments: Finite differences and Sobolev spaces Weak Harnack Inequalities: Measurable>Hölder continuous (deGiorgi, Nash, Moser) Let <\equation*> \u=)-u(x)|h>, where > is the th coordinate vector w.r.t. the standard basis of >. u> is well-defined on \\\> provided dist(\,\\)>. <\theorem> \\\>, dist(\,\\)>, <\enumerate-alpha> Let p\\> and W(\)>. Then u\L(\)> and <\equation*> u|L(\)|>\(\)|>. Let p\\>. Suppose L(\)> and <\equation*> u|L(\)|>\M, for all dist(\,\\)>>W(\)> and (\)|>\M>. Ell. regularity started over. Existence of weak solutionssmoothness of <\itemize> >Regularity of weak solutions >Uniqueness of classical solutions+Existence. Basic assumptions: ,E,E> as before, (assume ). <\theorem> Assume , > >, >. Moreover, assume , Lipschitz functions. Then for any \\\> we have <\equation*> (\)|>\C(\)|>+(\)|>, where ,d,K)>, where |>,|>)> and =dist(\,\\)>. In particular, a.e. in >. <\proof> Uses finite differences > for \|h\|\d>. It suffices to show Du|L(\)|>> uniformly bounded for \|h\|\d/2>.\ Definition of weak solutions is: for every C(\)> <\equation*> >D v(A*D u+b*u)-(c\D u+d*u)v\x=>g*v*\x. Rewrite as <\equation> >D v(A*D u)\x=>*v*\x for all C(\)>, where <\equation*> =g+(c+b)\D u+d*u. By )> we know that \L(\)>. 
Now think about ``discrete integration by parts'': <\eqnarray*> >(\v)f(x)\x>||>v(x)\f(x)\x>>>> for every L(\)>. We may replace C(\)> by v\C(\)> in (), provided h\d/2>. Then we have <\equation> >D v(A\D u)|\>)>\x=->(D\v)A*D u\x)>->\v*\x. In coordinates, )> is <\eqnarray*> (a(x)Du(x))>||(x+h*e)Du(x+h*e)-a(x)Du(x)|h>>>|||(x+h*e)(\Du)(x)+(\a)(x)Du(x).>>>> By assumption, (x)> is Lipschitz, therefore <\eqnarray*> a(x)\|>||(x+h*e)-a(x)\||h>\)\\|h\||\|h\|>=Lip(a),>>>> where <\equation*> \\Lip(a)=sup\>(x)-a(y)\||\|x-y\|>. <\equation*> \; We may rewrite () as <\eqnarray*> >(D vA(x+h*e)D\u\x>||>(\v+\D v)\x>>||>||>v|L|>+|L|>|>>>||>||L|>+|L|>)|>>>||>|(\)|>+(\)|>|>.>>>> This holds for all C(\)> and by density for all W(\)>. So we may choose <\equation*> v=\\u, where \C(\)> and <\equation*> dist(supp(\),\\)\|2>. By strict ellipticity )>, we have <\equation*> \A\\\\|\\| \\\, x\\. If \0>, we have <\equation*> \(\D u)A(x+h*e)(\D u)\\\\|\D u\|. Therefore, \u> in the estimate of rewritten () <\eqnarray*> >\\|\D u\|\x>||(E)>>|>\(\D u)A\D u>>||>>|>D vA\D u->(v*D\)A*\D u>>||>||>+|>)-(\)???.>>>> <\equation*> D v=D(\\u)=D \\u+\D\u. Observe that we may choose =1> on > and \C(\)> such that |L>|>\C(n)/d>. Estimate RHS using this to find <\equation*> \>\|D\u\|\x\\>\\|D\u\|\x\C(\)|>+(\)|>. <\theorem> Assume )> and )>. Assume L(\)>, L> for some n>. Then if is a > subsolution with 0> on \>, we have <\equation*> sup>u\C|L(\)|>+k, where <\equation*> k=>|>+|>C=(n,\,q,\|\\|). <\proof> To expose the main idea, assume that <\equation*> f=0,g=0\k=0 and , . We need to show <\equation*> sup>u\C|L|>. Recall that (1) 0> on \> means that <\equation*> u=max{u,0}\W(\). (2) is a subsolution if <\equation*> B[u,v]\F(v) for W(\)> and 0>, which means that <\equation*> >D v(A*D u+b u)\x\0 for W(\)> and 0>. Choose test functions of the form )>> for some \1>. Let u> for brevity. We know that W(\)>. Let <\equation*> H(z)=>>|z\N,>>|>|N,>>>>> i.e. 
|gr-frame|>|gr-geometry||gr-line-arrows|none||||>>>||>>||||>>>||>>|||||||>||>>|>|>||>>>|> Let <\equation*> v(x)=\|H(z)\|\z. Then <\equation> D v(x)=\|H(w)\|D w(x). Note that 0> by construction. Moreover, (w)\|\L>> and W(\)>>W(\)>. We have from () that <\eqnarray*> >D vA*D u\x>|>|>(D vb)u(x)\x>>|>||>|>\|H(w)\|D wA*D u*\x>||>\|H(w)\|D wA*D w*\x>>||>|>\|H(w)\|\|D w\|\x.>>>> On the other hand, <\eqnarray*> ->(D vb)u(x)\x>||>\|H(w)\|D wb*u*\x>>||>>|>\|H(w)\|D wb*w\x>>|||CS>>|>(w)\|\|D w\||\>>\x>\|H(w)\|\|b\|\|w\|\x.>>>> Thus we have <\eqnarray*> >\|D H(w)\|\x>|>|>\|D H(w)\|\x>\|H(w)\|\|b\|\|w\|\x>>|||>>|\>\|D H(w)\|\x+|>|\>>\|H(w)\|\|w\|\x.>>>> Therefore <\equation*> >\|D H(w)\|\x\|>|\>>\|H(w)\|\|w\|\x|(E)>\>\|H(w)\|\|w\|\x. By Sobolev's Inequality <\equation*> >>(\)|>\C(n)(\)|>\\C(n)(w)w|L(\)|>. This inequality is independent of , so take \>. Then >>, (\)=\w-1>>, so <\equation*> w*H(w)=\\>. Then <\equation*> >\|w\|2>>\x>>\\C(n)\>\|w\|>\x. Thus we have <\equation> >\|>\(\C(n)\)>|>,\\1. Note that >=2n/(n-2)\2>. Let n/(n-2)>. Then iterate (): <\eqnarray*> =1>|>|\(\C(n))>>|=r>|>||>\(\C(n)r)\(\C(n)r)(\C(n)).>>>> By induction, <\eqnarray*> |>>|>|C(n))+\+>>(r)+>+>>>>||>|C(n))>(r)>.>>>> Let \> and obtain <\equation*> >|>=sup u\C|2|>. Label two common assumptions for this section <\description> Assume )>, )>. Also assume L(\)>, L(\)> for some n>. <\theorem> Assume (1), (2). Assume is a subsolution. Then for any ball \> and 1> <\equation*> supu\CR|L(B(y,2R))|>+k(R), where <\equation*> k(R)=|\>+R and <\equation*> C=Cn,|\>,\|\\|,\. <\theorem> Assume (1), (2). If is a (\)> and 0> in a ball \>, then <\equation*> R(B(y,2R))|>\Cinfu+k(R) for every p\n/(n-2)> with and as before. Now, let us consider the consequences of Theorem 1 and 2. <\theorem> Assume (1), (2). Assume is a > solution with 0>. Then <\equation*> supu\Cinfu+k(R). <\theorem> Assume (1), (2) and )>. Assume > connected. Suppose is a > subsolution. If for some ball \>, we have <\equation*> supu=sup>u, then . 
<\proof> Suppose >u>. Also suppose \> and u=M>. Let , then 0> (i.e. supersolution) and 0>. Apply weak Harnack inequality with : <\equation*> R(M-u)\x\Cinf(M-u)=0. > is open. Even though is not continuous, it is still true that is relatively closed in >. Then > since > is connected. <\theorem> Assume (1), (2). Assume W> solves . Then is locally Hölder continuous and for any ball =B(y,R)\\> and R\R>. Then <\equation*> oscu\C*R>R>sup>\|u\|+k. Here, and are as before and =a(n,\/\,\,R,q)>. <\proof> To avoid complications work with the simpler setting <\equation*> L u=div(A*D u)=0, i.e. , . Assume without loss R/4>. Let <\eqnarray*> \sup>\|u\|,>||>|\sup>u,>||\inf>u,>>|\sup>u,>||\inf>u.>>>> Let (R)\osc>u=M-m>. Observe that -u\0> on > and -u)=0>. Similarly, \0> on > and )=0>. Thus, we can apply the weak Harnack inequality with to obtain <\equation*> R>(M-u)\x\Cinf>(M-u)=C(M-M). Likewise, <\equation*> R>(u-m)\x\Cinf>(u-m)=C(m-m). Add both inequalities to obtain <\equation*> >>(M-m)\x=C(M-m)\C-m)|\>>u>--m)|\>>u>. Rewrite as <\equation*> \(R)\\\(4R) for some \1>. Fix R>. Choose such that <\equation*> >R\r\>R. Observe that (R)> is non-decreasing since (r)=sup>u-inf>u>. Therefore <\eqnarray*> (r)>|>|>R>>||>|\(R).>>||>|>\(R),>>>> where we used <\equation*> >\>\>, therefore <\eqnarray*> log(r/R)>|>|>|m\-log(r/R)/log 4>|>|>>> General set-up: <\equation*> I[u]=>F(D u(x))\x. Here, we have \\>, 1>. \\n>>. Minimize over \>, where > is a class of admissible functions. <\example> Let > be open and bounded and \\>, \\> given, <\equation*> I[u]=>\|D u\|-g*u\x and =W(\)>. The terms have the following meanings: <\description> >>Represents the strain energy in a membrane. >Is the work done by the applied force. General principles: <\enumerate> Is >I[u]\-\>? Is >I[u]=min>I[u]>? (This will be resolved by the due to Hilbert.) To show 1.): Suppose L(\)>. Then <\eqnarray*> >g*u*\x>|>||>|>>>||>|\|2>+>|2>.>>>> By the Sobolev Inequality, <\equation*> >>|>\C(n)|>. 
Moreover, >\2> and <\eqnarray*> |>>||>>|>>|>\|\\|>>||>|)|>.>>>> Then <\eqnarray*> ||>\|D u\|\x->g u*\x>>||>||2>-\C|2>+>|2>>>||>||2>->|2>>>|||(\)>>||2>->|2>,>>>> where the step )> uses the Sobolev inequality again, with a suitable > chosen. This is called a . In particular, <\equation*> infI[u]\->|2>\-\. Since -\>, there is some sequence > such that ]\inf I[u]>.\ on }>: <\eqnarray*> ||>\|D u\|\x->g*u*\x>>||>|>\|D u\|+\|u\|\x+>\|g\|\x>>||||2>+|2>.>>>> By coercivity, we have <\eqnarray*> |W(\)|2>>|>|]|\>>+>|2>|\>>,>>>> where term > is uniformly bounded because ]\inf>. We could say ]\inf+1>. The main problem is: We cann only assert that there is a converging subsequence. That is, >\u> in (\)>, where we relabel the subsequence >> as >. <\theorem> is weakly lower semicontinuous. That is, if \v>, then <\equation*> I[v]\liminf\>I[v]. Assuming the theorem, we see that is a minimizer. Indeed, <\equation*> I[u]|>liminf\>I[u]=inf\>I[v]\I[u]. is also strictly convex> is a minimizer: <\equation*> I+v|2>\(I[v]+I[v]) with equality only if =\v> for some \\>. <\proof> Assume two minimizers \\u>. Then <\equation*> I+u|2>\I[u]+I[u]=min\>I[v], which contradicts the definition of the minimum. <\theorem> Assume n>\\> is and 0>. Then <\equation*> I[u]=>F(D u(x))\x is weakly lower semicontinuous in (\)> for p\\>. <\proof> From homework, we know that \>F(A)> where > is an increasing sequence of piecewise affine approximations. Since > is piecewise affine, if <\eqnarray*> >|>|(\)>>>>|>|>|(\)>>,>>>> we have <\equation*> >F(D u)\x\>F(D u)\x. Thus, <\eqnarray*> >F(D u)\x>||\>>F(D u)\x>>| \>|>|\>>F(D u)\x>>|||\>I[u.>>>> Now let \>, and use the monotone convergence theorem to find <\equation*> I[u]=>f(D u)\x\liminf\>I[u]. Suppose is as given in this picture: |gr-frame|>|gr-geometry||gr-line-arrows|none||||>>>||>>||||>>>||>>||||||||||||>|>|>|>|>||>||>||>||>||>>>|.> Consider (x)=f(k*x)>, 1>, [0,1]>. This just makes oscillate faster. We then know that <\equation*> g|\>|L>>\a+(1-\)b. 
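The weak-* limit just computed can be checked numerically. A sketch with my own choices (step profile, $\lambda=1/4$, $a=0$, $b=1$, $k=1000$): averages of $g_k(x)=g(kx)$ converge to $\lambda a+(1-\lambda)b$, while averages of a nonlinear $F(g_k)$ converge to $\lambda F(a)+(1-\lambda)F(b)$, which differs from $F(\lambda a+(1-\lambda)b)$.

```python
import numpy as np

lam, a, b = 0.25, 0.0, 1.0

def g(x):
    """1-periodic step: value a on a fraction lam of each period, b on the rest."""
    return np.where((x % 1.0) < lam, a, b)

x = np.linspace(0.0, 1.0, 400001)
gk = g(1000.0 * x)                 # g_k with k = 1000: rapid oscillation
F = lambda s: s ** 2               # a nonlinear (here convex) function

avg_g = gk.mean()                  # weak limit: lam*a + (1-lam)*b = 0.75
avg_Fg = F(gk).mean()              # weak limit of F(g_k): lam*F(a) + (1-lam)*F(b)
F_of_avg = F(avg_g)                # F of the weak limit: a strictly smaller number
```

This is exactly the point made next in the notes: weak limits do not commute with nonlinear functions, and for convex $F$ the gap has the favourable sign ($\liminf F(g_k) \ge F(\lim g_k)$ in the weak sense).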
Suppose is a nonlinear function. Consider the sequence <\eqnarray*> (x)>||(x))>>|||| g(x)=a,>>|| g(x)=b.>>>>>>>>> Then <\equation*> G\G=\F(a)+(1-\)F(b). But then in general <\eqnarray*> ||>lim> F(g)\F(>lim> g)>>|||a+(1-\)b)>>>> However if is , we do have an <\equation*> F(g)\>lim>F(g). Fix , that is \\>, write for \>. Let W(\)>, consider . If is a critical point >(0)=0>. <\equation*> i(t)=|\t>>F(D u+t*D v)\x=>D F(D u+t*D v)\D v \x. So, <\equation> 0=i(0)=>D F(D u)\D v \x. This is the weak form of the Euler-Lagrange equations <\eqnarray*> || \,>>||| \\.>>>> With index notation <\equation*> i(t)=>F|\z>(D u+t*D v)\v|\x> \x. If is a minimum, (0)\0>. <\equation*> i(t)=>F|\z\z>(D u+t*D v)\v|\x>*v|\x> \x Thus, <\equation> 0\>F|\z\z>(D u)\v|\x>*v|\x> \x=>D vDF(D u)*D v \x. A useful family of test functions: Consider <\equation*> \(s)=||>|>>||s\1>>||s\2>>|>|>>|>|>>>>> Fix \\> and \C>(\)>. Consider <\equation*> v>(x)=\\(x)\|\>|\>)>, where the term )> oscillates rapidly in the direction >. <\equation*> v>|\x>=\|\x>\\|\>|\>)>+(x)\\|\>\|\>. Therefore, <\equation*> v>|\x>*v>|\x>=\(x)\\|\>\\+O(\)=\\\+O(\). Substitute in () and pass to limit <\equation*> 0\>\(x)\F|\z\z>(D u)\\x. Since > is arbitrary, we have <\equation*> \DF(D u)\\0,\\\. So, is convex>() is an elliptic PDE. <\theorem> Assume . Then is w.l.s.c.> is convex in (\)> for p\\>. <\proof> Fix \> and suppose =Q=[0,1]>. Let x>. For every C>(\)>, we have <\equation*> I[u]=>F(z) \x=F(z)\>F(z+D v)\x. This is all we have to prove, because we may choose smooth functions to find DF(z)\\0>. For every divide into subcubes of side length >. Let > denote the center of cube >, where l\2>. |gr-frame|>|gr-geometry|||||>||>||>||>||>||>||>||>|>||>>>|> Define a function > as follows: <\equation*> u(x)=>v(2(x-x))+u(x) for in >. <\equation*> D u(x)=D v(2(x-x))+z for in >. Thus, \D u=z>. Since liminf\>I[u]>, we have <\eqnarray*> |>|\>>>F(z+D v(2(x-x))) \x>>|||\>2>F(z+D v(2(x-x))) \x()>>|||>F(z+D v)\x.>>>> Typical example: \\\\>. 
[Figure: a deformation $u$ mapping a reference domain $\Omega\subset\mathbb{R}^{n}$ to $u(\Omega)\subset\mathbb{R}^{m}$.] Typically, $F(Du)$ is built from the strain $Du^{T}Du$ (cf. Ch. 3, little Evans). Let <\equation*> \mathcal{A}=\{u\in W^{1,p}(\Omega,\mathbb{R}^{m}):u=g \text{ on } \partial\Omega\}, $1\le p\le\infty$, $\Omega$ open, bounded, <\equation*> I[u]=\int_{\Omega}F(Du(x))\,dx with $F:\mathbb{R}^{m\times n}\to\mathbb{R}$ continuous. Always assume $F$ coercive, that is <\equation*> F(A)\ge c_{1}|A|^{p}-c_{2}. The main issue is the weak lower semicontinuity of $I$. What `structural assumptions' must $F$ satisfy? If $m=1$, we know that $F$ should be convex. This is sufficient for all $m,n$. Is this necessary? Convexity is bad because it contradicts material frame indifference. Let's replicate a calculation already done: let $i(t)=I[u+tv]$, $t\in[-1,1]$. If $u$ is a minimizer, $i'(0)=0$ and $i''(0)\ge 0$. <\equation*> i(t)=\int_{\Omega}F(Du+t\,Dv)\,dx, \qquad \frac{di}{dt}=\int_{\Omega}\frac{\partial F}{\partial A_{ij}}(Du+t\,Dv)\,\frac{\partial v_{i}}{\partial x_{j}}\,dx (use the summation convention). So <\equation*> 0=i'(0)=\int_{\Omega}\frac{\partial F}{\partial A_{ij}}(Du)\,\frac{\partial v_{i}}{\partial x_{j}}\,dx. This is the weak form of the Euler-Lagrange equations <\equation> -\frac{\partial}{\partial x_{j}}\left(\frac{\partial F}{\partial A_{ij}}(Du)\right)=0, \quad i=1,\dots,m, so we have a system. Now consider $i''(0)\ge 0$: <\equation> 0\le i''(0)=\int_{\Omega}\frac{\partial^{2}F}{\partial A_{ij}\,\partial A_{kl}}(Du)\,\frac{\partial v_{i}}{\partial x_{j}}\,\frac{\partial v_{k}}{\partial x_{l}}\,dx. As before, consider oscillatory test functions: [Figure: the periodic sawtooth profile $\rho(s)$ with slope $\pm1$.] Fix $\xi\in\mathbb{R}^{m}$, $\eta\in\mathbb{R}^{n}$, $\zeta\in C_{c}^{\infty}(\Omega;\mathbb{R})$. <\equation*> v^{\varepsilon}(x)=\varepsilon\,\zeta(x)\,\rho\!\left(\frac{x\cdot\eta}{\varepsilon}\right)\xi. Then <\equation*> \frac{\partial v^{\varepsilon}_{i}}{\partial x_{j}}=\varepsilon\,\frac{\partial\zeta}{\partial x_{j}}\,\rho\!\left(\frac{x\cdot\eta}{\varepsilon}\right)\xi_{i}+\zeta(x)\,\rho'\!\left(\frac{x\cdot\eta}{\varepsilon}\right)\xi_{i}\eta_{j}. Thus <\equation*> \frac{\partial v^{\varepsilon}_{i}}{\partial x_{j}}\,\frac{\partial v^{\varepsilon}_{k}}{\partial x_{l}}=\zeta^{2}(x)\,\xi_{i}\eta_{j}\xi_{k}\eta_{l}+O(\varepsilon), using $(\rho')^{2}=1$. Substitute in the second variation and let $\varepsilon\to0$: <\equation*> 0\le\int_{\Omega}\zeta^{2}(x)\,\frac{\partial^{2}F}{\partial A_{ij}\,\partial A_{kl}}(Du)\,\xi_{i}\eta_{j}\xi_{k}\eta_{l}\,dx. Since $\zeta$ is arbitrary, this suggests that $F$ should satisfy <\equation> (\xi\otimes\eta):D^{2}F(Du)\,(\xi\otimes\eta)\ge0 for every $\xi\in\mathbb{R}^{m}$, $\eta\in\mathbb{R}^{n}$. Here $(\xi\otimes\eta)_{ij}=\xi_{i}\eta_{j}$ is a rank-one matrix. $F$ is convex if $B:D^{2}F(A)\,B\ge0$ for every $B\in\mathbb{R}^{m\times n}$; however, we only need $B$ to be rank one in this condition, which is known as the Legendre-Hadamard condition. It ensures the ellipticity of the Euler-Lagrange system. Thus, we see that if $I$ is w.l.s.c. then $F$ should be rank-one convex. Q: Is that sufficient? <\definition> $F$ is quasiconvex (QC) if <\equation*> F(A)\le\int_{Q}F(A+Dv(x))\,dx for every $A\in\mathbb{R}^{m\times n}$ and $v\in C_{c}^{\infty}(Q,\mathbb{R}^{m})$. Here $Q$ is the unit cube in $\mathbb{R}^{n}$.
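The rank-one structure singled out by the Legendre-Hadamard condition can be probed numerically. A sketch with my own choices (random $A$, $\xi$, $\eta$ with a fixed seed, $n=m=2$): along a rank-one line $t\mapsto A+t\,\xi\otimes\eta$ the determinant is affine (its second difference vanishes, foreshadowing that minors are null Lagrangians), while the convex integrand $F(A)=|A|^2$ has strictly positive second difference.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
xi, eta = rng.standard_normal(2), rng.standard_normal(2)
line = lambda t: A + t * np.outer(xi, eta)      # rank-one line through A

# det(A + t B) = det A + t (cof A : B) + t^2 det B, and det B = 0 for rank-one B,
# so t -> det(line(t)) is affine: its centered second difference is 0
det_second_diff = (np.linalg.det(line(1.0)) - 2.0 * np.linalg.det(line(0.0))
                   + np.linalg.det(line(-1.0)))

frob = lambda t: np.sum(line(t) ** 2)           # F(A) = |A|^2, convex
frob_second_diff = frob(1.0) - 2.0 * frob(0.0) + frob(-1.0)   # = 2|xi|^2 |eta|^2 > 0
```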
[Figure: the unit cube $Q$ with an affine deformation of its boundary.] Subject the boundary of a cube to an affine deformation $u(x)=Ax$. Then $u+v$ for $v\in C_{c}^{\infty}(Q)$ satisfies the same boundary condition on $\partial Q$. <\equation*> I[u]=\int_{Q}F(Du)\,dx=F(A). Thus (QC) implies $I[u]\le I[u+v]$ for any $v\in C_{c}^{\infty}(Q)$: the affine deformation is the best. Basic examples of quasiconvex functions: <\enumerate> convex functions of $A$, or a minor of $A$. <\definition> $F$ is polyconvex if $F$ is a convex function of the minors of $A$. What's known: <\theorem> Assume $F\in C^{\infty}$ satisfies the growth condition <\equation> |F(A)|\le C(1+|A|^{p}) with some $C>0$. Then $I$ is w.l.s.c. $\Longleftrightarrow$ $F$ is QC. <\remark> Convex $\Rightarrow$ Polyconvex $\Rightarrow$ Quasiconvex $\Rightarrow$ Rank-one-convex (RC). That RC does not imply QC is known for $m\ge3$, $n\ge2$ (Šverák, '92), but not known for $m=2$, $n\ge2$. We'll prove that if $u_{k}\rightharpoonup u$ in $W^{1,p}$ for $p>n$, then $\det(Du_{k})\rightharpoonup\det(Du)$. (Warning) If $A_{k}\in L^{p}(\Omega,\mathbb{R}^{n\times n})$ and $A_{k}\rightharpoonup A$, it is not true that $\det(A_{k})\rightharpoonup\det(A)$: the gradient structure matters. <\note> ``$\Rightarrow$'' is straightforward. Simply choose $u_{k}=Ax+v_{k}(x)$ with $v_{k}$ a periodic rescaling of $v$. Assume $F$ is QC and satisfies the growth condition. <\lemma> There is a $C>0$ such that <\equation*> |DF(A)|\le C(1+|A|^{p-1}). <\proof> Fix $A\in\mathbb{R}^{m\times n}$ and a rank-one matrix $\xi\otimes\eta$ with $\xi$, $\eta$ coordinate vectors in $\mathbb{R}^{m}$ and $\mathbb{R}^{n}$. We know that QC $\Rightarrow$ RC, therefore the function <\equation*> f(t)=F(A+t(\xi\otimes\eta)) is convex. By homework, we know that $f$ is locally Lipschitz and <\equation*> |DF(A)(\xi\otimes\eta)|=|f'(0)|\le \frac{2}{r}\max_{[-r,r]}|f(t)|. Then <\eqnarray*> |f(t)|=|F(A+t(\xi\otimes\eta))|\le C(1+|A+t(\xi\otimes\eta)|^{p})\le C(1+(|A|+r)^{p}) for $|t|\le r$. Choose $r=1+|A|$ to find <\equation*> |f'(0)|\le C(1+|A|^{p-1}). <\proof> (of Theorem ) Assume $F$ is QC, show $I$ is w.l.s.c. <\description> (QC): <\equation*> \int_{Q}F(D(Ax))\,dx=F(A)\le\int_{Q}F(A+Dv(x))\,dx. If $u_{k}\rightharpoonup u$, then we must show <\equation*> \int_{\Omega}F(Du)\,dx\le\liminf_{k\to\infty}\int_{\Omega}F(Du_{k})\,dx. Subdivide the domain $\Omega$ into small cubes, freeze $Du$ at its average on each cube, and apply (QC) cube by cube: <\equation*> \int_{\Omega}F(Du)\,dx\approx\sum_{i}\int_{Q_{i}}F(\bar{A}_{i})\,dx\underset{(QC)}{\le}\int_{\Omega}F(Du_{k})\,dx+\text{errors}. 1) Assume $u_{k}\rightharpoonup u$ in $W^{1,p}(\Omega,\mathbb{R}^{m})$. Then <\equation*> \sup_{k}\|u_{k}\|_{W^{1,p}(\Omega,\mathbb{R}^{m})}<\infty by the uniform boundedness principle (Banach-Steinhaus). By considering a subsequence, we have <\equation*> u_{k}\to u in $L^{p}(\Omega,\mathbb{R}^{m})$ (cf.
Lieb & Loss). Define the measures

<\equation*>
  \mu_k(\mathrm{d}x)=(1+|D u_k|^p+|D u|^p)\,\mathrm{d}x.
</equation*>

By the uniform bounds, \sup_k\mu_k(\Omega)<\infty. Then there is a subsequence \mu_k\overset{*}{\rightharpoonup}\mu with

<\equation*>
  \mu(\Omega)\leq\liminf_{k\to\infty}\mu_k(\Omega).
</equation*>

Suppose H is a hyperplane perpendicular to a coordinate vector. Then \mu(\Omega\cap H)>0 for at most countably many such hyperplanes. [Figure: a dyadic decomposition of \Omega.] By translating the axes if necessary, we can assert that if \mathcal{D}_l denotes the dyadic lattice of cubes with side length 2^{-l}, then \mu(\partial Q)=0 for every Q\in\mathcal{D}_l and every l. Let G_l denote the piecewise constant function with value

<\equation*>
  \fint_{Q}D u(x)\,\mathrm{d}x
</equation*>

on the cube Q\in\mathcal{D}_l. By Lebesgue's differentiation theorem, G_l\to D u a.e. and in L^p(\Omega,\mathbb{R}^{m\times n}) as l\to\infty. Then

<\equation*>
  \int_{\Omega}|F(G_l)-F(D u)|\,\mathrm{d}x\to 0
</equation*>

by dominated convergence (using the growth bound (4)).

Step 2) Fix \delta>0, choose \Omega'\subset\subset\Omega such that

<\equation*>
  \int_{\Omega\setminus\Omega'}F(D u)\,\mathrm{d}x\leq\delta.
</equation*>

Choose l so large that

<\eqnarray*>
  \|D u-G_l\|_{L^p(\Omega')} &\leq& \delta, \\
  \|F(D u)-F(G_l)\|_{L^1(\Omega')} &\leq& \delta.
</eqnarray*>

Preview: where is this proof going?

<\eqnarray*>
  I[u_k] &\geq& \int_{\Omega'}F(D u_k)\,\mathrm{d}x \\
  &=& \int_{\Omega'}F(D u+(D u_k-D u))\,\mathrm{d}x \\
  &\geq& \int_{\Omega'}F(G_l+(D u_k-D u))\,\mathrm{d}x+E_1 \\
  &\geq& \int_{\Omega'}F(G_l)\,\mathrm{d}x+E_1+E_2\qquad\text{(by QC on each cube)} \\
  &\geq& I[u]+E_1+E_2+E_3,
</eqnarray*>

with the error terms E_i controlled by \delta. (Let's not complete this proof.)

<subsection|Null Lagrangians, Determinants>

Consider

<\equation*>
  I[u]=\int_{\Omega}F(D u)\,\mathrm{d}x
</equation*>

for u:\Omega\to\mathbb{R}^m, F:\mathbb{R}^{m\times n}\to\mathbb{R}. The Euler-Lagrange equations read

<\equation>
  \frac{\partial}{\partial x_j}\,\frac{\partial F}{\partial A_{ij}}(D u)=0,\qquad i=1,\ldots,m.
</equation>

<\definition>
  F is a null Lagrangian if (5) holds for every smooth u.
</definition>

<\theorem>
  F(A)=\det A (with m=n) is a null Lagrangian. The associated Euler-Lagrange equation is

  <\equation>
    \frac{\partial}{\partial x_j}(\mathrm{cof}\,D u)_{ij}=0,\qquad i=1,\ldots,n.
  </equation>
</theorem>

<\proof>
  <\enumerate-numeric>
    <item>A matrix identity:

    <\equation*>
      \frac{\partial(\det A)}{\partial A_{ij}}=(\mathrm{cof}\,A)_{ij}.
    </equation*>

    <item>If A=D u, then (6) holds.
  </enumerate-numeric>

  Here (\mathrm{cof}\,A)_{ij}=(-1)^{i+j} times the (n-1)\times(n-1) determinant of A without row i, column j. Algebra identity:

  <\equation*>
    A\,(\mathrm{cof}\,A)^{T}=(\det A)\,\mathrm{Id}.
  </equation*>

  Let B denote (\mathrm{cof}\,A)^{T}, so that

  <\equation>
    \det A\;\delta_{ij}=A_{ik}B_{kj}.
  </equation>

  Claim 1 follows from (7), since (\mathrm{cof}\,A)_{ij} depends only on the entries A_{lk} with l\neq i, k\neq j. Now take A=A(x) and differentiate both sides of (7) with respect to x_j, summing over repeated indices. On the left,

  <\equation*>
    \frac{\partial}{\partial x_j}(\det A)\,\delta_{ij}=\frac{\partial}{\partial x_i}(\det A)=\frac{\partial(\det A)}{\partial A_{lk}}\,\frac{\partial A_{lk}}{\partial x_i}=B_{kl}\,\frac{\partial A_{lk}}{\partial x_i},
  </equation*>

  while on the right,

  <\equation*>
    \frac{\partial}{\partial x_j}(A_{ik}B_{kj})=\frac{\partial A_{ik}}{\partial x_j}\,B_{kj}+A_{ik}\,\frac{\partial B_{kj}}{\partial x_j}.
  </equation*>

  The first terms on the two sides are typically not the same for arbitrary matrix fields A(x).
However, if A=D u, i.e. A_{ik}=\partial u_i/\partial x_k, then by the symmetry of second derivatives

<\equation*>
  B_{kj}\,\frac{\partial A_{ik}}{\partial x_j}=B_{kj}\,\frac{\partial^2 u_i}{\partial x_k\,\partial x_j}=B_{kj}\,\frac{\partial^2 u_i}{\partial x_j\,\partial x_k}=B_{kl}\,\frac{\partial A_{il}}{\partial x_k},
</equation*>

so the first terms match. Comparing terms, we have

<\equation*>
  A_{ik}\,\frac{\partial B_{kj}}{\partial x_j}=0,\qquad i=1,\ldots,n,
</equation*>

i.e. \mathrm{div}(\mathrm{cof}\,D u)=0 (the row-wise divergence of the n\times n matrix field \mathrm{cof}\,D u). If D u is invertible, we may cancel A and obtain (6) as desired. If not, let u^{\varepsilon}=u+\varepsilon x. Then D u^{\varepsilon}=D u+\varepsilon\,\mathrm{Id} is invertible for arbitrarily small \varepsilon>0 and

<\equation*>
  \mathrm{div}(\mathrm{cof}(D u^{\varepsilon}))=0.
</equation*>

Now let \varepsilon\to 0.
</proof>

<\theorem>
  Suppose u_k\rightharpoonup u in W^{1,p}(\Omega,\mathbb{R}^n), n<p<\infty. Then \det(D u_k)\rightharpoonup\det(D u) in L^{p/n}(\Omega).
</theorem>

<\proof>
  The main observation is that \det D u may be written as a divergence:

  <\eqnarray*>
    n\det D u &=& \mathrm{tr}\!\left((D u)(\mathrm{cof}\,D u)^{T}\right) \\
    &=& \frac{\partial u_i}{\partial x_j}\,(\mathrm{cof}\,D u)_{ij} \\
    &=& \frac{\partial}{\partial x_j}\!\left(u_i\,(\mathrm{cof}\,D u)_{ij}\right),
  </eqnarray*>

  using \mathrm{div}(\mathrm{cof}\,D u)=0 in the last step. Note that above u_i is the i-th component of u, while below and in the statement, u_k means the k-th function of the sequence. It suffices to show that

  <\equation*>
    \int_{\Omega}\varphi(x)\det(D u_k)\,\mathrm{d}x\to\int_{\Omega}\varphi(x)\det(D u)\,\mathrm{d}x
  </equation*>

  for every \varphi\in C_c^{\infty}(\Omega). But by step 1, we have

  <\equation*>
    \int_{\Omega}\varphi(x)\det(D u_k)\,\mathrm{d}x=-\frac{1}{n}\int_{\Omega}\frac{\partial\varphi}{\partial x_j}\,(u_k)_i\,(\mathrm{cof}\,D u_k)_{ij}\,\mathrm{d}x.
  </equation*>

  By Morrey's inequality, (u_k) is uniformly bounded in C^{0,1-n/p}(\Omega,\mathbb{R}^n). By the Arzelà-Ascoli theorem, we may now extract a subsequence that converges uniformly. It must converge to u. Note that if f_k\to f uniformly and g_k\rightharpoonup g in L^1(\Omega), then

  <\equation*>
    \int f_k\,g_k\to\int f\,g.
  </equation*>

  Now use induction on the dimension of the minors. Alternative: differential forms calculation:

  <\equation*>
    \int_{\Omega}\varphi(x)\det(D u)\,\mathrm{d}x=\int_{\Omega}\varphi\;\mathrm{d}u^{1}\wedge\mathrm{d}u^{2}\wedge\cdots\wedge\mathrm{d}u^{n}=\int_{\Omega}\varphi\;\mathrm{d}\!\left(u^{1}\,\mathrm{d}u^{2}\wedge\cdots\wedge\mathrm{d}u^{n}\right)
  </equation*>

  (stopped in mid-deduction; we're supposed to do this by ourselves...)
</proof>

<\theorem>
  (Brouwer) Suppose u:\bar B\to\bar B is continuous. Then there is some x\in\bar B such that u(x)=x.
</theorem>

<\theorem>
  (No Retract) There is no continuous map w:\bar B\to\partial B such that w=\mathrm{Id} on \partial B.
</theorem>

<\proof>
  (of Brouwer's theorem) Assume u:\bar B\to\bar B does not have a fixed point. Let v(x)=x-u(x). Then v\neq 0 and, by compactness, |v| is bounded away from 0. Let w(x) be the point where the ray from u(x) through x meets \partial B. Then w is continuous, and

  <\equation*>
    w:\bar B\to\partial B,\qquad w=\mathrm{Id}\text{ on }\partial B,
  </equation*>

  contradicts the No Retract theorem.
</proof>

<\proof>
  (of the No Retract theorem) Step 1. Assume first that u is a smooth (C^{\infty}) map from \bar B\to\partial B with u=\mathrm{Id} on \partial B. Let w be the identity map \bar B\to\bar B. Then u=w on \partial B. But then, since the determinant is a null Lagrangian (so the integral depends only on boundary values), we have

  <\equation>
    \int_{B}\det(D u)\,\mathrm{d}x=\int_{B}\det(D w)\,\mathrm{d}x=|B|.
  </equation>

  However, |u(x)|^2=1 for all x\in\bar B. That means

  <\equation*>
    u_i u_i=1\;\Rightarrow\;\frac{\partial u_i}{\partial x_j}\,u_i=0,\qquad j=1,\ldots,n.
  </equation*>
In matrix notation, this is

<\equation*>
  (D u)^{T}u=0.
</equation*>

Since |u|=1, u\neq 0, so 0 is an eigenvalue of (D u)^{T} and \det(D u)\equiv 0. This contradicts (8). Step 2. Suppose u:\bar B\to\partial B is a continuous retract onto \partial B. Extend u:\mathbb{R}^n\to\mathbb{R}^n by setting u(x)=x outside B. Note that |u(x)|\geq 1 for all x. Let \eta^{\varepsilon} be a positive C^{\infty}, radial mollifier, and consider

<\equation*>
  u^{\varepsilon}=\eta^{\varepsilon}*u.
</equation*>

For \varepsilon sufficiently small, |u^{\varepsilon}(x)|\geq 1/2. Since \eta^{\varepsilon} is radial, we also have u^{\varepsilon}(x)=x for |x|\geq 2. Set

<\equation*>
  w^{\varepsilon}(x)=\frac{u^{\varepsilon}(2x)}{|u^{\varepsilon}(2x)|}
</equation*>

to obtain a smooth retract onto \partial B, contradicting Step 1.
</proof>

<\remark>
  This is closely tied to the notion of the degree of a map. Given u:\bar B\to\mathbb{R}^n smooth, we can define

  <\equation*>
    \deg(u)=\frac{1}{|B|}\int_{B}\det(D u)\,\mathrm{d}x.
  </equation*>

  Note that if u=\mathrm{Id} on \partial B, then we have

  <\equation*>
    \deg(u)=1=\deg(\mathrm{Id}).
  </equation*>
</remark>

This allows us to define the degree of Sobolev mappings. Suppose u\in W^{1,p}(B,\mathbb{R}^n) with n\leq p\leq\infty. Here,

<\equation*>
  \det(D u)=\sum_{\sigma}(-1)^{\sigma}\,\frac{\partial u_1}{\partial x_{\sigma(1)}}\cdots\frac{\partial u_n}{\partial x_{\sigma(n)}}.
</equation*>

So by Hölder's inequality, \det(D u)\in L^{p/n}\subset L^1, and we can define \deg(u). It turns out that we can always define the degree of W^{1,n} maps by approximation. Loosely,

<\enumerate-numeric>
  <item>Mollify: u^{\varepsilon}=u*\eta^{\varepsilon}.

  <item>Show that if u^{\varepsilon} is smooth, then \deg(u^{\varepsilon}) is an integer, and \deg(u^{\varepsilon}) converges as \varepsilon\to 0.

  <item>\deg(u^{\varepsilon}) is independent of \varepsilon for \varepsilon small enough.
</enumerate-numeric>

(Nirenberg, Courant Lecture Notes.) If we know that the degree is defined for continuous maps, then for p>n and u\in W^{1,p}(B;\mathbb{R}^n) we know u\in C(\bar B;\mathbb{R}^n) by Morrey's inequality, so \deg(u) is well-defined. What happens if p=n? Harmonic maps/liquid crystals: u:\Omega\to S^2. (Brezis, Nirenberg) One doesn't need u to be continuous to define \deg(u). Sobolev embedding:

<\equation*>
  W^{1,p}\subset\begin{cases}C^{0,1-n/p}, & n<p\leq\infty,\\ \mathrm{VMO}, & p=n.\end{cases}
</equation*>

Here

<\equation*>
  [u]_{\mathrm{BMO}}=\sup_{B}\fint_{B}\Big|u-\fint_{B}u\Big|,
</equation*>

and vanishing mean oscillation (VMO) means the oscillation over small balls tends to 0.

<\theorem>
  W^{1,n}\subset\mathrm{VMO}. (?)
</theorem>

If u_k\in W^{1,p}(\Omega;\mathbb{R}^n) with p>n, and u_k\rightharpoonup u, then we also have

<\equation*>
  \int_{\Omega}\varphi\det(D u_k)\,\mathrm{d}x\to\int_{\Omega}\varphi\det(D u)\,\mathrm{d}x,
</equation*>

i.e. u\mapsto\det(D u) is weakly continuous. This is still true if p=n, provided we know that \det(D u_k)\geq 0. (Müller, Bull. AMS 1987.)

<section|Navier-Stokes Equations>

We will briefly write (NSE) for:

<\eqnarray*>
  \partial_t u+u\cdot\nabla u &=& -\nabla p+\nu\Delta u, \\
  \nabla\cdot u &=& 0, \\
  u(x,0) &=& u_0(x),\qquad\nabla\cdot u_0=0,
</eqnarray*>

for (x,t)\in\mathbb{R}^n\times(0,\infty). Here

<\equation*>
  (u\cdot\nabla u)_i=u_j\,\frac{\partial u_i}{\partial x_j},\qquad\partial_t u+u\cdot\nabla u=\frac{D u}{D t}
</equation*>

is the material derivative. The right-hand side has the parameter \nu (the viscosity):

<\equation*>
  \partial_t u+u\cdot\nabla u=-\nabla p+\nu\Delta u.
</equation*>

If \nu=0, we have Euler's equations. (Newton's law for fluids.) If \nu>0, we may as well rescale and assume \nu=1.
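As a numerical illustration of the degree formula deg(u) = (1/|B|)\int_B\det(D u)\,dx from the remark above, here is a Monte Carlo sketch for the planar map z\mapsto z^2 (viewing \mathbb{R}^2\cong\mathbb{C}), which wraps the disk around the origin twice. The map, sample size, and seed are my own test choices, not from the notes.

```python
import numpy as np

# deg(u) = (1/|B|) ∫_B det(Du) dx, estimated by Monte Carlo on the unit
# disk B. For u(z) = z^2, i.e. u(x,y) = (x^2 - y^2, 2xy), the real
# Jacobian is [[2x, -2y], [2y, 2x]] with det = 4(x^2 + y^2), and deg = 2.
rng = np.random.default_rng(3)
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 2))
inside = (pts ** 2).sum(axis=1) <= 1.0        # keep samples in the disk
x, y = pts[inside, 0], pts[inside, 1]

det_Du = 4.0 * (x ** 2 + y ** 2)              # det Du for u(z) = z^2
deg = det_Du.mean()                           # mean over B = (1/|B|)*integral
assert abs(deg - 2.0) < 0.01

# Sanity check: the identity map has det(Du) = 1, hence degree 1.
assert abs(np.ones_like(det_Du).mean() - 1.0) < 1e-12
```

The averaged determinant counts how many times the image covers a generic point, which is why it is an integer for maps that agree with a nice boundary map.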
\nabla\cdot u=0 is simply conservation of mass: if the fluid had density \rho, we would have the balance law

<\equation*>
  \partial_t\rho+\mathrm{div}(\rho u)=\partial_t\rho+(\nabla\cdot u)\rho+u\cdot\nabla\rho=0.
</equation*>

If we further assume

<\equation*>
  \partial_t\rho+u\cdot\nabla\rho=0,
</equation*>

that is,

<\equation*>
  \frac{D\rho}{D t}=0,
</equation*>

then we have \rho\,\nabla\cdot u=0, i.e. \nabla\cdot u=0. Compare with Burgers equation:

<\equation*>
  \partial_t u+u\,\partial_x u=0,\qquad x\in\mathbb{R},\;t>0.
</equation*>

It is clear that singularities form for most smooth initial data. The pressure has the role of maintaining incompressibility. Take the divergence of (NSE1):

<\equation*>
  \nabla\cdot(\partial_t u+u\cdot\nabla u)=\nabla\cdot(-\nabla p+\nu\Delta u).
</equation*>

Since \nabla\cdot u=0, this gives

<\equation*>
  \mathrm{tr}(\nabla u\,\nabla u)=\frac{\partial u_j}{\partial x_i}\,\frac{\partial u_i}{\partial x_j}=-\Delta p.
</equation*>

Thus p is determined by u (up to a constant) through a Poisson equation. Flows are steady if they don't depend on t. In this case we have

<\eqnarray*>
  u\cdot\nabla u+\nabla p &=& \nu\Delta u, \\
  \nabla\cdot u &=& 0.
</eqnarray*>

If \nu=0, we have ideal (i.e. no viscosity), steady flows:

<\equation*>
  u\cdot\nabla u+\nabla p=0,\quad\nabla\cdot u=0\;\Rightarrow\;u\cdot\nabla\!\left(\frac{|u|^2}{2}+p\right)=0,
</equation*>

so |u|^2/2+p=\text{const} along streamlines, which is called Bernoulli's theorem. [Figure: flow past an obstacle; where |u| is larger, p is smaller, and vice versa.] Let \omega=\mathrm{curl}\,u. This is a scalar when n=2. Vorticity equation:

<\eqnarray*>
  \partial_t\omega+\nabla\times(u\cdot\nabla u) &=& \nu\Delta\omega, \\
  \nabla\cdot u &=& 0, \\
  \nabla\times u &=& \omega.
</eqnarray*>

In 2-D, this is simply

<\equation*>
  \begin{cases}\partial_t\omega+u\cdot\nabla\omega=\nu\Delta\omega,\\ \nabla\cdot u=0,\\ \nabla\times u=\omega,\end{cases}
</equation*>

where the first equation is an advection-diffusion equation for \omega.

<subsection|Energy Inequality>

Assume \nu=1 and sufficient decay at infinity for simplicity. Dot the first equation of (NSE) with u:

<\equation*>
  \partial_t\frac{|u|^2}{2}+u\cdot\nabla\!\left(\frac{|u|^2}{2}+p\right)=\Delta\frac{|u|^2}{2}-|\nabla u|^2.
</equation*>

Integrate over \mathbb{R}^n:

<\equation*>
  \frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^n}\frac{|u|^2}{2}\,\mathrm{d}x=-\int_{\mathbb{R}^n}|\nabla u|^2\,\mathrm{d}x\leq 0,\qquad\text{so}\quad\|u(\cdot,t)\|_{L^2}\leq\|u_0\|_{L^2},
</equation*>

<\equation*>
  \int_0^{\infty}\!\!\int_{\mathbb{R}^n}|\nabla u|^2\,\mathrm{d}x\,\mathrm{d}t\leq\frac{1}{2}\|u_0\|_{L^2}^2.
</equation*>

<\theorem>
  (Leray; Hopf) For every divergence-free u_0\in L^2(\mathbb{R}^n), there exist distributional solutions u\in L^{\infty}((0,\infty),L^2(\mathbb{R}^n)) such that the energy inequalities hold.
</theorem>

Uniqueness is open (cf. Ladyzhenskaya). Hopf's paper is on the website, with Serrin's commentary.

<subsection|Existence through Hopf>

<\eqnarray*>
  \partial_t u+u\cdot\nabla u &=& -\nabla p+\Delta u, \\
  \nabla\cdot u &=& 0,
</eqnarray*>

with G\subset\mathbb{R}^n an open subset and Q=G\times(0,\infty) the space-time domain. Initial boundary value problem:

<\equation*>
  u(x,0)=u_0(x),\qquad\nabla\cdot u_0=0.
</equation*>

No-slip boundary conditions:

<\equation*>
  u(x,t)=0,\qquad x\in\partial G.
</equation*>

(Compare this to Euler's equation, where we only assume that there is no normal velocity.) Recall the example of a divergence-free vector field from the last final.
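The pressure equation above, -\Delta p=\mathrm{tr}(\nabla u\,\nabla u), comes from taking the divergence of the momentum equation and using \nabla\cdot u=0 to drop the remaining terms. A symbolic sanity check, on the classical Taylor-Green field (my choice of test field, not one from the notes):

```python
import sympy as sp

# For divergence-free u, div(u·∇u) = ∂_i u_j ∂_j u_i = tr(∇u ∇u), the
# source term in -Δp = tr(∇u ∇u). Checked on the Taylor-Green field
# u = (cos x sin y, -sin x cos y), which is divergence-free.
x, y = sp.symbols('x y')
u = sp.Matrix([sp.cos(x) * sp.sin(y), -sp.sin(x) * sp.cos(y)])
X = [x, y]

div_u = sum(sp.diff(u[i], X[i]) for i in range(2))
assert sp.simplify(div_u) == 0       # incompressibility

grad_u = sp.Matrix(2, 2, lambda i, j: sp.diff(u[i], X[j]))  # (∇u)_ij = ∂_j u_i
conv = sp.Matrix([sum(u[j] * sp.diff(u[i], X[j]) for j in range(2))
                  for i in range(2)])                       # (u·∇)u
div_conv = sum(sp.diff(conv[i], X[i]) for i in range(2))    # div of (u·∇)u
trace_term = (grad_u * grad_u).trace()                      # tr(∇u ∇u)
assert sp.simplify(div_conv - trace_term) == 0
```

For Taylor-Green both sides equal -\cos 2x-\cos 2y, so a compatible pressure is p=-(\cos 2x+\cos 2y)/4 up to a constant.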
[Figure: a divergence-free vector field on a domain; only the boundary-normal component is constrained.]

Observe that only the continuous boundary-normal field matters, not the (discontinuous) boundary-tangential field. We want to push the requirement \nabla\cdot u=0 into the function spaces. \nabla\cdot u=0 in the sense of distributions simply means

<\equation*>
  \int_G u\cdot\nabla\varphi\,\mathrm{d}x=0
</equation*>

for every \varphi\in C_c^{\infty}(G).

<subsubsection|Helmholtz projection>

Let \mathcal{G}=\overline{\{\nabla h:h\in C^{\infty}(G)\}\cap L^2(G,\mathbb{R}^n)}. \mathcal{G} is the space of gradients in L^2(G): if w\in\mathcal{G}, then there exist h_k\in C^{\infty}(G) such that \nabla h_k\to w in L^2(G,\mathbb{R}^n). Then

<\equation*>
  L^2(G,\mathbb{R}^n)=L^2_{\sigma}(G,\mathbb{R}^n)\oplus\mathcal{G},
</equation*>

an orthogonal decomposition (the Helmholtz decomposition; L^2_{\sigma} is defined below).

<subsubsection|Weak Formulation>

In all that follows, a\in C_c^{\infty}(Q,\mathbb{R}^n) is a divergence-free vector field. Using \nabla\cdot u=0,

<\equation*>
  \partial_t u+\underbrace{u\cdot\nabla u}_{=\nabla\cdot(u\otimes u)}=-\nabla p+\Delta u.
</equation*>

In coordinates,

<\equation*>
  \partial_t u_i+u_j\,\frac{\partial u_i}{\partial x_j}=-\frac{\partial p}{\partial x_i}+\frac{\partial^2 u_i}{\partial x_j\,\partial x_j},\qquad i=1,\ldots,n.
</equation*>

Take the inner product with a and integrate by parts:

<\equation*>
  (W1)\qquad-\int_Q\left[\partial_t a\cdot u+\nabla a:(u\otimes u)+\Delta a\cdot u\right]\mathrm{d}x\,\mathrm{d}t=0.
</equation*>

Here the nonlinear term was converted using

<\eqnarray*>
  \int_Q a_i\,u_j\,\frac{\partial u_i}{\partial x_j}\,\mathrm{d}x\,\mathrm{d}t &=& -\int_Q\frac{\partial}{\partial x_j}(a_i u_j)\,u_i\,\mathrm{d}x\,\mathrm{d}t \\
  &=& -\int_Q\frac{\partial a_i}{\partial x_j}\,u_j\,u_i\,\mathrm{d}x\,\mathrm{d}t-\int_Q a_i\,u_i\,\frac{\partial u_j}{\partial x_j}\,\mathrm{d}x\,\mathrm{d}t,
</eqnarray*>

and the last term vanishes since \nabla\cdot u=0. For the weak form, note that

<\equation*>
  \int_Q a\cdot\nabla p\,\mathrm{d}x\,\mathrm{d}t=-\int_Q(\mathrm{div}\,a)\,p\,\mathrm{d}x\,\mathrm{d}t=0
</equation*>

means we lose the pressure term. Also, recall that u\otimes u is the matrix with entries (u\otimes u)_{ij}=u_i u_j, and for A,B\in\mathbb{R}^{n\times n}, A:B=\mathrm{tr}(A B^{T}). Similarly, the weak form of \nabla\cdot u=0 is

<\equation*>
  (W2)\qquad\int_Q u\cdot\nabla\varphi\,\mathrm{d}x\,\mathrm{d}t=0\qquad\text{for all }\varphi\in C_c^{\infty}(Q).
</equation*>

<\definition>
  V is the completion of \{a\in C_c^{\infty}(Q,\mathbb{R}^n):\nabla\cdot a=0\} with respect to the space-time norm given by

  <\eqnarray*>
    \|a\|^2 &=& \int_Q\left(|a|^2+|\nabla a|^2\right)\mathrm{d}x\,\mathrm{d}t \\
    &=& \int_Q\left(a_i a_i+\frac{\partial a_i}{\partial x_j}\,\frac{\partial a_i}{\partial x_j}\right)\mathrm{d}x\,\mathrm{d}t.
  </eqnarray*>
</definition>

Space for initial conditions:

<\equation*>
  L^2_{\sigma}(G,\mathbb{R}^n)=\overline{\{b\in C_c^{\infty}(G,\mathbb{R}^n):\nabla\cdot b=0\}}\quad\text{in }L^2(G,\mathbb{R}^n).
</equation*>

Observe that by the Helmholtz projection,

<\equation*>
  L^2(G,\mathbb{R}^n)=L^2_{\sigma}\oplus\mathcal{G}.
</equation*>

<\theorem>
  (Hopf) Let G\subset\mathbb{R}^n be open. Suppose u_0\in L^2_{\sigma}(G). Then there exists a vector field u\in V that satisfies the weak form (W1), (W2) of the Navier-Stokes equations. Moreover,

  <\itemize>
    <item>\|u(\cdot,t)-u_0\|_{L^2(G)}\to 0 as t\to 0.

    <item>Energy inequality:

    <\equation*>
      \frac{1}{2}\int_G|u(x,t)|^2\,\mathrm{d}x+\int_0^t\!\!\int_G|\nabla u(x,s)|^2\,\mathrm{d}x\,\mathrm{d}s\leq\frac{1}{2}\int_G|u_0(x)|^2\,\mathrm{d}x
    </equation*>

    for t\geq 0.
  </itemize>
</theorem>

<\remark>
  <\enumerate>
    <item>No assumptions on the smoothness of \partial G.

    <item>No assumptions on the space dimension. (Yet there is a large gap between what is known for n=2 and n\geq 3.)
  </enumerate>
</remark>
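On the torus (no boundary), the Helmholtz decomposition is explicit in Fourier variables: each mode splits into its component along the wavevector k (a gradient) and its component perpendicular to k (divergence-free). The sketch below is a periodic simplification of the decomposition of L^2(G) above; the test field and grid size are my own choices.

```python
import numpy as np

# Helmholtz/Leray projection on the 2-D torus via FFT:
#   (P u)^(k) = û(k) - k (k·û(k)) / |k|^2
# removes the gradient part of each Fourier mode, leaving a
# divergence-free field. The input field is arbitrary smooth test data.
N = 64
xs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
u = np.sin(X) * np.cos(Y) + 0.3 * np.cos(2.0 * Y)
v = np.cos(X) * np.sin(2.0 * Y)

k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers on [0, 2π)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX ** 2 + KY ** 2
K2[0, 0] = 1.0                            # k = 0 mode has no gradient part

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
proj = (KX * uh + KY * vh) / K2           # k·û / |k|^2
uh_p, vh_p = uh - KX * proj, vh - KY * proj

# the projected field is divergence-free (spectrally)
div_hat = 1j * KX * uh_p + 1j * KY * vh_p
assert np.max(np.abs(div_hat)) < 1e-6
```

The discarded part, (KX*proj, KY*proj), is the gradient piece of the decomposition L^2 = L^2_\sigma \oplus \mathcal{G}; the two pieces are orthogonal mode by mode.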