> <\body> |>> Syllabus: <\enumerate> Hamilton-Jacobi <\enumerate> lectures (steady) fast-sweeping level-set method Boltzmann type equations <\enumerate> lectures finite difference splitting method (+characteristic/Lagrange type methods) spectral methods Semiconductor Moment Models in semiconductor device simulations <\enumerate> lectures drift-diffusion energy transport models hydrodynamic models quantum {hydrodynamics, drift-diffusion} <\table-of-contents|toc> Hamilton-Jacobi Equations> Finite-Difference Method> Two-dimensional Extensions> Lax-Friedrichs Hamiltonian> Godunov Hamiltonian> Osher-Sethian Hamiltonian> Unstructured Grids> High-Order Methods> DG for Hamilton-Jacobi> Proving Error Estimates> Level-Set Methods> Boltzmann-Type Equations> Semiconductor Device Simulation Models> Drift-diffusion model> High field model> Momentum solvers> Energy Transport Models> Hydrodynamic Model> Kinetic Model> Quantum Effects> \; Typed by Akil Narayan, Andreas Klöckner {anaray,kloeckner}@dam.brown.edu\>. One-dimensional Hamilton-Jacobi equations look like this: <\equation> \phi_t+H(\phi_x)=0. H is the (nonlinear) Hamiltonian. For comparison, hyperbolic conservation laws look like this: <\equation> u_t+f(u)_x=0. Note that formally, if the Hamiltonian (or flux) is linear, there's no difference between the two types of equations. Some differences remain, however, from the choice of function spaces where solutions are sought. For Hamilton-Jacobi equations, our solutions will be Lipschitz-continuous. [Table: Comparison of Hamilton-Jacobi and Conservation Laws.] There's extra horridness waiting for people who have time for high-risk research: equations like <\equation*> \phi_t+H(\phi_x)+f(\phi)=0 also occur in applications. If we set u=\phi_x, then a Hamilton-Jacobi equation () becomes u_t+H(u)_x=0.
<\definition> \phi is called a viscosity subsolution of () if, for any smooth function \psi, at each local maximum point (\bar{x},\bar{t}) of \phi-\psi, we have <\equation*> \psi_t(\bar{x},\bar{t})+H(\psi_x(\bar{x},\bar{t}))\le 0. \; <\definition> \phi is called a viscosity supersolution of () if, for any smooth function \psi, at each local minimum point (\bar{x},\bar{t}) of \phi-\psi, we have <\equation*> \psi_t(\bar{x},\bar{t})+H(\psi_x(\bar{x},\bar{t}))\ge 0. <\definition> \phi is called a viscosity solution if it is both a viscosity sub- and supersolution. Deeper mathematical insight can be obtained from papers by Crandall, Lions, and Souganidis. There is an L^\infty-contraction result that supplies well-posedness of the sort <\equation*> \|\phi-\tilde{\phi}\|_{L^\infty}\le\|\phi(\cdot,0)-\tilde{\phi}(\cdot,0)\|_{L^\infty}. Assume we have a few grid points x_1<x_2<\cdots<x_j<\cdots and also assume we approximate \phi_j\approx\phi(x_j,t). We briefly recall that a correct discretization of a conservation law () depended crucially on knowledge of the present ``wind direction''. This would determine the choice between <\eqnarray*> f(u)_x|_{x_j}\approx\frac{f(u_j)-f(u_{j-1})}{\Delta x} \text{ if } f'(u)\ge 0, \qquad f(u)_x|_{x_j}\approx\frac{f(u_{j+1})-f(u_j)}{\Delta x} \text{ if } f'(u)\le 0. This led to the introduction of a numerical flux to cope with the possibility of f'(u) changing sign: <\equation*> f(u)_x|_{x_j}\approx\frac{\hat{f}(u_j,u_{j+1})-\hat{f}(u_{j-1},u_j)}{\Delta x}, with \hat{f}(u^-,u^+) satisfying: <\itemize> \hat{f}(\cdot,\cdot) is Lipschitz-continuous. Consistency: \hat{f}(u,u)=f(u). Monotonicity: \hat{f}(\uparrow,\downarrow). So for Hamilton-Jacobi we simply set <\eqnarray*> H(\phi_x)|_{x_j}\approx H\left(\frac{\phi_j-\phi_{j-1}}{\Delta x}\right) \text{ if } H'(u)\ge 0, \qquad H(\phi_x)|_{x_j}\approx H\left(\frac{\phi_{j+1}-\phi_j}{\Delta x}\right) \text{ if } H'(u)\le 0. The generalization to changing signs of H'(u) becomes <\equation*> \hat{H}\left(\frac{\phi_j-\phi_{j-1}}{\Delta x},\frac{\phi_{j+1}-\phi_j}{\Delta x}\right). The scheme we come up with reads <\equation*> \frac{d}{dt}(\phi_j)+\hat{H}\left(\frac{\phi_j-\phi_{j-1}}{\Delta x},\frac{\phi_{j+1}-\phi_j}{\Delta x}\right)=0, where we again assume that \hat{H}(u^-,u^+) satisfies: <\itemize> \hat{H}(\cdot,\cdot) is Lipschitz-continuous. Consistency: \hat{H}(u,u)=H(u). Monotonicity: \hat{H}(\uparrow,\downarrow). <\theorem> Solutions of monotone schemes converge to the viscosity solution, with <\equation*> \|\phi^{\Delta x}-\phi\|_{L^\infty}\le C\sqrt{\Delta x}.
[Begin Lecture October 3, 2007] Recall the scheme <\equation> \frac{d}{dt}(\phi_j)+\hat{H}\left(\frac{\phi_j-\phi_{j-1}}{\Delta x},\frac{\phi_{j+1}-\phi_j}{\Delta x}\right)=0 where \hat{H} is a monotone numerical Hamiltonian (): <\itemize> \hat{H} is a Lipschitz continuous function with respect to both arguments \hat{H}(\uparrow,\downarrow) (increasing in first argument, decreasing in second) \hat{H}(u,u)=H(u) (consistency) Any monotone flux studied in conservation laws can be used for \hat{H}, e.g. a Godunov-type flux: <\equation*> \hat{H}(u^-,u^+)=\min_{u^-\le u\le u^+}H(u) \text{ if } u^-\le u^+, \qquad \max_{u^+\le u\le u^-}H(u) \text{ if } u^->u^+. And we can also use Engquist-Osher, or Lax-Friedrichs: <\equation> \hat{H}(u^-,u^+)=H\left(\frac{u^-+u^+}{2}\right)-\frac{\alpha}{2}(u^+-u^-) where \alpha=\max|H'(u)|, or another `Lax-Friedrichs' flux: <\equation> \hat{H}(u^-,u^+)=\frac{H(u^-)+H(u^+)}{2}-\frac{\alpha}{2}(u^+-u^-). \; <\theorem> A monotone scheme is convergent to the viscosity solution: <\equation> \|\phi^{\Delta x}-\phi\|_{L^\infty}\le C\sqrt{\Delta x}. \; The two-dimensional Hamilton-Jacobi equation takes the form <\equation> \phi_t+H(\phi_x,\phi_y)=0. Note that all of the above has only been developed for equations in one space dimension, equation (). Recall that one dimension is special: HJ equations in one dimension are easily recast into 1D hyperbolic conservation laws, for which all of the above analysis is already well-known. We can attempt to do a similar thing in 2D: from equation (), let u=\phi_x, v=\phi_y. Then equation () yields the system <\equation> u_t+H(u,v)_x=0, \qquad v_t+H(u,v)_y=0. Thus our scalar equation has yielded an undesirable system of equations. We can recast () as <\equation> w_t+F(w)_x+G(w)_y=0, \qquad w=(u,v)^T. \; We can define F and G so that the second and third terms read F(w)_x and G(w)_y, respectively, and write Jacobians as <\equation*> F'(w)=\begin{pmatrix}H_u & H_v\\ 0 & 0\end{pmatrix}, \qquad G'(w)=\begin{pmatrix}0 & 0\\ H_u & H_v\end{pmatrix}. Now the eigenvalues corresponding to these Jacobians are H_u (resp. H_v) and 0. If the relevant derivative is nonzero, then everything is fine, we have a well-posed hyperbolic system. However, if it vanishes, then we have a weakly well-posed system, and thus we must now deal with solving a weakly well-posed problem, which is undesirable.
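To make the one-dimensional monotone scheme concrete, here is a minimal sketch (not from the lecture: the Hamiltonian H(u)=u^2/2, the periodic grid, the forward-Euler time stepping, and all parameter values are my own choices) of the semi-discrete scheme with the Lax-Friedrichs numerical Hamiltonian:

```python
import numpy as np

def lax_friedrichs_hj_step(phi, dx, dt, H, alpha):
    """One forward-Euler step of the monotone scheme
    d/dt phi_j + Hhat(D-phi_j, D+phi_j) = 0 with the Lax-Friedrichs
    numerical Hamiltonian Hhat(um, up) = H((um+up)/2) - (alpha/2)(up-um)."""
    um = (phi - np.roll(phi, 1)) / dx   # backward difference (periodic)
    up = (np.roll(phi, -1) - phi) / dx  # forward difference (periodic)
    Hhat = H(0.5 * (um + up)) - 0.5 * alpha * (up - um)
    return phi - dt * Hhat

# Example: phi_t + (phi_x)^2 / 2 = 0 with smooth periodic data.
H = lambda u: 0.5 * u**2
N = 200
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
phi = np.sin(x)            # |H'(u)| = |u| <= 1 for this data
alpha = 1.0                # alpha >= max |H'(u)|
dt = 0.5 * dx / alpha      # CFL-type restriction for monotonicity
for _ in range(100):
    phi = lax_friedrichs_hj_step(phi, dx, dt, H, alpha)
```

The restriction dt <= dx/alpha together with alpha >= max|H'| is what makes the fully discrete update monotone, which is exactly the hypothesis of the convergence theorem above.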
We avoid this and deal directly with our scalar equation in two dimensions. Let us introduce a `monotone Hamiltonian', \hat{H}(u^-,u^+;v^-,v^+), which we ask to satisfy <\itemize> \hat{H} is Lipschitz in all arguments \hat{H}(\uparrow,\downarrow;\uparrow,\downarrow) \hat{H}(u,u;v,v)=H(u,v) and we introduce the semi-discrete `monotone scheme' for equation () <\equation> \frac{d}{dt}(\phi_{ij})+\hat{H}\left(\frac{\phi_{ij}-\phi_{i-1,j}}{\Delta x},\frac{\phi_{i+1,j}-\phi_{ij}}{\Delta x};\frac{\phi_{ij}-\phi_{i,j-1}}{\Delta y},\frac{\phi_{i,j+1}-\phi_{ij}}{\Delta y}\right)=0, then we have the same result for 2D as is stated in theorem . Note that the introduction of the relatively complex restrictions on our monotone Hamiltonians is necessary: because the Hamiltonian is possibly (probably) nonlinear, we cannot simply take \hat{H}=\hat{H}_1+\hat{H}_2 where \hat{H}_1 and \hat{H}_2 are appropriate monotone Hamiltonians for 1D problems. So what are the possible monotone Hamiltonians at our disposal? As usual, we have a Lax-Friedrichs choice: <\equation> \hat{H}(u^-,u^+;v^-,v^+)=H\left(\frac{u^-+u^+}{2},\frac{v^-+v^+}{2}\right)-\frac{\alpha}{2}(u^+-u^-)-\frac{\beta}{2}(v^+-v^-) where \alpha=\max|H_u(u,v)| and \beta=\max|H_v(u,v)|. Note that the ranges over which the maxima are taken define whether we have a local or a global Lax-Friedrichs flux (cf. 1D hyperbolic conservation laws). If we have global numbers such that A\le u\le B and C\le v\le D for all space, then taking maxima over these is a `global' flux. However, suppose we consider the ranges in the definitions of \alpha and \beta as <\equation*> \alpha=\max_{u^-\le u\le u^+,\; v^-\le v\le v^+}|H_u(u,v)| and similarly for \beta. One can show (cf. Osher and Shu) that this flux is not monotone. Can we define a proper local Lax-Friedrichs flux? Sort of. If we define <\equation*> \alpha=\max_{u^-\le u\le u^+,\; C\le v\le D}|H_u(u,v)|, \qquad \beta=\max_{A\le u\le B,\; v^-\le v\le v^+}|H_v(u,v)| where A, B, C, D are the appropriate global bounds on u and v, then this in conjunction with equation () gives a true monotone Hamiltonian. We also have a Godunov flux for the 2D scheme (): <\equation> \hat{H}(u^-,u^+;v^-,v^+)=\operatorname{ext}_{u^-\le u\le u^+}\operatorname{ext}_{v^-\le v\le v^+}H(u,v) where we define the ext function as <\equation> \operatorname{ext}_{u^-\le u\le u^+}=\min_{u^-\le u\le u^+} \text{ if } u^-\le u^+, \qquad \max_{u^+\le u\le u^-} \text{ if } u^->u^+. \; Note that the Godunov flux definition () is not symmetric: it is not the same if we reverse the order in which we apply the ext operations.
I.e., <\equation*> \min_{u^-\le u\le u^+}\max_{v^-\le v\le v^+}H(u,v)\ne\max_{v^-\le v\le v^+}\min_{u^-\le u\le u^+}H(u,v) in general. However, one can use the Godunov Hamiltonian as shown in equation (), or one can switch the direction of the application of the operators, and it can be shown that either Godunov flux is a valid monotone Hamiltonian. Another type of monotone Hamiltonian which we may implement is the Osher-Sethian Hamiltonian: if H(u,v)=f(u^2,v^2) and f is monotone with respect to each argument, one can use the Hamiltonian <\equation> \hat{H}(u^-,u^+;v^-,v^+)=f(\overline{u^2},\overline{v^2}) where <\equation> \overline{u^2}=\max(u^-,0)^2+\min(u^+,0)^2 \text{ if } f \text{ is nondecreasing in } u^2, \qquad \min(u^-,0)^2+\max(u^+,0)^2 \text{ if } f \text{ is nonincreasing in } u^2, and \overline{v^2} is defined similarly. The great advantage of this monotone Hamiltonian is its ease of implementation and low computational cost. However, the restriction on the required form of the analytic Hamiltonian limits its applicability. NB this monotone Hamiltonian has the flavor of the Engquist-Osher monotone flux for hyperbolic conservation laws. A typical equation on which one can use the Osher-Sethian Hamiltonian is <\equation> \phi_t+\sqrt{\phi_x^2+\phi_y^2}=0. In this equation H(u,v)=\sqrt{u^2+v^2}, which satisfies the hypotheses of the Osher-Sethian Hamiltonian formulation, so one can use () and (). This equation is solved very often in the context of level-set methods. The only known monotone flux on unstructured grids is a Lax-Friedrichs flux introduced by Abgrall. See Shu's paper for details and a picture. The general form looks like <\equation*> =H\(\\)|2\> - |\>\\)+(\\)|2>\> where the sums are taken over all the adjacent faces. We have so far described first-order monotone schemes for Hamilton-Jacobi equations. We can now handle equations in one space dimension very well, and we have methods for solving two-dimensional equations either on Cartesian grids, or on unstructured grids. Now we shall attempt to formulate some high-order methods to solve equations () and (). We again consider the one-dimensional Hamilton-Jacobi equation () with the scheme () where we require monotonicity of the flux: \hat{H}(\uparrow,\downarrow).
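For the level-set Hamiltonian H(u,v)=sqrt(u^2+v^2) just mentioned, f(a,b)=sqrt(a+b) is nondecreasing in both arguments, so the Osher-Sethian construction takes only a few lines; a sketch (function name is mine):

```python
import numpy as np

def osher_sethian_hamiltonian(um, up, vm, vp):
    """Osher-Sethian numerical Hamiltonian for H(u, v) = sqrt(u^2 + v^2),
    i.e. f(u^2, v^2) with f(a, b) = sqrt(a + b), nondecreasing in both
    arguments, so the upwinded squares are max(backward, 0)^2 +
    min(forward, 0)^2 in each direction."""
    u2 = np.maximum(um, 0.0)**2 + np.minimum(up, 0.0)**2
    v2 = np.maximum(vm, 0.0)**2 + np.minimum(vp, 0.0)**2
    return np.sqrt(u2 + v2)
```

Consistency is immediate: when um == up and vm == vp, the upwinded squares collapse to u^2 and v^2, recovering H(u, v) exactly.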
Recall from the hyperbolic conservation law case that when formulating high-order methods, we had many options for scheme type (FD vs FV) and there were regularity restrictions on the mesh in 1D. However, for H-J equations in 1D, the idea is much simpler: we restrict ourselves to <\itemize> finite-difference schemes any mesh We define a high-order scheme as a semi-discrete ODE of the form <\equation> \frac{d}{dt}(\phi_j)+\hat{H}(u_j^-,u_j^+)=0, \qquad u_j^\pm=\phi_x(x_j)+O(\Delta x^r). We impose the following stencils (cf hyperbolic conservation laws): <\itemize> The stencil for u_j^- is biased to the left The stencil for u_j^+ is biased to the right The modus operandi for this method is very familiar from conservation laws: we identify a stencil over which to interpolate a polynomial p(x) of degree r. We then differentiate this polynomial, evaluate the result at x_j, and assign that as the value of u_j^\pm. \; One main setback for using this method for conservation laws is the following: for conservation laws, the underlying function which the interpolant represents is discontinuous, so our reconstruction will be oscillatory near shocks. However, for H-J equations, the underlying function defining the interpolant is continuous. Thus, one may conjecture that perhaps our direct interpolant reconstruction will be better behaved. However, it turns out that even with our increased regularity in \phi, this straightforward method still does not work except for rather simple problems. (cf Y. Cheng paper) \; We can, however, borrow many of the tools we introduced for conservation laws (e.g. ENO). However, we shall need a bigger stencil to achieve the same order since we are differentiating our interpolant and thus require one more degree of freedom. \; Let us recall the ENO procedure to compute u_j^-: <\enumerate> Start with the stencil \{x_{j-1},x_j\}.
Build the linear function p^1(x)=\phi_{j-1}+\frac{\phi_j-\phi_{j-1}}{\Delta x}(x-x_{j-1}). Add either (x_{j-2},\phi_{j-2}) or (x_{j+1},\phi_{j+1}) to the stencil depending on the magnitude of the updating Newton divided difference, which is a measure of the oscillatory nature of the polynomial. E.g. for this step, the divided differences to compare are \left(\frac{\phi_{j+1}-\phi_j}{\Delta x}-\frac{\phi_j-\phi_{j-1}}{\Delta x}\right)\Big/(x_{j+1}-x_{j-1}) and \left(\frac{\phi_j-\phi_{j-1}}{\Delta x}-\frac{\phi_{j-1}-\phi_{j-2}}{\Delta x}\right)\Big/(x_j-x_{j-2}). Upon choosing the new stencil, build a new polynomial. Continue in this fashion by enlarging the stencil and updating the polynomial p(x) until the desired accuracy is reached. Finally, set u_j^-=p'(x_j). And this is the abridged WENO procedure: <\enumerate> Start with one stencil Consider all the possible stencils of the next order to add, construct all the possible polynomials, and linearly combine all these polynomials via nonlinear weights built from smoothness indicators of the candidate polynomials. \; To convert this into an appropriate H-J formulation, we can do the same thing, except that for WENO we build the nonlinear weights from smoothness indicators based on the second derivative of the candidate polynomials. We make this modification because the reconstructed interpolant has more smoothness than in the conservation law case, and it is now the second derivative norm which is a more appropriate measure of regularity. (Cf. Jiang & Peng paper, 5th-order, conforming Cartesian meshes) If one wishes to do either nonconforming Cartesian meshes or unstructured meshes, the above procedure does not work out-of-the-box. Instead, others have worked out the WENO procedure for triangular meshes (cf Yongtao Zhang thesis). Recall FD WENO (Jiang & Peng): For HCL, the transformation from the global to a uniform mesh must be smooth, so nonuniform Cartesian meshes do not work. For HJ, said transformation need not be smooth, so one can do this on nonuniform Cartesian meshes (and smooth meshes). (Thus WENO here is more popular for HJ equations than for HCL equations.) DG for HJ is not quite so popular yet because geometries are regular for many applications, because FD WENO is easily adaptable and because FD is more mature.
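The ENO stencil-growing step just described can be sketched as follows (a toy version: the helper names and the use of a plain polynomial fit for the final differentiation are my choices, not the lecture's formulation):

```python
import numpy as np

def eno_stencil(x, phi, j, degree):
    """Select an ENO stencil for the left-biased derivative at x[j]:
    start from {x[j-1], x[j]} and grow one point at a time, choosing
    the side whose new Newton divided difference is smaller in
    magnitude (the less oscillatory choice)."""
    def dd(lo, hi):  # Newton divided difference phi[x_lo, ..., x_hi]
        if lo == hi:
            return phi[lo]
        return (dd(lo + 1, hi) - dd(lo, hi - 1)) / (x[hi] - x[lo])
    lo, hi = j - 1, j
    while hi - lo < degree:
        if abs(dd(lo - 1, hi)) < abs(dd(lo, hi + 1)):
            lo -= 1
        else:
            hi += 1
    return lo, hi

def eno_derivative(x, phi, j, degree=2):
    """Differentiate the interpolant built on the ENO stencil at x[j]."""
    lo, hi = eno_stencil(x, phi, j, degree)
    p = np.polyfit(x[lo:hi + 1], phi[lo:hi + 1], degree)
    return np.polyval(np.polyder(p), x[j])
```

Note that a degree-r fit (r+1 points) is needed for an rth-order derivative approximation, reflecting the extra degree of freedom mentioned above.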
Applying DG: <\enumerate> :\ <\equation*> \+H(\)=0, then set > and get <\equation*> >uv->H(u)v\x+(v)\|, where \(u,u)> and =v> and =v>. Altogether, we get <\equation*> >(\)v->H(\)v\x+(v), where |^>=((\),(\))>. : In 1D: <\equation*> >\+>H(\)\x=0. In 2D: +H(\,\)=0>. Setting >, > gives the nasty system <\equation*> +H(u,v)=0>>|+H(u,v)=0>>>>>\. Fine, but the system may be weakly hyperbolic if =0, H\0)> or =0,H\0)>. Despite the complications, we can still write down a DG scheme: <\eqnarray*> )uw-H(u,v)w+\K>w\\>||>|vz-H(u,v)z+\K>w\\>||>>> where\ <\equation*> \H|\u>>|>|H|\v>>|>>>> If we do what we did in 1D and let \>, \> and we count equations and unknowns, we find an overdetermined system. BUT:\ <\equation*> f\x \y+H(u,v)\x\y=0 is OK. <\enumerate> Start with \P>: <\equation*> u=\,v=\. Euler forward of )> to get >, >. We can advance )> using a time stepper for \P>, there may not exist a polynomial of degree such that =u>, =v>. ``Do your best'': Find a least-squares > solution so that <\equation*> -u|L|2>+-v|L|2>=min\P>-u|L|2>+-u|L|2>. (This method is from Hu & Shu in SISC '99.) The nasty and unmotivated least-squares approach can however be recast in a more beautiful manner: (Li & Shu) Instead of all of \P>, we consider only a subspace of it, namely a space P\P> whose members satisfy the condition <\equation*> \-\=u-v=0. It turns out that the two procedures yield the same results, as proven in Li & Shu, Appl. Math. Lett. 2005. Yet another different method: (Yingda Cheng & Shu) Let's start with something simple: <\equation*> \+a(x)\=0 or <\equation*> \+(a(x)\)=a(x)\, i.e. a hyperbolic conservation law with a source term. Further, assume 0>.
Construct a DG method for this <\eqnarray*> >\v\x->a(x)\v\x+a(x)\v-a(x)\v>||>a(x)\v\x>>||>|>>|>\v\x+>(a(x)\)v\x>-)\v>+a(x)\v>||>|)\v>-a(x)\v>||>a(x)\v\x>>>|(a(x)\)v>=a(x)\v>+a(x)\v>||>|>(\+a(x)\)v\x+a(x)[\]v>||>>> We take this as inspiration for the general case +H(\)=0>, and from the sky falls the following monstrosity (ouch): <\eqnarray*> >(\+H(\))v\x>||>|minI>H(\)-minI>H(\)[\]v>||>|maxI>H(\)-maxI>H(\)[\]v>||>>> Yingda came up with this by experimentally figuring out what perturbations to the coefficient in the above ``inspiring'' example are possible. This scheme is a pure upwinding scheme, effectively of Roe type. This scheme works satisfactorily except for cells where (\)=0>, where a quick fix is to go back to Hu & Shu. For HCLs, a typical convergence proof goes like this: <\itemize> > is bounded in some strong (semi-)norm, e.g. the BV-norm <\equation*> \|u\|_{TV}=\sum_j|u_{j+1}-u_j|. )\TV(u>) or )\C(n\t)>> a subsequence of > converges to something when 0>. ``Something'' is a weak solution if a conservative scheme is used (Lax-Wendroff theorem). If a discrete entropy condition (cell entropy inequality/wavewise entropy inequality) is satisfied, then the ``something'' will be an entropy solution. )> Use the uniqueness of the entropy solution to conclude that the original sequence of the numerical solutions (not just a subsequence) converges to this unique entropy solution. (Every subsequence has a subsequence that converges to something unique.) Also, there is a ``brute-force'' approach estimating |L|>\C>. For HJ equations, things are generally expected to be better, since the solutions are smoother. In fact, it is not hard to prove )\C> for the Hu & Shu DG scheme. However, there is no equivalent for the step )> in the HJ case for the first approach. There is however some ``brute-force'' work done in the literature. Level set methods can be described as methods to capture the evolution of an interface.
There are two ways in general to capture an interface: <\itemize> Lagrangian: mark locations of the interface and evolve the locations in time Eulerian: define a function over space as and define the level set as x:F(x)=0}>.\ \; In the Lagrangian framework, suppose that we have one marker denoted as >> which denotes the location of one point on the interface. If we can calculate the temporal derivative of the location of this point and call it >>, the velocity, then solving\ <\equation> >|\t>=> will generate the evolution of one point on the interface. Now suppose >\> and we want to evolve an interface which is a one-dimensional subset of >. We parameterize the points on via e.g. normalized arclength [0,1)> and we can then solve for <\equation*> >(p,t) = (x(p,t), y(p,t)) via some evolution method similar to (). Analogously, if >\>, then our unknown is a two-dimensional subspace, and is thus parameterized by two variables: <\equation*> >(p,t) = (x(p,q,t), y(p,q,t), z(p,q,t)) \; Now consider the Eulerian framework: we consider a function :\\>. We let > be a subset of >. Define\ <\equation*> =(x,y): \(x,y,t)\0>>|=(x,y): \(x,y,t)\0>>|\=(x,y): \(x,y,t)=0>>>>> Then \\> is the interface we are looking for, and solving for satisfying (x,y,t)=0> defines the interface \> as a function of time. Note that this framework is easily generalized to higher dimensions.\ \; Consider a very special form of > as (x,y,t) = +y>-1>. This function represents the signed distance function between a point in > and the unit circle. I.e., <\equation*> \=|)>||\>>|)>||\>>>>> where > is the unit circle in > and ,\)> is the natural Euclidean function on >. It turns out that any function satisfying d\|=1> is a distance function.\ \; Consider the Lagrangian framework (again). Suppose that our marker >> evolves with the law (). We can decompose the velocity of the marker into a portion tangential to the interface (>>) and a component normal to the interface (>>). 
The portion of the velocity in the tangential direction does not change the shape of the object; it simply moves the markers along the interface. Thus, it does not contribute to the change in shape and we shall not consider it in our evolution. With this, our new evolution equation simplifies to <\equation*> \frac{d\vec{x}}{dt}=\left(\frac{d\vec{x}}{dt}\cdot\vec{n}\right)\vec{n}. \; Now consider the Eulerian framework. We wish to solve for \phi satisfying \phi(x(t),y(t),t)=0. Taking a total derivative of \phi yields <\equation*> 0=\frac{d\phi}{dt}=\frac{\partial\phi}{\partial t}+\frac{\partial\phi}{\partial x}\frac{dx}{dt}+\frac{\partial\phi}{\partial y}\frac{dy}{dt}=\frac{\partial\phi}{\partial t}+\left(\frac{dx}{dt},\frac{dy}{dt}\right)\cdot\nabla\phi=\frac{\partial\phi}{\partial t}+F\,\vec{n}\cdot\nabla\phi, where <\equation*> \vec{n}=\frac{\nabla\phi}{|\nabla\phi|}. Then the evolution equation simplifies to <\equation> \frac{\partial\phi}{\partial t}+F\,|\nabla\phi|=0. But a distance function satisfies |\nabla\phi|=1, so we recover <\equation*> \frac{\partial\phi}{\partial t}+F=0 for distance functions \phi. Equation () is the starting point for our mathematical discussion. The function f that we are seeking as a solution of Boltzmann-type equations is a density over space (1D to 3D), velocity (1D to 3D), and time (1D). Total dimensionality is 6 (+1 for time). The equations that we're dealing with read <\eqnarray*> f_t+u\,f_x-\frac{e}{m}E\,f_u=\frac{n\,M(u)-f}{\tau}, \qquad E=-\phi_x, \qquad (\epsilon\phi_x)_x=e(n-n_d),\ \phi(0)=0,\ \phi(1)=v_{\text{bias}}, \qquad n(x,t)=\int_{-\infty}^{\infty}f(x,u,t)\,du, \qquad M(u)=\frac{1}{\sqrt{2\pi\theta}}e^{-u^2/2\theta}, where e, m, \tau, \epsilon, and \theta are given constants, and n_d is a given function of x. The coordinates observe the bounds: 0\le x\le 1, -\infty<u<\infty, t>0. This equation has the following features: <\description> >>It's a scalar equation. >>LHS is a simple convection in conservation form. Upwinding is simple to implement. >>RHS is ``just'' a source term >>Dimension is too high. 1D or at most 2D in space. 1D in space >1D or 2D in velocity space. (collision dependent) 2D in space >2D or 3D in velocity space. >>Often the characteristic convection speed is unbounded. That, of course, kills explicit timestepping in its reliance on CFL-type conditions <\equation*> \max|u|\frac{\Delta t}{\Delta x}+\max\left|\frac{e}{m}E\right|\frac{\Delta t}{\Delta u}\le 1. Possible resolutions include: <\itemize> Just bound it: -u_{\max}\le u\le u_{\max}. u in the Boltzmann equation is replaced by a bounded function.
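Once the velocity domain is truncated, the CFL-type condition above gives an explicit stable time step; a minimal sketch (the lumping of the physical constants into the field bound, and all names, are my simplifications):

```python
def boltzmann_cfl_dt(u_max, E_max, dx, du, safety=0.9):
    """Largest stable explicit time step for the convection part of a
    1D-1D kinetic equation f_t + u f_x - E f_u = source: the combined
    CFL condition  max|u| dt/dx + max|E| dt/du <= 1  (constants like
    e/m absorbed into E_max).  The velocity truncation |u| <= u_max
    is what keeps this bound finite."""
    return safety / (u_max / dx + E_max / du)
```

For instance, halving the velocity-space cell size du with a fixed field bound tightens the step roughly proportionally, which is why unbounded characteristic speeds are fatal for explicit methods.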
>>More complicated collision models exist: <\equation*> K(x,u,u)f(u)\u. , : compute a numerical integral in the dimension of . Treating the LHS: <\itemize> spectral methods. In space: <\itemize> Chebyshev Legendre (no good FFT) In velocity space: \u\\> in multiple dimensions <\itemize> Infinite domains: Hermite and Laguerre polynomials have been tried... ...with mixed success can sometimes be pretty nonsmooth Lagrange-type methods (follow characteristics) <\itemize> can increase \Delta t tremendously Usual ``high-resolution'' solvers <\itemize> treat \phi_x and \phi_y by TVD, ENO, WENO, DG drastic restrictions on \Delta t save time by freezing weights in WENO (high-order linear approximation) Treating the RHS: <\itemize> simple relaxation type collision can be handled explicitly as a source term <\itemize> difficulty arises when \tau\ll\Delta x or \Delta u (a ``stiff'' source term), so \Delta t has to be reduced use an implicit solver for the source term to overcome this difficulty or go really high-tech and use: <\equation*> f^{n+1}=f^n+\Delta t\,(C+S), where C stands for the convection term and S stands for the source term. In general, the collision term is a real integral. <\itemize> Fatemi & Odeh: use linear interpolation to actually evaluate the integral. Possibly use WENO interpolation to avoid oscillation. A. Majorana (2001) et al. use a clever transform to avoid interpolation and have the integral evaluation always use values at exact grid points. Possibly use Fast Multipole Methods to reduce computational cost. Or use FFT. \ \; \; [Name-dropping in Shu's circle of friends: Joseph Jerome (Northwestern University), Odeh (Bell labs?), and Fatemi (NU as well? Graduated)] <\equation> n_t+J_x=0, where n is the number density of electrons, and J is the current density. Physical properties of the medium include the doping density n_d. We can decompose J into a hyperbolic and a viscous part as follows: <\equation> J = J_{\text{hyp}}+J_{\text{visc}}. We define\ <\equation*> J_{\text{hyp}}=-\mu\,n\,E, where \mu is the mobility.
A typical value for > is m/V]>. One of the usual function values for n_d is\ <\equation*> \=|||=10 (\m)>>|||=2\10 (\m)>>>>> or we can also make \mu a function of E (the electric field) as\ <\equation*> \mu=\mu(E)=\frac{\mu_0}{1+|E|/v_s}, for \mu_0=4.0, v_s=2.0 [\mu m/ps]. Recall that E is the electric field given by\ <\equation*> E = -\phi_x, where \phi is the potential and is solved for via <\equation*> (\epsilon\,\phi_x)_x=e\,(n-n_d), \qquad \phi(0) = 0,\ \phi(L) = v_{\text{bias}}. \epsilon is the permittivity and v_{\text{bias}} is some bias potential. Now we can define the diffusive term as <\equation*> J_{\text{visc}}=-\mu\,(n\,\theta)_x, where we have introduced the temperature \theta, <\equation*> \theta=\frac{k}{m}\,T. Note that in this model, the diffusive term (n\theta)_x is linear in n, so this is a relatively `easy' model. However, this model is difficult to compute because the doping n_d is very often discontinuous, and leads to very sharp gradients (layers) in the number density n. A very popular method (used e.g. in commercial software) for solving drift-diffusion systems like this one is the following: suppose we have steady state. Then the equation basically reads () <\equation*> (a\,n_x)_x-(b\,n)_x=0. Integrating once, <\equation*> a\,n_x-b\,n=\text{const}, and then the solution can be written as\ <\equation*> n=c\,e^{\dots}. However, one can still just hit () with ENO/WENO/DG solvers, and it will work quite well.\ (See Cercignani and Gamba and Levermore) Idea: use Maxwell's equations as the driving force. Start with the same equations () and (). But now take <\equation*> =-\*n*E-\*\*>*n*(-\*n*E+\)>>|=-\*(n*(\+2*\*E))+\*\*E*(\*n*E)>>>>> Computationally, this is not that much different from drift-diffusion: just add some more terms to the solver.\ So far we have only conserved `mass', i.e. the mass density n. However, it also makes sense to conserve `momentum': <\equation*> nu=\int f\,v\,dv, where f is the phase-space density function (the unknown in the Boltzmann eqn).\ Idea: use mass and `energy' conservation: <\equation*> W=\int f\,(v-u)^2\,dv. Then we model momentum explicitly so that we don't have to conserve it. The equations are then conservation of mass () and some conservation law for W.
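As an illustration of the field-dependent mobility idea above, here is a sketch using a Caughey-Thomas-type saturation form; the exponent and all parameter values are placeholders of mine, not the values used in the lecture:

```python
import numpy as np

def mobility(E, mu0=0.75, v_s=2.0, beta=2.0):
    """Field-dependent mobility of Caughey-Thomas type,
    mu(E) = mu0 / (1 + (mu0*|E|/v_s)**beta)**(1/beta).
    mu(0) = mu0 (low-field mobility) and the drift velocity
    mu(E)*|E| saturates at v_s for large fields.  All parameter
    values here are hypothetical placeholders."""
    return mu0 / (1.0 + (mu0 * np.abs(E) / v_s)**beta)**(1.0 / beta)

def drift_current(n, E, mu0=0.75, v_s=2.0, beta=2.0):
    """Hyperbolic part of the current, J_hyp = -mu(E) * n * E."""
    return -mobility(E, mu0, v_s, beta) * n * E
```

The design point is the saturation property: however large E becomes, the drift speed mu(E)|E| stays below v_s, which mirrors the physical velocity saturation that motivates making mu depend on E at all.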
There are many, many types of energy transport models. A particular one is <\equation*> u+f(u)=g(u)+h(u), with\ <\equation*> u =>|>>>>>, f(u) = \*n*(E)>>|(E)+D(E)>>>>>, g(n) =>|(E)>>>>>, and <\equation*> h(u) = >|(E)*(\)+>(n-n)*n*D(E)-n*E|\t>>>>>> Advantage of this model: only two unknowns (as opposed to three for momentum conservation).\ <\equation> n_t+(nv)_x=0, \qquad p_t+(pv+nkT)_x=-e\,n\,E-\frac{p}{\tau_p}, \qquad w_t+(vw+vnkT)_x=-e\,n\,v\,E-\frac{w-w_0}{\tau_w}+(\kappa\,n\,T_x)_x. p is the momentum, T is temperature, w is energy. The LHS of the system above is exactly the Euler equations of gas dynamics. This is great: we can just apply the usual hyperbolic conservation law techniques. However, it's costly: in 2D, we've got 4 unknowns.\ <\equation*> f_t+u\,f_x-\frac{e}{m}E\,f_u=\frac{n\,M(u)-f}{\tau}. f is the phase-space density function. M is the Maxwellian (Gaussian) profile in velocity-space. This model is more general than drift-diffusion, high field, and (maybe) energy transport. However, one cannot derive the hydrodynamic model from this one. One could also replace the right-hand side by\ <\equation*> Q(f) = >s(k,k)*f(t,x,k) - s(k,k)*f(t,x,k)*\k, where the kernel is defined as\ <\equation*> )>||(k,k)*\*(z(k)-z(k))+>>|||>|||)(n+1)*\*(z(k)-z(k)+\*\)+n*(\(z(k)-z(k)-\\),>>>>> where\ <\equation*> z(k) = |~>|m>\*\|k\|>*>*|m>*\|k\|. Earliest attempts at incorporating quantum effects into these models are attributed to C. Gardner, who developed a quantum hydrodynamic model. Unfortunately, that first model looks like it might have some problems with it. Quantum models for simpler models (drift-diffusion) have been done, and are relatively well-accepted.\ Nobody has really done anything solid with quantum effects in the kinetic model. We're solving the 2D Eikonal equation <\equation*> |\nabla\phi(x,y)|=f(x,y), \qquad (x,y)\in\Omega\subset\mathbb{R}^2, with the condition \phi(x)=0 for x on some given curve \Gamma. Let \Omega_h=\{(x_i,y_j)\} be a uniform grid with h=\Delta x=\Delta y, and then set <\equation> \left[\left(\frac{\phi_{ij}-a}{h}\right)^+\right]^2+\left[\left(\frac{\phi_{ij}-b}{h}\right)^+\right]^2=f_{ij}^2, \ where <\eqnarray*> a=\min(\phi_{i-1,j},\phi_{i+1,j}), \qquad b=\min(\phi_{i,j-1},\phi_{i,j+1}). Steps in the algorithm: <\enumerate> Initialize \phi to a large value everywhere. Fix \phi on \Gamma, never let it change.
Until convergence, do <\enumerate> Pick the next sweep ordering out of <\itemize> i=1,\dots,I,\ j=1,\dots,J i=I,\dots,1,\ j=1,\dots,J i=1,\dots,I,\ j=J,\dots,1 i=I,\dots,1,\ j=J,\dots,1 and for each point in the sweep, do: <\enumerate> solve () for a quantity \bar{\phi} set the new \phi_{ij}\leftarrow\min(\bar{\phi},\phi_{ij}) \ in a sweeping motion, moving according to the chosen ordering. The solution to () can be written down explicitly: <\equation*> \bar{\phi}=\min(a,b)+f_{ij}h \text{ if } |a-b|\ge f_{ij}h, \qquad \bar{\phi}=\frac{a+b+\sqrt{2f_{ij}^2h^2-(a-b)^2}}{2} \text{ otherwise}. Remarks: <\itemize> Easily generalizable to multiple dimensions The entire thing is essentially a Gauss-Seidel update. Since Gauss-Seidel is order-dependent, we try each ordering in turn. A 1D problem is done after 2 sweeps. In sweeps, you can bring down the error to order . Thinking about the 1D situation is really helpful: solve |\phi_x|=1; sweeping right just gives the distance from the left end, and sweeping left puts the mandatory kink in the right place. Next, we're trying to solve a more horrible, even more nonlinear equation: <\equation*> \+\\|\|\\\|>\|\\\|=0, where again we're given \phi=0 on the curve \Gamma, and > is sort of a ``normal speed'' or a ``surface tension''. Focusing on the zero level set gives us the time-independent equation <\equation*> \|\|\\\|>\|\\\||\>=1 on all of \Omega. The solution of this problem takes a shape called . To tackle this nonlinearity, we use the Lax-Friedrichs Hamiltonian, which in 1D reads <\equation*> \hat{H}(u^-,u^+)=H\left(\frac{u^-+u^+}{2}\right)-\alpha\,\frac{u^+-u^-}{2}, where <\equation*> \alpha\ge\max_p|H'(p)|. To repeat, our equation reads H(\nabla\phi)=R(x), with \phi=g on \Gamma given. Then our L-F update can be written as <\equation> \bar{\phi}_i=\frac{\Delta x}{\alpha}\left[R(x_i)-H\left(\frac{\phi_{i+1}-\phi_{i-1}}{2\Delta x}\right)\right]+\frac{\phi_{i+1}+\phi_{i-1}}{2}, and again we'll have \phi_i=\min(\phi_i,\bar{\phi}_i). The computational boundary is handled by means of ghost cells, setting <\eqnarray*> >||,>>|>||.>>>> Remarks: <\itemize> We're no longer upwinding: () takes information from both sides of the point currently being updated. The flux is monotone. (+) It's easier to do in higher dimensions. (+) No nonlinear inversion necessary. (-) Takes many more Gauss-Seidel iterations to converge. In order to achieve high order, we will look at a WENO approximation next.
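Before moving on to high order, the first-order Godunov fast-sweeping iteration described above (the four Gauss-Seidel orderings plus the explicit local update) can be sketched as:

```python
import numpy as np

def fast_sweep_eikonal(f, h, fixed, n_loops=4):
    """Fast sweeping method for |grad(phi)| = f on a uniform grid.
    `fixed` is a boolean mask of points on Gamma where phi = 0 is
    prescribed and never changed.  Each loop performs the four
    Gauss-Seidel sweep orderings; the local update is the explicit
    solution of the Godunov upwind discretization."""
    ny, nx = f.shape
    big = 1e10                          # plays the role of "infinity"
    phi = np.where(fixed, 0.0, big)
    for _ in range(n_loops):
        for di, dj in ((1, 1), (-1, 1), (1, -1), (-1, -1)):
            irange = range(ny) if di > 0 else range(ny - 1, -1, -1)
            jrange = range(nx) if dj > 0 else range(nx - 1, -1, -1)
            for i in irange:
                for j in jrange:
                    if fixed[i, j]:
                        continue
                    # a, b: smaller neighbor value in each direction
                    a = min(phi[i - 1, j] if i > 0 else big,
                            phi[i + 1, j] if i < ny - 1 else big)
                    b = min(phi[i, j - 1] if j > 0 else big,
                            phi[i, j + 1] if j < nx - 1 else big)
                    fh = f[i, j] * h
                    if abs(a - b) >= fh:        # one-sided update
                        cand = min(a, b) + fh
                    else:                        # genuinely 2D update
                        cand = 0.5 * (a + b +
                                      np.sqrt(2 * fh * fh - (a - b)**2))
                    phi[i, j] = min(phi[i, j], cand)
    return phi
```

With f = 1 and a single fixed source point this computes (a first-order approximation of) the Euclidean distance to the source; along the grid axes the update chain is exact.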
Recall: <\eqnarray*> )>||-\|\x>,>>|)>||-\|2\x>.>>>> Now we're going to have <\eqnarray*> )>||)-\|2h>+w-4\+\|2h>,>>|)>||)-\|2h>+w+4\-3\|>,>>>> where our weights are <\eqnarray*> >||>,\=+[\-2\+\]|\+[\-2\+\]>,>>|>||>,\=+[\-2\+\]|\+[\-2\+\]>.>>>> )>> are analogous. We introduce the Godunov Hamiltonian just as above: <\equation*> |\>=h>|fh,>>|h-(a-b)>|2>>|.>>>>> But this time, we are going to set <\equation*> a=u=min\-\x(\),\-\x(\), and likewise for b, using our WENO approximations. Remarks: <\itemize> Neither monotone nor upwinded. We do not take =min(\,|\>)>, because we might accidentally generate values below the viscosity solution, which would kill our nice high-order accuracy. Instead, we take =|\>>. WENO may also be introduced for the Lax-Friedrichs Hamiltonian. Recall our previous result <\equation*> |\>=|\x>+|\y>>R-H-\|2\x>,-\|2\y>+\-\|2\x>+\-\|2\y>. To introduce WENO into this, we do the following: <\enumerate> Replace <\equation*> H(\,\)\H)+(\)|2>,)+(\)|2> Replace <\equation*> +\|2>\+\x[(\)-(\)]|2\x>. Remarks: <\itemize> LF-WENO is more robust than Godunov-WENO. The high-order methods require a good starting guess to actually converge. One possibility is to use a low-order method to produce a starting guess, and then use the high-order method. <\equation*> \; The Boltzmann equation models the evolution of (multiple) species of particles in phase (physical-velocity) space. The collisionless Boltzmann equation is called the Vlasov equation. What kinds of collisions can we have? In semiconductor devices, electron-electron and electron-hole collisions are negligible (a hole is a positively charged quasiparticle: a missing electron). The only external applied field we consider is the electric field.\ \; Consider the number density f.
The rate of change due to convection is zero (neglecting collision): <\equation*> f|\t>=0 It's reasonable to say the rate of change of collision and convection are the same: <\equation*> f|\t>=f|\t> The convection can be carried out by following characteristics: <\equation*> >=u>>|>|>=-E(x,t)>>>>> The collision term \u)> is the rate that a particle with position at time changes its velocity from > to due to a scattering (collision). Via probabilities and Pauli exclusion principle, the scattering term has the form \ slide advanced (d'oh!) \; This formulation employed to incorporate quantum effects in semiconductor crystal lattice. Semiclassical Boltzmann eqn: <\equation*> \f+u(k)\\f->E\\f=Q(f), where\ <\equation*> Q(f) = )>s(x,k,k)*f(t,x,k)*(1-f(t,x,k))-s(x,k,k)*f(t,x,k)*(1-f(t,x,k))\k* \; Types of collisions: <\itemize> conservation of particles <\equation*> >Q(f)*\k=0 Relaxation to thermal equilibrium (something else) Low-density approximation: quadratic term ignored We will look at relaxation to equilibrium: the Maxwellian (equilibrium distribution) is\ <\equation*> M(k) = N>*e-(k)|k*T>> where >> is a normalizing term.\ We also will look at Polar optical scattering: <\equation*> Q(f) = >S(k,k)*f(t,x,k)-S(k,k)*f(t,x,k)*\k, where <\equation*> S(k,k) = G(k,k)*(n+1)*\(\(k)-\(k)+\\) + something else The ((\(k)-\(k)+\\)> function is used as an approximation. Usually (k)> can be taken as some simple smooth function to make the calculation of easier. We can take <\itemize> (k)=> parabolic non-parabolic Kane dispersion A common application of this equation is for Gallium-arsenide semiconductors. 
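Regarding the choice of a simple smooth \epsilon(k) just mentioned: for the non-parabolic Kane dispersion \epsilon(1+\tilde{\alpha}\epsilon)=\hbar^2|k|^2/(2m^*), the band energy is the positive root of a quadratic. A small sketch in normalized units (the function name and the normalization are mine):

```python
import numpy as np

def kane_energy(gamma, alpha=0.5):
    """Band energy from the Kane (non-parabolic) dispersion relation
    eps * (1 + alpha * eps) = gamma, where gamma = hbar^2 |k|^2 / (2 m*)
    is the parabolic-band energy, everything in consistent units.
    alpha is the non-parabolicity factor; alpha = 0 recovers the
    parabolic band eps = gamma."""
    gamma = np.asarray(gamma, dtype=float)
    if alpha == 0.0:
        return gamma
    # positive root of alpha*eps^2 + eps - gamma = 0
    return (np.sqrt(1.0 + 4.0 * alpha * gamma) - 1.0) / (2.0 * alpha)
```

For alpha > 0 the band is flatter than the parabolic one (eps < gamma), which is the physical point of the correction.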
Direct band-gap semiconductors like GaAs can be used for LEDs and for semiconductor lasers.\ <\equation*> f|\t>+u*f|\x>-E(x,t) = blah blah In order to determine the electric field, we need to solve a Poisson problem <\equation*> >>|>|\)=e(n(x,t)-n)>>|>|(0,t) = 0>>|>|(1,t) = v>>>>> \; Usually the collision term is\ <\equation*> > All the moments (except the zeroth order one) relax to the equilibrium at the same rate, but momentum and energy realistically don't do this, so this is a disadvantage of this method.\ \; This equation has stuff like\ <\equation*> f+u*f=0, but is technically unbounded. This is bad for CFL conditions. The way to get around this is to discretize so that you cut off at say a>, and you increase until you see that the answer doesn't change. E.g. WENO is used to solve these things.\ <\equation*> f|\t>+>\|\k>\f|\x>->E\f|\k>=Q(f) is the model we shall use. > and are constants. In this case, we take (k)> in as <\equation> \(k) = |~>|m>\\|k\|>>*|m>*\|k\|, where |~>> is the non-parabolicity factor (parabolic band |~>=0>). we use |~>=0.5>. The collision operator is\ <\equation*> Q(f) = S(k,k)*f(t,x,k)-S(k,k)*f(t,x,k)\k, and is some horrible function of (k)>. Again the -field is obtained via a Poisson system.\ We've got a 6-D in space, 1-D in time for a 3D device. This huge physical space with the collision integral is motivation for using DSMC. DSMC is simple, and relatively cheap to compute. The disadvantage is that this method introduces some noise, so we need lots and lots of particles (Monte-Carlo trials) in order to get a reasonable approximation back. This is why we need deterministic solvers. However, this is expensive. Ways that have been explored to do this are <\itemize> Use spherical coordinates and energy as an unknown >can get down to 2 dimensions Use parabolic approximation, solve HCL with upwinding and a predictor-corrector in time. 
However, there is a singularity in the transformation, so you've got to do some fancy footwork to get around it. Majorana and Pidatella use a new transformation of variables which is good even in the non-parabolic case and changes the Boltzmann-Poisson system into a hyperbolic conservation law, and this is great. The conservation law is linear if we neglect the nonlinearities in E. A good way to avoid the oscillations inherent in high-order schemes is WENO. Usually these applications are 1D, but they `can be generalized' to as much as a 2D (5D phase space) problem. \; The transformation in k uses a scaled energy w and angle variables: <\equation*> k = \frac{\sqrt{2\,m\,k_B\,T}}{\hbar}\,\sqrt{w\,(1+\tilde\alpha_K w)}\,\big(\mu,\ \sqrt{1-\mu^2}\cos\varphi,\ \sqrt{1-\mu^2}\sin\varphi\big), with \mu a direction cosine and \tilde\alpha_K a scaled non-parabolicity constant. In addition we use (). We also need the Jacobian of the transformation, which can be computed. The equation is nondimensionalized by introducing appropriate scales for x, t, etc. The new unknown is <\equation*> \Phi(t,z,w,\mu) = s(w)\,F(t,z,w,\mu), \qquad s(w) = \sqrt{w\,(1+\tilde\alpha_K w)}\,(1+2\,\tilde\alpha_K w), where s(w) is (up to constants) the Jacobian of the transformation at energy w. Here z is the scaled x, w the scaled energy (\sim |v|^2), and \mu = \cos of the angle between z and k. Particle counts transform as <\equation*> \int f\,\mathrm{d}k\,\mathrm{d}x \sim \int \Phi\,\mathrm{d}w\,\mathrm{d}\mu\,\mathrm{d}z. We now write the collision term as s(w)\,C(\Phi), which involves a one-dimensional integral over w'. Now z \in [0,L], w \in [0,w_{\max}], \mu \in [-1,1]. We let w_{\max} = N_w\,\Delta w be the maximum value of the energy. We can also introduce some `cut' in the collision kernel so that the number of electrons with velocity below the cutoff is constant in time. The new eqn is <\equation*> \frac{\partial\Phi}{\partial t} + \frac{\partial (a_1\Phi)}{\partial z} + \frac{\partial (a_2\Phi)}{\partial w} + \frac{\partial (a_3\Phi)}{\partial \mu} = s(w)\,C(\Phi). This is a form for which we can do successive one-dimensional differentiations, in contrast to the non-transformed equation, for which we must do a three-dimensional differentiation. In addition, C(\Phi) is a one-dimensional integral (as opposed to the original three-dimensional integral without the \delta-function approximation). \; The numerical scheme uses a 5th-order FD WENO method with a 3rd-order TVD RK in time. The shift \gamma (a translation in w needed to evaluate the collision operator) is chosen to be an integer multiple of \Delta w. \Delta w is thus chosen so that \Phi(t,z,w\pm\gamma,\mu) lands exactly on a grid point. 
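The 5th-order finite-difference WENO machinery used above is a generic building block. Here is a minimal sketch of the standard reconstruction of a left-biased interface value (Jiang-Shu smoothness indicators), independent of the Boltzmann-Poisson specifics.

```python
def weno5_reconstruct(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Fifth-order WENO reconstruction of the left-biased value at x_{i+1/2}
    from the cell values f_{i-2}, ..., f_{i+2}."""
    # three third-order candidate reconstructions
    p0 = (2.0*fm2 - 7.0*fm1 + 11.0*f0) / 6.0
    p1 = (-fm1 + 5.0*f0 + 2.0*fp1) / 6.0
    p2 = (2.0*f0 + 5.0*fp1 - fp2) / 6.0
    # smoothness indicators
    b0 = 13.0/12.0*(fm2 - 2.0*fm1 + f0)**2 + 0.25*(fm2 - 4.0*fm1 + 3.0*f0)**2
    b1 = 13.0/12.0*(fm1 - 2.0*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13.0/12.0*(f0 - 2.0*fp1 + fp2)**2 + 0.25*(3.0*f0 - 4.0*fp1 + fp2)**2
    # nonlinear weights built from the linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s
```

On smooth data the nonlinear weights approach the linear ones and fifth-order accuracy is recovered; near a discontinuity the weight of any stencil crossing it collapses, which is what suppresses the oscillations.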
Thus, \Delta w is also chosen so that w_{\max} + \gamma is an integer multiple of \Delta w, where \gamma is the translation in w used in evaluating the collision operator. This saves time: we don't have to do any high-order interpolation to evaluate \Phi(t,z,w\pm\gamma,\mu). The scheme integrates the BTE over rectangles in the (w,\mu) plane. We use a tensor-product-like approximation so that we can do constant-coefficient one-dimensional derivative approximations for a multi-dimensional, non-constant-coefficient problem. \; <\strong> [Begin Lecture 11/07/07: Yanlai Chen] (cf. R. Nochetto) \; The general adaptive method includes the following (looped) steps: <\enumerate> Solve the problem on a triangulation T Estimate the error of the computed solution on this mesh Mark the elements for which the error is too large Refine/coarsen the grid according to the marked elements \; The `estimate' step usually involves estimates of the error as follows: <\itemize> Upper bound: \|e\| \le C\,\sum_{T}\eta(T) Lower bound: c\,\sum_{T}\eta(T) \le \|e\| Oftentimes we don't really use the lower bound, because we don't do much coarsening. The `mark'/`refine' step determines a smallest set of elements \hat M carrying a fixed fraction of the estimated error: <\equation*> \hat M = \operatorname{argmin}\Big\{|\hat M| : \sum_{T\in\hat M}\eta(T) \ge \theta\,\eta(\Omega)\Big\} for some \theta \in (0,1). If you do this, together with the estimation step, then one can prove convergence of the adaptive loop over its iterations k: <\equation*> \|u - u_k\| \le C\,\rho^k for some \rho < 1. \; (cf. T. Tang) The steps of the moving mesh method are <\enumerate> Evolve the solution by a few time steps Redistribute the mesh Update the solution on the new mesh One keeps the number of elements |T| the same, but redistributes the vertices (and thus nodes) of the elements according to a `monitor function', of which <\equation*> w = \sqrt{1 + |u_x|^2} is an example. Let x be the physical domain (global) coordinate while \xi is the computational domain (standard) coordinate. One evolves the element vertices by equidistribution, <\equation*> (w\,x_\xi)_\xi = 0, \qquad\text{i.e.}\qquad w\,x_\xi = \text{const}. The procedure for adaptivity with error control is <\enumerate> Solve Estimate Generate a new mesh Utilize a computable function \Phi which satisfies \|u - u_h\| \le \Phi(u_h). \; The equation we're trying to solve is <\equation> u + H(\nabla u) = f. Why no time evolution? (Unofficially: can't prove the error bound.) 
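The Estimate/Mark steps of the adaptive loop above are commonly realized by exactly the argmin criterion written there (bulk, or Dörfler, marking). A minimal greedy sketch, with the element indicators η assumed given:

```python
def dorfler_mark(eta, theta=0.5):
    """Return the smallest set of element indices whose summed error
    indicators reach the fraction theta of the total estimated error."""
    total = sum(eta)
    # visit elements in order of decreasing indicator
    order = sorted(range(len(eta)), key=lambda i: -eta[i])
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += eta[i]
        if acc >= theta * total:
            break
    return marked
```

Sorting largest-first makes the greedy set minimal for this criterion; the marked elements are then refined and the loop repeats.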
However, suppose \varphi solves the Eikonal equation, e.g. (). Then \varphi satisfies (). \; Another way: first obtain \psi = u_x, which solves <\equation*> \psi + H(\psi)_x = f_x, and compute u via <\equation*> u(x) = f(0) - H(\psi(0)) + \int_0^x \psi(s)\,\mathrm{d}s. Note: u is also a steady-state solution to <\equation*> w_t + w + H(w_x) = 0. So another way of obtaining u_h is to solve <\equation*> \varphi_t + \varphi + H(\varphi_x) - f = 0 to steady state. \; Newton's method is used to solve the weak formulation of (). However, Newton's method only converges for sufficiently close initial guesses. So a dual Runge-Kutta-type method is used to supplement Newton-Raphson far away from the point of convergence. Error Estimate> Several estimates have been attempted: <\itemize> 1995: Cockburn & Gau developed a method for conservation laws and carried it over to H-J equations. However, the effectivity indicator is poor. Crandall & Lions: continuous dependence yields an error estimate; the effectivity index is good in smooth, monotone regions, but poor for non-smooth solutions. Albert, Cockburn, French, & Paterson developed an effectivity index that is good for monotone solutions (in smooth or non-smooth regions). However, it is poor for high-order DG schemes. One can postprocess the solution to recover a good effectivity index, but only when the mesh is uniform. Chen and Cockburn: the effectivity index is good for any scheme, but only for one-dimensional problems. One can show the continuous dependence result <\equation*> \left.\begin{aligned} u + H(\nabla u) &= f\\ v + H(\nabla v) &= g \end{aligned}\right\} \quad\Longrightarrow\quad \|u - v\|_{L^\infty} \le \|f - g\|_{L^\infty}. Then we can regard the approximation u_h as the exact solution of a perturbed equation, <\equation*> u + H(\nabla u) = f, \qquad u_h + H(\nabla u_h) = g, where we define g as <\equation*> g = \begin{cases} u_h + H(p), \quad p \in D^- u_h, & \text{where this is } \ge 0,\\ u_h + H(p), \quad p \in D^+ u_h, & \text{where this is } \le 0, \end{cases} in accordance with the theory of viscosity solutions. If we do this, then <\equation*> \|u - u_h\|_{L^\infty} \le \sup_x |R(u_h)|, where R(u_h) = u_h + H(\nabla u_h) - f is the residual. \; <\strong> [Missed Lecture 11/29/07: Wei Wang and Dan Paulsen] \; <\strong> [Begin Lecture 12/05/07: Chi-Wang Shu] \; Much of this section is taken from A.S. 
Ackleh, et al, , v51 (2005), pp 35-59. \; Notation: <\itemize> x: size t: time u(x,t): density of individuals of size x at time t We take (x,t) \in (0,L] \times (0,T], where L and T are some upper bounds. The PDE is <\equation> u_t + \big(g(x,Q(x,t))\,u\big)_x + m(x,Q(x,t))\,u = 0. We can call this a conservation law with a `source term'; g represents some sort of growth. Boundary conditions can be prescribed as <\equation*> g(0,Q(0,t))\,u(0,t) = C(t) + \int_0^L \beta(x,Q(x,t))\,u(x,t)\,\mathrm{d}x, \qquad t \in (0,T]. C can be viewed as some source (e.g. inflow from some external source). Q is called the `environment'. It is defined as <\equation*> Q(x,t) = \alpha\int_0^x w(\xi)\,u(\xi,t)\,\mathrm{d}\xi + \int_x^L w(\xi)\,u(\xi,t)\,\mathrm{d}\xi. Thus the `environment' depends globally on the density everywhere. 0 \le \alpha \le 1 is a constant which takes into account the notion that individuals with size smaller than x have limited (or reduced) influence on those of size x. Some physical interpretations of certain quantities are <\itemize> g: growth rate m: mortality rate \beta: reproduction rate C(t): inflow rate of size-zero individuals from some external source We assume some hypotheses on the properties and regularity of the system: <\enumerate> g \in C^1, g > 0 for x \in (0,L), g_Q(x,Q) \le 0 m \ge 0, m \in C^1, m_Q(x,Q) \ge 0 \beta \ge 0, \beta \in C^1 w \ge 0, w \in C^1 u_0 \in BV(0,L), u_0 \ge 0 \; Ackleh uses a first-order backward-Euler scheme with upwinding: <\equation*> \frac{u_j^{n+1} - u_j^n}{\Delta t} + \frac{g_j\,u_j^{n+1} - g_{j-1}\,u_{j-1}^{n+1}}{\Delta x} + m_j\,u_j^{n+1} = 0. The quadrature used to compute the integrals in the definition of Q is Riemann integration using right-hand values. This is to avoid using the value u(0,t), which depends in a complicated fashion on the global values of u. \; The above scheme is weighted toward theory more than practice: it is hard to implement, but one can prove existence and uniqueness of solutions to the PDE and convergence of the scheme. If one instead uses forward Euler, the implementation becomes much easier (again using the right-hand Riemann quadrature). However, proving convergence is much harder, though it can be done (Jun Shen). Both schemes are first-order. \; Suppose we're looking for a second-order TVD scheme. 
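Before moving to higher order, the first-order implicit upwind step above can be sketched. This is a simplification for illustration only: the coefficients g and m are frozen arrays here (in the real scheme they depend on the nonlocal environment Q and are re-evaluated via the right-hand Riemann quadrature each step), and the inflow boundary value is passed in directly rather than computed from the birth integral.

```python
import numpy as np

def step_upwind_backward_euler(u, g, m, u_in, dt, dx):
    """One backward-Euler step of
        (u_j^{n+1} - u_j^n)/dt
          + (g_j u_j^{n+1} - g_{j-1} u_{j-1}^{n+1})/dx + m_j u_j^{n+1} = 0,
    with inflow value u_in at the size-zero boundary.  Upwinding makes the
    implicit system lower bidiagonal, so it is solved by a forward sweep."""
    n = len(u)
    unew = np.empty(n)
    r = dt / dx
    prev, gprev = u_in, g[0]   # boundary value; g at the inflow taken as g[0]
    for j in range(n):
        unew[j] = (u[j] + r * gprev * prev) / (1.0 + r * g[j] + dt * m[j])
        prev, gprev = unew[j], g[j]
    return unew
```

The forward sweep is what keeps the implicit scheme cheap: no linear solver is needed, just one pass from small sizes to large.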
One ingredient would be a second-order quadrature, which requires u(0,t), and this makes the implementation much harder. But this can also be done. \; [long, long discussion on quadrature] One can also do e.g. 5th-order WENO to do this stuff. \; [comments about DG conference] Consider the equation <\equation*> \varphi_t + a\,\varphi_x = b\,\varphi, with a > 0. We define the discontinuous finite element space <\equation*> V_h = \{v : v|_{I_j} \in P^k(I_j)\}. The scheme is: find \varphi_h \in V_h such that <\equation*> B(\varphi_h,v) = \int_{I_j}(\varphi_h)_t\,v\,\mathrm{d}x - a\int_{I_j}\varphi_h\,v_x\,\mathrm{d}x + a\,(\varphi_h)^-_{j+1/2}\,v^-_{j+1/2} - a\,(\varphi_h)^-_{j-1/2}\,v^+_{j-1/2} - \int_{I_j} b\,\varphi_h\,v\,\mathrm{d}x = 0 for all v \in V_h. \; <\proposition> (Cell entropy inequality for DG) <\equation*> \frac12\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{I_j}\varphi_h^2\,\mathrm{d}x + \hat F_{j+1/2} - \hat F_{j-1/2} \le b\int_{I_j}\varphi_h^2\,\mathrm{d}x, which (summing over j, the fluxes telescope) implies <\equation*> \frac{\mathrm{d}}{\mathrm{d}t}\int \varphi_h^2\,\mathrm{d}x \le 2\,b\int \varphi_h^2\,\mathrm{d}x. <\proof> Take v = \varphi_h: <\equation*> 0 = B(\varphi_h,\varphi_h) = \frac12\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{I_j}\varphi_h^2\,\mathrm{d}x + \hat F_{j+1/2} - \hat F_{j-1/2} + \Theta_{j-1/2} - b\int_{I_j}\varphi_h^2\,\mathrm{d}x, \; where we have defined <\equation*> \hat F = -\frac{a}{2}\,\big((\varphi_h)^-\big)^2 + a\,\big((\varphi_h)^-\big)^2 = \frac{a}{2}\,\big((\varphi_h)^-\big)^2, and <\equation*> \Theta_{j-1/2} = \frac{a}{2}\big((\varphi_h)^+\big)^2 + \frac{a}{2}\big((\varphi_h)^-\big)^2 - a\,(\varphi_h)^-(\varphi_h)^+ = \frac{a}{2}\big((\varphi_h)^- - (\varphi_h)^+\big)^2 \ge 0, with all traces taken at x_{j-1/2}. \; <\proposition> (Error estimate) Let \varphi be the exact solution to the PDE. Define e = \varphi - \varphi_h. Then we have <\equation*> \int e^2(x,T)\,\mathrm{d}x \le O(h^{2k+2}). <\proof> We have B(\varphi,v) = 0 for all v \in V_h. Thus, we have <\equation*> B(e,v) = 0 for all v \in V_h. We can rewrite e as <\equation*> e = \varphi - \varphi_h = (\varphi - P\varphi) + (P\varphi - \varphi_h) = \xi + e_h, where e_h \in V_h. Then B(e,e_h) = 0 gives us <\equation> B(e_h,e_h) = -B(\xi,e_h). The first proposition gives us <\equation*> B(e_h,e_h) = \frac12\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{I_j} e_h^2\,\mathrm{d}x + \hat F_{j+1/2} - \hat F_{j-1/2} + \Theta_{j-1/2} - b\int_{I_j} e_h^2\,\mathrm{d}x. And also <\equation*> B(\xi,e_h) = \int_{I_j}\xi_t\,e_h\,\mathrm{d}x - a\int_{I_j}\xi\,(e_h)_x\,\mathrm{d}x + a\,\xi^-_{j+1/2}\,(e_h)^-_{j+1/2} - a\,\xi^-_{j-1/2}\,(e_h)^+_{j-1/2} - \int_{I_j} b\,\xi\,e_h\,\mathrm{d}x. Now we define the projection P as <\equation*> \int_{I_j}(w - Pw)\,v\,\mathrm{d}x = 0 \qquad \text{for all } v \in P^{k-1}(I_j), and <\equation*> (w - Pw)^-_{j+1/2} = 0 \qquad \text{for all } j. One can check that these two conditions give a unique projection. With this choice (note (e_h)_x \in P^{k-1}, and the interface terms involve only \xi^-), B(\xi,e_h) reduces to <\equation*> B(\xi,e_h) = \int_{I_j}\xi_t\,e_h\,\mathrm{d}x - \int_{I_j} b\,\xi\,e_h\,\mathrm{d}x. We also get <\equation*> \int_{I_j}\xi_t^2\,\mathrm{d}x \le O(h^{2k+2}). We preserve this order property because the projection leaves polynomials of degree k untouched. 
After summing () over j, and bounding the right-hand side B(\xi,e_h) with Young's inequality, we get <\equation*> \frac{\mathrm{d}}{\mathrm{d}t}\int e_h^2\,\mathrm{d}x \le C\int e_h^2\,\mathrm{d}x + O(h^{2k+2}). Using Gronwall, we obtain <\equation*> \int e_h^2(x,T)\,\mathrm{d}x \le O(h^{2k+2}), and from here, since \int \xi^2\,\mathrm{d}x \le O(h^{2k+2}) as well, we get <\equation*> \int e^2(x,T)\,\mathrm{d}x \le O(h^{2k+2}), i.e. the L^2 error of the DG scheme is of the optimal order O(h^{k+1}). 
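As a concrete coda to the DG analysis above, the semi-discrete P^1 scheme for \varphi_t + a\varphi_x = b\varphi with the upwind flux can be written in a few lines (periodic boundaries assumed for simplicity; each cell carries modal mean/slope coefficients). It also lets one check the cell entropy inequality numerically: for b = 0 the discrete L^2 energy is strictly dissipated whenever the traces jump.

```python
import numpy as np

def dg_p1_rhs(c0, c1, a, b, h):
    """Semi-discrete RHS of the P1 DG scheme for phi_t + a phi_x = b phi
    (a > 0, periodic) with the upwind flux phi_hat = phi^-.  On cell j,
    phi_h(x) = c0_j + c1_j * 2(x - x_j)/h."""
    trace = c0 + c1              # phi^- at the right interface of each cell
    fin = np.roll(trace, 1)      # upwind value entering from the left
    # weak form tested against 1 and 2(x - x_j)/h, mass matrix inverted
    d0 = (-a * trace + a * fin) / h + b * c0
    d1 = 3.0 * (2.0 * a * c0 - a * trace - a * fin) / h + b * c1
    return d0, d1
```

Taking the mass-weighted inner product of (c0', c1') with (c0, c1) (weights h and h/3) reproduces d/dt \int \varphi_h^2\,dx = -\sum_j a\,(\varphi^+ - \varphi^-)^2_{j-1/2} for b = 0, which is the summed cell entropy inequality.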