My first experience with the numerical solution of partial differential equations (PDEs) was with finite difference methods, which I found somewhat fiddly: it is quite an exercise in patience to, for example, work out the appropriate fifth-order finite difference approximation to a second-order differential operator on an irregularly spaced grid, and even more of a pain to prove that the scheme is convergent. I liked the finite element method a lot better, as there was a unifying functional analytic theory, Galerkin approximation, which showed how the finite element method computes, in a sense, the best possible approximate solution to the PDE among a family of potential solutions. Later, I came to feel that Galerkin approximation was the more fundamental concept, with the finite element method being one particular instantiation (spectral methods, boundary element methods, and the conjugate gradient method being others). In this post, I hope to give a general introduction to Galerkin approximation as computing the best possible approximate solution to a problem within a certain finite-dimensional space of possibilities.
Systems of Linear Equations
Let us begin with a linear algebraic example, which is unburdened by some of the technicalities of partial differential equations. Suppose we want to solve a very large system of linear equations $Ax = b$, where the matrix $A$ is symmetric and positive definite (SPD). Suppose that $A$ is $N \times N$, where $N$ is so large that we don't even want to store all $N$ components of the solution $x$ on our computer. What can we possibly do?
One solution is to consider only candidate solutions lying in a subspace $\mathcal{S}$ of the set $\mathbb{R}^N$ of all possible solutions. If this subspace has a basis $\phi_1, \phi_2, \ldots, \phi_M \in \mathbb{R}^N$, then a vector $\hat{x} \in \mathcal{S}$ can be represented as $\hat{x} = x_1 \phi_1 + x_2 \phi_2 + \cdots + x_M \phi_M$ and one only has to store the $M$ numbers $x_1, x_2, \ldots, x_M$. In general, the true solution $x$ will not belong to the subspace $\mathcal{S}$, and we must settle for an approximate solution $\hat{x} \in \mathcal{S}$.
The next step is to convert the system of linear equations $Ax = b$ into a form which is more amenable to approximate solution on a subspace $\mathcal{S}$. Note that the equation $Ax = b$ encodes $N$ different linear equations $a_i^\top x = b_i$, where $a_i^\top$ is the $i$th row of $A$ and $b_i$ is the $i$th element of $b$. The $i$th equation is equivalent to the condition $e_i^\top A x = e_i^\top b$, where $e_i$ is the vector with zeros in all entries except for the $i$th entry, which is a one. More generally, by multiplying the equation $Ax = b$ on the left by an arbitrary test row vector $y^\top$, we get $y^\top A x = y^\top b$ for all $y \in \mathbb{R}^N$. We refer to this as a variational formulation of the linear system of equations $Ax = b$. In fact, one can easily show that the following variational problem is equivalent to the system of linear equations $Ax = b$:
$$y^\top A x = y^\top b \quad \text{for every } y \in \mathbb{R}^N. \tag{1}$$
Since we are seeking an approximate solution from the subspace $\mathcal{S}$, it is only natural that we also restrict our test vectors $y$ to lie in the subspace $\mathcal{S}$. Thus, we seek an approximate solution $\hat{x} \in \mathcal{S}$ to the system of equations $Ax = b$ as the solution of the variational problem
$$y^\top A \hat{x} = y^\top b \quad \text{for every } y \in \mathcal{S}. \tag{2}$$
One can relatively easily show that this problem possesses a unique solution $\hat{x} \in \mathcal{S}$. In what sense is $\hat{x}$ a good approximate solution for $x$? To answer this question, we need to introduce a special way of measuring the error of an approximate solution to $Ax = b$. We define the $A$-inner product of vectors $y$ and $z$ to be $\langle y, z \rangle_A = y^\top A z$ and the associated $A$-norm $\|y\|_A = \sqrt{\langle y, y \rangle_A} = \sqrt{y^\top A y}$. All of the properties satisfied by the familiar Euclidean inner product and norm carry over to the new $A$-inner product and norm (e.g., the Pythagorean theorem). Indeed, for those familiar, one can show that $\langle \cdot, \cdot \rangle_A$ satisfies all the axioms of an inner product because $A$ is SPD.
We shall now show that the error $x - \hat{x}$ between $x$ and its Galerkin approximation $\hat{x}$ is $A$-orthogonal to the space $\mathcal{S}$, in the sense that $\langle y, x - \hat{x} \rangle_A = 0$ for all $y \in \mathcal{S}$. This follows from the straightforward calculation, for $y \in \mathcal{S}$,
$$\langle y, x - \hat{x} \rangle_A = y^\top A (x - \hat{x}) = y^\top A x - y^\top A \hat{x} = y^\top b - y^\top b = 0, \tag{3}$$
where $y^\top A x = y^\top b$ since $x$ solves the variational problem Eq. (1) and $y^\top A \hat{x} = y^\top b$ since $\hat{x}$ solves the variational problem Eq. (2).
The fact that the error $x - \hat{x}$ is $A$-orthogonal to $\mathcal{S}$ can be used to show that $\hat{x}$ is, in a sense, the best approximate solution to $Ax = b$ in the subspace $\mathcal{S}$. First note that, for any approximate solution $z \in \mathcal{S}$ to $Ax = b$, the vector $\hat{x} - z \in \mathcal{S}$ is $A$-orthogonal to the error $x - \hat{x}$. Thus, by the Pythagorean theorem,
$$\|x - z\|_A^2 = \|(x - \hat{x}) + (\hat{x} - z)\|_A^2 = \|x - \hat{x}\|_A^2 + \|\hat{x} - z\|_A^2 \ge \|x - \hat{x}\|_A^2. \tag{4}$$
Thus, the Galerkin approximation $\hat{x}$ is the best approximate solution to $Ax = b$ in the subspace $\mathcal{S}$ with respect to the $A$-norm: $\|x - \hat{x}\|_A \le \|x - z\|_A$ for every $z \in \mathcal{S}$. Consequently, if one picks a subspace $\mathcal{S}$ which the solution $x$ almost lies in, then $\hat{x}$ will be a good approximate solution to $x$, irrespective of the size of the subspace $\mathcal{S}$.
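To make this concrete, here is a minimal NumPy sketch of Galerkin approximation for an SPD linear system (the matrix, right-hand side, and subspace basis below are arbitrary choices, purely for illustration). Plugging $\hat{x} = \Phi c$ and test vectors $y = \Phi d$ into Eq. (2) reduces it to the $M \times M$ system $(\Phi^\top A \Phi)\, c = \Phi^\top b$, where the columns of $\Phi$ are the basis vectors $\phi_1, \ldots, \phi_M$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 20

# An arbitrary SPD matrix and right-hand side, purely for illustration.
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)
b = rng.standard_normal(N)

# Basis phi_1, ..., phi_M of the subspace S, stored as columns of Phi.
Phi = rng.standard_normal((N, M))

# Galerkin approximation: solve the M x M system (Phi^T A Phi) c = Phi^T b.
c = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ b)
x_hat = Phi @ c

# Check the A-orthogonality of the error: Phi^T A (x - x_hat) should vanish.
x = np.linalg.solve(A, b)  # the exact solution, computed here only to verify
print(np.max(np.abs(Phi.T @ A @ (x - x_hat))))  # essentially zero, up to roundoff
```

Note that only the small $M \times M$ Galerkin system is ever solved; the full solve for $x$ above exists only to verify the orthogonality property Eq. (3).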
Variational Formulations of Differential Equations
As I hope I've conveyed in the previous section, Galerkin approximation is not a technique that only works for finite element methods or even just PDEs. However, differential and integral equations are among the most important applications of Galerkin approximation, since the space of all possible solutions to a differential or integral equation is infinite-dimensional: approximation in a finite-dimensional space is absolutely critical. In this section, I want to give a brief introduction to how one can develop variational formulations of differential equations amenable to Galerkin approximation. For simplicity of presentation, I shall focus on a one-dimensional problem described by an ordinary differential equation (ODE) boundary value problem. All of this generalizes wholesale to partial differential equations in multiple dimensions, though there are some additional technical and notational difficulties (some of which I will address in footnotes). Variational formulation of differential equations is a topic with important technical subtleties which I will end up brushing past. Rigorous references are Chapters 5 and 6 of Evans' Partial Differential Equations or Chapters 0-2 of Brenner and Scott's The Mathematical Theory of Finite Element Methods.
As our model problem for which we seek a variational formulation, we will focus on the one-dimensional Poisson equation, which appears in the study of electrostatics, gravitation, diffusion, heat flow, and fluid mechanics. The unknown is a real-valued function $u$ on an interval $\Omega$, which we take to be $(0,1)$. We assume Dirichlet boundary conditions: $u$ is equal to zero on the boundary $\partial\Omega = \{0, 1\}$. Poisson's equation then reads
$$-u''(x) = f(x) \quad \text{for every } x \in (0,1), \qquad u(0) = u(1) = 0. \tag{5}$$
We wish to develop a variational formulation of this differential equation, similar to the variational formulation we developed for the linear system of equations in the previous section. To develop our variational formulation, we take inspiration from physics. If $u(x)$ represents, say, the temperature at a point $x$, we are never able to measure $u(x)$ exactly. Rather, we can measure the temperature near $x$ with a thermometer. No matter how carefully we engineer our thermometer, its tip will have some volume, occupying a region $R$ in space. The temperature measured by our thermometer will be the average temperature in the region $R$ or, more generally, a weighted average $\int_0^1 u(x)\,\phi(x)\,dx$, where $\phi$ is a weighting function which is zero outside the region $R$. Now let's use our thermometer to "measure" our differential equation:
$$-\int_0^1 u''(x)\,\phi(x)\,dx = \int_0^1 f(x)\,\phi(x)\,dx. \tag{6}$$
This integral expression is some kind of variational formulation of our differential equation, as it is an equation involving the solution $u$ of our differential equation which must hold for every averaging function $\phi$. (The precise meaning of "every" will be forthcoming.) It will benefit us greatly to make this expression more "symmetric" with respect to $u$ and $\phi$. To do this, we shall integrate by parts:
$$-\int_0^1 u''(x)\,\phi(x)\,dx = \int_0^1 u'(x)\,\phi'(x)\,dx - u'(1)\phi(1) + u'(0)\phi(0). \tag{7}$$
In particular, if $\phi$ is zero on the boundary ($\phi(0) = \phi(1) = 0$), then the two boundary terms vanish and we're left with the variational equation
$$\int_0^1 u'(x)\,\phi'(x)\,dx = \int_0^1 f(x)\,\phi(x)\,dx. \tag{8}$$
Compare the variational formulation of the Poisson equation Eq. (8) to the variational formulation Eq. (1) of the system of linear equations. The solution vector $x$ is replaced in the differential equation context by a function $u$ satisfying the boundary condition of being zero on the boundary $\{0,1\}$. The right-hand side $b$ is replaced by a function $f$ on the interval $(0,1)$. The test vector $y$ is replaced by a test function $\phi$ on the interval $(0,1)$. The matrix product expression $y^\top A x$ is replaced by the integral $\int_0^1 u'(x)\,\phi'(x)\,dx$. The product $y^\top b$ is replaced by the integral $\int_0^1 f(x)\,\phi(x)\,dx$. As we shall soon see, there is a unifying theory which treats both of these contexts simultaneously.
Before this unifying theory, we must address the question of which functions $\phi$ we will consider in our variational formulation. One can show that all of the calculations we did in this section hold if $\phi$ is a continuously differentiable function on $[0,1]$ which is zero at the endpoints $0$ and $1$ and $u$ is a twice continuously differentiable function on $[0,1]$. Because of technical functional analytic considerations, we shall actually want to expand the class of functions in our variational formulation to even more functions $\phi$. Specifically, we shall consider all functions $\phi$ which (A) are square-integrable ($\int_0^1 |\phi(x)|^2\,dx$ is finite), (B) possess a square-integrable (weak) derivative ($\int_0^1 |\phi'(x)|^2\,dx$ is finite), and (C) are zero on the boundary ($\phi(0) = \phi(1) = 0$). We refer to this class of functions as the Sobolev space $H_0^1((0,1))$.
Now this is where things get really strange. Note that it is possible for a function $u$ to satisfy the variational formulation Eq. (8) but for $u$ not to satisfy the Poisson equation Eq. (5). A simple example is when $f$ possesses a discontinuity (say, a step discontinuity where $f$ takes one value on the left part of the interval and jumps to another on the right). Then no twice continuously differentiable $u$ will satisfy Eq. (5) at every point in $(0,1)$, and yet a solution to the variational problem Eq. (8) exists! The variational formulation thus allows us to give a reasonable definition of "solving the differential equation" even when a classical solution to Eq. (5) does not exist. Our only requirement for the variational problem is that $u$, itself, belongs to the space $H_0^1((0,1))$. A solution to the variational problem Eq. (8) is called a weak solution of the differential equation Eq. (5) because, as we have argued, a weak solution of Eq. (8) need not solve Eq. (5) in the classical sense.
The Lax-Milgram Theorem
Let us now build up an abstract language which allows us to use Galerkin approximation both for linear systems of equations and PDEs (as well as other contexts). If one compares the expressions $y^\top A x$ from the linear systems context and $\int_0^1 u'(x)\,\phi'(x)\,dx$ from the differential equation context, one recognizes that both of these expressions are so-called bilinear forms: they depend on two arguments ($x$ and $y$, or $u$ and $\phi$) and are a linear transformation in each argument independently if the other one is fixed. For example, if one defines $a(u, \phi) = \int_0^1 u'(x)\,\phi'(x)\,dx$, one has $a(\alpha u + \beta w, \phi) = \alpha\, a(u, \phi) + \beta\, a(w, \phi)$. Similarly, if one defines $\ell(\phi) = \int_0^1 f(x)\,\phi(x)\,dx$, then $\ell(\alpha \phi + \beta \psi) = \alpha\, \ell(\phi) + \beta\, \ell(\psi)$.
Implicitly swimming in the background is some space of vectors or functions on which this bilinear form is defined. In the linear system of equations context, this space is the space $\mathbb{R}^N$ of $N$-dimensional vectors, and in the differential equation context, this space is $H_0^1((0,1))$ as defined in the previous section. Call this space $\mathcal{V}$. We shall assume that $\mathcal{V}$ is a special type of linear space called a Hilbert space, an inner product space (with inner product $\langle \cdot, \cdot \rangle$) in which every Cauchy sequence converges to an element of $\mathcal{V}$ (in the norm induced by the inner product). The Cauchy sequence convergence property, also known as metric completeness, is important because we shall often deal with a sequence of elements $v_1, v_2, \ldots \in \mathcal{V}$ which we will need to show converges to an element $v \in \mathcal{V}$. (Think of $v_1, v_2, \ldots$ as a sequence of Galerkin approximations to a solution $v$.)
With these formalities, an abstract variational problem takes the form
$$\text{Find } u \in \mathcal{V} \text{ such that } a(u, v) = \ell(v) \text{ for every } v \in \mathcal{V}, \tag{9}$$
where $a(\cdot, \cdot)$ is a bilinear form on $\mathcal{V}$ and $\ell$ is a linear form on $\mathcal{V}$ (a linear map $\ell : \mathcal{V} \to \mathbb{R}$). There is a beautiful and general theorem called the Lax-Milgram theorem which establishes existence and uniqueness of solutions to problems like Eq. (9).
Theorem (Lax-Milgram): Let $a$ and $\ell$ satisfy the following properties:
- (Boundedness of $a$) There exists a constant $C \ge 0$ such that, for every $u, v \in \mathcal{V}$, $|a(u, v)| \le C\, \|u\|\, \|v\|$.
- (Coercivity) There exists a positive constant $c > 0$ such that $a(u, u) \ge c\, \|u\|^2$ for every $u \in \mathcal{V}$.
- (Boundedness of $\ell$) There exists a constant $L \ge 0$ such that $|\ell(v)| \le L\, \|v\|$ for every $v \in \mathcal{V}$.
Then the variational problem Eq. (9) possesses a unique solution.
For our cases, $a$ will also be symmetric: $a(u, v) = a(v, u)$ for all $u, v \in \mathcal{V}$. While the Lax-Milgram theorem holds without symmetry, let us continue our discussion with this additional symmetry assumption. Note that, taken together, properties (1-2) say that the $a$-inner product, defined as $\langle u, v \rangle_a = a(u, v)$, is no more than so much bigger or smaller than the standard inner product: $c\, \|u\|^2 \le \|u\|_a^2 \le C\, \|u\|^2$ for every $u \in \mathcal{V}$.
Let us now see how the Lax-Milgram theorem applies to our two examples. A reader who wants a more "big picture" perspective can comfortably skip ahead to the section on general Galerkin approximation; for those who want to see Lax-Milgram in action, read on.
Applying the Lax-Milgram Theorem
Begin with the linear system of equations. Here, $\mathcal{V} = \mathbb{R}^N$ with inner product $\langle x, y \rangle = y^\top x$, $a(x, y) = y^\top A x$, and $\ell(y) = y^\top b$. Note that we have the inequality $\lambda_{\min}(A)\, \|x\|^2 \le x^\top A x \le \lambda_{\max}(A)\, \|x\|^2$, where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ are the smallest and largest eigenvalues of $A$ (both positive since $A$ is SPD). In particular, we have that $\|x\|_A^2 = x^\top A x \le \lambda_{\max}(A)\, \|x\|^2$. Property (1) then follows from the Cauchy-Schwarz inequality applied to the $A$-inner product: $|a(x, y)| = |\langle x, y \rangle_A| \le \|x\|_A\, \|y\|_A \le \lambda_{\max}(A)\, \|x\|\, \|y\|$. Property (2) is simply the established inequality $a(x, x) = x^\top A x \ge \lambda_{\min}(A)\, \|x\|^2$. Property (3) also follows from the Cauchy-Schwarz inequality: $|\ell(y)| = |y^\top b| \le \|b\|\, \|y\|$. Thus, by Lax-Milgram, the variational problem $y^\top A x = y^\top b$ for every $y \in \mathbb{R}^N$ has a unique solution $x$. Note that the linear systems example shows why the coercivity property (2) is necessary. If $A$ is positive semi-definite but not positive definite, then there exists a nonzero eigenvector $v$ of $A$ with eigenvalue $0$. Then $a(v, v) = v^\top A v = 0 < c\, \|v\|^2$ for any positive constant $c$, and $A$ is singular, so the variational formulation of $Ax = b$ has no solution for some choices of the vector $b$.
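For the linear-system case, these constants are easy to check numerically. Below is a small sketch (with an arbitrary SPD matrix, chosen only for illustration) confirming that boundedness holds with $C = \lambda_{\max}(A)$ and coercivity with $c = \lambda_{\min}(A)$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
G = rng.standard_normal((N, N))
A = G @ G.T + np.eye(N)              # an arbitrary SPD matrix for illustration
b = rng.standard_normal(N)

lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
c, C = lam[0], lam[-1]               # coercivity and boundedness constants

x, y = rng.standard_normal(N), rng.standard_normal(N)
nx, ny = np.linalg.norm(x), np.linalg.norm(y)

print(abs(y @ A @ x) <= C * nx * ny)         # boundedness of a(x, y) = y^T A x
print(x @ A @ x >= c * nx**2)                # coercivity of a
print(abs(y @ b) <= np.linalg.norm(b) * ny)  # boundedness of l(y) = y^T b
```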
Applying the Lax-Milgram theorem to differential equations can require powerful inequalities. In this case, the inner product on $\mathcal{V} = H_0^1((0,1))$ is given by $\langle u, v \rangle_{H_0^1} = \int_0^1 \left( u(x)\,v(x) + u'(x)\,v'(x) \right) dx$, with $a(u, v) = \int_0^1 u'(x)\,v'(x)\,dx$ and $\ell(v) = \int_0^1 f(x)\,v(x)\,dx$. Condition (1) follows from an application of the Cauchy-Schwarz inequality for integrals:
$$|a(u, v)| = \left| \int_0^1 u'(x)\,v'(x)\,dx \right| \le \int_0^1 |u'(x)|\,|v'(x)|\,dx \le \left( \int_0^1 |u'(x)|^2\,dx \right)^{1/2} \left( \int_0^1 |v'(x)|^2\,dx \right)^{1/2} \le \|u\|_{H_0^1}\, \|v\|_{H_0^1}. \tag{10}$$
Let's go line-by-line. First, we note that the absolute value of an integral is at most the integral of the absolute value. Next, we apply the Cauchy-Schwarz inequality for integrals. Finally, we note that $\int_0^1 |u'(x)|^2\,dx \le \int_0^1 \left( |u(x)|^2 + |u'(x)|^2 \right) dx = \|u\|_{H_0^1}^2$. This establishes Property (1) with constant $C = 1$. As we already see one third of the way into verifying the hypotheses of Lax-Milgram, establishing these inequalities can require several steps. Ultimately, however, strong knowledge of just a few core inequalities (e.g., Cauchy-Schwarz) may be all that's needed.
Proving coercivity (Property (2)) actually requires a very special inequality, Poincaré's inequality. In its simplest incarnation, the inequality states that there exists a constant $C_P > 0$ such that, for all functions $u \in H_0^1((0,1))$,
$$\int_0^1 |u(x)|^2\,dx \le C_P \int_0^1 |u'(x)|^2\,dx. \tag{11}$$
With this inequality in tow, Property (2) follows after another lengthy string of inequalities:
$$a(u, u) = \int_0^1 |u'(x)|^2\,dx = \frac{1}{2}\int_0^1 |u'(x)|^2\,dx + \frac{1}{2}\int_0^1 |u'(x)|^2\,dx \ge \frac{1}{2C_P}\int_0^1 |u(x)|^2\,dx + \frac{1}{2}\int_0^1 |u'(x)|^2\,dx \ge \min\left( \frac{1}{2C_P}, \frac{1}{2} \right) \|u\|_{H_0^1}^2. \tag{12}$$
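As a quick numerical sanity check of Poincaré's inequality Eq. (11), here is a sketch that tests it on a few functions vanishing at $0$ and $1$. On the interval $(0,1)$ the sharp constant is $C_P = 1/\pi^2$, attained by $u(x) = \sin(\pi x)$ (a standard fact, used here as given).

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule, to keep the sketch self-contained."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 100001)
C_P = 1.0 / np.pi**2                 # the sharp Poincare constant on (0,1)

# A few test functions with u(0) = u(1) = 0.
tests = {
    "sin(pi x)": np.sin(np.pi * x),  # the extremal function: near equality
    "x(1-x)": x * (1.0 - x),         # a parabola vanishing at 0 and 1
    "x sin(3 pi x)": x * np.sin(3 * np.pi * x),
}

for name, u in tests.items():
    du = np.gradient(u, x)           # finite-difference derivative
    lhs = trapz(u**2, x)
    rhs = C_P * trapz(du**2, x)
    print(f"{name}: {lhs:.6f} <= {rhs:.6f} -> {lhs <= rhs + 1e-6}")
```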
For Property (3) to hold, the function $f$ must be square-integrable. With this hypothesis, Property (3) is much easier than Properties (1-2) and we leave it as an exercise for the interested reader (or to a footnote for the uninterested reader).
This may seem like a lot of work, but the result we have achieved is stunning. We have proven (modulo a lot of omitted details) that the Poisson equation has a unique weak solution as long as $f$ is square-integrable! What is remarkable about this proof is that it uses the Lax-Milgram theorem and some inequalities alone: no specialized knowledge about the physics underlying the Poisson equation was necessary. Going through the details of Lax-Milgram has been a somewhat lengthy affair for an introductory post, but hopefully this discussion has illuminated the power of functional analytic tools (like Lax-Milgram) in studying differential equations. Now, with a healthy dose of theory in hand, let us return to Galerkin approximation.
General Galerkin Approximation
With our general theory set up, Galerkin approximation for the general variational problem is the same as it was for a system of linear equations. First, we pick an approximation space $\mathcal{S}$ which is a subspace of $\mathcal{V}$. We then have the Galerkin variational problem
$$\text{Find } \hat{u} \in \mathcal{S} \text{ such that } a(\hat{u}, v) = \ell(v) \text{ for every } v \in \mathcal{S}. \tag{13}$$
Provided $a$ and $\ell$ satisfy the conditions of the Lax-Milgram theorem, there is a unique solution $\hat{u}$ to the problem Eq. (13). Moreover, the special property of Galerkin approximation holds: the error $u - \hat{u}$ is $a$-orthogonal to the subspace $\mathcal{S}$. Consequently, $\hat{u}$ is the best approximate solution to the variational problem Eq. (9) in the $a$-norm $\|w\|_a = \sqrt{a(w, w)}$. To see the $a$-orthogonality, we have that, for any $v \in \mathcal{S}$,
$$\langle u - \hat{u}, v \rangle_a = a(u - \hat{u}, v) = a(u, v) - a(\hat{u}, v) = \ell(v) - \ell(v) = 0, \tag{14}$$
where we use the variational equation Eq. (9) for $u$ and Eq. (13) for $\hat{u}$. Note the similarities with Eq. (3). Thus, using the Pythagorean theorem for the $a$-norm, for any other approximate solution $w \in \mathcal{S}$, we have
$$\|u - w\|_a^2 = \|u - \hat{u}\|_a^2 + \|\hat{u} - w\|_a^2 \ge \|u - \hat{u}\|_a^2. \tag{15}$$
Put simply, $\hat{u}$ is the best approximation to $u$ in the $a$-norm.
Galerkin approximation is powerful because it allows us to approximate an infinite-dimensional problem by a finite-dimensional one. If we let $\phi_1, \phi_2, \ldots, \phi_M$ be a basis for the space $\mathcal{S}$, then the approximate solution $\hat{u}$ can be represented as $\hat{u} = x_1 \phi_1 + x_2 \phi_2 + \cdots + x_M \phi_M$. Since $\phi_1, \ldots, \phi_M$ form a basis of $\mathcal{S}$, to check that the Galerkin variational problem Eq. (13) holds for all $v \in \mathcal{S}$ it is sufficient to check that it holds for $v = \phi_1, \ldots, \phi_M$. Thus, plugging $\hat{u} = \sum_{j=1}^M x_j \phi_j$ and $v = \phi_i$ into Eq. (13), we get (using bilinearity of $a$)
$$\sum_{j=1}^M a(\phi_j, \phi_i)\, x_j = \ell(\phi_i) \quad \text{for } i = 1, 2, \ldots, M. \tag{16}$$
If we define $A_{ij} = a(\phi_j, \phi_i)$ and $b_i = \ell(\phi_i)$, then this gives us a matrix equation $Ax = b$ for the unknowns $x_1, \ldots, x_M$ parametrizing $\hat{u}$. Thus, we can compute our Galerkin approximation by solving a linear system of equations.
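For the one-dimensional Poisson problem Eq. (8), a classic choice is to take $\mathcal{S}$ spanned by piecewise-linear "hat" functions on a grid, which is the simplest finite element method. The sketch below assembles and solves the resulting system; for hat functions on a uniform grid with spacing $h$, the entries $a(\phi_j, \phi_i)$ work out to the tridiagonal stencil with $2/h$ on the diagonal and $-1/h$ off it (a standard computation, stated here without derivation).

```python
import numpy as np

M = 99                              # number of interior nodes / basis functions
h = 1.0 / (M + 1)                   # uniform grid spacing
nodes = np.linspace(h, 1.0 - h, M)  # interior grid points

# Stiffness matrix A_ij = a(phi_j, phi_i) = integral of phi_j' phi_i'.
A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h

# A step-discontinuity right-hand side, as discussed earlier (the values
# 0 and 1 are chosen arbitrarily for illustration).
f = lambda x: np.where(x < 0.5, 0.0, 1.0)

# Load vector b_i = integral of f phi_i, approximated by f(node_i) * h
# (each hat function has unit height and integral h; crude but adequate here).
b = f(nodes) * h

# For hat functions, the Galerkin coefficients are the nodal values of u_hat.
u_hat = np.linalg.solve(A, b)
print(u_hat.max())                  # peak of the computed weak solution
```

Note that even though $f$ is discontinuous, the Galerkin system is perfectly well-posed, illustrating the weak-solution discussion above.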
We've covered a lot of ground, so let's summarize. Galerkin approximation is a technique which allows us to approximately solve a large- or infinite-dimensional problem by searching for an approximate solution in a smaller, finite-dimensional space $\mathcal{S}$ of our choosing. This Galerkin approximation is the best approximate solution to our original problem in the $a$-norm. By choosing a basis for our approximation space $\mathcal{S}$, we reduce the problem of computing a Galerkin approximation to solving a linear system of equations.
Design of a Galerkin approximation scheme for a variational problem thus boils down to choosing the approximation space $\mathcal{S}$ and a basis $\phi_1, \phi_2, \ldots, \phi_M$. Picking $\mathcal{S}$ to be a space of piecewise polynomial functions (splines) gives the finite element method. Picking $\mathcal{S}$ to be a space spanned by a collection of trigonometric functions gives a Fourier spectral method. One can use a space spanned by wavelets as well. The Galerkin framework is extremely general: give it a subspace $\mathcal{S}$ and it will give you a linear system of equations whose solution is the best approximate solution in $\mathcal{S}$.
Two design considerations factor into the choice of space $\mathcal{S}$ and basis $\phi_1, \ldots, \phi_M$. First, one wants to pick a space $\mathcal{S}$ which the solution $u$ almost lies in. This is the rationale behind spectral methods: smooth functions are very well-approximated by short truncated Fourier expansions, so, if the solution $u$ is smooth, spectral methods will converge very quickly. Finite element methods, which often use low-order piecewise polynomial functions, converge much more slowly for a smooth $u$. The second design consideration is the ease of solving the linear system resulting from the Galerkin approximation. If the basis functions are local, in the sense that most pairs of basis functions $\phi_i$ and $\phi_j$ aren't nonzero at the same point (more formally, $\phi_i$ and $\phi_j$ have disjoint supports for most $i$ and $j$), the matrix $A$ will be sparse and the system thus usually much easier to solve. Traditional spectral methods usually result in harder-to-solve dense linear systems of equations. It should be noted that both spectral and finite element methods lead to ill-conditioned matrices $A$, making integral equation-based approaches preferable if one needs high accuracy. Integral equations, themselves, are often solved using Galerkin approximation, leading to so-called boundary element methods.
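To see the spectral side of this trade-off, here is a sketch of a Fourier (sine) spectral Galerkin method for the same Poisson problem. For the basis $\phi_k(x) = \sin(k\pi x)$, the Galerkin matrix $a(\phi_j, \phi_k) = \int_0^1 \phi_j'(x)\,\phi_k'(x)\,dx = (k^2\pi^2/2)\,\delta_{jk}$ is diagonal (an easy computation using orthogonality of cosines), so the system is trivially solvable. The right-hand side below is chosen so the exact solution $u(x) = \sin(\pi x)$ is known.

```python
import numpy as np

K = 20
x = np.linspace(0.0, 1.0, 20001)
w = np.full(x.size, x[1] - x[0])    # trapezoidal quadrature weights
w[0] = w[-1] = (x[1] - x[0]) / 2

f = np.pi**2 * np.sin(np.pi * x)    # chosen so that u(x) = sin(pi x) exactly

k = np.arange(1, K + 1)
phi = np.sin(np.pi * np.outer(k, x))   # sine basis functions, one per row

ell = (phi * f) @ w                    # load vector l(phi_k) = integral f phi_k
c = ell / (k**2 * np.pi**2 / 2)        # solve the *diagonal* Galerkin system

u_hat = c @ phi
print(np.max(np.abs(u_hat - np.sin(np.pi * x))))  # tiny: u is one basis mode
```

Notice that no linear solve was needed at all here; contrast this with the sparse tridiagonal solve in the finite element sketch above, and with the dense systems that arise for spectral methods on more complicated problems.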
Upshot: Galerkin approximation is a powerful and extremely flexible methodology for approximately solving large- or infinite-dimensional problems by finding the best approximate solution in a smaller, finite-dimensional subspace. To use Galerkin approximation, one must convert one's problem to a variational formulation and pick a basis for the approximation space. After doing this, computing the Galerkin approximation reduces to solving a system of linear equations whose dimension equals the dimension of the approximation space.