Intro to Computational Electrostatics: Finite Difference
Table of Contents
1 Finite Difference Method for Electrostatics
The below derivation and examples are originally from Electromagnetic Fields and Waves, an excellent book by Magdy F. Iskander. I have reworked and summarized them here.
Oftentimes you want to answer questions about an electrical conductor at rest (i.e., not under some dynamic perturbation). In other words, you want to characterize it.
Classically speaking, this involves answering a few simple questions: we want to know the electrical potential throughout the conductor, the electric field, and the capacitance. (From there we can start asking and answering questions about the group velocity of a wave passing through the conductor, current v. temperature, and so on.)
First, we need some equations.
1.1 Basic Equations
1.1.1 Faraday's Law
First, let's take Faraday's Law:
\begin{equation} \oint_c \boldsymbol{E} \cdot d\boldsymbol{l} = - \frac{d}{dt} \int_s \boldsymbol{B} \cdot d\boldsymbol{s} \end{equation}This says that the circulation of the electric field around a closed contour is coupled to the time-varying magnetic flux through the surface bounded by that contour, in a direction such that the induced field opposes the change (Lenz's law).
In differential form, this is:
\begin{equation} \nabla \times \boldsymbol{E} = - \frac{\partial \boldsymbol{B}}{\partial t} \end{equation}The \(\nabla \times\) operator is the curl of the field (we don't need to worry about its details here).
Now, a static electric field corresponds to no time-varying magnetic field (a static magnetic field may still be present, but decoupled):
\begin{equation} \nabla \times \boldsymbol{E} = 0 \end{equation}Mathematically speaking, a vector field with zero curl can always be written as the gradient of a scalar field. The scalar field associated with the electric field is the scalar potential \(\Phi\):
\begin{equation} \boldsymbol{E} = - \nabla \Phi \end{equation}Solid.
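If you want to sanity-check that relationship symbolically, here's a tiny sketch (mine, not from the book), assuming sympy is available: pick any smooth potential and confirm that \(\boldsymbol{E} = -\nabla\Phi\) has zero curl.

```python
# A quick symbolic sanity check: for a sample scalar potential Phi, the
# field E = -grad(Phi) has zero curl, which is the electrostatic
# condition above. The potential chosen here is arbitrary.
from sympy import sin
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D("N")

# Any reasonably smooth potential works here.
Phi = N.x**2 * N.y - N.z**3 + sin(N.x * N.y)

E = -gradient(Phi)   # E = -grad(Phi)
print(curl(E))       # -> 0 (the zero vector), as expected for a gradient field
```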
1.1.2 Gauss' Law
One more law:
\begin{equation} \oint_s \epsilon_0 \boldsymbol{E} \cdot d\boldsymbol{s} = \int_v \rho_v dv = Q \end{equation}This says that the net outflow of the electric field through a closed surface is equal to the free charge contained in the volume that surface surrounds.
In differential form, this is:
\begin{equation} \nabla \cdot (\epsilon_0 \boldsymbol{E}) = \rho_v \end{equation}
1.1.3 Poisson and Laplace's Equations
- Poisson's Equation
To obtain Poisson's equation, we substitute the electrostatic potential gradient form of the electric field into the differential form of Gauss' Law:
\begin{equation} \nabla \cdot \epsilon (- \nabla \Phi) = \rho_v \end{equation}\(\epsilon\) here is the electric permittivity of the material the electric field is induced in. If the material is homogeneous, it's a constant, so we can move it over:
\begin{equation} - \nabla^2 \Phi = \frac{\rho_v}{\epsilon} \end{equation}
- The Laplacian Operator
\(\nabla^2\) is called the Laplacian operator, and it's defined as follows:
- Rectangular coordinates:
\begin{equation} \nabla^2 \Phi = \frac{\partial^2 \Phi}{\partial x^2} + \frac{\partial^2 \Phi}{\partial y^2} + \frac{\partial^2 \Phi}{\partial z^2} \end{equation}
- Cylindrical coordinates:
\begin{equation} \nabla^2 \Phi = \frac{1}{\rho} \frac{\partial}{\partial \rho} \left( \rho \frac{\partial \Phi}{\partial \rho} \right) + \frac{1}{\rho^2} \frac{\partial^2 \Phi}{\partial \phi^2} + \frac{\partial^2 \Phi}{\partial z^2} \end{equation}
- Spherical coordinates:
\begin{equation} \nabla^2 \Phi = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial \Phi}{\partial r} \right) + \frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta} \left( \sin \theta \frac{\partial \Phi}{\partial \theta} \right) + \frac{1}{r^2 \sin^2 \theta} \frac{\partial^2 \Phi}{\partial \phi^2} \end{equation}
- Laplace's Equation
When there is no charge distribution in a space, \(\rho_v\) is zero and Poisson's equation becomes:
\begin{equation} - \nabla^2 \Phi = 0 \end{equation}Okay. Now we've got enough tools to get our feet wet.
1.2 Motivating Examples
Before we get into the computational angle, let's compute a couple of examples using these equations analytically.
1.2.1 Spherical Capacitor
Consider a spherical capacitor, with an inner conductor with a radius \(a\), and an outer conductor with radius \(b\). The inner conductor is maintained at a constant potential \(\Phi = V\), while the outer conductor is grounded.
So. We have two boundary conditions and a differential equation:
\begin{equation} - \nabla^2 \Phi = 0 \end{equation} \begin{equation} \Phi (r = a) = V \end{equation} \begin{equation} \Phi (r = b) = 0 \end{equation}We also have symmetry about \(\theta\) and \(\phi\), so Laplace's equation is manageable:
\begin{equation} \nabla^2 \Phi = \frac{1}{r^2} \frac{d}{dr}\left(r^2 \frac{d \Phi}{dr} \right) = 0 \end{equation}This is a good time for the time-honored method of guessing what the solution will look like. Since \(\Phi\) is differentiated with respect to \(r\), multiplied by \(r^2\), and differentiated again, a term of the form \(r^{-1}\) (plus a constant) will survive both derivatives without growing in degree, so we guess:
\begin{equation} \Phi = \frac{A}{r} + B \end{equation}where \(A\) and \(B\) are constants we'll determine from the boundary conditions.
Let's check this solution:
\begin{equation} \nabla^2 \Phi = \frac{1}{r^2} \frac{d}{dr}\left(r^2 \frac{d}{dr} \left( \frac{A}{r} + B \right) \right) \end{equation} \begin{equation} \frac{1}{r^2} \frac{d}{dr} (-A) = 0 \end{equation}Holds water. Let's move on to the constants.
At \(r = a\):
\begin{equation} \Phi (r = a) = V = \frac{A}{a} + B \end{equation}At \(r = b\):
\begin{equation} \Phi (r = b) = 0 = \frac{A}{b} + B \end{equation}Working out the algebra, we get:
\begin{equation} \implies B = - \frac{A}{b} \\ \implies V = \frac{A}{a} - \frac{A}{b} \\ \implies A = V \left(\frac{1}{a} - \frac{1}{b} \right)^{-1} \\ \implies B = - \frac{V}{\frac{b}{a} - 1} \end{equation}Thus,
\begin{equation} \Phi = \frac{V}{r \left(\frac{1}{a} - \frac{1}{b} \right)} - \frac{V}{\frac{b}{a} - 1} \end{equation}Now we can compute the electric field:
\begin{equation} \boldsymbol{E} = - \nabla \Phi = - \frac{d \Phi}{dr} \boldsymbol{a}_r \implies \boldsymbol{E} = \frac{V}{r^2 \left(\frac{1}{a} - \frac{1}{b} \right)} \boldsymbol{a}_r \end{equation}since \(\Phi\) depends only on \(r\), so the gradient reduces to the radial derivative.
With this and Gauss' law, we can compute the charge:
\begin{equation} \oint_s \epsilon \boldsymbol{E} \cdot d\boldsymbol{s} = Q \end{equation}Recall that the differential surface element of a sphere is \(d\boldsymbol{s} = (r d\theta)(r \sin \theta d\phi) \boldsymbol{a}_r = r^2 \sin \theta d\theta d\phi \boldsymbol{a}_r\). (You can figure this out by multiplying together the differential arc lengths of the two sides of a 'patch' on the surface of a sphere.)
The inner conductor is enclosed by a sphere of radius \(r\) (with \(a < r < b\)), so we integrate \(\phi\) over \(0\) to \(2\pi\), and \(\theta\) over \(0\) to \(\pi\):
\begin{equation} \oint_s \epsilon \boldsymbol{E} \cdot d\boldsymbol{s} = \epsilon \int^{2\pi}_{0} \int^\pi_0 \frac{V}{r^2 \left(\frac{1}{a} - \frac{1}{b} \right)} r^2 \sin \theta d \theta d \phi \\ \implies \frac{\epsilon V}{\left(\frac{1}{a} - \frac{1}{b} \right)} \int^\pi_0 \sin \theta d\theta \, \phi \bigg\rvert^{2\pi}_0 \\ \implies - \frac{2 \pi \epsilon V}{\left(\frac{1}{a} - \frac{1}{b} \right)} \cos \theta \bigg\rvert^{\pi}_0 \\ \implies \frac{4 \pi \epsilon V}{\left(\frac{1}{a} - \frac{1}{b} \right)} \end{equation}Finally, we can compute the capacitance:
\begin{equation} C = \frac{Q}{V} = \frac{4 \pi \epsilon}{\left( \frac{1}{a} - \frac{1}{b} \right)} \end{equation}
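If you'd like to double-check that whole chain symbolically, here's a small sympy sketch (mine, not the book's; the variable names are arbitrary). It rebuilds \(\Phi = A/r + B\) from the boundary conditions, verifies Laplace's equation, and recovers the same capacitance.

```python
# Symbolic check of the spherical-capacitor result.
import sympy as sp

r, a, b, V, eps = sp.symbols("r a b V epsilon", positive=True)
theta, phi = sp.symbols("theta phi")
A_, B_ = sp.symbols("A B")

# Apply Phi(a) = V and Phi(b) = 0 to the guess Phi = A/r + B.
sol = sp.solve([A_ / a + B_ - V, A_ / b + B_], [A_, B_], dict=True)[0]
Phi = sol[A_] / r + sol[B_]

# The radial part of Laplace's equation in spherical coordinates should vanish.
laplacian = sp.diff(r**2 * sp.diff(Phi, r), r) / r**2
assert sp.simplify(laplacian) == 0

# Radial electric field E_r = -dPhi/dr.
E_r = sp.simplify(-sp.diff(Phi, r))

# Gauss' law over a sphere of radius r (a < r < b): integrate eps * E_r
# over the spherical surface element r^2 sin(theta) dtheta dphi.
Q = sp.integrate(
    sp.integrate(eps * E_r * r**2 * sp.sin(theta), (theta, 0, sp.pi)),
    (phi, 0, 2 * sp.pi),
)

print(sp.simplify(Q / V))  # -> 4*pi*a*b*epsilon/(b - a), i.e. 4*pi*eps/(1/a - 1/b)
```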
1.2.2 Parallel Plate Capacitor
Let's do one more.
Consider a pair of parallel conductive plates, notionally infinite in extent, separated from each other by a distance \(d\). In between the two plates are dielectrics with permittivities \(\epsilon_1\) and \(\epsilon_2\) of possibly differing material, of thickness \(d_1\) and \(d_2\) respectively. One plate is held at a potential \(V\), while the other is grounded. We want to compute \(\Phi\), \(\boldsymbol{E}\), and \(C\).
From this we know that there is symmetry over the \(x\) and \(y\) coordinates, so we only need to consider \(z\). We also know there is no free charge within the capacitor. So, our first stop is Laplace's equation in \(z\):
\begin{equation} \nabla^2 \Phi_1 = \frac{d^2 \Phi_1}{dz^2} \biggr\rvert_{z \in [0, d_1]} = 0 \end{equation}We also need to handle the different solution in the other material.
\begin{equation} \nabla^2 \Phi_2 = \frac{d^2 \Phi_2}{dz^2} \biggr\rvert_{z \in [d_1, d_1 + d_2]} = 0 \end{equation}Naturally \(\Phi\) is continuous, but we need to consider the two domains separately and then enforce that continuity across the material boundary.
The continuity constraint at the dielectric interface is this:
\begin{equation} \boldsymbol{n} \cdot (\boldsymbol{D}_1 - \boldsymbol{D}_2) = 0 \end{equation}This says that the normal component of the electric flux density must be the same on both sides of the boundary (there is no free surface charge at the interface). We'll use this in just a second.
Anyway, if differentiating \(\Phi\) twice needs to equal \(0\), let's guess a first-degree function for the solution, and then plug in the boundary equations to solve for the constants:
\begin{equation} \Phi_1(z) = Az + B \\ \Phi_2(z) = Cz + D \\ \implies \Phi_1(0) = V = B \\ \implies \Phi_2(d) = 0 = Cd + D \end{equation}Since the potential is continuous across the dielectric interface, and there is no free charge at the interface, we can make the following two assertions at \(z = d_1\):
\begin{equation} \Phi_1(d_1) = \Phi_2(d_1) \\ D_1(z = d_1) = D_2(z = d_1) \end{equation}Now we can talk about the electric flux density across the boundary.
The electric flux density in a material is given by:
\begin{equation} \boldsymbol{D} = \epsilon_0 \epsilon_r \boldsymbol{E} = - \epsilon_0 \epsilon_r \nabla \Phi(z) \end{equation}So, our boundary condition is:
\begin{equation} \epsilon_1 \nabla \Phi_1(z) \bigg\rvert_{z = d_1} = \epsilon_2 \nabla \Phi_2(z) \bigg\rvert_{z = d_1} \\ \implies \epsilon_1 A = \epsilon_2 C \\ \implies C = \frac{\epsilon_1}{\epsilon_2} A \end{equation}Coming back to the equation for \(\Phi_2\):
\begin{equation} \Phi_2(d) = 0 = Cd + D \\ \implies 0 = \frac{\epsilon_1}{\epsilon_2} Ad + D \\ d = d_1 + d_2 \\ \implies D = - \frac{\epsilon_1}{\epsilon_2} A (d_1 + d_2) \end{equation}Now, enforcing continuity of the potential at \(z = d_1\):
\begin{equation} \Phi_1(d_1) = \Phi_2(d_1) \\ \implies A d_1 + V = \frac{\epsilon_1}{\epsilon_2} A d_1 - \frac{\epsilon_1}{\epsilon_2} A (d_1 + d_2) \\ \implies V = - \left( d_1 + \frac{\epsilon_1}{\epsilon_2} d_2 \right) A \\ \implies A = \frac{-V}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} \end{equation}Now we can substitute to get the potentials in both regions:
\begin{equation} \Phi_1(z) = V \left( 1 - \frac{z}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} \right) \\ \Phi_2(z) = - \left( \frac{V}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} \right) \left( \frac{\epsilon_1}{\epsilon_2} \right) \left( z - (d_1 + d_2) \right) \end{equation}The electric fields are then:
\begin{equation} E_1 = - \frac{d}{dz} \left( V \left( 1 - \frac{z}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} \right) \right) \boldsymbol{a}_z \\ \implies E_1 = \frac{V}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} \boldsymbol{a}_z = \frac{\epsilon_2 V}{\epsilon_2 d_1 + \epsilon_1 d_2} \boldsymbol{a}_z \end{equation}and
\begin{equation} E_2 = - \frac{d}{dz} \left( - \left( \frac{V}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} \right) \left( \frac{\epsilon_1}{\epsilon_2} \right) \left( z - (d_1 + d_2) \right) \right) \boldsymbol{a}_z \\ \implies E_2 = \frac{\epsilon_1 V}{\epsilon_2 d_1 + \epsilon_1 d_2} \boldsymbol{a}_z \end{equation}The only free charge is the surface charge on the plates; at the first plate (\(z = 0\)), the normal component of the flux density equals the surface charge density \(\rho_s\), so we can compute it directly:
\begin{equation} \epsilon_1 E_{1} = \rho_s \\ \implies C = \frac{\rho_s}{V} = \frac{\epsilon_1}{d_1 + \frac{\epsilon_1}{\epsilon_2} d_2} = \frac{\epsilon_1 \epsilon_2}{\epsilon_2 d_1 + \epsilon_1 d_2} \end{equation}This is the capacitance per unit area, since the plates are notionally infinite.
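Here's the matching sympy sketch for this example (again mine, not the book's). It solves for the four constants from the boundary and interface conditions and reproduces the fields and the capacitance per unit area.

```python
# Symbolic check of the two-dielectric parallel-plate result.
import sympy as sp

z, V, d1, d2, eps1, eps2 = sp.symbols("z V d_1 d_2 epsilon_1 epsilon_2", positive=True)
A, B, C, D = sp.symbols("A B C D")

Phi1 = A * z + B   # region 1: 0 <= z <= d_1, permittivity epsilon_1
Phi2 = C * z + D   # region 2: d_1 <= z <= d_1 + d_2, permittivity epsilon_2

eqs = [
    sp.Eq(Phi1.subs(z, 0), V),                    # plate at z = 0 held at V
    sp.Eq(Phi2.subs(z, d1 + d2), 0),              # plate at z = d grounded
    sp.Eq(Phi1.subs(z, d1), Phi2.subs(z, d1)),    # potential continuous at interface
    sp.Eq(eps1 * sp.diff(Phi1, z), eps2 * sp.diff(Phi2, z)),  # normal D continuous
]
sol = sp.solve(eqs, [A, B, C, D], dict=True)[0]

E1 = -sp.diff(Phi1, z).subs(sol)   # field in region 1
E2 = -sp.diff(Phi2, z).subs(sol)   # field in region 2

# Surface charge density on the z = 0 plate is rho_s = eps1 * E1, so the
# capacitance per unit area is rho_s / V.
C_per_area = sp.simplify(eps1 * E1 / V)
print(sp.simplify(E1))
print(sp.simplify(E2))
print(C_per_area)  # -> epsilon_1*epsilon_2/(epsilon_1*d_2 + epsilon_2*d_1)
```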
1.3 Deriving the Finite Difference Method
All that mathematics is great, but we would like to have something that computers can easily process. To do that, we're going to start by writing the Taylor approximation of a one-dimensional potential distribution \(\Phi(x_0 + h)\) about a position \(x_0\):
\begin{equation} \Phi(x_0 + h) = \Phi(x_0) + h \frac{d\Phi}{dx} \bigg\rvert_{x_0} + \frac{h^2}{2} \frac{d^2\Phi}{dx^2} \bigg\rvert_{x_0} + \frac{h^3}{3 \cdot 2} \frac{d^3\Phi}{dx^3} \bigg\rvert_{x_0} + ... \end{equation}For the sake of simplicity, let's throw away all of the terms of order higher than two, and solve for \(\frac{d\Phi}{dx}\):
\begin{equation} \frac{d\Phi}{dx} \bigg\rvert_{x_0} = \frac{\Phi(x_0 + h) - \Phi(x_0)}{h} - \frac{h}{2} \frac{d^2\Phi}{dx^2} \bigg\rvert_{x_0} \end{equation}This is a difference equation (the discrete version of a differential equation). Since it calculates the difference between \(\Phi\) at its center and at the positive offset \(h\), you can be more specific and call it the forward difference equation.
The backward difference equation is:
\begin{equation} \frac{d\Phi}{dx} \bigg\rvert_{x_0} = \frac{\Phi(x_0) - \Phi(x_0 - h)}{h} + \frac{h}{2} \frac{d^2\Phi}{dx^2} \bigg\rvert_{x_0} \end{equation}Nice symmetry. The only difference is that the error term has a different sign (which makes sense if you think about it geometrically).
What happens if you subtract the two Taylor series approximations from each other instead?
\begin{equation} \Phi(x_0 + h) - \Phi(x_0 - h) = 2h \frac{d\Phi}{dx} + \frac{2h^3}{6} \frac{d^3\Phi}{dx^3} + ... \end{equation}The even terms of the Taylor series cancel out. Solving for \(\frac{d\Phi}{dx}\) again, we get:
\begin{equation} \frac{d\Phi}{dx} = \frac{\Phi(x_0 + h) - \Phi(x_0 - h) - \frac{2h^3}{6} \frac{d^3\Phi}{dx^3}}{2h} \\ \implies \frac{\Phi(x_0 + h) - \Phi(x_0 - h)}{2h} - \frac{h^2}{6} \frac{d^3\Phi}{dx^3} \end{equation}One thing to notice here is that the error term is smaller, and it's a function of a higher order derivative we said we were going to neglect. (In other words, we're assuming the function has a small third derivative–it's somewhat "smooth".) So this central difference equation is much better.
In all of these difference equations, the leading error term grows with the step size \(h\): it is of order \(h\) for the forward and backward differences and of order \(h^2\) for the central difference. So if we make \(h\) small, the error will shrink as well.
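Here's a quick numerical illustration of that claim (my example, not the book's), using \(\sin x\) as a stand-in potential since we know its derivative exactly.

```python
# Compare forward, backward, and central difference errors as h shrinks.
import numpy as np

f = np.sin          # sample "potential"
df_exact = np.cos   # its exact derivative
x0 = 1.0

for h in (0.1, 0.05, 0.025):
    forward = (f(x0 + h) - f(x0)) / h
    backward = (f(x0) - f(x0 - h)) / h
    central = (f(x0 + h) - f(x0 - h)) / (2 * h)
    print(
        f"h={h:<6} "
        f"forward err={abs(forward - df_exact(x0)):.2e}  "
        f"backward err={abs(backward - df_exact(x0)):.2e}  "
        f"central err={abs(central - df_exact(x0)):.2e}"
    )
# Halving h roughly halves the forward/backward error, but cuts the
# central difference error by about a factor of four.
```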
Now we just continue in this vein to get the second difference. We know that the derivative itself is a limit of a difference equation, so all we need to do is take our existing approximations and formulate a difference equation:
\begin{equation} \frac{d^2\Phi}{dx^2} \bigg\rvert_{x_0} \approx \frac{\frac{d\Phi}{dx} \bigg\rvert_{x_0 + h/2} - \frac{d\Phi}{dx} \bigg\rvert_{x_0 - h/2}}{h} \end{equation}For concision's sake, let's number these points:
\begin{equation} \Phi(x_0 - h) = \Phi_1 \\ \Phi(x_0) = \Phi_2 \\ \Phi(x_0 + h) = \Phi_3 \end{equation}Then, if we unpack the derivatives as difference equations again, we get:
\begin{equation} \frac{d^2\Phi}{dx^2} \bigg\rvert_{x_0} \approx \frac{\frac{\Phi_3 - \Phi_2}{h} - \frac{\Phi_2 - \Phi_1}{h}}{h} \\ \implies \frac{\Phi_3 + \Phi_1 - 2\Phi_2}{h^2} \end{equation}Now, we know that the above has to be true for any single independent variable, and that partial derivatives with respect to different variables can each be approximated the same way. So we can generalize this to any number of variables. Here's an example of a 2D function:
\begin{equation} \Phi(x_0, y_0) = \Phi_0 \\ \Phi(x_0 + h, y_0) = \Phi_1 \\ \Phi(x_0 - h, y_0) = \Phi_2 \\ \Phi(x_0, y_0 + h) = \Phi_3 \\ \Phi(x_0, y_0 - h) = \Phi_4 \end{equation}(If it helps, imagine these points as naming "center", "east", "west", "north", "south".)
\begin{equation} \frac{\partial^2\Phi}{\partial x^2} \bigg\rvert_{x_0, y_0} \approx \frac{\Phi_1 + \Phi_2 - 2\Phi_0}{h^2} \\ \frac{\partial^2\Phi}{\partial y^2} \bigg\rvert_{x_0, y_0} \approx \frac{\Phi_3 + \Phi_4 - 2\Phi_0}{h^2} \end{equation}Now we can write an approximation to Laplace's equation:
\begin{equation} \nabla^2 \Phi = \frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} \approx \frac{\Phi_1 + \Phi_2 + \Phi_3 + \Phi_4 - 4\Phi_0}{h^2} \end{equation}
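As a quick check of the five-point formula (my example, not the book's), apply it to \(\Phi = x^2 - y^2\), which is exactly harmonic; the discrete Laplacian should come out as zero up to floating-point roundoff.

```python
# Apply the five-point stencil to a known harmonic function.
def phi(x, y):
    return x**2 - y**2   # true Laplacian is 2 - 2 = 0 everywhere

h = 0.1
x0, y0 = 0.3, 0.7
center = phi(x0, y0)
east, west = phi(x0 + h, y0), phi(x0 - h, y0)
north, south = phi(x0, y0 + h), phi(x0, y0 - h)

discrete_laplacian = (east + west + north + south - 4 * center) / h**2
print(discrete_laplacian)   # ~0, within floating-point roundoff
```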
1.4 Algorithm
You may be asking: what does that derivation do for us?
Remember that we need to know the boundary conditions for any electrostatics problem to produce a specific solution. In the formulation we derived for Laplace's equation above, we divide the space we're interested in into a uniform grid (if you're interested, let \(h\) vary and see how it turns out), producing a system of equations, one for each point.
In that case, boundary conditions will be values for \(\Phi\) at each of the boundary points of the grid. If we have enough boundary conditions, the system of equations is solvable. For example, if we were to iterate through Gaussian elimination, those boundary conditions would trickle through the whole system in the form of more and more determined values: the algorithm would converge on the complete answer. (And as you may have noticed, we can write as many equations as we have unknowns.)
So here is what we need to do:
- Postulate a grid of discrete potentials approximately mapping to a continuous space.
- Form \(N\) equations with \(N\) unknowns, where \(N\) is the number of grid points we're solving for (the boundary points are already known). (This will be a matrix equation of the form \(\boldsymbol{Ax} = \boldsymbol{b}\).)
- Solve that equation. (In practice, this is a left matrix divide; a one-line numpy equivalent is sketched below.)
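As a tiny illustration of that last step (mine, not the book's): in numpy, the "left matrix divide" is just a direct linear solve.

```python
# MATLAB's A\b corresponds to numpy's linear solver. A and b here are
# stand-ins; the real ones come from step 2.
import numpy as np

A = np.array([[-4.0, 1.0], [1.0, -4.0]])   # toy 2 x 2 coefficient matrix
b = np.array([-100.0, 0.0])                # toy boundary terms
x = np.linalg.solve(A, b)                  # the unknown potentials
print(x)
```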
1.4.1 Understanding the Rectangular FD Grid
Here are some aids to applying this.
If you look at the coefficients for the equation, you may notice that there is a pattern of values that will always be present (with one exception I'll explain in just a minute):
- \(-4\) for the center point
- \(1\) for the neighboring points
If you write your potentials as a column vector, you'll see a band of \(-4\) along the coefficient matrix diagonal, surrounded by a band of \(1\) on the first adjacent diagonals (corresponding to the first-axis neighbors). Lastly, there will be two bands on more distant diagonals (corresponding to the second-axis neighbors). These sit on the diagonals offset by \(k\) from the main diagonal, where \(k\) is the number of nodes in one row of your grid (where "row" is whatever your first axis is).
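To make the band structure concrete, here is the coefficient matrix for a small example grid of my own: two rows of three interior nodes each (so \(k = 3\)), numbered row by row, with all boundary terms moved to the right-hand side:
\begin{equation} \boldsymbol{A} = \begin{pmatrix} -4 & 1 & 0 & 1 & 0 & 0 \\ 1 & -4 & 1 & 0 & 1 & 0 \\ 0 & 1 & -4 & 0 & 0 & 1 \\ 1 & 0 & 0 & -4 & 1 & 0 \\ 0 & 1 & 0 & 1 & -4 & 1 \\ 0 & 0 & 1 & 0 & 1 & -4 \end{pmatrix} \end{equation}The gap in the adjacent band between the third and fourth rows marks the end of the first grid row, and the outer bands of \(1\) sit at offset \(k = 3\) from the main diagonal.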
The exception to the adjacent-diagonal band pattern is when an equation involves a boundary point with a nonzero potential. (You can think of the known boundary terms as playing the same role the charge term plays in Poisson's equation: they end up as a source on the right-hand side, which is one explanation for why the form looks different.) At these points, the constant vector will hold the boundary value (with a flipped sign), and there is no matching coefficient in the matrix, because there's nothing to solve for there.
(If you have some programming training, this is a good place in your implementation to write a function that uses these patterns to assemble the coefficient matrix and the right-hand side from the grid.)
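Here's a minimal end-to-end sketch of such a function (mine, not the book's), assuming a uniform grid with fixed boundary potentials; the names are my own choosing. It assembles the banded system described above and solves it directly. Note that because the right-hand side of Laplace's equation is zero, the factor of \(h^2\) cancels and never shows up in the assembly.

```python
# Assemble and solve the finite difference system for Laplace's equation
# on a rectangular grid with fixed (Dirichlet) boundary potentials.
import numpy as np

def solve_laplace(nx, ny, boundary):
    """Solve Laplace's equation on an (nx+2) x (ny+2) grid.

    nx, ny   -- number of interior (unknown) nodes along x and y
    boundary -- dict of constant potentials on the "top", "bottom",
                "left", and "right" edges
    """
    phi = np.zeros((nx + 2, ny + 2))
    phi[:, -1] = boundary["top"]
    phi[:, 0] = boundary["bottom"]
    phi[0, :] = boundary["left"]
    phi[-1, :] = boundary["right"]

    n = nx * ny
    A = np.zeros((n, n))
    b = np.zeros(n)

    def index(i, j):
        return (i - 1) * ny + (j - 1)   # row-major numbering of interior nodes

    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            k = index(i, j)
            A[k, k] = -4.0
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 1 <= ni <= nx and 1 <= nj <= ny:
                    A[k, index(ni, nj)] = 1.0   # interior neighbor: an unknown
                else:
                    b[k] -= phi[ni, nj]         # boundary neighbor: move to the RHS

    phi[1:-1, 1:-1] = np.linalg.solve(A, b).reshape(nx, ny)
    return phi

# Example: a 5 x 5 interior grid with the top plate at 100 V and the rest grounded.
grid = solve_laplace(5, 5, {"top": 100.0, "bottom": 0.0, "left": 0.0, "right": 0.0})
print(np.round(grid, 1))
```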