4 D. BERGHAUS, R. S. JONES, H. MONIEN, AND D. RADCHENKO
one chooses m_ν = νπ/α for the odd eigenfunction basis, one obtains functions that trivially
fulfill the boundary condition along both edges of the wedge. This property (combined with
the exponential decay of the basis functions) makes the MPS very useful when computing
eigenvalues of triangular shapes: the boundary condition on two of the three edges is
trivially fulfilled by the construction of ψ_ν. To obtain the spectral parameter k, one truncates
the expansion of Ψ at some finite order N and searches for the parameter k for which the
expansion vanishes, up to the desired numerical precision, on a discrete set of points on the
remaining edge (this approach is often referred to as point-matching). This results in a
linear system of equations
$$
\begin{pmatrix}
\psi_1(k, r_1, \theta_1) & \cdots & \psi_N(k, r_1, \theta_1) \\
\vdots & \ddots & \vdots \\
\psi_1(k, r_N, \theta_N) & \cdots & \psi_N(k, r_N, \theta_N)
\end{pmatrix}
\cdot
\begin{pmatrix} c_1 \\ \vdots \\ c_N \end{pmatrix}
=
\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix},
$$
which is square if the number of point-matching points is also chosen to be N. The parameter
k now corresponds to an approximation of the true spectral parameter if the coefficients
are essentially invariant under the choice of matching points. In view of this, one computes
the value of k for which the determinant becomes zero (i.e., for which the matrix becomes
singular) using root-finding algorithms (we used the secant method) [16]. By increasing N, one
can obtain better approximations and hence better precision for the eigenvalue.
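The determinant-based search can be sketched in a few lines. The following is a minimal pure-Python illustration (not the authors' code) for a circular sector of angle α = π/2 with Dirichlet conditions, where the basis ψ_ν(k, r, θ) = J_{m_ν}(kr) sin(m_ν θ) with m_ν = 2ν already vanishes on the two straight edges and point-matching is applied on the arc r = 1; the exact lowest eigenvalue is the first zero of J_2 (all function names and the test geometry are our own):

```python
import math

ALPHA = math.pi / 2  # quarter-disk sector; m_nu = nu * pi / ALPHA = 2 * nu is an integer

def bessel_j(m, x, terms=40):
    """Integer-order Bessel function J_m(x) via its power series."""
    total = 0.0
    for s in range(terms):
        total += (-1) ** s / (math.factorial(s) * math.factorial(s + m)) \
                 * (x / 2) ** (2 * s + m)
    return total

def det(matrix):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in matrix]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if a[p][i] == 0.0:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def matching_determinant(k, n=4):
    """Determinant of the n x n point-matching matrix on the arc r = 1."""
    thetas = [(i + 1) * ALPHA / (n + 1) for i in range(n)]
    rows = [[bessel_j(2 * (nu + 1), k) * math.sin(2 * (nu + 1) * t)
             for nu in range(n)]
            for t in thetas]
    return det(rows)

def secant(f, x0, x1, tol=1e-13, max_iter=100):
    """Secant method: iterate until the step size falls below tol."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

k_approx = secant(matching_determinant, 5.0, 5.3)
```

The secant iteration converges to k ≈ 5.13562, the first zero of J_2; larger n (with correspondingly more matching points and working precision) refines further eigenvalues in the same way.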
This approach has been applied by the second author [24, 25], who expanded from the
vertex with angle β/2 (see Fig. 1) and applied point-matching to the edge ∂Ω_2 to compute
eigenvalues of polygons with n ≤ 150 to 50 digits of precision. We remark that the MPS
has the limitation that the point-matching matrix becomes ill-conditioned for large N. This
was further analyzed by Betcke and Trefethen [4, 5] in 2005, who proposed an improved
approach referred to as the modified MPS. Their main idea was to compute an
additional matrix consisting of the basis functions evaluated at (randomly chosen) points
in the interior of the considered region. This matrix is designed to ensure that the
approximate eigenfunction does not vanish in the interior.
By minimizing the lowest generalized singular value of these two matrices, one can reliably
locate the eigenvalues of a given shape, because spurious solutions are excluded and N can be
chosen as required without the matrix becoming too ill-conditioned. The modified MPS can be
regarded as a successor of the MPS and is superior in most applications. For the computation
of eigenvalues of relatively simple shapes in arbitrary-precision arithmetic, however, the
original MPS can still prove very competitive: the ill-conditioning can be overcome by
increasing the working precision, and the generalized singular value decomposition is
computationally much more expensive than the LU decomposition required for the determinant
computation. Additionally, the determinant, viewed as a function of k, becomes almost linear
close to the eigenvalues, which makes locating the roots very efficient [24].
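To make the contrast concrete, the Betcke–Trefethen idea can be sketched as follows; this is a minimal illustration for a quarter-disk sector (angle π/2, Dirichlet conditions), with fixed rather than randomly chosen interior points for reproducibility, and with function names and point sets that are our own choices, not the original implementation. One stacks the boundary and interior collocation matrices, orthonormalizes the columns with a QR factorization, and tracks the smallest singular value of the boundary block of Q, which dips toward zero precisely at the eigenvalues:

```python
import math
import numpy as np

def bessel_j(m, x, terms=40):
    """Integer-order Bessel function J_m(x) via its power series."""
    return sum((-1) ** s / (math.factorial(s) * math.factorial(s + m))
               * (x / 2) ** (2 * s + m) for s in range(terms))

def basis_matrix(k, points, n=4):
    """Rows: points (r, theta); columns: psi_nu = J_{2 nu}(k r) sin(2 nu theta)."""
    return np.array([[bessel_j(2 * (nu + 1), k * r) * math.sin(2 * (nu + 1) * t)
                      for nu in range(n)]
                     for (r, t) in points])

# Matching points on the arc r = 1 and fixed points in the interior of the sector.
boundary = [(1.0, (i + 1) * (math.pi / 2) / 9) for i in range(8)]
interior = [(0.5, 0.3), (0.7, 0.9), (0.4, 1.2), (0.8, 0.5)]

def smallest_boundary_singular_value(k):
    """sigma(k): drops toward zero when k is an eigenvalue, stays O(1) otherwise."""
    a = np.vstack([basis_matrix(k, boundary), basis_matrix(k, interior)])
    q, _ = np.linalg.qr(a)          # orthonormalize the columns of the stacked matrix
    q_boundary = q[:len(boundary)]  # boundary block of Q
    return np.linalg.svd(q_boundary, compute_uv=False)[-1]
```

Minimizing sigma(k) over k then locates the eigenvalues: near the lowest eigenvalue of the quarter disk (the first zero of J_2, k ≈ 5.13562) sigma drops by many orders of magnitude, while away from eigenvalues it stays of order one, so spurious near-singular configurations are filtered out by the interior rows.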
We further remark that recent progress has been achieved in making the results of the
MPS rigorous (i.e., equipped with certified error bounds) [11, 20]. For our application,
certifying the eigenvalues yields no benefit, because the derivation of the closed-form
expressions using the LLL algorithm is non-rigorous (unless one knows a bound on the height
of the coefficients in advance, which we do not). Additionally, certifying the eigenvalues
incurs a loss of precision.
3. Domain Decomposition
Despite the absolute convergence of the MPS (i.e., one always obtains better estimates by
increasing N, as long as the working precision is increased sufficiently), the algorithm
applied by the second author in [24, 25] has the disadvantage that the convergence rate
(the eigenvalue precision per number of matching points) decreases drastically
for polygons with more edges (and hence “thinner” fundamental regions). To overcome