Title:

Computing matrix functions in arbitrary precision arithmetic

Functions of matrices arise in numerous applications, and their accurate and efficient evaluation is an important topic in numerical linear algebra. In this thesis, we explore methods to compute them reliably in arbitrary precision arithmetic: on the one hand, we develop the theoretical tools needed to reduce the impact of the working precision at the algorithm design stage; on the other, we present new numerical algorithms for the evaluation of primary matrix functions and the solution of matrix equations in arbitrary precision environments.

Many state-of-the-art algorithms for functions of matrices rely on polynomial or rational approximation, and reduce the computation of f(A) to the evaluation of a polynomial or rational function at the matrix argument A. Most of the algorithms developed in this thesis are no exception, and thus we begin our investigation by revisiting the Paterson-Stockmeyer method, an algorithm that minimizes the number of nonscalar multiplications required to evaluate a polynomial of a given degree. We introduce the notion of optimal degree for an evaluation scheme, and derive formulae for the sequences of optimal degrees for the schemes used in practice to evaluate truncated Taylor series and diagonal Padé approximants.

If the rational function r approximates f, then it is reasonable to expect that a solution to the matrix equation r(X) = A will approximate the functional inverse of f. In general, infinitely many matrices can satisfy an equation of this kind, and we propose a classification of the solutions that is of practical interest from a computational standpoint. We develop a precision-oblivious numerical algorithm that computes all the solutions of practical interest and behaves in a forward stable fashion. After establishing these general techniques, we concentrate on the matrix exponential and its functional inverse, the matrix logarithm.
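As a minimal illustration of the idea behind the Paterson-Stockmeyer method (this sketch is not the thesis's own code, and the function name and default block size are illustrative): the scheme precomputes the powers A, A^2, ..., A^s and then applies a Horner-like recurrence in A^s, so that a degree-d matrix polynomial costs roughly 2*sqrt(d) matrix-matrix products instead of the d-1 required by naive Horner evaluation.

```python
import numpy as np

def paterson_stockmeyer(coeffs, A, s=None):
    """Evaluate p(A) = sum_k coeffs[k] * A^k by the Paterson-Stockmeyer
    scheme. coeffs holds the scalar coefficients c_0, ..., c_d in
    ascending order of powers; s is the block size (about sqrt(d))."""
    d = len(coeffs) - 1
    n = A.shape[0]
    if s is None:
        s = max(1, int(np.ceil(np.sqrt(d))))
    # Precompute A^0, ..., A^s; this costs s - 1 nonscalar products.
    powers = [np.eye(n)]
    for _ in range(s):
        powers.append(powers[-1] @ A)
    As = powers[s]
    # Horner recurrence in A^s over coefficient blocks of length s,
    # highest block first: p(A) = (...(B_r A^s + B_{r-1}) A^s + ...) + B_0.
    r = d // s
    P = np.zeros((n, n))
    for j in range(r, -1, -1):
        block = sum(coeffs[j * s + i] * powers[i]
                    for i in range(s) if j * s + i <= d)
        P = P @ As + block
    return P
```

For a degree-d polynomial this uses s - 1 products for the powers plus r products for the recurrence, which is minimized when s is near sqrt(d); the thesis's notion of optimal degree refines this trade-off for the schemes used in practice.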
We present a new scaling and squaring approach for computing the matrix exponential in high precision, which combines a new strategy for choosing the algorithmic parameters with a bound on the forward error of Padé approximants to the exponential. Then we develop two algorithms, based on the inverse scaling and squaring method, for evaluating the matrix logarithm in arbitrary precision. The new algorithms rely on a new forward error bound for Padé approximants which, for highly nonnormal matrices, can be considerably smaller than the classic bound of Kenney and Laub. Our experimental results show that in double precision arithmetic the new approaches are comparable with the state-of-the-art algorithm for computing the matrix logarithm, and experiments in higher precision support the conclusion that the new algorithms behave in a forward stable way, typically outperforming existing alternatives.

Finally, we consider problems of the form f(A)b, and focus on methods for computing the action of the weighted geometric mean of two large and sparse positive definite matrices on a vector. We present two new approaches based on numerical quadrature, compare them with several Krylov subspace methods in terms of both accuracy and efficiency, and show which algorithms are better suited to a black-box approach. In addition, we show how these methods can be employed to solve a problem that arises in applications, namely the solution of linear systems whose coefficient matrix is a weighted geometric mean.
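The scaling and squaring idea underlying the exponential algorithms can be sketched as follows (a simplified illustration, not the thesis's algorithm: the scaling rule and truncated Taylor approximant below are placeholders for the parameter-selection strategy and Padé approximants the thesis actually uses). The matrix is scaled by 2^-s so that its norm is modest, the exponential of the scaled matrix is approximated, and the scaling is undone by s repeated squarings.

```python
import numpy as np

def expm_scaling_squaring(A, m=12):
    """Illustrative scaling and squaring sketch: scale A so its 1-norm is
    at most 1, approximate exp(A / 2^s) by a degree-m truncated Taylor
    series (production codes use a diagonal Pade approximant instead),
    then square the result s times, since exp(A) = exp(A / 2^s)^(2^s)."""
    normA = np.linalg.norm(A, 1)
    s = int(np.ceil(np.log2(normA))) if normA > 1 else 0
    B = A / (2.0 ** s)
    # Horner evaluation of the truncated Taylor series of exp at B.
    n = A.shape[0]
    X = np.eye(n)
    for k in range(m, 0, -1):
        X = np.eye(n) + (B @ X) / k
    # Undo the scaling by repeated squaring.
    for _ in range(s):
        X = X @ X
    return X
```

In arbitrary precision the crux, addressed in the thesis, is choosing s and the approximant degree m jointly as functions of the working precision, which is what the new parameter-selection strategy and forward error bounds provide.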
