Title:

Computational problems in linear algebra

In this thesis we consider the problems that arise in computational linear algebra when the matrix involved is very large and sparse. This necessarily places the emphasis throughout mostly on iterative methods, but we begin with a chapter in which the usual direct methods for the solution of linear equations are summarized with the very large problem in mind. Methods for band matrices and block tridiagonal matrices are considered and compared. Kron's method of tearing is described briefly. The basic and well-established techniques of iterative solution of linear equations are described. Sufficient conditions for convergence are stated and convergence is established under these conditions. A section is devoted to the problem of deciding when the iterations may be terminated. The method of Kaczmarz is given more emphasis than is usual in the literature; a symmetric version is introduced and rigorous proofs of convergence are given. Matrices with "property A" are defined and the usual associated results are given. The methods of Carré and Kulsrud for finding the optimum relaxation parameter in SOR are described and compared with a new technique. We also consider methods for finding the best relaxation parameter in symmetric SOR. Block iterative methods are described briefly, including Varga's result on the comparison of different splittings. Gradient and semi-iterative methods are considered in detail, including the cases of non-symmetric and symmetric indefinite matrices, and are compared for numerical stability.

For the eigenvalue problem we again summarize some well-known methods with an emphasis on the very large matrix. We give an example of the accurate results that can be expected from Lanczos' method of minimised iterations without reorthogonalisation and show how such results can be guaranteed. Inverse iteration, the Rayleigh quotient iteration and Rutishauser's LR algorithm are considered.
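The Kaczmarz method mentioned above works by cycling through the equations and orthogonally projecting the current iterate onto the hyperplane defined by each row in turn. The sketch below illustrates that basic cyclic iteration only; it is not the symmetric variant developed in the thesis, and the function name and parameters are illustrative.

```python
import numpy as np

def kaczmarz(A, b, x0=None, sweeps=50):
    """Solve a consistent system A x = b by cyclic row projections.

    Each step projects the iterate onto the hyperplane a_i . x = b_i,
    i.e. x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)  # ||a_i||^2 for each row
    for _ in range(sweeps):
        for i in range(m):
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Each projection never increases the distance to the solution set, which is why convergence can be proved for any consistent system; the rate depends on the angles between the hyperplanes.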
We also devote a chapter to iterative methods that yield continually improving approximations to an eigenvector and the corresponding eigenvalue. These are very economical in storage but have received little attention in the literature. We introduce the concept of "numerical rank", since rank really has no meaning for a matrix given by rounded numbers, and we give means of finding bounds for this numerical rank. The extension of the methods to complex matrices is considered briefly, and we conclude with a description of some problems that may give rise to large sparse matrices. In particular we consider the so-called "L-membrane" problem and use a conformal transformation; in this way we also illustrate the problem of solving an elliptic equation over a region with a curved boundary. We consider the iteration of Fedorenko, which has its most obvious application in the field of elliptic equations. The thesis concludes with two further examples, from the linear analysis of variance and from surveying, each of which is likely to involve large sparse matrices.
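The abstract does not say how the bounds on numerical rank are obtained; one common way to make the idea concrete (a sketch under assumed conventions, not the construction developed in the thesis) is to count the singular values that exceed a tolerance reflecting the accuracy of the data:

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Rank of A relative to a tolerance for the rounded data.

    Singular values at or below tol are treated as zero.  When no
    tolerance is supplied, the usual max(m, n) * eps * sigma_max
    heuristic is used, which reflects rounding at machine precision.
    """
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    return int((s > tol).sum())
```

For example, a matrix whose second row equals twice its first up to a perturbation of 1e-8 has exact rank 2, but numerical rank 1 once the tolerance exceeds the perturbation, which is the sense in which rank depends on the accuracy of the given numbers.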
