Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.732595
Title: Computational problems in linear algebra
Author: Reid, J. K.
Awarding Body: University of Oxford
Current Institution: University of Oxford
Date of Award: 1964
Availability of Full Text:
Access from EThOS:
Full text unavailable from EThOS.
Access from Institution:
Abstract:
In this thesis we consider the problems that arise in computational linear algebra when the matrix involved is very large and sparse. This necessarily results in the emphasis throughout being mostly on iterative methods, but we begin with a chapter in which the usual direct methods for the solution of linear equations are summarized with the very large problem in mind. Methods for band matrices and block tridiagonal matrices are considered and compared, and Kron's method of tearing is described briefly.

The basic and well-established techniques for the iterative solution of linear equations are then described. Sufficient conditions for convergence are stated and convergence is established under these conditions; a section is devoted to the problem of deciding when the iterations may be terminated. The method of Kaczmarz is given more emphasis than is usual in the literature: a symmetric version is introduced and rigorous proofs of convergence are given. Matrices with "property A" are defined and the usual associated results are given. The methods of Carré and Kulsrud for finding the optimum relaxation parameter in SOR are described and compared with a new technique, and we also consider methods for finding the best relaxation parameter in symmetric SOR. Block iterative methods are described briefly, including Varga's result on the comparison of different splittings. Gradient and semi-iterative methods are considered in detail, including the cases of non-symmetric and symmetric non-definite matrices, and are compared for numerical stability.

For the eigenvalue problem we again summarize some well-known methods with an emphasis on the very large matrix. We give an example of the accurate results that can be expected from Lanczos' method of minimised iterations without re-orthogonalisation and show how such results can be guaranteed. Inverse iteration, the Rayleigh quotient iteration and Rutishauser's LR algorithm are considered. We also devote a chapter to iterative methods that yield continually improving approximations to an eigenvector and its corresponding eigenvalue; these are very economical in storage but have received little attention in the literature. We introduce the concept of "numerical rank", since rank has no real meaning for a matrix given by rounded numbers, and we give means of finding bounds for this numerical rank.

The extension of the methods to complex matrices is considered briefly, and we conclude with a description of some problems that may give rise to large sparse matrices. We consider in particular the so-called "L-membrane" problem and use a conformal transformation; in this way we also illustrate the problem of solving an elliptic equation over a region with a curved boundary. We consider the iteration of Fedorenko, which has its most obvious application in the field of elliptic equations. The thesis concludes with two further examples, from the linear analysis of variance and from surveying, each of which is likely to involve large sparse matrices.
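As a purely illustrative aside (not part of the 1964 thesis; the NumPy code, function name and parameters below are modern assumptions), a minimal sketch of the classical Kaczmarz iteration that the abstract singles out might look like this:

    import numpy as np

    def kaczmarz(A, b, x0=None, sweeps=50):
        # Illustrative sketch of the classical Kaczmarz iteration: cycle through
        # the rows of A, projecting the current iterate onto the hyperplane
        # a_i^T x = b_i at each step.
        m, n = A.shape
        x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
        row_norms = np.sum(A * A, axis=1)   # ||a_i||^2 for each row
        for _ in range(sweeps):
            for i in range(m):
                if row_norms[i] > 0.0:
                    x += ((b[i] - A[i] @ x) / row_norms[i]) * A[i]
        return x

    # Small consistent system; the iterates converge to the solution of Ax = b.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(kaczmarz(A, b))   # approximately [0.0909, 0.6364]

A symmetric version, by analogy with symmetric SOR, would sweep the rows forwards and then backwards within each cycle; the thesis abstract indicates that such a variant is introduced and its convergence proved rigorously.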
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.732595
DOI: Not available