Title: Deterministic global optimisation and location of transition states
Author: Nerantzis, Dimitrios
ISNI: 0000 0004 6059 2685
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2016
Transition states (index-1 saddle points) play a crucial role in determining the rates of chemical transformations, but their reliable identification remains challenging in many applications. Deterministic global optimisation methods have previously been employed to locate transition states (TSs) by first finding all stationary points and then identifying the TSs among the set of solutions. We propose several regional tests, applicable to general nonlinear, twice continuously differentiable functions, to accelerate the convergence of such approaches by identifying areas that do not contain any TS or that may contain a unique TS. The tests are based on applying interval extensions of theorems from linear algebra to an interval Hessian matrix. They can be used within the framework of global optimisation methods, with the potential to reduce the computational time for TS location. We present the theory behind the tests, discuss their algorithmic complexity and show, via a few examples, that significant gains in computational time can be achieved by using them.

Next, we present and explore the behaviour of a branch-and-bound algorithm for calculating valid bounds on the k-th largest eigenvalue of a symmetric interval matrix. Branching on the interval elements of the matrix takes place in conjunction with the application of Rohn's method (an interval extension of Weyl's theorem) to obtain valid outer bounds on the eigenvalues. Inner bounds are obtained with two local search methods. The algorithm has the theoretical property that it provides bounds to any arbitrary precision ε > 0 (assuming infinite-precision arithmetic) within finite time. In contrast with existing methods, bounds for each individual eigenvalue can be obtained even if its range overlaps with the ranges of other eigenvalues. Performance analysis is carried out through various examples.
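The outer-bounding step based on Rohn's interval extension of Weyl's theorem can be illustrated with a short sketch: for a symmetric interval matrix written as [A_c − Δ, A_c + Δ], each eigenvalue λ_k lies within ±ρ(Δ) of the corresponding eigenvalue of the centre matrix A_c, where ρ(Δ) is the spectral radius of the (symmetric, nonnegative) radius matrix. The function and matrix values below are illustrative, not taken from the thesis:

```python
import numpy as np

def rohn_outer_bounds(A_center, A_radius):
    """Outer bounds on the eigenvalues of the symmetric interval matrix
    [A_center - A_radius, A_center + A_radius]:
        lambda_k in [lambda_k(A_center) - rho, lambda_k(A_center) + rho],
    where rho is the spectral radius of A_radius."""
    # eigvalsh returns eigenvalues in ascending order; reverse so that
    # bounds[k] corresponds to the (k+1)-th largest eigenvalue
    centre_eigs = np.linalg.eigvalsh(A_center)[::-1]
    rho = np.max(np.abs(np.linalg.eigvalsh(A_radius)))
    return [(lam - rho, lam + rho) for lam in centre_eigs]

# Interval matrix with entries [1.9, 2.1] on the diagonal and [0.9, 1.1] off it
Ac = np.array([[2.0, 1.0], [1.0, 2.0]])
Ad = np.array([[0.1, 0.1], [0.1, 0.1]])
bounds = rohn_outer_bounds(Ac, Ad)
# largest eigenvalue lies in roughly [2.8, 3.2], smallest in [0.8, 1.2]
```

These outer bounds are generally not tight, which is what motivates branching on the interval elements to refine them; the thesis pairs this with local searches for inner bounds.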
Finally, we present a refinement method to improve (reduce) the α values given by the scaled Gerschgorin method and thus create tighter convex underestimators. We apply the new method and compare it with the scaled Gerschgorin method on a number of symmetric interval test matrices. Although the scaled Gerschgorin method is used in the experiments, the refinement algorithm can be used to improve the α values produced by any other method as well.
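As context for the α values being refined, the scaled Gerschgorin computation as commonly stated in the αBB literature can be sketched as follows; the matrix and scaling vector here are illustrative assumptions, not examples from the thesis:

```python
import numpy as np

def scaled_gerschgorin_alphas(H_low, H_up, d):
    """Scaled Gerschgorin alpha values for an interval Hessian [H_low, H_up]:
        alpha_i = max(0, -0.5 * (H_low[i,i]
                    - sum_{j != i} max(|H_low[i,j]|, |H_up[i,j]|) * d[j]/d[i]))
    with d a positive scaling vector (commonly the variable-box widths)."""
    n = H_low.shape[0]
    alphas = np.zeros(n)
    for i in range(n):
        off_diag = sum(max(abs(H_low[i, j]), abs(H_up[i, j])) * d[j] / d[i]
                       for j in range(n) if j != i)
        alphas[i] = max(0.0, -0.5 * (H_low[i, i] - off_diag))
    return alphas

# Illustrative 2x2 interval Hessian and uniform scaling
H_low = np.array([[-2.0, -1.0], [-1.0, 1.0]])
H_up  = np.array([[ 1.0,  1.0], [ 1.0, 3.0]])
alphas = scaled_gerschgorin_alphas(H_low, H_up, np.array([1.0, 1.0]))
# gives alpha = [1.5, 0.0] for this matrix
```

Because Gerschgorin-type bounds can be loose, the α values obtained this way tend to be conservative; any procedure that reduces them without losing validity yields tighter convex underestimators, which is the aim of the refinement method described above.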
Supervisor: Adjiman, Claire
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral