Title: Some contributions to the theory of mathematical programming

As stated earlier, the Simplex Method (or one of its variations, e.g. the Dual Simplex Method) has thus far been the most effective and widely used general method for the solution of linear programming problems. The Simplex Method in its various forms starts with a basic feasible solution and, in successive iterations, moves within the feasible region until it finds the optimal solution. The only other notable variation, the Dual Simplex Method, by virtue of a special formulation of the linear programming problem, starts instead with an infeasible solution and moves through the infeasible region until it reaches the optimal solution, at which point it enters the feasible region. In other respects both the Simplex and the Dual Simplex Methods follow essentially the same principle for obtaining the optimal solution. Their rigorous mathematical features have been widely discussed in the literature [12, 16, 34, 35, 38, 68, 77], and only those formal aspects of the topic which are closely related to the subject of this thesis will be outlined here.

The Multiplex Method, though reported in the literature [30, 15, 69, 71, 29, 32], is not so well known and has not been widely coded on electronic computers. It was earlier programmed for the English Electric computer DEUCE by the author [72] and for Ferranti's MERCURY by Ole-Johan Dahl in 1960 [15]. Both of these computers have since become obsolete, and efforts presently concentrate on coding the method for the UNIVAC 1100 and the IBM 360. The Multiplex Method has accordingly been included in the present thesis and is discussed in some detail in chapter 2; the flow diagram and the algorithm for the method are given in section 2.4 of that chapter.

The main body of the thesis consists of developing a new linear programming method, which has been called the Bounding Hyperplane Method – Part I. This is explained in detail in chapter 3.
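To make the iterative principle described above concrete, the following is a minimal sketch of a primal Simplex tableau iteration. It is only an illustrative toy in Python, not the thesis's Fortran code, and it assumes the special form: maximise c^T x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so that the slack variables supply the initial basic feasible solution from which the method starts.

```python
def simplex(A, b, c):
    """Toy primal Simplex for: maximise c^T x s.t. A x <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; the slack basis is the initial basic feasible solution.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])  # objective row (reduced costs)
    basis = [n + i for i in range(m)]
    while True:
        # Entering variable: most negative reduced cost (Dantzig's rule).
        piv_col = min(range(n + m), key=lambda j: T[m][j])
        if T[m][piv_col] >= -1e-9:
            break  # optimal: every reduced cost is non-negative
        # Leaving variable: minimum-ratio test keeps the iterate feasible.
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Pivot: normalise the pivot row, then eliminate the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[m][-1]

# maximise 3x + 2y s.t. x + y <= 4, x + 3y <= 6  ->  x = 4, y = 0, z = 12
x, z = simplex([[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0], [3.0, 2.0])
```

Each pass of the loop moves from one vertex of the feasible region to an adjacent one, exactly the "moves within the feasible region" described above.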
The method may start with either a basic feasible or an infeasible point, and in its subsequent moves it may either alternate between the feasible and the infeasible regions or remain restricted to one of them, depending upon the problem. It is applicable as a new phase, which we call phase 0, to the Simplex Method, particularly in situations where an initial basic feasible point is not available. In such cases it either yields a feasible point at the end of phase 0 or else a 'better' infeasible point for the phase 1 operations of the Simplex Method. Moreover, it is found that the number of iterations required to reach the former by applying phase 0, or the latter by applying first phase 0 and then phase 1, is in general less than the number required by phase 1 alone. This is explained with illustrations in chapter 6. Even when applied alone, the method in general yields the optimal solution in fewer iterations than the Simplex Method; this is illustrated with examples in chapter 3.

We also develop and illustrate a powerful but straightforward device whereby we first find the solution to the equality constraints and, provided these do not prove inconsistent, the transformations for the remaining constraints are then obtained from the equality-solution tableau. This appreciably reduces the time taken by each iteration of the method. It has been called the B.H.P.M. – Part II and is discussed in chapter 4.

To compare the times taken by the B.H.P.M. and the Simplex Method, the two codes (written in Fortran) have been run on a number of problems taken from the literature; the results are summarised in chapter 7. Finally, suggestions for further research towards (i) the extension of the B.H.P.M. to the quadratic programming problem, where the function in (1.1.1) is positive semidefinite, and (ii) the accuracy of computations in linear programming in general, are discussed in sections 8.1 and 8.2 respectively of chapter 8.
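The equality-first idea behind the B.H.P.M. – Part II can be illustrated in miniature. The thesis's actual tableau transformations are not reproduced here; the sketch below merely uses plain Gauss–Jordan elimination as a stand-in, reducing the equality system Ex = d first and detecting inconsistency before any further constraint processing would begin.

```python
def gauss_jordan(E, d, eps=1e-9):
    """Reduce the augmented system [E | d] to reduced row-echelon form.
    Returns (reduced_rows, pivot_columns); raises if the equalities are inconsistent."""
    rows = [E[i][:] + [d[i]] for i in range(len(E))]
    n = len(E[0])
    pivots = []
    r = 0
    for col in range(n):
        # Partial pivoting: pick the largest entry in this column at or below row r.
        piv = max(range(r, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[piv][col]) < eps:
            continue  # no pivot in this column; the variable stays free
        rows[r], rows[piv] = rows[piv], rows[r]
        p = rows[r][col]
        rows[r] = [v / p for v in rows[r]]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > eps:
                f = rows[i][col]
                rows[i] = [v - f * w for v, w in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
        if r == len(rows):
            break
    # A leftover row of the form 0 = nonzero means the equalities are inconsistent.
    for row in rows[r:]:
        if abs(row[-1]) > eps:
            raise ValueError("inconsistent equality constraints")
    return rows[:r], pivots

# x + y = 3, x - y = 1  ->  x = 2, y = 1
rows, piv = gauss_jordan([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0])
```

Once the equality system has been reduced in this way, any subsequent iterative work needs to be done only once per iteration on the reduced tableau, which is the source of the per-iteration saving claimed for Part II.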
