Abstract
Trust region methods are a class of numerical methods for optimization. Unlike line search methods, which carry out a line search along a chosen direction in each iteration, trust region methods compute a trial step by solving a trust region subproblem, in which a model function is minimized within a trust region. Because the trust region constraint bounds the step, nonconvex models can be used in the subproblems, and trust region algorithms can be applied to nonconvex and ill-conditioned problems. It is normally easier to establish the global convergence of a trust region algorithm than that of its line search counterpart. In this talk, the speaker will review recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization, and derivative-free optimization. Results on trust region subproblems and regularization methods will also be discussed.
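For readers less familiar with the terminology, the trust region subproblem referred to above is typically stated as follows (a sketch in conventional notation; the symbols m_k, g_k, B_k and Delta_k are standard in the literature rather than taken from the talk):

% Conventional notation, not quoted from the talk: m_k is the model,
% g_k = \nabla f(x_k), B_k a (possibly indefinite) Hessian approximation,
% and \Delta_k > 0 the trust region radius.
\[
  \min_{d \in \mathbb{R}^n} \; m_k(d) = f(x_k) + g_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d
  \quad \text{subject to} \quad \| d \| \le \Delta_k .
\]

Because B_k may be indefinite, the model m_k can be nonconvex, yet the constraint keeps the subproblem well posed; the radius Delta_k is enlarged or shrunk according to how well m_k predicted the actual reduction in f at the previous step.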
About the speaker
Prof Ya-xiang Yuan received his PhD from the Department of Applied Mathematics and Theoretical Physics of the University of Cambridge in 1986, where he worked as a Rutherford Research Fellow at Fitzwilliam College for three years before returning to China in 1988. He then joined the Chinese Academy of Sciences, and was one of the Vice Presidents of the Academy of Mathematics and Systems Science (AMSS) from 1999 to 2006. He was Director of the Institute of Computational Mathematics and Scientific/Engineering Computing from 1995 to 2006, and is currently a Professor there. He is also President of the Operations Research Society of China.
Prof Yuan is recognized for his contributions to nonlinear optimization, particularly to trust region methods, quasi-Newton methods, and nonlinear conjugate gradient methods. On trust region methods, he established the fundamental optimality conditions for the Celis-Dennis-Tapia problem, and proved the remarkable result that the step obtained by the truncated conjugate gradient method (the Steihaug-Toint method) reduces the quadratic objective function by at least half of the maximal reduction attainable in the whole trust region ball. On quasi-Newton methods, jointly with Richard Byrd and Jorge Nocedal, he proved the global convergence of the convex Broyden family of quasi-Newton methods, with the exception of the DFP method. On nonlinear conjugate gradient methods, he and his student Yuhong Dai proposed a new nonlinear conjugate gradient method, now known as the “Dai-Yuan method”, which is widely regarded as one of the four famous nonlinear conjugate gradient methods.
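The half-reduction result mentioned above can be written as a single inequality (again a sketch in conventional notation, not quoted from the talk):

% Here m is the quadratic model of the subproblem, d^{CG} the truncated
% conjugate gradient (Steihaug-Toint) step, and d^* the exact minimizer
% of m over the trust region ball.
\[
  m(0) - m\bigl(d^{\mathrm{CG}}\bigr) \;\ge\; \tfrac{1}{2}\,\bigl( m(0) - m(d^{*}) \bigr).
\]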
Prof Yuan has received numerous awards, including the L. Fox Prize, the Feng Kang Scientific Computing Prize, the Young Scientist Award of China, the China National Funds for Distinguished Young Scientists, the “China Top 10 Outstanding Youth” Award, the Second Prize of the National Natural Science Award, and the Shiing-Shen Chern Mathematics Award. He is a Fellow of the Society for Industrial and Applied Mathematics and a Member of the Chinese Academy of Sciences.