
IE 601: Optimization Techniques

Prerequisite: Exposure to relevant concepts at the undergraduate level and instructor consent

Contents

The aim of this course is to develop a basic understanding of provably convergent computational schemes for constrained optimization problems in R^n, illustrated with examples drawn mainly from a decision-making viewpoint. A brisk look at linear programming: the fundamental theorem of linear programming, degenerate solutions, simplex-based methods, cycling, duality, and complementary slackness conditions.
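
For orientation, the duality and complementary slackness conditions listed above can be summarized for the symmetric primal-dual pair; the notation below is an illustrative formulation, not one prescribed by the course:

  \text{(P)}\quad \min_{x}\ c^{\top}x \quad \text{s.t. } Ax \ge b,\ x \ge 0
  \qquad\qquad
  \text{(D)}\quad \max_{y}\ b^{\top}y \quad \text{s.t. } A^{\top}y \le c,\ y \ge 0

  % Complementary slackness: feasible x*, y* are optimal if and only if
  x_j^{*}\,\bigl(c_j - (A^{\top}y^{*})_j\bigr) = 0 \ \text{ for all } j,
  \qquad
  y_i^{*}\,\bigl((Ax^{*})_i - b_i\bigr) = 0 \ \text{ for all } i.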

Non-linear programming: first- and second-order conditions; iterative methods and associated issues. Line-search methods: stationarity of limit points of steepest descent, successive step-size reduction algorithms, etc. Hessian-based algorithms: Newton, conjugate-direction, and quasi-Newton methods.
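
As a minimal sketch of the kind of line-search method covered here (the function and parameter names below are illustrative assumptions, not course-specified code), steepest descent with Armijo-style successive step-size reduction can be written as:

    import numpy as np

    def steepest_descent(f, grad, x0, alpha0=1.0, beta=0.5, sigma=1e-4,
                         tol=1e-6, max_iter=1000):
        # Steepest descent with successive step-size reduction (Armijo backtracking).
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:   # stop near a stationary point
                break
            d = -g                        # steepest-descent direction
            t = alpha0
            # Shrink the step until the sufficient-decrease (Armijo) test holds.
            while f(x + t * d) > f(x) + sigma * t * g.dot(d):
                t *= beta
            x = x + t * d
        return x

    # Example: minimize the convex quadratic f(x) = 0.5 x'Qx - b'x.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    x_star = steepest_descent(lambda x: 0.5 * x @ Q @ x - b @ x,
                              lambda x: Q @ x - b, np.zeros(2))
    print(x_star)   # approaches the solution of Qx = b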

Constrained optimization problems: Lagrange multipliers, Karush-Kuhn-Tucker conditions, regular points, sensitivity analysis; quadratic programming and convex problems. Optional topics: mixed-integer models; interior-point methods; iterative schemes for constrained problems; sequential quadratic programming methods; barrier methods; trust-region methods, etc.
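
For reference, the Karush-Kuhn-Tucker first-order necessary conditions for minimizing f(x) subject to g_i(x) <= 0 (i = 1, ..., m) and h_j(x) = 0 (j = 1, ..., p) at a regular point x* can be stated as follows (illustrative notation):

  \nabla f(x^{*}) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^{*}) + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^{*}) = 0,
  \qquad
  \mu_i \ge 0,\quad \mu_i\, g_i(x^{*}) = 0,\quad g_i(x^{*}) \le 0,\quad h_j(x^{*}) = 0.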

References

  • D. Bertsekas, Nonlinear Programming, 2nd edition, Athena Scientific, Nashua, 1999.
  • V. Chvatal, Linear Programming, W. H. Freeman, New York, 1983.
  • E. K. P. Chong and S. Zak, An Introduction to Optimization, 2nd edition, John Wiley and Sons (Asia) Pvt. Ltd., Singapore, 2004.
  • R. Fletcher, Practical Methods of Optimization, 2nd edition, Wiley, New York, 2000.
  • D. Luenberger, Linear and Nonlinear Programming, 2nd edition, Kluwer Academic Publishers, New York, 1984.
  • O. L. Mangasarian, Nonlinear Programming, SIAM, Philadelphia, 1987.
  • J. Nocedal and S. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
  • A. Ruszczynski, Nonlinear Optimization, Princeton University Press, Princeton, 2006.
  • R. K. Sundaram, A First Course in Optimization Theory, Cambridge University Press, Cambridge, 1996.