Optimization methods (1st year of Master's degree, DM Department), autumn 2023

Course description

The course studies the complexity theory of algorithms for continuous optimization problems, as well as aspects of their practical implementation. It covers classical results on the optimality of theoretical convergence-rate guarantees for first- and second-order methods on convex problems in high-dimensional spaces, which connects naturally with modern applications in data analysis.
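For orientation only, the classical rates behind these optimality results (standard facts from the theory of first-order methods, not specific to this course's notation) can be summarized as follows. For an $L$-smooth convex objective $f$, gradient descent guarantees

$$ f(x_k) - f^* = O\!\left(\frac{L\,\|x_0 - x^*\|^2}{k}\right), $$

while accelerated (momentum) methods attain

$$ f(x_k) - f^* = O\!\left(\frac{L\,\|x_0 - x^*\|^2}{k^2}\right), $$

which matches the known lower bound $\Omega\!\left(L\|x_0 - x^*\|^2 / k^2\right)$ for first-order methods in high-dimensional spaces, so the accelerated rate is optimal on this class.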

The key part of the course is devoted to multistep (accelerated, momentum) gradient-type methods for smooth convex problems (the heavy-ball method, the fast gradient method, the method of similar triangles, the conjugate gradient method), for which optimal convergence-rate estimates are known on the class of smooth convex and strongly convex problems in high-dimensional spaces. A detailed theoretical analysis of the accelerated method of similar triangles is given, including its adaptive version and its applicability to composite optimization problems known in data analysis (for example, LASSO regression); an illustrative sketch of an accelerated composite method is given below.
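The sketch below is a minimal illustration of an accelerated proximal gradient method of the FISTA type applied to a LASSO-type composite objective 0.5*||Ax - b||^2 + lam*||x||_1. It is not the course's exact algorithm (the method of similar triangles uses a related but different update rule); the fixed step size 1/L, the toy data, and all function names are assumptions made purely for illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_lasso(A, b, lam, n_iters=500):
    """FISTA-style accelerated proximal gradient sketch for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = x_prev = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum extrapolation
        x_prev, t = x, t_next
    return x

# Hypothetical toy data, just to run the sketch
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(accelerated_lasso(A, b, lam=0.1)[:5])
```

The momentum extrapolation step is what separates this scheme from plain proximal gradient descent and is responsible for the improved O(1/k^2) rate on smooth convex problems.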
A notable part of the course introduces the theory of numerical methods for nonsmooth optimization problems and stochastic gradient-type methods. Stochastic gradient and subgradient descent are considered, as well as the well-known adaptive stochastic methods AdaGrad and Adam (see the sketch below).
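To give a flavor of the adaptive stochastic methods mentioned above, here is a minimal AdaGrad-style sketch: per-coordinate step sizes are scaled by the accumulated squared stochastic gradients. The objective, noise model, and hyperparameters (lr, eps) are hypothetical choices for illustration only.

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, eps=1e-8, n_iters=1000, seed=0):
    """Illustrative AdaGrad loop: each coordinate's step is divided by the
    square root of the running sum of its squared stochastic gradients."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    g_sq = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad_fn(x, rng)                 # stochastic (sub)gradient at x
        g_sq += g ** 2                      # accumulate squared gradients
        x -= lr * g / (np.sqrt(g_sq) + eps) # adaptive per-coordinate step
    return x

# Hypothetical example: noisy gradient of f(x) = 0.5 * ||x - 1||^2
grad_fn = lambda x, rng: (x - 1.0) + 0.1 * rng.standard_normal(x.shape)
print(adagrad(grad_fn, np.zeros(5)))
```

Adam differs from this sketch by using exponential moving averages of the gradient and of its square (with bias correction) instead of a plain running sum.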

Instructors

Lecturer — Stonyakin F.S.

Seminar instructor — Kuruzov I.A.