Rogozin Alexander Viktorovich

Recipient of an MIPT award for contributions to numerical methods since 2020

Born on February 8, 1996

Education

Graduated from the Yandex School of Data Analysis in 2019.

Graduated from MIPT, Phystech School of Applied Mathematics and Informatics, in 2020.

PhD student at the Phystech School of Applied Mathematics and Informatics since 2020.

Work Experience

2019 - 2021 — teaching assistant at MIPT 

2020 - 2022 — junior researcher at MIPT 

2020 - 2022 — lead researcher on a joint MIPT-Huawei project on signal processing

Since 2022 — researcher at MIPT

Teaching

MIPT, Probability Theory, 2019 - 2021

Research Interests

Distributed optimization, machine learning

Publications

2024

Rogozin, A., Beznosikov, A., Dvinskikh, D., Kovalev, D., Dvurechensky, P., Gasnikov, A. Decentralized saddle point problems via non-Euclidean mirror prox (2024) Optimization Methods and Software. Scopus DOI Q1

Nguyen, N.T., Rogozin, A., Metelev, D., Gasnikov, A. Min-Max Optimization over Slowly Time-Varying Graphs (2024) Doklady Mathematics, 108 (Suppl 2), pp. S300-S309. Scopus DOI Q2

Metelev, D., Rogozin, A., Gasnikov, A., Kovalev, D. Decentralized saddle-point problems with different constants of strong convexity and strong concavity (2024) Computational Management Science, 21 (1), article no. 5. Scopus DOI Q2

Nguyen, N.T., Rogozin, A.V., Gasnikov, A.V. Average-Case Optimization Analysis for Distributed Consensus Algorithms on Regular Graphs (2024) Russian Journal of Nonlinear Dynamics, 20 (5), pp. 907-931. Scopus DOI Q3

Metelev, D., Beznosikov, A., Rogozin, A., Gasnikov, A., Proskurnikov, A. Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds (2024) Computational Management Science, 21 (1), article no. 8. Scopus DOI Q3

2023

Metelev, D., Rogozin, A., Kovalev, D., Gasnikov, A. Is Consensus Acceleration Possible in Decentralized Optimization over Slowly Time-Varying Networks? (2023) Proceedings of Machine Learning Research, 202, pp. 24532-24554. Scopus DOI A*

Chezhegov, S., Rogozin, A., Gasnikov, A. On Decentralized Nonsmooth Optimization (2023) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13930 LNCS, pp. 25-38. Scopus DOI Q3

Vedernikov, R.A., Rogozin, A.V., Gasnikov, A.V. Decentralized Conditional Gradient Method on Time-Varying Graphs (2023) Programming and Computer Software, 49 (6), pp. 505-512. Scopus DOI Q3

Yarmoshik, D., Rogozin, A., Gasnikov, A. Decentralized optimization with affine constraints over time-varying networks (2023) Scopus DOI Q3

Chen J., Lobanov A.V., Rogozin A.V. Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique (2023) Computer Research and Modeling, 15 (2), pp. 469 - 480 Scopus DOI Q4

2022

Chezhegov S., Novitskii A., Rogozin A., Parsegov S., Dvurechensky P., Gasnikov A. A General Framework for Distributed Partitioned Optimization // IFAC-PapersOnLine, Vol. 55, No. 13, P. 139 - 144 Scopus WOS DOI Q3

Yarmoshik D., Rogozin A., Khamisov O.O., Dvurechensky P., Gasnikov A. Decentralized Convex Optimization Under Affine Constraints for Power Systems Control // Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 13367, P. 62 - 75 Scopus WOS DOI Q3

Rogozin, A., Yarmoshik, D., Kopylova, K., Gasnikov, A. Decentralized Strongly-Convex Optimization with Affine Constraints: Primal and Dual Approaches (2022) Communications in Computer and Information Science, 1739 CCIS, pp. 93-105. Scopus WOS DOI Q4

2021

Kovalev, D., Shulgin, E., Richtárik, P., Rogozin, A., Gasnikov, A. ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks (2021) Proceedings of Machine Learning Research, 139, pp. 5784-5793. Scopus WOS DOI A*

Rogozin, A., Lukoshkin, V., Gasnikov, A., Kovalev, D., Shulgin, E. Towards Accelerated Rates for Distributed Optimization over Time-Varying Networks (2021) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13078 LNCS, pp. 258-272. Scopus WOS DOI Q3

Gorbunov E., Rogozin A., Beznosikov A., Dvinskikh D., Gasnikov A. Recent Theoretical Advances in Decentralized Distributed Convex Optimization // Springer Optimization and Its Applications, Vol. 191, P. 253 - 325 Scopus WOS DOI Q3

Maslovskiy, A., Pasechnyuk, D., Gasnikov, A., Anikin, A., Rogozin, A., Gornov, A., Antonov, L., Vlasov, R., Nikolaeva, A., Begicheva, M. Non-convex Optimization in Digital Pre-distortion of the Signal (2021) Communications in Computer and Information Science, 1476 CCIS, pp. 54-70. Scopus WOS DOI Q4

Beznosikov, A., Rogozin, A., Kovalev, D., Gasnikov, A. Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks (2021) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13078 LNCS, pp. 246-257. Scopus WOS DOI Q4

Rogozin, A., Bochko, M., Dvurechensky, P., Gasnikov, A., Lukoshkin, V. An Accelerated Method for Decentralized Distributed Stochastic Optimization over Time-Varying Graphs (2021) Proceedings of the IEEE Conference on Decision and Control, 2021-December, pp. 3367-3373. Scopus WOS DOI

Beznosikov, A., Rogozin, A., Scutari, G., Gasnikov, A. Distributed Saddle-Point Problems Under Similarity (2021) Advances in Neural Information Processing Systems, 10, pp. 8172-8184. Scopus WOS DOI

2020

Rogozin, A., Uribe, C.A., Gasnikov, A.V., Malkovsky, N., Nedic, A. Optimal distributed convex optimization on slowly time-varying graphs (2020) IEEE Transactions on Control of Network Systems, 7 (2), article no. 8882272, pp. 829-841. Scopus WOS DOI Q1

Rogozin, A., Gasnikov, A. Penalty-Based Method for Decentralized Optimization over Time-Varying Graphs (2020) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12422 LNCS, pp. 239-256. Scopus WOS DOI Q3
