In early December, New Orleans hosted NeurIPS (the Conference and Workshop on Neural Information Processing Systems), one of the most significant events in the field of machine learning and artificial intelligence.

Researchers from leading companies and research centers in artificial intelligence gathered to discuss solutions to open problems in the field and to identify trends in the future development of AI technology. This year, only 2,672 papers were accepted to the conference. Of these, 11 were authored by a team of talented scientists from the new Laboratory of Mathematical Optimization Methods.

One of the most active members of the laboratory is Alexander Beznosikov, a junior researcher at the Laboratory of Advanced Combinatorics and Network Applications at FPMI. Alexander graduated from the «Data Mining» Master's program and entered graduate school (h-index 17, i10-index 21). He is an author of 4 of these papers, and he not only wrote most of them but also proposed the core idea behind each one. At the time of writing, Alexander was in his sixth year at the Phystech School of Applied Mathematics and Computer Science. This appears to be a record: so far, no student at any Russian university can boast such results!

Optimization methods have a fairly long history of research, but in recent years interest in them has surged, primarily due to the development of machine learning. Distributed algorithms, in which several computing devices are used instead of one, have come to play a special role. This approach can significantly speed up the training of popular, state-of-the-art models on huge datasets. The key difference from non-distributed algorithms is the need to organize communication efficiently: in distributed algorithms, most of the running time is often spent on communication between devices rather than on computations that are actually useful for solving the problem.
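To make the communication bottleneck concrete, here is a minimal sketch of data-parallel gradient descent on a toy least-squares problem. The problem, the worker partitioning, and the step size are illustrative assumptions, not taken from the papers; the point is that each iteration splits into purely local gradient computations and one communication round (the averaging step, which in real systems is an expensive all-reduce).

```python
import numpy as np

# Toy least-squares problem: minimize ||A x - b||^2 / (2 n),
# with the rows of (A, b) partitioned across simulated workers.
rng = np.random.default_rng(0)
n, d, n_workers = 1000, 20, 4
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

shards = np.array_split(np.arange(n), n_workers)

def local_gradient(x, idx):
    """Gradient of the local loss on one worker's data shard."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

x = np.zeros(d)
step = 0.1
for t in range(200):
    # Local computation: each worker evaluates its own gradient.
    grads = [local_gradient(x, idx) for idx in shards]
    # Communication round: gradients are averaged across workers
    # (in a real system, this all-reduce dominates the running time).
    g = np.mean(grads, axis=0)
    x -= step * g

print("distance to x_true:", np.linalg.norm(x - x_true))
```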

All of the papers by Alexander and the MMO laboratory team are devoted to distributed algorithms and to various approaches to making them efficient. Let's discuss the latest results in a little more detail:

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity

Authors: Dmitry Kovalev, Alexander Beznosikov, Ekaterina Borodich, Alexander Gasnikov, Gesualdo Scutari
This paper considers a popular formulation of the distributed learning problem in which the data on the local computing devices are similar. It turns out that in this case the algorithm can be significantly accelerated by reducing the number of communication rounds. Lower bounds on the convergence of distributed algorithms, a benchmark to strive for, were proved almost 10 years ago. Various authors, including very famous ones, attacked this problem, but none of them obtained an optimal algorithm. The paper presents an algorithm that is optimal both in terms of communication complexity and in terms of local computation.
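To illustrate why similarity helps, here is a minimal sketch of a classical similarity-exploiting scheme (in the spirit of DANE/SONATA-type methods, not the paper's accelerated algorithm). One worker solves a locally corrected subproblem with many cheap local steps, while only one communication round per outer iteration is needed to form the full gradient. The quadratic losses, the proximal parameter theta, and the step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 10, 5

# m workers hold similar quadratic losses f_i(x) = x^T H_i x / 2 - c_i^T x:
# the H_i are small perturbations of a shared Hessian, modelling data similarity.
H0 = np.eye(d) * 2.0
Hs = [H0 + 0.05 * rng.normal(size=(d, d)) for _ in range(m)]
Hs = [(H + H.T) / 2 for H in Hs]          # keep Hessians symmetric
cs = [rng.normal(size=d) for _ in range(m)]

def full_grad(x):
    # One communication round: collect and average the local gradients.
    return np.mean([H @ x - c for H, c in zip(Hs, cs)], axis=0)

def local_grad(x):
    # Gradient of worker 0's local loss only (no communication needed).
    return Hs[0] @ x - cs[0]

x = np.zeros(d)
theta = 0.5                                 # proximal parameter (assumed)
for k in range(20):                         # 20 communication rounds total
    g = full_grad(x)                        # communicate once per round
    shift = g - local_grad(x)
    # Solve min_y f_0(y) + <shift, y> + theta/2 ||y - x||^2
    # by plain gradient steps, all executed locally on worker 0.
    y = x.copy()
    for _ in range(50):
        y -= 0.2 * (local_grad(y) + shift + theta * (y - x))
    x = y

print("gradient norm after 20 rounds:", np.linalg.norm(full_grad(x)))
```

At a fixed point the correction terms cancel and the full gradient is zero, so many local steps substitute for communication; the more similar the local Hessians, the fewer communication rounds are needed.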

Optimal Algorithms for Decentralized Stochastic Variational Inequalities

Authors: Dmitry Kovalev, Alexander Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander Gasnikov
Here a broader class of problems is considered: variational inequalities, which cover both saddle-point problems and the search for stationary points. For this class, distributed algorithms are developed on top of a decentralized communication architecture. In this setting, the devices are connected in a graph and can communicate only with their neighbors. The case where the graph changes significantly while the algorithm runs, for example due to network failures, is also taken into account.
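At the heart of decentralized methods is a gossip-style averaging step: each device mixes its iterate with those of its graph neighbors through a mixing matrix, and repeated rounds drive all devices toward the network-wide average without any central server. Below is a minimal sketch for a fixed ring graph; the topology and mixing weights are illustrative assumptions (a time-varying graph would simply use a different matrix at each round).

```python
import numpy as np

n_devices, d = 6, 3
rng = np.random.default_rng(2)
X = rng.normal(size=(n_devices, d))   # row i = current iterate of device i

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring graph:
    each device averages itself with its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

W = ring_mixing_matrix(n_devices)

# One gossip round: every device replaces its iterate with a weighted
# average over its neighborhood; repeated rounds pull all rows of X
# toward the network-wide mean using only neighbor-to-neighbor messages.
for _ in range(50):
    X = W @ X

print("spread across devices:", np.ptp(X, axis=0))  # ~0 after mixing
```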

In recent years, fashionable directions of distributed learning have emerged, such as collaborative and federated learning. The essence of these directions is that the computing devices are user devices: laptops, tablets, phones. The data on which the model is trained is the users' local data. This introduces even more important aspects to distributed algorithms: data privacy, personalization of the result for each user, and efficient communication over the poor connections of local devices.

Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees

Authors: Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
This paper continues the study of variational inequalities. Here the essence of the approach is to compress the transmitted information so that communication between devices becomes more efficient. Both so-called «unbiased/random» and «biased/greedy» compression operators are considered. The resulting algorithms outperform all competing methods.
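The two compressor families the summary refers to can be illustrated by their classic representatives: Rand-k, an unbiased/random operator that keeps k random coordinates and rescales to stay correct on average, and Top-k, a biased/greedy operator that keeps the k largest-magnitude coordinates. This minimal sketch is for illustration and is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_k(v, k):
    """Unbiased/random compressor: keep k random coordinates and
    rescale by d/k so that E[rand_k(v)] = v."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

def top_k(v, k):
    """Biased/greedy compressor: keep the k largest-magnitude
    coordinates; no rescaling, so the output is biased toward
    the dominant entries of v."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

g = rng.normal(size=10)
print("original:", np.round(g, 2))
print("rand_2  :", np.round(rand_k(g, 2), 2))  # sparse, unbiased on average
print("top_2   :", np.round(top_k(g, 2), 2))   # sparse, keeps largest entries
```

Instead of a dense vector, a device now transmits only k coordinates and their indices, which is what makes each communication round cheaper.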

Decentralized Local Stochastic Extra-Gradient for Variational Inequalities

Authors: Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, Valentin Samokhin, Sebastian U Stich, Alexander Gasnikov
The last paper is also about variational inequalities and decentralized algorithms for them, but it considers a method that can keep working in critical situations, up to a complete loss of all communication links between devices. It turns out that even in this case it is possible to give guarantees on the algorithm's behavior and on obtaining an acceptable result.
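The method builds on the classical extragradient step for variational inequalities. Here is a minimal single-machine sketch of that building block on a toy bilinear saddle-point problem, where plain gradient descent-ascent would diverge; the problem and step size are illustrative assumptions, and the decentralized and local-update machinery of the paper is omitted.

```python
import numpy as np

def F(z):
    """Operator of the toy saddle-point problem min_x max_y x*y:
    F(x, y) = (dL/dx, -dL/dy) = (y, -x)."""
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])
gamma = 0.3
for t in range(100):
    z_half = z - gamma * F(z)    # extrapolation (look-ahead) step
    z = z - gamma * F(z_half)    # update using the look-ahead operator value
print("distance to solution (0, 0):", np.linalg.norm(z))
```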

NeurIPS is, above all, an opportunity for people in data science to follow the latest scientific achievements and research results. And the team of our new laboratory is the pride of the School: a team working on truly important and relevant scientific problems that are already in demand in the real world.

FPMI students always have the opportunity to join the laboratory's research. Science is incredibly interesting!
